John Tsiligaridis, Department of Math and Computer Science, Heritage University, Toppenish, WA, USA
Autoencoders (AEs) are Deep Learning (DL) models that are well known for their ability to compress and reconstruct data. When an AE compresses input data, a latent space is created which yields a compressed representation of the original data with a smaller set of features. Genetic Algorithms (GAs) based on evolutionary principles can be used to optimize various hyperparameters of a DL model. This work involves two tasks. First, it focuses on the application of an AE on image data along with various configurations of the AE structure and its constituent encoder/decoder structure using Multi-Layer Perceptrons (MLPs). Visualizations of the AE loss functions during training are provided, along with various latent space results obtained using clustering techniques. The second focus of the paper is on the application of the GA to a Convolutional AE, where the Convolutional Neural Network (CNN) encoder/decoder structures are optimized by converting the architecture into genes for image classification. We find that the AE is a flexible and robust model that can successfully be applied to a variety of image datasets, and that the GA model surpasses the AE model.
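The GA side of this approach can be illustrated with a minimal, self-contained sketch (not the authors' implementation): each gene encodes a hypothetical encoder depth and filter count, and a toy fitness function stands in for validation accuracy on the image task.

```python
import random

random.seed(42)

# Each gene encodes a candidate encoder/decoder configuration: (depth, filters).
# The fitness function is a hypothetical stand-in for validation accuracy.
def fitness(gene):
    depth, filters = gene
    # Toy objective: prefer moderate depth and roughly 32 filters.
    return -abs(depth - 3) - abs(filters - 32) / 16

def mutate(gene):
    depth, filters = gene
    if random.random() < 0.5:
        depth = max(1, depth + random.choice([-1, 1]))
    else:
        filters = max(8, filters + random.choice([-8, 8]))
    return (depth, filters)

def crossover(a, b):
    return (a[0], b[1])  # child takes depth from one parent, filters from the other

population = [(random.randint(1, 6), random.choice([8, 16, 32, 64]))
              for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]  # elitist selection keeps the best architectures
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(6)]
    population = parents + children

best = max(population, key=fitness)
print("best gene (depth, filters):", best)
```

In the paper's setting, decoding a gene would build and train the corresponding Convolutional AE, with the measured reconstruction or classification performance serving as fitness.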
Machine Learning, Deep Learning, Autoencoders, Genetic Algorithms.
Sai Kiran Padmam and Partha Sarthi Samal, Independent, United States of America
Online retail has come a long way since its early days of static web pages and manual price comparisons. The new frontier embraces artificial intelligence (AI) to interpret user queries, mediate real-time auctions among multiple vendors, and deliver personalized recommendations at blazing speeds. This paper highlights how such AI agents operate under the hood, drawing upon machine learning, reinforcement learning, and multi-agent coordination principles. We also offer glimpses into emerging research challenges and future directions that may reshape online shopping entirely.
Andreas Shaji Jacob, Kulwant Singh, Muhammad Shafique, Ruben Movsisyan, Seungbin Lee, and Ugur Randa, Pacific States University, USA
The rapid adoption of Artificial Intelligence (AI) across industries has revealed limitations in traditional requirement analysis methodologies, which were not designed to address the complexities and iterative nature of AI-based projects. This paper proposes a refined thought process for requirement analysis tailored to the needs of AI-driven initiatives, whether AI is the primary focus or an integrated component of a larger system. By emphasizing the dynamic interplay between data, models, and deployment environments, the proposed approach departs from linear methodologies, advocating for an adaptive and iterative process. Using case studies, we demonstrate how this concept ensures better alignment with business goals, enhances data utility, and improves model performance while addressing ethical considerations and practical constraints. This paper aims to provide practitioners, researchers, and project owners with actionable insights to optimize AI project outcomes in an increasingly complex technological landscape.
Lutfur Rahman Fahad, Mukit AL Elahi, Nayem Miah, Himel, Somon, Niaz Dhara, Mia, and Adipta, Department of Computer Science, Pacific States University, USA
Agile Scrum methodology is widely regarded as a transformative approach to software development, emphasizing flexibility, collaboration, and iterative progress. However, its adoption is not without challenges, which vary significantly across organizations of different sizes, industries, and geographic distributions. This study explores recurring issues such as resource constraints, communication barriers, and resistance to change, while highlighting trends and successful practices in addressing these challenges. By synthesizing insights from existing literature and limited interviews, this research aims to provide actionable recommendations and a tailored framework for organizations seeking to optimize their Agile Scrum implementation. The findings contribute to a deeper understanding of how industry-specific contexts influence the effectiveness of Agile practices.
Jannatul Mawa, Md Nafis Azad Nobel, Sajeet Raj Aryal, Kazi, Taehyun Kim, Pritom Das, Mahbu Khan, Department of Computer Science, Pacific States University, USA
The cloud computing environment offers significant flexibility and access to computing resources at a reduced cost. This technology is rapidly transforming the landscape of e-services across various fields. In this paper, we examine cloud computing services and applications, highlighting examples of services offered by leading Cloud Service Providers (CSPs) such as Google, Microsoft, Amazon, HP, and Salesforce. We also showcase innovative cloud applications in areas like e-learning, Enterprise Resource Planning (ERP), and e-governance. This study aims to help individuals and organizations recognize how cloud computing can deliver customized, reliable, and cost-effective solutions across a diverse range of applications.
Sheresh Zahoor1, Pietro Liò1, Gaël Dias2, and Mohammed Hasanuzzaman3, 1University of Cambridge, 2Normandie Univ, GREYC, 3Queen's University Belfast
Diabetes is a global health crisis, demanding advanced genomic approaches to uncover molecular mechanisms and identify therapeutic targets. This study introduces the Genomic Causal Framework (GCF), a novel approach combining genomic data analysis, causal modeling, and predictive analytics to provide actionable insights into diabetes pathology. These include identifying potential therapeutic targets, such as CXCL8, S100A8, and COL1A1, implicated in chronic inflammation and complications like diabetic nephropathy. The framework also highlights regulatory genes, such as ROBO1 and FCGR2A, as upstream drivers of disease progression. Using the Diabetes genome dataset (GSE132831), we identify differentially expressed genes (DEGs) with pyDESeq2, stratifying upregulated and downregulated genes. These DEGs form the basis for constructing a protein-protein interaction (PPI) network, revealing critical functional pathways. The GCF framework integrates Causal Bayesian Networks (CBNs) and Probability Trees (PTrees) to move beyond prediction and enable causal reasoning. CBNs model causal relationships between genes and diabetic outcomes, while PTrees quantify their impact. Achieving 82.22% accuracy and 95% recall, GCF ensures reliable patient identification, with SHAP analysis enhancing interpretability and biological relevance. Its integration of causal reasoning with predictive analytics prioritises biologically relevant features for clinical and research applications. By bridging causal inference with functional genomics, this study advances biomarker discovery and therapeutic target identification, providing a powerful tool for precision medicine in Type 2 Diabetes. Unlike traditional machine learning, our approach enhances interpretability while uncovering critical insights into disease development and progression.
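The Probability Tree component can be illustrated with a minimal pure-Python sketch. The gene names below mirror those in the abstract, but every probability is a hypothetical placeholder, not a value estimated in the study; a PTree simply multiplies conditional probabilities along a root-to-leaf branch (chain rule).

```python
# Minimal probability-tree sketch: each node maps a branch label to a pair
# (conditional probability, child subtree). All numbers are hypothetical
# placeholders, not estimates from GSE132831.
def path_probability(tree, path):
    """Multiply conditional probabilities along a root-to-leaf path."""
    prob, node = 1.0, tree
    for step in path:
        p, node = node[step]
        prob *= p
    return prob

tree = {
    "ROBO1_high": (0.4, {
        "CXCL8_high": (0.7, {"diabetic": (0.6, {}), "healthy": (0.4, {})}),
        "CXCL8_low":  (0.3, {"diabetic": (0.2, {}), "healthy": (0.8, {})}),
    }),
    "ROBO1_low": (0.6, {
        "CXCL8_high": (0.2, {"diabetic": (0.3, {}), "healthy": (0.7, {})}),
        "CXCL8_low":  (0.8, {"diabetic": (0.1, {}), "healthy": (0.9, {})}),
    }),
}

# P(ROBO1 high, CXCL8 high, diabetic) = 0.4 * 0.7 * 0.6
print(path_probability(tree, ["ROBO1_high", "CXCL8_high", "diabetic"]))
```

In the GCF framework, such branch probabilities would be derived from the fitted Causal Bayesian Network rather than hand-coded as here.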
Diabetes, Causal Bayesian Network, Probability Trees, Genomics.
Kostas Dimitrios1 and Kostas Ioannis2, 1National and Kapodistrian University of Athens, 2University of Piraeus
In this article we analyse how Information and Communication Technologies (ICT) can be used in teaching algorithms to university students. ICT-based applications of algorithms can solve a wide variety of problems. In university laboratories, teachers and students use algorithms to solve numerical and algebraic system problems; for partial differential equations, numerical analysis methods are used to solve systems of PDEs. In research, most people use algorithms to validate their theoretical findings, and students need to choose suitable algorithms to solve problems. Nowadays many people use algorithms to solve problems in their companies or organisations, and in every context algorithms are widely used. This work is about teaching algorithms used to solve problems with numerical analysis methods.
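As one concrete example of the kind of numerical-analysis algorithm such a course covers (an illustration, not taken from the article itself), the bisection method finds a root of a continuous function by repeatedly halving an interval:

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:       # root lies in the left half-interval
            b, fb = m, fm
        else:                 # root lies in the right half-interval
            a, fa = m, fm
    return (a + b) / 2

# Root of x^2 - 2 on [1, 2] is sqrt(2)
print(bisection(lambda x: x * x - 2, 1.0, 2.0))
```

The method is a natural teaching example because its convergence (the interval halves every step) is easy to prove and to observe experimentally.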
Algorithms, Innovative Technology, Analysis.
Mshabab Alrizah, Jazan University, Jazan, Saudi Arabia
EasyList is a widely used filter list that enhances online privacy and security by blocking tracking mechanisms, advertisements, and other unwanted web elements. As an open-source project, its sustainability is based on collaborative contributions, issue tracking, and continuous updates to address emerging challenges. GitHub plays a crucial role in facilitating the management of EasyList, offering tools for version control, issue resolution, and community-driven improvements. This study explores the complexities of maintaining EasyList on GitHub by analyzing multiple-year issue reports. Through data collection, trend analysis, and resolution efficiency evaluation, this research provides insight into contributor engagement, frequently reported domains, and the overall effectiveness of the maintenance process. The findings highlight the importance of community participation in maintaining EasyList, the ongoing need for adaptive strategies against evolving tracking and ad-serving techniques, and the broader implications for open-source project management.
Ad-blocking, open-source maintenance, GitHub, tracking prevention, filter lists, crowdsourcing collaboration.
Theodora-Stavroula Korma, Department of Communication and Information Studies, Rijksuniversiteit Groningen, Groningen, The Netherlands
Predictive policing, an algorithm-driven crime prevention initiative, claims to render the criminal justice system more effective and neutral. Yet, this essay argues that these algorithmic models reinforce system-level prejudices and disproportionately target marginalized populations while amplifying injustice. As these models draw from four decades of historical data shaped by biased police operations, they can magnify racial profiling and harden social hierarchies. Furthermore, these systems' lack of transparency and accountability raises ethical concerns about surveillance, due process, and civil rights violations. In line with Design Justice principles, this paper calls for a redesign of predictive policing that is not about control by systems but the empowerment of communities. Instead of being used as enforcement tools, these algorithms must be redesigned to address root causes of social harm, promote equitable resource allocation, and engage communities in decision-making. Through participatory governance and moral algorithmic design, predictive technologies can serve justice rather than subvert it, so that communities are protected, not monitored.
Predictive policing, algorithmic bias, systemic injustice, racial profiling, Design Justice.
Anthony Chidi Nzomiwu & Michael Nwobodo, Krakow University of Economics, Poland
This comprehensive review examines the convergence and integration of emerging technologies and their transformative impact across industries, drawing from empirical research and documented case studies from 2000 to 2025. The analysis demonstrates how the fusion of physical, digital, and biological technologies has created unprecedented synergies, fundamentally altering traditional operational paradigms. Through detailed examination of implementations in manufacturing, healthcare, financial services, and agriculture, the study reveals patterns of successful technology integration and their measurable impacts. Key findings indicate that successful technological transformation requires systematic attention to three critical dimensions: technical infrastructure, organizational capabilities, and societal implications. The research identifies significant challenges, including interoperability issues, security vulnerabilities, workforce transformation, and ethical considerations, while providing evidence-based frameworks for addressing these challenges. The study contributes to both theoretical understanding and practical implementation of emerging technologies, offering insights for policymakers, business leaders, and researchers. The conclusion synthesizes strategic implications for future technological development, emphasizing the need for integrated approaches to innovation, governance, and sustainability.
Technological convergence; Digital transformation; Industry 4.0; Systems integration; Innovation management; Artificial intelligence; Sustainable technology; Digital ethics; Organizational capabilities; Cyber-physical systems; Smart manufacturing; Digital infrastructure; Technological innovation.
Iblal Rakha1 and Noorhan Abbas2, 1Oxford University Hospitals NHS Foundation Trust, Oxford, OX3 9DU, UK, 2University of Leeds, Woodhouse, Leeds, LS2 9JT, UK
The NHS faces mounting pressures, resulting in workforce attrition and growing care backlogs. Pharmacy services, critical for ensuring medication safety and effectiveness, are often overlooked in digital innovation efforts. This pilot study investigates the potential of Large Language Models (LLMs) to alleviate pharmacy pressures by answering clinical pharmaceutical queries. Two retrieval techniques were evaluated: Vanilla Retrieval Augmented Generation (RAG) and Graph RAG, supported by an external knowledge source designed specifically for this study. ChatGPT 4o without retrieval served as a control. Quantitative and qualitative evaluations were conducted, including expert human assessments for response accuracy, relevance, and safety. Results demonstrated that LLMs can generate high-quality responses. In expert evaluations, Vanilla RAG outperformed other models and even human reference answers for accuracy and risk. Graph RAG revealed challenges related to retrieval accuracy. Despite the promise of LLMs, hallucinations and the ambiguity around LLM evaluations in healthcare remain key barriers to clinical deployment. This pilot study underscores the importance of robust evaluation frameworks to ensure the safe integration of LLMs into clinical workflows. However, regulatory bodies have yet to catch up with the rapid pace of LLM development. Guidelines are urgently needed to address the issues of transparency, explainability, data protection, and validation, to facilitate the safe and effective deployment of LLMs in clinical practice.
Large Language Model Evaluation, Retrieval Augmented Generation, Clinical Question Answering, Knowledge Graphs, Healthcare Artificial Intelligence.
Salahuddin Alawadhi1 and Noorhan Abbas2, 1University of Leeds, Dubai, UAE, 2School of Computer Science, University of Leeds, United Kingdom
The integration of Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs) has shown potential in providing precise, contextually relevant responses in knowledge-intensive domains. This study investigates the ap-plication of RAG for ABB circuit breakers, focusing on accuracy, reliability, and contextual relevance in high-stakes engineering environments. By leveraging tailored datasets, advanced embedding models, and optimized chunking strategies, the research addresses challenges in data retrieval and contextual alignment unique to engineering documentation. Key contributions include the development of a domain-specific dataset for ABB circuit breakers and the evaluation of three RAG pipelines: OpenAI GPT-4o, Cohere, and Anthropic Claude. Advanced chunking methods, such as paragraph-based and title-aware segmentation, are assessed for their impact on retrieval accuracy and response generation. Results demonstrate that while certain configurations achieve high precision and relevancy, limitations persist in ensuring factual faithfulness and completeness, critical in engineering contexts. This work underscores the need for iterative improvements in RAG systems to meet the stringent demands of electrical engineering tasks, including design, troubleshooting, and operational decision-making. The findings in this paper help advance research of AI in highly technical domains such as electrical engineering.
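Paragraph-based chunking of the kind evaluated above can be sketched in a few lines of pure Python (a simplified illustration, not the authors' pipeline): whole paragraphs are greedily packed into chunks up to a character budget, so retrieval units respect paragraph boundaries. The manual text below is a made-up placeholder, not ABB documentation.

```python
def paragraph_chunks(text, max_chars=200):
    """Greedily pack whole paragraphs into chunks of at most max_chars.

    A paragraph longer than the budget becomes its own chunk (never split).
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

manual = (
    "Rated current: 630 A.\n\n"
    "Tripping curve: see section 4.\n\n"
    "Maintenance interval: 12 months under normal load."
)
for chunk in paragraph_chunks(manual, max_chars=60):
    print("---", chunk)
```

Title-aware segmentation extends this idea by starting a new chunk whenever a heading is encountered, so a retrieved chunk always carries its section context.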
Retrieval-Augmented Generation (RAG), Electrical Engineering, ABB Circuit Breakers, Chunking, Embeddings
Jonathan Bennion1, Shaona Ghosh2, Mantek Singh3, Nouha Dziri4, 1The Objective AI, USA, 2Nvidia, USA, 3Google, USA, 4Allen Institute for AI (AI2), USA
Various AI safety datasets have been developed to measure LLMs against evolving interpretations of harm. Our evaluation of five recently published open-source safety benchmarks reveals distinct semantic clusters using UMAP dimensionality reduction and k-means clustering (silhouette score: 0.470). We identify six primary harm categories with varying benchmark representation. GretelAI, for example, focuses heavily on privacy concerns, while WildGuardMix emphasizes self-harm scenarios. Significant differences in prompt length distributions suggest confounds in data collection and in interpretations of harm, while also offering possible context. Our analysis quantifies orthogonality among AI safety benchmarks, allowing for transparency in coverage gaps despite topical similarities. Our quantitative framework for analyzing semantic orthogonality across safety benchmarks enables more targeted development of datasets that comprehensively address the evolving landscape of harms in AI use, however that is defined in the future.
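The clustering step can be sketched with scikit-learn. To keep the example dependency-light, PCA stands in for UMAP here (a named substitution), and the "embeddings" are synthetic blobs rather than the benchmark prompt embeddings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in for prompt embeddings: three well-separated blobs in 32-d.
embeddings = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(50, 32)) for c in (-2.0, 0.0, 2.0)
])

# Reduce, then cluster (the paper uses UMAP; PCA is a stand-in for the sketch).
reduced = PCA(n_components=2, random_state=0).fit_transform(embeddings)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

# The silhouette score summarizes cluster separation, as in the paper.
print(f"silhouette: {silhouette_score(reduced, labels):.3f}")
```

On real safety-prompt embeddings the clusters overlap far more, which is why the reported silhouette score (0.470) is well below what clean synthetic blobs produce.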
AI benchmark meta-analysis, LLM Embeddings, Dimensionality reduction, K-means clustering, AI safety.
Zhiyuan Liu, School of Computing and Communications, Lancaster University
The essay begins by setting out a detailed scenario for the deployment of face recognition systems in public places. Based on this scenario, two statutes that companies need to focus on and a relevant legal case are critically discussed. The essay then integrates the two statutes into the scenario and makes critical recommendations for security design decisions, both managerial and technical, based on the legal requirements. The essay concludes with a summary of the findings and insights.
Network Protocols, Wireless Network, Mobile Network, Viruses, Worms & Trojans.
Qi Huamei1 and Md Jahangir Alam2, 1School of Electronics Information Science, Central South University, Changsha, China, 2Department of Computer Science and Technology, Central South University, Changsha, China
The rise of Non-Alcoholic Fatty Liver Disease (NAFLD), associated with obesity and metabolic disorders, underscores the importance of developing precise prediction models for early identification. This research employs machine learning and survival analysis techniques to classify and forecast NAFLD using clinical and demographic data. The examined models include Decision Tree, Extra Trees, Random Forest (utilizing 10 estimators), and K-Nearest Neighbours (with K set to 3). For data preparation, KNN imputation was applied to address missing values, and MinMax scaling was used for standardization. Lasso regression (LassoCV) was implemented to select features and highlight significant variables to enhance model efficacy. Alongside classification models, the Kaplan-Meier estimator (KaplanMeierFitter) and Cox Proportional Hazards Model (CoxPHFitter) were utilized to evaluate patient survival rates and to pinpoint risk factors. The ensemble models, specifically the Extra Trees and Random Forest classifiers, surpassed the baseline Decision Tree (88.28%) and KNN (91.56%) models, achieving accuracies of 92.54% and 92.63%, respectively. LassoCV contributed to improved feature significance, while survival analysis offered valuable insights into the progression of NAFLD. This study showcases the efficacy of ensemble methods and survival analysis in developing reliable and interpretable prediction models for NAFLD. Future research should aim to expand the dataset and incorporate additional clinical parameters.
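The preparation-and-selection pipeline described above can be sketched with scikit-learn on synthetic data (a minimal illustration of the same components, not the authors' code or dataset; the survival-analysis step with lifelines is omitted):

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LassoCV
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic binary target
X[rng.random(X.shape) < 0.05] = np.nan    # inject missing values

# 1) KNN imputation and 2) MinMax scaling, as in the study's data preparation.
X_imp = KNNImputer(n_neighbors=5).fit_transform(X)
X_scaled = MinMaxScaler().fit_transform(X_imp)

# 3) LassoCV highlights informative features (nonzero coefficients).
lasso = LassoCV(cv=5, random_state=0).fit(X_scaled, y)
selected = np.flatnonzero(lasso.coef_)

# 4) Ensemble classifier on the selected features (10 estimators, as cited).
clf = ExtraTreesClassifier(n_estimators=10, random_state=0)
clf.fit(X_scaled[:, selected], y)
print("selected features:", selected.tolist(),
      "train accuracy:", clf.score(X_scaled[:, selected], y))
```

With real clinical data, the accuracy would of course be measured on a held-out split rather than the training set shown here.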
NAFLD Prediction, Ensemble Learning, LassoCV Feature Selection, Survival Analysis, Cox Proportional Hazards, Kaplan-Meier Estimator.
Messaoud MEZATI, Siham BEGGAA, Houria BENBOUBKEUR, Chahd BRAITHEL and Malak GHOULIA, Department of Computer Science and Information Technology, Kasdi Merbah University Ouargla, Algeria
Predicting pedestrian trajectories is a key challenge in intelligent transportation systems, robotics, and urban mobility, requiring models that balance accuracy, adaptability, and interpretability. Traditional Knowledge-Based (KB) models, including social force models, agent-based simulations, and reinforcement learning, offer structured decision-making but struggle with rapidly changing and complex environments. In contrast, Deep Learning (DL) techniques, such as LSTMs, Graph Neural Networks (GNNs), and Transformers, capture intricate movement patterns but often lack transparency. This study examines the hybridization of KB and DL models, integrating physics-based constraints with data-driven learning to enhance pedestrian behavior forecasting. A systematic classification of hybrid models is provided based on model structure, prediction tasks, AI integration, and real-world applications. Additionally, the study explores the potential of Reinforcement Learning (RL), Self-Supervised Learning, and Large Language Models (LLMs) in trajectory prediction. By bridging rule-based reasoning with adaptive learning, this work contributes to the development of safer, more flexible, and explainable pedestrian trajectory prediction models for applications in autonomous navigation, smart cities, and crowd management.
Pedestrian Trajectory Prediction, Knowledge-Based Models, Deep Learning, Autonomous Driving, Explainable AI.
Shraddha Sharma1, Anjali Sharma2 and Gaurav Vishwakarma3, 1Entergy Services, Houston Texas, USA, 2MP Electricity Board, Indore, MP, India, 3Reliance Power, Lucknow, UP, India
State estimation in power grids is the process of determining the most accurate operating state of an electrical power system using available measurements from sensors and meters. State estimation helps ensure reliability and efficiency in modern electrical networks. Traditional state estimation methods, such as the Weighted Least Squares (WLS) approach, often struggle with non-linearity, measurement noise, and data sparsity. Deep learning (DL) has emerged as a powerful alternative due to its ability to learn complex patterns and handle large datasets. This paper explores the application of deep learning techniques for state estimation in power grids and compares their performance against conventional methods.
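The classical WLS baseline mentioned above solves x_hat = (H^T W H)^{-1} H^T W z for a linearized measurement model z = Hx + e; a minimal NumPy sketch with made-up numbers (the system matrix and states below are toy placeholders, not a real grid):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearized measurement model z = H x + e (toy 2-state, 5-measurement system).
x_true = np.array([1.02, -0.05])            # hypothetical states (e.g. |V|, angle)
H = rng.normal(size=(5, 2))                 # made-up measurement Jacobian
sigma = np.array([0.01, 0.01, 0.02, 0.02, 0.05])
z = H @ x_true + rng.normal(scale=sigma)    # noisy measurements

# Weighted least squares: weights are inverse measurement variances.
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimate:", x_hat, "true:", x_true)
```

DL-based estimators replace this closed-form solve with a learned mapping from measurements to states, which is what lets them cope with non-linearity and sparse or noisy telemetry.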
State Estimation, Weighted Least Squares, Deep Learning, State Estimators, Power Grid, Bayesian Algorithm.
Muhammad Sarmad, Emanuele Mele, Rajat Srivastava, Marco Pulimeno, Massimo Cafaro, and Italo Epicoco, Department of Engineering for Innovation, University of Salento, Lecce 73047, Italy
Accurate representation of oceanic conditions is fundamental for reliable climate modeling, weather forecasting, and environmental monitoring. However, ocean models and observational datasets often exhibit systematic biases due to limitations in model physics, parameterizations, resolution, or observational coverage. In this work, we propose a diffusion model for bias correction and we systematically evaluate its performance for ocean Sea Surface Temperature (SST) generation by varying different hyper-parameters in the U-Net architecture. The model is trained to denoise simulated data and reconstruct SST fields guided by reanalysis data. Our results demonstrate that increasing the base channel depth significantly improves the model's performance, with improvements in convergence speed, reconstruction accuracy, and spatial detail retention. Quantitative metrics such as root mean squared error (RMSE), Pearson's correlation coefficient (PCC), and coefficient of determination (R2) show notable gains up to a base channel depth of 64, beyond which performance gains plateau. A detailed temporal generalization analysis using seasonal batches every two months confirms the robustness of the model in varying SST regimes. At the same time, qualitative visualizations show sharp and coherent reconstructions with minimal error. The study highlights the trade-off between model complexity and performance and identifies 64 base channels as a computationally efficient and accurate configuration for SST modeling using diffusion-based generative methods.
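The evaluation metrics named above (RMSE, PCC, R2) have direct NumPy formulations; a self-contained sketch on a synthetic SST-like field (the temperatures are placeholders, not reanalysis data):

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def pcc(y_true, y_pred):
    # Pearson correlation between flattened fields.
    return float(np.corrcoef(y_true.ravel(), y_pred.ravel())[0, 1])

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

rng = np.random.default_rng(0)
sst_true = 15.0 + 5.0 * rng.random((64, 64))   # synthetic SST field in deg C
sst_pred = sst_true + rng.normal(scale=0.2, size=sst_true.shape)

print(rmse(sst_true, sst_pred), pcc(sst_true, sst_pred), r2(sst_true, sst_pred))
```

Comparing these three numbers across base channel depths is exactly the kind of sweep the paper reports, with gains plateauing beyond 64 channels.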
Diffusion Models, Oceanic Dataset, Architectural Parameters, Bias Correction.
Lee Seo-jun, Choi Seo-yeon, Oh Ji-soo, & Gyu Tae Bae, UC Berkeley, United States of America
South Korea, with approximately 63% of its land covered by forests, is highly susceptible to wildfires. Traditional fire detection methods—such as satellite imagery and ground-based observation—face significant limitations, including high operational costs, delayed response times, and vulnerability to weather conditions. This paper presents an efficient fire detection system for Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicles (UAVs), utilizing Convolutional Neural Networks (CNNs). The integration of CNNs significantly improves detection accuracy, even in complex environments that challenge conventional approaches. In simulations designed to closely mimic real-world scenarios, the optimized algorithm achieved a 93% detection rate with 20% false positives and a frame latency of just 1.2 seconds. Additionally, deploying the model on a Raspberry Pi onboard a VTOL drone demonstrated its practical viability for real-time forest fire surveillance and rapid response. This study highlights the potential of drone-based, AI-powered fire detection systems as a powerful supplement to existing wildfire monitoring and prevention strategies.
Forest fire detection, Wildfires, VTOL drones, Unmanned Aerial Vehicle (UAV), Convolutional Neural Networks (CNNs), Real-time detection, False positives, Frame latency, Raspberry Pi, Onboard processing, Fire surveillance, AI-powered monitoring, Wildland fire prevention, Drone-based systems, Environmental monitoring
Rachana S Potpelwar, U V Kulkarni, J M Waghmare, Shri Guru Gobind Singh Institute of Engineering and Technology, Computer Science and Engineering Department, Nanded, 431605, Maharashtra, India
Phishing attacks continue to pose a significant threat to online security, making efficient detection methods essential. This paper presents a lexical-based approach for detecting malicious URLs using deep learning algorithms, including Artificial Neural Networks (ANN), Multi-Layer Perceptrons (MLP), and Long Short-Term Memory (LSTM) networks. Our dataset consists of phishing and legitimate URLs labeled accordingly. To enhance detection accuracy, the dataset was preprocessed using the Term Frequency-Inverse Document Frequency (TF-IDF) method, converting the raw URL strings into meaningful numerical representations. The experimental results demonstrate that preprocessing substantially improves model performance. For LSTM, the accuracy improved from 90.05% (without preprocessing) to 90.77% (with preprocessing). These results highlight the effectiveness of combining lexical feature extraction with deep learning algorithms, offering a promising solution for real-time detection systems to safeguard against phishing attacks and enhance cybersecurity.
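The lexical TF-IDF step can be sketched with scikit-learn; character n-grams are one common way to tokenize raw URL strings. The URLs below are toy placeholders (not from the paper's dataset), and a small scikit-learn MLP stands in for the paper's deep models to keep the sketch self-contained.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

# Toy placeholder URLs: 1 = phishing, 0 = legitimate.
urls = [
    "http://secure-login.paypa1-verify.com/update/account",
    "http://appleid.apple.com.confirm-user.ru/session",
    "http://bank0famerica-alerts.net/login.php?id=123",
    "http://free-gift-cards.win/claim-now",
    "https://www.wikipedia.org/wiki/Phishing",
    "https://github.com/scikit-learn/scikit-learn",
    "https://www.python.org/downloads/",
    "https://docs.python.org/3/library/re.html",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Character n-grams turn raw URL strings into TF-IDF feature vectors.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
X = vectorizer.fit_transform(urls)

clf = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                    max_iter=2000, random_state=0)
clf.fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```

A real system would of course evaluate on held-out URLs; the point of the sketch is only the TF-IDF-over-characters representation that the paper's preprocessing step produces.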
Wallas Bruno S. Lira1, Gilton José Ferreira da Silva1, Silvio Mario Felix Dantas1, Barbara Cristina Silva Rosa2, Cassia Regina D'Antonio Rocha da Silva3, 1Departamento de Ciência da Computação – Universidade Federal de Sergipe (UFS), Cidade Univ. Prof. José Aloísio de Campos, Av. Marcelo Deda Chagas, s/n, Bairro Rosa Elze, São Cristóvão/SE, CEP 49107-230, 2Departamento de Fonoaudiologia – Universidade Federal de Sergipe (UFS), Cidade Univ. Prof. José Aloísio de Campos, Av. Marcelo Deda Chagas, s/n, Bairro Rosa Elze, São Cristóvão/SE, CEP 49107-230, 3Universidade Tiradentes - Campus II, Av. Murilo Dantas, 300, Farolândia, 49032-490, Aracaju, SE, Brasil
The dynamic process of discovering and documenting software requirements demands effective approaches. This study explores, through a survey conducted via Google Forms using the snowball technique, the application of Design Thinking (DT) stages in Requirements Engineering (RE). Findings indicate that integrating DT phases enhances Requirements Engineering activities, though maintaining architectural quality throughout the agile lifecycle remains challenging. It concludes that the synergy among DT, Requirements Engineering, and Software Architecture significantly improves the effectiveness of agile software projects.
Design Thinking, Requirements Engineering, Software Architecture, Software Development, Design Management.
Bohdan Vodianyk1, Enrique Nava Baro2, Alfonso Ariza Quintana2, Anton Popov3, 4, 1Escuela de Ingenierías Industriales, Universidad de Málaga, Arquitecto Francisco Peñalosa, 6, Malaga, 29071, Spain, 2ETSI Telecomunicación, Universidad de Málaga, Blvr. Louis Pasteur, 35, Malaga, 29010, Spain, 3Department of Electronic Engineering, Igor Sikorsky Kyiv Polytechnic Institute, Polytekhnichna Street, 16, Kyiv, 03056, Ukraine, 4Faculty of Applied Sciences, Ukrainian Catholic University, Kozelnytska Street, 2a, Lviv, 79026, Ukraine
Accurate 3D reconstruction of dental structures is crucial for orthodontic assessment and surgical planning. Traditional methods like SIFT and ORB struggle with complex dental textures. This paper proposes a pipeline using KeyNetAffNetHardNet for feature detection and matching, achieving higher robustness and 25% faster computation compared to state-of-the-art methods like LoFTR and DISK + LightGlue. To optimize 3D mesh reconstruction, Surface-Aligned Gaussian Splatting (SuGaR) enhances mesh accuracy and rendering quality, achieving SSIM up to 0.9538 and PSNR up to 28.98, improving SSIM by 10% and PSNR by 15% over conventional approaches. Experimental results demonstrate that integrating KeyNetAffNetHardNet and SuGaR delivers high-fidelity 3D dental models with improved efficiency and quality, advancing dental diagnostics and treatment planning.
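PSNR, one of the rendering-quality metrics reported above, has a simple closed form; a NumPy sketch on synthetic renders (SSIM involves local windowed statistics and is omitted here):

```python
import numpy as np

def psnr(img_true, img_pred, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((img_true - img_pred) ** 2)
    return float(10.0 * np.log10(max_val**2 / mse))

rng = np.random.default_rng(0)
render_true = rng.random((128, 128))   # synthetic reference render in [0, 1]
render_pred = np.clip(render_true + rng.normal(scale=0.02, size=(128, 128)), 0, 1)

print(f"PSNR: {psnr(render_true, render_pred):.2f} dB")
```

Higher PSNR means lower pixel-wise reconstruction error, so values near 29 dB, as reported for the SuGaR-refined meshes, indicate visually faithful renders.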
3D Reconstruction, Keypoint Matching, Gaussian Splatting, Dental Imaging, Deep Learning, Computer Vision.
Copyright © CDKP 2025