Multimodal spatiotemporal data from smart city consumer electronics present critical challenges, including cross-modal temporal misalignment, unreliable data quality, limited joint modeling of spatial and temporal dependencies, and weak resilience to adversarial updates. To address these limitations, EdgeST-Fusion is introduced as a cross-modal federated graph transformer framework for context-aware smart city analytics. The architecture integrates cross-modal embedding networks for modality alignment, graph transformer encoders for spatial dependency modeling, temporal self-attention for dynamic pattern learning, and adaptive anomaly detection to ensure data quality and security during aggregation. A privacy-preserving federated learning protocol with differential privacy guarantees enables collaborative model training without centralizing sensitive data. The framework employs data-quality-aware weighted aggregation to enhance robustness against noisy and malicious client updates. Experimental evaluation on the GeoLife, PeMS-Bay, and SmartHome+ datasets demonstrates that EdgeST-Fusion achieves a 21.8% improvement in prediction accuracy, a 35.7% reduction in communication overhead, and a 29.4% enhancement in security resilience compared to recent baselines. Real-world deployment across three smart city testbeds validates practical viability with 90.0% average accuracy and sub-250 ms inference latency. The proposed framework remains feasible for deployment on heterogeneous and resource-constrained consumer electronics devices while maintaining strong privacy guarantees and scalability for large-scale urban environments.
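The data-quality-aware weighted aggregation described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual rule: client parameter updates are averaged with weights proportional to an assumed per-client quality score, and clients below a quality floor are excluded. The function name, the score scale, and the floor value are all hypothetical.

```python
def quality_weighted_aggregate(client_updates, quality_scores, min_quality=0.2):
    """Average client parameter vectors, weighting each client by its
    (assumed) data-quality score and dropping clients below a floor."""
    kept = [(u, q) for u, q in zip(client_updates, quality_scores)
            if q >= min_quality]
    if not kept:
        raise ValueError("no client passed the quality floor")
    total = sum(q for _, q in kept)
    dim = len(kept[0][0])
    global_update = [0.0] * dim
    for update, q in kept:
        w = q / total  # normalized quality weight
        for i in range(dim):
            global_update[i] += w * update[i]
    return global_update

# Two trustworthy clients and one low-quality (possibly malicious) one:
agg = quality_weighted_aggregate(
    [[1.0, 1.0], [3.0, 3.0], [100.0, 100.0]],
    [0.5, 0.5, 0.1],
)
```

With the floor at 0.2, the third client's extreme update is excluded and the remaining two are averaged with equal weight.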
Recent digital advancements have intensified the need for adaptive, data-driven, and socially centered learning ecosystems. This paper presents the formulation of a cross-platform, innovative, gamified, and personalized Learning Ecosystem that integrates 3D/VR environments, machine learning algorithms, and business intelligence frameworks to enhance learner-centered education and inference-driven decision-making. The Learning System makes use of immersive, analytically assessed virtual learning spaces, thereby facilitating real-time monitoring not only of learning performance but also of overall engagement and behavioral patterns, via a comprehensive set of sustainability-oriented, ESG-aligned Key Performance Indicators (KPIs). Machine learning models support predictive analysis, personalized feedback, and hybrid recommendation mechanisms, while dedicated dashboards translate complex educational data into actionable insights for all use cases of the System (educational institutions, educators, and learners). Additionally, the presented Learning System introduces a structured Mentoring and Consulting Subsystem, reinforcing human-centered guidance alongside automated intelligence. The Platform's modular architecture and simulation-centered evaluation approach actively support personalized and continuously optimized learning pathways. The result exemplifies a mature, adaptive Learning Ecosystem that unites immersive technologies, analytics, and pedagogical support, contributing to contemporary digital learning innovation and sociotechnical transformation in education.
This paper conducts a systematic literature review (SLR) of artificial intelligence (AI) approaches for predicting and diagnosing diabetes mellitus. Reviewing the literature published from 2015 to 2025, it identifies the most effective AI techniques, the most commonly used datasets and data preprocessing techniques, and the most common open issues. The analysis finds that convolutional neural networks (CNNs) and long short-term memory (LSTM) networks are the deep learning models that have shown the highest accuracy in diabetes prediction. Recursive feature elimination (RFE) and SMOTE are preprocessing techniques that have significantly improved model accuracy, training time, and interpretability. Amidst this technological advancement, some issues persist: data imbalance, the inapplicability of certain techniques, computational limitations, and a lack of real-time application in healthcare environments. The review also identifies the need for robust, interpretable, and scalable AI systems capable of handling large volumes of data, including real-world data, in the healthcare industry. Furthermore, it highlights that such systems should be integrated with wearable health monitoring and with privacy-preserving models to ensure continuous, secure, and proactive diabetes management.
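SMOTE, mentioned above as one of the most impactful preprocessing techniques, oversamples the minority class by interpolating between a minority sample and one of its nearest minority-class neighbours. A minimal pure-Python sketch of that idea follows (real pipelines would use `imblearn.over_sampling.SMOTE`; the helper below is illustrative only and its parameters are arbitrary):

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    random minority sample and one of its k nearest minority-class
    neighbours (the core SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class (excluding x)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return synthetic

syn = smote_like([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], 5)
```

Each synthetic point lies on a segment between two existing minority samples, so the oversampled class stays inside its original convex region.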
The increasing number of interconnected devices and the incorporation of smart technology into contemporary healthcare systems have significantly enlarged the attack surface for cyber threats. Early threat detection is both necessary and complex, as these interconnected healthcare settings generate enormous amounts of heterogeneous data. Traditional Intrusion Detection Systems (IDS), which are generally centralized and machine learning-based, often fail to address the rapidly changing nature of cyberattacks and are challenged by ethical concerns related to patient data privacy. Moreover, traditional AI-driven IDS usually struggle to handle large-scale, heterogeneous healthcare data while ensuring data privacy and operational efficiency. To address these issues, emerging technologies such as Big Data Analytics (BDA) and Federated Learning (FL) provide a hybrid framework for scalable, adaptive intrusion detection in IoT-driven healthcare systems. Big data techniques enable processing of large-scale, high-dimensional healthcare data, and FL can train a model in a decentralized manner without transferring raw data, thereby maintaining privacy between institutions. This research proposes a privacy-preserving Federated Learning-based model that efficiently detects cyber threats in connected healthcare systems while ensuring distributed big data processing, privacy, and compliance with ethical regulations. To strengthen the reliability of the reported findings, the results were validated using cross-dataset testing and 95% confidence intervals derived from bootstrap analysis, confirming consistent performance across heterogeneous healthcare data distributions. The proposed global model achieves a test accuracy of 99.93% ± 0.03 (95% CI) and a miss rate of only 0.07% ± 0.02, representing state-of-the-art performance in privacy-preserving intrusion detection. The proposed FL-driven IDS framework thus offers an efficient, privacy-preserving, and scalable solution for securing next-generation healthcare infrastructures by combining scalability, adaptability, early detection, and ethical data management.
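The percentile-bootstrap confidence intervals used for validation above can be reproduced with a short routine. This is a generic sketch of the method, not the authors' code; the resample count and seed are arbitrary choices.

```python
import random

def bootstrap_ci(per_sample_correct, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for accuracy,
    computed from per-sample 0/1 correctness indicators."""
    rng = random.Random(seed)
    n = len(per_sample_correct)
    accs = []
    for _ in range(n_boot):
        # resample the test set with replacement, recompute accuracy
        resample = [per_sample_correct[rng.randrange(n)] for _ in range(n)]
        accs.append(sum(resample) / n)
    accs.sort()
    lo = accs[int((alpha / 2) * n_boot)]
    hi = accs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# toy test set: 95 correct predictions out of 100
lo, hi = bootstrap_ci([1] * 95 + [0] * 5)
```

The interval brackets the point estimate (0.95 here), and its width shrinks as the test set grows.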
In the competitive retail industry of the digital era, data-driven insights into gender-specific customer behavior are essential. They support the optimization of store performance, layout design, product placement, and targeted marketing. However, existing computer vision solutions often rely on facial recognition to gather such insights, raising significant privacy and ethical concerns. To address these issues, this paper presents a privacy-preserving customer analytics system built on two key strategies. First, we deploy a deep learning framework using YOLOv9s, trained on the RCA-TVGender dataset. Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate gender classification. Second, we apply AES-128 encryption to customer position data, ensuring secure access and regulatory compliance. Our system achieved solid overall performance, with 81.5% mAP@50, 77.7% precision, and 75.7% recall. Moreover, a 90-minute observational study confirmed the system's ability to generate privacy-protected heatmaps revealing distinct behavioral patterns between male and female customers; for instance, women spent more time in certain areas and showed interest in different products. These results confirm the system's effectiveness in enabling personalized layout and marketing strategies without compromising privacy.
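Once position data are decrypted for authorized analysis, the heatmaps described above reduce to binning (x, y) customer positions into an occupancy grid. A minimal sketch of that aggregation step follows; the AES-128 layer is omitted, and the store dimensions and grid resolution are hypothetical.

```python
def position_heatmap(positions, width, height, nx, ny):
    """Bin (x, y) customer positions into an nx-by-ny occupancy grid;
    such aggregated counts are what a dwell-time heatmap is built from."""
    grid = [[0] * nx for _ in range(ny)]
    for x, y in positions:
        if 0 <= x < width and 0 <= y < height:
            i = int(x / width * nx)   # column index
            j = int(y / height * ny)  # row index
            grid[j][i] += 1
    return grid

# three observed positions in a 10 m x 10 m floor, 2 x 2 grid
grid = position_heatmap([(0.5, 0.5), (9.5, 9.5), (9.0, 9.9)], 10.0, 10.0, 2, 2)
```

Because only cell counts are retained, the aggregate reveals zone popularity without storing identifiable trajectories.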
Artificial Intelligence (AI) is changing healthcare by assisting with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset with images, audio, and physiological information. The datasets were carefully preprocessed for missing values, redundant features, and class imbalance to ensure fair learning. The results outperformed the state of the art with a Regularized Neural Network, achieving 97.6% accuracy on behavioral data and 98.2% on the multimodal data. Other models also performed well, with accuracies consistently above 96%. We also applied SHAP and LIME to the behavioral dataset for model explainability.
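The abstract does not spell out how the individual models are combined; one common way to merge Random Forest, XGBoost, and neural network outputs is soft voting over predicted class probabilities. The sketch below shows that generic scheme only, and it is an assumption rather than the authors' actual fusion method.

```python
def soft_vote(prob_lists):
    """Combine per-model class-probability vectors by averaging them and
    returning the index of the most probable class (soft voting)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# three models' probabilities for classes (non-ASD, ASD); majority leans to 1
pred = soft_vote([[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]])
```

Averaging probabilities, rather than hard labels, lets a confident model outweigh two uncertain ones.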
The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been trained in Spanish and (2) In-Context Learning techniques (Zero- and Few-Shot Learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall, and F1-score, together with stability-oriented metrics that assess the transition from zero-shot to few-shot prompting (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain). The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%–66%, depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range roughly 0%–39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen3 8B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of roughly 20%–51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation and point to the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
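Macro F1, the headline metric above, averages per-class F1 scores with equal weight, so rare hate-speech categories count as much as frequent ones. A self-contained reference implementation (equivalent to `sklearn.metrics.f1_score(..., average='macro')`):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 computed one-vs-rest, then
    averaged with equal weight per class."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

score = macro_f1([0, 0, 1, 1], [0, 1, 1, 1])
```

In this toy case class 0 scores F1 = 2/3 and class 1 scores F1 = 4/5, so the macro average is 11/15 ≈ 0.733.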
The analytic continuation serves as a crucial bridge between quantum Monte Carlo calculations in the imaginary-time formalism (specifically, the Green's functions) and physical measurements in real time (the spectral functions). Various approaches have been developed to enhance the accuracy of analytic continuation, including the Padé approximation, the maximum entropy method, and stochastic analytic continuation. In this study, we employ different deep learning techniques to investigate analytic continuation for the quantum impurity model. A significant challenge in this context is that the sharp Abrikosov-Suhl resonance peak may be either underestimated or overestimated. We fit both the imaginary-time Green's function and the spectral function using Chebyshev polynomials in logarithmic coordinates. We utilize Fully Connected Networks (FCNs), Convolutional Neural Networks (CNNs), and Residual Networks (ResNets) to address this issue. Our findings indicate that introducing noise during the training phase significantly improves the accuracy of the learning process; the typical absolute error achieved is less than 10^-4. These investigations pave the way for machine learning to optimize the analytic continuation problem in many-body systems, thereby reducing the need for prior expertise in physics.
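The Chebyshev-in-logarithmic-coordinates representation can be illustrated on a toy decay G(tau) = exp(-tau): substitute u = log10(tau) on an assumed interval, map it linearly to [-1, 1], and expand in Chebyshev polynomials. Everything below (the toy Green's function, the interval, the node count) is an illustrative assumption, not the paper's actual setup.

```python
import math

def cheb_fit(f, n):
    """Chebyshev interpolation coefficients of f on [-1, 1], sampled at
    the n Chebyshev nodes (a discrete cosine transform of node values)."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fvals = [f(t) for t in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(fvals[k] * math.cos(j * math.pi * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0
    return coeffs

def cheb_eval(coeffs, t):
    """Evaluate a Chebyshev series with Clenshaw's recurrence."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]

# Toy decay G(tau) = exp(-tau) on a logarithmic axis: with u = log10(tau)
# in [-3, 1] mapped to t in [-1, 1], tau = 10**(2t - 1).
g_of_t = lambda t: math.exp(-10.0 ** (2.0 * t - 1.0))
coeffs = cheb_fit(g_of_t, 20)
max_err = max(abs(cheb_eval(coeffs, t / 100.0) - g_of_t(t / 100.0))
              for t in range(-100, 101))
```

Twenty coefficients suffice to represent this smooth decay far more compactly than a dense tau grid, which is what makes such expansions convenient network inputs.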
This paper presents a state-of-the-art machine learning-based approach for automating a varied class of Internet of Things (IoT) analytics problems targeted at 1-dimensional (1-D) sensor data. As feature recommendation is a major bottleneck for general IoT-based applications, this paper shows how this step can be successfully automated based on a Wide Learning architecture without sacrificing decision-making accuracy, thereby reducing development time and the cost of hiring expensive domain experts for specific problems. Interpretation of meaningful features is another contribution of this research. Several datasets from different real-world applications are considered to realize the proof of concept. Results show that the interpretable feature recommendation techniques are quite effective for the problems at hand, in terms of both performance and a drastic reduction in development time.
This paper proposes a novel framework to detect cyber-attacks using machine learning coupled with User Behavior Analytics. The framework models user behavior as sequences of events representing user activities on the network. The represented sequences are then fitted into a recurrent neural network model to extract features that capture the distinctive behavior of individual users. Thus, the model can recognize frequencies of regular behavior and profile each user's manner of acting in the network. The recurrent neural network then detects abnormal behavior by classifying unknown behavior as either regular or irregular. The proposed framework is important given the increase in cyber-attacks, especially attacks triggered from sources inside the network. Detecting insider attacks is typically much more challenging, as security protocols can barely recognize attacks from trusted resources in the network, including users. Therefore, user behavior can be extracted and ultimately learned to recognize insightful patterns, in which regular patterns reflect a normal network workflow, while irregular patterns can trigger an alert for a potential cyber-attack. The framework has been fully described and its evaluation metrics introduced. The experimental results show that the approach performed better than comparable approaches, with an AUC of 0.97 achieved using an RNN-LSTM model. The paper concludes by providing potential directions for future improvements.
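The AUC of 0.97 reported above has a simple rank interpretation: it is the probability that a randomly chosen anomalous sequence receives a higher anomaly score than a randomly chosen normal one (ties counting half). A direct, O(n·m) reference implementation of that interpretation:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the probability that a random positive (anomalous) example
    scores higher than a random negative one; ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# anomaly scores for 2 irregular and 2 regular sequences (toy values)
auc = auc_from_scores([0.9, 0.4], [0.5, 0.1])
```

Here three of the four positive-negative pairs are ranked correctly, giving an AUC of 0.75; production code would use a sort-based O(n log n) formulation instead.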
Process analytics is a popular research domain that has advanced in recent years. It encompasses the identification, monitoring, and improvement of processes through knowledge extraction from historical data. The evolution of Artificial Intelligence (AI)-enabled Electronic Health Records (EHRs) has revolutionized medical practice. Type 2 Diabetes Mellitus (T2DM) is a syndrome characterized by the lack of insulin secretion. If not diagnosed and managed at early stages, it may produce severe outcomes and, at times, death. Chronic Kidney Disease (CKD) and Coronary Heart Disease (CHD) are the most common long-term, life-threatening diseases caused by T2DM. Therefore, it becomes inevitable to predict the risks of CKD and CHD in T2DM patients. The current research article presents an automated Deep Learning (DL)-based Deep Neural Network with the Adagrad Optimization Algorithm, i.e., the DNN-AGOA model, to predict CKD and CHD risks in T2DM patients. The proposed risk prediction model helps alert both T2DM patients and clinicians in advance. At first, the DNN-AGOA model performs data preprocessing to improve the quality of the data and make it compatible for further processing. Besides, a Deep Neural Network (DNN) is employed for feature extraction, after which a sigmoid function is used for classification. Further, the Adagrad optimizer is applied to improve the performance of the DNN model. For experimental validation, benchmark medical datasets were used and the results were validated under several dimensions. The proposed model achieved a maximum precision of 93.99%, recall of 94.63%, specificity of 73.34%, accuracy of 92.58%, and F-score of 94.22%. The results attained through experimentation established that the proposed DNN-AGOA model has good prediction capability compared with other methods.
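The Adagrad optimizer at the heart of DNN-AGOA divides each parameter's step by the root of its accumulated squared gradients, so frequently updated parameters receive progressively smaller steps. A minimal sketch on a toy quadratic follows; the learning rate, step count, and objective are arbitrary illustrations, not the paper's training configuration.

```python
import math

def adagrad_minimize(grad, w0, lr=1.0, steps=200, eps=1e-8):
    """Minimal Adagrad: per-parameter accumulation of squared gradients
    scales down the effective learning rate over time."""
    w = list(w0)
    accum = [0.0] * len(w)
    for _ in range(steps):
        g = grad(w)
        for i in range(len(w)):
            accum[i] += g[i] * g[i]
            w[i] -= lr * g[i] / (math.sqrt(accum[i]) + eps)
    return w

# toy objective: (w0 - 3)^2 + (w1 + 1)^2, minimized at (3, -1)
grad = lambda w: [2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)]
w = adagrad_minimize(grad, [0.0, 0.0])
```

Despite the fixed base learning rate, the adaptive denominator lets both coordinates settle near the minimum without manual schedule tuning.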
Traditional Numerical Reservoir Simulation has been contributing to the oil and gas industry for decades. The current state of this technology is the result of decades of research and development by a large number of engineers and scientists. Starting in the late 1960s and early 1970s, advances in computer hardware, along with the development and adaptation of clever algorithms, resulted in a paradigm shift in reservoir studies, moving them from simplified analogs and analytical solution methods to more mathematically robust computational and numerical solution models.
Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability is not quantified, while adaptability is not specifically considered in this domain at all. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations. It also provides explainability in the form of rules describing the reasoning behind a particular output. The paper further discusses the system's evaluation on a benchmark dataset, showing promising results. The measure of explainability, the fuzzy index, shows that the model is highly interpretable. The system achieves more than 82% recall in both the classification and the context adaptation stages.
Learning analytics is an emerging technique for analysing student participation and engagement. The recent COVID-19 pandemic has significantly increased the role of learning management systems (LMSs). LMSs previously only complemented face-to-face teaching, something which was not possible between 2019 and 2020. To date, the existing body of literature on LMSs has not analysed learning in the context of the pandemic, where an LMS serves as the only interface between students and instructors. Consequently, productive results will remain elusive if the key factors that contribute towards engaging students in learning are not first identified. Therefore, this study aimed to perform an extensive literature review with which to design and develop a student engagement model for holistic involvement in an LMS. The required data was collected from an LMS that is currently utilised by a local Malaysian university. The model was validated by a panel of experts as well as through discussions with students. It is our hope that the results of this study will help other institutions of higher learning determine the factors behind low engagement in their respective LMSs.
Learning analytics is a rapidly evolving research discipline that uses the insights generated from data analysis to support learners as well as optimize both the learning process and environment. This paper studied students' engagement level with the Learning Management System (LMS) via a learning analytics tool, students' approaches to managing their studies, and possible learning analytics methods for analyzing student data. Moreover, an extensive systematic literature review (SLR) was employed for the selection, sorting, and exclusion of articles from diverse renowned sources. The findings show that most of the engagement in LMSs is driven by educators. Additionally, we discuss the engagement factors in LMSs, the causes of low engagement, and ways of increasing engagement via the learning analytics approach. Apart from recognizing learning analytics as a successful method and technique for analyzing LMS data, this research further highlights the possibility of merging the learning analytics technique with LMS engagement in every institution as a direction for future research.
In recent years, huge volumes of healthcare data have been generated in various forms. The advancements made in medical imaging are tremendous, owing to which biomedical image acquisition has become easier and quicker. Due to such massive generation of big data, the utilization of new methods based on Big Data Analytics (BDA), Machine Learning (ML), and Artificial Intelligence (AI) has become essential. In this context, the current research work develops a new Big Data Analytics with Cat Swarm Optimization based Deep Learning (BDA-CSODL) technique for medical image classification in an Apache Spark environment. The aim of the proposed BDA-CSODL technique is to classify medical images and diagnose disease accurately. The BDA-CSODL technique involves different stages of operation, such as preprocessing, segmentation, feature extraction, and classification. In addition, the BDA-CSODL technique follows a multi-level thresholding-based image segmentation approach for the detection of infected regions in medical images. Moreover, a deep convolutional neural network-based Inception v3 method is utilized in this study as the feature extractor, with the Stochastic Gradient Descent (SGD) model used for parameter tuning. Furthermore, a Cat Swarm Optimization with Long Short-Term Memory (CSO-LSTM) model is employed as the classifier to assign the appropriate class labels. Both the SGD and CSO design approaches help improve the overall image classification performance of the proposed BDA-CSODL technique. A wide range of simulations was conducted on benchmark medical image datasets, and the comprehensive comparative results demonstrate the supremacy of the proposed BDA-CSODL technique under different measures.
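Multi-level thresholding generalizes the classic single-threshold Otsu criterion, which picks the intensity cut maximizing the between-class variance of the two resulting pixel groups. A brute-force sketch of that single-threshold building block follows (illustrative only; the paper's multi-level variant searches several cuts jointly, and real pipelines work on image histograms rather than raw lists):

```python
def otsu_threshold(values):
    """Single-threshold Otsu: choose the cut that maximises the
    between-class variance of the two resulting intensity groups."""
    best_t, best_var = None, -1.0
    candidates = sorted(set(values))
    for t in candidates[:-1]:  # cutting above the max would empty one class
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w_lo, w_hi = len(lo) / len(values), len(hi) / len(values)
        mu_lo, mu_hi = sum(lo) / len(lo), sum(hi) / len(hi)
        between = w_lo * w_hi * (mu_lo - mu_hi) ** 2
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# toy bimodal intensities: background cluster {1..3}, lesion cluster {8..10}
t = otsu_threshold([1, 2, 2, 3, 8, 9, 9, 10])
```

For this bimodal toy data the criterion lands exactly between the two clusters, separating "background" from "infected region" pixels.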
Digital technologies are becoming pervasive and essential in all sectors of our lives. In education, the intensive usage of digital learning devices contributes to generating a large amount of trace data from digital learning activities. Intelligent exploitation of these traces represents a valuable asset for both device producers (to improve the design of the devices) and consumers (learners and teachers). In this paper, we first share our vision for better exploitation, by teachers, of traces from middle schoolers' digital activities generated by their use of tools and digital learning services during different classes. This vision is part of the AT41 project funded by the French Ministry of Education. The exploitation has to meet the requirements of the different teachers. Conducting such a project is not an easy task, because it has to consider the following issues: (1) the lack of a comprehensive and clear methodology to design and exploit these traces; (2) the heterogeneity of teacher requirements, which complicates their elicitation and analysis; and (3) the diversity of trace sources. Secondly, we propose a requirement-driven architecture for learning analytics composed of a well-identified life cycle. This architecture is augmented by learner traces and offers a repository storing both teacher requirements and traces, facilitating learning analytics in generating relevant and valuable indicators.
The information gained after data analysis is vital for implementing its outcomes to optimize processes and systems for more straightforward problem-solving. Therefore, the first step of data analytics deals with identifying data requirements, mainly how the data should be grouped or labeled. For example, data about cybersecurity in organizations can be grouped into categories such as DoS (denial of service), unauthorized access from local or remote hosts, and surveillance and other probing. Next, after identifying the groups, the researcher, or whoever is carrying out the data analytics, goes out into the field and collects the data. The data collected is then organized in an orderly fashion to enable easy analysis. We aim to study different articles and compare the performance of each algorithm in order to choose the most suitable classifier.
Risk management is relevant for every project that seeks to avoid and suppress unanticipated costs, fundamentally calling for pre-emptive action. The current work proposes a new approach for handling risks based on predictive analytics and machine learning (ML) that can work in real time to help avoid risks and increase project adaptability. The main research aim of the study is to ascertain risk presence in projects by using historical data from previous projects, focusing on important aspects such as time, task duration, resources, and project results. The t-SNE technique applies feature engineering to reduce dimensionality while preserving important structural properties. The process is evaluated using recall, F1-score, accuracy, and precision measurements. The results demonstrate that the Gradient Boosting Machine (GBM) achieves an impressive 85% accuracy, 82% precision, 85% recall, and 80% F1-score, surpassing previous models. Additionally, predictive analytics achieves a resource utilisation efficiency of 85%, compared to 70% for traditional allocation methods, and a project cost reduction of 10%, double the 5% achieved by traditional approaches. Furthermore, the study indicates that while GBM excels in overall accuracy, Logistic Regression (LR) offers more favourable precision-recall trade-offs, highlighting the importance of model selection in project risk management.
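The Gradient Boosting Machine highlighted above fits each new weak learner to the residuals of the current ensemble and adds it with a shrinkage factor. A minimal least-squares sketch with depth-1 stumps on a 1-D feature follows; it is illustrative only (the study presumably used a full library implementation such as scikit-learn's `GradientBoostingClassifier`), and all values are toy data.

```python
def fit_stump(x, r):
    """Best depth-1 regression stump (split point + left/right means)
    for residuals r over a 1-D feature x, by exhaustive split search."""
    best, best_sse = None, float("inf")
    for s in sorted(set(x))[:-1]:
        left = [ri for xi, ri in zip(x, r) if xi <= s]
        right = [ri for xi, ri in zip(x, r) if xi > s]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - ml) ** 2 for ri in left)
               + sum((ri - mr) ** 2 for ri in right))
        if sse < best_sse:
            best_sse, best = sse, (s, ml, mr)
    return best

def gbm_fit(x, y, n_rounds=50, lr=0.1):
    """Least-squares gradient boosting: each round fits a stump to the
    current residuals and adds it with shrinkage factor lr."""
    f0 = sum(y) / len(y)
    pred = [f0] * len(y)
    stumps = []
    for _ in range(n_rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]
        s, ml, mr = fit_stump(x, r)
        stumps.append((s, ml, mr))
        pred = [pi + lr * (ml if xi <= s else mr) for xi, pi in zip(x, pred)]
    return f0, stumps, pred

f0, stumps, pred = gbm_fit([1, 2, 3, 4], [1.0, 1.0, 5.0, 5.0])
mse = sum((yi - pi) ** 2 for yi, pi in zip([1.0, 1.0, 5.0, 5.0], pred)) / 4
```

With shrinkage 0.1 the residual shrinks by a factor of 0.9 per round, so fifty rounds drive the training error close to zero on this separable toy set.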
Diabetic retinopathy (DR) remains a leading cause of vision impairment and blindness among individuals with diabetes, necessitating innovative approaches to screening and management. This editorial explores the transformative potential of artificial intelligence (AI) and machine learning (ML) in revolutionizing DR care. AI and ML technologies have demonstrated remarkable advancements in enhancing the accuracy, efficiency, and accessibility of DR screening, helping to overcome barriers to early detection. These technologies leverage vast datasets to identify patterns and predict disease progression with unprecedented precision, enabling clinicians to make more informed decisions. Furthermore, AI-driven solutions hold promise in personalizing management strategies for DR, incorporating predictive analytics to tailor interventions and optimize treatment pathways. By automating routine tasks, AI can reduce the burden on healthcare providers, allowing for a more focused allocation of resources towards complex patient care. This review aims to evaluate the current advancements and applications of AI and ML in DR screening, and to discuss the potential of these technologies in developing personalized management strategies, ultimately aiming to improve patient outcomes and reduce the global burden of DR. The integration of AI and ML in DR care represents a paradigm shift, offering a glimpse into the future of ophthalmic healthcare.
基金supported by the University of Tabuk, Saudi Arabia.
文摘Multimodal spatiotemporal data from smart city consumer electronics present critical challenges, including cross-modal temporal misalignment, unreliable data quality, limited joint modeling of spatial and temporal dependencies, and weak resilience to adversarial updates. To address these limitations, EdgeST-Fusion is introduced as a cross-modal federated graph transformer framework for context-aware smart city analytics. The architecture integrates cross-modal embedding networks for modality alignment, graph transformer encoders for spatial dependency modeling, temporal self-attention for dynamic pattern learning, and adaptive anomaly detection to ensure data quality and security during aggregation. A privacy-preserving federated learning protocol with differential privacy guarantees enables collaborative model training without centralizing sensitive data. The framework employs data-quality-aware weighted aggregation to enhance robustness against noisy and malicious client updates. Experimental evaluation on the GeoLife, PeMS-Bay, and SmartHome+ datasets demonstrates that EdgeST-Fusion achieves a 21.8% improvement in prediction accuracy, a 35.7% reduction in communication overhead, and a 29.4% enhancement in security resilience compared to recent baselines. Real-world deployment across three smart city testbeds validates practical viability with 90.0% average accuracy and sub-250 ms inference latency. The proposed framework remains feasible for deployment on heterogeneous and resource-constrained consumer electronics devices while maintaining strong privacy guarantees and scalability for large-scale urban environments.
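The data-quality-aware weighted aggregation described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the per-client quality scores, the clipping rule, and the Gaussian noise scale are all assumptions standing in for EdgeST-Fusion's actual protocol.

```python
import random

def aggregate(updates, quality, sigma=0.0, clip=1.0, rng=None):
    """Data-quality-aware weighted aggregation with optional Gaussian
    noise for differential privacy. `updates` is a list of client
    parameter vectors (lists of floats); `quality` holds one score per
    client (higher = more trusted)."""
    rng = rng or random.Random(0)
    total = sum(quality)
    weights = [q / total for q in quality]
    dim = len(updates[0])
    agg = [0.0] * dim
    for w, u in zip(weights, updates):
        # clip each client's update norm to bound its influence
        norm = max(1.0, (sum(x * x for x in u) ** 0.5) / clip)
        for i in range(dim):
            agg[i] += w * u[i] / norm
    # add calibrated Gaussian noise (sigma=0.0 disables it)
    return [x + rng.gauss(0.0, sigma) for x in agg]

updates = [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]  # third client is an outlier
quality = [0.5, 0.4, 0.1]                           # low score for suspect client
print(aggregate(updates, quality))
```

The combination of low quality weight and norm clipping keeps the outlier client's contribution small, which is the robustness effect the abstract claims against noisy and malicious updates.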
文摘Recent digital advancements have intensified the need for adaptive, data-driven and socially-centered learning ecosystems. This paper presents the formulation of a cross-platform, innovative, gamified and personalized Learning Ecosystem, which integrates 3D/VR environments, machine learning algorithms, and business intelligence frameworks to enhance learner-centered education and inference-driven decision-making. This Learning System makes use of immersive, analytically assessed virtual learning spaces, thereby facilitating real-time monitoring not just of learning performance but also of overall engagement and behavioral patterns, via a comprehensive set of sustainability-oriented, ESG-aligned Key Performance Indicators (KPIs). Machine learning models support predictive analysis, personalized feedback, and hybrid recommendation mechanisms, whilst dedicated dashboards translate complex educational data into actionable insights for all Use Cases of the System (Educational Institutions, Educators and Learners). Additionally, the presented Learning System introduces a structured Mentoring and Consulting Subsystem, thereby reinforcing human-centered guidance alongside automated intelligence. The Platform's modular architecture and simulation-centered evaluation approach actively support personalized and continuously optimized learning pathways. It thus exemplifies a mature, adaptive Learning Ecosystem, combining immersive technologies, analytics, and pedagogical support, and contributing to contemporary digital learning innovation and sociotechnical transformation in education.
文摘This paper conducts a systematic literature review (SLR) of artificial intelligence (AI) approaches to predicting and diagnosing diabetes mellitus. After reviewing the literature published from 2015–2025, it identifies the most effective AI techniques, the most used datasets, the most widely used data preprocessing techniques, and the most common issues. The analysis found that convolutional neural networks (CNNs) and long short-term memory (LSTM) networks are deep learning models that have shown high accuracy in diabetes prediction. Recursive feature elimination (RFE) and SMOTE are feature selection and resampling techniques that have significantly improved model accuracy, training time, and interpretability. Amidst this technological advancement, some issues persist: data imbalance, the inapplicability of techniques, computational limitations, and a lack of real-time application in healthcare environments. The review also identifies the need for robust, interpretable, and scalable AI systems capable of handling large volumes of data, including real-world data, in the healthcare industry. Furthermore, it finds that these systems should be integrated with wearable health monitoring and with privacy-preserving models to ensure continuous, secure, and proactive diabetes management.
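The review highlights recursive feature elimination (RFE) as a key preprocessing step. A minimal sketch of the idea follows; note that real RFE ranks features by a fitted model's weights, whereas this toy version uses absolute Pearson correlation with the label as a stand-in scorer, and all column names are invented.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def rfe(features, target, keep):
    """Recursive feature elimination: repeatedly drop the feature whose
    |correlation| with the target is weakest. `features` maps name -> column."""
    remaining = dict(features)
    while len(remaining) > keep:
        worst = min(remaining, key=lambda n: abs(pearson(remaining[n], target)))
        del remaining[worst]
    return list(remaining)

cols = {
    "glucose": [90, 150, 160, 85, 170],
    "age":     [30, 55, 60, 28, 62],
    "noise":   [1, 9, 2, 8, 5],
}
label = [0, 1, 1, 0, 1]
print(rfe(cols, label, keep=2))  # the uninformative "noise" column is dropped
```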
文摘The increasing number of interconnected devices and the incorporation of smart technology into contemporary healthcare systems have significantly raised the attack surface of cyber threats. The early detection of threats is both necessary and complex, yet these interconnected healthcare settings generate enormous amounts of heterogeneous data. Traditional Intrusion Detection Systems (IDS), which are generally centralized and machine learning-based, often fail to address the rapidly changing nature of cyberattacks and are challenged by ethical concerns related to patient data privacy. Moreover, traditional AI-driven IDS usually face challenges in handling large-scale, heterogeneous healthcare data while ensuring data privacy and operational efficiency. To address these issues, emerging technologies such as Big Data Analytics (BDA) and Federated Learning (FL) provide a hybrid framework for scalable, adaptive intrusion detection in IoT-driven healthcare systems. Big data techniques enable processing large-scale, high-dimensional healthcare data, and FL can be used to train a model in a decentralized manner without transferring raw data, thereby maintaining privacy between institutions. This research proposes a privacy-preserving Federated Learning-based model that efficiently detects cyber threats in connected healthcare systems while ensuring distributed big data processing, privacy, and compliance with ethical regulations. To strengthen the reliability of the reported findings, the results were validated using cross-dataset testing and 95% confidence intervals derived from bootstrap analysis, confirming consistent performance across heterogeneous healthcare data distributions. This solution takes a significant step toward securing next-generation healthcare infrastructure by combining scalability, privacy, adaptability, and early-detection capabilities. The proposed global model achieves a test accuracy of 99.93% ± 0.03 (95% CI) and a miss rate of only 0.07% ± 0.02, representing state-of-the-art performance in privacy-preserving intrusion detection. The proposed FL-driven IDS framework offers an efficient, privacy-preserving, and scalable solution for securing next-generation healthcare infrastructures by combining adaptability, early detection, and ethical data management.
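The bootstrap-derived 95% confidence intervals mentioned above can be reproduced with a short percentile-bootstrap routine. This is a generic sketch of the statistical procedure, not the paper's code; the sample data and resampling count are illustrative.

```python
import random
import statistics

def bootstrap_ci(outcomes, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for accuracy. `outcomes` is a list of
    1 (correct) / 0 (incorrect) per-sample prediction results: resample
    with replacement, recompute the mean, and take the empirical
    (alpha/2, 1 - alpha/2) percentiles."""
    rng = random.Random(seed)
    n = len(outcomes)
    stats = sorted(
        statistics.fmean(rng.choices(outcomes, k=n)) for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.fmean(outcomes), (lo, hi)

outcomes = [1] * 990 + [0] * 10  # 99% observed accuracy on 1000 test samples
acc, (lo, hi) = bootstrap_ci(outcomes)
print(f"accuracy={acc:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

A very tight interval, like the ±0.03 reported above, indicates the accuracy estimate is stable under resampling of the test set.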
文摘In the competitive retail industry of the digital era, data-driven insights into gender-specific customer behavior are essential. They support the optimization of store performance, layout design, product placement, and targeted marketing. However, existing computer vision solutions often rely on facial recognition to gather such insights, raising significant privacy and ethical concerns. To address these issues, this paper presents a privacy-preserving customer analytics system built on two key strategies. First, we deploy a deep learning framework using YOLOv9s, trained on the RCA-TVGender dataset. Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate gender classification. Second, we apply AES-128 encryption to customer position data, ensuring secure access and regulatory compliance. Our system achieved an overall performance of 81.5% mAP@50, 77.7% precision, and 75.7% recall. Moreover, a 90-min observational study confirmed the system's ability to generate privacy-protected heatmaps revealing distinct behavioral patterns between male and female customers. For instance, women spent more time in certain areas and showed interest in different products. These results confirm the system's effectiveness in enabling personalized layout and marketing strategies without compromising privacy.
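At their core, the privacy-protected heatmaps described above reduce to binning anonymized (x, y) positions into grid cells and counting dwell events. A minimal sketch follows; the AES-128 encryption of the stored coordinates would sit on top of this step and requires a cryptography library, so it is omitted here, and the cell size and sample points are invented.

```python
from collections import Counter

def heatmap(positions, cell=1.0):
    """Bin anonymized (x, y) customer positions into square grid cells of
    side `cell` to form a dwell-count heatmap; no identities or faces
    are involved, only coordinates."""
    grid = Counter()
    for x, y in positions:
        grid[(int(x // cell), int(y // cell))] += 1
    return grid

pts = [(0.2, 0.3), (0.8, 0.1), (1.5, 0.4), (1.7, 0.9), (1.1, 0.2)]
hm = heatmap(pts, cell=1.0)
print(hm.most_common(1))  # the busiest cell and its visit count
```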
基金the King Salman Center for Disability Research for funding this work through Research Group No. KSRG-2024-050.
文摘Artificial Intelligence (AI) is changing healthcare by helping with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset with images, audio, and physiological information. The datasets were carefully preprocessed for missing values, redundant features, and dataset imbalance to ensure fair learning. The results outperformed the state of the art with a Regularized Neural Network, achieving 97.6% accuracy on behavioral data and 98.2% on the multimodal data. Other models also performed well, with accuracies consistently above 96%. We also used SHAP and LIME on the behavioral dataset for model explainability.
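One common way to combine Random Forest, XGBoost, and Neural Network outputs into a single, more powerful framework is weighted soft voting over predicted class probabilities. The sketch below illustrates that idea only; the probabilities and weights are made up, and the paper's actual fusion scheme may differ.

```python
def soft_vote(prob_sets, weights=None):
    """Combine per-model class-probability vectors by weighted averaging
    (soft voting), then pick the arg-max class. Returns the winning
    class index and the fused probability vector."""
    n_models = len(prob_sets)
    weights = weights or [1.0 / n_models] * n_models
    n_classes = len(prob_sets[0])
    fused = [
        sum(w * p[c] for w, p in zip(weights, prob_sets))
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=fused.__getitem__), fused

# hypothetical [P(ASD), P(not ASD)] outputs from three models
rf, xgb, nn = [0.80, 0.20], [0.70, 0.30], [0.95, 0.05]
label, fused = soft_vote([rf, xgb, nn], weights=[0.3, 0.3, 0.4])
print(label, fused)
```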
基金the research project LaTe4PoliticES (PID2022-138099OB-I00) funded by MCIN/AEI/10.13039/501100011033 and the European Fund for Regional Development (ERDF) - a way to make Europe. Tomás Bernal-Beltrán is supported by the University of Murcia through the predoctoral programme.
文摘The malicious dissemination of hate speech via compromised accounts, automated bot networks and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been trained in Spanish and (2) In-Context Learning techniques (Zero- and Few-Shot Learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal) and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall and F1-score, together with stability-oriented metrics for the transition from zero-shot to few-shot prompting (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain). The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%–66% depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range from roughly 0% to 39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of roughly 20%–51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation, and we discuss the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
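The macro F1-score reported throughout the evaluation above averages per-class F1 with equal class weight, which matters for imbalanced hate-speech labels where a frequent "none" class would otherwise dominate. A small self-contained implementation (the label names are invented examples, not the corpus's actual tag set):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 computed one-vs-rest, then
    averaged with equal weight per class."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["hate", "hate", "none", "none", "offensive"]
y_pred = ["hate", "none", "none", "none", "offensive"]
print(round(macro_f1(y_true, y_pred), 3))
```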
基金Sponsored by the National Natural Science Foundation of China (Grant No. 12174101) and the Fundamental Research Funds for the Central Universities (Grant No. 2022MS051).
文摘The analytic continuation serves as a crucial bridge between quantum Monte Carlo calculations in the imaginary-time formalism, specifically the Green's functions, and physical measurements (the spectral functions) in real time. Various approaches have been developed to enhance the accuracy of analytic continuation, including the Padé approximation, the maximum entropy method, and stochastic analytic continuation. In this study, we employ different deep learning techniques to investigate the analytic continuation for the quantum impurity model. A significant challenge in this context is that the sharp Abrikosov-Suhl resonance peak may be either underestimated or overestimated. We fit both the imaginary-time Green's function and the spectral function using Chebyshev polynomials in logarithmic coordinates. We utilize Fully-Connected Networks (FCNs), Convolutional Neural Networks (CNNs), and Residual Networks (ResNet) to address this issue. Our findings indicate that introducing noise during the training phase significantly improves the accuracy of the learning process. The typical absolute error achieved is less than 10⁻⁴. These investigations pave the way for machine learning to optimize the analytic continuation problem in many-body systems, thereby reducing the need for prior expertise in physics.
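Fitting a smooth curve with Chebyshev polynomials, as the authors do for the Green's function and spectral function (in their case after mapping the logarithmic coordinate onto [-1, 1]), can be sketched with a node-based coefficient transform plus Clenshaw evaluation. The target function below is a stand-in for a smooth spectral curve, not actual quantum Monte Carlo data.

```python
import math

def cheb_coeffs(f, n):
    """Chebyshev interpolation coefficients of f on [-1, 1], computed at
    the n Chebyshev-Gauss nodes (the c_0 term carries a factor 1/2 in
    the evaluation below)."""
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    return [
        (2.0 / n) * sum(f(x) * math.cos(k * math.acos(x)) for x in nodes)
        for k in range(n)
    ]

def cheb_eval(coeffs, x):
    """Clenshaw recurrence for c_0/2 + sum_{k>=1} c_k T_k(x)."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2 * x * b1 - b2 + c, b1
    return x * b1 - b2 + 0.5 * coeffs[0]

f = math.exp                      # stand-in for a smooth spectral curve
c = cheb_coeffs(f, 12)
print(abs(cheb_eval(c, 0.3) - f(0.3)))  # interpolation error, very small
```

Because Chebyshev coefficients of smooth functions decay rapidly, a dozen coefficients already give a compact, low-dimensional representation, which is what makes them a convenient input/output parameterization for the neural networks discussed above.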
文摘This paper presents a state-of-the-art machine learning-based approach for automation of a varied class of Internet of Things (IoT) analytics problems targeted at 1-dimensional (1-D) sensor data. As feature recommendation is a major bottleneck for general IoT-based applications, this paper shows how this step can be successfully automated based on a Wide Learning architecture without sacrificing decision-making accuracy, thereby reducing the development time and the cost of hiring expensive resources for specific problems. Interpretation of meaningful features is another contribution of this research. Several data sets from different real-world applications are considered to realize the proof-of-concept. Results show that the interpretable feature recommendation techniques are quite effective for the problems at hand in terms of performance and drastic reduction in development time.
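For a 1-D sensor window, the kind of interpretable features such a recommender might surface includes simple statistics like the ones below. This fixed, hand-picked set is only illustrative; the point of the paper's Wide Learning architecture is precisely that features are selected automatically rather than by hand.

```python
import statistics

def basic_features(signal):
    """A few interpretable features for a 1-D sensor window:
    location, spread, range, average slope magnitude, and the number
    of times the signal crosses its own mean."""
    mean = statistics.fmean(signal)
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a - mean) * (b - mean) < 0
    )
    return {
        "mean": mean,
        "std": statistics.pstdev(signal),
        "peak_to_peak": max(signal) - min(signal),
        "mean_abs_slope": statistics.fmean(map(abs, diffs)),
        "mean_crossings": crossings,
    }

window = [0, 1, 2, 1, 0, -1, -2, -1, 0, 1]  # one toy oscillation
print(basic_features(window))
```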
基金supported by the fund received from Al Baha University, 8/1440.
文摘This paper proposes a novel framework to detect cyber-attacks using Machine Learning coupled with User Behavior Analytics. The framework models user behavior as sequences of events representing the user's activities on the network. The represented sequences are then fed into a recurrent neural network model to extract features that capture the distinctive behavior of individual users. Thus, the model can recognize frequencies of regular behavior to profile the user's manner in the network. The recurrent neural network then detects abnormal behavior by classifying unknown behavior as either regular or irregular. The proposed framework is important given the increase in cyber-attacks, especially attacks triggered from sources inside the network. Detecting insider attacks is typically much more challenging because security protocols can barely recognize attacks from trusted sources in the network, including users. Therefore, user behavior can be extracted and ultimately learned to recognize insightful patterns, in which the regular patterns reflect a normal network workflow. In contrast, irregular patterns can trigger an alert for a potential cyber-attack. The framework is fully described and the evaluation metrics are introduced. The experimental results show that the approach performed better than other approaches, with an AUC of 0.97 achieved using RNN-LSTM 1. The paper concludes by providing potential directions for future improvements.
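The reported AUC of 0.97 can be read through the rank interpretation of ROC AUC: the probability that a randomly chosen positive (attack) session receives a higher anomaly score than a randomly chosen negative one. A direct pairwise implementation follows; the scores and labels are invented for illustration, not the paper's data.

```python
def auc(scores, labels):
    """ROC AUC as the probability that a randomly chosen positive
    outscores a randomly chosen negative, computed by pairwise
    comparison; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# hypothetical anomaly scores from a sequence model, 1 = attack session
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))
```

The O(P*N) pairwise form is fine for small evaluations; production metrics libraries use a sort-based O(n log n) equivalent.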
文摘Process analytics is a popular research domain that has advanced in recent years. Process analytics encompasses the identification, monitoring, and improvement of processes through knowledge extraction from historical data. The evolution of Artificial Intelligence (AI)-enabled Electronic Health Records (EHRs) has revolutionized medical practice. Type 2 Diabetes Mellitus (T2DM) is a syndrome characterized by the lack of insulin secretion. If not diagnosed and managed at early stages, it may produce severe outcomes and, at times, death. Chronic Kidney Disease (CKD) and Coronary Heart Disease (CHD) are the most common, long-term and life-threatening diseases caused by T2DM. Therefore, it becomes inevitable to predict the risks of CKD and CHD in T2DM patients. The current research article presents an automated Deep Learning (DL)-based Deep Neural Network with Adagrad Optimization Algorithm, i.e., the DNN-AGOA model, to predict CKD and CHD risks in T2DM patients. The paper proposes a risk prediction model for T2DM patients who may develop CKD or CHD, helping to alert both T2DM patients and clinicians in advance. At first, the proposed DNN-AGOA model performs data preprocessing to improve the quality of data and make it compatible for further processing. Besides, a Deep Neural Network (DNN) is employed for feature extraction, after which a sigmoid function is used for classification. Further, the Adagrad optimizer is applied to improve the performance of the DNN model. For experimental validation, benchmark medical datasets were used and the results were validated along several dimensions. The proposed model achieved a maximum precision of 93.99%, recall of 94.63%, specificity of 73.34%, accuracy of 92.58%, and F-score of 94.22%. The results attained through experimentation established that the proposed DNN-AGOA model has good prediction capability compared to other methods.
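The Adagrad update at the heart of DNN-AGOA keeps a running sum of squared gradients per parameter and divides each step by its square root. A minimal sketch on a toy quadratic follows; the learning rate and objective are illustrative, not the paper's settings.

```python
def adagrad_step(params, grads, accum, lr=0.1, eps=1e-8):
    """One Adagrad update: accumulate squared gradients per parameter
    and scale each step by 1/sqrt(accumulator), so parameters with a
    history of large gradients take smaller steps."""
    for i, g in enumerate(grads):
        accum[i] += g * g
        params[i] -= lr * g / (accum[i] ** 0.5 + eps)
    return params, accum

# minimize f(w) = w0^2 + 10*w1^2 from a fixed start (toy objective)
w, acc = [5.0, 5.0], [0.0, 0.0]
for _ in range(200):
    grads = [2 * w[0], 20 * w[1]]
    w, acc = adagrad_step(w, grads, acc)
print(w)  # both coordinates shrink toward the minimum at 0
```

Note how the badly scaled second coordinate, whose gradient is ten times larger, ends up taking essentially the same steps as the first: Adagrad's per-parameter normalization absorbs the scale difference, which is its main appeal over plain SGD.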
文摘Traditional Numerical Reservoir Simulation has been contributing to the oil and gas industry for decades.The current state of this technology is the result of decades of research and development by a large number of engineers and scientists.Starting in the late 1960s and early 1970s,advances in computer hardware along with development and adaptation of clever algorithms resulted in a paradigm shift in reservoir studies moving them from simplified analogs and analytical solution methods to more mathematically robust computational and numerical solution models.
文摘Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability is not quantified, and adaptability is not specifically considered in this domain. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations. It also provides explainability in the form of rules describing the reasoning behind a particular output. The paper also discusses the system evaluation on a benchmark dataset, showing promising results. The measure of explainability, the fuzzy index, shows that the model is highly interpretable. The system achieves more than 82% recall in both the classification and the context adaptation stages.
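The rule-based explainability described above typically rests on fuzzy membership functions combined through IF-THEN rules. A toy sketch with a triangular membership function follows; the variables, thresholds, and the single rule are invented for illustration and are not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside (a, c),
    rising to full membership 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def at_risk(score, attendance):
    """One human-readable rule: IF score is low AND attendance is low
    THEN at-risk, using min() as the fuzzy AND. The output is a degree
    of membership in the 'at-risk' class, not a hard label."""
    low_score = tri(score, -1, 0, 50)        # full membership at 0 marks
    low_attend = tri(attendance, -1, 0, 60)  # full membership at 0%
    return min(low_score, low_attend)

print(at_risk(score=30, attendance=40))
```

Because the output is traceable to named linguistic rules ("score is low AND attendance is low"), a stakeholder can read off why a particular learner was flagged, which is the interpretability property the fuzzy index above is meant to quantify.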
文摘Learning analytics is an emerging technique for analysing student participation and engagement. The recent COVID-19 pandemic has significantly increased the role of learning management systems (LMSs). LMSs previously only complemented face-to-face teaching, something which was not possible between 2019 and 2020. To date, the existing body of literature on LMSs has not analysed learning in the context of the pandemic, where an LMS serves as the only interface between students and instructors. Consequently, productive results will remain elusive if the key factors that contribute towards engaging students in learning are not first identified. Therefore, this study aimed to perform an extensive literature review with which to design and develop a student engagement model for holistic involvement in an LMS. The required data was collected from an LMS that is currently utilised by a local Malaysian university. The model was validated by a panel of experts as well as through discussions with students. It is our hope that the results of this study will help other institutions of higher learning determine the factors behind low engagement in their respective LMSs.
基金supported by the University of Malaya, Bantuan Khas Penyelidikan, under research grant BKS083-2017, and by the Fundamental Research Grant Scheme (FRGS) under Grant number FP112-2018A from the Ministry of Education Malaysia, Higher Education.
文摘Learning analytics is a rapidly evolving research discipline that uses the insights generated from data analysis to support learners as well as optimize both the learning process and environment. This paper studied students' engagement level with the Learning Management System (LMS) via a learning analytics tool, students' approaches to managing their studies, and possible learning analytics methods to analyze student data. Moreover, an extensive systematic literature review (SLR) was employed for the selection, sorting and exclusion of articles from diverse renowned sources. The findings show that most of the engagement in LMSs is driven by educators. Additionally, we have discussed the factors in LMSs, causes of low engagement, and ways of increasing engagement via the Learning Analytics approach. Nevertheless, apart from recognizing the Learning Analytics approach as a successful method and technique for analyzing LMS data, this research further highlighted the possibility of merging the learning analytics technique with LMS engagement in every institution as a direction for future research.
基金The author extends his appreciation to the Deanship of Scientific Research at Majmaah University for funding this study under Project Number (R-2022-61).
文摘In recent years, huge volumes of healthcare data have been generated in various forms. The advancements made in medical imaging are tremendous, owing to which biomedical image acquisition has become easier and quicker. Due to such massive generation of big data, the utilization of new methods based on Big Data Analytics (BDA), Machine Learning (ML), and Artificial Intelligence (AI) has become essential. In this respect, the current research work develops a new Big Data Analytics with Cat Swarm Optimization-based Deep Learning (BDA-CSODL) technique for medical image classification in an Apache Spark environment. The aim of the proposed BDA-CSODL technique is to classify medical images and diagnose disease accurately. The BDA-CSODL technique involves different stages of operation such as preprocessing, segmentation, feature extraction, and classification. In addition, the BDA-CSODL technique follows a multi-level thresholding-based image segmentation approach for the detection of infected regions in medical images. Moreover, a deep convolutional neural network-based Inception v3 method is utilized in this study as a feature extractor. A Stochastic Gradient Descent (SGD) model is used for the parameter tuning process. Furthermore, a Cat Swarm Optimization with Long Short-Term Memory (CSO-LSTM) model is employed as a classification model to determine the appropriate class labels. Both the SGD and CSO design approaches help to improve the overall image classification performance of the proposed BDA-CSODL technique. A wide range of simulations was conducted on benchmark medical image datasets, and the comprehensive comparative results demonstrate the supremacy of the proposed BDA-CSODL technique under different measures.
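Multi-level thresholding, the segmentation approach named above, assigns each pixel to a region according to where its intensity falls among a set of sorted thresholds. A minimal sketch on a toy 1-D "image" follows; the threshold values are illustrative, whereas the paper selects them as part of its optimization pipeline.

```python
def multilevel_threshold(pixels, thresholds):
    """Assign each grayscale pixel a region label: the number of sorted
    thresholds its intensity exceeds (0 = darkest region)."""
    ts = sorted(thresholds)
    return [sum(p > t for t in ts) for p in pixels]

# toy 1-D "image": background, tissue, bright lesion
img = [10, 12, 90, 95, 200, 210, 15, 100]
labels = multilevel_threshold(img, thresholds=[50, 150])
print(labels)  # → [0, 0, 1, 1, 2, 2, 0, 1]
```

With two thresholds the image splits into three regions; detecting "infected regions" then amounts to locating pixels carrying the label of the intensity band of interest.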
文摘Digital technologies are becoming present and essential in all sectors of our lives. In education, the intensive usage of digital learning devices contributes to generating a large amount of trace data from digital learning activities. Intelligent exploitation of these traces represents a valuable asset for both device producers (to improve the design of the devices) and consumers (learners and teachers). In this paper, we first share our vision for better exploitation, by teachers, of traces from middle schoolers' digital activities generated by their use of tools and digital learning services during different classes. This vision is part of the AT41 project funded by the French Ministry of Education. This exploitation has to meet the requirements of the different teachers. Conducting such a project is not an easy task, because it has to consider the following issues: ① the lack of a comprehensive and clear methodology to design and exploit these traces; ② the heterogeneity of teacher requirements, which complicates their elicitation and analysis; ③ the diversity of trace sources. Secondly, we propose a requirement-driven architecture for Learning Analytics composed of a well-identified life cycle. This architecture is augmented by learner traces. It offers a repository storing both teacher requirements and traces to facilitate Learning Analytics in generating relevant and valuable indicators.
文摘The information gained after data analysis is vital for implementing its outcomes to optimize processes and systems for more straightforward problem-solving. Therefore, the first step of data analytics deals with identifying data requirements, mainly how the data should be grouped or labeled. For example, for data about cybersecurity in organizations, grouping can be done into categories such as DOS (denial of service), unauthorized access from local or remote hosts, and surveillance and other probing. Next, after identifying the groups, a researcher or whoever is carrying out the data analytics goes out into the field and collects the data. The data collected is then organized in an orderly fashion to enable easy analysis. In this work, we study different articles and compare the performance of each algorithm to choose the most suitable classifier.
文摘Risk management is relevant for every project that seeks to avoid and suppress unanticipated costs, calling for pre-emptive action. The current work proposes a new approach for handling risks based on predictive analytics and machine learning (ML) that can work in real-time to help avoid risks and increase project adaptability. The main research aim of the study is to ascertain risk presence in projects by using historical data from previous projects, focusing on important aspects such as time, task time, resources and project results. The t-SNE technique is applied during feature engineering to reduce dimensionality while preserving important structural properties. This process is analysed using measures including recall, F1-score, accuracy and precision. The results demonstrate that the Gradient Boosting Machine (GBM) achieves an impressive 85% accuracy, 82% precision, 85% recall, and 80% F1-score, surpassing previous models. Additionally, predictive analytics achieves a resource utilisation efficiency of 85%, compared to 70% for traditional allocation methods, and a project cost reduction of 10%, double the 5% achieved by traditional approaches. Furthermore, the study indicates that while GBM excels in overall accuracy, Logistic Regression (LR) offers more favourable precision-recall trade-offs, highlighting the importance of model selection in project risk management.
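The precision, recall and F1 figures compared above all derive from confusion-matrix counts. A small helper makes the trade-off explicit; the counts below are hypothetical, chosen only to land near the reported 82% precision.

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from confusion-matrix counts:
    precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = harmonic mean."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# hypothetical counts for a high-risk-project classifier
p, r, f1 = prf(tp=41, fp=9, fn=7)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Shifting the decision threshold trades FP against FN, which is exactly the precision-recall trade-off cited above when comparing GBM with Logistic Regression.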
文摘Diabetic retinopathy (DR) remains a leading cause of vision impairment and blindness among individuals with diabetes, necessitating innovative approaches to screening and management. This editorial explores the transformative potential of artificial intelligence (AI) and machine learning (ML) in revolutionizing DR care. AI and ML technologies have demonstrated remarkable advancements in enhancing the accuracy, efficiency, and accessibility of DR screening, helping to overcome barriers to early detection. These technologies leverage vast datasets to identify patterns and predict disease progression with unprecedented precision, enabling clinicians to make more informed decisions. Furthermore, AI-driven solutions hold promise in personalizing management strategies for DR, incorporating predictive analytics to tailor interventions and optimize treatment pathways. By automating routine tasks, AI can reduce the burden on healthcare providers, allowing for a more focused allocation of resources towards complex patient care. This review aims to evaluate the current advancements and applications of AI and ML in DR screening, and to discuss the potential of these technologies in developing personalized management strategies, ultimately aiming to improve patient outcomes and reduce the global burden of DR. The integration of AI and ML in DR care represents a paradigm shift, offering a glimpse into the future of ophthalmic healthcare.