The planning of teaching for a course in an undergraduate program usually begins with the definition of its contents, which are derived from the syllabus of a political-pedagogical project. The contents listed are organized in a sequence considered logical. A set of actions, such as lectures and laboratory sessions, is planned, through which the content will be developed. The student's previous training is considered, along with concurrent and subsequent courses, the context of the course within the program, and the specific and general objectives of the program. A set of assessments is also defined as part of this planning, together with the associated methodologies, techniques, and teaching objectives. In this context, this paper focuses on the sequencing of content, methodologies, and teaching techniques in a course. For this purpose, Bloom's Taxonomy of Educational Objectives is applied, which provides a hierarchical structure for the cognitive process. The importance of this hierarchy of knowledge lies in giving the teacher greater awareness of the paths to be adopted in the teaching process.
The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies by collecting data from these "things" and using intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses independently on either data science or IIoT, rather than exploring their integration. Therefore, to address this gap, this article provides a comprehensive survey on the advances and integration of data science with the Intelligent IoT (IIoT) system by classifying the existing IoT-based data science techniques and presenting a summary of various characteristics. The paper analyzes the data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are visualized in the context of data science for IoT. In addition, this study reveals the current opportunities to enhance data science and IoT market development. The current gaps and challenges faced in the integration of data science and IoT are comprehensively presented, followed by the future outlook and possible solutions.
Time series are an important object of study in the sciences, engineering, and business, especially in cases where one expects to understand, predict, and optimize behaviors. In this context, we intend to show the feasibility of using artificial neural networks in the study of several time series in an engineering course, especially series that exhibit no overt behavior or cannot be modeled mathematically in a simple way, and that have direct application in the education of future engineers.
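As an illustration of the idea (not code from the paper), even a single linear neuron trained by stochastic gradient descent can learn a noisy periodic series from its recent past. The series, window size, and learning rate below are arbitrary choices for the sketch:

```python
import math
import random

def make_series(n=200):
    # Noisy sine wave: a simple stand-in for an engineering time series.
    random.seed(0)
    return [math.sin(0.2 * t) + random.gauss(0, 0.05) for t in range(n)]

def train_ar_neuron(series, window=4, lr=0.05, epochs=200):
    # Single linear neuron predicting x[t] from the previous `window` values,
    # trained by stochastic gradient descent on squared error.
    w = [0.0] * window
    b = 0.0
    for _ in range(epochs):
        for t in range(window, len(series)):
            x = series[t - window:t]
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - series[t]
            for i in range(window):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

series = make_series()
w, b = train_ar_neuron(series)
# Mean squared error over the series after training; it approaches the
# variance of the injected noise, since a sine is predictable from its past.
mse = sum((predict(w, b, series[t - 4:t]) - series[t]) ** 2
          for t in range(4, len(series))) / (len(series) - 4)
print(round(mse, 4))
```

A multi-layer network would be needed for series with stronger nonlinearity, but the training loop has the same shape.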
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve feature selection congruency. The more congruent pixels are then sorted in descending order of selection, which identifies better regions than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. Thus, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
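The paper's Congruent Feature Selection Method is not reproduced here; as a rough sketch of the correlation step it describes, features can be ranked by the absolute Pearson correlation between each feature column and the diagnostic label. The toy samples and labels below are invented for illustration:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(samples, labels):
    # samples: list of feature vectors; labels: target values.
    # Score each feature by |correlation| with the label and sort descending,
    # mirroring the idea of keeping only the most relevant image features.
    n_feat = len(samples[0])
    scores = []
    for j in range(n_feat):
        col = [s[j] for s in samples]
        scores.append((j, abs(pearson(col, labels))))
    return sorted(scores, key=lambda p: p[1], reverse=True)

# Toy data: feature 0 tracks the label, feature 1 is anti-correlated,
# feature 2 is essentially unrelated.
samples = [[1, 5, 7], [2, 4, 2], [3, 3, 9], [4, 2, 1], [5, 1, 5]]
labels = [10, 20, 30, 40, 50]
ranking = rank_features(samples, labels)
print(ranking)
```

Keeping only the top-ranked features is what reduces the computation time the abstract mentions.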
The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. A comparative analysis of DMST-GNODE and baseline models indicates that the DMST-GNODE model demonstrates superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values, alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend across 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, consistently maintaining superiority across all time intervals. These results indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
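The RMSE and MAE figures quoted above follow the standard definitions, which can be sketched as below; the traffic values are hypothetical, not taken from the BKK, PeMS08, or Los_Loop datasets:

```python
import math

def rmse(actual, predicted):
    # Root Mean Square Error: penalises large deviations quadratically.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    # Mean Absolute Error: average magnitude of the prediction error.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical traffic-flow readings and model predictions for one sensor.
actual = [120.0, 135.0, 150.0, 140.0]
predicted = [118.0, 138.0, 149.0, 144.0]
print(round(rmse(actual, predicted), 4), round(mae(actual, predicted), 4))
```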
Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing capacity. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, in Requirement Engineering (RE) activities to solve the problems that occur in RE activities. The authors present a systematic mapping of past work to investigate those studies that focused on the application of supervised learning techniques in RE activities between 2002 and 2023. The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. Forty-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 algorithms. Among those algorithms, researchers mostly used the following five ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-nearest neighbour Classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. Our selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
In the era of Industry 4.0, condition monitoring has emerged as an effective solution for process industries to optimize their operational efficiency. Condition monitoring helps minimize unplanned downtime, extend equipment lifespan, reduce maintenance costs, and improve production quality and safety. This research focuses on utilizing Bayesian search-based machine learning and deep learning approaches for the condition monitoring of industrial equipment. The study aims to enhance predictive maintenance for industrial equipment by forecasting vibration values based on domain-specific feature engineering. Early prediction of vibration enables proactive interventions to minimize downtime and extend the lifespan of critical assets. A dataset of load information and vibration values from a heavy-duty industrial slip ring induction motor (4600 kW) and gearbox equipped with vibration sensors is used as a case study. The study implements and compares six machine learning models with the proposed Bayesian-optimized stacked Long Short-Term Memory (LSTM) model. The hyperparameters used in the implementation of the models are selected based on the Bayesian optimization technique. Comparative analysis reveals that the proposed Bayesian-optimized stacked LSTM outperforms the other models, showcasing its capability to learn temporal features as well as long-term dependencies in time series information. The implemented machine learning models, Linear Regression (LR), Random Forest (RF), Gradient Boosting Regressor (GBR), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Support Vector Regressor (SVR), displayed mean squared errors of 0.9515, 0.4654, 0.1849, 0.0295, 0.2127, and 0.0273, respectively. The proposed model predicts the future vibration characteristics with a mean squared error of 0.0019 on the dataset containing motor load information and vibration characteristics, and it also outperforms the other models on the remaining evaluation metrics, with a mean absolute error of 0.0263 and a coefficient of determination of 0.882. This research not only contributes a comparative assessment of machine learning models for condition monitoring but also demonstrates the practical advantages of transitioning from reactive to proactive maintenance strategies using ML-based condition monitoring: industries can minimize downtime, reduce costs, and prolong the lifespan of crucial assets.
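Bayesian optimization builds a probabilistic surrogate of the validation loss and samples promising configurations. As a much simpler stand-in for that tuning loop (not the authors' implementation), a random search over a hypothetical stacked-LSTM hyperparameter space looks like this, with a synthetic objective substituted for actual model training:

```python
import math
import random

# Hypothetical search space for a stacked-LSTM configuration.
SPACE = {
    "units": [32, 64, 128],
    "layers": [1, 2, 3],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def toy_validation_mse(cfg):
    # Stand-in for training the model and measuring validation MSE:
    # a synthetic objective minimized at 64 units, 2 layers, lr = 1e-3.
    return (((cfg["units"] - 64) / 64) ** 2
            + (cfg["layers"] - 2) ** 2
            + (math.log10(cfg["learning_rate"]) + 3) ** 2)

def random_search(trials=100, seed=1):
    random.seed(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):
        cfg = {k: random.choice(v) for k, v in SPACE.items()}
        score = toy_validation_mse(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
print(best_cfg, round(best_score, 4))
```

Bayesian optimization replaces the blind `random.choice` sampling with a surrogate model that concentrates trials near configurations that scored well, which matters when each trial is an expensive LSTM training run.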
Structural Health Monitoring (SHM) systems play a key role in managing buildings and infrastructure by delivering vital insights into their strength and structural integrity. More efficient defect-detection techniques are needed, as traditional methods are often prone to human error; image processing (IP) helps address this issue. Beyond IP, Artificial Intelligence (AI) technologies such as Machine Learning (ML) and Deep Learning (DL) make possible the automated, accurate, and real-time detection of structural defects, such as cracks, corrosion, and material degradation, that conventional inspection techniques may miss. This review examines the integration of computer vision and AI techniques in SHM, investigating their effectiveness in detecting various forms of structural deterioration. It also evaluates ML and DL models in SHM for their accuracy in identifying and assessing structural damage, ultimately enhancing safety, durability, and maintenance practices in the field. Key findings reveal that AI-powered approaches, especially those utilizing IP and DL models like CNNs, significantly improve detection efficiency and accuracy, with high accuracies reported across various SHM tasks. However, significant research gaps remain, including challenges with the consistency, quality, and environmental resilience of image data, a notable lack of standardized models and datasets for training across diverse structures, and concerns regarding computational costs, model interpretability, and seamless integration with existing systems. Future work should focus on developing more robust models through data augmentation, transfer learning, and hybrid approaches, standardizing protocols, and fostering interdisciplinary collaboration to overcome these limitations and achieve more reliable, scalable, and affordable SHM systems.
The Internet of Things (IoT) and edge-assisted networking infrastructures are capable of bringing data processing and accessibility services locally to the respective edge rather than to a centralized module. These infrastructures are very effective in providing fast responses to the queries of requesting modules, but their distributed nature has introduced other problems, such as security and privacy. To address these problems, various security-assisted communication mechanisms have been developed to safeguard every active module, i.e., devices and edges, from every possible vulnerability in the IoT. However, these methodologies have neglected one of the critical issues, which is the prediction of fraudulent devices, i.e., adversaries, preferably as early as possible in the IoT. In this paper, a hybrid communication mechanism is presented where a Hidden Markov Model (HMM) predicts the legitimacy of the requesting device (both source and destination), and the Advanced Encryption Standard (AES) safeguards the reliability of the transmitted data over a shared communication medium, preferably through a secret shared key and timestamp information. A device becomes trusted if it has passed both evaluation levels, i.e., HMM and message decryption, within a stipulated time interval. The proposed hybrid, along with existing state-of-the-art approaches, has been simulated in a realistic IoT environment to verify the security measures. These evaluations were carried out in the presence of intruders capable of launching various attacks simultaneously, such as man-in-the-middle, device impersonation, and masquerading attacks. Moreover, the proposed approach has proven more effective than existing state-of-the-art approaches due to its exceptional performance in communication, processing, and storage overheads, i.e., 13%, 19%, and 16%, respectively. Finally, the proposed hybrid approach is shown to be resilient against well-known security attacks in the IoT.
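The HMM legitimacy check can be illustrated with the standard forward algorithm: score a device's recent request sequence under a behaviour model and flag sequences that are unlikely. The two-state model and all probabilities below are invented for illustration and are not the paper's parameters:

```python
def forward_likelihood(obs, states, start_p, trans_p, emit_p):
    # Standard HMM forward algorithm: probability of the observation
    # sequence under the model, summing over all hidden-state paths.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
            for s in states
        }
    return sum(alpha.values())

# Hypothetical model: hidden states are device behaviour modes, observations
# are coarse request types. All probabilities are illustrative.
states = ("legit", "adversary")
start_p = {"legit": 0.8, "adversary": 0.2}
trans_p = {
    "legit": {"legit": 0.9, "adversary": 0.1},
    "adversary": {"legit": 0.2, "adversary": 0.8},
}
emit_p = {
    "legit": {"normal_request": 0.85, "burst_request": 0.15},
    "adversary": {"normal_request": 0.3, "burst_request": 0.7},
}

normal = ["normal_request"] * 4
bursty = ["burst_request"] * 4
p_normal = forward_likelihood(normal, states, start_p, trans_p, emit_p)
p_bursty = forward_likelihood(bursty, states, start_p, trans_p, emit_p)
# A device whose recent behaviour is far less likely than the baseline
# would be flagged before the AES-protected exchange proceeds.
print(p_normal > p_bursty)
```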
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting the predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
The integration of artificial intelligence (AI) and multiomics has transformed clinical and life sciences, enabling precision medicine and redefining disease understanding. Scientific publications grew significantly from 2.1 million in 2012 to 3.3 million in 2022, with AI research tripling during this period. Multiomics fields, including genomics and proteomics, also advanced, exemplified by the Human Proteome Project achieving a 90% complete blueprint by 2021. This growth highlights opportunities and challenges in integrating AI and multiomics into clinical reporting. A review of studies and case reports was conducted to evaluate AI and multiomics integration. Key areas analyzed included diagnostic accuracy, predictive modeling, and personalized treatment approaches driven by AI tools. Case examples were studied to assess impacts on clinical decision-making. AI and multiomics enhanced data integration, predictive insights, and treatment personalization. Fields like radiomics, genomics, and proteomics improved diagnostics and guided therapy. For instance, the "AI radiomics, genomics, oncopathomics, and surgomics project" combined radiomics and genomics for surgical decision-making, enabling preoperative, intraoperative, and postoperative interventions. AI applications in case reports predicted conditions like postoperative delirium and monitored cancer progression using genomic and imaging data. AI and multiomics enable standardized data analysis, dynamic updates, and predictive modeling in case reports. Traditional reports often lack objectivity, but AI enhances reproducibility and decision-making by processing large datasets. Challenges include data standardization, biases, and ethical concerns. Overcoming these barriers is vital for optimizing AI applications and advancing personalized medicine. AI and multiomics integration is revolutionizing clinical research and practice. Standardizing data reporting and addressing challenges in ethics and data quality will unlock their full potential. Emphasizing collaboration and transparency is essential for leveraging these tools to improve patient care and scientific communication.
Accurate estimation of evapotranspiration (ET) is crucial for efficient water resource management, particularly in the face of climate change and increasing water scarcity. This study performs a bibliometric analysis of 352 articles and a systematic review of 35 peer-reviewed papers, selected according to PRISMA guidelines, to evaluate the performance of Hybrid Artificial Neural Networks (HANNs) in ET estimation. The findings demonstrate that HANNs, particularly those combining Multilayer Perceptrons (MLPs), Recurrent Neural Networks (RNNs), and Convolutional Neural Networks (CNNs), are highly effective in capturing the complex nonlinear relationships and temporal dependencies characteristic of hydrological processes. These hybrid models, often integrated with optimization algorithms and fuzzy logic frameworks, significantly improve the predictive accuracy and generalization capabilities of ET estimation. The growing adoption of advanced evaluation metrics, such as Kling-Gupta Efficiency (KGE) and Taylor diagrams, highlights the increasing demand for more robust performance assessments beyond traditional methods. Despite the promising results, challenges remain, particularly regarding model interpretability, computational efficiency, and data scarcity. Future research should prioritize the integration of interpretability techniques, such as attention mechanisms, Local Interpretable Model-Agnostic Explanations (LIME), and feature importance analysis, to enhance model transparency and foster stakeholder trust. Additionally, improving the scalability and computational efficiency of HANN models is crucial, especially for large-scale, real-world applications. Approaches such as transfer learning, parallel processing, and hyperparameter optimization will be essential in overcoming these challenges. This study underscores the transformative potential of HANN models for precise ET estimation, particularly in water-scarce and climate-vulnerable regions. By integrating CNNs for automatic feature extraction and leveraging hybrid architectures, HANNs offer considerable advantages for optimizing water management, particularly in agriculture. Addressing challenges related to interpretability and scalability will be vital to ensuring the widespread deployment and operational success of HANNs in global water resource management.
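The Kling-Gupta Efficiency mentioned above has a standard closed form combining correlation, variability ratio, and bias ratio. A minimal sketch, with hypothetical ET values rather than data from the reviewed studies:

```python
import math

def kge(sim, obs):
    # Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    # where r is the Pearson correlation, alpha the ratio of standard
    # deviations, and beta the ratio of means (simulated over observed).
    n = len(obs)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    r = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / (n * ss * so)
    alpha, beta = ss / so, ms / mo
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical daily ET values (mm/day): observations vs. a model's estimates.
obs = [3.1, 3.4, 4.0, 4.2, 3.8]
sim = [3.0, 3.5, 3.9, 4.3, 3.7]
print(round(kge(sim, obs), 4))
print(round(kge(obs, obs), 4))  # perfect agreement scores 1
```

Unlike a plain RMSE, KGE separates correlation, variability, and bias errors, which is why the reviewed studies favour it for hydrological models.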
In a cloud environment, graphics processing units (GPUs) are the primary devices used for high-performance computation. They exploit flexible resource utilization, a key advantage of cloud environments. Multiple users share GPUs, which serve as coprocessors of central processing units (CPUs) and are activated only if tasks demand GPU computation. In a container environment, where resources can be shared among multiple users, GPU utilization can be increased by minimizing idle time, because the tasks of many users run on a single GPU. However, unlike CPUs and memory, GPUs cannot logically multiplex their resources. Additionally, GPU memory does not support over-utilization: when it runs out, tasks will fail. Therefore, it is necessary to regulate the order of execution of concurrently running GPU tasks to avoid such task failures and to ensure equitable GPU sharing among users. In this paper, we propose a GPU task execution order management technique that controls GPU usage via time-based containers. The technique seeks to ensure equal GPU time among users in a container environment and to prevent task failures. Meanwhile, we use a deferred processing method to prevent GPU memory shortages when GPU tasks are executed simultaneously and determine the execution order based on GPU usage time. As the order of GPU tasks cannot be arbitrarily adjusted externally once a task commences, a GPU task is indirectly paused by pausing its container. In addition, as the container pause/unpause status is based on information about the available GPU memory capacity, overuse of GPU memory can be prevented at the source. As a result, the strategy can prevent task failures, and experiments show that GPU tasks are processed in an appropriate order.
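The deferred-processing idea can be sketched as a toy scheduler: a task runs only if its memory demand fits in free GPU memory, and among runnable tasks the container with the least accumulated GPU time goes first. This simplified model (instantaneous task completion, invented numbers) is illustrative, not the paper's implementation:

```python
from collections import deque

class GpuTimeScheduler:
    """Toy model of the paper's idea: containers queue GPU tasks, a task is
    deferred (its container paused) while its memory demand exceeds free GPU
    memory, and runnable tasks are ordered by accumulated GPU time so that
    users share the device equitably."""

    def __init__(self, total_mem):
        self.free_mem = total_mem
        self.used_time = {}     # container -> accumulated GPU seconds
        self.waiting = deque()  # deferred (container, mem, secs) tasks

    def submit(self, container, mem, secs):
        self.used_time.setdefault(container, 0)
        self.waiting.append((container, mem, secs))

    def run_next(self):
        # Pick the runnable task whose container has used the least GPU time;
        # tasks that would exceed free memory stay deferred.
        runnable = [t for t in self.waiting if t[1] <= self.free_mem]
        if not runnable:
            return None
        task = min(runnable, key=lambda t: self.used_time[t[0]])
        self.waiting.remove(task)
        container, mem, secs = task
        # Simplification: the task occupies the GPU for `secs` and finishes
        # immediately, so memory is released right away.
        self.used_time[container] += secs
        return container

sched = GpuTimeScheduler(total_mem=8)
sched.submit("user-a", mem=4, secs=10)
sched.submit("user-a", mem=4, secs=10)
sched.submit("user-b", mem=4, secs=5)
sched.submit("user-c", mem=16, secs=1)   # never fits: stays deferred
order = [sched.run_next() for _ in range(3)]
print(order)
```

After user-a's first task accumulates GPU time, user-b's pending task jumps ahead of user-a's second one, which is the equal-time behaviour the paper targets.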
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). Specifically, the voice carries a great deal of information, revealing details about the speaker's goals and desires, as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demand to be understood. Voice emotion recognition has become an essential component of modern HCC networks. Integrating findings from the various disciplines involved in identifying vocal emotions is also challenging. Many sound analysis techniques have been developed in the past. With the development of artificial intelligence (AI), and especially Deep Learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the suggested SHO-LSTM technique. Mel-frequency cepstral coefficient (MFCC) and Wiener filter (WF) techniques are used to extract features from the data and remove noise, respectively. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. Python software was used to implement the proposed framework. In the evaluation phase, numerous metrics are used to assess the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The suggested approach was tested on a Python platform, and the SHO-LSTM's outcomes are contrasted with those of previously conducted research. Based on comparative assessments, our suggested approach outperforms the current approaches in vocal emotion recognition.
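The reported scores follow the standard precision/recall/F1/accuracy definitions; a minimal sketch with hypothetical confusion counts (not the paper's actual results):

```python
def classification_metrics(tp, fp, fn, tn):
    # Standard definitions behind scores such as those reported for SHO-LSTM.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical confusion counts for one emotion class (one-vs-rest).
p, r, f1, acc = classification_metrics(tp=95, fp=5, fn=4, tn=96)
print(round(p, 3), round(r, 3), round(f1, 3), round(acc, 3))
```

For multi-class emotion recognition these per-class scores are typically averaged (macro or weighted) across the emotion labels.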
In today's digital era, the rapid evolution of image editing technologies has brought about a significant simplification of image manipulation. Unfortunately, this progress has also given rise to the misuse of manipulated images across various domains. One of the pressing challenges stemming from this advancement is the increasing difficulty of discerning between unaltered and manipulated images. This paper offers a comprehensive survey of existing methodologies for detecting image tampering, shedding light on the diverse approaches employed in contemporary image forensics. The methods used to identify image forgery can be broadly classified into two primary categories: classical machine learning techniques, heavily reliant on manually crafted features, and deep learning methods. Additionally, this paper explores recent developments in image forensics, placing particular emphasis on the detection of counterfeit colorization. Image colorization involves predicting colors for grayscale images, thereby enhancing their visual appeal. Advancements in colorization techniques have reached a level where distinguishing between authentic and forged images with the naked eye has become an exceptionally challenging task. This paper serves as an in-depth exploration of the intricacies of image forensics in the modern age, with a specific focus on the detection of colorization forgery, presenting a comprehensive overview of methodologies in this critical field.
With the accelerated growth of the Internet of Things (IoT), real-time data processing on edge devices is increasingly important for reducing overhead and enhancing security by keeping sensitive data local. Since these devices often handle personal information under limited resources, cryptographic algorithms must be executed efficiently. Their computational characteristics strongly affect system performance, making it necessary to analyze resource impact and predict usage under diverse configurations. In this paper, we analyze the phase-level resource usage of AES variants, ChaCha20, ECC, and RSA on an edge device and develop a prediction model. We apply these algorithms under varying parallelism levels and execution strategies across the key generation, encryption, and decryption phases. Based on the analysis, we train a unified Random Forest model using execution context and temporal features, achieving R2 values of up to 0.994 for power and 0.988 for temperature. Furthermore, the model maintains practical predictive performance even for cryptographic algorithms not included during training, demonstrating its ability to generalize across distinct computational characteristics. Our proposed approach reveals how execution characteristics and resource usage interact, supporting proactive resource planning and efficient deployment of cryptographic workloads on edge devices. As our approach is grounded in phase-level computational characteristics rather than in any single algorithm, it provides generalizable insights that can be extended to a broader range of cryptographic algorithms that exhibit comparable phase-level execution patterns and to heterogeneous edge architectures.
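The paper trains a Random Forest; as a far simpler stand-in that conveys the same interface, a nearest-neighbour lookup over phase-level measurements can predict power for a new execution context. The feature encoding and all wattages below are invented for illustration:

```python
import math

def predict_power(train, query):
    # Nearest-neighbour stand-in for the paper's Random Forest: given
    # labelled (features, power) samples, return the power of the closest
    # execution context. Features: (parallelism level, phase id, elapsed s).
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    feats, power = min(train, key=lambda s: dist(s[0], query))
    return power

# Hypothetical phase-level measurements (all values illustrative):
# phase ids: 0 = key generation, 1 = encryption, 2 = decryption.
train = [
    ((1, 0, 0.2), 1.1),   # single-threaded keygen -> 1.1 W
    ((1, 1, 1.5), 1.8),
    ((4, 1, 1.5), 3.9),   # 4-way parallel encryption draws more power
    ((4, 2, 1.2), 3.5),
]
print(predict_power(train, (4, 1, 1.4)))
```

A Random Forest would additionally average over many decision trees and expose feature importances, but the prediction interface (execution-context features in, resource estimate out) is the same.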
The Internet of Things (IoT) is a smart infrastructure where devices share captured data with the respective server or edge modules. However, secure and reliable communication is among the challenging tasks in these networks, as shared channels are used to transmit packets. In this paper, a decision tree is integrated with other metrics to form a secure distributed communication strategy for IoT. Initially, every device works collaboratively to form a distributed network. In this model, if a device is deployed outside the coverage area of the nearest server, it communicates indirectly through the neighboring devices. For this purpose, every device collects data from the respective neighboring devices, such as hop count, average packet transmission delay, criticality factor, link reliability, and RSSI value. These parameters are used to find an optimal route from the source to the destination. Secondly, the proposed approach enables devices to learn from the environment and adjust the optimal route-finding formula accordingly. Moreover, these devices and server modules must ensure that every packet is transmitted securely, which is possible only if it is encrypted with an encryption algorithm. For this purpose, a decision tree-enabled device-to-server authentication algorithm is presented in which every device and server must take part in the offline phase. Simulation results have verified that the proposed distributed communication approach has the potential to ensure the integrity and confidentiality of data during transmission. Moreover, the proposed approach outperforms existing approaches in terms of communication cost, processing overhead, end-to-end delay, packet loss ratio, and throughput. Finally, the proposed approach can be adopted in different networking infrastructures.
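The route-selection idea, combining hop count, delay, link reliability, and RSSI into a single score, can be sketched as a weighted cost function. The weights, normalization ranges, and neighbor values below are illustrative assumptions, not the paper's tuned formula:

```python
def route_score(hops, delay_ms, reliability, rssi_dbm,
                w=(0.3, 0.3, 0.3, 0.1)):
    """Combine per-neighbor metrics into a single cost (lower is better).
    Weights and normalization ranges are illustrative, not from the paper."""
    hop_cost = hops / 10            # assume routes of at most 10 hops
    delay_cost = delay_ms / 100     # assume delays of at most 100 ms
    rel_cost = 1 - reliability      # reliability given in [0, 1]
    rssi_cost = (-rssi_dbm) / 100   # e.g., -50 dBm (strong) -> 0.5
    costs = (hop_cost, delay_cost, rel_cost, rssi_cost)
    return sum(wi * c for wi, c in zip(w, costs))

# Two hypothetical candidate next-hop routes
routes = {
    "via_A": route_score(hops=3, delay_ms=40, reliability=0.95, rssi_dbm=-50),
    "via_B": route_score(hops=2, delay_ms=70, reliability=0.80, rssi_dbm=-65),
}
best = min(routes, key=routes.get)  # via_A wins: lower delay, better link
```

The learning component described in the abstract would then amount to adjusting the weight vector `w` from observed network feedback.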
Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic and high-demand environments, leading to resource bottlenecks and increased energy consumption. This study aims to address these limitations by proposing the Quantum Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to dynamically adjust resources in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out over a 360-minute period with eight distinct scenarios. This study introduces a novel quantum-inspired resource management framework that achieves up to 98% task offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
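The energy-aware scheduling module described above, selecting the optimal configuration based on energy metrics, can be sketched as a feasibility filter followed by an energy minimization. All configuration values below are hypothetical:

```python
def pick_config(configs):
    """Energy-aware selection: among configurations that meet their latency
    deadline, choose the one with the lowest energy estimate.
    Returns None if nothing is feasible."""
    feasible = [c for c in configs if c["latency_ms"] <= c["deadline_ms"]]
    return min(feasible, key=lambda c: c["energy_j"]) if feasible else None

# Hypothetical placement options for one task
configs = [
    {"name": "edge",  "latency_ms": 20, "deadline_ms": 50, "energy_j": 1.2},
    {"name": "fog",   "latency_ms": 35, "deadline_ms": 50, "energy_j": 0.8},
    {"name": "cloud", "latency_ms": 80, "deadline_ms": 50, "energy_j": 0.5},
]
best = pick_config(configs)  # "fog": cheapest option that meets the deadline
```

QIARM's superposition-inspired representation would keep several such candidate states alive at once; this sketch only shows the final energy-metric selection step.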
IoT has emerged as a game-changing technology that connects numerous gadgets to networks for communication, processing, and real-time monitoring across diverse applications. Due to their heterogeneous nature and constrained resources, as well as the growing trend of using smart gadgets, there are privacy and security issues that are not adequately managed by conventional security measures. This review offers a thorough analysis of contemporary AI solutions designed to enhance security within IoT ecosystems. The intersection of AI technologies, including ML and blockchain, with IoT privacy and security is systematically examined, focusing on their efficacy in addressing core security issues. The methodology involves a detailed exploration of existing literature and research on AI-driven privacy-preserving security mechanisms in IoT. The reviewed solutions are categorized based on their ability to tackle specific security challenges. The review highlights key advancements, evaluates their practical applications, and identifies prevailing research gaps and challenges. The findings indicate that AI solutions, particularly those leveraging ML and blockchain, offer promising enhancements to IoT privacy and security by improving threat detection capabilities and ensuring data integrity. This paper highlights how AI technologies might strengthen IoT privacy and security and offers suggestions for upcoming studies intended to address enduring problems and improve the robustness of IoT networks.
The sinkhole attack is one of the most damaging threats in the Internet of Things (IoT). It deceptively attracts neighboring nodes and initiates malicious activity, often disrupting the network when combined with other attacks. This study proposes a novel approach, named NADSA, to detect and isolate sinkhole attacks. NADSA is based on the RPL protocol and consists of two detection phases. In the first phase, the minimum possible hop count between the sender and receiver is calculated and compared with the sender's reported hop count. The second phase uses the number of DIO messages to identify suspicious nodes and then applies a fuzzification process over RSSI, ETX, and distance measurements to confirm the presence of a malicious node. The proposed method is extensively simulated in highly lossy and sparse network environments with varying numbers of nodes. The results demonstrate that NADSA achieves high efficiency, with packet delivery ratios (PDRs) of 68%, 70%, and 73%; end-to-end delays (E2EDs) of 81, 72, and 60 ms; true positive rates (TPRs) of 89%, 83%, and 80%; and false positive rates (FPRs) of 24%, 28%, and 33%. NADSA outperforms existing methods in challenging network conditions, where traditional approaches typically degrade in effectiveness.
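The first detection phase, comparing the minimum physically possible hop count against the sender's advertised hop count, can be sketched as follows. The per-hop radio range and node positions are illustrative assumptions, not values from the paper:

```python
import math

MAX_RANGE_M = 50.0  # assumed maximum radio range per hop (illustrative)

def min_possible_hops(sender_xy, receiver_xy):
    """Lower bound on hop count: straight-line distance / per-hop range."""
    dist = math.dist(sender_xy, receiver_xy)
    return math.ceil(dist / MAX_RANGE_M)

def suspicious(sender_xy, receiver_xy, reported_hops):
    """Phase 1: a node advertising fewer hops than physically possible
    is flagged for the fuzzification phase (RSSI, ETX, distance)."""
    return reported_hops < min_possible_hops(sender_xy, receiver_xy)

# A node 180 m away claiming a 1-hop route looks like a sinkhole lure
print(suspicious((0, 0), (0, 180), reported_hops=1))  # → True
```

A flagged node is not condemned outright; per the abstract, the second phase's fuzzification over RSSI, ETX, and distance confirms or clears it, which keeps the false positive rate bounded.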
Abstract: The planning of teaching for a course in an undergraduate program usually begins with the definition of its contents, which are derived from the syllabus of a political-pedagogical project. The contents listed are organized in a sequence considered logical. A set of actions is planned, such as lectures and laboratory sessions, through which the content will be developed. The student's previous training is considered, as are the concurrent and subsequent courses, the context of the course within the program, and the specific and general objectives of the program. A set of assessments is also defined as part of this planning, along with the associated methodologies, techniques, and teaching objectives. In this context, this paper focuses on the sequencing of content, methodologies, and teaching techniques in a course. For this purpose, Bloom's Taxonomy of Educational Objectives is applied, which provides a hierarchical structure for the cognitive process. The importance of this hierarchy of knowledge lies in the greater awareness it gives the teacher about the paths to be adopted in the teaching process.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62371181, and in part by the Changzhou Science and Technology International Cooperation Program under Grant CZ20230029; supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1A2B5B02087169); and supported under the framework of the international cooperation program managed by the National Research Foundation of Korea (2022K2A9A1A01098051).
Abstract: The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies; data are collected from these "things" and intelligent approaches, such as Artificial Intelligence (AI) and machine learning, are used to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses independently on either data science or IIoT, rather than exploring their integration. Therefore, to address this gap, this article provides a comprehensive survey on the advances and integration of data science with the Intelligent IoT (IIoT) system, classifying the existing IoT-based data science techniques and presenting a summary of their various characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are visualized in the context of data science for IoT. In addition, this study reveals current opportunities to enhance data science and IoT market development. The current gaps and challenges faced in the integration of data science and IoT are comprehensively presented, followed by the future outlook and possible solutions.
Abstract: Time series are an important object of study in science, engineering, and business, especially where the goal is to understand, predict, and optimize behaviors. In this context, we intend to show the feasibility of using artificial neural networks to study several time series in an engineering course, especially those that have no overt behavior or cannot easily be modeled mathematically, with direct application in the education of future engineers.
Funding: The authors thank the Deanship of Scientific Research at King Khalid University for funding this work through the Large Group Research Project under grant number RGP2/421/45; supported via funding from Prince Sattam bin Abdulaziz University, project number (PSAU/2024/R/1446); supported by the Researchers Supporting Project Number (UM-DSR-IG-2023-07), Almaarefa University, Riyadh, Saudi Arabia; and supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2021R1F1A1055408).
Abstract: Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve feature selection congruency. The more congruent pixels are then sorted in descending order of selection, which identifies better regions than the distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. As a result, the probability of feature selection, regardless of textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
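As a rough illustration of correlation-based feature ranking (a generic stand-in, not the authors' exact congruent-selection procedure), features can be ordered by the absolute Pearson correlation between their values and a diagnostic label; all data below are hypothetical:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-image feature values vs. a binary diagnostic label
features = {
    "texture": [0.2, 0.4, 0.6, 0.8],   # tracks the label closely
    "noise":   [0.5, 0.1, 0.9, 0.3],   # weakly related
}
label = [0, 0, 1, 1]
ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], label)),
                reverse=True)  # most label-correlated feature first
```

Dropping the weakly correlated features before training is what reduces the computation time the abstract mentions.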
Abstract: The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. A comparative analysis of DMST-GNODE and baseline models indicates that the DMST-GNODE model demonstrates superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values, alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend across 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, consistently maintaining superiority across all time intervals. These results indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
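The RMSE and MAE metrics reported above follow their standard definitions, which can be sketched in a few lines of Python (the traffic-flow values are hypothetical):

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error: penalizes large deviations quadratically."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of the errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical 20-min traffic-flow counts vs. forecasts
actual = [120, 135, 150, 160]
predicted = [118, 140, 149, 155]
```

Because RMSE squares each error, it is always at least as large as MAE on the same data, which is why papers typically report both.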
Funding: Research Center of the College of Computer and Information Sciences, King Saud University, Grant/Award Number: RSPD2024R947.
Abstract: Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing capacity. Despite the popularity of ML techniques, only a few research studies have focused on their application, especially supervised learning techniques, in Requirement Engineering (RE) activities to solve the problems that occur in those activities. The authors focus on the systematic mapping of past work to investigate studies on the application of supervised learning techniques in RE activities between 2002 and 2023. The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. Forty-five research studies were selected based on our inclusion and exclusion criteria. The results show that the scientific community used 57 algorithms. Among those algorithms, researchers mostly used the following five ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-nearest neighbour Classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. The selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
Abstract: In the era of Industry 4.0, condition monitoring has emerged as an effective solution for process industries to optimize their operational efficiency. Condition monitoring helps minimize unplanned downtime, extend equipment lifespan, reduce maintenance costs, and improve production quality and safety. This research focuses on utilizing Bayesian search-based machine learning and deep learning approaches for the condition monitoring of industrial equipment. The study aims to enhance predictive maintenance for industrial equipment by forecasting vibration values based on domain-specific feature engineering. Early prediction of vibration enables proactive interventions to minimize downtime and extend the lifespan of critical assets. A dataset of load information and vibration values from a heavy-duty industrial slip ring induction motor (4600 kW) and a gearbox equipped with vibration sensors is used as a case study. The study implements and compares six machine learning models with the proposed Bayesian-optimized stacked Long Short-Term Memory (LSTM) model. The hyperparameters used in the implementation of the models are selected using the Bayesian optimization technique. Comparative analysis reveals that the proposed Bayesian-optimized stacked LSTM outperforms the other models, showcasing its capability to learn temporal features as well as long-term dependencies in time series information. The implemented machine learning models, Linear Regression (LR), Random Forest (RF), Gradient Boosting Regressor (GBR), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Support Vector Regressor (SVR), displayed mean squared errors of 0.9515, 0.4654, 0.1849, 0.0295, 0.2127, and 0.0273, respectively. The proposed model predicts future vibration characteristics with a mean squared error of 0.0019 on the dataset containing motor load information and vibration characteristics, and it also outperforms the other models on the remaining evaluation metrics, with a mean absolute error of 0.0263 and a coefficient of determination of 0.882. This research not only contributes a comparative performance analysis of machine learning models in condition monitoring but also showcases the practical implications of employing these techniques: by transitioning from reactive to proactive maintenance strategies, industries can minimize downtime, reduce costs, and prolong the lifespan of crucial assets.
Abstract: Structural Health Monitoring (SHM) systems play a key role in managing buildings and infrastructure by delivering vital insights into their strength and structural integrity. There is a need for more efficient defect-detection techniques, as traditional methods are often prone to human error, a limitation increasingly addressed through image processing (IP). Beyond IP, Artificial Intelligence (AI) technologies such as Machine Learning (ML) and Deep Learning (DL) enable automated, accurate, and real-time detection of structural defects, such as cracks, corrosion, and material degradation, that conventional inspection techniques may miss. This review examines the integration of computer vision and AI techniques in SHM, investigating their effectiveness in detecting various forms of structural deterioration. It also evaluates ML and DL models in SHM for their accuracy in identifying and assessing structural damage, ultimately enhancing safety, durability, and maintenance practices in the field. Key findings reveal that AI-powered approaches, especially those utilizing IP and DL models like CNNs, significantly improve detection efficiency and accuracy, with high accuracies reported across various SHM tasks. However, significant research gaps remain, including challenges with the consistency, quality, and environmental resilience of image data; a notable lack of standardized models and datasets for training across diverse structures; and concerns regarding computational costs, model interpretability, and seamless integration with existing systems. Future work should focus on developing more robust models through data augmentation, transfer learning, and hybrid approaches; standardizing protocols; and fostering interdisciplinary collaboration to overcome these limitations and achieve more reliable, scalable, and affordable SHM systems.
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at Qassim University via Grant No. (QU-APC-2025).
Abstract: The Internet of Things (IoT) and edge-assisted networking infrastructures are capable of bringing data processing and accessibility services locally to the respective edge rather than to a centralized module. These infrastructures are very effective in providing fast responses to the queries of requesting modules, but their distributed nature has introduced other problems, such as security and privacy. To address these problems, various security-assisted communication mechanisms have been developed to safeguard every active module, i.e., devices and edges, from every possible vulnerability in the IoT. However, these methodologies have neglected one critical issue: the prediction of fraudulent devices, i.e., adversaries, preferably as early as possible in the IoT. In this paper, a hybrid communication mechanism is presented where the Hidden Markov Model (HMM) predicts the legitimacy of the requesting device (both source and destination), and the Advanced Encryption Standard (AES) safeguards the reliability of the transmitted data over a shared communication medium, preferably through a secret shared key and timestamp information. A device becomes trusted if it has passed both evaluation levels, i.e., HMM and message decryption, within a stipulated time interval. The proposed hybrid approach, along with existing state-of-the-art approaches, has been simulated in a realistic IoT environment to verify the security measures. These evaluations were carried out in the presence of intruders capable of launching various attacks simultaneously, such as man-in-the-middle, device impersonation, and masquerading attacks. Moreover, the proposed approach has proven more effective than existing state-of-the-art approaches due to its exceptional performance in communication, processing, and storage overheads, i.e., 13%, 19%, and 16%, respectively. Finally, the proposed hybrid approach has been shown to be robust against well-known security attacks in the IoT.
Funding: Supported by the "Technology Commercialization Collaboration Platform Construction" project of the Innopolis Foundation (Project Number: 2710033536) and by the Competitive Research Fund of The University of Aizu, Japan.
Abstract: Sentiment analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, of which 25 met the predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
Abstract: The integration of artificial intelligence (AI) and multiomics has transformed the clinical and life sciences, enabling precision medicine and redefining disease understanding. Scientific publications grew significantly from 2.1 million in 2012 to 3.3 million in 2022, with AI research tripling during this period. Multiomics fields, including genomics and proteomics, also advanced, exemplified by the Human Proteome Project achieving a 90% complete blueprint by 2021. This growth highlights opportunities and challenges in integrating AI and multiomics into clinical reporting. A review of studies and case reports was conducted to evaluate AI and multiomics integration. Key areas analyzed included diagnostic accuracy, predictive modeling, and personalized treatment approaches driven by AI tools. Case examples were studied to assess impacts on clinical decision-making. AI and multiomics enhanced data integration, predictive insights, and treatment personalization. Fields like radiomics, genomics, and proteomics improved diagnostics and guided therapy. For instance, the "AI radiomics, genomics, oncopathomics, and surgomics project" combined radiomics and genomics for surgical decision-making, enabling preoperative, intraoperative, and postoperative interventions. AI applications in case reports predicted conditions like postoperative delirium and monitored cancer progression using genomic and imaging data. AI and multiomics enable standardized data analysis, dynamic updates, and predictive modeling in case reports. Traditional reports often lack objectivity, but AI enhances reproducibility and decision-making by processing large datasets. Challenges include data standardization, biases, and ethical concerns. Overcoming these barriers is vital for optimizing AI applications and advancing personalized medicine. The integration of AI and multiomics is revolutionizing clinical research and practice. Standardizing data reporting and addressing challenges in ethics and data quality will unlock their full potential. Emphasizing collaboration and transparency is essential for leveraging these tools to improve patient care and scientific communication.
Abstract: Accurate estimation of evapotranspiration (ET) is crucial for efficient water resource management, particularly in the face of climate change and increasing water scarcity. This study performs a bibliometric analysis of 352 articles and a systematic review of 35 peer-reviewed papers, selected according to PRISMA guidelines, to evaluate the performance of Hybrid Artificial Neural Networks (HANNs) in ET estimation. The findings demonstrate that HANNs, particularly those combining Multilayer Perceptrons (MLPs), Recurrent Neural Networks (RNNs), and Convolutional Neural Networks (CNNs), are highly effective in capturing the complex nonlinear relationships and temporal dependencies characteristic of hydrological processes. These hybrid models, often integrated with optimization algorithms and fuzzy logic frameworks, significantly improve the predictive accuracy and generalization capabilities of ET estimation. The growing adoption of advanced evaluation metrics, such as Kling-Gupta Efficiency (KGE) and Taylor diagrams, highlights the increasing demand for more robust performance assessments beyond traditional methods. Despite the promising results, challenges remain, particularly regarding model interpretability, computational efficiency, and data scarcity. Future research should prioritize the integration of interpretability techniques, such as attention mechanisms, Local Interpretable Model-Agnostic Explanations (LIME), and feature importance analysis, to enhance model transparency and foster stakeholder trust. Additionally, improving the scalability and computational efficiency of HANN models is crucial, especially for large-scale, real-world applications. Approaches such as transfer learning, parallel processing, and hyperparameter optimization will be essential in overcoming these challenges. This study underscores the transformative potential of HANN models for precise ET estimation, particularly in water-scarce and climate-vulnerable regions. By integrating CNNs for automatic feature extraction and leveraging hybrid architectures, HANNs offer considerable advantages for optimizing water management, particularly in agriculture. Addressing challenges related to interpretability and scalability will be vital to ensuring the widespread deployment and operational success of HANNs in global water resource management.
Funding: Supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2023RIS-009).
Abstract: In a cloud environment, graphics processing units (GPUs) are the primary devices used for high-performance computation. They exploit flexible resource utilization, a key advantage of cloud environments. Multiple users share GPUs, which serve as coprocessors of central processing units (CPUs) and are activated only if tasks demand GPU computation. In a container environment, where resources can be shared among multiple users, GPU utilization can be increased by minimizing idle time, because the tasks of many users run on a single GPU. However, unlike CPUs and memory, GPUs cannot logically multiplex their resources. Additionally, GPU memory does not support over-utilization: when it runs out, tasks will fail. Therefore, it is necessary to regulate the order of execution of concurrently running GPU tasks to avoid such task failures and to ensure equitable GPU sharing among users. In this paper, we propose a GPU task execution order management technique that controls GPU usage via time-based containers. The technique seeks to ensure equal GPU time among users in a container environment to prevent task failures. Meanwhile, we use a deferred processing method to prevent GPU memory shortages when GPU tasks are executed simultaneously and to determine the execution order based on GPU usage time. As the order of GPU tasks cannot be arbitrarily adjusted externally once a task commences, a GPU task is indirectly paused by pausing its container. In addition, as the container pause/unpause status is based on information about the available GPU memory capacity, overuse of GPU memory can be prevented at the source. As a result, the strategy prevents task failures, and experiments show that GPU tasks are processed in an appropriate order.
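The pause/defer policy described above can be sketched as a toy scheduler: a task that would exceed free GPU memory is deferred (its container paused), and deferred tasks are resumed in order of least accumulated GPU time. Sizes and the class interface are illustrative, not the paper's implementation:

```python
class GpuScheduler:
    """Toy model of the paper's idea: defer tasks that would exhaust GPU
    memory, and resume the least-served user's task first.
    All sizes and policies here are illustrative."""

    def __init__(self, total_mem_mb):
        self.free_mem = total_mem_mb
        self.used_time = {}          # user -> accumulated GPU seconds
        self.deferred = []           # (user, mem_mb) of paused tasks

    def submit(self, user, mem_mb):
        if mem_mb > self.free_mem:   # would fail with OOM: pause instead
            self.deferred.append((user, mem_mb))
            return "paused"
        self.free_mem -= mem_mb
        return "running"

    def pick_next(self):
        """Unpause the deferred task of the least-served user that now fits."""
        fitting = [(u, m) for u, m in self.deferred if m <= self.free_mem]
        if not fitting:
            return None
        user, mem = min(fitting, key=lambda t: self.used_time.get(t[0], 0))
        self.deferred.remove((user, mem))
        self.free_mem -= mem
        return user

sched = GpuScheduler(total_mem_mb=8000)
sched.submit("alice", 6000)          # fits: runs immediately
status = sched.submit("bob", 4000)   # exceeds free memory -> "paused"
```

In the paper's setting the pause would be realized by pausing the task's container, since a running GPU kernel cannot be reordered externally; here the dictionary of accumulated GPU time stands in for that per-user fairness accounting.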
Funding: The author, Dr. Arshiya S. Ansari, extends appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through project number (R-2025-1538).
Abstract: Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). Specifically, the voice carries a great deal of information, revealing details about the speaker's goals and desires, as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demand to be understood. Voice emotion recognition has become an essential component of modern HCC networks. Integrating findings from the various disciplines involved in identifying vocal emotions is also challenging. Many sound analysis techniques were developed in the past; with the development of artificial intelligence (AI), and especially Deep Learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long/short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the proposed SHO-LSTM technique. Mel-frequency cepstral coefficient (MFCC) and Wiener filter (WF) techniques are used to extract features and remove noise, respectively. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. Python software was used to implement the proposed framework. In the evaluation phase, numerous metrics are used to assess the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The proposed approach is tested on a Python platform, and the SHO-LSTM's outcomes are compared with those of previously conducted research. Based on comparative assessments, the proposed approach outperforms current approaches in vocal emotion recognition.
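The evaluation metrics quoted above (precision, recall, F1-score, accuracy) all derive from confusion-matrix counts. A small sketch with hypothetical counts, not the paper's RAVDESS results:

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts for one emotion class
p, r, f1, acc = classification_metrics(tp=95, fp=5, fn=4, tn=96)
```

For multi-class emotion recognition these are usually computed per class and then averaged, so a single reported F1 of 95% summarizes performance across all emotion labels.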
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A3049788).
Abstract: In today's digital era, the rapid evolution of image editing technologies has significantly simplified image manipulation. Unfortunately, this progress has also given rise to the misuse of manipulated images across various domains. One pressing challenge stemming from this advancement is the increasing difficulty of discerning between unaltered and manipulated images. This paper offers a comprehensive survey of existing methodologies for detecting image tampering, shedding light on the diverse approaches employed in contemporary image forensics. The methods used to identify image forgery can be broadly classified into two primary categories: classical machine learning techniques, which rely heavily on manually crafted features, and deep learning methods. Additionally, this paper explores recent developments in image forensics, placing particular emphasis on the detection of counterfeit colorization. Image colorization involves predicting colors for grayscale images, thereby enhancing their visual appeal. Advancements in colorization techniques have reached a level where distinguishing between authentic and forged images with the naked eye has become exceptionally challenging. This paper serves as an in-depth exploration of the intricacies of image forensics in the modern age, with a specific focus on the detection of colorization forgery, presenting a comprehensive overview of methodologies in this critical field.
Funding: supported in part by the National Research Foundation of Korea (NRF) (No. RS-2025-00554650) and by a Chung-Ang University research grant in 2024.
Abstract: With the accelerated growth of the Internet of Things (IoT), real-time data processing on edge devices is increasingly important for reducing overhead and enhancing security by keeping sensitive data local. Since these devices often handle personal information under limited resources, cryptographic algorithms must be executed efficiently. Their computational characteristics strongly affect system performance, making it necessary to analyze resource impact and predict usage under diverse configurations. In this paper, we analyze the phase-level resource usage of AES variants, ChaCha20, ECC, and RSA on an edge device and develop a prediction model. We apply these algorithms under varying parallelism levels and execution strategies across the key generation, encryption, and decryption phases. Based on this analysis, we train a unified Random Forest model using execution context and temporal features, achieving R² values up to 0.994 for power and 0.988 for temperature. Furthermore, the model maintains practical predictive performance even for cryptographic algorithms not included during training, demonstrating its ability to generalize across distinct computational characteristics. Our proposed approach reveals how execution characteristics and resource usage interact, supporting proactive resource planning and efficient deployment of cryptographic workloads on edge devices. As our approach is grounded in phase-level computational characteristics rather than in any single algorithm, it provides generalizable insights that can be extended to a broader range of cryptographic algorithms exhibiting comparable phase-level execution patterns and to heterogeneous edge architectures.
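The Random Forest setup described above can be sketched as follows. This is a hypothetical illustration, not the paper's model: the feature columns (algorithm id, phase id, parallelism level, time within phase) mirror the "execution context and temporal features" idea, but the synthetic power target and all values are invented for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical phase-level features: algorithm id, phase id
# (key generation / encryption / decryption), parallelism level,
# and elapsed time within the phase.
n = 600
X = np.column_stack([
    rng.integers(0, 4, n),    # algorithm (e.g. AES, ChaCha20, ECC, RSA)
    rng.integers(0, 3, n),    # phase
    rng.integers(1, 5, n),    # parallelism level
    rng.uniform(0, 10, n),    # time within phase (s)
])
# Synthetic power target loosely tied to parallelism, phase, and time.
y = 0.5 * X[:, 2] + 0.2 * X[:, 1] + 0.05 * X[:, 3] + rng.normal(0, 0.1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:500], y[:500])
r2 = r2_score(y[500:], model.predict(X[500:]))
print(round(r2, 3))
```

With real power and temperature traces in place of the synthetic target, the same fit/predict pattern would produce the per-phase usage forecasts the abstract describes.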
Funding: supported by Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, through Project number (PNURSP2025R235).
Abstract: The Internet of Things (IoT) is a smart infrastructure in which devices share captured data with the respective server or edge modules. However, secure and reliable communication is among the most challenging tasks in these networks, as shared channels are used to transmit packets. In this paper, a decision tree is integrated with other metrics to form a secure distributed communication strategy for IoT. Initially, every device works collaboratively to form a distributed network. In this model, if a device is deployed outside the coverage area of the nearest server, it communicates indirectly through neighboring devices. For this purpose, every device collects data from its neighboring devices, such as hop count, average packet transmission delay, criticality factor, link reliability, and RSSI value. These parameters are used to find an optimal route from the source to the destination. Secondly, the proposed approach enables devices to learn from the environment and adjust the optimal route-finding formula accordingly. Moreover, these devices and server modules must ensure that every packet is transmitted securely, which is possible only if it is encrypted with an encryption algorithm. For this purpose, a decision tree-enabled device-to-server authentication algorithm is presented in which every device and server must take part in an offline phase. Simulation results have verified that the proposed distributed communication approach has the potential to ensure the integrity and confidentiality of data during transmission. Moreover, the proposed approach has outperformed existing approaches in terms of communication cost, processing overhead, end-to-end delay, packet loss ratio, and throughput. Finally, the proposed approach can be adopted in different networking infrastructures.
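Combining per-link metrics such as delay, link reliability, and RSSI into a single route choice can be sketched as below. This is a generic illustration of metric-weighted shortest-path selection, not the paper's decision-tree formula; the weights, normalization ranges, and topology are assumptions.

```python
import heapq

def link_cost(hop_delay_ms, reliability, rssi_dbm, w=(0.4, 0.4, 0.2)):
    """Combine per-link metrics into one cost (lower is better).
    Weights and normalizations are illustrative assumptions."""
    delay_n = min(hop_delay_ms / 100.0, 1.0)               # normalize to [0, 1]
    unrel_n = 1.0 - reliability                            # reliability in [0, 1]
    rssi_n = min(max((-rssi_dbm - 40) / 60.0, 0.0), 1.0)   # -40 dBm (good) .. -100 dBm (poor)
    return w[0] * delay_n + w[1] * unrel_n + w[2] * rssi_n

def best_route(graph, src, dst):
    """Dijkstra over the composite cost; graph[u] = {v: cost}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph.get(u, {}).items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Hypothetical four-node topology: the reliable two-hop path via B beats
# the path through the lossy A-C link.
g = {
    "A": {"B": link_cost(10, 0.95, -55), "C": link_cost(40, 0.70, -80)},
    "B": {"D": link_cost(15, 0.90, -60)},
    "C": {"D": link_cost(5, 0.99, -50)},
}
print(best_route(g, "A", "D"))  # ['A', 'B', 'D']
```

The abstract's adaptive component would correspond to adjusting the weight vector `w` online as devices learn from the environment.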
Funding: funded by Researchers Supporting Project Number (RSPD2025R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic, high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to adjust resources dynamically in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out in a 360-minute environment with eight distinct scenarios. The proposed quantum-inspired resource management framework achieves up to 98% task-offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
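The energy-aware scheduling idea, selecting the configuration that minimizes power while still meeting demand, can be sketched as a simple feasibility filter plus an energy-minimizing pick. The configuration table, its field names, and all numbers are invented for illustration; QIARM's actual quantum-inspired representation is not modeled here.

```python
# Each candidate configuration carries an estimated energy draw (W),
# expected latency (ms), and task capacity; names and values are illustrative.
CONFIGS = [
    {"name": "low-power",   "energy_w": 2.0, "latency_ms": 90, "capacity": 4},
    {"name": "balanced",    "energy_w": 4.5, "latency_ms": 40, "capacity": 8},
    {"name": "performance", "energy_w": 9.0, "latency_ms": 15, "capacity": 16},
]

def schedule(tasks, deadline_ms):
    """Pick the lowest-energy configuration that meets the deadline
    and has room for the task batch; None models an offload failure."""
    feasible = [c for c in CONFIGS
                if c["latency_ms"] <= deadline_ms and c["capacity"] >= tasks]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c["energy_w"])

print(schedule(6, 50)["name"])  # balanced: low-power misses the deadline
print(schedule(20, 10))         # None: no configuration has enough capacity
```

An adaptive layer in the spirit of the abstract would update the latency and energy estimates in the table from observed measurements between scheduling decisions.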
Funding: The author, Dr. Arshiya Sajid Ansari, extends her appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through project number (R-2025-1706).
Abstract: IoT has emerged as a game-changing technology that connects numerous gadgets to networks for communication, processing, and real-time monitoring across diverse applications. Due to the heterogeneous nature and constrained resources of these devices, as well as the growing trend of using smart gadgets, there are privacy and security issues that are not adequately managed by conventional security measures. This review offers a thorough analysis of contemporary AI solutions designed to enhance security within IoT ecosystems. The intersection of AI technologies, including machine learning (ML) and blockchain, with IoT privacy and security is systematically examined, focusing on their efficacy in addressing core security issues. The methodology involves a detailed exploration of the existing literature and research on AI-driven privacy-preserving security mechanisms in IoT. The reviewed solutions are categorized according to their ability to tackle specific security challenges. The review highlights key advancements, evaluates their practical applications, and identifies prevailing research gaps and challenges. The findings indicate that AI solutions, particularly those leveraging ML and blockchain, offer promising enhancements to IoT privacy and security by improving threat-detection capabilities and ensuring data integrity. This paper highlights how AI technologies might strengthen IoT privacy and security, and offers suggestions for upcoming studies intended to address enduring problems and improve the robustness of IoT networks.
Abstract: The sinkhole attack is one of the most damaging threats in the Internet of Things (IoT). It deceptively attracts neighboring nodes and initiates malicious activity, often disrupting the network when combined with other attacks. This study proposes a novel approach, named NADSA, to detect and isolate sinkhole attacks. NADSA is based on the RPL protocol and consists of two detection phases. In the first phase, the minimum possible hop count between the sender and receiver is calculated and compared with the sender's reported hop count. The second phase uses the number of DIO messages to identify suspicious nodes and then applies a fuzzification process using RSSI, ETX, and distance measurements to confirm the presence of a malicious node. The proposed method is extensively simulated in highly lossy and sparse network environments with varying numbers of nodes. The results demonstrate that NADSA achieves high efficiency, with packet delivery ratios (PDRs) of 68%, 70%, and 73%; end-to-end delays (E2EDs) of 81, 72, and 60 ms; true-positive rates (TPRs) of 89%, 83%, and 80%; and false-positive rates (FPRs) of 24%, 28%, and 33%. NADSA outperforms existing methods in challenging network conditions, where traditional approaches typically degrade in effectiveness.
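The first detection phase above, comparing a reported hop count against the minimum geometrically possible one, can be sketched as follows. This is an illustration of the idea only: it assumes known node positions and a fixed radio range, which are simplifying assumptions rather than details from the paper.

```python
import math

def min_possible_hops(src, dst, radio_range):
    """Lower bound on hops: straight-line distance divided by radio range."""
    dist = math.dist(src, dst)
    return max(1, math.ceil(dist / radio_range))

def suspicious(src, dst, reported_hops, radio_range=50.0):
    """Phase-one check: a sinkhole advertising an impossibly short route
    reports fewer hops than geometry allows."""
    return reported_hops < min_possible_hops(src, dst, radio_range)

# Nodes 170 m apart with a 50 m radio range need at least 4 hops,
# so a node advertising a 2-hop route is flagged for phase two.
print(suspicious((0, 0), (170, 0), reported_hops=2))  # True
print(suspicious((0, 0), (170, 0), reported_hops=4))  # False
```

Nodes flagged here would then go through the abstract's second phase (DIO-message counts plus fuzzification over RSSI, ETX, and distance) before being declared malicious.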