The rapidly evolving cybersecurity threat landscape exposes a critical flaw in traditional educational programs where static curricula cannot adapt swiftly to novel attack vectors.This creates a significant gap betwee...The rapidly evolving cybersecurity threat landscape exposes a critical flaw in traditional educational programs where static curricula cannot adapt swiftly to novel attack vectors.This creates a significant gap between theoretical knowledge and the practical defensive capabilities needed in the field.To address this,we propose TeachSecure-CTI,a novel framework for adaptive cybersecurity curriculumgeneration that integrates real-time Cyber Threat Intelligence(CTI)with AI-driven personalization.Our framework employs a layered architecture featuring a CTI ingestion and clusteringmodule,natural language processing for semantic concept extraction,and a reinforcement learning agent for adaptive content sequencing.Bydynamically aligning learningmaterialswithboththe evolving threat environment and individual learner profiles,TeachSecure-CTI ensures content remains current,relevant,and tailored.A 12-week study with 150 students across three institutions demonstrated that the framework improves learning gains by 34%,significantly exceeding the 12%–21%reported in recent literature.The system achieved 84.8%personalization accuracy,85.9%recognition accuracy for MITRE ATT&CK tactics,and a 31%faster competency development rate compared to static curricula.These findings have implications beyond academia,extending to workforce development,cyber range training,and certification programs.By bridging the gap between dynamic threats and static educational materials,TeachSecure-CTI offers an empirically validated,scalable solution for cultivating cybersecurity professionals capable of responding to modern threats.展开更多
In light of the coronavirus disease 2019(COVID-19)outbreak caused by the novel coronavirus,companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of ...In light of the coronavirus disease 2019(COVID-19)outbreak caused by the novel coronavirus,companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion.Employees,however,have been exposed to different security risks because of working from home.Moreover,the rapid global spread of COVID-19 has increased the volume of data generated from various sources.Working from home depends mainly on cloud computing(CC)applications that help employees to efficiently accomplish their tasks.The cloud computing environment(CCE)is an unsung hero in the COVID-19 pandemic crisis.It consists of the fast-paced practices for services that reflect the trend of rapidly deployable applications for maintaining data.Despite the increase in the use of CC applications,there is an ongoing research challenge in the domains of CCE concerning data,guaranteeing security,and the availability of CC applications.This paper,to the best of our knowledge,is the first paper that thoroughly explains the impact of the COVID-19 pandemic on CCE.Additionally,this paper also highlights the security risks of working from home during the COVID-19 pandemic.展开更多
Recently,several edge deployment types,such as on-premise edge clusters,Unmanned Aerial Vehicles(UAV)-attached edge devices,telecommunication base stations installed with edge clusters,etc.,are being deployed to enabl...Recently,several edge deployment types,such as on-premise edge clusters,Unmanned Aerial Vehicles(UAV)-attached edge devices,telecommunication base stations installed with edge clusters,etc.,are being deployed to enable faster response time for latency-sensitive tasks.One fundamental problem is where and how to offload and schedule multi-dependent tasks so as to minimize their collective execution time and to achieve high resource utilization.Existing approaches randomly dispatch tasks naively to available edge nodes without considering the resource demands of tasks,inter-dependencies of tasks and edge resource availability.These approaches can result in the longer waiting time for tasks due to insufficient resource availability or dependency support,as well as provider lock-in.Therefore,we present Edge Colla,which is based on the integration of edge resources running across multi-edge deployments.Edge Colla leverages learning techniques to intelligently dispatch multidependent tasks,and a variant bin-packing optimization method to co-locate these tasks firmly on available nodes to optimally utilize them.Extensive experiments on real-world datasets from Alibaba on task dependencies show that our approach can achieve optimal performance than the baseline schemes.展开更多
This review examines human vulnerabilities in cybersecurity within Microfinance Institutions, analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak...This review examines human vulnerabilities in cybersecurity within Microfinance Institutions, analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases like IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.展开更多
The Internet of Things(IoT)is emerging as an innovative phenomenon concerned with the development of numerous vital applications.With the development of IoT devices,huge amounts of information,including users’private...The Internet of Things(IoT)is emerging as an innovative phenomenon concerned with the development of numerous vital applications.With the development of IoT devices,huge amounts of information,including users’private data,are generated.IoT systems face major security and data privacy challenges owing to their integral features such as scalability,resource constraints,and heterogeneity.These challenges are intensified by the fact that IoT technology frequently gathers and conveys complex data,creating an attractive opportunity for cyberattacks.To address these challenges,artificial intelligence(AI)techniques,such as machine learning(ML)and deep learning(DL),are utilized to build an intrusion detection system(IDS)that helps to secure IoT systems.Federated learning(FL)is a decentralized technique that can help to improve information privacy and performance by training the IDS on discrete linked devices.FL delivers an effectual tool to defend user confidentiality,mainly in the field of IoT,where IoT devices often obtain privacy-sensitive personal data.This study develops a Privacy-Enhanced Federated Learning for Intrusion Detection using the Chameleon Swarm Algorithm and Artificial Intelligence(PEFLID-CSAAI)technique.The main aim of the PEFLID-CSAAI method is to recognize the existence of attack behavior in IoT networks.First,the PEFLIDCSAAI technique involves data preprocessing using Z-score normalization to transformthe input data into a beneficial format.Then,the PEFLID-CSAAI method uses the Osprey Optimization Algorithm(OOA)for the feature selection(FS)model.For the classification of intrusion detection attacks,the Self-Attentive Variational Autoencoder(SA-VAE)technique can be exploited.Finally,the Chameleon Swarm Algorithm(CSA)is applied for the hyperparameter finetuning process that is involved in the SA-VAE model.A wide range of experiments were conducted to validate the execution of the PEFLID-CSAAI model.The simulated outcomes demonstrated that the PEFLID-CSAAI technique outperformed other recent models,highlighting its potential as a valuable tool for future applications in healthcare devices and small engineering systems.展开更多
Cyber-Physical Systems integrated with information technologies introduce vulnerabilities that extend beyond traditional cyber threats.Attackers can non-invasively manipulate sensors and spoof controllers,which in tur...Cyber-Physical Systems integrated with information technologies introduce vulnerabilities that extend beyond traditional cyber threats.Attackers can non-invasively manipulate sensors and spoof controllers,which in turn increases the autonomy of the system.Even though the focus on protecting against sensor attacks increases,there is still uncertainty about the optimal timing for attack detection.Existing systems often struggle to manage the trade-off between latency and false alarm rate,leading to inefficiencies in real-time anomaly detection.This paper presents a framework designed to monitor,predict,and control dynamic systems with a particular emphasis on detecting and adapting to changes,including anomalies such as“drift”and“attack”.The proposed algorithm integrates a Transformer-based Attention Generative Adversarial Residual model,which combines the strengths of generative adversarial networks,residual networks,and attention algorithms.The system operates in two phases:offline and online.During the offline phase,the proposed model is trained to learn complex patterns,enabling robust anomaly detection.The online phase applies a trained model,where the drift adapter adjusts the model to handle data changes,and the attack detector identifies deviations by comparing predicted and actual values.Based on the output of the attack detector,the controller makes decisions then the actuator executes suitable actions.Finally,the experimental findings show that the proposed model balances detection accuracy of 99.25%,precision of 98.84%,sensitivity of 99.10%,specificity of 98.81%,and an F1-score of 98.96%,thus provides an effective solution for dynamic and safety-critical environments.展开更多
Medical images play a crucial role in diagnosis,treatment procedures and overall healthcare.Nevertheless,they also pose substantial risks to patient confidentiality and safety.Safeguarding the confidentiality of patie...Medical images play a crucial role in diagnosis,treatment procedures and overall healthcare.Nevertheless,they also pose substantial risks to patient confidentiality and safety.Safeguarding the confidentiality of patients'data has become an urgent and practical concern.We present a novel approach for reversible data hiding for colour medical images.In a hybrid domain,we employ AlexNet,tuned with watershed transform(WST)and L-shaped fractal Tromino encryption.Our approach commences by constructing the host image's feature vector using a pre-trained AlexNet model.Next,we use the watershed transform to convert the extracted feature vector into a vector for a topographic map,which we then encrypt using an L-shaped fractal Tromino cryptosystem.We embed the secret image in the transformed image vector using a histogram-based embedding strategy to enhance payload and visual fidelity.When there are no attacks,the RDHNet exhibits robust performance,can be reversed to the original image and maintains a visually appealing stego image,with an average PSNR of 73.14 dB,an SSIM of 0.9999 and perfect values of NC=1 and BER=0 under normal conditions.The proposed RDHNet demonstrates a robust ability to withstand detrimental geometric and noise-adding attacks as well as various steganalysis methods.Furthermore,our RDHNet method initiative demonstrates efficacy in tackling contemporary confidentiality issues.展开更多
Energy burden,the inability to afford sufficient energy sources for basic household needs such as heating,cooling,cooking,and lighting,is one of the major social challenges in the U.S.While limited studies have examin...Energy burden,the inability to afford sufficient energy sources for basic household needs such as heating,cooling,cooking,and lighting,is one of the major social challenges in the U.S.While limited studies have examined these issues separately,to our knowledge,no study has empirically investigated the implication of energy burden for chronic kidney disease(CKD)within the U.S.context.This study aims to examine the association between energy burden and CKD prevalence across 500 U.S.cities by using nationally representative data sets.Utilizing propensity score matching and a random intercept analysis,we found that census tracts with high energy burden were significantly associated with a 0.195 higher chronic kidney prevalence[95%CI:0.144-0.246]compared to those with low energy burden,after adjusting key observed characteristics such as living,housing,and sociodemographic conditions of census tracts.Other risk factors contributing to increased CKD prevalence included older building age,higher percentages of nonwhite populations and older adults,lower educational levels,and lower average household incomes.The findings highlight that energy burden is not merely a financial problem but rather a social determinant of CKD health and a significant risk factor for increased CKD prevalence in U.S.urban areas.Our results indicate that state and local energy assistance programs may serve as important interventions not only for improving kidney health outcomes but also for reducing health disparities in the U.S.展开更多
The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)an...The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)and Deep Learning(DL)techniques have demonstrated promising early detection capabilities.However,their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints,high computational costs,and the costly time-intensive process of data labeling.To address these challenges,this study proposes a Federated Learning(FL)framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks.By employing Deep Neural Networks(DNNs)and decentralized model training,the approach reduces computational complexity while improving detection accuracy.The proposed model demonstrates robust performance,achieving accuracies of 94.34%,99.95%,and 87.94%on the publicly available kitsune,Bot-IoT,and UNSW-NB15 datasets,respectively.Furthermore,its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets,TON-IoT and IoT-23,using a Deep Federated Learning(DFL)framework,underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments.Experimental results demonstrate superior performance over existing methods,establishing the proposed framework as an efficient and scalable solution for IoT security.展开更多
Due to the recent explosion of big data, our society has been rapidly going through digital transformation and entering a new world with numerous eye-opening developments. These new trends impact the society and futur...Due to the recent explosion of big data, our society has been rapidly going through digital transformation and entering a new world with numerous eye-opening developments. These new trends impact the society and future jobs, and thus student careers. At the heart of this digital transformation is data science, the discipline that makes sense of big data. With many rapidly emerging digital challenges ahead of us, this article discusses perspectives on iSchools' opportunities and suggestions in data science education. We argue that iSchools should empower their students with "information computing" disciplines, which we define as the ability to solve problems and create values, information, and knowledge using tools in application domains. As specific approaches to enforcing information computing disciplines in data science education, we suggest the three foci of user-based, tool-based, and application- based. These three loci will serve to differentiate the data science education of iSchools from that of computer science or business schools. We present a layered Data Science Education Framework (DSEF) with building blocks that include the three pillars of data science (people, technology, and data), computational thinking, data-driven paradigms, and data science lifecycles. Data science courses built on the top of this framework should thus be executed with user-based, tool-based, and application-based approaches. This framework will help our students think about data science problems from the big picture perspective and foster appropriate problem-solving skills in conjunction with broad perspectives of data science lifecycles. We hope the DSEF discussed in this article will help fellow iSchools in their design of new data science curricula.展开更多
In the area of pattern recognition and machine learning,features play a key role in prediction.The famous applications of features are medical imaging,image classification,and name a few more.With the exponential grow...In the area of pattern recognition and machine learning,features play a key role in prediction.The famous applications of features are medical imaging,image classification,and name a few more.With the exponential growth of information investments in medical data repositories and health service provision,medical institutions are collecting large volumes of data.These data repositories contain details information essential to support medical diagnostic decisions and also improve patient care quality.On the other hand,this growth also made it difficult to comprehend and utilize data for various purposes.The results of imaging data can become biased because of extraneous features present in larger datasets.Feature selection gives a chance to decrease the number of components in such large datasets.Through selection techniques,ousting the unimportant features and selecting a subset of components that produces prevalent characterization precision.The correct decision to find a good attribute produces a precise grouping model,which enhances learning pace and forecast control.This paper presents a review of feature selection techniques and attributes selection measures for medical imaging.This review is meant to describe feature selection techniques in a medical domainwith their pros and cons and to signify its application in imaging data and data mining algorithms.The review reveals the shortcomings of the existing feature and attributes selection techniques to multi-sourced data.Moreover,this review provides the importance of feature selection for correct classification of medical infections.In the end,critical analysis and future directions are provided.展开更多
Purpose: The purpose of the paper is to provide a framework for addressing the disconnect between metadata and data science. Data science cannot progress without metadata research.This paper takes steps toward advanc...Purpose: The purpose of the paper is to provide a framework for addressing the disconnect between metadata and data science. Data science cannot progress without metadata research.This paper takes steps toward advancing the synergy between metadata and data science, and identifies pathways for developing a more cohesive metadata research agenda in data science. Design/methodology/approach: This paper identifies factors that challenge metadata research in the digital ecosystem, defines metadata and data science, and presents the concepts big metadata, smart metadata, and metadata capital as part of a metadata lingua franca connecting to data science. Findings: The "utilitarian nature" and "historical and traditional views" of metadata are identified as two intersecting factors that have inhibited metadata research. Big metadata, smart metadata, and metadata capital are presented as part ofa metadata linguafranca to help frame research in the data science research space. Research limitations: There are additional, intersecting factors to consider that likely inhibit metadata research, and other significant metadata concepts to explore. Practical implications: The immediate contribution of this work is that it may elicit response, critique, revision, or, more significantly, motivate research. The work presented can encourage more researchers to consider the significance of metadata as a research worthy topic within data science and the larger digital ecosystem. Originality/value: Although metadata research has not kept pace with other data science topics, there is little attention directed to this problem. This is surprising, given that metadata is essential for data science endeavors. This examination synthesizes original and prior scholarship to provide new grounding for metadata research in data science.展开更多
Mobile-Edge Computing(MEC)displaces cloud services as closely as possible to the end user.This enables the edge servers to execute the offloaded tasks that are requested by the users,which in turn decreases the energy...Mobile-Edge Computing(MEC)displaces cloud services as closely as possible to the end user.This enables the edge servers to execute the offloaded tasks that are requested by the users,which in turn decreases the energy consumption and the turnaround time delay.However,as a result of a hostile environment or in catastrophic zones with no network,it could be difficult to deploy such edge servers.Unmanned Aerial Vehicles(UAVs)can be employed in such scenarios.The edge servers mounted on these UAVs assist with task offloading.For the majority of IoT applications,the execution times of tasks are often crucial.Therefore,UAVs tend to have a limited energy supply.This study presents an approach to offload IoT user applications based on the usage of Voronoi diagrams to determine task delays and cluster IoT devices dynamically as a first step.Second,the UAV flies over each cluster to perform the offloading process.In addition,we propose a Graphics Processing Unit(GPU)-based parallelization of particle swarm optimization to balance the cluster sizes and identify the shortest path along these clusters while minimizing the UAV flying time and energy consumption.Some evaluation results are given to demonstrate the effectiveness of the presented offloading strategy.展开更多
Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared(IR)/visible(VS)images.Dissimilarities in various kind of features in these images are vital to preserve i...Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared(IR)/visible(VS)images.Dissimilarities in various kind of features in these images are vital to preserve in the single fused image.Hence,simultaneous preservation of both the aspects at the same time is a challenging task.However,most of the existing methods utilize the manual extraction of features;and manual complicated designing of fusion rules resulted in a blurry artifact in the fused image.Therefore,this study has proposed a hybrid algorithm for the integration of multi-features among two heterogeneous images.Firstly,fuzzification of two IR/VS images has been done by feeding it to the fuzzy sets to remove the uncertainty present in the background and object of interest of the image.Secondly,images have been learned by two parallel branches of the siamese convolutional neural network(CNN)to extract prominent features from the images as well as high-frequency information to produce focus maps containing source image information.Finally,the obtained focused maps which contained the detailed integrated information are directly mapped with the source image via pixelwise strategy to result in fused image.Different parameters have been used to evaluate the performance of the proposed image fusion by achieving 1.008 for mutual information(MI),0.841 for entropy(EG),0.655 for edge information(EI),0.652 for human perception(HP),and 0.980 for image structural similarity(ISS).Experimental results have shown that the proposed technique has attained the best qualitative and quantitative results using 78 publically available images in comparison to the existing discrete cosine transform(DCT),anisotropic diffusion&karhunen-loeve(ADKL),guided filter(GF),random walk(RW),principal component analysis(PCA),and convolutional neural network(CNN)methods.展开更多
Even though several advances have been made in recent years,handwritten script recognition is still a challenging task in the pattern recognition domain.This field has gained much interest lately due to its diverse ap...Even though several advances have been made in recent years,handwritten script recognition is still a challenging task in the pattern recognition domain.This field has gained much interest lately due to its diverse application potentials.Nowadays,different methods are available for automatic script recognition.Among most of the reported script recognition techniques,deep neural networks have achieved impressive results and outperformed the classical machine learning algorithms.However,the process of designing such networks right from scratch intuitively appears to incur a significant amount of trial and error,which renders them unfeasible.This approach often requires manual intervention with domain expertise which consumes substantial time and computational resources.To alleviate this shortcoming,this paper proposes a new neural architecture search approach based on meta-heuristic quantum particle swarm optimization(QPSO),which is capable of automatically evolving the meaningful convolutional neural network(CNN)topologies.The computational experiments have been conducted on eight different datasets belonging to three popular Indic scripts,namely Bangla,Devanagari,and Dogri,consisting of handwritten characters and digits.Empirically,the results imply that the proposed QPSO-CNN algorithm outperforms the classical and state-of-the-art methods with faster prediction and higher accuracy.展开更多
The main aim of this paper is to propose a new memory dependent derivative(MDD)theory which called threetemperature nonlinear generalized anisotropic micropolar-thermoelasticity.The system of governing equations of th...The main aim of this paper is to propose a new memory dependent derivative(MDD)theory which called threetemperature nonlinear generalized anisotropic micropolar-thermoelasticity.The system of governing equations of the problems associated with the proposed theory is extremely difficult or impossible to solve analytically due to nonlinearity,MDD diffusion,multi-variable nature,multi-stage processing and anisotropic properties of the considered material.Therefore,we propose a novel boundary element method(BEM)formulation for modeling and simulation of such system.The computational performance of the proposed technique has been investigated.The numerical results illustrate the effects of time delays and kernel functions on the nonlinear three-temperature and nonlinear displacement components.The numerical results also demonstrate the validity,efficiency and accuracy of the proposed methodology.The findings and solutions of this study contribute to the further development of industrial applications and devices typically include micropolar-thermoelastic materials.展开更多
Recently,there has been a notable surge of interest in scientific research regarding spectral images.The potential of these images to revolutionize the digital photography industry,like aerial photography through Unma...Recently,there has been a notable surge of interest in scientific research regarding spectral images.The potential of these images to revolutionize the digital photography industry,like aerial photography through Unmanned Aerial Vehicles(UAVs),has captured considerable attention.One encouraging aspect is their combination with machine learning and deep learning algorithms,which have demonstrated remarkable outcomes in image classification.As a result of this powerful amalgamation,the adoption of spectral images has experienced exponential growth across various domains,with agriculture being one of the prominent beneficiaries.This paper presents an extensive survey encompassing multispectral and hyperspectral images,focusing on their applications for classification challenges in diverse agricultural areas,including plants,grains,fruits,and vegetables.By meticulously examining primary studies,we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use.Additionally,our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context.The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture.Nevertheless,we also shed light on the various issues and limitations of working with spectral images.This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.展开更多
In recent years,the number of Gun-related incidents has crossed over 250,000 per year and over 85%of the existing 1 billion firearms are in civilian hands,manual monitoring has not proven effective in detecting firear...In recent years,the number of Gun-related incidents has crossed over 250,000 per year and over 85%of the existing 1 billion firearms are in civilian hands,manual monitoring has not proven effective in detecting firearms.which is why an automated weapon detection system is needed.Various automated convolutional neural networks(CNN)weapon detection systems have been proposed in the past to generate good results.However,These techniques have high computation overhead and are slow to provide real-time detection which is essential for the weapon detection system.These models have a high rate of false negatives because they often fail to detect the guns due to the low quality and visibility issues of surveillance videos.This research work aims to minimize the rate of false negatives and false positives in weapon detection while keeping the speed of detection as a key parameter.The proposed framework is based on You Only Look Once(YOLO)and Area of Interest(AOI).Initially,themodels take pre-processed frames where the background is removed by the use of the Gaussian blur algorithm.The proposed architecture will be assessed through various performance parameters such as False Negative,False Positive,precision,recall rate,and F1 score.The results of this research work make it clear that due to YOLO-v5s high recall rate and speed of detection are achieved.Speed reached 0.010 s per frame compared to the 0.17 s of the Faster R-CNN.It is promising to be used in the field of security and weapon detection.展开更多
In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine ...In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine learning-based technique.In order to increase the prediction accuracy of the reference point position on the data collected using the fingerprinting method over LoRa technology,this study proposed an optimized machine learning(ML)based algorithm.Received signal strength indicator(RSSI)data from the sensors at different positions was first gathered via an experiment through the LoRa network in a multistory round layout building.The noise factor is also taken into account,and the signal-to-noise ratio(SNR)value is recorded for every RSSI measurement.This study concludes the examination of reference point accuracy with the modified KNN method(MKNN).MKNN was created to more precisely anticipate the position of the reference point.The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity.展开更多
In this paper,a discrete Lotka-Volterra predator-prey model is proposed that considers mixed functional responses of Holling types I and III.The equilibrium points of the model are obtained,and their stability is test...In this paper,a discrete Lotka-Volterra predator-prey model is proposed that considers mixed functional responses of Holling types I and III.The equilibrium points of the model are obtained,and their stability is tested.The dynamical behavior of this model is studied according to the change of the control parameters.We find that the complex dynamical behavior extends from a stable state to chaotic attractors.Finally,the analytical results are clarified by some numerical simulations.展开更多
文摘The rapidly evolving cybersecurity threat landscape exposes a critical flaw in traditional educational programs where static curricula cannot adapt swiftly to novel attack vectors.This creates a significant gap between theoretical knowledge and the practical defensive capabilities needed in the field.To address this,we propose TeachSecure-CTI,a novel framework for adaptive cybersecurity curriculumgeneration that integrates real-time Cyber Threat Intelligence(CTI)with AI-driven personalization.Our framework employs a layered architecture featuring a CTI ingestion and clusteringmodule,natural language processing for semantic concept extraction,and a reinforcement learning agent for adaptive content sequencing.Bydynamically aligning learningmaterialswithboththe evolving threat environment and individual learner profiles,TeachSecure-CTI ensures content remains current,relevant,and tailored.A 12-week study with 150 students across three institutions demonstrated that the framework improves learning gains by 34%,significantly exceeding the 12%–21%reported in recent literature.The system achieved 84.8%personalization accuracy,85.9%recognition accuracy for MITRE ATT&CK tactics,and a 31%faster competency development rate compared to static curricula.These findings have implications beyond academia,extending to workforce development,cyber range training,and certification programs.By bridging the gap between dynamic threats and static educational materials,TeachSecure-CTI offers an empirically validated,scalable solution for cultivating cybersecurity professionals capable of responding to modern threats.
文摘In light of the coronavirus disease 2019(COVID-19)outbreak caused by the novel coronavirus,companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion.Employees,however,have been exposed to different security risks because of working from home.Moreover,the rapid global spread of COVID-19 has increased the volume of data generated from various sources.Working from home depends mainly on cloud computing(CC)applications that help employees to efficiently accomplish their tasks.The cloud computing environment(CCE)is an unsung hero in the COVID-19 pandemic crisis.It consists of the fast-paced practices for services that reflect the trend of rapidly deployable applications for maintaining data.Despite the increase in the use of CC applications,there is an ongoing research challenge in the domains of CCE concerning data,guaranteeing security,and the availability of CC applications.This paper,to the best of our knowledge,is the first paper that thoroughly explains the impact of the COVID-19 pandemic on CCE.Additionally,this paper also highlights the security risks of working from home during the COVID-19 pandemic.
基金The financial support of the National Natural Science Foundation of China under grants 61901416 and 61571401(part of the Natural Science Foundation of Henan under grant 242300420269)the Young Elite Scientists Sponsorship Program of Henan under grant 2024HYTP026the Innovative Talent of Colleges and the University of Henan Province under grant 18HASTIT021。
文摘Recently,several edge deployment types,such as on-premise edge clusters,Unmanned Aerial Vehicles(UAV)-attached edge devices,telecommunication base stations installed with edge clusters,etc.,are being deployed to enable faster response time for latency-sensitive tasks.One fundamental problem is where and how to offload and schedule multi-dependent tasks so as to minimize their collective execution time and to achieve high resource utilization.Existing approaches randomly dispatch tasks naively to available edge nodes without considering the resource demands of tasks,inter-dependencies of tasks and edge resource availability.These approaches can result in the longer waiting time for tasks due to insufficient resource availability or dependency support,as well as provider lock-in.Therefore,we present Edge Colla,which is based on the integration of edge resources running across multi-edge deployments.Edge Colla leverages learning techniques to intelligently dispatch multidependent tasks,and a variant bin-packing optimization method to co-locate these tasks firmly on available nodes to optimally utilize them.Extensive experiments on real-world datasets from Alibaba on task dependencies show that our approach can achieve optimal performance than the baseline schemes.
文摘This review examines human vulnerabilities in cybersecurity within Microfinance Institutions, analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases like IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
基金funded by the Deanship of Scientific Research at Northern Border University,Arar,Saudi Arabia,under grant number NBU-FFR-2025-451-6.
文摘The Internet of Things(IoT)is emerging as an innovative phenomenon concerned with the development of numerous vital applications.With the development of IoT devices,huge amounts of information,including users’private data,are generated.IoT systems face major security and data privacy challenges owing to their integral features such as scalability,resource constraints,and heterogeneity.These challenges are intensified by the fact that IoT technology frequently gathers and conveys complex data,creating an attractive opportunity for cyberattacks.To address these challenges,artificial intelligence(AI)techniques,such as machine learning(ML)and deep learning(DL),are utilized to build an intrusion detection system(IDS)that helps to secure IoT systems.Federated learning(FL)is a decentralized technique that can help to improve information privacy and performance by training the IDS on discrete linked devices.FL delivers an effectual tool to defend user confidentiality,mainly in the field of IoT,where IoT devices often obtain privacy-sensitive personal data.This study develops a Privacy-Enhanced Federated Learning for Intrusion Detection using the Chameleon Swarm Algorithm and Artificial Intelligence(PEFLID-CSAAI)technique.The main aim of the PEFLID-CSAAI method is to recognize the existence of attack behavior in IoT networks.First,the PEFLIDCSAAI technique involves data preprocessing using Z-score normalization to transformthe input data into a beneficial format.Then,the PEFLID-CSAAI method uses the Osprey Optimization Algorithm(OOA)for the feature selection(FS)model.For the classification of intrusion detection attacks,the Self-Attentive Variational Autoencoder(SA-VAE)technique can be exploited.Finally,the Chameleon Swarm Algorithm(CSA)is applied for the hyperparameter finetuning process that is involved in the SA-VAE model.A wide range of experiments were conducted to validate the execution of the PEFLID-CSAAI model.The simulated outcomes demonstrated that the PEFLID-CSAAI technique outperformed other recent models,highlighting its potential as a valuable tool for future applications in healthcare devices and small engineering systems.
文摘Cyber-Physical Systems integrated with information technologies introduce vulnerabilities that extend beyond traditional cyber threats.Attackers can non-invasively manipulate sensors and spoof controllers,which in turn increases the autonomy of the system.Even though the focus on protecting against sensor attacks increases,there is still uncertainty about the optimal timing for attack detection.Existing systems often struggle to manage the trade-off between latency and false alarm rate,leading to inefficiencies in real-time anomaly detection.This paper presents a framework designed to monitor,predict,and control dynamic systems with a particular emphasis on detecting and adapting to changes,including anomalies such as“drift”and“attack”.The proposed algorithm integrates a Transformer-based Attention Generative Adversarial Residual model,which combines the strengths of generative adversarial networks,residual networks,and attention algorithms.The system operates in two phases:offline and online.During the offline phase,the proposed model is trained to learn complex patterns,enabling robust anomaly detection.The online phase applies a trained model,where the drift adapter adjusts the model to handle data changes,and the attack detector identifies deviations by comparing predicted and actual values.Based on the output of the attack detector,the controller makes decisions then the actuator executes suitable actions.Finally,the experimental findings show that the proposed model balances detection accuracy of 99.25%,precision of 98.84%,sensitivity of 99.10%,specificity of 98.81%,and an F1-score of 98.96%,thus provides an effective solution for dynamic and safety-critical environments.
文摘Medical images play a crucial role in diagnosis,treatment procedures and overall healthcare.Nevertheless,they also pose substantial risks to patient confidentiality and safety.Safeguarding the confidentiality of patients'data has become an urgent and practical concern.We present a novel approach for reversible data hiding for colour medical images.In a hybrid domain,we employ AlexNet,tuned with watershed transform(WST)and L-shaped fractal Tromino encryption.Our approach commences by constructing the host image's feature vector using a pre-trained AlexNet model.Next,we use the watershed transform to convert the extracted feature vector into a vector for a topographic map,which we then encrypt using an L-shaped fractal Tromino cryptosystem.We embed the secret image in the transformed image vector using a histogram-based embedding strategy to enhance payload and visual fidelity.When there are no attacks,the RDHNet exhibits robust performance,can be reversed to the original image and maintains a visually appealing stego image,with an average PSNR of 73.14 dB,an SSIM of 0.9999 and perfect values of NC=1 and BER=0 under normal conditions.The proposed RDHNet demonstrates a robust ability to withstand detrimental geometric and noise-adding attacks as well as various steganalysis methods.Furthermore,our RDHNet method initiative demonstrates efficacy in tackling contemporary confidentiality issues.
基金supported by the American Heart Association grant(19TPA34830085PI,K.Z.)the Empire Innovation Program of the State University of New York(PI,K.Z.).
文摘Energy burden,the inability to afford sufficient energy sources for basic household needs such as heating,cooling,cooking,and lighting,is one of the major social challenges in the U.S.While limited studies have examined these issues separately,to our knowledge,no study has empirically investigated the implication of energy burden for chronic kidney disease(CKD)within the U.S.context.This study aims to examine the association between energy burden and CKD prevalence across 500 U.S.cities by using nationally representative data sets.Utilizing propensity score matching and a random intercept analysis,we found that census tracts with high energy burden were significantly associated with a 0.195 higher chronic kidney prevalence[95%CI:0.144-0.246]compared to those with low energy burden,after adjusting key observed characteristics such as living,housing,and sociodemographic conditions of census tracts.Other risk factors contributing to increased CKD prevalence included older building age,higher percentages of nonwhite populations and older adults,lower educational levels,and lower average household incomes.The findings highlight that energy burden is not merely a financial problem but rather a social determinant of CKD health and a significant risk factor for increased CKD prevalence in U.S.urban areas.Our results indicate that state and local energy assistance programs may serve as important interventions not only for improving kidney health outcomes but also for reducing health disparities in the U.S.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2025R97)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)and Deep Learning(DL)techniques have demonstrated promising early detection capabilities.However,their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints,high computational costs,and the costly time-intensive process of data labeling.To address these challenges,this study proposes a Federated Learning(FL)framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks.By employing Deep Neural Networks(DNNs)and decentralized model training,the approach reduces computational complexity while improving detection accuracy.The proposed model demonstrates robust performance,achieving accuracies of 94.34%,99.95%,and 87.94%on the publicly available kitsune,Bot-IoT,and UNSW-NB15 datasets,respectively.Furthermore,its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets,TON-IoT and IoT-23,using a Deep Federated Learning(DFL)framework,underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments.Experimental results demonstrate superior performance over existing methods,establishing the proposed framework as an efficient and scalable solution for IoT security.
文摘Due to the recent explosion of big data, our society has been rapidly going through digital transformation and entering a new world with numerous eye-opening developments. These new trends impact the society and future jobs, and thus student careers. At the heart of this digital transformation is data science, the discipline that makes sense of big data. With many rapidly emerging digital challenges ahead of us, this article discusses perspectives on iSchools' opportunities and suggestions in data science education. We argue that iSchools should empower their students with "information computing" disciplines, which we define as the ability to solve problems and create values, information, and knowledge using tools in application domains. As specific approaches to enforcing information computing disciplines in data science education, we suggest the three foci of user-based, tool-based, and application- based. These three loci will serve to differentiate the data science education of iSchools from that of computer science or business schools. We present a layered Data Science Education Framework (DSEF) with building blocks that include the three pillars of data science (people, technology, and data), computational thinking, data-driven paradigms, and data science lifecycles. Data science courses built on the top of this framework should thus be executed with user-based, tool-based, and application-based approaches. This framework will help our students think about data science problems from the big picture perspective and foster appropriate problem-solving skills in conjunction with broad perspectives of data science lifecycles. We hope the DSEF discussed in this article will help fellow iSchools in their design of new data science curricula.
文摘In the area of pattern recognition and machine learning,features play a key role in prediction.The famous applications of features are medical imaging,image classification,and name a few more.With the exponential growth of information investments in medical data repositories and health service provision,medical institutions are collecting large volumes of data.These data repositories contain details information essential to support medical diagnostic decisions and also improve patient care quality.On the other hand,this growth also made it difficult to comprehend and utilize data for various purposes.The results of imaging data can become biased because of extraneous features present in larger datasets.Feature selection gives a chance to decrease the number of components in such large datasets.Through selection techniques,ousting the unimportant features and selecting a subset of components that produces prevalent characterization precision.The correct decision to find a good attribute produces a precise grouping model,which enhances learning pace and forecast control.This paper presents a review of feature selection techniques and attributes selection measures for medical imaging.This review is meant to describe feature selection techniques in a medical domainwith their pros and cons and to signify its application in imaging data and data mining algorithms.The review reveals the shortcomings of the existing feature and attributes selection techniques to multi-sourced data.Moreover,this review provides the importance of feature selection for correct classification of medical infections.In the end,critical analysis and future directions are provided.
文摘Purpose: The purpose of the paper is to provide a framework for addressing the disconnect between metadata and data science. Data science cannot progress without metadata research.This paper takes steps toward advancing the synergy between metadata and data science, and identifies pathways for developing a more cohesive metadata research agenda in data science. Design/methodology/approach: This paper identifies factors that challenge metadata research in the digital ecosystem, defines metadata and data science, and presents the concepts big metadata, smart metadata, and metadata capital as part of a metadata lingua franca connecting to data science. Findings: The "utilitarian nature" and "historical and traditional views" of metadata are identified as two intersecting factors that have inhibited metadata research. Big metadata, smart metadata, and metadata capital are presented as part ofa metadata linguafranca to help frame research in the data science research space. Research limitations: There are additional, intersecting factors to consider that likely inhibit metadata research, and other significant metadata concepts to explore. Practical implications: The immediate contribution of this work is that it may elicit response, critique, revision, or, more significantly, motivate research. The work presented can encourage more researchers to consider the significance of metadata as a research worthy topic within data science and the larger digital ecosystem. Originality/value: Although metadata research has not kept pace with other data science topics, there is little attention directed to this problem. This is surprising, given that metadata is essential for data science endeavors. This examination synthesizes original and prior scholarship to provide new grounding for metadata research in data science.
基金funded by the University of Jeddah,Saudi Arabia,under Grant No.(UJ-20-102-DR).
文摘Mobile-Edge Computing(MEC)displaces cloud services as closely as possible to the end user.This enables the edge servers to execute the offloaded tasks that are requested by the users,which in turn decreases the energy consumption and the turnaround time delay.However,as a result of a hostile environment or in catastrophic zones with no network,it could be difficult to deploy such edge servers.Unmanned Aerial Vehicles(UAVs)can be employed in such scenarios.The edge servers mounted on these UAVs assist with task offloading.For the majority of IoT applications,the execution times of tasks are often crucial.Therefore,UAVs tend to have a limited energy supply.This study presents an approach to offload IoT user applications based on the usage of Voronoi diagrams to determine task delays and cluster IoT devices dynamically as a first step.Second,the UAV flies over each cluster to perform the offloading process.In addition,we propose a Graphics Processing Unit(GPU)-based parallelization of particle swarm optimization to balance the cluster sizes and identify the shortest path along these clusters while minimizing the UAV flying time and energy consumption.Some evaluation results are given to demonstrate the effectiveness of the presented offloading strategy.
文摘Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared(IR)/visible(VS)images.Dissimilarities in various kind of features in these images are vital to preserve in the single fused image.Hence,simultaneous preservation of both the aspects at the same time is a challenging task.However,most of the existing methods utilize the manual extraction of features;and manual complicated designing of fusion rules resulted in a blurry artifact in the fused image.Therefore,this study has proposed a hybrid algorithm for the integration of multi-features among two heterogeneous images.Firstly,fuzzification of two IR/VS images has been done by feeding it to the fuzzy sets to remove the uncertainty present in the background and object of interest of the image.Secondly,images have been learned by two parallel branches of the siamese convolutional neural network(CNN)to extract prominent features from the images as well as high-frequency information to produce focus maps containing source image information.Finally,the obtained focused maps which contained the detailed integrated information are directly mapped with the source image via pixelwise strategy to result in fused image.Different parameters have been used to evaluate the performance of the proposed image fusion by achieving 1.008 for mutual information(MI),0.841 for entropy(EG),0.655 for edge information(EI),0.652 for human perception(HP),and 0.980 for image structural similarity(ISS).Experimental results have shown that the proposed technique has attained the best qualitative and quantitative results using 78 publically available images in comparison to the existing discrete cosine transform(DCT),anisotropic diffusion&karhunen-loeve(ADKL),guided filter(GF),random walk(RW),principal component analysis(PCA),and convolutional neural network(CNN)methods.
文摘Even though several advances have been made in recent years,handwritten script recognition is still a challenging task in the pattern recognition domain.This field has gained much interest lately due to its diverse application potentials.Nowadays,different methods are available for automatic script recognition.Among most of the reported script recognition techniques,deep neural networks have achieved impressive results and outperformed the classical machine learning algorithms.However,the process of designing such networks right from scratch intuitively appears to incur a significant amount of trial and error,which renders them unfeasible.This approach often requires manual intervention with domain expertise which consumes substantial time and computational resources.To alleviate this shortcoming,this paper proposes a new neural architecture search approach based on meta-heuristic quantum particle swarm optimization(QPSO),which is capable of automatically evolving the meaningful convolutional neural network(CNN)topologies.The computational experiments have been conducted on eight different datasets belonging to three popular Indic scripts,namely Bangla,Devanagari,and Dogri,consisting of handwritten characters and digits.Empirically,the results imply that the proposed QPSO-CNN algorithm outperforms the classical and state-of-the-art methods with faster prediction and higher accuracy.
文摘The main aim of this paper is to propose a new memory dependent derivative(MDD)theory which called threetemperature nonlinear generalized anisotropic micropolar-thermoelasticity.The system of governing equations of the problems associated with the proposed theory is extremely difficult or impossible to solve analytically due to nonlinearity,MDD diffusion,multi-variable nature,multi-stage processing and anisotropic properties of the considered material.Therefore,we propose a novel boundary element method(BEM)formulation for modeling and simulation of such system.The computational performance of the proposed technique has been investigated.The numerical results illustrate the effects of time delays and kernel functions on the nonlinear three-temperature and nonlinear displacement components.The numerical results also demonstrate the validity,efficiency and accuracy of the proposed methodology.The findings and solutions of this study contribute to the further development of industrial applications and devices typically include micropolar-thermoelastic materials.
文摘Recently,there has been a notable surge of interest in scientific research regarding spectral images.The potential of these images to revolutionize the digital photography industry,like aerial photography through Unmanned Aerial Vehicles(UAVs),has captured considerable attention.One encouraging aspect is their combination with machine learning and deep learning algorithms,which have demonstrated remarkable outcomes in image classification.As a result of this powerful amalgamation,the adoption of spectral images has experienced exponential growth across various domains,with agriculture being one of the prominent beneficiaries.This paper presents an extensive survey encompassing multispectral and hyperspectral images,focusing on their applications for classification challenges in diverse agricultural areas,including plants,grains,fruits,and vegetables.By meticulously examining primary studies,we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use.Additionally,our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context.The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture.Nevertheless,we also shed light on the various issues and limitations of working with spectral images.This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
基金We deeply acknowledge Taif University for Supporting and funding this study through Taif University Researchers Supporting Project Number(TURSP-2020/115),Taif University,Taif,Saudi Arabia.
文摘In recent years,the number of Gun-related incidents has crossed over 250,000 per year and over 85%of the existing 1 billion firearms are in civilian hands,manual monitoring has not proven effective in detecting firearms.which is why an automated weapon detection system is needed.Various automated convolutional neural networks(CNN)weapon detection systems have been proposed in the past to generate good results.However,These techniques have high computation overhead and are slow to provide real-time detection which is essential for the weapon detection system.These models have a high rate of false negatives because they often fail to detect the guns due to the low quality and visibility issues of surveillance videos.This research work aims to minimize the rate of false negatives and false positives in weapon detection while keeping the speed of detection as a key parameter.The proposed framework is based on You Only Look Once(YOLO)and Area of Interest(AOI).Initially,themodels take pre-processed frames where the background is removed by the use of the Gaussian blur algorithm.The proposed architecture will be assessed through various performance parameters such as False Negative,False Positive,precision,recall rate,and F1 score.The results of this research work make it clear that due to YOLO-v5s high recall rate and speed of detection are achieved.Speed reached 0.010 s per frame compared to the 0.17 s of the Faster R-CNN.It is promising to be used in the field of security and weapon detection.
基金The research will be funded by the Multimedia University,Department of Information Technology,Persiaran Multimedia,63100,Cyberjaya,Selangor,Malaysia.
文摘In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine learning-based technique.In order to increase the prediction accuracy of the reference point position on the data collected using the fingerprinting method over LoRa technology,this study proposed an optimized machine learning(ML)based algorithm.Received signal strength indicator(RSSI)data from the sensors at different positions was first gathered via an experiment through the LoRa network in a multistory round layout building.The noise factor is also taken into account,and the signal-to-noise ratio(SNR)value is recorded for every RSSI measurement.This study concludes the examination of reference point accuracy with the modified KNN method(MKNN).MKNN was created to more precisely anticipate the position of the reference point.The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity.
基金the Deanship of Scientific Research at King Khalid University for funding this work through the Big Research Group Project under grant number(R.G.P2/16/40).
文摘In this paper,a discrete Lotka-Volterra predator-prey model is proposed that considers mixed functional responses of Holling types I and III.The equilibrium points of the model are obtained,and their stability is tested.The dynamical behavior of this model is studied according to the change of the control parameters.We find that the complex dynamical behavior extends from a stable state to chaotic attractors.Finally,the analytical results are clarified by some numerical simulations.