The increasing reliance on digital infrastructure in modern healthcare systems has introduced significant cybersecurity challenges, particularly in safeguarding sensitive patient data and maintaining the integrity of medical services. As healthcare becomes more data-driven, cyberattacks targeting these systems continue to rise, necessitating the development of robust, domain-adapted Intrusion Detection Systems (IDS). However, current IDS solutions often lack access to domain-specific datasets that reflect realistic threat scenarios in healthcare. To address this gap, this study introduces HCKDDCUP, a synthetic dataset modeled on the widely used KDDCUP benchmark, augmented with healthcare-relevant attributes such as patient data, treatments, and diagnoses to better simulate the unique conditions of clinical environments. This research applies three standard machine learning algorithms, Random Forest (RF), Decision Tree (DT), and K-Nearest Neighbors (KNN), to both the KDDCUP and HCKDDCUP datasets. The methodology includes data preprocessing, feature selection, dimensionality reduction, and comparative performance evaluation. Experimental results show that the RF model performed best, achieving 98% accuracy on KDDCUP and 99% on HCKDDCUP, highlighting its effectiveness in detecting cyber intrusions within a healthcare-specific context. This work contributes a valuable resource for future research and underscores the need for IDS development tailored to sector-specific requirements.
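The comparative pipeline described above is straightforward to reproduce. The following minimal sketch trains the three named classifiers on a KDD-style feature matrix with scikit-learn; the file name, label column, and train/test split ratio are illustrative assumptions, not details taken from the study.

```python
# Hedged sketch: comparing RF, DT, and KNN on a KDD-style dataset.
# "hckddcup.csv" and the "label" column are placeholder names.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("hckddcup.csv")                   # placeholder dataset file
X = pd.get_dummies(df.drop(columns=["label"]))     # one-hot encode categoricals
y = df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "DT": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```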
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted devices to fully encrypted ones. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three Deep Learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (Encrypted) dataset by 20.26%, showing a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
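As a concrete illustration of the sampling comparison, the sketch below applies two of the many possible resampling strategies (SMOTE oversampling and random undersampling) to an imbalanced feature matrix and reports the F1-score of a gradient boosting model. The specific samplers, model, and synthetic data are assumptions for illustration, not the eight techniques or five models evaluated in the study.

```python
# Hedged sketch: effect of resampling on an imbalanced detection task.
from collections import Counter
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("none", None), ("SMOTE", SMOTE(random_state=0)),
                      ("undersample", RandomUnderSampler(random_state=0))]:
    Xs, ys = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    clf = GradientBoostingClassifier(random_state=0).fit(Xs, ys)
    print(name, Counter(ys), round(f1_score(y_te, clf.predict(X_te)), 3))
```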
New technologies that take advantage of the emergence of massive Internet of Things (IoT) and a hyper-connected network environment have rapidly increased in recent years. These technologies are used in diverse environments, such as smart factories, digital healthcare, and smart grids, with increased security concerns. We intend to operate Security Orchestration, Automation and Response (SOAR) in various environments through new concept definitions, as the need has emerged to detect and respond automatically to rapidly increasing security incidents without the intervention of security personnel. To facilitate the understanding of the security concerns involved in this newly emerging area, we offer the definition of the Internet of Blended Environment (IoBE), in which various convergence environments are interconnected and data are analyzed automatically. We define a Blended Threat (BT) as a security threat that exploits security vulnerabilities through various attack surfaces in the IoBE. We propose a novel SOAR-CUBE architecture to respond to security incidents with minimal human intervention by automating the BT response process. The SOAR part of our architecture is used to link heterogeneous security technologies and the threat intelligence function, which collects threat data and performs a correlation analysis of the data. SOAR is operated under Collaborative Units of Blended Environment (CUBE), which facilitates dynamic exchanges of data according to the environment applied to the IoBE by distributing and deploying security technologies for each BT type and dynamically combining them according to the cyber kill chain stage, so as to minimize damage and respond efficiently to BTs.
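The CUBE idea of pairing security technologies with BT types and kill-chain stages can be pictured as a dispatch table. The sketch below is purely illustrative: the threat types, stage names, and handler functions are invented placeholders, since the paper's actual taxonomy is not reproduced here.

```python
# Hedged sketch: dynamically combining response actions per blended-threat
# type and cyber kill chain stage. All names are illustrative placeholders.
from typing import Callable

def isolate_host(event): print("isolating", event["host"])
def block_traffic(event): print("blocking traffic from", event["src"])
def rotate_credentials(event): print("rotating credentials for", event["user"])

# (threat type, kill chain stage) -> ordered response playbook
PLAYBOOKS: dict[tuple[str, str], list[Callable]] = {
    ("lateral_movement", "exploitation"): [isolate_host, rotate_credentials],
    ("data_exfiltration", "actions_on_objectives"): [block_traffic, isolate_host],
}

def respond(event: dict) -> None:
    for action in PLAYBOOKS.get((event["threat"], event["stage"]), []):
        action(event)

respond({"threat": "lateral_movement", "stage": "exploitation",
         "host": "10.0.0.5", "src": "10.0.0.9", "user": "svc-db"})
```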
With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in 'bundle' units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes updated information, allowing only the updated portions to be re-encrypted when changes occur. The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data. By utilizing this information, the method accurately identifies the modified portions and applies algorithms to selectively re-encrypt only those sections. This approach significantly enhances the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate this approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that this method provides robust protection against both passive and active attacks.
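The core idea, re-encrypting only the bundles whose plaintext changed, can be sketched as follows. This is a simplified illustration using AES-GCM with a fixed bundle size, not the paper's actual bundle format or metadata layout.

```python
# Hedged sketch: per-bundle encryption so that an update touches only
# the affected bundles. Bundle size and key handling are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BUNDLE = 4096  # bytes per bundle (assumption)

def encrypt_bundles(key: bytes, data: bytes) -> list:
    aead = AESGCM(key)
    out = []
    for i in range(0, len(data), BUNDLE):
        nonce = os.urandom(12)
        out.append((nonce, aead.encrypt(nonce, data[i:i + BUNDLE], None)))
    return out

def update_bundle(key: bytes, bundles: list, index: int, new_plain: bytes) -> None:
    """Re-encrypt only the modified bundle, leaving the rest untouched."""
    nonce = os.urandom(12)  # fresh nonce for every re-encryption
    bundles[index] = (nonce, AESGCM(key).encrypt(nonce, new_plain, None))

key = AESGCM.generate_key(bit_length=256)
bundles = encrypt_bundles(key, os.urandom(3 * BUNDLE))
update_bundle(key, bundles, 1, b"edited section")  # one bundle re-encrypted
```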
Roaming in 5G networks enables seamless global mobility but also introduces significant security risks due to legacy protocol dependencies, uneven Security Edge Protection Proxy (SEPP) deployment, and the dynamic nature of inter-Public Land Mobile Network (inter-PLMN) signaling. Traditional rule-based defenses are inadequate for protecting cloud-native 5G core networks, particularly as roaming expands into enterprise and Internet of Things (IoT) domains. This work addresses these challenges by designing a scalable 5G Standalone testbed, generating the first intrusion detection dataset specifically tailored to roaming threats, and proposing a deep learning-based intrusion detection framework for cloud-native environments. Six deep learning models, including Multilayer Perceptron (MLP), one-dimensional Convolutional Neural Network (1D CNN), Autoencoder (AE), Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM), were evaluated on the dataset using both weighted and balanced metrics to account for strong class imbalance. While all models achieved over 99% accuracy, recurrent architectures such as GRU and LSTM outperformed the others in balanced accuracy and macro-level evaluation, demonstrating superior effectiveness in detecting rare but high-impact attacks. These results confirm the importance of sequence-aware Artificial Intelligence (AI) models for securing roaming scenarios, where transient and context-dependent threats are common. The proposed framework provides a foundation for intelligent, adaptive intrusion detection in 5G and offers a path toward resilient security in Beyond 5G and 6G networks.
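A minimal version of the sequence-aware models that performed best (GRU/LSTM) is sketched below in PyTorch. Input dimensions, window length, and class count are illustrative assumptions rather than the study's configuration.

```python
# Hedged sketch: a GRU-based intrusion classifier over windows of
# signaling features. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class GRUDetector(nn.Module):
    def __init__(self, n_features=32, hidden=64, n_classes=5):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, seq_len, n_features)
        _, h = self.gru(x)           # h: (1, batch, hidden), final state
        return self.head(h[-1])      # logits per class

model = GRUDetector()
logits = model(torch.randn(8, 20, 32))   # batch of 8 windows, 20 time steps
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (8,)))
loss.backward()
```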
Data compression plays a vital role in data management and information theory by reducing redundancy. However, it lacks built-in security features such as secret keys or password-based access control, leaving sensitive data vulnerable to unauthorized access and misuse. With the exponential growth of digital data, robust security measures are essential. Data encryption, a widely used approach, ensures data confidentiality by making it unreadable and unalterable through secret key control. Despite their individual benefits, both compression and encryption require significant computational resources, and performing them separately on the same data increases complexity and processing time. Recognizing the need for integrated approaches that balance compression ratios and security levels, this research proposes an integrated data compression and encryption algorithm, named IDCE, for enhanced security and efficiency. The algorithm operates on 128-bit blocks with a 256-bit secret key. It combines Huffman coding for compression with a Tent map for encryption, and an iterative Arnold cat map further enhances its cryptographic confusion properties. Experimental analysis validates the effectiveness of the proposed algorithm, showcasing competitive performance in terms of compression ratio, security, and overall efficiency when compared to prior algorithms in the field.
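The chaotic component named in the abstract can be illustrated directly: a tent map iterates x ← μ·x for x < 0.5 and x ← μ·(1−x) otherwise, and the trajectory can be quantized into a keystream. The seed, μ, and byte quantization below are illustrative choices, not the parameters of IDCE.

```python
# Hedged sketch: deriving a keystream from a tent map trajectory.
def tent_keystream(seed: float, mu: float, n_bytes: int) -> bytes:
    x, out = seed, bytearray()
    for _ in range(n_bytes):
        x = mu * x if x < 0.5 else mu * (1.0 - x)   # tent map iteration
        out.append(int(x * 256) % 256)              # quantize state to one byte
    return bytes(out)

def xor_encrypt(data: bytes, seed: float = 0.37, mu: float = 1.99) -> bytes:
    ks = tent_keystream(seed, mu, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

ct = xor_encrypt(b"compressed payload")
assert xor_encrypt(ct) == b"compressed payload"     # XOR is its own inverse
```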
Side-channel attacks have recently progressed into software-induced attacks. In particular, a rowhammer attack exploits a characteristic of dynamic random access memory (DRAM): as cell density increases, rapid and repeated accesses to a row generate disturbance errors that affect neighboring cells, resulting in bit flips. Although a rowhammer attack is a highly sophisticated attack in which disturbance errors are deliberately induced in data bits, it has been reported to be exploitable on various platforms, such as mobile devices, web browsers, and virtual machines. Furthermore, there have been studies on bypassing the defense measures of DRAM manufacturers and similar countermeasures against rowhammer attacks. A rowhammer attack can control user access and compromise the integrity of sensitive data through attacks such as privilege escalation and alteration of encryption keys. In an attempt to mitigate rowhammer attacks, various hardware- and software-based mitigation techniques are being studied, but existing methods do not detect a rowhammer attack in advance and cause overhead or degradation of system performance. Therefore, in this study, a rowhammer attack detection technique is proposed that extracts common features of rowhammer attack files through a static analysis of rowhammer attack code.
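Static signatures of rowhammer code typically include cache-flush and fence instructions inside tight memory-access loops. The sketch below scans a disassembly listing for such indicators; it is a toy heuristic we use to illustrate the idea of static feature extraction, not the feature set proposed in the study.

```python
# Hedged sketch: counting rowhammer-associated instructions in a
# disassembly produced beforehand with `objdump -d binary > binary.asm`.
import re

INDICATORS = {
    "clflush": r"\bclflush(opt)?\b",   # evict target rows from the cache
    "mfence": r"\bmfence\b",           # order the repeated memory accesses
    "nontemporal": r"\bmovnt\w*\b",    # cache-bypassing stores
}

def rowhammer_features(asm_path: str) -> dict:
    with open(asm_path, encoding="utf-8", errors="ignore") as f:
        text = f.read().lower()
    return {name: len(re.findall(pat, text)) for name, pat in INDICATORS.items()}

# A high clflush count combined with fences is a (weak) static indicator.
print(rowhammer_features("binary.asm"))
```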
With the rapid development of quantum computers capable of realizing Shor's algorithm, existing public key-based algorithms face a significant security risk. Crystals-Kyber has been selected as the only key encapsulation mechanism (KEM) algorithm in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) competition. In this study, we present a portable and efficient implementation of the Crystals-Kyber post-quantum KEM based on WebAssembly (Wasm), a recently released portable execution framework for high-performance web applications. Until now, most Kyber implementations have been developed in native programming languages such as C and Assembly. Although there are a few previous Kyber implementations based on JavaScript for portability, their performance is significantly lower than that of implementations in native programming languages. Therefore, it is necessary to develop a portable and efficient Kyber implementation to secure web applications in the quantum computing era. Our Kyber software is based on JavaScript and Wasm to provide portability and efficiency while ensuring quantum security. Namely, the overall software is written in JavaScript, and the performance-critical parts (secure hash algorithm-3-based operations and polynomial multiplication) are written in Wasm. Furthermore, we parallelize the number theoretic transform (NTT)-based polynomial multiplication using the single instruction multiple data (SIMD) functionality available in Wasm. The three steps in the NTT-based polynomial multiplication have been parallelized with Wasm SIMD intrinsic functions. Our software outperforms the latest reference implementation of Kyber developed in JavaScript by ×4.02 (resp. ×4.32 and ×4.1), ×3.42 (resp. ×3.52 and ×3.44), and ×3.41 (resp. ×3.44 and ×3.38) in terms of key generation, encapsulation, and decapsulation on Google Chrome (resp. Firefox and Microsoft Edge). As far as we know, this is the first software implementation of Kyber with Wasm technology in the web environment.
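The performance-critical step, NTT-based multiplication in Z_q[x]/(x^n + 1), can be prototyped in a few lines. The sketch below uses q = 7681 (the round-1 Kyber modulus, which admits a complete 256-point NTT; current Kyber uses q = 3329 with an incomplete NTT) and plain Python rather than Wasm SIMD, purely to show the three-step structure of forward transforms, pointwise products, and the inverse transform.

```python
# Hedged sketch: negacyclic polynomial multiplication via the NTT,
# with q = 7681 (round-1 Kyber) so a full 256-point transform exists.
Q, N = 7681, 256

def find_root(order: int) -> int:
    """Smallest element of Z_Q* with exact power-of-two order."""
    for g in range(2, Q):
        if pow(g, order, Q) == 1 and pow(g, order // 2, Q) != 1:
            return g
    raise ValueError("no root found")

def ntt(a: list, omega: int) -> list:
    """Recursive radix-2 Cooley-Tukey transform modulo Q."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % Q)
    odd = ntt(a[1::2], omega * omega % Q)
    out, w = [0] * n, 1
    for i in range(n // 2):
        t = w * odd[i] % Q
        out[i], out[i + n // 2] = (even[i] + t) % Q, (even[i] - t) % Q
        w = w * omega % Q
    return out

def negacyclic_mul(a: list, b: list) -> list:
    """a*b mod (x^N + 1, Q) via pre/post-twisting with a 2N-th root psi."""
    psi = find_root(2 * N)
    omega, inv_n = psi * psi % Q, pow(N, Q - 2, Q)
    tw = [pow(psi, i, Q) for i in range(N)]
    A = ntt([x * t % Q for x, t in zip(a, tw)], omega)
    B = ntt([x * t % Q for x, t in zip(b, tw)], omega)
    C = ntt([x * y % Q for x, y in zip(A, B)], pow(omega, Q - 2, Q))
    inv_psi = pow(psi, Q - 2, Q)
    return [c * inv_n % Q * pow(inv_psi, i, Q) % Q for i, c in enumerate(C)]

e1 = [0] * N; e1[1] = 1          # the polynomial x
print(negacyclic_mul(e1, e1)[2]) # x * x = x^2 -> coefficient 1
eN = [0] * N; eN[N - 1] = 1      # x^(N-1)
print(negacyclic_mul(e1, eN)[0]) # x^N = -1 mod (x^N + 1) -> Q - 1
```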
Coronavirus is a potentially fatal disease that normally occurs in mammals and birds. Generally, in humans, the virus spreads through aerial droplets of any type of fluid secreted from the body of an infected person. Coronavirus is a family of viruses that is more lethal than other unpremeditated viruses. In December 2019, a new variant, i.e., a novel coronavirus (COVID-19), emerged in Wuhan province, China. Since January 23, 2020, the number of infected individuals has increased rapidly, affecting the health and economies of many countries, including Pakistan. The objective of this research is to provide a system to classify and categorize the COVID-19 outbreak in Pakistan based on data collected every day from different regions of Pakistan. This research also compares the performance of machine learning classifiers (i.e., Decision Tree (DT), Naive Bayes (NB), Support Vector Machine, and Logistic Regression) on the COVID-19 dataset collected in Pakistan. According to the experimental results, the DT and NB classifiers outperformed the other classifiers. In addition, the classified data is categorized by implementing a Bayesian Regularization Artificial Neural Network (BRANN) classifier. The results demonstrate that the BRANN classifier outperforms state-of-the-art classifiers.
Contemporary attackers, mainly motivated by financial gain, consistently devise sophisticated penetration techniques to access important information or data. The growing use of Internet of Things (IoT) technology in the contemporary convergence environment to connect to corporate networks and cloud-based applications only worsens this situation, as it facilitates multiple new attack vectors to emerge effortlessly. As such, existing intrusion detection systems suffer from performance degradation, mainly because of insufficient considerations and poorly modeled detection systems. To address this problem, we designed a blended threat detection approach, considering the possible impact and dimensionality of new attack surfaces due to the aforementioned convergence. We collectively refer to the convergence of different technology sectors as the Internet of Blended Environment. The proposed approach encompasses an ensemble of heterogeneous probabilistic autoencoders that leverages the corresponding advantages of a convolutional variational autoencoder and a long short-term memory variational autoencoder. An extensive experimental analysis conducted on the TON_IoT dataset demonstrated 96.02% detection accuracy. Furthermore, the performance of the proposed approach was compared with various single-model (autoencoder)-based network intrusion detection approaches: autoencoder, variational autoencoder, convolutional variational autoencoder, and long short-term memory variational autoencoder. The proposed model outperformed all compared models, demonstrating F1-score improvements of 4.99%, 2.25%, 1.92%, and 3.69%, respectively.
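The ensemble decision over heterogeneous autoencoders can be reduced to fusing per-model reconstruction errors. The sketch below normalizes each model's error and averages them before thresholding; the normalization, equal weighting, and percentile threshold are illustrative assumptions, and the two score arrays stand in for trained CNN-VAE and LSTM-VAE models.

```python
# Hedged sketch: fusing anomaly scores from two autoencoders.
import numpy as np

def fuse_scores(err_cnn_vae: np.ndarray, err_lstm_vae: np.ndarray,
                threshold_pct: float = 95.0) -> np.ndarray:
    """Min-max normalize each model's reconstruction error, average,
    and flag samples above a percentile threshold as intrusions."""
    def norm(e):
        return (e - e.min()) / (e.max() - e.min() + 1e-12)
    fused = (norm(err_cnn_vae) + norm(err_lstm_vae)) / 2.0
    return fused > np.percentile(fused, threshold_pct)

rng = np.random.default_rng(0)
e1 = rng.gamma(2.0, 1.0, 1000)   # stand-in for CNN-VAE reconstruction error
e2 = rng.gamma(2.0, 1.2, 1000)   # stand-in for LSTM-VAE reconstruction error
print(fuse_scores(e1, e2).sum(), "samples flagged as anomalous")
```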
Numerous industries, especially the medical industry, are likely to exhibit significant developments in the future. Ever since the announcement of the precision medicine initiative by the United States in 2015, interest in the field has considerably increased. The techniques of precision medicine are employed to provide optimal treatment and medical services to patients, in addition to the prevention and management of diseases, via the collection and analysis of big data related to their individual genetic characteristics, occupation, living environment, and dietary habits. As this involves the accumulation and utilization of sensitive information, such as patient history, DNA, and personal details, its implementation is difficult if the data are inaccurate, exposed, or forged. There is also a concern for privacy, as massive amounts of data are collected; hence, ensuring the security of information is essential. Therefore, it is necessary to develop methods of securely sharing sensitive data for the establishment of a precision medicine system. An authentication and data sharing scheme is presented in this study on the basis of an analysis of sensitive data. The proposed scheme securely shares the sensitive data of each entity in the precision medicine system according to its architecture and data flow.
Call Detail Records (CDR) are generated and stored in Mobile Networks (MNs) and contain subscribers' information about active or passive usage of the network for various communication activities. The spatio-temporal nature of CDR makes them a valuable dataset for forensic activities. Advances in technology have led to seamless communication across Multiple Mobile Networks (MMN), which poses a threat to the availability and integrity of CDR data. The present CDR implementation is capable of logging peer-to-peer communications over a single connection only, necessitating improvements in how CDR data is stored for forensic analysis. In this paper, the problem is solved by identifying and conceptually modelling six new artifacts generated by such communication activities. The newly identified artifacts are introduced into the existing CDR to capture the data required for forensic analysis during investigations involving MMN communication. Results show an improved absolute speed of 0.0058 s for the MMN-CDR to associate a suspect with an incident, which is 0.0038 s faster than the 0.0097 s taken by the existing CDR to associate a suspect with an accomplice. Thus, a novel method for forensically tracking calls over the MMN has been developed. The MMN-CDR, when forensically analyzed, reveals an increase in time efficiency over the existing CDR due to its high absolute speed. Higher accuracy and completeness percentages are also obtained.
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources across various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be evaluated against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) while implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors, Hyper-V and KVM, in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The study's findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. These findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus on how various hypervisors perform while handling cryptographic workloads.
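Benchmarks of this kind can be scripted portably inside each guest VM. The sketch below times AES encryption over increasing payload sizes with the Python cryptography package; running the same script under Hyper-V and KVM guests and comparing wall-clock times mirrors the study's setup, though the sizes and cipher configuration here are illustrative.

```python
# Hedged sketch: timing AES-256-CTR encryption for several payload sizes.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
for size_mb in (1, 10, 100):
    data = os.urandom(size_mb * 1024 * 1024)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    t0 = time.perf_counter()
    enc.update(data)
    enc.finalize()
    print(f"AES-256-CTR, {size_mb:>3} MB: {time.perf_counter() - t0:.4f} s")
```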
The COVID-19 outbreak began in December 2019 and was declared a global health emergency by the World Health Organization. The four most dominant variants are Beta, Gamma, Delta, and Omicron. After the administration of vaccine doses, an eminent decline in new cases has been observed. The COVID-19 vaccine induces neutralizing antibodies and T-cells in our bodies. However, strong variants like Delta and Omicron tend to escape the neutralizing antibodies elicited by COVID-19 vaccination. Therefore, it is indispensable to study, analyze and, most importantly, predict the response of SARS-CoV-2-derived T-cell epitopes against COVID-19 variants in vaccinated and unvaccinated persons. In this regard, machine learning can be effectively utilized for predicting the response of COVID-derived T-cell epitopes. In this study, the response of T-cell epitopes was predicted for vaccinated and unvaccinated people for the Beta, Gamma, Delta, and Omicron variants. The dataset was divided into two classes, i.e., vaccinated and unvaccinated, and the predicted response of T-cell epitopes was divided into three categories, i.e., Strong, Impaired, and Over-activated. For the aforementioned prediction purposes, a self-proposed Bayesian neural network has been designed by combining variational inference and flow normalization optimizers. Furthermore, a Hidden Markov Model has also been trained on the same dataset to compare the results of the self-proposed Bayesian neural network with this state-of-the-art statistical approach. Extensive experimentation and results demonstrate the efficacy of the proposed network in terms of accurate prediction and reduced error.
The term Internet of Things (IoT) has increased in popularity in recent years and is now used in many applications around us, such as healthcare applications, smart homes, and smart cities. The IoT is a group of smart devices equipped with sensors that can compute data and carry out actions in the environment in which they are located. These devices are connected to each other through the Internet, and the IoT has recently become supported by 5G technology owing to advantages such as its ability to provide a fast connection. Despite the efficiency of 5G-supported IoT, it is subject to many security challenges. In this paper, we conduct a comprehensive review of previous research related to IoT security requirements and security attacks.
The open nature and heterogeneous architecture of Open Radio Access Network (Open RAN) undermine the consistency of security policies and broaden the attack surface, thereby increasing the risk of security vulnerabilities. The dynamic nature of network performance and traffic patterns in Open RAN necessitates advanced detection models that can overcome the constraints of traditional techniques and adapt to evolving behaviors. This study presents a methodology for effectively detecting malicious traffic in Open RAN by utilizing an Artificial Intelligence/Machine Learning (AI/ML) framework. A hybrid Transformer-Convolutional Neural Network (Transformer-CNN) ensemble model is employed for anomaly detection. The proposed model generates final predictions through a soft-voting technique based on the predictive outputs of the two models with distinct features. This approach improves accuracy by up to 1.06% and F1-score by 1.48% compared with using a hard-voting technique to determine the final prediction. Furthermore, the proposed model achieves an average accuracy of approximately 98.3% depending on the time step, exhibiting a 1.43% increase in accuracy over single-model approaches. Unlike single-model approaches, which are prone to overfitting, the ensemble model resolves the overfitting problem by reducing the deviation in validation loss.
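The difference between the two fusion rules is compact enough to show directly: soft voting averages the class-probability vectors of the member models, while hard voting counts their argmax labels. The sketch below is a generic illustration, not the study's Transformer-CNN implementation.

```python
# Hedged sketch: soft voting vs. hard voting over two models' outputs.
import numpy as np

p_transformer = np.array([[0.10, 0.90], [0.80, 0.20]])  # per-class probabilities
p_cnn         = np.array([[0.55, 0.45], [0.60, 0.40]])

soft = (p_transformer + p_cnn) / 2.0        # average probabilities, then argmax
soft_labels = soft.argmax(axis=1)

votes = np.stack([p_transformer.argmax(axis=1), p_cnn.argmax(axis=1)])
hard_labels = np.round(votes.mean(axis=0)).astype(int)  # majority of 2 labels

print("soft:", soft_labels, "hard:", hard_labels)
# Sample 0: hard voting deadlocks at a 1-1 tie (the tiebreak lands on class 0),
# while soft voting weighs the strong 0.90 confidence and picks class 1.
```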
Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance, and optimization. Although existing FS techniques can yield high performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for the identification of appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data, a challenge that is increasingly salient in modern artificial intelligence (AI) applications. In response, this study introduces an innovative, automated statistical method termed Farea Similarity for Feature Selection (FSFS). The FSFS approach computes a similarity metric for each feature by benchmarking it against the record-wise mean, thereby identifying feature dependencies and mitigating the influence of outliers that could potentially distort evaluation outcomes. Features are subsequently ranked according to their similarity scores, with the threshold established at the average similarity score. Notably, lower FSFS values indicate higher similarity and stronger data correlations, whereas higher values suggest lower similarity. The FSFS method is designed not only to yield reliable evaluation metrics but also to reduce data complexity without compromising model performance. Comparative analyses were performed against several established techniques, including Chi-squared (CS), Correlation Coefficient (CC), Genetic Algorithm (GA), Exhaustive Approach, Greedy Stepwise Approach, Gain Ratio, and Filtered Subset Eval, using a variety of datasets such as the Experimental Dataset, Breast Cancer Wisconsin (Original), KDD CUP 1999, NSL-KDD, UNSW-NB15, and Edge-IIoT. In the absence of the FSFS method, the highest classifier accuracies observed were 60.00%, 95.13%, 97.02%, 98.17%, 95.86%, and 94.62% for the respective datasets. When the FSFS technique was integrated with data normalization, encoding, balancing, and feature importance selection processes, accuracies improved to 100.00%, 97.81%, 98.63%, 98.94%, 94.27%, and 98.46%, respectively. The FSFS method, with a computational complexity of O(fn log n), demonstrates robust scalability and is well suited to large datasets, ensuring efficient processing even when the number of features is substantial. By automatically eliminating outliers and redundant data, FSFS reduces computational overhead, resulting in faster training and improved model performance. Overall, the FSFS framework not only optimizes performance but also enhances the interpretability and explainability of data-driven models, thereby facilitating more trustworthy decision-making in AI applications.
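The abstract gives enough of the recipe to sketch: score each feature by its similarity to the record-wise mean, rank the features, and keep those whose score clears the average. The exact similarity function is not specified above, so the sketch below assumes a mean absolute deviation between each standardized feature and the record-wise mean profile; treat it as an interpretation, not the published FSFS formula.

```python
# Hedged sketch: FSFS-style selection against the record-wise mean.
import numpy as np

def fsfs_select(X: np.ndarray) -> np.ndarray:
    """Return indices of retained features. Lower score = more similar
    to the record-wise mean profile (our reading of the FSFS description)."""
    Xz = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # standardize features
    record_mean = Xz.mean(axis=1, keepdims=True)         # per-record benchmark
    scores = np.abs(Xz - record_mean).mean(axis=0)       # one score per feature
    return np.where(scores <= scores.mean())[0]          # threshold at average

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))
X = np.hstack([base + rng.normal(scale=0.1, size=(200, 4)),  # correlated block
               rng.normal(size=(200, 3))])                   # independent noise
print("kept feature indices:", fsfs_select(X))
```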
The smart home platform integrates with Internet of Things (IoT) devices, smartphones, and cloud servers, enabling seamless and convenient services. It gathers and manages extensive user data, including personal information, device operations, and patterns of user behavior. Such data plays an essential role in criminal investigations, highlighting the growing importance of specialized smart home forensics. Given the rapid advancement in smart home software and hardware technologies, many companies are introducing new devices and services that expand the market. Consequently, scalable and platform-specific forensic research is necessary to support efficient digital investigations across diverse smart home ecosystems. This study thoroughly examines the core components and structures of smart homes, proposing a generalized architecture that represents various operational environments. A three-stage smart home forensics framework is introduced: (1) analyzing application functions to infer relevant data, (2) extracting and processing data from interconnected devices, and (3) identifying data valuable for investigative purposes. The framework's applicability is validated using testbeds from the Samsung SmartThings and Xiaomi Mi Home platforms, offering practical insights for real-world forensic applications. The results demonstrate that the proposed forensic framework effectively acquires and classifies relevant digital evidence in smart home platforms, confirming its practical applicability in smart home forensic investigations.
Human activity recognition (HAR) is crucial in fields like robotics, surveillance, and healthcare, enabling systems to understand and respond to human actions. Current models often struggle with complex datasets, making accurate recognition challenging. This study proposes a quantum-integrated Convolutional Neural Network (QI-CNN) to enhance HAR performance. Traditional models demonstrate weak performance in transferring learned knowledge between diverse, complex data collections, including D3D-HOI and Sysu 3D HOI. HAR therefore requires better extraction models and techniques that address current challenges to achieve improved accuracy and scalability. The proposed model enhances HAR task performance by combining quantum computing components with classical CNN approaches. The framework begins with bilateral filter (BF) enhancement of images and then implements multi-object tracking (MOT) in conjunction with Felzenszwalb superpixel segmentation for object detection and segmentation. The watershed algorithm refines the merged superpixels to create more accurate object boundary definitions. The model combines histogram of oriented gradients (HoG) and Global Image Semantic Texture (GIST) descriptors with a new approach that extracts 23 joint keypoints by employing relative joint angles and joint proximity measures. A fuzzy optimization process then optimizes the features produced by the extraction phase. Our approach achieves 93.02% accuracy on the D3D-HOI dataset and 97.38% on the Sysu 3D HOI dataset. Averaging across all classes, the proposed model yields 93.3% precision, 92.6% recall, 92.3% F1-score, 89.1% specificity, a False Positive Rate (FPR) of 10.9%, and a mean log-loss of 0.134 on the D3D-HOI dataset, while on the Sysu 3D HOI dataset the corresponding values are 98.4% precision, 98.6% recall, 98.4% F1-score, 99.0% specificity, 1.0% FPR, and a log-loss of 0.058. These results demonstrate that the quantum-integrated CNN significantly improves feature extraction and model optimization.
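The pre-processing chain named above (bilateral filtering, Felzenszwalb superpixels, watershed refinement, HoG descriptors) maps directly onto standard OpenCV and scikit-image calls. The parameter values below are illustrative defaults rather than the tuned settings of the QI-CNN pipeline, and the GIST descriptor is omitted because it has no standard scikit-image implementation.

```python
# Hedged sketch: image pre-processing steps of a HAR pipeline.
import cv2
from skimage import data, segmentation, feature, filters

img = data.astronaut()                           # sample RGB image
smooth = cv2.bilateralFilter(img, 9, 75, 75)     # edge-preserving smoothing

# Felzenszwalb superpixels, then watershed refinement of the boundaries.
superpix = segmentation.felzenszwalb(smooth, scale=100, sigma=0.5, min_size=50)
gray = cv2.cvtColor(smooth, cv2.COLOR_RGB2GRAY)
gradient = filters.sobel(gray)                   # flood the gradient surface
refined = segmentation.watershed(gradient, markers=superpix + 1)

# HoG descriptor on the filtered frame (GIST omitted; no standard impl).
hog_vec = feature.hog(gray, orientations=9, pixels_per_cell=(16, 16),
                      cells_per_block=(2, 2))
print(superpix.max() + 1, "superpixels;", hog_vec.shape[0], "HoG dims")
```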
The Internet of Things (IoT) is an innovation that combines imagined space with the actual world on a single platform. Because of the recent rapid rise of IoT devices, there has been a lack of standards, leading to a massive increase in unprotected devices connecting to networks. Consequently, cyberattacks on IoT are becoming more common, particularly keylogging attacks, which are often enabled by security vulnerabilities in IoT networks. This research focuses on the role of transfer learning and ensemble classifiers in enhancing the detection of keylogging attacks within small, imbalanced IoT datasets. The authors propose a model that combines transfer learning with ensemble classification methods, leading to improved detection accuracy. By leveraging the BoT-IoT and keylogger_detection datasets, they facilitate the transfer of knowledge across various domains. The results reveal that the integration of transfer learning and ensemble classifiers significantly improves detection capabilities, even in scenarios with limited data availability. The proposed TRANS-ENS model showcases exceptional accuracy and a minimal false positive rate, outperforming current deep learning approaches. The primary objectives include: (i) introducing an ensemble feature selection technique to identify common features across models, (ii) creating a pre-trained deep learning model through transfer learning for the detection of keylogging attacks, and (iii) developing a transfer learning-ensemble model dedicated to keylogging detection. Experimental findings indicate that the TRANS-ENS model achieves a detection accuracy of 96.06% and a false alarm rate of 0.12%, surpassing existing models such as CNN, RNN, and LSTM.
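Objective (i), selecting features common across models, can be sketched as intersecting the top-ranked features of several tree ensembles. The models, top-k cutoff, and synthetic data below are illustrative assumptions rather than the TRANS-ENS configuration.

```python
# Hedged sketch: ensemble feature selection by intersecting top-k
# importance rankings from several base models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)

X, y = make_classification(n_samples=2000, n_features=30, n_informative=6,
                           random_state=0)
models = [RandomForestClassifier(random_state=0),
          ExtraTreesClassifier(random_state=0),
          GradientBoostingClassifier(random_state=0)]

K = 10
top_k_sets = []
for m in models:
    m.fit(X, y)
    ranked = np.argsort(m.feature_importances_)[::-1]  # most important first
    top_k_sets.append(set(ranked[:K].tolist()))

common = set.intersection(*top_k_sets)   # features every model agrees on
print("common top features:", sorted(common))
```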
基金supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU)(grant number IMSIU-DDRSP2501).
文摘The increasing reliance on digital infrastructure in modern healthcare systems has introduced significant cybersecurity challenges,particularly in safeguarding sensitive patient data and maintaining the integrity of medical services.As healthcare becomes more data-driven,cyberattacks targeting these systems continue to rise,necessitating the development of robust,domain-adapted Intrusion Detection Systems(IDS).However,current IDS solutions often lack access to domain-specific datasets that reflect realistic threat scenarios in healthcare.To address this gap,this study introduces HCKDDCUP,a synthetic dataset modeled on the widely used KDDCUP benchmark,augmented with healthcare-relevant attributes such as patient data,treatments,and diagnoses to better simulate the unique conditions of clinical environments.This research applies standard machine learning algorithms Random Forest(RF),Decision Tree(DT),and K-Nearest Neighbors(KNN)to both the KDDCUP and HCKDDCUP datasets.The methodology includes data preprocessing,feature selection,dimensionality reduction,and comparative performance evaluation.Experimental results show that the RF model performed best,achieving 98%accuracy on KDDCUP and 99%on HCKDDCUP,highlighting its effectiveness in detecting cyber intrusions within a healthcare-specific context.This work contributes a valuable resource for future research and underscores the need for IDS development tailored to sector-specific requirements.
基金supported by the Institute of Information&Communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(No.RS-2023-00235509Development of security monitoring technology based network behavior against encrypted cyber threats in ICT convergence environment).
文摘With the increasing emphasis on personal information protection,encryption through security protocols has emerged as a critical requirement in data transmission and reception processes.Nevertheless,IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices,spanning a range of devices from non-encrypted ones to fully encrypted ones.Given the limited visibility into payloads in this context,this study investigates AI-based attack detection methods that leverage encrypted traffic metadata,eliminating the need for decryption and minimizing system performance degradation—especially in light of these heterogeneous devices.Using the UNSW-NB15 and CICIoT-2023 dataset,encrypted and unencrypted traffic were categorized according to security protocol,and AI-based intrusion detection experiments were conducted for each traffic type based on metadata.To mitigate the problem of class imbalance,eight different data sampling techniques were applied.The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three Deep Learning(DL)models from various perspectives.The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic.In the UNSW-NB15 dataset,the f1-score of encrypted traffic was approximately 0.98,which is 4.3%higher than that of unencrypted traffic(approximately 0.94).In addition,analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower f1-score of roughly 0.43,indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance.Furthermore,when data sampling techniques were applied to encrypted traffic,the recall in the UNSWNB15(Encrypted)dataset improved by up to 23.0%,and in the CICIoT-2023(Encrypted)dataset by 20.26%,showing a similar level of improvement.Notably,in CICIoT-2023,f1-score and Receiver Operation Characteristic-Area Under the Curve(ROC-AUC)increased by 59.0%and 55.94%,respectively.These results suggest that data sampling can have a positive effect even in encrypted environments.However,the extent of the improvement may vary depending on data quality,model architecture,and sampling strategy.
基金This work was supported by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(No.2021R1A2C2011391)and was supported by the Ajou University research fund.
文摘New technologies that take advantage of the emergence of massive Internet of Things(IoT)and a hyper-connected network environment have rapidly increased in recent years.These technologies are used in diverse environments,such as smart factories,digital healthcare,and smart grids,with increased security concerns.We intend to operate Security Orchestration,Automation and Response(SOAR)in various environments through new concept definitions as the need to detect and respond automatically to rapidly increasing security incidents without the intervention of security personnel has emerged.To facilitate the understanding of the security concern involved in this newly emerging area,we offer the definition of Internet of Blended Environment(IoBE)where various convergence environments are interconnected and the data analyzed in automation.We define Blended Threat(BT)as a security threat that exploits security vulnerabilities through various attack surfaces in the IoBE.We propose a novel SOAR-CUBE architecture to respond to security incidents with minimal human intervention by automating the BT response process.The Security Orchestration,Automation,and Response(SOAR)part of our architecture is used to link heterogeneous security technologies and the threat intelligence function that collects threat data and performs a correlation analysis of the data.SOAR is operated under Collaborative Units of Blended Environment(CUBE)which facilitates dynamic exchanges of data according to the environment applied to the IoBE by distributing and deploying security technologies for each BT type and dynamically combining them according to the cyber kill chain stage to minimize the damage and respond efficiently to BT.
基金supported by the Institute of Information&communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(RS-2024-00399401,Development of Quantum-Safe Infrastructure Migration and Quantum Security Verification Technologies).
文摘With the rise of remote collaboration,the demand for advanced storage and collaboration tools has rapidly increased.However,traditional collaboration tools primarily rely on access control,leaving data stored on cloud servers vulnerable due to insufficient encryption.This paper introduces a novel mechanism that encrypts data in‘bundle’units,designed to meet the dual requirements of efficiency and security for frequently updated collaborative data.Each bundle includes updated information,allowing only the updated portions to be reencrypted when changes occur.The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes,such as Cipher Block Chaining(CBC)and Counter(CTR),which require decrypting and re-encrypting the entire dataset whenever updates occur.The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data.By utilizing this information,the method accurately identifies the modified portions and applies algorithms to selectively re-encrypt only those sections.This approach significantly enhances the efficiency of data updates while maintaining high performance,particularly in large-scale data environments.To validate this approach,we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied.Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed,with greater performance gains as data size increases.Additionally,our security evaluation confirms that this method provides robust protection against both passive and active attacks.
基金supported by Institute of Information&communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(RS-2024-00441484,Development of Open Roaming Technology for Private 5G Network)。
文摘Roaming in 5G networks enables seamless global mobility but also introduces significant security risks due to legacy protocol dependencies,uneven Security Edge Protection Proxy(SEPP)deployment,and the dynamic nature of inter-Public Land Mobile Network(inter-PLMN)signaling.Traditional rule-based defenses are inadequate for protecting cloud-native 5G core networks,particularly as roaming expands into enterprise and Internet of Things(IoT)domains.This work addresses these challenges by designing a scalable 5G Standalone testbed,generating the first intrusion detection dataset specifically tailored to roaming threats,and proposing a deep learning based intrusion detection framework for cloud-native environments.Six deep learning models including Multilayer Perceptron(MLP),one-dimensional Convolutional Neural Network(1D CNN),Autoencoder(AE),Recurrent Neural Network(RNN),Gated Recurrent Unit(GRU),and Long Short-Term Memory(LSTM)were evaluated on the dataset using both weighted and balanced metrics to account for strong class imbalance.While all models achieved over 99%accuracy,recurrent architectures such as GRU and LSTM outperformed others in balanced accuracy and macro-level evaluation,demonstrating superior effectiveness in detecting rare but high-impact attacks.These results confirm the importance of sequence-aware Artificial Intelligence(AI)models for securing roaming scenarios,where transient and contextdependent threats are common.The proposed framework provides a foundation for intelligent,adaptive intrusion detection in 5G and offers a path toward resilient security in Beyond 5G and 6G networks.
基金the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support(QU-APC-2025).
文摘Data compression plays a vital role in datamanagement and information theory by reducing redundancy.However,it lacks built-in security features such as secret keys or password-based access control,leaving sensitive data vulnerable to unauthorized access and misuse.With the exponential growth of digital data,robust security measures are essential.Data encryption,a widely used approach,ensures data confidentiality by making it unreadable and unalterable through secret key control.Despite their individual benefits,both require significant computational resources.Additionally,performing them separately for the same data increases complexity and processing time.Recognizing the need for integrated approaches that balance compression ratios and security levels,this research proposes an integrated data compression and encryption algorithm,named IDCE,for enhanced security and efficiency.Thealgorithmoperates on 128-bit block sizes and a 256-bit secret key length.It combines Huffman coding for compression and a Tent map for encryption.Additionally,an iterative Arnold cat map further enhances cryptographic confusion properties.Experimental analysis validates the effectiveness of the proposed algorithm,showcasing competitive performance in terms of compression ratio,security,and overall efficiency when compared to prior algorithms in the field.
基金supported by a National Research Foundation of Korea(NRF)Grant funded by the Korean government(MSIT)(No.NRF-2017R1E1A1A01075110).
文摘Side-channel attacks have recently progressed into software-induced attacks.In particular,a rowhammer attack,which exploits the characteristics of dynamic random access memory(DRAM),can quickly and continuously access the cells as the cell density of DRAM increases,thereby generating a disturbance error affecting the neighboring cells,resulting in bit flips.Although a rowhammer attack is a highly sophisticated attack in which disturbance errors are deliberately generated into data bits,it has been reported that it can be exploited on various platforms such as mobile devices,web browsers,and virtual machines.Furthermore,there have been studies on bypassing the defense measures of DRAM manufacturers and the like to respond to rowhammer attacks.A rowhammer attack can control user access and compromise the integrity of sensitive data with attacks such as a privilege escalation and an alteration of the encryption keys.In an attempt to mitigate a rowhammer attack,various hardware-and software-based mitigation techniques are being studied,but there are limitations in that the research methods do not detect the rowhammer attack in advance,causing overhead or degradation of the system performance.Therefore,in this study,a rowhammer attack detection technique is proposed by extracting common features of rowhammer attack files through a static analysis of rowhammer attack codes.
基金This work was supported by Institute of Information&communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(No.2022-0-01019,Development of eSIM security platform technology for edge devices to expand the eSIM ecosystem)This was partly supported by the MSIT(Ministry of Science and ICT)Korea,under the ITRC(Information Technology Research Center)support program(IITP-2022-RS-2022-00164800)supervised by the IITP(Institute for Information&Communications Technology Planning&Evaluation).
文摘With the rapid development of quantum computers capable of realizing Shor’s algorithm,existing public key-based algorithms face a significant security risk.Crystals-Kyber has been selected as the only key encapsulation mechanism(KEM)algorithm in the National Institute of Standards and Technology(NIST)Post-Quantum Cryptography(PQC)competition.In this study,we present a portable and efficient implementation of a Crystals-Kyber post-quantum KEM based on WebAssembly(Wasm),a recently released portable execution framework for high-performance web applications.Until now,most Kyber implementations have been developed with native programming languages such as C and Assembly.Although there are a few previous Kyber implementations based on JavaScript for portability,their performance is significantly lower than that of implementations based on native programming languages.Therefore,it is necessary to develop a portable and efficient Kyber implementation to secure web applications in the quantum computing era.Our Kyber software is based on JavaScript and Wasm to provide portability and efficiency while ensuring quantum security.Namely,the overall software is written in JavaScript,and the performance core parts(secure hash algorithm-3-based operations and polynomial multiplication)are written in Wasm.Furthermore,we parallelize the number theoretic transform(NTT)-based polynomial multiplication using single instruction multiple data(SIMD)functionality,which is available in Wasm.The three steps in the NTT-based polynomial multiplication have been parallelized with Wasm SIMD intrinsic functions.Our software outperforms the latest reference implementation of Kyber developed in JavaScript by×4.02(resp.×4.32 and×4.1),×3.42(resp.×3.52 and×3.44),and×3.41(resp.×3.44 and×3.38)in terms of key generation,encapsulation,and decapsulation on Google Chrome(resp.Firefox,and Microsoft Edge).As far as we know,this is the first software implementation of Kyber with Wasm technology in the web environment.
基金The authors are grateful to the Raytheon Chair for Systems Engineering for funding.
文摘Coronavirus is a potentially fatal disease that normally occurs in mammals and birds.Generally,in humans,the virus spreads through aerial droplets of any type of fluid secreted from the body of an infected person.Coronavirus is a family of viruses that is more lethal than other unpremeditated viruses.In December 2019,a new variant,i.e.,a novel coronavirus(COVID-19)developed in Wuhan province,China.Since January 23,2020,the number of infected individuals has increased rapidly,affecting the health and economies of many countries,including Pakistan.The objective of this research is to provide a system to classify and categorize the COVID-19 outbreak in Pakistan based on the data collected every day from different regions of Pakistan.This research also compares the performance of machine learning classifiers(i.e.,Decision Tree(DT),Naive Bayes(NB),Support Vector Machine,and Logistic Regression)on the COVID-19 dataset collected in Pakistan.According to the experimental results,DT and NB classifiers outperformed the other classifiers.In addition,the classified data is categorized by implementing a Bayesian Regularization Artificial Neural Network(BRANN)classifier.The results demonstrate that the BRANN classifier outperforms state-of-the-art classifiers.
基金This work was supported by the National Research Foundation of Korea(NRF)grant funded by the Korean government(MSIT)(No.2021R1A2C2011391)was supported by the Institute of Information&communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(No.2021-0-01806Development of security by design and security management technology in smart factory).
文摘Contemporary attackers,mainly motivated by financial gain,consistently devise sophisticated penetration techniques to access important information or data.The growing use of Internet of Things(IoT)technology in the contemporary convergence environment to connect to corporate networks and cloud-based applications only worsens this situation,as it facilitates multiple new attack vectors to emerge effortlessly.As such,existing intrusion detection systems suffer from performance degradation mainly because of insufficient considerations and poorly modeled detection systems.To address this problem,we designed a blended threat detection approach,considering the possible impact and dimensionality of new attack surfaces due to the aforementioned convergence.We collectively refer to the convergence of different technology sectors as the internet of blended environment.The proposed approach encompasses an ensemble of heterogeneous probabilistic autoencoders that leverage the corresponding advantages of a convolutional variational autoencoder and long short-term memory variational autoencoder.An extensive experimental analysis conducted on the TON_IoT dataset demonstrated 96.02%detection accuracy.Furthermore,performance of the proposed approach was compared with various single model(autoencoder)-based network intrusion detection approaches:autoencoder,variational autoencoder,convolutional variational autoencoder,and long short-term memory variational autoencoder.The proposed model outperformed all compared models,demonstrating F1-score improvements of 4.99%,2.25%,1.92%,and 3.69%,respectively.
文摘Numerous industries,especially the medical industry,are likely to exhibit significant developments in the future.Ever since the announcement of the precision medicine initiative by the United States in 2015,interest in the field has considerably increased.The techniques of precision medicine are employed to provide optimal treatment and medical services to patients,in addition to the prevention and management of diseases via the collection and analysis of big data related to their individual genetic characteristics,occupation,living environment,and dietary habits.As this involves the accumulation and utilization of sensitive information,such as patient history,DNA,and personal details,its implementation is difficult if the data are inaccurate,exposed,or forged,and there is also a concern for privacy,as massive amount of data are collected;hence,ensuring the security of information is essential.Therefore,it is necessary to develop methods of securely sharing sensitive data for the establishment of a precision medicine system.An authentication and data sharing scheme is presented in this study on the basis of an analysis of sensitive data.The proposed scheme securely shares sensitive data of each entity in the precision medicine system according to its architecture and data flow.
Abstract: Call Detail Records (CDR) are generated and stored in Mobile Networks (MNs) and contain subscribers' information about active or passive use of the network for various communication activities. The spatio-temporal nature of CDR makes them a valuable dataset for forensic activities. Advances in technology have enabled seamless communication across Multiple Mobile Networks (MMN), which poses a threat to the availability and integrity of CDR data. The present CDR implementation can only log peer-to-peer communications over a single connection, necessitating improvements in how CDR data is stored for forensic analysis. In this paper, the problem is addressed by identifying and conceptually modelling six new artifacts generated by such communication activities. The newly identified artifacts are introduced into the existing CDR so that an incident captures the data required for forensic analysis in investigations involving MMN communication. Results show an improved absolute speed of 0.0058 s for the MMN-CDR to associate a suspect with an incident, which is 0.0038 s faster than the 0.0097 s it takes the existing CDR to associate a suspect with an accomplice. Thus, a novel method for forensically tracking calls over the MMN has been developed. The MMN-CDR, when forensically analyzed, shows an increase in time efficiency over the existing CDR due to its higher absolute speed, and it also achieves higher accuracy and completeness percentages.
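For illustration only, a possible shape for an MMN-aware CDR record. The abstract does not enumerate its six artifacts, so every extra field below (transit networks, handover timestamps, terminating network) is a hypothetical example of the kind of cross-network data such artifacts might carry, not the paper's schema.

```python
# Hypothetical MMN-CDR record layout; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class MMNCDRRecord:
    caller: str
    callee: str
    start_time: str                  # ISO-8601 timestamp
    duration_s: int
    origin_network: str              # network where the call originated
    # Hypothetical multi-network artifacts:
    transit_networks: list[str] = field(default_factory=list)
    handover_timestamps: list[str] = field(default_factory=list)
    terminating_network: str = ""

record = MMNCDRRecord(
    caller="+92300XXXXXXX", callee="+92321XXXXXXX",
    start_time="2023-05-01T10:15:00Z", duration_s=240,
    origin_network="MN-A",
    transit_networks=["MN-B"],
    handover_timestamps=["2023-05-01T10:15:02Z"],
    terminating_network="MN-C",
)
print(record)
```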
Abstract: As the extensive use of cloud computing raises questions about the security of personal data stored there, cryptography is increasingly used as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The choice of hypervisor can significantly affect the performance of cryptographic operations in the cloud. An important point that must be examined carefully is that no hypervisor is completely superior in performance; each hypervisor should be evaluated against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) when implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors, Hyper-V and KVM, in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. These findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
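A minimal benchmarking sketch in the spirit of this study: timing AES encryption over several payload sizes. The paper benchmarks six ciphers on two hypervisors; this example shows only the measurement pattern, using the widely available `cryptography` package, which is an assumption rather than the paper's stated toolchain.

```python
# Time AES-256-CBC encryption over increasing payload sizes.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)

for size_mb in (1, 8, 32):
    data = os.urandom(size_mb * 1024 * 1024)   # multiple of the 16-byte block
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
    enc = cipher.encryptor()
    start = time.perf_counter()
    ciphertext = enc.update(data) + enc.finalize()
    elapsed = time.perf_counter() - start
    print(f"AES-256-CBC, {size_mb} MiB: {elapsed * 1000:.1f} ms")
```

Running the same loop under each hypervisor (and for each cipher) while recording CPU utilization would reproduce the comparison pattern the study describes.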
Funding: This paper is funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University, Research Group No. RG-21-07-05.
Abstract: The COVID-19 outbreak began in December 2019 and was declared a global health emergency by the World Health Organization. The four most dominant variants are Beta, Gamma, Delta, and Omicron. After the administration of vaccine doses, a marked decline in new cases was observed. The COVID-19 vaccine induces neutralizing antibodies and T-cells in our bodies. However, strong variants like Delta and Omicron tend to escape the neutralizing antibodies elicited by COVID-19 vaccination. It is therefore indispensable to study, analyze and, most importantly, predict the response of SARS-CoV-2-derived T-cell epitopes against COVID variants in vaccinated and unvaccinated persons. In this regard, machine learning can be effectively utilized to predict the response of COVID-derived T-cell epitopes. In this study, the response of T-cell epitopes was predicted for vaccinated and unvaccinated people for the Beta, Gamma, Delta, and Omicron variants. The dataset was divided into two classes, i.e., vaccinated and unvaccinated, and the predicted response of T-cell epitopes was divided into three categories, i.e., Strong, Impaired, and Over-activated. For this prediction task, a self-proposed Bayesian neural network was designed by combining variational inference with normalizing-flow optimizers. Furthermore, a Hidden Markov Model was also trained on the same dataset to compare the results of the self-proposed Bayesian neural network with this state-of-the-art statistical approach. Extensive experimentation and results demonstrate the efficacy of the proposed network in terms of accurate prediction and reduced error.
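A minimal mean-field variational layer in PyTorch, sketching the kind of Bayesian neural network the abstract describes; the normalizing-flow component is omitted for brevity. The layer sizes, prior, KL weight, and three-class head are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a variational-inference Bayesian layer (reparameterization trick).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Linear layer with a Gaussian posterior over its weights."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))
        self.b = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        sigma = F.softplus(self.w_rho)                    # positive std-dev
        w = self.w_mu + sigma * torch.randn_like(sigma)   # sample weights
        return F.linear(x, w, self.b)

    def kl(self):
        # Closed-form KL divergence to a standard-normal prior.
        sigma = F.softplus(self.w_rho)
        return (0.5 * (self.w_mu**2 + sigma**2 - 2 * torch.log(sigma) - 1)).sum()

# Tiny 3-class head (Strong / Impaired / Over-activated) over placeholder features:
layer = VariationalLinear(16, 3)
x, y = torch.randn(64, 16), torch.randint(0, 3, (64,))
loss = F.cross_entropy(layer(x), y) + 1e-3 * layer.kl()  # ELBO-style objective
loss.backward()
print(f"loss = {loss.item():.3f}")
```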
Abstract: The term Internet of Things (IoT) has grown in popularity in recent years and is now used in many applications around us, such as healthcare, smart homes, and smart cities. The IoT is a group of smart devices equipped with sensors that can process data and carry out actions in the environment in which they are located; these devices are connected to each other through the Internet, and the IoT has recently gained 5G support thanks to advantages such as fast connectivity. Despite the efficiency of 5G-supported IoT, it is subject to many security challenges. In this paper, we conduct a comprehensive review of previous research on IoT security requirements and security attacks.
基金supported by the Institute of Information&Communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(RS-2024-00396797,Development of core technology for intelligent O-RAN security platform)。
Abstract: The open nature and heterogeneous architecture of the Open Radio Access Network (Open RAN) undermine the consistency of security policies and broaden the attack surface, thereby increasing the risk of security vulnerabilities. The dynamic nature of network performance and traffic patterns in Open RAN necessitates advanced detection models that can overcome the constraints of traditional techniques and adapt to evolving behaviors. This study presents a methodology for effectively detecting malicious traffic in Open RAN by utilizing an Artificial Intelligence/Machine Learning (AI/ML) framework. A hybrid Transformer-Convolutional Neural Network (Transformer-CNN) ensemble model is employed for anomaly detection. The proposed model generates final predictions through a soft-voting technique based on the predictive outputs of the two models with distinct features. This approach improves accuracy by up to 1.06% and F1 score by 1.48% compared with a hard-voting technique. Furthermore, the proposed model achieves an average accuracy of approximately 98.3% depending on the time step, a 1.43% increase in accuracy over single-model approaches. Unlike single-model approaches, which are prone to overfitting, the ensemble model mitigates overfitting by reducing the deviation in validation loss.
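A small sketch of the soft-voting step described above: averaging the two models' class-probability outputs rather than combining their hard labels. The probability arrays are stand-ins for the Transformer and CNN branch outputs, and the confidence-based tie-break in the hard-voting baseline is an assumption for a two-model ensemble.

```python
# Soft voting vs. hard voting over two models' probability outputs.
import numpy as np

def soft_vote(p_transformer, p_cnn):
    """Final prediction = argmax of the averaged class probabilities."""
    return np.argmax((p_transformer + p_cnn) / 2.0, axis=1)

def hard_vote(p_transformer, p_cnn):
    """Hard labels; disagreements broken by the higher-confidence model."""
    a, b = p_transformer.argmax(1), p_cnn.argmax(1)
    conf = p_transformer.max(1) >= p_cnn.max(1)
    return np.where(a == b, a, np.where(conf, a, b))

p1 = np.array([[0.50, 0.45, 0.05]])    # Transformer branch favors class 0
p2 = np.array([[0.00, 0.45, 0.55]])    # CNN branch favors class 2
print("soft:", soft_vote(p1, p2))      # -> class 1, which neither ranked first
print("hard:", hard_vote(p1, p2))      # -> class 2, the more confident model
```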
Abstract: Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance, and optimization. Although existing FS techniques can yield high performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for identifying appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data, a challenge that is increasingly salient in modern artificial intelligence (AI) applications. In response, this study introduces an innovative, automated statistical method termed Farea Similarity for Feature Selection (FSFS). The FSFS approach computes a similarity metric for each feature by benchmarking it against the record-wise mean, thereby identifying feature dependencies and mitigating the influence of outliers that could distort evaluation outcomes. Features are subsequently ranked by their similarity scores, with the threshold set at the average similarity score. Notably, lower FSFS values indicate higher similarity and stronger data correlations, whereas higher values indicate lower similarity. The FSFS method is designed not only to yield reliable evaluation metrics but also to reduce data complexity without compromising model performance. Comparative analyses were performed against several established techniques, including Chi-squared (CS), Correlation Coefficient (CC), Genetic Algorithm (GA), the Exhaustive Approach, the Greedy Stepwise Approach, Gain Ratio, and Filtered Subset Eval, using a variety of datasets: an experimental dataset, Breast Cancer Wisconsin (Original), KDD CUP 1999, NSL-KDD, UNSW-NB15, and Edge-IIoT. Without the FSFS method, the highest classifier accuracies observed were 60.00%, 95.13%, 97.02%, 98.17%, 95.86%, and 94.62% for the respective datasets. When the FSFS technique was integrated with data normalization, encoding, balancing, and feature importance selection, accuracies improved to 100.00%, 97.81%, 98.63%, 98.94%, 94.27%, and 98.46%, respectively. The FSFS method, with a computational complexity of O(fn log n), demonstrates robust scalability and is well suited to large datasets, ensuring efficient processing even when the number of features is substantial. By automatically eliminating outliers and redundant data, FSFS reduces computational overhead, resulting in faster training and improved model performance. Overall, the FSFS framework not only optimizes performance but also enhances the interpretability and explainability of data-driven models, thereby facilitating more trustworthy decision-making in AI applications.
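A hedged sketch of the FSFS idea as the abstract states it: score each feature against the record-wise mean and threshold at the average score. The abstract does not give the exact similarity formula, so the mean absolute deviation used here is an assumption, as is the choice to keep low-scoring (more similar) features; it also presumes features are on comparable scales (e.g., normalized).

```python
# Plausible reading of the FSFS scoring-and-threshold scheme.
import numpy as np

def fsfs_select(X):
    """Return indices of features whose average deviation from the
    record-wise mean is at most the mean deviation across all features."""
    record_mean = X.mean(axis=1, keepdims=True)    # mean over features, per record
    scores = np.abs(X - record_mean).mean(axis=0)  # one score per feature
    threshold = scores.mean()                      # threshold = average score
    return np.where(scores <= threshold)[0], scores

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
X[:, 3] += rng.normal(scale=5.0, size=200)         # an outlier-heavy feature
selected, scores = fsfs_select(X)
print("selected features:", selected)              # feature 3 scores high, is dropped
```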
Abstract: The smart home platform integrates with Internet of Things (IoT) devices, smartphones, and cloud servers, enabling seamless and convenient services. It gathers and manages extensive user data, including personal information, device operations, and patterns of user behavior. Such data plays an essential role in criminal investigations, highlighting the growing importance of specialized smart home forensics. Given the rapid advancement of smart home software and hardware technologies, many companies are introducing new devices and services that expand the market. Consequently, scalable and platform-specific forensic research is necessary to support efficient digital investigations across diverse smart home ecosystems. This study thoroughly examines the core components and structures of smart homes, proposing a generalized architecture that represents various operational environments. A three-stage smart home forensics framework is introduced: (1) analyzing application functions to infer relevant data, (2) extracting and processing data from interconnected devices, and (3) identifying data valuable for investigative purposes. The framework's applicability is validated using testbeds from the Samsung SmartThings and Xiaomi Mi Home platforms, offering practical insights for real-world forensic applications. The results demonstrate that the proposed forensic framework effectively acquires and classifies relevant digital evidence on smart home platforms, confirming its practical applicability in smart home forensic investigations.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2025R410),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
Abstract: Human activity recognition (HAR) is crucial in fields like robotics, surveillance, and healthcare, enabling systems to understand and respond to human actions. Current models often struggle with complex datasets, making accurate recognition challenging. This study proposes a quantum-integrated Convolutional Neural Network (QI-CNN) to enhance HAR performance. Traditional models transfer learned knowledge poorly between diverse, complex data collections such as D3D-HOI and Sysu 3D HOI; HAR therefore requires better extraction models and techniques that address these challenges to achieve improved accuracy and scalability. The model aims to enhance HAR performance by combining quantum computing components with classical CNN approaches. The framework begins by enhancing images with a bilateral filter (BF) and then applies multi-object tracking (MOT) together with Felzenszwalb superpixel segmentation for object detection and segmentation. The watershed algorithm refines the merged superpixels to produce more accurate object boundaries. The model combines Histogram of Oriented Gradients (HoG) and Global Image Semantic Texture (GIST) descriptors with a new approach that extracts 23 joint keypoints using relative joint angles and joint proximity measures. A fuzzy optimization process then refines the features produced in the extraction phase. Our approach achieves 93.02% accuracy on the D3D-HOI dataset and 97.38% on the Sysu 3D HOI dataset. Averaged across all classes, the proposed model yields 93.3% precision, 92.6% recall, 92.3% F1-score, 89.1% specificity, a False Positive Rate (FPR) of 10.9%, and a mean log-loss of 0.134 on the D3D-HOI dataset, while on the Sysu 3D HOI dataset the corresponding values are 98.4% precision, 98.6% recall, 98.4% F1-score, 99.0% specificity, 1.0% FPR, and a log-loss of 0.058. These results demonstrate that the quantum-integrated CNN significantly improves feature extraction and model optimization.
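One fragment of the described pipeline, sketched with scikit-image: bilateral filtering followed by HoG descriptor extraction. The full method (superpixels, watershed, GIST, 23-joint keypoints, fuzzy optimization, quantum layers) is beyond this example, and the filter and HoG parameters are illustrative rather than the paper's.

```python
# Bilateral filtering + HoG extraction on a placeholder grayscale frame.
import numpy as np
from skimage.restoration import denoise_bilateral
from skimage.feature import hog

rng = np.random.default_rng(0)
frame = rng.random((128, 128))                      # placeholder grayscale frame

smoothed = denoise_bilateral(frame, sigma_color=0.1, sigma_spatial=3)
features = hog(smoothed, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))
print("HoG feature vector length:", features.shape[0])
```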
基金the Deanship of Graduate Studies and Scientific Research at Najran University for supporting the research project through the Group Research,with the project code NU/GP/SERC/13/712。
Abstract: The Internet of Things (IoT) is an innovation that combines imagined space with the actual world on a single platform. Because of the recent rapid rise of IoT devices, there has been a lack of standards, leading to a massive increase in unprotected devices connecting to networks. Consequently, cyberattacks on IoT are becoming more common, particularly keylogging attacks, which are often enabled by security vulnerabilities in IoT networks. This research focuses on the role of transfer learning and ensemble classifiers in enhancing the detection of keylogging attacks within small, imbalanced IoT datasets. The authors propose a model that combines transfer learning with ensemble classification methods, leading to improved detection accuracy. By leveraging the BoT-IoT and keylogger_detection datasets, they facilitate the transfer of knowledge across domains. The results reveal that the integration of transfer learning and ensemble classifiers significantly improves detection capabilities, even in scenarios with limited data availability. The proposed TRANS-ENS model showcases exceptional accuracy and a minimal false positive rate, outperforming current deep learning approaches. The primary objectives include: (i) introducing an ensemble feature selection technique to identify common features across models, (ii) creating a pre-trained deep learning model through transfer learning for the detection of keylogging attacks, and (iii) developing a transfer learning-ensemble model dedicated to keylogging detection. Experimental findings indicate that the TRANS-ENS model achieves a detection accuracy of 96.06% and a false alarm rate of 0.12%, surpassing existing models such as CNN, RNN, and LSTM.
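A hedged sketch of the transfer-learning-plus-ensemble pattern the abstract describes: features produced by a pre-trained network (stubbed here with random arrays) are fed to an ensemble of classifiers. The actual TRANS-ENS architecture, its feature selection, and the BoT-IoT features are not reproduced; everything below is a structural illustration.

```python
# Ensemble classification over transferred deep features (stubbed).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
deep_features = rng.normal(size=(800, 64))   # stand-in for pre-trained-network features
labels = (deep_features[:, :4].sum(axis=1) > 0).astype(int)  # keylogging yes/no

X_tr, X_te, y_tr, y_te = train_test_split(deep_features, labels,
                                          test_size=0.25, random_state=0)
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(f"accuracy = {accuracy_score(y_te, ensemble.predict(X_te)):.3f}")
```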