Funding: Funded by Universiti Teknologi Malaysia under the UTM RA ICONIC Grant (Q.J130000.4351.09G61).
Abstract: Advanced Persistent Threats (APTs) represent one of the most complex and dangerous categories of cyber-attacks, characterised by their stealthy behaviour, long-term persistence, and ability to bypass traditional detection systems. The complexity of real-world network data poses significant challenges for detection. Machine learning models have shown promise in detecting APTs; however, their performance often suffers when trained on large datasets with redundant or irrelevant features. This study presents a novel hybrid feature selection method designed to improve APT detection by reducing dimensionality while preserving the informative characteristics of the data. It combines Mutual Information (MI), Symmetric Uncertainty (SU), and Minimum Redundancy Maximum Relevance (mRMR) to enhance feature selection. MI and SU assess feature relevance, while mRMR maximises relevance and minimises redundancy, ensuring that the most impactful features are prioritised. This method addresses redundancy among selected features, improving the overall efficiency and effectiveness of the detection model. Experiments on a real-world APT dataset were conducted to evaluate the proposed method. Multiple classifiers, including Random Forest, Support Vector Machine (SVM), Gradient Boosting, and Neural Networks, were used to assess classification performance. The results demonstrate that the proposed feature selection method significantly enhances detection accuracy compared to baseline models trained on the full feature set. The Random Forest algorithm achieved the highest performance, with near-perfect accuracy, precision, recall, and F1 scores (99.97%). The proposed adaptive thresholding algorithm within the selection method allows each classifier to benefit from a reduced and optimised feature space, resulting in improved training and predictive performance. This research offers a scalable and classifier-agnostic solution for dimensionality reduction in cybersecurity applications.
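A minimal sketch of how MI/SU relevance scores could feed an mRMR-style greedy search, using scikit-learn's mutual-information utilities on discretised features. The equal MI/SU weighting, bin count, and fixed number of selected features k are illustrative assumptions; the paper's adaptive per-classifier thresholding would replace the fixed k.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def symmetric_uncertainty(x, y):
    # SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)); for discrete arrays, H(X) = I(X; X)
    mi = mutual_info_score(x, y)
    hx, hy = mutual_info_score(x, x), mutual_info_score(y, y)
    return 2.0 * mi / (hx + hy) if hx + hy > 0 else 0.0

def hybrid_select(X, y, k=20, n_bins=10):
    """Greedy mRMR search over combined MI/SU relevance (illustrative sketch)."""
    # Discretise continuous features so entropy-based measures are well defined
    Xd = np.stack([np.digitize(c, np.histogram_bin_edges(c, bins=n_bins)[1:-1])
                   for c in X.T], axis=1)
    mi_rel = mutual_info_classif(X, y)                     # MI relevance
    su_rel = np.array([symmetric_uncertainty(Xd[:, j], y)  # SU relevance
                       for j in range(X.shape[1])])
    relevance = (0.5 * mi_rel / (mi_rel.max() + 1e-12)
                 + 0.5 * su_rel / (su_rel.max() + 1e-12))

    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        # mRMR step: relevance minus mean redundancy against selected features
        scores = [relevance[j] - np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                          for s in selected])
                  for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected
```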
Funding: Researchers Supporting Project Number (RSP2024R206), King Saud University, Riyadh, Saudi Arabia.
Abstract: The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks. In addition, IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources. Given the unpredictable nature of network technologies and diverse intrusion methods, conventional machine-learning approaches seem to lack efficiency. Across numerous research domains, deep learning techniques have demonstrated their capability to precisely detect anomalies. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. Firstly, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality and obtain a significant data representation by calculating the reconstruction error. Secondly, the Convolutional Neural Network (CNN) technique is employed to create a binary classification approach. The proposed SAE-CNN approach is validated using the Bot-IoT dataset. The proposed model exceeds the performance of the existing deep learning approaches in the literature, with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 score of 99.9%, False Positive Rate (FPR) of 0.0003, and True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicated that SAE-CNN performs better.
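A compact Keras sketch of the SAE-CNN idea: a dense sparse autoencoder with an L1 activity penalty, followed by a small 1D CNN over the learned code. Layer sizes, the sparsity weight, and the code dimension are illustrative assumptions, not the paper's exact architecture.

```python
from tensorflow.keras import layers, models, regularizers

def build_sae_cnn(n_features, code_dim=32):
    # Sparse autoencoder: the L1 activity penalty pushes the code towards
    # sparsity; the MSE loss is the reconstruction error used to learn it
    inp = layers.Input(shape=(n_features,))
    code = layers.Dense(code_dim, activation="relu",
                        activity_regularizer=regularizers.l1(1e-5))(inp)
    recon = layers.Dense(n_features, activation="linear")(code)
    sae = models.Model(inp, recon)
    sae.compile(optimizer="adam", loss="mse")
    encoder = models.Model(inp, code)

    # 1D CNN binary classifier over the compressed representation
    clf = models.Sequential([
        layers.Input(shape=(code_dim, 1)),
        layers.Conv1D(16, 3, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])
    clf.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=["accuracy"])
    return sae, encoder, clf
```

In use, the autoencoder would first be fitted to reconstruct the traffic features (sae.fit(X, X)), after which encoder.predict(X), reshaped to (samples, code_dim, 1), trains the CNN classifier.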
Funding: Supported by the Deanship of Scientific Research at Northern Border University through Research Group No. RG-NBU-2022-1724.
Abstract: In an era marked by escalating cybersecurity threats, our study addresses the challenge of malware variant detection, a significant concern for a multitude of sectors including petroleum and mining organizations. This paper presents an innovative Application Programming Interface (API)-based hybrid model designed to enhance the detection performance of malware variants. This model integrates eXtreme Gradient Boosting (XGBoost) and an Artificial Neural Network (ANN) classifier, offering a potent response to the sophisticated evasion and obfuscation techniques frequently deployed by malware authors. The model's design capitalizes on the benefits of both static and dynamic analysis to extract API-based features, providing a holistic and comprehensive view of malware behavior. From these features, we construct two XGBoost predictors, each of which contributes a valuable perspective on the malicious activities under scrutiny. The outputs of these predictors, interpreted as malicious scores, are then fed into an ANN-based classifier, which processes this data to derive a final decision. The strength of the proposed model lies in its capacity to leverage behavioral and signature-based features and, most importantly, in its ability to extract and analyze the hidden relations between these two types of features. The efficacy of our proposed API-based hybrid model is evident in its performance metrics. It outperformed other models in our tests, achieving an impressive accuracy of 95% and an F-measure of 93%. This significantly improved the detection performance of malware variants, underscoring the value and potential of our approach in the challenging field of cybersecurity.
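A sketch of the two-predictor stacking scheme: one XGBoost model per feature view, with the resulting malicious scores feeding an ANN meta-classifier. Hyperparameters and the function name are assumptions; the paper's exact configuration is not specified in the abstract.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier

def fit_hybrid(X_static, X_dynamic, y):
    """Two XGBoost predictors stacked into an ANN meta-classifier (sketch)."""
    xgb_static = XGBClassifier(n_estimators=200, eval_metric="logloss")
    xgb_dynamic = XGBClassifier(n_estimators=200, eval_metric="logloss")
    xgb_static.fit(X_static, y)     # signature-/static-analysis view
    xgb_dynamic.fit(X_dynamic, y)   # behavioural/dynamic-analysis view

    # Per-sample malicious scores from each view become the ANN's inputs,
    # letting it learn relations between the static and dynamic perspectives
    scores = np.column_stack([
        xgb_static.predict_proba(X_static)[:, 1],
        xgb_dynamic.predict_proba(X_dynamic)[:, 1],
    ])
    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    ann.fit(scores, y)
    return xgb_static, xgb_dynamic, ann
```

In practice, the meta-classifier would be trained on out-of-fold scores (e.g., via cross_val_predict) rather than in-sample probabilities, to avoid the ANN overfitting to scores the base models have already memorised.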
Funding: This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/292), Taif University, Taif, Saudi Arabia.
Abstract: On-path caching is the prominent module in Content-Centric Networking (CCN), equipped with the capability to handle the demands of future networks such as the Internet of Things (IoT) and vehicular networks. The main focus of the CCN caching module is data dissemination within the network. Most of the existing in-network caching strategies in CCN store the content at the maximum number of routers along the downloading path. Consequently, content redundancy in the network increases significantly, whereas the cache hit ratio and network performance decrease due to the unnecessary utilization of limited cache storage. Moreover, content redundancy adversely affects the cache resources, hit ratio, latency, bandwidth utilization, and server load. We propose an in-network caching placement strategy named Coupling Parameters to Optimize Content Placement (COCP) to address content redundancy and its associated problems. The novelty of the technique lies in its capability to minimize content redundancy by creating a balanced cache space along the routing path, considering request rate, distance, and available cache space. The proposed approach minimizes content redundancy and content dissemination within the network by using appropriate locations, while increasing the cache hit ratio and network performance. COCP is implemented in the Icarus simulator to evaluate its performance in terms of cache hit ratio, path stretch, latency, and link load. Extensive experiments have been conducted to evaluate the proposed COCP strategy against the existing state-of-the-art techniques, namely Leave Copy Everywhere (LCE), Leave Copy Down (LCD), ProbCache, Cache Less for More (CL4M), and opt-Cache. The results obtained with different cache sizes and popularities show that the proposed caching strategy can achieve up to 91.46% more cache hits, 19.71% reduced latency, 35.43% improved path stretch, and 38.14% decreased link load. These results confirm that the proposed COCP strategy has the capability to handle the demands of future networks such as the Internet of Things (IoT) and vehicular networks.
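To make the placement idea concrete, here is a toy sketch of choosing a single on-path router by coupling the three parameters the abstract names. The linear scoring function, the alpha/beta/gamma weights, and the field names are illustrative assumptions, not the paper's coupling formula.

```python
def cocp_place(path_routers, alpha=1.0, beta=1.0, gamma=1.0):
    """Pick one on-path router to cache the content (illustrative only).

    Each router dict carries: request_rate (observed interest rate), distance
    (hops from the consumer), and free_cache (fraction of cache still free).
    """
    def score(r):
        # Favour routers that see many requests, sit close to consumers, and
        # still have space - rather than caching at every router as LCE does
        return (alpha * r["request_rate"]
                - beta * r["distance"]
                + gamma * r["free_cache"])
    return max(path_routers, key=score)

path = [
    {"name": "edge", "request_rate": 0.9, "distance": 1, "free_cache": 0.2},
    {"name": "mid",  "request_rate": 0.5, "distance": 3, "free_cache": 0.7},
    {"name": "core", "request_rate": 0.3, "distance": 5, "free_cache": 0.9},
]
print(cocp_place(path)["name"])  # -> "edge" with these illustrative values
```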
Funding: The authors thank Universiti Teknologi Malaysia for funding this research work through Project Number Q.J130000.2409.08G77.
Abstract: The Internet of Medical Things (IoMT) emerges with the vision of the Wireless Body Sensor Network (WBSN) to improve health monitoring systems and has an enormous impact on the healthcare system for recognizing levels of risk/severity factors (premature diagnosis, treatment, and supervision of chronic disease, i.e., cancer) via wearable/electronic health sensors, i.e., the wireless endoscopic capsule. AI-assisted endoscopy plays a very significant role in the detection of gastric cancer. The Convolutional Neural Network (CNN) has been widely used to diagnose gastric cancer based on various feature extraction models, consequently limiting the identification and categorization performance in terms of the cancerous stages and grades associated with each type of gastric cancer. This paper proposes an optimized AI-based approach to diagnose and assess the risk factor of gastric cancer based on its type, stage, and grade in endoscopic images for smart healthcare applications. The proposed method comprises five phases: image pre-processing, Four-Dimensional (4D) image conversion, image segmentation, K-Nearest Neighbour (K-NN) classification, and multi-grading and staging of image intensities. Moreover, the performance of the proposed method was evaluated on two different datasets consisting of color and black-and-white endoscopic images. The simulation results verified that the proposed approach is capable of perceiving gastric cancer with 88.09% sensitivity, 95.77% specificity, and 96.55% overall accuracy, respectively.
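As a rough sketch of the classification phase, the snippet below runs K-NN over a simple per-image intensity descriptor. The histogram feature, k=5, and the helper name are assumptions standing in for the paper's pre-processing, 4D conversion, and segmentation phases, which the abstract does not detail.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def intensity_histogram(img, n_bins=16):
    """Reduce a greyscale endoscopic frame to a normalised intensity histogram."""
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

# K-NN classifier over per-image intensity features (illustrative stand-in)
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
# X = np.stack([intensity_histogram(im) for im in images]); model.fit(X, labels)
```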
Abstract: Stroke and cerebral haemorrhage are the second leading causes of death in the world after ischaemic heart disease. In this work, a dataset containing medical, physiological and environmental tests for stroke was used to evaluate the efficacy of machine learning, deep learning and a hybrid technique combining the two on a Magnetic Resonance Imaging (MRI) dataset for cerebral haemorrhage. In the first dataset (medical records), two features, namely diabetes and obesity, were created on the basis of the values of the corresponding features. The t-Distributed Stochastic Neighbour Embedding algorithm was applied to represent the high-dimensional dataset in a low-dimensional data space. Meanwhile, the Recursive Feature Elimination (RFE) algorithm was applied to rank the features according to priority and their correlation to the target feature and to remove the unimportant features. The features are fed into various classification algorithms, namely Support Vector Machine (SVM), K Nearest Neighbours (KNN), Decision Tree, Random Forest, and Multilayer Perceptron. All algorithms achieved superior results. The Random Forest algorithm achieved the best performance amongst the algorithms; it reached an overall accuracy of 99%. This algorithm classified stroke cases with Precision, Recall and F1 score of 98%, 100% and 99%, respectively. In the second dataset, the MRI image dataset was evaluated using the AlexNet model and the AlexNet+SVM hybrid technique. The hybrid AlexNet+SVM model performed better than the AlexNet model; it reached accuracy, sensitivity, specificity and Area Under the Curve (AUC) of 99.9%, 100%, 99.80% and 99.86%, respectively.
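The RFE-then-classify step on the medical-records dataset can be expressed as a scikit-learn pipeline; a minimal sketch, where the tree counts and the number of retained features are illustrative assumptions rather than the paper's settings.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# RFE ranks features with a Random Forest and recursively drops the least
# important ones before the final classifier is fitted
pipe = Pipeline([
    ("rfe", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                n_features_to_select=10)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
# scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
```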
Funding: This research work was supported by the University Malaysia Sabah, Malaysia.
Abstract: University timetabling problems are a yearly challenging task, faced repeatedly each semester. The problems are considered non-deterministic polynomial (NP) combinatorial optimization problems (COP), which means that they can be solved through optimization algorithms to produce the desired optimal timetable. Several techniques have been used to solve university timetabling problems, and most of them use optimization techniques. This paper provides a comprehensive review of the most recent studies dealing with the concepts, methodologies, optimization, benchmarks, and open issues of university timetabling problems. The review starts by presenting the essence of university timetabling as an NP-COP; defining and clarifying the two classes of university timetabling, University Course Timetabling and University Examination Timetabling; illustrating the algorithms adopted for solving such problems; elaborating the university timetabling constraints to be considered in achieving the optimal timetable; and explaining how to analyze and measure the performance of the optimization algorithms by demonstrating the commonly used benchmark datasets for evaluation. It is noted that meta-heuristic methodologies are widely used in the literature. Additionally, multi-objective optimization has recently been increasingly used in solving this problem and can identify robust university timetabling solutions. Finally, trends and future directions in university timetabling problems are provided, and the challenges and possibilities for future research are explored. This paper provides useful information for students, researchers, and specialists interested in this area of research.
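Metaheuristics for timetabling commonly search over candidate assignments by minimising a weighted-penalty objective that separates hard constraints (which must hold) from soft constraints (preferences). A minimal sketch, where the constraint functions and weights are illustrative assumptions:

```python
def timetable_cost(assignment, hard_constraints, soft_constraints,
                   hard_weight=1000):
    """Weighted-penalty objective for a candidate timetable (sketch).

    `assignment` maps each event to a (room, timeslot) pair; each constraint
    function returns its number of violations on that assignment.
    """
    hard = sum(c(assignment) for c in hard_constraints)
    soft = sum(w * c(assignment) for w, c in soft_constraints)
    # A feasible timetable has hard == 0; the metaheuristic minimises this cost
    return hard_weight * hard + soft
```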
Funding: The authors thank UTM and the Deanship of Scientific Research at King Khalid University for funding this work through Grant No. R.J130000.7709.4J561 and the Large Groups Project under Grant Number RGP.2/111/43.
Abstract: Software-Defined Networks (SDN) introduced better network management by decoupling the control and data planes. However, communication reliability is a desired property in computer networks. Frequent communication link failures degrade network performance, and service disruptions are likely to occur. Emerging network applications, such as delay-sensitive applications, suffer packet loss with higher Round Trip Time (RTT). Several failure recovery schemes have been proposed to address link failure recovery issues in SDN. However, these schemes have various weaknesses and may not always guarantee service availability. Communication paths differ in their roles: some paths are critical because of their higher frequency of usage, while other paths frequently share links between primary and backup. Rerouting the affected flows after a failure occurs without investigating the path roles can lead to post-recovery congestion, with packet loss and reduced system throughput. There is therefore a lack of studies incorporating path criticality and residual path capacity to reroute the affected flows in case of link failure. This paper proposes Reliable Failure Restoration with Congestion Awareness for SDN to select a reliable backup path that decreases packet loss and RTT and increases network throughput while minimizing post-recovery congestion. The affected flows are redirected through a path with minimal risk of failure, while Bayesian probability is used to predict post-recovery congestion; the path with the minimal score on both criteria is chosen. The simulation results show improved throughput (by 45%), reduced packet losses (by 87%), and lowered RTT (by 89%) compared to benchmark works.
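A toy sketch of the backup-path selection idea: score each candidate by its historical failure risk and a Bayesian-updated congestion probability, rewarding residual capacity. The field names, the additive scoring, and the simple two-hypothesis Bayesian update are assumptions, not the paper's exact model.

```python
def select_backup_path(paths):
    """Choose the backup path minimising failure risk and predicted
    post-recovery congestion (illustrative only).

    Each path dict carries: fail_prob (historical link-failure likelihood),
    residual_capacity (0..1), and the probabilities needed for a Bayesian
    update on observed high-load evidence.
    """
    def congestion_posterior(p):
        # Bayes' rule: P(congested | high load) =
        #   P(high load | congested) * P(congested) / P(high load)
        prior = p["p_congest_prior"]
        evidence = (p["p_load_given_congest"] * prior
                    + p["p_load_given_ok"] * (1.0 - prior))
        return p["p_load_given_congest"] * prior / evidence if evidence else 0.0

    # Lower is better: unreliable or congestion-prone paths are penalised,
    # spare capacity is rewarded
    return min(paths, key=lambda p: (p["fail_prob"]
                                     + congestion_posterior(p)
                                     - p["residual_capacity"]))
```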