The integration of technologies like artificial intelligence, 6G, and vehicular ad-hoc networks holds great potential to meet the communication demands of the Internet of Vehicles and drive the advancement of vehicle applications. However, these advancements also generate a surge in data processing requirements, necessitating the offloading of vehicular tasks to edge servers due to the limited computational capacity of vehicles. Despite recent advances, the robustness and scalability of existing approaches with respect to the number of vehicles, edge servers, and their resources, as well as privacy, remain a concern. In this paper, a lightweight offloading strategy that leverages ubiquitous connectivity through the Space-Air-Ground Integrated Vehicular Network architecture while ensuring privacy preservation is proposed. The Internet of Vehicles (IoV) environment is first modeled as a graph, with vehicles and base stations as nodes and their communication links as edges. Secondly, vehicular applications are offloaded to suitable servers based on latency using an attention-based heterogeneous graph neural network (HetGNN) algorithm. Subsequently, a differential privacy stochastic gradient descent (DP-SGD) training mechanism is employed to preserve the privacy of vehicles during offloading inference. Finally, the simulation results demonstrate that the proposed HetGNN method performs well, with an inference time of 0.321 s, which is 42.68%, 63.93%, 30.22%, and 76.04% less than that of baseline methods such as Deep Deterministic Policy Gradient, Deep Q-Learning, Deep Neural Network, and Genetic Algorithm, respectively.
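As background for the privacy mechanism mentioned above, the following is a minimal sketch of a DP-SGD update (per-example gradient clipping plus Gaussian noise), using a plain logistic model in place of the paper's HetGNN; the function name and hyperparameters (clip_norm, noise_multiplier) are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of a DP-SGD update: a simple logistic model stands in for
# the paper's HetGNN; all names and hyperparameters are illustrative only.
import numpy as np

def dp_sgd_step(weights, x_batch, y_batch, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One differentially private SGD step with per-example clipping and Gaussian noise."""
    per_example_grads = []
    for x, y in zip(x_batch, y_batch):
        pred = 1.0 / (1.0 + np.exp(-x @ weights))      # logistic prediction
        grad = (pred - y) * x                           # per-example gradient
        norm = np.linalg.norm(grad)
        grad = grad / max(1.0, norm / clip_norm)        # clip to bound sensitivity
        per_example_grads.append(grad)
    summed = np.sum(per_example_grads, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=weights.shape)
    noisy_mean = (summed + noise) / len(x_batch)        # noisy averaged gradient
    return weights - lr * noisy_mean

# Example: 32 vehicles' feature vectors of dimension 8, binary offloading labels.
rng = np.random.default_rng(0)
w = np.zeros(8)
X, y = rng.normal(size=(32, 8)), rng.integers(0, 2, size=32)
w = dp_sgd_step(w, X, y)
```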
In the context of the rapid iteration of information technology, the Internet of Things (IoT) has established itself as a pivotal hub connecting the digital world and the physical world. Wireless Sensor Networks (WSNs), deeply embedded in the perception layer architecture of the IoT, play a crucial role as "tactile nerve endings." A vast number of micro sensor nodes are widely distributed in monitoring areas according to preset deployment strategies, continuously and accurately perceiving and collecting real-time data on environmental parameters such as temperature, humidity, light intensity, air pressure, and pollutant concentration. These data are transmitted to the IoT cloud platform through stable and reliable communication links, forming a massive and detailed basic data resource pool. By using cutting-edge big data processing algorithms, machine learning models, and artificial intelligence analysis tools, in-depth mining and intelligent analysis of these multi-source heterogeneous data are conducted to generate high-value-added decision-making bases. This precisely empowers multiple fields, including agriculture, medical and health care, smart home, environmental science, and industrial manufacturing, driving intelligent transformation and catalyzing society to move towards a new stage of high-quality development. This paper comprehensively analyzes the technical cores of the IoT and WSNs, systematically sorts out the advanced key technologies of WSNs and the evolution of their strategic significance in the IoT system, deeply explores the innovative application scenarios and practical effects of the two in specific vertical fields, and looks forward to technological evolution trends. It provides a detailed and highly practical guiding reference for researchers, technical engineers, and industrial decision-makers.
The increasing demand for radio-authorized applications in the 6G era necessitates enhanced monitoring and management of radio resources, particularly for precise control over the electromagnetic environment. The radio map serves as a crucial tool for describing the signal strength distribution within the current electromagnetic environment. However, most existing algorithms rely on sparse measurements of radio strength and disregard the impact of building information. In this paper, we propose a spectrum cartography (SC) algorithm that eliminates the need for sparse ground-based radio strength measurements by utilizing a satellite network to collect data on buildings and transmitters. Our algorithm leverages the Pix2Pix Generative Adversarial Network (GAN) to construct accurate radio maps from transmitter information within real geographical environments. Finally, simulation results demonstrate that our algorithm achieves superior accuracy compared to previously proposed methods.
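To make the image-to-image formulation concrete, here is a minimal sketch, under assumed input shapes, of a Pix2Pix-style encoder-decoder generator that maps a building mask and a transmitter location map to a predicted radio map; it is a toy stand-in, not the authors' network or training setup (the adversarial discriminator and losses are omitted).

```python
# Illustrative sketch: a small encoder-decoder generator mapping a 2-channel
# input (building mask, transmitter location map) to a 1-channel radio map.
# Layer sizes and the 64x64 grid are assumptions for demonstration.
import torch
import torch.nn as nn

class TinyRadioMapGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Forward pass on a dummy 64x64 scene: channel 0 = building footprint,
# channel 1 = one-hot transmitter position.
gen = TinyRadioMapGenerator()
scene = torch.zeros(1, 2, 64, 64)
scene[0, 1, 32, 32] = 1.0
radio_map = gen(scene)          # predicted signal-strength map in [-1, 1]
print(radio_map.shape)          # torch.Size([1, 1, 64, 64])
```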
Marburg virus disease (MVD) is a highly fatal illness, with a case fatality rate of up to 88%, though this rate can be significantly reduced with prompt and effective patient care. The disease was first identified in 1967 during concurrent outbreaks in Marburg and Frankfurt, Germany, and in Belgrade, Serbia, linked to laboratory use of African green monkeys imported from Uganda. Subsequent outbreaks and isolated cases have been reported in various African countries, including Angola, the Democratic Republic of the Congo, Equatorial Guinea, Ghana, Guinea, Kenya, Rwanda, South Africa (in an individual with recent travel to Zimbabwe), Tanzania, and Uganda. Initial human MVD infections typically occur due to prolonged exposure to mines or caves inhabited by Rousettus aegyptiacus fruit bats, the natural hosts of the virus.
The Industrial Internet combines the industrial system with Internet connectivity to build a new manufacturing and service system covering the entire industry chain and value chain. Its highly heterogeneous network structure and diversified application requirements call for the application of network slicing technology. Guaranteeing robust network slicing is essential for the Industrial Internet, but it faces the challenge of complex slice topologies caused by the intricate interaction relationships among the Network Functions (NFs) composing a slice. Existing works have not addressed the strengthening problem of industrial network slicing with regard to its complex network properties. To this end, we study this issue by intelligently selecting a subset of the most valuable NFs at minimum cost to satisfy the strengthening requirements. State-of-the-art AlphaGo-series algorithms and advanced graph neural network technology are combined to build the solution. Simulation results demonstrate the superior performance of our scheme compared to the benchmark schemes.
The Industrial Internet of Things (IIoT) is a pervasive network of interlinked smart devices that provide a variety of intelligent computing services in industrial environments. Many IIoT nodes handle confidential data (such as medical, transportation, and military data) and are reachable targets for hostile intruders due to their openness and varied structure. Intrusion Detection Systems (IDS) based on Machine Learning (ML) and Deep Learning (DL) techniques have received significant attention. However, existing ML- and DL-based IDS still face a number of obstacles. For instance, existing DL approaches require a substantial quantity of data for effective performance, which is not feasible on low-power and low-memory devices, and imbalanced or scarce data can lead to poor performance. This paper proposes a self-attention convolutional neural network (SACNN) architecture for the detection of malicious activity in IIoT networks, together with a feature extraction method that selects the most significant features. The proposed architecture has a self-attention layer to calculate the input attention and convolutional neural network (CNN) layers to process the attended features for prediction. The performance of the proposed SACNN architecture has been evaluated on the Edge-IIoTset and X-IIoTID datasets, which encompass the behaviours of contemporary IIoT communication protocols, the operations of state-of-the-art devices, various attack types, and diverse attack scenarios.
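The sketch below illustrates, under assumed layer sizes, feature count, and class count, how a self-attention layer can feed CNN layers for flow classification in the spirit of the described SACNN; it is not the paper's exact architecture or configuration.

```python
# Hedged sketch of a self-attention + CNN classifier; 64 features and 2 classes
# are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

class SACNNSketch(nn.Module):
    def __init__(self, n_features=64, embed_dim=16, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(1, embed_dim)                       # lift each scalar feature
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(embed_dim, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, n_features)
        tokens = self.embed(x.unsqueeze(-1))   # (batch, n_features, embed_dim)
        attended, _ = self.attn(tokens, tokens, tokens)
        feats = self.cnn(attended.transpose(1, 2)).squeeze(-1)     # (batch, 32)
        return self.head(feats)

logits = SACNNSketch()(torch.randn(8, 64))     # 8 flows, 64 extracted features each
print(logits.shape)                            # torch.Size([8, 2])
```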
The Internet of Vehicles (IoV) is a new system that enables individual vehicles to connect with nearby vehicles, people, transportation infrastructure, and networks, thereby realizing a more intelligent and efficient transportation system. The movement of vehicles and the three-dimensional (3D) nature of the road network give the topological structure of IoV high space and time complexity. Network modeling and structure recognition for 3D roads can benefit the description of topological changes in IoV. This paper proposes a 3D general road model based on discrete road points obtained from GIS. First, the constraints imposed by 3D roads on moving vehicles are analyzed. Then the effects of road curvature radius (Ra), longitudinal slope (Slo), and length (Len) on speed and acceleration are studied. Finally, a general 3D road network model based on road section features is established. This paper also presents intersection and road section recognition methods based on the structural features of the 3D road network model and the road features. Real GIS data from a specific region of Beijing is adopted to create the simulation scenario, and the simulation results validate the general 3D road network model and the recognition method. This work therefore contributes to the field of intelligent transportation by providing a comprehensive approach to modeling the 3D road network and its topological changes, supporting efficient traffic flow and improved road safety.
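As a simple illustration of how such section features constrain motion (not the paper's model), the snippet below bounds cornering speed by the curvature radius via the standard lateral-friction relation and reduces the available longitudinal acceleration on a grade; the friction coefficient and acceleration cap are assumed values.

```python
# Illustrative calculation of how road-section features constrain vehicle motion:
# a curve-speed limit from curvature radius via v_max = sqrt(mu * g * Ra) and a
# grade-reduced acceleration bound a_max - g * sin(slope). mu and a_max are assumed.
import math

def section_limits(Ra_m, Slo_rad, mu=0.7, a_max=3.0, g=9.81):
    """Return (max speed in m/s on the curve, max longitudinal acceleration uphill)."""
    v_max = math.sqrt(mu * g * Ra_m)           # lateral friction limits cornering speed
    a_uphill = a_max - g * math.sin(Slo_rad)   # gravity component along the grade
    return v_max, a_uphill

# Example: a 120 m radius curve on a 5% (~2.86 deg) grade.
print(section_limits(Ra_m=120.0, Slo_rad=math.atan(0.05)))
```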
The intersection of the Industrial Internet of Things (IIoT) and artificial intelligence (AI) has garnered ever-increasing attention and research interest. Nevertheless, the dilemma between the strictly resource-constrained nature of IIoT devices and the extensive resource demands of AI has not yet been fully addressed by a comprehensive solution. Building on the lightweight constructive neural network (LightGCNet) for developing fast learner models for IIoT, a convex geometric constructive neural network with a low-complexity control strategy, namely ConGCNet, is proposed in this article via convex optimization and matrix theory; it enhances the convergence rate and reduces computational consumption in comparison with LightGCNet. Firstly, a low-complexity control strategy is proposed to reduce the computational consumption of the hidden-parameter training process. Secondly, a novel output-weight evaluation method based on convex optimization is proposed to guarantee the convergence rate. Finally, the universal approximation property of ConGCNet is proved under the low-complexity control strategy and the convex output-weight evaluation method. Simulation results on four benchmark datasets and a real-world ore grinding process demonstrate that ConGCNet effectively reduces computational consumption in the modelling process and improves the model's convergence rate.
Mobile and Internet network coverage plays an important role in digital transformation and the exploitation of new services. The evolution of mobile networks from the first generation (1G) to the fifth generation (5G) is still a long process. 2G networks introduced the messaging service, which complements the already operational voice service. 2G technology rapidly progressed to the third generation (3G), incorporating multimedia data transmission techniques. It then progressed to the fourth generation (4G) and LTE (Long Term Evolution), increasing the transmission speed to improve on 3G. Currently, developed countries have already moved to 5G. In developing countries such as Burundi, a member of the East African Community (EAC), more than 80% of connections are on 2G technologies, 40% on the 3G network, and 25% on the 4G network, while 5G is not yet available and its rollout remains an ongoing process. The objective of this article is to analyze the coverage of 2G, 3G, and 4G networks in Burundi. This analysis will make it possible to identify coverage deficits in order to reduce the digital divide between connected urban areas and remote rural areas. Furthermore, this analysis will draw the attention of decision-makers to the need to deploy networks and extend coverage to allow the population to access mobile and Internet services and thus enable the digitalization of the population. Finally, this article shows the level of coverage, the digital divide, and an overview of the deployment of base stations (BTS) throughout the country to promote the transformation and digital inclusion of services.
The Internet of Things (IoT) is integral to modern infrastructure, enabling connectivity among a wide range of devices from home automation to industrial control systems. With the exponential increase in data generated by these interconnected devices, robust anomaly detection mechanisms are essential. Anomaly detection in this dynamic environment necessitates methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns. This paper presents a novel approach utilizing generative adversarial networks (GANs) for anomaly detection in IoT systems. However, optimizing GANs involves tuning hyper-parameters such as the learning rate, batch size, and optimization algorithm, which can be challenging due to the non-convex nature of GAN loss functions. To address this, we propose a five-dimensional Gray Wolf Optimizer (5DGWO) to optimize GAN hyper-parameters. The 5DGWO introduces two new types of wolves: gamma (γ) for improved exploitation and convergence, and theta (θ) for enhanced exploration and escaping local minima. The proposed system framework comprises four key stages: 1) preprocessing, 2) generative model training, 3) autoencoder (AE) training, and 4) predictive model training. The generative models assist AE training, and the final predictive models (including a convolutional neural network (CNN), deep belief network (DBN), recurrent neural network (RNN), random forest (RF), and extreme gradient boosting (XGBoost)) are trained using the generated data and AE-encoded features. We evaluated the system on three benchmark datasets: NSL-KDD, UNSW-NB15, and IoT-23. Experiments conducted on these diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives. The 5DGWO-GAN-CNNAE variant exhibits superior performance across various metrics, including accuracy, recall, precision, root mean square error (RMSE), and convergence trend. It achieved the lowest RMSE values on the NSL-KDD, UNSW-NB15, and IoT-23 datasets, at 0.24, 1.10, and 0.09, respectively, and attained the highest accuracy, ranging from 94% to 100%. These results suggest a promising direction for future IoT security frameworks, offering a scalable and efficient solution to safeguard against evolving cyber threats.
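For orientation, the following sketch implements the classic Gray Wolf Optimizer update (alpha/beta/delta wolves) on a toy three-dimensional hyper-parameter search; it is the standard GWO, not the proposed 5DGWO with gamma and theta wolves, and the objective function is a stand-in for a real validation loss.

```python
# Sketch of the standard Gray Wolf Optimizer (GWO) applied to a toy 3-D
# hyper-parameter search (learning rate, batch size, dropout). Not the
# paper's 5DGWO extension; the objective is a synthetic stand-in.
import numpy as np

def gwo(objective, bounds, n_wolves=10, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]    # three leaders
        a = 2.0 - 2.0 * t / n_iters                             # exploration -> exploitation
        for i in range(n_wolves):
            new_pos = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)
    best = min(wolves, key=objective)
    return best, objective(best)

# Toy objective: pretend validation loss as a function of (lr, batch, dropout).
obj = lambda w: (w[0] - 0.01) ** 2 + (w[1] - 64) ** 2 / 1e4 + (w[2] - 0.3) ** 2
print(gwo(obj, np.array([[1e-4, 0.1], [16, 256], [0.0, 0.9]])))
```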
Smart edge computing (SEC) is a novel computing paradigm that can transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. As a core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and dynamically optimize computation offloading decisions. IoT applications that require significant resources can benefit from SEC, which offers better coverage. However, because access is constantly changing and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) scenarios. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which integrates blockchain, edge computing, and Artificial Intelligence (AI) into 5G HNs, is proposed in this paper. Within this framework, a dual-schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
The Internet of Things (IoT) ecosystem faces growing security challenges: it is projected to reach 76.88 billion devices by 2025 and a $1.4 trillion market value by 2027, operating in distributed networks with resource limitations and diverse system architectures. Conventional intrusion detection systems (IDS) face scalability and trust-related issues, while blockchain-based solutions are limited by low transaction throughput (Bitcoin: 7 TPS (Transactions Per Second); Ethereum: 15-30 TPS) and high latency. This research introduces MBID, a Multi-Tier Blockchain Intrusion Detection System with AI-enhanced detection, which addresses these problems in very large IoT networks. The MBID system uses a four-tier architecture comprising device, edge, fog, and cloud layers with blockchain implementations, Physics-Informed Neural Networks (PINNs) for edge-based anomaly detection, and a dual consensus mechanism that combines Honesty-based Distributed Proof-of-Authority (HDPoA) and Delegated Proof of Stake (DPoS). The system achieves scalability and efficiency through the combination of dynamic sharding and InterPlanetary File System (IPFS) integration. Experimental evaluations demonstrate exceptional performance: a detection accuracy of 99.84%, an ultra-low false positive rate of 0.01% with a false negative rate of 0.15%, and a near-instantaneous edge detection latency of 0.40 ms. The system achieved an aggregate throughput of 214.57 TPS in a 3-shard configuration, providing a clear, evidence-based path for horizontal scaling to support millions of devices. The proposed architecture represents a significant advancement in blockchain-based security for IoT networks, effectively balancing the trade-offs between scalability, security, and decentralization.
Efficient resource management within Internet of Things (IoT) environments remains a pressing challenge due to the increasing number of devices and their diverse functionalities. This study introduces a neural network-based model that uses Long Short-Term Memory (LSTM) to optimize resource allocation under dynamically changing conditions. Designed to monitor the workload on individual IoT nodes, the model incorporates long-term data dependencies, enabling adaptive resource distribution in real time. The training process uses Min-Max normalization and grid search for hyperparameter tuning, ensuring high resource utilization and consistent performance. The simulation results demonstrate the effectiveness of the proposed method, which outperforms state-of-the-art approaches including Dynamic and Efficient Enhanced Load-Balancing (DEELB), Optimized Scheduling and Collaborative Active Resource-management (OSCAR), Convolutional Neural Network with Monarch Butterfly Optimization (CNN-MBO), and Autonomic Workload Prediction and Resource Allocation for Fog (AWPR-FOG). For example, in scenarios with low system utilization, the model achieved a resource utilization efficiency of 95% while maintaining a latency of just 15 ms, significantly exceeding the performance of the comparative methods.
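A minimal sketch of the described pipeline, under assumed window length, hidden size, and synthetic data: Min-Max normalization of a node's workload series followed by an LSTM that predicts the next-step load. The hyperparameters here are placeholders rather than the grid-searched values from the paper.

```python
# Min-Max normalization + LSTM next-step workload prediction on a synthetic
# per-node load series; shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

def min_max(x):
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-8), (lo, hi)

class LoadForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])         # predict the step after the window

# Synthetic CPU-load series, windowed into (samples, 24, 1) with a 1-step target.
series = torch.sin(torch.linspace(0, 20, 500)) * 40 + 50
norm, _ = min_max(series)
windows = norm.unfold(0, 25, 1)                        # 24 inputs + 1 target per row
x, y = windows[:, :24].unsqueeze(-1), windows[:, 24:]
model = LoadForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(x), y)             # one training step
loss.backward()
opt.step()
print(float(loss))
```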
In mobile computing environments, most IoT devices connected to networks experience variable error rates and possess limited bandwidth. The conventional approach of retransmitting information lost during transmission, commonly used in data transmission protocols, increases transmission delay and consumes excessive bandwidth. To overcome this issue, forward error correction techniques such as Random Linear Network Coding (RLNC) can be used in data transmission. The primary shortcoming of RLNC-based methodologies is that they sustain a fixed coding ratio throughout data transmission, leading to notable bandwidth usage and transmission delay under dynamic network conditions. Therefore, this study proposes a new block-based RLNC strategy, Adjustable RLNC (ARLNC), which dynamically adjusts the coding ratio and transmission window at runtime based on the network error rate estimated from receiver feedback. The calculations in this approach are performed over a Galois field of order 256. We assessed ARLNC's performance under various error models, including Gilbert-Elliott, exponential, and constant-rate models, and compared it with standard RLNC. The results show that dynamically adjusting the coding ratio and transmission window size based on network conditions significantly enhances network throughput and reduces total transmission delay in most scenarios. In contrast to the conventional RLNC method with a fixed coding ratio, the presented approach achieves up to a 73% decrease in transmission delay and a fourfold increase in throughput. However, ARLNC generally incurs higher computational costs than standard RLNC in dynamic computational environments, while excelling in high-performance networks.
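To illustrate the feedback-driven adaptation described above (not ARLNC's exact rule), the snippet below updates a loss-rate estimate from receiver feedback and sizes the number of GF(256)-coded packets per generation accordingly; the margin policy and window bound are assumptions.

```python
# Hedged illustration of feedback-driven redundancy adjustment: from the
# receiver-estimated loss rate, choose how many coded packets to send for a
# generation of k source packets. Margin and window bound are assumed values.
import math

def coded_packets_needed(k, est_loss_rate, margin=0.05, max_window=128):
    """Number of random linear combinations to transmit so that, in expectation,
    at least k of them survive the channel (plus a small safety margin)."""
    survive = max(1e-3, 1.0 - est_loss_rate)
    n = math.ceil(k / survive * (1.0 + margin))
    return min(n, max_window)

def update_loss_estimate(prev, acked, sent, alpha=0.3):
    """Exponentially weighted estimate of the loss rate from receiver feedback."""
    observed = 1.0 - acked / sent
    return (1 - alpha) * prev + alpha * observed

loss = 0.10
for acked, sent in [(28, 32), (20, 32), (30, 32)]:      # feedback rounds
    loss = update_loss_estimate(loss, acked, sent)
    print(round(loss, 3), coded_packets_needed(k=32, est_loss_rate=loss))
```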