With the deep integration of edge computing, 5G and Artificial Intelligence of Things (AIoT) technologies, the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios. Traditional federated learning (FL) algorithms face significant limitations in practical applications due to client drift, model bias, and resource constraints under non-independent and identically distributed (Non-IID) data, as well as the computational overhead and utility loss caused by privacy-preserving techniques. To address these issues, this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method (FedEPC). The method introduces a dual-round client selection mechanism to optimize training. First, the Sparsity-based Privacy-preserving Representation Extraction Module (SPRE) and the Adaptive Isomorphic Devices Clustering Module (AIDC) cluster clients based on privacy-sensitive features. Second, the Context-aware In-cluster Client Selection Module (CICS) dynamically selects representative devices for training, ensuring that heterogeneous data distributions are fully represented. By conducting federated training within clusters and aggregating personalized models, FedEPC effectively mitigates the weight divergence caused by data heterogeneity and reduces the impact of client drift and straggler issues. Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared with FedAvg and existing clustering FL methods. While ensuring privacy security, FedEPC provides an efficient and robust solution for FL on resource-constrained devices in sensing-computing fusion scenarios, offering both theoretical value and engineering practicality.
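As an illustration of the cluster-then-aggregate idea, the sketch below runs one communication round of in-cluster client selection followed by per-cluster FedAvg-style aggregation. The cluster assignments, random selection fraction, and `local_update` callback are placeholders for illustration, not the paper's SPRE/AIDC/CICS modules.

```python
import random
import numpy as np

def fedavg(updates, sizes):
    """Sample-size-weighted average of flat parameter vectors (FedAvg)."""
    total = float(sum(sizes))
    return sum(w * (n / total) for w, n in zip(updates, sizes))

def clustered_round(client_params, client_sizes, clusters, local_update, frac=0.3):
    """One round of cluster-wise training: pick a fraction of clients inside
    each cluster, run local updates, and aggregate one model per cluster."""
    cluster_models = {}
    for cid, members in clusters.items():
        chosen = random.sample(members, max(1, int(frac * len(members))))
        updates = [local_update(client_params[i]) for i in chosen]
        sizes = [client_sizes[i] for i in chosen]
        cluster_models[cid] = fedavg(updates, sizes)
    return cluster_models

# toy usage: 6 clients, 2 clusters, "local update" adds noise to a 4-d model
params = [np.zeros(4) for _ in range(6)]
sizes = [100, 80, 120, 60, 90, 110]
clusters = {0: [0, 1, 2], 1: [3, 4, 5]}
models = clustered_round(params, sizes, clusters,
                         lambda w: w + np.random.normal(0, 0.01, w.shape))
print({cid: m.round(3) for cid, m in models.items()})
```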
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). The CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the multiple containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method obtains improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
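The role of the GCN here is to turn the container-association graph into per-container features for the actor-critic. A minimal single-layer graph convolution in NumPy, with made-up container features and adjacency as assumptions, might look like this:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    `adj` encodes which containers in a cluster are interconnected."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

# toy container cluster: 4 containers, features = [cpu_req, mem_req, bw_req]
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
feats = np.array([[2, 4, 1], [1, 2, 3], [4, 8, 1], [1, 1, 2]], dtype=float)
rng = np.random.default_rng(0)
embedding = gcn_layer(adj, feats, rng.normal(size=(3, 8)))
print(embedding.shape)   # (4, 8): per-container embedding fed to the actor-critic
```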
[Objective] Real-time monitoring of cow ruminant behavior is of paramount importance for promptly obtaining relevant information about cow health and predicting cow diseases. Currently, various strategies have been proposed for monitoring cow ruminant behavior, including video surveillance, sound recognition, and sensor monitoring methods. However, applying them on edge devices gives rise to the issue of inadequate real-time performance. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method for cow ruminant behavior based on edge computing was proposed. [Methods] Autonomously designed edge devices were utilized to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for the real-time recognition of cow ruminant behavior. For the federated edge intelligence strategy, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism, and a federated edge intelligence model was designed using the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. For the split edge intelligence strategy, a split edge intelligence model named MobileNet-LSTM was designed by integrating the MobileNet v3 network with a fused collaborative attention mechanism and the Bi-LSTM network. [Results and Discussions] In comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved an average Precision, Recall, F1-Score, Specificity, and Accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively, yielding the best recognition performance. [Conclusions] This work provides a real-time and effective method for monitoring cow ruminant behavior, and the proposed federated edge intelligence model can be applied in practical settings.
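Before any of the recognition models can run, the raw six-axis signals have to be segmented into fixed-length windows on the edge device. The snippet below is a generic windowing sketch; the 128-sample window, 50% overlap, and synthetic 100 Hz stream are assumptions, not values from the study.

```python
import numpy as np

def window_imu_stream(samples, win_len=128, stride=64):
    """Split a stream of six-axis samples (ax, ay, az, gx, gy, gz) into
    overlapping windows, the per-window input an edge classifier would see."""
    samples = np.asarray(samples, dtype=np.float32)   # shape (T, 6)
    windows = [samples[s:s + win_len]
               for s in range(0, len(samples) - win_len + 1, stride)]
    return np.stack(windows)                          # shape (N, win_len, 6)

# toy stream: 10 s at 100 Hz of synthetic six-axis data
stream = np.random.randn(1000, 6)
batch = window_imu_stream(stream)
print(batch.shape)   # e.g. (14, 128, 6), each window classified as ruminating or not
```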
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and 50% improvement in scalability factor (1.8 vs. 1.2 for EDF) compared to state-of-the-art heuristic and meta-heuristic approaches. These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
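As a hedged illustration of what a learned priority assigner could look like, the sketch below uses a tabular Q-learning agent over coarse (deadline bucket, queue-load bucket) states; the actual framework uses a DRL policy, and the state encoding and reward here are invented for the example.

```python
import random
from collections import defaultdict

class PriorityAgent:
    """Tabular Q-learning stand-in for a learned priority assigner:
    state = (deadline bucket, queue-load bucket), action = priority level."""
    def __init__(self, levels=3, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(lambda: [0.0] * levels)
        self.levels, self.eps, self.alpha, self.gamma = levels, eps, alpha, gamma

    def assign(self, state):
        if random.random() < self.eps:                 # explore
            return random.randrange(self.levels)
        return max(range(self.levels), key=lambda a: self.q[state][a])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])
        td = reward + self.gamma * best_next - self.q[state][action]
        self.q[state][action] += self.alpha * td

# toy interaction: reward = negative lateness of the scheduled task
agent = PriorityAgent()
state = (2, 1)                       # tight deadline, moderate queue
prio = agent.assign(state)
agent.update(state, prio, reward=-0.3, next_state=(1, 1))
```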
With the widespread adoption of encrypted Domain Name System (DNS) technologies such as DNS over Hyper Text Transfer Protocol Secure (HTTPS), traditional port- and protocol-based traffic analysis methods have become ineffective. Although encrypted DNS enhances user privacy protection, it also provides concealed communication channels for malicious software, compelling detection technologies to shift towards statistical feature-based and machine learning approaches. However, these methods still face challenges in real-time performance and privacy protection. This paper proposes a real-time identification technology for encrypted DNS traffic with privacy protection. Firstly, a hierarchical cloud-edge-end collaboration architecture is designed, incorporating task offloading strategies to balance privacy protection and identification efficiency. Secondly, a privacy-preserving federated learning mechanism based on Federated Robust Aggregation (FedRA) is proposed, utilizing Medoid aggregation and differential privacy techniques to ensure data privacy and enhance identification accuracy. Finally, an edge offloading strategy based on a dynamic priority scheduling algorithm (DPSA) is designed to alleviate the terminal burden and reduce latency. Simulation results demonstrate that the proposed technology significantly improves the accuracy and real-time performance of encrypted DNS traffic identification while protecting privacy, making it suitable for various network environments.
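A minimal sketch of the aggregation idea, Medoid selection plus Gaussian noise in the spirit of differential privacy, is shown below; the clipping bound and noise scale are illustrative assumptions rather than FedRA's actual parameters.

```python
import numpy as np

def medoid_aggregate(updates, clip=1.0, noise_std=0.1, rng=None):
    """Robust aggregation sketch: clip each client update, pick the medoid
    (the update with minimum total L2 distance to the others), then add
    Gaussian noise in the spirit of differential privacy."""
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    dists = [sum(np.linalg.norm(u - v) for v in clipped) for u in clipped]
    medoid = clipped[int(np.argmin(dists))]
    return medoid + rng.normal(0.0, noise_std, size=medoid.shape)

# toy example: 4 honest updates plus one outlier ("poisoned") update
updates = [np.array([0.10, 0.20]), np.array([0.12, 0.18]),
           np.array([0.09, 0.21]), np.array([0.11, 0.19]),
           np.array([5.00, -4.00])]
print(medoid_aggregate(updates, rng=np.random.default_rng(0)))
```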
The rapid expansion of the Internet of Things (IoT) has introduced significant security challenges due to the scale, complexity, and heterogeneity of interconnected devices. Traditional centralized security models are ill suited to dealing with these threats, especially in decentralized applications where IoT devices may operate on minimal resources. Emerging technologies, including Artificial Intelligence (AI), blockchain, edge computing, and Zero-Trust Architecture (ZTA), offer potential solutions by strengthening threat detection, data integrity, and system resilience in real time. AI offers sophisticated anomaly detection and predictive analytics, and blockchain delivers decentralized and tamper-proof assurance over device communication and information exchange. Edge computing enables low-latency processing by distributing the computational workload and moving it near the devices. ZTA enhances security by continuously verifying each device and user on the network, adhering to the “never trust, always verify” principle. This paper reviews these technologies, examining how they are used to secure IoT ecosystems, the issues raised by such integration, and the possibility of developing a multi-layered, adaptive security architecture. Major concerns, such as scalability, resource limitations, and interoperability, are identified, and directions for optimizing the application of AI, blockchain, and edge computing in zero-trust IoT systems are discussed.
Infrastructure as a Service (IaaS) in cloud computing enables flexible resource distribution over the Internet, but effective resource allocation and optimal scheduling within the IaaS model remain persistent challenges. Existing methods often struggle with slow optimization, imbalanced workload distribution, and inefficient use of available assets. These limitations result in longer processing times, increased operational expenses, and inadequate resource deployment, particularly under fluctuating demands. To overcome these issues, a novel Clustered Input-Oriented Salp Swarm Algorithm (CIOSSA) is introduced. This approach combines two distinct strategies: Task Splitting Agglomerative Clustering (TSAC) with an Input-Oriented Salp Swarm Algorithm (IOSSA), which prioritizes tasks based on urgency, and a refined multi-leader model that accelerates the optimization process, enhancing both speed and accuracy. By continuously assessing system capacity before task distribution, the model ensures that assets are deployed effectively and costs are controlled. The dual-leader technique expands the potential solution space, leading to substantial gains in processing speed, cost-effectiveness, asset efficiency, and system throughput, as demonstrated by comprehensive tests. As a result, the proposed model performs better than existing approaches in terms of makespan, resource utilization, throughput, and convergence speed, demonstrating that CIOSSA is scalable, reliable, and appropriate for the dynamic settings found in cloud computing.
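For reference, the update rules of a plain single-leader Salp Swarm Algorithm (the base that CIOSSA extends with task clustering and multiple leaders) can be sketched as follows; the objective is a toy stand-in for a scheduling cost, not the paper's model.

```python
import numpy as np

def salp_swarm(obj, dim, lb, ub, n_salps=20, n_iter=100, seed=0):
    """Plain single-leader Salp Swarm Algorithm on a continuous objective."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n_salps, dim))
    food, food_fit = pop[0].copy(), obj(pop[0])
    for it in range(n_iter):
        c1 = 2 * np.exp(-(4 * (it + 1) / n_iter) ** 2)     # exploration decay
        for i in range(n_salps):
            if i == 0:  # leader moves around the best-so-far ("food") position
                step = c1 * ((ub - lb) * rng.random(dim) + lb)
                pop[i] = np.where(rng.random(dim) < 0.5, food + step, food - step)
            else:       # followers move to the midpoint with the preceding salp
                pop[i] = (pop[i] + pop[i - 1]) / 2.0
            pop[i] = np.clip(pop[i], lb, ub)
            fit = obj(pop[i])
            if fit < food_fit:
                food, food_fit = pop[i].copy(), fit
    return food, food_fit

# toy objective standing in for a makespan cost over a continuous encoding
best, cost = salp_swarm(lambda x: np.sum(x ** 2), dim=5, lb=-10.0, ub=10.0)
print(cost)
```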
The flexibility of traditional image processing systems is limited because those systems are designed for specific applications. In this paper, a new TMS320C64x-based multi-DSP parallel computing architecture is presented. It has many promising characteristics, such as powerful computing capability, broad I/O bandwidth, topology flexibility, and expansibility. The parallel system's performance is evaluated by practical experiment.
This paper focuses on the time efficiency of machine vision and intelligent photogrammetry, especially a high-accuracy on-board real-time cloud detection method. With the development of technology, the data acquisition capability is growing continuously and the volume of raw data is increasing explosively. Meanwhile, because of higher data accuracy requirements, the computation load is also becoming heavier. This situation makes time efficiency extremely important. Moreover, the cloud cover rate of optical satellite imagery is up to approximately 50%, which seriously restricts the applications of on-board intelligent photogrammetry services. To meet the on-board cloud detection requirements and offer valid input data to subsequent processing, this paper presents a stream-computing solution for high-accuracy on-board real-time cloud detection, which follows the “bottom-up” understanding strategy of machine vision and uses multiple embedded GPUs with significant potential to be applied on board. Without external memory, the data-parallel pipeline system based on the multiple processing modules of this solution can support “stream-in, processing, stream-out” real-time stream computing. In experiments, images from the GF-2 satellite are used to validate the accuracy and performance of this approach, and the experimental results show that this solution not only improves cloud detection accuracy but also meets the on-board real-time processing requirements.
Fog computing is an emerging paradigm that has broad applications including storage, measurement and control. In this paper, we propose a novel real-time notification protocol called RT-Notification for wireless control in fog computing. RT-Notification provides low-latency TDMA communication between an access point in the fog and a large number of portable monitoring devices equipped with sensors and actuators. RT-Notification differentiates two types of control: urgent downlink actuator-oriented control and normal uplink access & scheduling control. Different from existing protocols, RT-Notification has two salient features: (i) it supports real-time notification of control frames without interrupting other ongoing transmissions, and (ii) it supports on-demand channel allocation for normal uplink access & scheduling control. RT-Notification can be implemented on commercial off-the-shelf 802.11 hardware. Our extensive simulations verify that RT-Notification is very effective in supporting the above two features.
Accurate classification of encrypted traffic plays an important role in network management. However, current methods confront several problems: an inability to characterize traffic that exhibits great dispersion, an inability to classify traffic with multi-level features, and degradation due to limited training traffic size. To address these problems, this paper proposes a traffic granularity-based cryptographic traffic classification method, called the Granular Classifier (GC). A novel Cardinality-based Constrained Fuzzy C-Means (CCFCM) clustering algorithm is proposed to address the problem caused by limited training traffic, considering the ratio of cardinality that must be linked between flows to achieve good traffic partitioning. Then, an original representation format of traffic based on granular computing, named Traffic Granules (TG), is presented to accurately describe traffic structure by capturing the dispersion of different traffic features. Each granule is a compact set of similar data with a refined boundary that excludes outliers. Based on TG, the GC is constructed to perform traffic classification using multi-level features. The performance of the GC is evaluated on real-world encrypted network traffic data. Experimental results show that the GC achieves outstanding performance for encrypted traffic classification with a limited size of training traffic and maintains accurate classification under dynamic network conditions.
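The clustering step builds on fuzzy c-means; the sketch below shows the standard membership/centroid alternation without CCFCM's cardinality constraint, on invented two-dimensional flow features.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means: alternate fuzzy membership and centroid updates.
    (The cardinality constraint of CCFCM is omitted in this sketch.)"""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # membership rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# toy flow features: two blobs of (mean packet size, mean inter-arrival time)
x = np.vstack([np.random.randn(50, 2) + [5, 1], np.random.randn(50, 2) + [1, 5]])
centers, memberships = fuzzy_c_means(x, n_clusters=2)
print(centers.round(2))
```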
A user-programmable computational/control platform was developed at the University of Toronto that offers real-time hybrid simulation (RTHS) capabilities. The platform was verified previously using several linear physical substructures. The study presented in this paper is focused on further validating the RTHS platform using a nonlinear viscoelastic-plastic damper that has displacement-, frequency- and temperature-dependent properties. The validation study includes damper component characterization tests, as well as RTHS of a series of single-degree-of-freedom (SDOF) systems equipped with viscoelastic-plastic dampers that represent different structural designs. From the component characterization tests, it was found that for a wide range of excitation frequencies and friction slip loads, the tracking errors are comparable to the errors in RTHS of linear spring systems. The hybrid SDOF results are compared to an independently validated thermal-mechanical viscoelastic model to further validate the ability of the platform to test nonlinear systems. After the validation, as an application study, nonlinear SDOF hybrid tests were used to develop performance spectra to predict the response of structures equipped with damping systems that are more challenging to model analytically. The use of the experimental performance spectra is illustrated by comparing the predicted response to the hybrid test response of 2DOF systems equipped with viscoelastic-plastic dampers.
Ultra-Dense Network (UDN) has been envisioned as a promising technology to provide high-quality wireless connectivity in dense urban areas, in which the density of Access Points (APs) is increased up to the point where it is comparable with or surpasses the density of active mobile users. In order to mitigate inter-AP interference and improve spectrum efficiency, APs in UDNs are usually clustered into multiple groups to serve different mobile users. However, as the number of APs increases, the computational capability within an AP group has become the bottleneck of AP clustering. In this paper, we first propose a novel UDN architecture based on Mobile Edge Computing (MEC), in which each MEC server is associated with a user-centric AP cluster and acts as a mobile agent. In addition, in the context of MEC-based UDN, we leverage mobility prediction techniques to achieve a dynamic AP clustering scheme, in which the cluster structure automatically adapts to the dynamic distribution of user traffic in a specific area. Simulation results show that the proposed scheme significantly increases the average user throughput compared with the baseline algorithm using max-SINR user association and equal bandwidth allocation, while guaranteeing low transmission delay.
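The baseline the scheme is compared against, max-SINR user association with equal bandwidth allocation, is simple enough to sketch directly; the channel gains, transmit power, and noise level below are arbitrary toy values.

```python
import numpy as np

def max_sinr_baseline(gains, tx_power=1.0, noise=1e-3, bandwidth=20e6):
    """Baseline described in the abstract: each user associates with the AP of
    highest SINR, and every AP splits its bandwidth equally among its users."""
    n_users, n_aps = gains.shape
    rx = tx_power * gains                                  # received power (user, ap)
    interference = rx.sum(axis=1, keepdims=True) - rx      # all other APs interfere
    sinr = rx / (interference + noise)
    assoc = sinr.argmax(axis=1)                            # max-SINR association
    throughput = np.zeros(n_users)
    for ap in range(n_aps):
        users = np.where(assoc == ap)[0]
        if len(users) == 0:
            continue
        share = bandwidth / len(users)                     # equal bandwidth split
        throughput[users] = share * np.log2(1.0 + sinr[users, ap])
    return assoc, throughput

gains = np.random.default_rng(1).exponential(scale=1e-2, size=(8, 4))
assoc, thr = max_sinr_baseline(gains)
print(assoc, (thr / 1e6).round(2))   # association vector, Mbit/s per user
```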
Model predictive control (MPC) is difficult to deploy in real-time control systems because its computation time is not well defined. A real-time fault-tolerant implementation algorithm based on imprecise computation is proposed for MPC, built around the solution process of the underlying quadratic programming (QP) problem. In this algorithm, system stability is guaranteed even when the available computation resources are not sufficient to finish the optimization completely. Through this kind of graceful degradation, the behavior of the real-time control system remains predictable and deterministic. The algorithm is demonstrated by experiments on a servomotor, and the simulation results show its effectiveness.
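One way to picture the imprecise-computation idea is an anytime QP solver that stops at a time budget and returns the best feasible iterate; the projected-gradient formulation below is an illustration under that assumption, not the paper's algorithm.

```python
import time
import numpy as np

def budgeted_qp(H, g, lb, ub, x0, budget_s=0.001, step=None):
    """Imprecise-computation flavor of an MPC step: run projected gradient
    descent on the QP  min 0.5 x'Hx + g'x  s.t. lb <= x <= ub, and return the
    feasible iterate reached when the time budget runs out."""
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2)        # safe step size for convex H
    x = np.clip(x0, lb, ub)
    deadline = time.perf_counter() + budget_s
    while time.perf_counter() < deadline:
        x = np.clip(x - step * (H @ x + g), lb, ub)
    return x                                      # feasible, possibly suboptimal

H = np.array([[2.0, 0.5], [0.5, 1.0]])           # positive definite Hessian
g = np.array([-1.0, -1.0])
u = budgeted_qp(H, g, lb=-1.0, ub=1.0, x0=np.zeros(2))
print(u)                                          # control move applied this period
```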
In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). A UCS necessitates heterogeneity, management level, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study is conducted, and the outcomes are assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
A new file assignment strategy for parallel I/O, named the heuristic file sorted assignment algorithm, is proposed for cluster computing systems. Building on load balancing, it assigns files with similar service times to the same disk. First, the files are sorted and stored in a set I in descending order of their service time; then, when the files are to be assigned, one disk of a cluster node is selected at random; finally, consecutive files are taken in order from the set I and placed on that disk until the disk reaches its load maximum. The experimental results show that the new strategy improves performance by 20.2% when the system load is light and by 31.6% when the load is heavy. Moreover, the higher the data access rate, the more evident the performance improvement obtained by the heuristic file sorted assignment algorithm.
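The described heuristic is concrete enough to sketch directly: sort by service time, pick a disk at random, and fill it with consecutive files until its load limit is reached. The file service times and disk capacities below are toy values.

```python
import random

def heuristic_file_sorted_assignment(files, disks, seed=0):
    """Sketch of the described heuristic: sort files by service time in
    descending order (set I), then fill one randomly chosen disk at a time
    with consecutive files until that disk hits its load limit.
    `files` = {name: service_time}, `disks` = {disk_id: capacity}."""
    rng = random.Random(seed)
    ordered = sorted(files.items(), key=lambda kv: kv[1], reverse=True)  # set I
    placement, load = {}, {d: 0.0 for d in disks}
    free = list(disks)
    disk = free.pop(rng.randrange(len(free)))
    for name, t in ordered:
        while load[disk] + t > disks[disk]:          # current disk is full
            if not free:
                raise RuntimeError("not enough disk capacity")
            disk = free.pop(rng.randrange(len(free)))
        placement[name] = disk
        load[disk] += t
    return placement, load

files = {"f1": 9.0, "f2": 7.5, "f3": 4.0, "f4": 3.5, "f5": 1.0}
disks = {"d0": 12.0, "d1": 12.0, "d2": 12.0}
print(heuristic_file_sorted_assignment(files, disks))
```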
Cloud computing is an advanced computing model in which several applications, data, and countless IT services are provided over the Internet. Task scheduling plays a crucial role in cloud computing systems. The task scheduling problem can be viewed as finding an optimal mapping/assignment of the set of subtasks of different tasks onto the available set of resources so that the desired goals for the tasks are achieved. As the number of cloud users grows, the tasks need to be scheduled efficiently, and the cloud's performance depends on the task scheduling algorithm used. Numerous algorithms have been proposed in the past to solve the task scheduling problem for heterogeneous networks of computers. Existing work proposes energy- and deadline-aware task scheduling methods for data-intensive applications. A scientific workflow is a combination of fine-grained and coarse-grained tasks, and every task scheduled to a VM incurs system overhead. If many fine-grained tasks execute in a scientific workflow, the scheduling overhead increases. To overcome this overhead, multiple small tasks are combined into larger tasks, which decreases the scheduling overhead and improves the execution time of the workflow. Horizontal clustering is used to cluster the fine-grained tasks, and a replication technique is further combined with it. The proposed scheduling algorithm improves performance metrics such as execution time and cost. This research can be further extended with improved clustering and replication methods.
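A minimal sketch of horizontal clustering, merging same-level fine-grained tasks into a bounded number of jobs, is given below; the level-based grouping and the `clusters_per_level` knob are assumptions for illustration.

```python
from collections import defaultdict

def horizontal_clustering(tasks, clusters_per_level=2):
    """Sketch of horizontal clustering: tasks on the same workflow level
    (same depth, no dependencies among them) are merged into at most
    `clusters_per_level` jobs to cut per-task scheduling overhead.
    `tasks` = [(task_id, level), ...]."""
    by_level = defaultdict(list)
    for task_id, level in tasks:
        by_level[level].append(task_id)
    jobs = []
    for level in sorted(by_level):
        ids = by_level[level]
        size = -(-len(ids) // clusters_per_level)      # ceiling division
        for i in range(0, len(ids), size):
            jobs.append({"level": level, "tasks": ids[i:i + size]})
    return jobs

# toy workflow: 1 root task, 6 fine-grained tasks on level 1, 1 sink on level 2
workflow = [("t0", 0)] + [(f"t{i}", 1) for i in range(1, 7)] + [("t7", 2)]
for job in horizontal_clustering(workflow):
    print(job)
```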
Real-time prediction of ship motions is crucial for ensuring the safety of offshore activities. In this study, we investigate the performance of the reservoir computing (RC) model in predicting the motions of a ship sailing in irregular waves, comparing it with the long short-term memory (LSTM), bidirectional LSTM (BiLSTM), and gated recurrent unit (GRU) networks. Model tests are carried out in a towing tank to generate the datasets for training and testing the machine learning models. First, we explore the performance of machine learning models trained solely on motion data. It is found that the RC model outperforms the LSTM, BiLSTM, and GRU networks in both accuracy and efficiency for predicting ship motions. We then investigate the performance of the RC model trained using the historical motion and wave elevation data. It is shown that, compared with the RC model trained solely on motion data, the RC model trained on the motion and wave elevation data can significantly improve the motion prediction accuracy. This study validates the effectiveness and efficiency of the RC model in ship motion prediction during sailing and highlights the utility of wave elevation data in enhancing the RC model's prediction accuracy.
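A reservoir computing (echo state network) predictor reduces to a fixed random reservoir plus a ridge-regression readout, as in the sketch below; the reservoir size, spectral radius, and the synthetic heave-like signal are illustrative choices, not the study's settings.

```python
import numpy as np

def train_esn(u, y, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    """Minimal echo state network: fixed random reservoir, ridge readout.
    `u` (T, d_in) are inputs (e.g. past motion / wave elevation), `y` (T, d_out)
    the targets (e.g. next-step motion)."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, (n_res, u.shape[1]))
    w = rng.normal(size=(n_res, n_res))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))      # set spectral radius
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t in range(len(u)):
        x = np.tanh(w_in @ u[t] + w @ x)                 # reservoir update
        states[t] = x
    # ridge-regression readout: W_out = Y^T S (S^T S + ridge I)^-1
    w_out = y.T @ states @ np.linalg.inv(states.T @ states + ridge * np.eye(n_res))
    return w_in, w, w_out

# toy task: predict the next sample of a noisy sine "heave" signal
t = np.arange(2000) * 0.05
signal = np.sin(t) + 0.05 * np.random.randn(len(t))
u, y = signal[:-1, None], signal[1:, None]
w_in, w, w_out = train_esn(u, y)
```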
This study presents an innovative development of the exponentially weighted moving average (EWMA) control chart, explicitly adapted for the examination of time series data distinguished by seasonal autoregressive moving average behavior, SARMA(1,1)L, under exponential white noise. Unlike previous works that rely on simplified models such as AR(1) or assume independence, this research derives for the first time an exact two-sided Average Run Length (ARL) formula for the Modified EWMA chart under SARMA(1,1)L conditions, using a mathematically rigorous Fredholm integral approach. The derived formulas are validated against numerical integral equation (NIE) solutions, showing strong agreement and a significantly reduced computational burden. Additionally, a performance comparison index (PCI) is introduced to assess the chart's detection capability. The results demonstrate that the proposed method exhibits superior sensitivity to mean shifts in autocorrelated environments, outperforming existing approaches. The findings offer a new, efficient framework for real-time quality control in complex seasonal processes, with potential applications in environmental monitoring and intelligent manufacturing systems.
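For orientation, the generic Modified EWMA statistic and the Fredholm-type integral equation that defines the ARL of a plain EWMA chart under i.i.d. observations with density f are written below; the paper's exact two-sided formula for SARMA(1,1)L observations with exponential white noise is not reproduced here.

```latex
% Generic forms only; k is the extra gain of the Modified EWMA statistic,
% [a, b] the in-control interval, and f the observation density.
\begin{aligned}
  Z_t &= (1-\lambda)\,Z_{t-1} + \lambda X_t + k\,(X_t - X_{t-1}),
        \qquad 0 < \lambda \le 1,\ Z_0 = u, \\[4pt]
  \mathrm{ARL}(u) &= 1 + \frac{1}{\lambda}\int_{a}^{b}
        \mathrm{ARL}(z)\, f\!\left(\frac{z-(1-\lambda)u}{\lambda}\right)\mathrm{d}z .
\end{aligned}
```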
Large-scale computations are often performed in science and engineering areas such as numerical weather forecasting, astrophysics, energy resources exploration, nuclear weapon design, and plasma fusion research. Many applications in these areas need supercomputing power. The traditional mode of sequential processing cannot meet the demands of these computations; thus, parallel processing (PP) is now the main way of achieving high performance computing (HPC).