With the deep integration of edge computing, 5G, and Artificial Intelligence of Things (AIoT) technologies, the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios. Traditional federated learning (FL) algorithms face significant limitations in practical applications due to client drift, model bias, and resource constraints under non-independent and identically distributed (Non-IID) data, as well as the computational overhead and utility loss caused by privacy-preserving techniques. To address these issues, this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method (FedEPC). This method introduces a dual-round client selection mechanism to optimize training. First, the Sparsity-based Privacy-preserving Representation Extraction Module (SPRE) and the Adaptive Isomorphic Devices Clustering Module (AIDC) cluster clients based on privacy-sensitive features. Second, the Context-aware In-cluster Client Selection Module (CICS) dynamically selects representative devices for training, ensuring heterogeneous data distributions are fully represented. By conducting federated training within clusters and aggregating personalized models, FedEPC effectively mitigates the weight divergence caused by data heterogeneity and reduces the impact of client drift and straggler issues. Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared with FedAvg and existing clustering FL methods. While ensuring privacy security, FedEPC provides an efficient and robust solution for FL on resource-constrained devices in sensing-computing fusion scenarios, offering both theoretical value and engineering practicality.
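As a rough illustration of the dual-round idea, the sketch below clusters clients on uploaded sparse representations and then samples one representative per cluster each round; the stand-ins for SPRE, AIDC, and CICS and every name in the code are hypothetical, since the abstract does not disclose the modules' internals.

    # Hypothetical sketch of cluster-then-select client sampling (FedEPC-style).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_clients, dim, n_clusters = 100, 16, 5

    # Stand-in for SPRE: each client uploads a sparse, privacy-preserving
    # representation of its local data distribution.
    reps = rng.random((n_clients, dim))
    reps[reps < 0.7] = 0.0  # sparsify

    # Stand-in for AIDC: group clients whose representations are similar.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(reps)

    # Stand-in for CICS: each round, pick one client per cluster, biased by a
    # context score (e.g., available compute or link quality).
    context = rng.random(n_clients)

    def select_round():
        chosen = []
        for c in range(n_clusters):
            members = np.flatnonzero(labels == c)
            weights = context[members] / context[members].sum()
            chosen.append(int(rng.choice(members, p=weights)))
        return chosen

    print(select_round())  # one representative device per cluster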
Container-based virtualization technology has recently been widely used in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks poses great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of the GCN, the features of the association relationships between the multiple containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
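The abstract does not detail the GCN variant; for context, the standard graph-convolution propagation rule such frameworks typically build on is

    H^{(l+1)} = \sigma\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right),

where \tilde{A} = A + I would be the container-interconnection adjacency matrix with self-loops, \tilde{D} its degree matrix, H^{(l)} the per-container feature matrix at layer l, W^{(l)} a learned weight matrix, and \sigma a nonlinearity.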
Infrastructure as a Service (IaaS) in cloud computing enables flexible resource distribution over the Internet, but achieving optimal scheduling remains a challenge. Effective resource allocation in cloud-based environments, particularly within the IaaS model, poses persistent challenges. Existing methods often struggle with slow optimization, imbalanced workload distribution, and inefficient use of available assets. These limitations result in longer processing times, increased operational expenses, and inadequate resource deployment, particularly under fluctuating demands. To overcome these issues, a novel Clustered Input-Oriented Salp Swarm Algorithm (CIOSSA) is introduced. This approach combines two distinct strategies: Task Splitting Agglomerative Clustering (TSAC) with an Input-Oriented Salp Swarm Algorithm (IOSSA), which prioritizes tasks based on urgency, and a refined multi-leader model that accelerates the optimization process, enhancing both speed and accuracy. By continuously assessing system capacity before task distribution, the model ensures that assets are deployed effectively and costs are controlled. The dual-leader technique expands the potential solution space, leading to substantial gains in processing speed, cost-effectiveness, asset efficiency, and system throughput, as demonstrated by comprehensive tests. As a result, the proposed model performs better than existing approaches in terms of makespan, resource utilization, throughput, and convergence speed, demonstrating that CIOSSA is scalable, reliable, and appropriate for the dynamic settings found in cloud computing.
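For reference, the canonical salp swarm update that IOSSA presumably refines moves the leader around the food source F (the best solution found so far) and lets each follower trail its predecessor:

    x_j^1 = F_j \pm c_1 \left( (ub_j - lb_j)\, c_2 + lb_j \right), \qquad c_1 = 2 e^{-(4l/L)^2},
    x_j^i = \tfrac{1}{2} \left( x_j^i + x_j^{i-1} \right), \quad i \ge 2,

with c_2, c_3 drawn uniformly from [0, 1] (the sign follows c_3), l the current iteration, L the iteration budget, and [lb_j, ub_j] the bounds of dimension j; the multi-leader model described above would replace the single F with several leaders to widen the explored region.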
Accurate classification of encrypted traffic plays an important role in network management. However, current methods confront several problems: inability to characterize traffic that exhibits great dispersion, inability to classify traffic with multi-level features, and degradation due to limited training traffic size. To address these problems, this paper proposes a traffic granularity-based encrypted traffic classification method, called the Granular Classifier (GC). A novel Cardinality-based Constrained Fuzzy C-Means (CCFCM) clustering algorithm is proposed to address the problem caused by limited training traffic, considering the ratio of cardinality that must be linked between flows to achieve good traffic partitioning. Then, an original representation format for traffic, named Traffic Granules (TG), is presented based on granular computing to accurately describe traffic structure by capturing the dispersion of different traffic features. Each granule is a compact set of similar data with a refined boundary that excludes outliers. Based on TG, the GC is constructed to perform traffic classification on multi-level features. The performance of the GC is evaluated on real-world encrypted network traffic data. Experimental results show that the GC achieves outstanding performance for encrypted traffic classification with a limited amount of training traffic and maintains accurate classification under dynamic network conditions.
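CCFCM builds on the standard fuzzy c-means iteration, which alternates the membership and prototype updates

    u_{ik} = \left[ \sum_{j=1}^{c} \left( \frac{\lVert x_k - v_i \rVert}{\lVert x_k - v_j \rVert} \right)^{2/(m-1)} \right]^{-1}, \qquad v_i = \frac{\sum_k u_{ik}^m x_k}{\sum_k u_{ik}^m}

for a fuzzifier m > 1; the cardinality-based must-link constraint described above is then imposed on this iteration, and its exact form is specific to the paper.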
Ultra-Dense Network (UDN) has been envisioned as a promising technology to provide high-quality wireless connectivity in dense urban areas, in which the density of Access Points (APs) is increased up to the point where it is comparable with or surpasses the density of active mobile users. In order to mitigate inter-AP interference and improve spectrum efficiency, APs in UDNs are usually clustered into multiple groups to serve different mobile users. However, as the number of APs increases, the computational capability within an AP group has become the bottleneck of AP clustering. In this paper, we first propose a novel UDN architecture based on Mobile Edge Computing (MEC), in which each MEC server is associated with a user-centric AP cluster to act as a mobile agent. In addition, in the context of the MEC-based UDN, we leverage mobility prediction techniques to achieve a dynamic AP clustering scheme, in which the cluster structure automatically adapts to the dynamic distribution of user traffic in a specific area. Simulation results show that the proposed scheme can significantly increase the average user throughput compared with a baseline algorithm using max-SINR user association and equal bandwidth allocation, while at the same time guaranteeing low transmission delay.
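In the max-SINR baseline, each user u simply associates with the access point offering the strongest signal-to-interference-plus-noise ratio,

    a^*(u) = \arg\max_a \frac{P_a\, g_{u,a}}{\sum_{a' \neq a} P_{a'}\, g_{u,a'} + \sigma^2},

where P_a is the transmit power of AP a, g_{u,a} the channel gain between user u and AP a, and \sigma^2 the noise power (notation is ours, not the paper's); the proposed scheme instead reshapes the clusters around predicted user mobility.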
In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). A UCS must handle heterogeneity, management, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To verify the performance of the BXAI-IDCUCS model, a comprehensive experimental study is conducted, and the outcomes are assessed under different aspects. The comparison study emphasizes the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, an energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
A new file assignment strategy for parallel I/O, named the heuristic file sorted assignment algorithm, is proposed for cluster computing systems. Subject to load balancing, it assigns files with similar service times to the same disk. First, the files are sorted in descending order of service time and stored in a set I; then, when files are to be assigned, a disk of a cluster node is selected at random; finally, consecutive files are taken in order from set I and placed on that disk until the disk reaches its load maximum. The experimental results show that the new strategy improves performance by 20.2% when the system load is light and by 31.6% when the load is heavy. Moreover, the higher the data access rate, the more evident the performance improvement obtained by the heuristic file sorted assignment algorithm.
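The described procedure maps directly to a short sketch; the service times, disk identifiers, and per-disk load maximum below are illustrative inputs.

    # Sketch of the heuristic file sorted assignment described above.
    import random

    def heuristic_sorted_assign(service_times, disks, load_max):
        """service_times: {file_id: service_time}; disks: list of disk ids."""
        # Set I: files in descending order of service time.
        I = sorted(service_times, key=service_times.get, reverse=True)
        load = {d: 0.0 for d in disks}
        placement, i = {}, 0
        while i < len(I):
            # Disks that can still accept the next file.
            open_disks = [d for d in disks
                          if load[d] + service_times[I[i]] <= load_max]
            if not open_disks:
                raise RuntimeError("total load exceeds cluster capacity")
            d = random.choice(open_disks)  # select one disk at random
            # Take consecutive files from I until this disk reaches load_max.
            while i < len(I) and load[d] + service_times[I[i]] <= load_max:
                placement[I[i]] = d
                load[d] += service_times[I[i]]
                i += 1
        return placement

    # e.g. heuristic_sorted_assign({"f1": 3.0, "f2": 2.5, "f3": 1.0},
    #                              ["disk0", "disk1"], load_max=4.0)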
Cloud computing is an advanced computing model in which applications, data, and countless IT services are provided over the Internet. Task scheduling plays a crucial role in cloud computing systems. The task scheduling problem can be viewed as finding an optimal mapping/assignment of the set of subtasks of different tasks onto the available set of resources so that the desired goals for the tasks are achieved. As the number of cloud users grows, the tasks need to be scheduled efficiently, and the cloud's performance depends on the task scheduling algorithms used. Numerous algorithms have been proposed in the past to solve the task scheduling problem for heterogeneous networks of computers, and existing research proposes energy- and deadline-aware task scheduling methods for data-intensive applications. A scientific workflow is a combination of fine-grained and coarse-grained tasks, and every task scheduled to a VM incurs system overhead. If many fine-grained tasks execute in a scientific workflow, the scheduling overhead increases. To overcome this, multiple small tasks are combined into larger tasks, which decreases the scheduling overhead and improves the execution time of the workflow. Horizontal clustering is used to cluster the fine-grained tasks, combined further with a replication technique. The proposed scheduling algorithm improves performance metrics such as execution time and cost. This research can be further extended with improved clustering techniques and replication methods.
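A minimal sketch of the horizontal-clustering idea follows: tasks at the same workflow depth are merged into bundles so each bundle is dispatched (and pays the VM scheduling overhead) once; the bundle count and the task tuple layout are assumptions for illustration.

    # Horizontal clustering sketch: merge same-level fine-grained tasks.
    from collections import defaultdict

    def horizontal_cluster(tasks, bundles_per_level):
        """tasks: list of (task_id, level, runtime); returns list of bundles,
        where each bundle is scheduled as a single unit."""
        by_level = defaultdict(list)
        for task in tasks:
            by_level[task[1]].append(task)
        bundles = []
        for level in sorted(by_level):
            group = by_level[level]
            size = -(-len(group) // bundles_per_level)  # ceiling division
            for j in range(0, len(group), size):
                bundles.append(group[j:j + size])
        return bundles

    # e.g. 6 level-1 tasks with bundles_per_level=2 -> two bundles of 3 tasks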
The Dynamical Density Functional Theory (DDFT) algorithm, derived by combining classical Density Functional Theory (DFT) with the fundamental Smoluchowski dynamical equation, describes the evolution of inhomogeneous fluid density distributions over time and plays a significant role in studying such evolution in inhomogeneous systems. The Sunway Bluelight II supercomputer, a new generation of supercomputer developed in China, possesses powerful computational capabilities, and porting and optimizing industrial software on this platform holds significant importance. To optimize the DDFT algorithm for the Sunway Bluelight II supercomputer and the unique hardware architecture of its SW39000 processor, this work proposes three acceleration strategies to enhance computational efficiency and performance: direct parallel optimization, local-memory constrained optimization for CPEs, and multi-core-group collaboration and communication optimization. The method combines the characteristics of the program's algorithm with the unique hardware architecture of the Sunway Bluelight II supercomputer, optimizing the storage and transmission structures to achieve a closer integration of software and hardware. For the first time, this paper presents Sunway-Dynamical Density Functional Theory (SW-DDFT). Experimental results show that SW-DDFT achieves a speedup of 6.67 times within a single core group compared with the original DDFT implementation; with six core groups (a total of 384 CPEs), the maximum speedup reaches 28.64 times and the parallel efficiency reaches 71%, demonstrating excellent acceleration performance.
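For context, the standard DDFT evolution equation obtained from the Smoluchowski equation reads

    \frac{\partial \rho(\mathbf{r},t)}{\partial t} = \nabla \cdot \left[ \Gamma\, \rho(\mathbf{r},t)\, \nabla \frac{\delta F[\rho]}{\delta \rho(\mathbf{r},t)} \right],

where \rho(\mathbf{r},t) is the one-body density, \Gamma the mobility, and F[\rho] the equilibrium Helmholtz free-energy functional supplied by classical DFT; evaluating the right-hand side on a spatial grid at every time step is the workload the three acceleration strategies target.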
Large-scale computations are often performed in science and engineering areas such as numerical weather forecasting, astrophysics, energy resource exploration, nuclear weapon design, and plasma fusion research. Many applications in these areas need supercomputing power. The traditional mode of sequential processing cannot meet the demands of such computations; thus, parallel processing (PP) is now the main approach to high-performance computing (HPC).
In the plethora of conceptual and algorithmic developments supporting data analytics and system modeling, human-centric pursuits assume a particular position owing to the ways they emphasize and realize interaction between users and the data. We advocate that the level of abstraction, which can be flexibly adjusted, is conveniently realized through Granular Computing. Granular Computing is concerned with the development and processing of information granules – formal entities that facilitate a way of organizing knowledge about the available data and the relationships existing there. This study identifies the principles of Granular Computing and shows how information granules are constructed and subsequently used in describing relationships present among the data.
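One concrete construction consistent with these principles is the principle of justifiable granularity: in its common interval form, a numeric granule A = [a, b] built around one-dimensional data is chosen to maximize

    V(A) = \mathrm{cov}(A) \cdot \mathrm{sp}(A),

where the coverage cov(A) is the fraction of the data falling inside [a, b] and the specificity sp(A) decreases as the interval widens (for instance, sp(A) = \max(0,\, 1 - |b - a| / \mathrm{range})); the product rewards granules that are both justified by the data and semantically specific.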
The discrete element method (DEM) can effectively simulate the discontinuity, inhomogeneity, and large deformation and failure of rock and soil. Based on an innovative matrix formulation of the discrete element method, the high-performance discrete element software MatDEM can handle millions of elements on a single computer and enables discrete element simulation at the engineering scale. It supports heat calculation and multi-field and fluid-solid coupling numerical simulations. Furthermore, the software integrates pre-processing, a solver, post-processing, and powerful secondary development, allowing users to develop and recompile new discrete element software. The basic principles of the DEM, the implementation and development of the MatDEM software, and its applications are introduced in this paper. The software and sample source code are available online (http://matdem.com).
Measurement-based quantum computation with continuous variables, which realizes computation by performing measurements and feedforward of the measurement results on a large-scale Gaussian cluster state, provides a feasible way to implement quantum computation. Quantum error correction is an essential procedure for protecting quantum information in quantum computation and quantum communication. In this review, we briefly introduce the progress of measurement-based quantum computation and quantum error correction with continuous variables based on Gaussian cluster states. We also discuss the challenges in fault-tolerant measurement-based quantum computation with continuous variables.
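For reference, an ideal continuous-variable cluster state on a graph G is characterized by its nullifiers: for every mode a,

    \left( \hat{p}_a - \sum_{b \in N(a)} \hat{x}_b \right) \lvert \psi \rangle \to 0

in the limit of infinite squeezing, where \hat{x} and \hat{p} are the quadrature operators and N(a) the neighbors of mode a in G; with finite squeezing these combinations retain residual noise, which is one source of the fault-tolerance challenge mentioned above.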
The simulation field has become essential in designing and developing new casting products and in improving manufacturing processes within a limited time, because it lets us simulate the nature of the process so that developers can produce ideal casting designs. To seize the lead in the commercial simulation market, many development groups around the world are making every effort, and they have already reported success stories in manufacturing by developing and providing high-performance, multipurpose simulation technologies. But these tools mainly run on powerful desk-side computers operated by well-trained experts, so it is hard to diffuse the scientific design concept to newcomers in the casting field. To overcome the coming problems in scientific casting design, we utilized information technologies and fully matured hardware backbones to spread an effective and scientific casting design mindset, integrating them all into a Simulation Portal on the web. It offers scientific casting design on the NET, including the ubiquitous access represented by the "Anyone, Anytime, Anywhere" concept for casting design.
A simple and accurate high-performance liquid chromatography (HPLC) method coupled with a diode array detector (DAD) and an evaporative light scattering detector (ELSD) was established for the determination of six bioactive compounds in the Zhenqi Fuzheng preparation (ZFP). The monitoring wavelengths were 254, 275, and 328 nm. Under the optimum conditions, good separation was achieved, and the assay was fully validated with respect to precision, repeatability, and accuracy. The proposed method was successfully applied to quantify the six ingredients in 31 batches of ZFP samples and to evaluate the variation by hierarchical cluster analysis (HCA), which demonstrated significant variations in the content of these compounds among samples from different manufacturers with different preparation procedures. The developed HPLC method can be used as a valid analytical method to evaluate the intrinsic quality of this preparation.
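The HCA step can be reproduced in outline as below; the batch-by-compound matrix is randomly generated here, since the measured contents of the 31 batches are not given in the abstract.

    # Sketch of hierarchical cluster analysis over batch content profiles.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    contents = np.random.rand(31, 6)      # 31 batches x 6 compounds (e.g., mg/g)
    Z = linkage(contents, method="ward")  # agglomerative clustering tree
    groups = fcluster(Z, t=3, criterion="maxclust")  # cut into 3 groups
    print(groups)                         # batch-to-cluster assignment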
The parallel finite element method using a domain decomposition technique is adapted to a distributed parallel environment on a workstation cluster. An algorithm is presented for parallelizing the preconditioned conjugate gradient method based on domain decomposition. Using the developed code, a dam structural analysis problem is solved on the workstation cluster and the results are given. The parallel performance is analyzed.
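For orientation, the serial skeleton of the preconditioned conjugate gradient loop being parallelized is sketched below; a diagonal preconditioner stands in for the domain-decomposition one, and in the parallel code the matrix-vector product and the inner products are the operations split across subdomains.

    # Preconditioned conjugate gradient, serial sketch.
    import numpy as np

    def pcg(A, b, M_inv, tol=1e-8, max_it=500):
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x                   # residual
        z = M_inv @ r                   # preconditioned residual
        p = z.copy()                    # search direction
        for _ in range(max_it):
            Ap = A @ p                  # the subdomain-parallel operation
            alpha = (r @ z) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            z_new = M_inv @ r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x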
The real problem in a cluster of workstations is that changes in workstation power, changes in the number of workstations, or dynamic changes in the run-time behavior of the application hamper the efficient use of resources. Dynamic load balancing is a technique for the parallel implementation of problems that generate unpredictable workloads: work units are migrated from heavily loaded processors to lightly loaded processors at run time. This paper proposes an efficient load balancing method for parallel tree computation in which depth-first search (DFS) generates unpredictable, highly imbalanced workloads and moves through different phases that are detectable at run time, with a dynamic load balancing strategy applied in each phase, running under MPI (Message Passing Interface) and the Unix operating system on a cluster-of-workstations parallel computing platform.
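The migration step can be illustrated with a deliberately simplified sketch; the phase detection, MPI messaging, and thresholds of the actual method are abstracted away, and all names here are hypothetical.

    # Donor/receiver sketch: heavy workers donate part of their DFS stacks.
    def balance(stacks, threshold):
        """stacks: {worker_rank: [unexplored subtrees]}; mutates in place."""
        heavy = [r for r, s in stacks.items() if len(s) > threshold]
        idle = [r for r, s in stacks.items() if not s]
        for donor, receiver in zip(heavy, idle):
            half = len(stacks[donor]) // 2
            # Donate the bottom half of the stack: nodes nearest the root,
            # which tend to carry the largest unexplored subtrees.
            stacks[receiver] = stacks[donor][:half]
            stacks[donor] = stacks[donor][half:]
        return stacks

    # e.g. balance({0: list(range(8)), 1: []}, threshold=4)
    #      -> worker 1 receives the four bottom entries of worker 0's stack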