Distributed computing is an important topic in the field of wireless communications and networking, and its efficiency in handling large amounts of data is particularly noteworthy. Although distributed computing benefits from its ability to process data in parallel, it incurs a communication burden between servers that delays the computation. Recent research has applied coding to distributed computing to reduce this communication burden: repetitive computation is used to create multicast opportunities so that the same coded information can be reused across different servers. To handle computation tasks in practical heterogeneous systems, we propose a novel coding scheme that effectively mitigates the "straggling effect" in distributed computing. We assume the system contains two types of servers whose only difference is their computational capability; the servers with lower capability are called stragglers. Given any ratio of fast to slow servers and any gap in computational capability between them, we achieve approximately the same computation time for both by assigning them different amounts of computation tasks, thus reducing the overall computation time. Furthermore, we derive an information-theoretic lower bound on the inter-server communication load and show that it is within a constant multiplicative gap of the upper bound achieved by our scheme. Simulations validate the effectiveness of the proposed scheme.
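The abstract's load-balancing idea, sizing each server's share of work in proportion to its speed so that fast and slow servers finish at roughly the same time, can be sketched as follows. This is a toy model with hypothetical speed values, not the paper's actual coding scheme:

```python
def split_tasks(total_tasks, speeds):
    """Assign tasks proportionally to server speed (tasks/second),
    so every server needs roughly the same wall-clock time."""
    total_speed = sum(speeds)
    shares = [round(total_tasks * s / total_speed) for s in speeds]
    # Fix rounding drift so the shares sum to the total.
    shares[0] += total_tasks - sum(shares)
    return shares

# Two fast servers (10 tasks/s) and two stragglers (2 tasks/s).
speeds = [10, 10, 2, 2]
shares = split_tasks(240, speeds)
finish_times = [n / s for n, s in zip(shares, speeds)]
print(shares)        # [100, 100, 20, 20]
print(finish_times)  # all near 10 s
```

With proportional shares, the makespan is set by the common finish time rather than by the slowest straggler.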
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, and it is important for improving the match between resources and requirements. A complex algorithm is not practical because on-board LEO resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single-paternal-inheritance method, is designed to support distributed computation and enhance the feasibility of on-board deployment. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can produce an allocation for more than 1500 tasks in 14 s with a success rate above 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
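A minimal sketch of the GA variant the abstract describes: a two-dimensional individual (here beam-by-slot, 0/1) reproduced by single-parent inheritance, i.e. copy plus mutation with no crossover, which keeps each worker's evolution independent and therefore easy to distribute. The dimensions and fitness function below are hypothetical placeholders, not the paper's model:

```python
import random

def evolve(fitness, beams=4, slots=6, pop=20, gens=50, seed=1):
    """Mutation-only GA on 2-D 0/1 individuals with elitism."""
    rng = random.Random(seed)
    def newborn():
        return [[rng.randint(0, 1) for _ in range(slots)] for _ in range(beams)]
    def mutate(ind):
        child = [row[:] for row in ind]      # single-parent copy...
        b, s = rng.randrange(beams), rng.randrange(slots)
        child[b][s] ^= 1                     # ...plus one flipped gene
        return child
    population = [newborn() for _ in range(pop)]
    best = max(population, key=fitness)
    for _ in range(gens):
        population = [mutate(rng.choice(population)) for _ in range(pop - 1)]
        population.append(best)              # elitism: best never lost
        best = max(population, key=fitness)
    return best

# Hypothetical fitness: reward beams that are active in about half the slots.
def fitness(ind):
    return sum(min(sum(row), len(row) - sum(row)) for row in ind)

best = evolve(fitness)
print(fitness(best))
```

Because children depend on a single parent, the population can be sharded across embedded devices with only the elite individual exchanged between them.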
In LEO (Low Earth Orbit) satellite communication systems, the satellite network consists of a large number of satellites, and the dynamically changing network environment affects the results of distributed computing. To improve fault tolerance, a novel public blockchain consensus mechanism that applies a distributed computing architecture to a public network is proposed. Redundant calculation in the blockchain ensures the credibility of the results, and the transactions carrying a task's calculation results are stored in sequence, distributed across Directed Acyclic Graphs (DAGs). The transactions issued by nodes are connected to form a net, which can quickly provide node reputation evaluation without relying on third parties. Simulations show that the proposed blockchain has the following advantages: 1. its task processing speed can approach that of the fastest node in the entire blockchain; 2. when the tasks' arrival intervals and the number of demanded working nodes (WNs) meet certain conditions, the network can tolerate more than 50% malicious devices; 3. whether the number of nodes grows or shrinks, the network remains robust by adjusting the tasks' arrival interval and the demanded WNs.
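The core reputation idea, that redundant computation lets agreement with the majority result serve as a trust signal, can be sketched in a simplified form. Plain majority voting stands in here for the paper's DAG-based mechanism, and the scoring rule is an assumption:

```python
from collections import Counter

def majority_result(results):
    """Accept the result reported by the most working nodes."""
    return Counter(results).most_common(1)[0][0]

def update_reputation(rep, reports, accepted):
    """Raise reputation for nodes that agree with the accepted result,
    lower it for nodes that disagree (hypothetical +1/-1 rule)."""
    for node, result in reports.items():
        rep[node] = rep.get(node, 0) + (1 if result == accepted else -1)
    return rep

reports = {"n1": 42, "n2": 42, "n3": 42, "n4": 99}   # n4 is faulty or malicious
accepted = majority_result(reports.values())
rep = update_reputation({}, reports, accepted)
print(accepted, rep)   # 42 {'n1': 1, 'n2': 1, 'n3': 1, 'n4': -1}
```

Majority voting of this kind tolerates faults only while honest reporters outnumber malicious ones for each task, which is why the abstract conditions its >50% tolerance on the arrival interval and the number of demanded WNs.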
To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on a variance reduction method: a lightweight, low-overhead, parallelized scheme for the model training process. To validate these claims, we conducted several experiments on classical datasets. Results show that the proposed framework steadily accelerates solver training in both local and distributed modes.
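The variance reduction idea can be illustrated with an SVRG-style corrected stochastic gradient: the per-sample gradient, minus its value at a periodic snapshot, plus the snapshot's full gradient. The sketch below applies it to a one-parameter least-squares problem purely for illustration; the paper embeds the idea inside L-BFGS:

```python
import random

def grad_i(w, x, y):
    return 2 * x * (w * x - y)   # gradient of the sample loss (w*x - y)**2

def svrg(data, w=0.0, lr=0.05, epochs=20, seed=0):
    rng = random.Random(seed)
    n = len(data)
    for _ in range(epochs):
        snapshot = w                      # refresh snapshot each epoch
        full_grad = sum(grad_i(snapshot, x, y) for x, y in data) / n
        for _ in range(n):
            x, y = data[rng.randrange(n)]
            # Variance-reduced gradient: unbiased, with variance that
            # shrinks as w approaches the snapshot.
            g = grad_i(w, x, y) - grad_i(snapshot, x, y) + full_grad
            w -= lr * g
    return w

data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]  # true slope is 3
print(round(svrg(data), 3))
```

The correction term keeps the stochastic steps cheap while letting the solver use a larger, non-decaying step size than plain SGD.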
Recently, the wireless distributed computing (WDC) concept has emerged, promising manifold improvements to current wireless technologies. Despite the various expected benefits of this concept, significant drawbacks have been addressed in the open literature. One of WDC's key challenges is the impact of wireless channel quality on the load of distributed computations. Therefore, this research investigates the wireless channel's impact on WDC performance when the latter is applied to spectrum sensing in cognitive radio (CR) technology. However, a trade-off exists between accuracy and computational complexity in spectrum sensing approaches: increasing accuracy is accompanied by increased computational complexity, which results in greater power consumption and processing time. A novel WDC scheme for the cyclostationary feature detection spectrum sensing approach is proposed in this paper and thoroughly investigated. The benefits of the proposed scheme are first presented. Then, the impact of the wireless channel on the proposed scheme is addressed in two scenarios. In the first scenario, workload matrices are distributed over the wireless channel
This paper discusses a model of how the Agent is applied to implement distributed computing in Ada95 and presents a dynamic allocation strategy for distributed computing based on pre-allocation and the Agent. The aim of this strategy is to realize dynamically balanced allocation.
Today we witness the exponential growth of scientific research. This fast growth has been possible thanks to the rapid development of computing systems, from their first days in 1947 and the invention of the transistor to the present day's high-performance, scalable distributed computing systems. This fast growth of computing systems was first observed by Gordon E. Moore in 1965 and postulated as Moore's Law. For the development of scalable distributed computing systems, the year 2000 was very special: the first GHz-speed processor, GB-size memory, and GB/s data transmission through networks were achieved. Interestingly, in the same year usable Grid computing systems emerged, which gave a strong impulse to the rapid development of distributed computing systems. This paper recognizes these facts that occurred in the year 2000 as the G-phenomena, a millennium cornerstone for the rapid development of scalable distributed systems that evolved around the Grid and Cloud computing paradigms.
In distributed quantum computing (DQC), quantum hardware design mainly focuses on providing as many high-quality inter-chip connections as possible, while quantum software tries its best to reduce the required number of remote quantum gates between chips. However, this "hardware first, software follows" methodology may not fully exploit the potential of DQC. Inspired by classical software-hardware co-design, this paper explores the design space of application-specific DQC architectures. More specifically, we propose AutoArch, an automated quantum chip network (QCN) structure design tool. With qubit grouping followed by a customized QCN design, AutoArch can generate a near-optimal DQC architecture suited to target quantum algorithms. Experimental results show that the DQC architectures generated by AutoArch outperform general QCN architectures when executing the target quantum algorithms.
In the current noisy intermediate-scale quantum (NISQ) era, a single quantum processing unit (QPU) is insufficient to implement large-scale quantum algorithms; this has driven extensive research into distributed quantum computing (DQC). DQC involves the cooperative operation of multiple QPUs but is challenged by excessive communication complexity. To address this issue, this paper proposes a quantum circuit partitioning method based on spectral clustering. The approach transforms quantum circuits into weighted graphs and, through computation of the Laplacian matrix and clustering techniques, identifies candidate partition schemes that minimize the total weight of the cut. Additionally, a global gate search tree strategy is introduced to systematically explore opportunities for the merged transfer of global gates, thereby minimizing the transmission cost of distributed quantum circuits and selecting the optimal partition scheme from the candidates. Finally, the proposed method is evaluated through various comparative experiments. The results demonstrate that spectral-clustering-based partitioning exhibits robust stability and runtime efficiency on quantum circuits of different scales. In experiments involving the quantum Fourier transform algorithm and RevLib quantum circuits, the transmission cost achieved by the global gate search tree strategy is significantly reduced.
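The cut objective being minimized can be made concrete on a toy circuit: build a weighted graph whose edge weights count the two-qubit gates between each qubit pair, then find the bipartition with minimum total cut weight. For clarity on small circuits, the sketch below uses brute-force search over bipartitions rather than spectral clustering:

```python
from itertools import combinations

def interaction_graph(gates):
    """Edge weight (a, b) = number of two-qubit gates acting on that pair."""
    w = {}
    for a, b in gates:
        key = (min(a, b), max(a, b))
        w[key] = w.get(key, 0) + 1
    return w

def min_cut_bipartition(weights, n_qubits):
    """Score every bipartition exhaustively; fine for small circuits."""
    best = (float("inf"), None)
    for size in range(1, n_qubits // 2 + 1):
        for part in combinations(range(n_qubits), size):
            group = set(part)
            cut = sum(w for (a, b), w in weights.items()
                      if (a in group) != (b in group))
            best = min(best, (cut, part))
    return best

# Hypothetical circuit: qubits 0-1 and 2-3 interact heavily, one cross gate.
gates = [(0, 1), (0, 1), (2, 3), (2, 3), (1, 2)]
weights = interaction_graph(gates)
print(min_cut_bipartition(weights, 4))  # (1, (0, 1)): only the (1, 2) gate is cut
```

Spectral clustering replaces the exponential search with eigenvectors of the graph Laplacian, which is what makes the approach scale to larger circuits.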
Distributed computing frameworks are the fundamental component of distributed computing systems. They provide an essential way to support the efficient processing of big data on clusters or in the cloud. The size of big data increases at a pace faster than the growth in the big data processing capacity of clusters. Thus, distributed computing frameworks based on the MapReduce computing model are not adequate for big data analysis tasks, which often require running complex analytical algorithms on extremely big data sets at terabyte scale. In performing such tasks, these frameworks face three challenges: computational inefficiency due to high I/O and communication costs, non-scalability to big data due to memory limits, and a limited set of analytical algorithms, because many serial algorithms cannot be implemented in the MapReduce programming model. New distributed computing frameworks need to be developed to conquer these challenges. In this paper, we review MapReduce-type distributed computing frameworks currently used in handling big data and discuss their problems in big data analysis. In addition, we present a non-MapReduce distributed computing framework that has the potential to overcome these challenges.
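The MapReduce model's structure, including the shuffle step whose I/O and communication cost the abstract criticizes, can be illustrated with a minimal in-memory word count. This is a sketch of the programming model only, not a distributed implementation:

```python
from collections import defaultdict

def map_phase(docs):
    # Map: emit (word, 1) pairs from every document; runs in parallel on a cluster.
    return [(word, 1) for doc in docs for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group values by key, the communication-heavy step on clusters.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values independently.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big clusters", "big data analysis"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 3, 'data': 2, 'clusters': 1, 'analysis': 1}
```

Iterative analytical algorithms must chain many such map-shuffle-reduce rounds, with intermediate results written out between rounds, which is precisely where the three challenges above arise.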
Distributed cryptographic computing systems play an important role because cryptographic computing is extremely computation-intensive. However, no general cryptographic computing system is available. Grid technology can give efficient computational support to cryptographic applications. Therefore, a general-purpose grid-based distributed computing system called DCCS is put forward in this paper. The architecture of DCCS is first described briefly; the task-division policy adopted in DCCS is then presented; and the method for managing subtasks is discussed in detail. Furthermore, the building and execution process of a computing job is described. Finally, the details of implementing DCCS on Globus Toolkit 4 are illustrated.
This paper proposes a distributed computing architecture for protection functions within a digital substation, in order to achieve data redundancy, functional redundancy, and functional coordination. This is made possible primarily by the advances in digital and communications technology within a substation, particularly the process bus, which allows data sharing between Intelligent Electronic Devices (IEDs). Results of a backup protection investigation, using redundant information both within the substation and on a wide-area basis, are then presented. A campus microgrid protection scheme was used as a test case to demonstrate the concept of protection using shared information. Finally, the paper proposes a multi-agent system as a simulation platform, which can be used to further demonstrate some of these concepts.
Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Automated brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes, and machine learning models have become key players in automating it. Gradient descent methods are the mainstream algorithms for solving machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. First, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the non-differentiability at the zero point encountered by the traditional hinge loss during gradient descent optimization. Second, L1 regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum and distributed adaptive PGD with momentum (DPGD) are proposed and applied to the L1-Smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters, enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark to fully utilize multi-core resources. Owing to the sparsity induced by L1 regularization of the parameters, it exhibits significantly accelerated convergence; in terms of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve state-of-the-art accuracy and efficiency in brain tumor detection. Among pre-trained models, both PGD and DPGD outperform other models, with an accuracy of 95.21%.
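Two building blocks named in the abstract, a smooth hinge loss that is differentiable at the hinge point and the L1 proximal (soft-thresholding) operator applied after each gradient step, can be sketched as follows. A common smooth-hinge formulation is assumed here; the paper's exact variant may differ:

```python
def smooth_hinge(z):
    """Smooth hinge: quadratic near the hinge, differentiable everywhere."""
    if z >= 1:
        return 0.0
    if z <= 0:
        return 0.5 - z
    return 0.5 * (1 - z) ** 2

def soft_threshold(w, t):
    """Proximal operator of t*|w|: shrink toward zero, inducing sparsity."""
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

def prox_sgd_step(w, grads, lr, lam):
    """One proximal gradient step: gradient update, then L1 shrinkage."""
    return [soft_threshold(wi - lr * gi, lr * lam) for wi, gi in zip(w, grads)]

print(smooth_hinge(1.5), smooth_hinge(0.5), smooth_hinge(-1.0))  # 0.0 0.125 1.5
print(prox_sgd_step([0.5, -0.02], [0.1, 0.0], lr=0.1, lam=0.1))
```

The quadratic segment matches both value and slope of the linear pieces at z = 0 and z = 1, which is what removes the kink that breaks plain hinge-loss gradient descent; the shrinkage step is what drives small weights exactly to zero.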
In this study, we examine efficient Big Data Engineering and Extract, Transform, Load (ETL) processes in the healthcare sector, building on the robust foundation provided by the MIMIC-III Clinical Database. Our investigation explores various methodologies for improving the efficiency of ETL processes, with a primary emphasis on optimizing time and resource utilization. Through experimentation on a representative dataset, we demonstrate the advantages of incorporating PySpark and Docker containerized applications. Our research shows significant gains in time efficiency, process streamlining, and resource optimization from using PySpark for distributed computing within Big Data Engineering workflows. Additionally, we underscore the strategic integration of Docker containers and their pivotal role in improving scalability and reproducibility within the ETL pipeline. This paper summarizes the key insights from our experiments, highlighting the practical implications and benefits of adopting PySpark and Docker. By streamlining Big Data Engineering and ETL processes in the context of clinical big data, our study contributes to the ongoing discourse on optimizing data processing efficiency in healthcare applications. The source code is available on request.
In this paper, we adopt the Java platform to build a multi-tier distributed object enterprise computing model that provides an open, flexible, robust, cross-platform standard for the new generation of enterprise applications. Within this model, we define remote server objects as session or entity objects according to their roles in a distributed application server, separating information details from business operations for software reuse. A web store system is implemented using this multi-tier distributed object enterprise computing model.
In this paper, we study a distributed model to cooperatively compute variational inequalities over time-varying directed graphs. Each agent has access to a part of the full mapping and holds a local view of the global set constraint. By virtue of an auxiliary vector that compensates for the graph imbalance, we propose a consensus-based distributed projection algorithm relying on local computation and communication at each agent. We show the convergence of this algorithm over uniformly jointly strongly connected unbalanced digraphs with nonidentical local constraints, and we provide a numerical example to illustrate its effectiveness.
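The projected-consensus iteration at the heart of such algorithms can be illustrated in a stylized form: two agents, each holding half of the mapping F(x) = x - 3, repeatedly average their states (a balanced two-node graph) and take a projected step onto a shared interval constraint X = [0, 2]. The paper's setting is far more general (time-varying unbalanced digraphs, nonidentical constraints, the imbalance-correcting auxiliary vector), so this is only a sketch of the basic mechanism:

```python
def project(x, lo=0.0, hi=2.0):
    """Projection onto the shared interval constraint X = [0, 2]."""
    return max(lo, min(hi, x))

def F1(x): return 0.5 * x - 1.0   # agent 1's part of the mapping
def F2(x): return 0.5 * x - 2.0   # agent 2's part; F = F1 + F2 = x - 3

def distributed_vi(steps=200, alpha=0.1):
    x1, x2 = 0.0, 1.0                        # agents start apart
    for _ in range(steps):
        c = 0.5 * (x1 + x2)                  # consensus averaging
        x1 = project(c - alpha * 2 * F1(c))  # local projected step
        x2 = project(c - alpha * 2 * F2(c))  # (factor 2 rescales the local part)
    return x1, x2

x1, x2 = distributed_vi()
print(round(x1, 6), round(x2, 6))  # both converge to 2.0, the VI solution on X
```

Since F(x) = x - 3 is negative everywhere on [0, 2], the variational inequality F(x*)(x - x*) >= 0 for all x in X forces x* = 2, which is exactly where both agents settle.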
Person re-identification has been a hot research issue in the field of computer vision. In recent years, as the theory has matured, a large number of excellent methods have been proposed. However, large-scale datasets and huge networks make training a time-consuming process, and the parameters and values generated during training also consume substantial computing resources. Therefore, we apply a distributed cloud computing method to the person re-identification task. Using distributed data storage, pedestrian datasets and parameters are stored on cloud nodes. To speed up operation and increase fault tolerance, we add a data redundancy mechanism that copies data blocks to different nodes, and we propose a hash loop optimization algorithm to optimize the data distribution process. Moreover, we assign different layers of the re-identification network to different nodes to complete training via model parallelism. Comparing the accuracy and operation speed of the distributed model on the video-based dataset MARS shows that our distributed model trains faster.
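The abstract names its "hash loop optimization algorithm" without detail; one standard realization of a hash loop for distributing blocks across nodes is a consistent-hashing ring, sketched below as an assumption about the intended mechanism:

```python
import hashlib
from bisect import bisect_right

def ring_hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Consistent-hashing ring: each block goes to the first node clockwise
    from its hash, so adding or removing a node relocates few blocks."""
    def __init__(self, nodes, vnodes=50):
        # Virtual nodes smooth out the load across physical nodes.
        self.ring = sorted((ring_hash(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self.keys = [h for h, _ in self.ring]
    def node_for(self, block_id):
        idx = bisect_right(self.keys, ring_hash(block_id)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
placement = {b: ring.node_for(b) for b in ["blk-1", "blk-2", "blk-3"]}
print(placement)  # each block deterministically mapped to one cloud node
```

Redundant copies, as in the abstract's data redundancy mechanism, would go to the next distinct nodes clockwise on the same ring.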
Federated Learning (FL) has become a popular training paradigm in recent years. However, stragglers are critical bottlenecks when training over an Internet of Things (IoT) network: these nodes deliver stale updates to the server, which slow down convergence. In this paper, we study the impact of stale updates on the global model and observe it to be significant. To address this, we propose a weighted averaging scheme, FedStrag, that optimizes training in the presence of stale updates. The work focuses on training a model in an IoT network facing multiple challenges, such as resource constraints, stragglers, network issues, and device heterogeneity. To this end, we developed a time-bounded asynchronous FL paradigm that can train a model on the continuous flow of data in the edge-fog-cloud continuum. To test the FedStrag approach, a model is trained under multiple straggler scenarios on both Independent and Identically Distributed (IID) and non-IID datasets on Raspberry Pis. The experimental results suggest that FedStrag outperforms the FedAvg baseline in all tested cases.
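The weighted-averaging idea behind a scheme like FedStrag, down-weighting each client's update by its staleness before aggregating, might look like the following sketch. The exponential decay rule and its rate are hypothetical, not the paper's exact weighting:

```python
def staleness_weighted_avg(updates, decay=0.5):
    """Average client weight vectors, discounting an update that is
    s rounds stale by the (hypothetical) factor decay**s.
    updates: list of (weight_vector, staleness) pairs."""
    alphas = [decay ** s for _, s in updates]
    total = sum(alphas)
    dim = len(updates[0][0])
    agg = [0.0] * dim
    for (w, _), a in zip(updates, alphas):
        for i in range(dim):
            agg[i] += (a / total) * w[i]
    return agg

updates = [([1.0, 1.0], 0),   # fresh update: full weight 1.0
           ([9.0, 9.0], 2)]   # two rounds stale: weight 0.25
print(staleness_weighted_avg(updates))  # ≈ [2.6, 2.6]
```

Compared with plain FedAvg, which would return [5.0, 5.0] here, the stale straggler's pull on the global model is sharply reduced without discarding its contribution entirely.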
Spark performs excellently in large-scale data-parallel computing and iterative processing. However, with the increase in data size and program complexity, the default scheduling strategy has difficulty meeting the demands of resource utilization and performance optimization. Scheduling strategy optimization, as a key direction for improving Spark's execution efficiency, has attracted widespread attention. This paper first introduces the basic theory of Spark, compares its default scheduling strategies, and discusses common scheduling performance evaluation indicators and the factors affecting scheduling efficiency. Subsequently, existing scheduling optimization schemes are summarized under three scheduling modes (load characteristics, cluster characteristics, and the matching of both), and representative algorithms are analyzed in terms of performance indicators and applicable scenarios, comparing the advantages and disadvantages of the different modes. The paper also explores in detail the integration of Spark scheduling strategies with specific application scenarios and the challenges arising in production environments. Finally, the limitations of existing schemes are analyzed and future directions are outlined.
Funding (coded distributed computing in heterogeneous systems): supported by NSF China (Nos. T2421002, 62061146002, 62020106005).
Funding (dynamic multi-beam resource allocation for LEO constellations): supported by the National Key Research and Development Program of China (2021YFB2900603) and the National Natural Science Foundation of China (61831008).
Funding (blockchain consensus for LEO satellite networks): funded in part by the National Natural Science Foundation of China (Grant Nos. 61772352, 62172061, 61871422), the National Key Research and Development Project (Grant Nos. 2020YFB1711800 and 2020YFB1707900), the Science and Technology Project of Sichuan Province (Grant Nos. 2021YFG0152, 2021YFG0025, 2020YFG0479, 2020YFG0322, 2020GFW035, 2020GFW033, 2020YFH0071), the R&D Project of Chengdu City (Grant No. 2019-YF05-01790-GX), and the Central Universities of Southwest Minzu University (Grant No. ZYN2022032).
Funding (distributed L-BFGS computing framework): partly supported by the National Key Basic Research Program of China (2016YFB1000100) and partly by the National Natural Science Foundation of China (No. 61402490).
基金in part,supported by the European Commission through the EU FP7 SEE GRID SCI and SCI BUS projectsby the Grant 098-0982562-2567 awarded by the Ministry of Science,Education and Sports of the Republic of Croatia.
Abstract: Today we witness the exponential growth of scientific research. This fast growth is possible thanks to the rapid development of computing systems, from its first days in 1947 and the invention of the transistor to the present day's high-performance, scalable distributed computing systems. This fast growth of computing systems was first observed by Gordon E. Moore in 1965 and postulated as Moore's Law. For the development of scalable distributed computing systems, the year 2000 was very special: the first GHz-speed processor, GB-size memory, and GB/s data transmission through networks were achieved. Interestingly, in the same year usable Grid computing systems emerged, which gave a strong impulse to the rapid development of distributed computing systems. This paper recognizes these facts, which occurred in the year 2000, as the G-phenomena, a millennium cornerstone for the rapid development of scalable distributed systems that evolved around the Grid and Cloud computing paradigms.
Funding: Project supported by the National Key R&D Program of China (Grant No. 2023YFA1009403), the National Natural Science Foundation of China (Grant Nos. 62072176 and 62472175), and the "Digital Silk Road" Shanghai International Joint Lab of Trustworthy Intelligent Software (Grant No. 22510750100).
Abstract: In distributed quantum computing (DQC), quantum hardware design mainly focuses on providing as many high-quality inter-chip connections as possible. Meanwhile, quantum software tries its best to reduce the required number of remote quantum gates between chips. However, this "hardware first, software follows" methodology may not fully exploit the potential of DQC. Inspired by classical software-hardware co-design, this paper explores the design space of application-specific DQC architectures. More specifically, we propose AutoArch, an automated quantum chip network (QCN) structure design tool. With qubit grouping followed by a customized QCN design, AutoArch can generate a near-optimal DQC architecture suitable for target quantum algorithms. Experimental results show that the DQC architecture generated by AutoArch can outperform other general QCN architectures when executing target quantum algorithms.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62072259), in part by the Natural Science Foundation of Jiangsu Province (Grant No. BK20221411), the PhD Start-up Fund of Nantong University (Grant No. 23B03), and the Postgraduate Research & Practice Innovation Program of the School of Information Science and Technology, Nantong University (Grant No. NTUSISTPR2405).
Abstract: In the current noisy intermediate-scale quantum (NISQ) era, a single quantum processing unit (QPU) is insufficient to implement large-scale quantum algorithms; this has driven extensive research into distributed quantum computing (DQC). DQC involves the cooperative operation of multiple QPUs but is concurrently challenged by excessive communication complexity. To address this issue, this paper proposes a quantum circuit partitioning method based on spectral clustering. The approach transforms quantum circuits into weighted graphs and, through computation of the Laplacian matrix and clustering techniques, identifies candidate partition schemes that minimize the total weight of the cut. Additionally, a global gate search tree strategy is introduced to meticulously explore opportunities for the merged transfer of global gates, thereby minimizing the transmission cost of distributed quantum circuits and selecting the optimal partition scheme from the candidates. Finally, the proposed method is evaluated through various comparative experiments. The experimental results demonstrate that spectral-clustering-based partitioning exhibits robust stability and runtime efficiency on quantum circuits of different scales. In experiments involving the quantum Fourier transform algorithm and RevLib quantum circuits, the transmission cost achieved by the global gate search tree strategy is significantly reduced.
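The partitioning objective can be made concrete with a small sketch: model the circuit as a weighted interaction graph (edge weight = number of two-qubit gates between a qubit pair), build its Laplacian, and pick the balanced bipartition with minimum cut weight. A brute-force search stands in here for the spectral step; all names and the toy circuit are illustrative, not the paper's implementation.

```python
from itertools import combinations

def laplacian(n, weighted_edges):
    """Graph Laplacian L = D - W of the circuit's interaction graph."""
    L = [[0.0] * n for _ in range(n)]
    for (u, v), w in weighted_edges.items():
        L[u][v] -= w
        L[v][u] -= w
        L[u][u] += w
        L[v][v] += w
    return L

def cut_weight(weighted_edges, part_a):
    """Total weight of edges crossing the bipartition (part_a, rest)."""
    a = set(part_a)
    return sum(w for (u, v), w in weighted_edges.items()
               if (u in a) != (v in a))

def best_balanced_cut(n, weighted_edges):
    """Brute-force stand-in for the spectral relaxation: the balanced
    bipartition with minimum cut weight (= fewest inter-chip gates)."""
    return min(combinations(range(n), n // 2),
               key=lambda p: cut_weight(weighted_edges, p))

# toy 4-qubit circuit: qubits 0-1 and 2-3 interact heavily, weak link 1-2
edges = {(0, 1): 5, (2, 3): 4, (1, 2): 1}
print(best_balanced_cut(4, edges))   # → (0, 1): only the weak edge is cut
```

Spectral clustering approximates this NP-hard search via the eigenvectors of the Laplacian, which is why the method scales where brute force cannot.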
Funding: Supported by the National Natural Science Foundation of China (No. 61972261) and the Basic Research Foundations of Shenzhen (Nos. JCYJ20210324093609026 and JCYJ20200813091134001).
Abstract: Distributed computing frameworks are the fundamental component of distributed computing systems. They provide an essential way to support the efficient processing of big data on clusters or in the cloud. The size of big data increases at a pace that is faster than the increase in the big data processing capacity of clusters. Thus, distributed computing frameworks based on the MapReduce computing model are not adequate to support big data analysis tasks, which often require running complex analytical algorithms on extremely big data sets in terabytes. In performing such tasks, these frameworks face three challenges: computational inefficiency due to high I/O and communication costs, non-scalability to big data due to memory limits, and limited analytical algorithms, because many serial algorithms cannot be implemented in the MapReduce programming model. New distributed computing frameworks need to be developed to conquer these challenges. In this paper, we review MapReduce-type distributed computing frameworks that are currently used in handling big data and discuss their problems when conducting big data analysis. In addition, we present a non-MapReduce distributed computing framework that has the potential to overcome big data analysis challenges.
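The MapReduce model being critiqued reduces every computation to three phases: map records to key-value pairs, shuffle (group by key), and reduce each group. A minimal single-machine sketch of that contract, using the classic word-count example (function names are my own, not any framework's API):

```python
from collections import defaultdict

def map_phase(records, mapper):
    # each record emits zero or more (key, value) pairs
    return [kv for rec in records for kv in mapper(rec)]

def shuffle(pairs):
    # group values by key; in a real cluster this is the costly
    # network-and-disk step the abstract's I/O critique targets
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    return {k: reducer(k, vs) for k, vs in groups.items()}

docs = ["big data", "big clusters"]
counts = reduce_phase(
    shuffle(map_phase(docs, lambda d: [(w, 1) for w in d.split()])),
    lambda k, vs: sum(vs))
print(counts)   # {'big': 2, 'data': 1, 'clusters': 1}
```

Algorithms that need shared state or many iterations over the same data fit this rigid map-shuffle-reduce pipeline poorly, which is the memory and expressiveness limitation the review highlights.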
Funding: Supported by the National Basic Research Program of China (973 Program, 2004CB318004), the National Natural Science Foundation of China (NSFC 90204016), and the National High Technology Research and Development Program of China (2003AA144030).
Abstract: Distributed cryptographic computing systems play an important role, since cryptographic computing is extremely computation-sensitive. However, no general cryptographic computing system is available. Grid technology can provide efficient computational support for cryptographic applications. Therefore, a general-purpose grid-based distributed computing system called DCCS is put forward in this paper. The architecture of DCCS is briefly described first. The policy of task division adopted in DCCS is then presented. The method of managing subtasks is further discussed in detail. Furthermore, the building and execution process of a computing job is revealed. Finally, the details of the DCCS implementation under Globus Toolkit 4 are illustrated.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 51277009.
Abstract: This paper proposes a distributed computing architecture for protection functions within a digital substation, in order to achieve data redundancy, functional redundancy, and functional coordination. This is made possible primarily by advances in digital and communications technology within the substation, particularly the process bus, which allows data sharing between Intelligent Electronic Devices (IEDs). Results of a backup protection investigation, using redundant information both within the substation and on a wide-area basis, are then presented. A campus microgrid protection scheme is used as a test case to demonstrate the concept of protection using shared information. Finally, the paper proposes a multi-agent system as a simulation platform, which can be used to further demonstrate some of these concepts.
Funding: Supported by the Natural Science Foundation of Ningxia Province (No. 2021AAC03230).
Abstract: Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection. Gradient descent methods are the mainstream algorithms for solving machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM. It avoids the issue of non-differentiability at the zero point encountered by the traditional hinge loss function during gradient descent optimization. Secondly, the L1 regularization method is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum, and distributed adaptive PGD with momentum (DPGD), are proposed and applied to the L1-Smooth SVM. Distributed computing is crucial in large-scale data analysis, with its value manifested in extending algorithms to distributed clusters, thus enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of the computer's multi-core resources. Due to the sparsity induced by L1 regularization on parameters, it exhibits significantly accelerated convergence. From the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve cutting-edge accuracy and efficiency in brain tumor detection. From pre-trained models, both PGD and DPGD outperform other models, boasting an accuracy of 95.21%.
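The two building blocks named above have standard closed forms that a short sketch can show: a smoothed hinge loss that is differentiable everywhere (Rennie's quadratic smoothing is used here; the paper's exact variant may differ), and the proximal operator of the L1 penalty, i.e. soft thresholding, applied after each gradient move. The helper names are illustrative.

```python
import math

def smooth_hinge(z):
    """Rennie-style smooth hinge: differentiable everywhere,
    unlike the plain hinge max(0, 1 - z)."""
    if z >= 1.0:
        return 0.0
    if z <= 0.0:
        return 0.5 - z
    return 0.5 * (1.0 - z) ** 2

def soft_threshold(w, lam):
    """Proximal operator of lam*|w|: shrinks weights toward zero,
    zeroing out small ones -- the source of L1 sparsity."""
    return math.copysign(max(abs(w) - lam, 0.0), w)

def pgd_step(w, grad, lr, lam):
    """One proximal gradient step: gradient move, then L1 prox."""
    return [soft_threshold(wi - lr * gi, lr * lam)
            for wi, gi in zip(w, grad)]

print(soft_threshold(0.3, 0.5))   # 0.0  (small weight pruned)
print(soft_threshold(-1.2, 0.5))  # -0.7 (shrunk toward zero)
```

Momentum and the adaptive step sizes of the paper's PGD/DPGD variants wrap around this same step; the prox is what keeps iterates sparse even with noisy stochastic gradients.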
Abstract: In this study, we delve into efficient Big Data Engineering and Extract, Transform, Load (ETL) processes within the healthcare sector, leveraging the robust foundation provided by the MIMIC-III Clinical Database. Our investigation entails a comprehensive exploration of various methodologies aimed at enhancing the efficiency of ETL processes, with a primary emphasis on optimizing time and resource utilization. Through experimentation on a representative dataset, we shed light on the advantages of incorporating PySpark and Docker containerized applications. Our research illuminates significant advancements in time efficiency, process streamlining, and resource optimization attained through the use of PySpark for distributed computing within Big Data Engineering workflows. Additionally, we underscore the strategic integration of Docker containers, delineating their pivotal role in augmenting scalability and reproducibility within the ETL pipeline. This paper encapsulates the pivotal insights gleaned from our experiments, accentuating the practical implications and benefits of adopting PySpark and Docker. By streamlining Big Data Engineering and ETL processes in the context of clinical big data, our study contributes to the ongoing discourse on optimizing data processing efficiency in healthcare applications. The source code is available on request.
Abstract: In this paper, we adopt the Java platform to implement a multi-tier distributed-object enterprise computing model that provides an open, flexible, robust, and cross-platform standard for enterprise applications of the new generation. On top of this model, we define remote server objects as session or entity objects according to their roles in a distributed application server, separating information details from business operations for software reuse. A web store system is implemented using this multi-tier distributed-object enterprise computing model.
Funding: Supported by the National Natural Science Foundation of China (No. 61973043) and the Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0100).
Abstract: In this paper, we study a distributed model to cooperatively compute variational inequalities over time-varying directed graphs. Here, each agent has access to a part of the full mapping and holds a local view of the global set constraint. By virtue of an auxiliary vector to compensate for the graph imbalance, we propose a consensus-based distributed projection algorithm relying on local computation and communication at each agent. We show the convergence of this algorithm over uniformly jointly strongly connected unbalanced digraphs with nonidentical local constraints. We also provide a numerical example to illustrate the effectiveness of our algorithm.
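The consensus-plus-projection pattern can be sketched in a deliberately simplified setting: undirected, balanced mixing over a complete graph and scalar interval constraints, dropping the auxiliary correction vector the paper needs for unbalanced digraphs. Every name and the toy mapping below are illustrative assumptions.

```python
def project(x, lo, hi):
    """Euclidean projection onto the interval constraint [lo, hi]."""
    return min(max(x, lo), hi)

def distributed_vi(local_maps, local_sets, steps=200, lr=0.1):
    """Toy consensus + projection loop: average neighbors, take a step
    along the local piece of the mapping, project onto the local set.
    (Balanced mixing only; the paper's unbalanced-digraph case adds
    an auxiliary compensation vector.)"""
    n = len(local_maps)
    x = [0.0] * n
    for _ in range(steps):
        avg = sum(x) / n                      # consensus over a complete graph
        x = [project(avg - lr * F(avg), *S)   # local mapping + projection
             for F, S in zip(local_maps, local_sets)]
    return x

# two agents each hold half of F(x) = 2(x - 3); the VI over [0, 2] solves at x = 2
maps = [lambda x: x - 3.0, lambda x: x - 3.0]
sets = [(0.0, 2.0), (0.0, 2.0)]
print(distributed_vi(maps, sets))   # both agents converge to 2.0
```

At the boundary solution x = 2 the mapping F(2) = -2 points outward, so the projection, not a zero of F, characterizes the solution, which is exactly what distinguishes a variational inequality from an unconstrained root-finding problem.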
Funding: Supported by the Common Key Technology Innovation Special of Key Industries of the Chongqing Science and Technology Commission under Grant No. cstc2017zdcy-zdyfX0067.
Abstract: Person re-identification has been a hot research issue in the field of computer vision. In recent years, with the maturity of the theory, a large number of excellent methods have been proposed. However, large-scale data sets and huge networks make training a time-consuming process. At the same time, the parameters and their values generated during the training process also take up a lot of computer resources. Therefore, we apply a distributed cloud computing method to perform the person re-identification task. Using a distributed data storage method, pedestrian data sets and parameters are stored in cloud nodes. To speed up operational efficiency and increase fault tolerance, we add a data redundancy mechanism to copy and store data blocks on different nodes, and we propose a hash loop optimization algorithm to optimize the data distribution process. Moreover, we assign different layers of the re-identification network to different nodes to complete the training via model parallelism. Comparing and analyzing the accuracy and speed of the distributed model on the video-based dataset MARS, the results show that our distributed model trains faster.
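The hash-loop placement with redundant copies resembles a consistent-hash ring: blocks map to the next node clockwise on the ring, and replicas go to the following distinct nodes. The sketch below is an illustrative stand-in for the paper's scheme, not its actual algorithm; node names and the block key are made up.

```python
import bisect
import hashlib

def _h(key):
    # stable integer hash for ring positions and block keys
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring with virtual nodes for balance."""
    def __init__(self, nodes, vnodes=50):
        self.ring = sorted((_h(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def locate(self, block, replicas=2):
        """Walk clockwise from the block's hash, collecting the first
        `replicas` distinct nodes (redundant copies on different nodes)."""
        idx = bisect.bisect(self.keys, _h(block)) % len(self.ring)
        owners = []
        while len(owners) < replicas:
            node = self.ring[idx % len(self.ring)][1]
            if node not in owners:
                owners.append(node)
            idx += 1
        return owners

ring = HashRing(["node-a", "node-b", "node-c"])
owners = ring.locate("mars_batch_042", replicas=2)
print(owners)   # two distinct nodes hold copies of the block
```

The appeal of the ring is that adding or removing a node relocates only the blocks adjacent to it, which keeps redistribution cheap as the cloud cluster changes.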
Funding: Supported by SERB, India, through grant CRG/2021/003888, and by financial support to UoH-IoE by MHRD, India (F11/9/2019-U3(A)).
Abstract: Federated Learning (FL) has become a popular training paradigm in recent years. However, stragglers are critical bottlenecks in an Internet of Things (IoT) network during training. These nodes send stale updates to the server, which slows down convergence. In this paper, we study the impact of stale updates on the global model, which is observed to be significant. To address this, we propose a weighted averaging scheme, FedStrag, that optimizes training with stale updates. The work is focused on training a model in an IoT network that faces multiple challenges, such as resource constraints, stragglers, network issues, and device heterogeneity. To this end, we developed a time-bounded asynchronous FL paradigm that can train a model on the continuous inflow of data in the edge-fog-cloud continuum. To test the FedStrag approach, a model is trained under multiple straggler scenarios on both Independent and Identically Distributed (IID) and non-IID datasets on Raspberry Pis. The experimental results suggest that FedStrag outperforms the baseline FedAvg in all tested cases.
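The core idea of staleness-aware aggregation can be sketched in a few lines: down-weight each client's contribution by how many rounds old it is. The 1/(1+staleness) weight below is a common illustrative choice, not necessarily FedStrag's exact formula; all names are assumptions.

```python
def staleness_weighted_avg(dim, updates):
    """Illustrative staleness-aware aggregation: each client model is
    weighted by 1/(1 + staleness), so a straggler's stale update pulls
    the global model less than a fresh one."""
    num = [0.0] * dim
    denom = 0.0
    for weights, staleness in updates:
        a = 1.0 / (1.0 + staleness)
        denom += a
        for i, wi in enumerate(weights):
            num[i] += a * wi
    return [x / denom for x in num]

# fresh client (staleness 0) vs. a straggler whose update is 3 rounds old
new_w = staleness_weighted_avg(2, [([1.0, 1.0], 0), ([5.0, 5.0], 3)])
print(new_w)   # each coordinate = (1*1 + 0.25*5) / 1.25 = 1.8
```

Plain FedAvg would land at 3.0 here; the staleness weight pulls the result toward the fresh client, which is the effect the paper reports as improving convergence under stragglers.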
Funding: Supported in part by the Key Research and Development Program of Shaanxi under Grant 2023-ZDLGY-34.
Abstract: Spark performs excellently in large-scale data-parallel computing and iterative processing. However, with increases in data size and program complexity, the default scheduling strategy has difficulty meeting the demands of resource utilization and performance optimization. Scheduling strategy optimization, as a key direction for improving Spark's execution efficiency, has attracted widespread attention. This paper first introduces the basic theory of Spark, compares several default scheduling strategies, and discusses common scheduling performance evaluation indicators and factors affecting scheduling efficiency. Subsequently, existing scheduling optimization schemes are summarized under three scheduling modes (load characteristics, cluster characteristics, and the matching of both), and representative algorithms are analyzed in terms of performance indicators and applicable scenarios, comparing the advantages and disadvantages of the different scheduling modes. The article also explores in detail the integration of Spark scheduling strategies with specific application scenarios and the challenges in production environments. Finally, the limitations of the existing schemes are analyzed, and future prospects are outlined.