Distributed computing is an important topic in the field of wireless communications and networking, and its high efficiency in handling large amounts of data is particularly noteworthy. Although distributed computing benefits from its ability to process data in parallel, it incurs a communication burden between different servers, which delays the computation process. Recent research has applied coding to distributed computing to reduce this communication burden, where repetitive computation is used to create multicast opportunities so that the same coded information can be reused across different servers. To handle computation tasks in practical heterogeneous systems, we propose a novel coding scheme that effectively mitigates the "straggling effect" in distributed computing. We assume that there are two types of servers in the system whose only difference is their computational capability; the servers with lower computational capability are called stragglers. Given any ratio of fast servers to slow servers and any gap in computational capability between them, we achieve approximately the same computation time for both fast and slow servers by assigning different amounts of computation to each, thus reducing the overall computation time. Furthermore, we investigate the information-theoretic lower bound on the inter-communication load and show that it is within a constant multiplicative gap of the upper bound achieved by our scheme. Various simulations also validate the effectiveness of the proposed scheme.
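To make the load-balancing idea concrete, below is a minimal sketch of proportional task assignment so that fast and slow servers finish at roughly the same time. The server counts, speed ratio, and workload are illustrative assumptions, and the coded redundancy of the actual scheme is omitted.

```python
# Sketch: proportional task assignment so fast and slow servers finish together.
# The speeds, counts, and workload below are illustrative assumptions, not values
# from the paper; the coded redundancy of the proposed scheme is omitted here.

def balance_load(total_rows, n_fast, n_slow, fast_speed, slow_speed):
    """Split `total_rows` units of work so every server needs ~equal wall time.

    Each server's share is proportional to its speed (rows per second),
    so time = share / speed is approximately constant across servers.
    Rounding effects are ignored in this sketch.
    """
    total_speed = n_fast * fast_speed + n_slow * slow_speed
    rows_fast = round(total_rows * fast_speed / total_speed)
    rows_slow = round(total_rows * slow_speed / total_speed)
    return rows_fast, rows_slow

if __name__ == "__main__":
    rows_fast, rows_slow = balance_load(total_rows=12000, n_fast=4, n_slow=2,
                                        fast_speed=3.0, slow_speed=1.0)
    print(f"each fast server: {rows_fast} rows -> {rows_fast / 3.0:.0f} s")
    print(f"each slow server: {rows_slow} rows -> {rows_slow / 1.0:.0f} s")
```

With a 3:1 speed gap, the fast servers receive three times the work of the stragglers, so all servers finish after roughly the same wall time.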
In many applications, such as computational fluid dynamics, weather prediction, image processing, and Markov chain state analysis, the order n of the matrix is often very large, and no serial algorithm can solve such problems. A distributed cluster-based solution for very large systems of linear equations is discussed; it includes the definitions of notations, the partition of the matrix, the communication mechanism, and a master-slave algorithm. The computing cost is O(n^3/N), the memory cost is O(n^2/N), the I/O cost is O(n^2/N), and the communication cost is O(Nn), where N is the number of computing nodes or processes. Tests show that the solution can effectively handle double-precision matrices of size up to 10^6 × 10^6.
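The block-row partition behind the O(n^2/N) memory and per-node communication figures can be illustrated with a small sketch. The Jacobi sweep below is only a stand-in for the paper's master-slave solver; the matrix size, worker count, and diagonally dominant test matrix are assumptions.

```python
import numpy as np

# Sketch: block-row partition of A over N workers and a Jacobi sweep per worker.
# Workers are simulated in a loop; in a real cluster each block would live on a
# different node and the updated vector x would be broadcast every iteration
# (hence the O(n) per-node communication per sweep).

def jacobi_distributed(A, b, n_workers=4, iters=100):
    n = len(b)
    bounds = np.linspace(0, n, n_workers + 1, dtype=int)  # row-block boundaries
    x = np.zeros(n)
    D = np.diag(A)
    for _ in range(iters):
        x_new = np.empty(n)
        for w in range(n_workers):                  # each pass = one worker's job
            lo, hi = bounds[w], bounds[w + 1]
            block = A[lo:hi]                        # this worker stores ~n^2/N entries
            x_new[lo:hi] = (b[lo:hi] - block @ x + D[lo:hi] * x[lo:hi]) / D[lo:hi]
        x = x_new                                   # "broadcast" of the updated x
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 400
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant test matrix
    b = rng.standard_normal(n)
    x = jacobi_distributed(A, b)
    print("residual:", np.linalg.norm(A @ x - b))
```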
In LEO (Low Earth Orbit) satellite communication systems, the satellite network is made up of a large number of satellites, and the dynamically changing network environment affects the results of distributed computing. To improve fault tolerance, a novel public blockchain consensus mechanism that applies a distributed computing architecture in a public network is proposed. Redundant calculation in the blockchain ensures the credibility of the results, and the transactions carrying the calculation results of a task are stored sequentially and in a distributed manner in a directed acyclic graph (DAG). The transactions issued by nodes are connected to form a net, which can quickly provide node reputation evaluation without relying on third parties. Simulations show that the proposed blockchain has the following advantages: 1. the task processing speed of the blockchain can be close to that of the fastest node in the entire blockchain; 2. when the tasks' arrival time intervals and the number of demanded working nodes (WNs) meet certain conditions, the network can tolerate more than 50% of malicious devices; 3. whether the number of nodes in the blockchain is increased or reduced, the network can remain robust by adjusting the task arrival time interval and the demanded WNs.
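The redundant-calculation idea, in which several working nodes compute the same task, the majority result is accepted, and each node's reputation is updated accordingly, can be sketched as follows. The node behaviours and the reputation rule are illustrative assumptions; the DAG storage and consensus details are not reproduced here.

```python
from collections import Counter

# Sketch: redundant execution of one task on several working nodes, with a
# majority vote deciding the credible result and a simple reputation update.
# Node behaviours and the reputation rule are illustrative assumptions.

def run_task_redundantly(task_input, nodes, reputation):
    results = {name: fn(task_input) for name, fn in nodes.items()}
    majority_value, _ = Counter(results.values()).most_common(1)[0]
    for name, value in results.items():            # reward agreement, punish deviation
        reputation[name] += 1 if value == majority_value else -2
    return majority_value

if __name__ == "__main__":
    honest = lambda x: x * x                        # correct computation
    malicious = lambda x: x * x + 1                 # returns a wrong result
    nodes = {"sat-A": honest, "sat-B": honest, "sat-C": malicious}
    reputation = {name: 0 for name in nodes}
    print("accepted result:", run_task_redundantly(7, nodes, reputation))
    print("reputation:", reputation)
```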
To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on a variance reduction method, which is a lightweight, low-overhead, and parallelized scheme for the model training process. To validate these claims, we have conducted several experiments on multiple classical datasets. The results show that our proposed computing framework can steadily accelerate the training process of the solver in either local mode or distributed mode.
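As an illustration of the variance reduction ingredient, here is a minimal SVRG-style gradient estimator for a least-squares problem. SVRG is one common variance reduction method and is used here only as an example; the paper's exact estimator and its coupling with L-BFGS may differ.

```python
import numpy as np

# Sketch: an SVRG-style variance-reduced stochastic gradient for least squares.
# Problem size, learning rate, and iteration counts are illustrative assumptions.

def svrg(X, y, lr=0.02, outer=20, inner=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grad = lambda w, i: X[i] * (X[i] @ w - y[i])          # per-sample gradient
    for _ in range(outer):
        w_snap = w.copy()
        full_grad = X.T @ (X @ w_snap - y) / n            # gradient at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            g = grad(w, i) - grad(w_snap, i) + full_grad  # variance-reduced estimate
            w -= lr * g
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 10))
    w_true = rng.standard_normal(10)
    y = X @ w_true
    w = svrg(X, y)
    print("parameter error:", np.linalg.norm(w - w_true))
```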
Recently, the wireless distributed computing (WDC) concept has emerged, promising manifold improvements to current wireless technologies. Despite the various expected benefits of this concept, significant drawbacks have been addressed in the open literature. One of WDC's key challenges is the impact of wireless channel quality on the load of distributed computations. Therefore, this research investigates the impact of the wireless channel on WDC performance when the latter is applied to spectrum sensing in cognitive radio (CR) technology. A trade-off is found between accuracy and computational complexity in spectrum sensing approaches: increasing the accuracy of these approaches is accompanied by an increase in computational complexity, which results in greater power consumption and processing time. A novel WDC scheme for the cyclostationary feature detection spectrum sensing approach is proposed in this paper and thoroughly investigated. The benefits of the proposed scheme are presented first. Then, the impact of the wireless channel on the proposed scheme is addressed in two scenarios. In the first scenario, workload matrices are distributed over the wireless channel.
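A small sketch of the cyclostationary feature itself may help: the cyclic autocorrelation of a received signal evaluated at a candidate cyclic frequency, which is the kind of per-lag workload that could be split across WDC nodes. The signal model and parameters are illustrative assumptions.

```python
import numpy as np

# Sketch: a cyclostationary feature -- the cyclic autocorrelation -- at one candidate
# cyclic frequency. In a WDC setting, the grid of lags and cyclic frequencies could be
# packed into workload matrices for different nodes; here everything runs locally and
# the signal parameters are illustrative assumptions.

def cyclic_autocorrelation(x, alpha, lag, fs):
    """Estimate R_x^alpha(lag) = <x(t) * conj(x(t+lag)) * exp(-j*2*pi*alpha*t)>."""
    n = len(x) - lag
    t = np.arange(n) / fs
    return np.mean(x[:n] * np.conj(x[lag:lag + n]) * np.exp(-2j * np.pi * alpha * t))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs, fc, n = 10_000, 1_000, 50_000
    t = np.arange(n) / fs
    signal = np.cos(2 * np.pi * fc * t) + 0.5 * rng.standard_normal(n)
    noise = 0.5 * rng.standard_normal(n)
    # A sinusoid exhibits a cyclic feature at alpha = 2*fc; white noise does not.
    print("signal:", abs(cyclic_autocorrelation(signal, alpha=2 * fc, lag=0, fs=fs)))
    print("noise: ", abs(cyclic_autocorrelation(noise,  alpha=2 * fc, lag=0, fs=fs)))
```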
This paper presents a distributed optimization strategy for large-scale traffic networks based on fog computing. Different from the traditional cloud-based centralized optimization strategy, the fog-based distributed optimization strategy distributes its computing tasks to individual sub-processors, thus significantly reducing computation time. A traffic model is built, and a series of communication rules between subsystems are set to ensure that the entire transportation network can be globally optimized while each subsystem achieves its local optimization. Finally, this paper numerically simulates the operation of the traffic network by mixed-integer programming and compares the advantages and disadvantages of the two optimization strategies.
Distributed cryptographic computing systems play an important role since cryptographic computing is extremely computationally intensive. However, no general cryptographic computing system is available. Grid technology can provide efficient computational support for cryptographic applications. Therefore, a general-purpose grid-based distributed computing system called DCCS is put forward in this paper. The architecture of DCCS is briefly described first. The policy of task division adopted in DCCS is then presented. The method of managing subtasks is further discussed in detail. Furthermore, the building and execution process of a computing job is described. Finally, the details of the DCCS implementation under Globus Toolkit 4 are illustrated.
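The task-division policy can be illustrated with a toy cryptographic job: a known-hash key search whose keyspace is split into equal subtasks. The keyspace size and the local worker pool are assumptions; DCCS itself would dispatch such subtasks to grid nodes via Globus rather than to local processes.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

# Sketch: a simple task-division policy for a cryptographic job -- a known-hash key
# search whose keyspace is split into equal subtasks handed to worker processes.
# The keyspace size and hash target are toy assumptions.

KEYSPACE = 2 ** 20                                  # toy keyspace: 20-bit keys

def subtask(bounds):
    lo, hi, target = bounds
    for key in range(lo, hi):                       # exhaustively test this slice
        if hashlib.sha256(key.to_bytes(4, "big")).hexdigest() == target:
            return key
    return None

def divide_job(target, n_workers=4):
    step = KEYSPACE // n_workers
    return [(i * step, (i + 1) * step, target) for i in range(n_workers)]

if __name__ == "__main__":
    secret = 123_456
    target = hashlib.sha256(secret.to_bytes(4, "big")).hexdigest()
    with ProcessPoolExecutor(max_workers=4) as pool:
        hits = [k for k in pool.map(subtask, divide_job(target)) if k is not None]
    print("recovered key:", hits[0])
```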
This paper discusses a model of how the Agent is applied to implement the distributed computing of Ada95 and presents a dynamic allocation strategy for distributed computing that is based on pre-allocation and the Agent. The aim of this strategy is to realize dynamic equilibrium in allocation.
In this paper, we adopt the Java platform to achieve a multi-tier distributed object enterprise computing model, which provides an open, flexible, robust, and cross-platform standard for enterprise applications of the new generation. In addition to this model, we define remote server objects as session or entity objects according to their roles in a distributed application server, which separates information details from business operations for software reuse. A web store system is implemented using this multi-tier distributed object enterprise computing model.
This paper examines planning management problems in a Multiagent-based Distributed Open Computing Environment Model (MDOCEM). First the meaning of planning management in MDOCEM is introduced, then a formal method for describing the associated task partition problems is presented, and a heuristic algorithm which gives an approximate optimum solution is given. Finally the task coordination and the integration of execution results are discussed.
Vehicular networks have been envisioned to provide us with numerous interesting services, such as the dissemination of real-time safety warnings and commercial advertisements via car-to-car communication. However, efficient routing is a research challenge due to the highly dynamic nature of these networks, and the availability of connections imposes additional constraints. Our earlier work in the area of efficient dissemination integrates the advantages of middleware operations with multicast routing to design a framework for distributed routing in vehicular networks. Cloud computing makes use of pools of physical computing resources to meet the requirements of such highly dynamic networks. The solution proposed in this paper applies the principles of cloud computing to our existing framework. The routing protocol works at the network layer for the formation of clouds in specific geographic regions. Simulation results present the efficiency of the model in terms of service discovery, download time, and the queuing delay at the controller nodes.
In distributed quantum computing (DQC), quantum hardware design mainly focuses on providing as many high-quality inter-chip connections as possible. Meanwhile, quantum software tries its best to reduce the required number of remote quantum gates between chips. However, this "hardware first, software follows" methodology may not fully exploit the potential of DQC. Inspired by classical software-hardware co-design, this paper explores the design space of application-specific DQC architectures. More specifically, we propose AutoArch, an automated quantum chip network (QCN) structure design tool. With qubit grouping followed by a customized QCN design, AutoArch can generate a near-optimal DQC architecture suitable for target quantum algorithms. Experimental results show that the DQC architecture generated by AutoArch can outperform other general QCN architectures when executing target quantum algorithms.
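The qubit-grouping step can be viewed as partitioning the qubit interaction graph so that heavily interacting qubits share a chip and few two-qubit gates become remote. The greedy heuristic below is only an illustrative assumption, not AutoArch's algorithm; the circuit and chip capacity are toy values.

```python
from collections import defaultdict

# Sketch: qubit grouping as a greedy graph partition -- place strongly interacting
# qubits on the same chip so that few two-qubit gates become remote.
# The circuit, chip capacity, and greedy rule are illustrative assumptions.

def group_qubits(gates, n_qubits, chip_capacity):
    weight = defaultdict(int)
    for a, b in gates:                              # interaction-graph edge weights
        weight[frozenset((a, b))] += 1
    groups = []
    for q in range(n_qubits):
        best, best_gain = None, -1
        for idx, g in enumerate(groups):
            if len(g) < chip_capacity:              # chip still has free qubit slots
                gain = sum(weight[frozenset((q, m))] for m in g)
                if gain > best_gain:
                    best, best_gain = idx, gain
        if best is None:
            groups.append({q})                      # all chips full: open a new one
        else:
            groups[best].add(q)
    remote = sum(w for e, w in weight.items()       # gates whose qubits sit on
                 if not any(e <= g for g in groups))  # different chips
    return groups, remote

if __name__ == "__main__":
    gates = [(0, 1), (0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (4, 5), (0, 2)]
    groups, remote = group_qubits(gates, n_qubits=6, chip_capacity=3)
    print("chips:", groups, "remote gates:", remote)
```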
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the matching between resources and requirements. A complex algorithm is not feasible because the LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and to enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1500 tasks in 14 s, and the success rate is more than 91% in a typical scenario. The response time is decreased by 40% compared with the conventional GA.
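A minimal mutation-only GA for beam assignment gives a feel for the approach. The encoding (a task-to-beam vector), fitness, and parameters are simplified assumptions and do not reproduce the paper's two-dimensional individual model or its on-board distributed execution; the single-parent reproduction only loosely mirrors "uncorrelated single paternal inheritance".

```python
import numpy as np

# Sketch: a mutation-only ("single-parent") genetic algorithm assigning tasks to
# beams under per-beam capacity. Encoding, fitness, and parameters are illustrative.

def fitness(assign, demand, capacity):
    load = np.bincount(assign, weights=demand, minlength=len(capacity))
    return -np.sum(np.maximum(load - capacity, 0.0))   # penalize overloaded beams

def run_ga(demand, capacity, pop=40, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    n_tasks, n_beams = len(demand), len(capacity)
    population = rng.integers(n_beams, size=(pop, n_tasks))
    for _ in range(gens):
        scores = np.array([fitness(ind, demand, capacity) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]    # keep the best half
        children = parents.copy()
        mask = rng.random(children.shape) < 0.05                # mutate ~5% of genes
        children[mask] = rng.integers(n_beams, size=mask.sum())
        population = np.vstack([parents, children])
    best = max(population, key=lambda ind: fitness(ind, demand, capacity))
    return best, fitness(best, demand, capacity)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    demand = rng.uniform(1, 5, size=60)              # 60 tasks with random demands
    capacity = np.full(8, demand.sum() / 8 * 1.2)    # 8 beams, 20% headroom each
    best, score = run_ga(demand, capacity)
    print("overload penalty of best plan:", -score)
```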
IT is a dynamic field that changes very rapidly; for most companies, efficient management of such systems requires handling tremendously complex situations in terms of hardware and software setup. Hardware and software themselves change quickly over time, and keeping them updated is a difficult problem for most companies; the problem is even more pronounced for companies with a large infrastructure of IT facilities, such as data centers, which are expensive to maintain. Many applications run on company premises and require well-prepared staff to maintain them successfully. With the inception of cloud computing, many companies have transferred their applications and data onto cloud computing platforms in order to reduce maintenance costs, ease maintenance in terms of hardware and software, and obtain reliable and securely accessible services. The benefits of building distributed applications using Google infrastructure are discussed in this paper.
Today we witness the exponential growth of scientific research. This fast growth is possible thanks to the rapid development of computing systems, from their first days in 1947 and the invention of the transistor to the present day's high-performance and scalable distributed computing systems. This fast growth of computing systems was first observed by Gordon E. Moore in 1965 and postulated as Moore's Law. For the development of scalable distributed computing systems, the year 2000 was very special: the first GHz-speed processor, GB-size memory, and GB/s data transmission through networks were achieved. Interestingly, in the same year usable Grid computing systems emerged, which gave a strong impulse to the rapid development of distributed computing systems. This paper recognizes these facts that occurred in the year 2000 as the G-phenomena, a millennium cornerstone for the rapid development of scalable distributed systems that evolved around the Grid and Cloud computing paradigms.
Big data are often processed repeatedly with small changes, which is a major form of big data processing. This incremental-change characteristic of big data means that an incremental computing mode can greatly improve performance. HDFS is the distributed file system of Hadoop, the most popular platform for big data analytics, and HDFS adopts a fixed-size chunking policy, which is inefficient for incremental computing. Therefore, in this paper we propose iHDFS (incremental HDFS), a distributed file system that provides a basic guarantee for big data parallel processing. iHDFS is implemented as an extension to HDFS. In iHDFS, the Rabin fingerprint algorithm is applied to achieve content-defined chunking. This policy makes data chunking much more stable, and intermediate processing results can be reused efficiently, so the performance of incremental data processing can be improved significantly. The effectiveness and efficiency of iHDFS have been demonstrated by experimental results.
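Content-defined chunking can be sketched with a rolling hash: a chunk boundary is declared wherever the hash of a sliding window matches a mask, so a local edit only disturbs nearby chunks while the rest keep their boundaries. A simple polynomial rolling hash stands in for the Rabin fingerprint here; window size, mask, and size limits are assumptions.

```python
import os

# Sketch: content-defined chunking with a rolling hash. A simple polynomial rolling
# hash stands in for the Rabin fingerprint used by iHDFS; window size, modulus, and
# the boundary mask are illustrative assumptions.

WINDOW, BASE, MOD = 48, 257, (1 << 61) - 1
BOUNDARY_MASK = (1 << 13) - 1                # expected average chunk size ~8 KiB

def chunk(data: bytes, min_size=2048, max_size=65536):
    chunks, start, h = [], 0, 0
    power = pow(BASE, WINDOW - 1, MOD)       # weight of the oldest byte in the window
    for i, byte in enumerate(data):
        if i >= WINDOW:                      # slide the window: drop the oldest byte
            h = (h - data[i - WINDOW] * power) % MOD
        h = (h * BASE + byte) % MOD
        size = i - start + 1
        at_boundary = size >= min_size and (h & BOUNDARY_MASK) == 0
        if at_boundary or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

if __name__ == "__main__":
    blob = os.urandom(200_000)
    before = chunk(blob)
    after = chunk(blob[:100_000] + b"INSERTED" + blob[100_000:])   # small local edit
    shared = set(map(hash, before)) & set(map(hash, after))
    print(f"{len(before)} chunks before, {len(after)} after, {len(shared)} unchanged")
```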
Person re-identification has been a hot research issue in the field of computer vision. In recent years, with the maturity of the theory, a large number of excellent methods have been proposed. However, large-scale datasets and huge networks make training a time-consuming process. At the same time, the parameters and their values generated during the training process also take up a lot of computer resources. Therefore, we apply a distributed cloud computing method to perform the person re-identification task. Using a distributed data storage method, pedestrian datasets and parameters are stored in cloud nodes. To improve operational efficiency and increase fault tolerance, we add a data redundancy mechanism that copies and stores data blocks on different nodes, and we propose a hash loop optimization algorithm to optimize the data distribution process. Moreover, we assign different layers of the re-identification network to different nodes to complete the training via model parallelism. By comparing and analyzing the accuracy and operation speed of the distributed model on the video-based dataset MARS, the results show that our distributed model has a faster training speed.
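If the "hash loop" is read as a consistent-hashing ring, an assumption on our part, the placement of replicated data blocks on cloud nodes can be sketched as follows; the virtual-node count and replication factor are illustrative, not the paper's algorithm.

```python
import bisect
import hashlib

# Sketch: placing replicated data blocks on cloud nodes with a consistent-hashing
# ring. Interpreting the "hash loop" as such a ring is an assumption; virtual-node
# count and replication factor below are illustrative.

def _h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=64):
        self._ring = sorted((_h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def place(self, block_id, replicas=3):
        """Return the first `replicas` distinct nodes clockwise from the block's hash."""
        start = bisect.bisect(self._keys, _h(block_id)) % len(self._ring)
        chosen = []
        for i in range(len(self._ring)):
            node = self._ring[(start + i) % len(self._ring)][1]
            if node not in chosen:
                chosen.append(node)
            if len(chosen) == replicas:
                break
        return chosen

if __name__ == "__main__":
    ring = HashRing([f"node-{i}" for i in range(6)])
    for block in ["mars_batch_000", "mars_batch_001", "resnet_layer4_weights"]:
        print(block, "->", ring.place(block))
```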
An attempt has been made to develop a distributed software infrastructure model for onboard data fusion system simulation, which is also applicable to netted radar systems, onboard distributed detection systems, and advanced C3I systems. Two architectures are provided and verified: one is based on the pure TCP/IP protocol and the C/S model and is implemented with Winsock; the other is based on CORBA (Common Object Request Broker Architecture). The performance of the data fusion simulation system, i.e., its reliability, flexibility, and scalability, is improved and enhanced by the two models. Their study provides a valuable exploration of incorporating distributed computation concepts into radar system simulation techniques.
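For the TCP/IP client/server flavour of such an infrastructure, a minimal exchange of one track report looks roughly like the sketch below, written in Python sockets rather than Winsock, with an assumed JSON message format and port.

```python
import json
import socket
import threading

# Sketch: a TCP/IP client/server (C/S) exchange -- a fusion-center server receives
# one JSON-encoded track report per connection from a sensor-node client.
# The message format and port are illustrative assumptions.

HOST, PORT = "127.0.0.1", 50_007

def fusion_server(ready):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                                  # signal that the server is up
        conn, _ = srv.accept()
        with conn:
            report = json.loads(conn.recv(4096).decode())
            print("fusion center received track:", report)

if __name__ == "__main__":
    ready = threading.Event()
    server = threading.Thread(target=fusion_server, args=(ready,))
    server.start()
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:   # sensor node
        cli.connect((HOST, PORT))
        cli.sendall(json.dumps({"sensor": "radar-1", "x": 12.3, "y": -4.5}).encode())
    server.join()
```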