Blockchain technology, with its attributes of decentralization, immutability, and traceability, has emerged as a powerful catalyst for enhancing traditional industries by optimizing business processes. However, transaction performance and scalability have become the main challenges hindering the widespread adoption of blockchain. Because it cannot meet the demands of high-frequency trading, blockchain is unsuitable for many scenarios. To improve transaction capacity, researchers have proposed several on-chain scaling technologies, including lightning networks, directed acyclic graph technology, state channels, and sharding mechanisms, among which sharding emerges as a promising scaling technology. Nevertheless, excessive cross-shard transactions and uneven shard workloads prevent the sharding mechanism from achieving its expected aim. This paper proposes a graph-based sharding scheme for public blockchains that efficiently balances the transaction distribution. By mitigating cross-shard transactions and evening out workloads among shards, the scheme reduces transaction confirmation latency and enhances the transaction capacity of the blockchain. The scheme can therefore support high-frequency transactions as well as better blockchain scalability. Experimental results show that the scheme effectively reduces the cross-shard transaction ratio to a range of 35%-56% and significantly decreases the transaction confirmation latency to 6 s in a blockchain with no more than 25 shards.
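The scheme's exact construction is beyond this abstract; as a rough illustration of graph-based shard assignment, the sketch below treats accounts as vertices and transactions as weighted edges, then greedily places each account on the shard that keeps the most transaction weight on-shard, subject to a load cap. All names and parameters are hypothetical:

```python
from collections import defaultdict

def greedy_shard_assignment(transactions, num_shards, balance_slack=1.2):
    """Assign accounts to shards, trading off cross-shard edges vs. load.

    transactions: iterable of (sender, receiver) account pairs.
    balance_slack: a shard may hold at most slack * average load.
    """
    # Build an undirected weighted graph of account interactions.
    weight = defaultdict(int)
    neighbors = defaultdict(set)
    accounts = set()
    for a, b in transactions:
        accounts.update((a, b))
        if a != b:
            weight[frozenset((a, b))] += 1
            neighbors[a].add(b)
            neighbors[b].add(a)

    cap = balance_slack * len(accounts) / num_shards
    shard_of, load = {}, [0] * num_shards

    # Place high-degree accounts first: they constrain the cut the most.
    for acct in sorted(accounts, key=lambda v: -len(neighbors[v])):
        best, best_gain = None, None
        for s in range(num_shards):
            if load[s] + 1 > cap:
                continue
            # Transaction weight kept on-shard if acct joins shard s.
            gain = sum(weight[frozenset((acct, n))]
                       for n in neighbors[acct] if shard_of.get(n) == s)
            if best_gain is None or gain > best_gain:
                best, best_gain = s, gain
        if best is None:  # capacity guard for tiny inputs
            best = load.index(min(load))
        shard_of[acct] = best
        load[best] += 1
    return shard_of

txs = [("a", "b"), ("a", "b"), ("b", "c"), ("d", "e"), ("e", "f"), ("a", "d")]
print(greedy_shard_assignment(txs, num_shards=2))
```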
A modified reduced-order method for RC networks that takes a divide-and-conquer strategy is presented. The whole network is first partitioned into a set of sub-networks, then each of them is reduced by Krylov subspace techniques, and finally all the reduced sub-networks are recombined. Within a given accuracy, this method can reduce the number of both nodes and components of the circuit, compared with traditional methods that usually only offer a reduced network with fewer nodes. This can markedly accelerate sparse-matrix-based simulators whose performance is dominated by the size of the matrix or the number of components in the circuit.
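The partition-and-recombine pipeline is the paper's; the per-sub-network reduction step it relies on can be illustrated with a generic Krylov projection. A minimal sketch, assuming the MNA form C x'(t) + G x(t) = b u(t) and made-up matrix values:

```python
import numpy as np

def krylov_reduce(G, C, b, k):
    """Moment-matching reduction of an RC network  C x'(t) + G x(t) = b u(t).

    Builds an order-k Krylov basis for (G^{-1} C, G^{-1} b) by Arnoldi
    iteration and projects G, C, b onto it (a PRIMA-style congruence
    transform matching the leading moments at s = 0).
    """
    n = len(b)
    r = np.linalg.solve(G, b)            # zeroth-moment direction
    V = np.zeros((n, k))
    V[:, 0] = r / np.linalg.norm(r)
    for j in range(1, k):
        w = np.linalg.solve(G, C @ V[:, j - 1])
        for i in range(j):               # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)  # assumes no breakdown
    return V.T @ G @ V, V.T @ C @ V, V.T @ b, V

# Tiny illustrative RC ladder (values made up).
G = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
C = np.diag([1e-9, 2e-9, 3e-9])
b = np.array([1.0, 0.0, 0.0])
Gr, Cr, br, V = krylov_reduce(G, C, b, k=2)
print(Gr.shape, Cr.shape)  # (2, 2) (2, 2): three nodes reduced to two
```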
Many machine learning and data mining (MLDM) problems like recommendation, topic modeling, and medical diagnosis can be modeled as computing on bipartite graphs. However, most distributed graph-parallel systems are oblivious to the unique characteristics of such graphs, and existing online graph partitioning algorithms usually cause excessive replication of vertices as well as significant pressure on network communication. This article identifies the challenges and opportunities of partitioning bipartite graphs for distributed MLDM processing and proposes BiGraph, a set of bipartite-oriented graph partitioning algorithms. BiGraph leverages observations such as the skewed distribution of vertices, differentiated computation load, and imbalanced data sizes between the two subsets of vertices to derive a set of optimal graph partitioning algorithms that result in minimal vertex replication and network communication. BiGraph has been implemented on PowerGraph and is shown to deliver a performance boost of up to 17.75X (from 1.16X) for four typical MLDM algorithms, owing to reducing up to 80% of vertex replication and up to 96% of network traffic.
Because of cloud computing's highly aggregated computation mode, it cannot give full play to the resources of edge devices, such as computing and storage. Fog computing can improve the resource utilization efficiency of edge devices and solve the service computing problem of delay-sensitive applications. This paper studies the framework of fog computing and adopts Cloud Atomization Technology to turn physical nodes at different levels into virtual machine nodes. On this basis, it uses graph partitioning theory to build a load balancing algorithm for fog computing based on dynamic graph partitioning. The simulation results show that the fog computing framework after Cloud Atomization can build the system network flexibly, and the dynamic load balancing mechanism can effectively configure system resources while reducing the consumption of node migration brought about by system changes.
Opportunistic routing takes advantage of the broadcast nature of wireless communications by forwarding data through a set of opportunistic paths instead of the single 'best' path of traditional routing. However, a globally scheduled opportunistic scheme like the existing opportunistic routing protocol ExOR incurs considerable transmission latency and energy consumption in large-scale wireless topologies. In this article, a graph partitioning algorithm called minimum cut with Laplacians (MCL) is proposed to divide the ad-hoc network topology into subgraphs with minimized edge cuts across them. Existing opportunistic routing can then be applied locally in each subgraph. In this way, forwarders in different subgraphs can transmit simultaneously, and each node only needs to maintain a local forwarder list instead of a global one. Simulations show that using the MCL scheme in opportunistic routing can reduce the end-to-end delay by about 49% and increase the lifetime of the wireless nodes by about 39%.
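The abstract gives the algorithm's name, minimum cut with Laplacians, but not its internals; the standard spectral bisection that the name points to splits nodes by the sign of the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue). A minimal sketch in numpy, not necessarily the authors' exact procedure:

```python
import numpy as np

def fiedler_bisection(adj):
    """Bisect a graph by the sign of the Fiedler vector of its Laplacian.

    adj: symmetric 0/1 (or weighted) adjacency matrix as a numpy array.
    Returns two lists of node indices approximating a minimum cut.
    """
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    # eigh returns eigenvalues in ascending order for symmetric matrices.
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]  # eigenvector of the 2nd-smallest eigenvalue
    part_a = [i for i, v in enumerate(fiedler) if v >= 0]
    part_b = [i for i, v in enumerate(fiedler) if v < 0]
    return part_a, part_b

# Two triangles joined by a single edge: the cut should fall on that edge.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1
print(fiedler_bisection(adj))  # ([0, 1, 2], [3, 4, 5]) or the swap
```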
Hardware/software (HW/SW) partitioning is one of the key processes in embedded system design. It is used to determine which system components are assigned to hardware and which are processed by software. In contrast with previous research that focuses on developing efficient heuristics, this paper focuses on pre-processing the task graph before HW/SW partitioning, that is, enumerating all the sub-graphs that meet the requirements. Experimental results showed that the original graph can be reduced to 67% of its size in the worst case and 58% in the best case. In conclusion, the reduced task graph saves hardware area while improving partitioning speed and accuracy.
This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts of both each cell and each air route are computed. By assigning both the vertices and the edges these aircraft counts, a weighted graph model comes into being, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is solved by a graph partitioning algorithm that mixes a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm, and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance the workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. Simulation results show that the designed sectors satisfy not only the workload balancing condition but also constraints such as convexity, connectivity, and minimum distance.
In this paper we develop modeling techniques for a social partitioning problem. Different social interaction regulations are imposed during pandemics to prevent the spread of diseases. We suggest partitioning a set of company employees as an effective way to curb the spread, and use integer programming techniques to model it. The goal of the model is to maximize the number of direct interactions between employees who are essential for the company's work, subject to the constraint that all employees should be partitioned into components of no more than a certain size implied by the regulations. We then further develop the basic model to take into account different restrictions and provisions. We also give heuristics for solving the problem. Our computational results include sensitivity analysis on some of the models and analysis of the heuristic performance.
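The abstract does not state the formulation; one plausible assignment-based integer program, with V the employees, E the essential interactions, G an upper bound on the number of components, and s the size cap (all notation introduced here), is:

```latex
\begin{align*}
\max\quad & \sum_{(i,j)\in E}\ \sum_{g=1}^{G} w_{ijg}
  && \text{(retained essential interactions)}\\
\text{s.t.}\quad
  & \sum_{g=1}^{G} y_{ig} = 1 && \forall i \in V
  \quad\text{(one component per employee)}\\
  & \sum_{i\in V} y_{ig} \le s && \forall g
  \quad\text{(component size cap)}\\
  & w_{ijg} \le y_{ig},\quad w_{ijg} \le y_{jg}
  && \forall (i,j)\in E,\ \forall g\\
  & y_{ig} \in \{0,1\},\quad w_{ijg} \ge 0.
\end{align*}
```

Maximization drives w_{ijg} to 1 exactly when employees i and j share component g, so the objective counts the interactions kept inside components.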
Many real-world networks are found to be scale-free. However, graph partitioning technology, as a technology enabling parallel computing, performs poorly when scale-free graphs are provided. The reason is that traditional partitioning algorithms are designed for random and regular networks rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered the state of the art and are used extensively. In this paper, we analyse the reasons why traditional multilevel graph-partitioning algorithms perform poorly on scale-free graphs and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from comparison with traditional bottom-up partitioning. A new multilevel partitioning algorithm, the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down paradigm. An experimental evaluation on seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches.
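The abstract does not detail the betweenness-based algorithm; a related betweenness-driven split in the Girvan-Newman spirit, cutting the highest-betweenness edges until the graph falls apart, conveys the core idea (sketch assumes networkx; the paper's multilevel machinery is omitted):

```python
import networkx as nx

def betweenness_bisect(graph):
    """Split a graph in two by repeatedly cutting the highest-betweenness edge.

    High-betweenness edges carry many shortest paths and tend to bridge
    loosely connected regions, the structure that plain edge-cut heuristics
    mishandle on scale-free graphs.
    """
    g = graph.copy()
    while nx.number_connected_components(g) < 2:
        betweenness = nx.edge_betweenness_centrality(g)
        u, v = max(betweenness, key=betweenness.get)
        g.remove_edge(u, v)
    return [set(c) for c in nx.connected_components(g)]

# Barbell graph: two cliques joined by a path; the path edges are the bridge.
graph = nx.barbell_graph(5, 2)
print(betweenness_bisect(graph))
```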
New theories, methodologies, and technologies have been continuously invented and widely applied in modern software development, along with many new tools and best practices that are of remarkable significance in the software industry. In the Software Engineering (SE) programs of universities, it is quite difficult for curricula to keep up with the fast-evolving technology trend. As a consequence, there have been significant challenges in designing an evolvable SE curriculum. In this paper, we present a knowledge graph based curriculum design method for SE programs. Knowledge Points (KPs) are organized into a multi-layer and multi-dimensionally annotated knowledge graph called the SEKG, and five principles are applied to partition the SEKG into a set of inter-related courses. Metrics for evaluating the quality of an SE curriculum are briefly discussed. This method can not only help design a systematic curriculum from existing software engineering KPs but also facilitate curriculum evolution to adapt to technology trends.
We study the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. It is shown that under both proposed control strategies, the agent eventually converges to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partitioning, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
We enhance a robust parallel finite element model for coastal and estuarine cases with the use of N-Best refinement algorithms in a multilevel partitioning scheme. Graph partitioning is an important step in constructing the parallel model, in which computation speed is a major concern. The partitioning strategy divides the research domain into several semi-equal-sized sub-domains while minimizing the sum weight of the edges between different sub-domains. Multilevel schemes for graph partitioning proceed in three phases: coarsening, partitioning, and uncoarsening. For the uncoarsening phase, many refinement algorithms have been proposed previously, such as KL, Greedy, and Boundary refinements. In this study, we propose an N-Best refinement algorithm and show its advantages in our case study of Xiamen Bay. Compared with the original partitioning algorithm in previous models, the N-Best algorithm speeds up the computation by 1.9 times, and the simulation results are in good agreement with the in-situ data.
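The N-Best rule itself is only named here; one plausible reading, sketched under explicit assumptions, is a KL/FM-style pass that applies the N best positive-gain boundary moves per pass instead of a single best move (balance constraints omitted; all names are illustrative, not the paper's code):

```python
def nbest_refine(adj, part, n_best=4, passes=3):
    """Plausible N-Best boundary refinement for a 2-way partition.

    adj: dict node -> {neighbor: edge weight}; part: dict node -> 0 or 1.
    Each pass ranks boundary nodes by gain (the cut reduction if the node
    switches sides) and flips the n_best highest positive-gain nodes.
    Gains are computed once per pass, so batched moves may interact,
    a known trade-off of this style of refinement.
    """
    def gain(v):
        external = sum(w for u, w in adj[v].items() if part[u] != part[v])
        internal = sum(w for u, w in adj[v].items() if part[u] == part[v])
        return external - internal  # cut change if v switches sides

    for _ in range(passes):
        boundary = [v for v in adj
                    if any(part[u] != part[v] for u in adj[v])]
        moves = sorted(((gain(v), v) for v in boundary), reverse=True)
        applied = 0
        for g, v in moves:
            if g <= 0 or applied == n_best:
                break
            part[v] ^= 1  # flip side
            applied += 1
        if applied == 0:
            break  # no improving boundary move left
    return part

adj = {0: {1: 1, 2: 1}, 1: {0: 1, 2: 1}, 2: {0: 1, 1: 1, 3: 1},
       3: {2: 1, 4: 1, 5: 1}, 4: {3: 1, 5: 1}, 5: {3: 1, 4: 1}}
part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 1}  # deliberately poor start
print(nbest_refine(adj, part))  # node 2 moves to side 0, cutting one edge
```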
We focus on the single layer formulation, which provides an integral equation of the first kind that is very badly conditioned: the condition number of the unpreconditioned system increases exponentially with the multiscale levels. A remedy utilizing overlapping domain decompositions applied to the Boundary Element Method by means of wavelets is examined. The width of the overlap of the subdomains plays an important role in the estimation of the eigenvalues as well as the condition number of the additive domain decomposition operator. We examine the convergence analysis of the domain decomposition method, which depends on the wavelet levels and on the size of the subdomain overlaps. Our theoretical results related to the additive Schwarz method are corroborated by numerical outputs.
This paper aims at designing a better network immunization strategy to fight against Susceptible-Infected-Susceptible (SIS) type epidemic spreading in networks. Previous work used the drop in the spectral radius of the network to evaluate and guide the design of immunization strategies. Instead, we propose using the number of infected nodes in the steady state of SIS spreading as the appropriate metric. We use this metric to point out the limitations of the Equal Graph Partitioning (EGP) strategy and the "max-Δλ" strategy, two representative network immunization strategies, and then identify the critical role of the epidemic spreading parameters in the evaluation and design of network immunization strategies. Based on all of these, we design a new immunization strategy. Simulation results show that our strategy performs consistently better than the EGP strategy; in many cases, it uses 50% fewer resources to achieve the same immunization effect.
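The proposed metric, the infected node count in the SIS steady state, is straightforward to estimate by simulation; a minimal sketch with illustrative parameters (not the paper's simulator):

```python
import random

def sis_steady_state_infected(adj, beta, delta, steps=2000, burn_in=1000, seed=1):
    """Estimate the steady-state number of SIS-infected nodes by simulation.

    adj: dict node -> list of neighbors. Each step, every infected node
    infects each susceptible neighbor with probability beta and recovers
    with probability delta. The average infected count after burn-in
    approximates the steady state, the metric proposed for ranking
    immunization strategies.
    """
    rng = random.Random(seed)
    infected = set(adj)  # start fully infected to reach the endemic state fast
    total = 0.0
    for t in range(steps):
        newly_infected, recovered = set(), set()
        for v in infected:
            for u in adj[v]:
                if u not in infected and rng.random() < beta:
                    newly_infected.add(u)
            if rng.random() < delta:
                recovered.add(v)
        infected = (infected | newly_infected) - recovered
        if t >= burn_in:
            total += len(infected)
    return total / (steps - burn_in)

# Star network: immunizing the hub (deleting it from adj) collapses spreading.
star = {0: list(range(1, 11)), **{i: [0] for i in range(1, 11)}}
print(sis_steady_state_infected(star, beta=0.3, delta=0.2))
```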
Social networks (SNs) have enormous numbers of users around the world, all sharing data such as images, audio, and video with their friends using IoT devices; this concept is the so-called Social Internet of Things (SIoT). The evolving nature of edge-cloud computing has enabled the storage of large volumes of data from various sources, a task that demands an efficient storage procedure. For this kind of large-volume storage, data replication using edge servers within a geo-distributed cloud service area is suited to fulfilling users' expectations with low latency. The major issue is how to store and replicate these large data items optimally and allocate requests from the data center efficiently. For efficient storage of these data, this study uses the edge server, which is part of the cloud server. Thus, the data are distributed and stored with quick access, which reduces response latency. The proposed data placement approach learns with a machine learning (ML) algorithm, a radial basis kernel function assisted support vector machine (RBF-SVM), to classify the data center for storing the user's and friends' data from the SIoT devices. These learning algorithms are used to predict whether the workload of the data stored in the data center belongs on the edge or in the cloud, depending on the existing time slots. The dynamic data placement is also optimized using the proposed dynamic graph partitioning (GP) method to meet each user's demand for low latency at minimum cost, keeping SIoT data placement efficient and effective over time. Accordingly, the proposed data placement and replication approach introduces three innovations compared with existing data placement approaches. (i) Rather than storing user data in a single cloud, this study uses the edge server closest to the SIoT devices for faster access with reduced response time. (ii) The classification algorithm RBF-SVM is used to find storage for users, reducing data replication. (iii) Dynamic GP is introduced for data placement with reduced latency and minimum cost to fulfil the dynamic nature of the SN. The simulation results show reduced latency of 130 ms and minimum cost compared with existing data placement approaches. Therefore, the proposed ML-based data placement on the edge provides promising results in terms of efficiency, effectiveness, and performance, with reduced latency and minimum cost.
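The abstract names the classifier but not its features or labels; a minimal sketch of the RBF-SVM placement step with scikit-learn, using made-up features (request rate, data size, friend proximity) and an edge-vs-cloud label:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data: [request_rate, data_size_MB, friend_proximity].
# Label: 1 -> place on an edge server, 0 -> place in the central cloud.
X = np.array([[90, 5, 0.9], [80, 12, 0.8], [70, 8, 0.7],    # hot, nearby
              [5, 300, 0.1], [8, 500, 0.2], [3, 250, 0.3]])  # cold, bulky
y = np.array([1, 1, 1, 0, 0, 0])

# RBF-kernel SVM; feature scaling matters because the kernel is distance-based.
placer = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
placer.fit(X, y)

new_items = np.array([[85, 10, 0.85], [4, 400, 0.15]])
print(placer.predict(new_items))  # expected: [1 0] -> edge, cloud
```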
To improve the segmentation quality and efficiency of color images, a novel approach is proposed that combines the advantages of mean shift (MS) segmentation and an improved ant clustering method. Regions that preserve the discontinuity characteristics of an image are segmented by the MS algorithm and then represented by a graph in which every region is a node. To solve the graph partitioning problem, an improved ant clustering algorithm, called the similarity carrying ant model (SCAM-ant), is proposed, in which a new similarity calculation method is given. Using SCAM-ant, the maximum number of items that each ant can carry increases, the clustering time is effectively reduced, and globally optimized clustering can be realized. Because the graph is based not on the pixels of the original image but on the segmentation result of the MS algorithm, the computational complexity is greatly reduced. Experiments show that the proposed method can realize color image segmentation efficiently; compared with conventional methods based on image pixels, it improves the segmentation quality and the anti-interference ability.
Two problems for task schedules in a multiprocessor parallel system are discussed in this paper: (1) given a partially ordered set of tasks represented by the vertices of an acyclic directed graph with their corresponding processing times, derive the lower bound on the minimum time (LBMT) needed to process the task graph on a given number of processors; (2) determine the lower bound on the minimum number of processors (LBMP) needed to complete those tasks in minimum time. It is shown that the proposed LBMT is sharper than previously known values, and the computational aspects of these bounds are also discussed.
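The paper's bounds are claimed to be sharper than previously known ones; for orientation, the classic baselines they compete with are LBMT >= max(critical-path length, ceil(total work / p)) and LBMP >= ceil(total work / critical-path length). A sketch computing these baselines (not the paper's sharper bounds):

```python
from math import ceil

def classic_bounds(times, edges, p):
    """Classic scheduling lower bounds on a task DAG.

    times: dict task -> processing time; edges: list of (u, v) precedence
    pairs; p: number of processors. Assumes the graph is acyclic.
    """
    succ = {t: [] for t in times}
    for u, v in edges:
        succ[u].append(v)

    memo = {}
    def tail(t):  # longest path starting at t, via memoized DFS
        if t not in memo:
            memo[t] = times[t] + max((tail(s) for s in succ[t]), default=0)
        return memo[t]

    critical_path = max(tail(t) for t in times)
    total_work = sum(times.values())
    lbmt = max(critical_path, ceil(total_work / p))  # time bound
    lbmp = ceil(total_work / critical_path)          # processor bound
    return lbmt, lbmp

times = {"a": 2, "b": 3, "c": 2, "d": 4}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(classic_bounds(times, edges, p=2))  # (9, 2): path a-b-d = 9, work = 11
```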
Graph similarity search is a common operation on graph databases, and the graph edit distance constraint is the most common similarity measure used for the graph similarity search problem. However, exact computation of graph edit distance is proven to be NP-hard, so current methods adopt a filter-and-verification framework. In this paper, a dictionary-tree-based clustering index structure is proposed to reduce the cost of generating candidate graphs, which are then verified in the filtering stage. An efficient incremental partition algorithm is designed: by calculating the distance between the query graph and candidate graph partitions, the filtering effect is further enhanced. Experiments on real large graph datasets show that the performance of this algorithm is significantly better than that of existing algorithms.
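The abstract only sketches the idea; in the partition-based filtering literature the bound typically works as follows: any partition piece of the candidate graph with no subgraph-isomorphic match in the query must absorb at least one edit, so the number of unmatched pieces lower-bounds the edit distance. A simplified sketch of that filter, assuming networkx, not the paper's index:

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def partition_lower_bound(query, pieces):
    """Lower-bound GED(query, candidate) by counting candidate-graph
    partition pieces with no subgraph-isomorphic embedding in the query.

    pieces: list of small graphs partitioning the candidate graph. Since
    pieces are disjoint, each unmatched piece needs at least one distinct
    edit operation, making the count a valid filter bound.
    """
    return sum(
        0 if GraphMatcher(query, piece).subgraph_is_isomorphic() else 1
        for piece in pieces
    )

query = nx.path_graph(4)         # query: a path on 4 nodes
pieces = [nx.path_graph(2),      # matched: an edge embeds in the path
          nx.complete_graph(3)]  # unmatched: no triangle in a path
tau = 0
if partition_lower_bound(query, pieces) > tau:
    print("pruned: edit distance must exceed", tau)
```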
Szemerédi's Regularity Lemma (SRL) is a crucial tool in the analysis of large graphs, having contributed significantly to the proofs of some sensational results in mathematics. Traditional methods for studying proteins in Protein-Protein Interaction (PPI) networks typically extract only the first-order or second-order neighbor information of proteins, ignoring the potential third-order or higher-order neighbor information, which may hide certain relationships between proteins. To explore more in-depth insights into PPI networks, we take into account the fourth-order neighbor information of proteins and reconstruct the network, naming it the weighted dense PPI network. We then partition it using SRL, which primarily utilizes structural information and corresponds to a unique partition of the original network. Bioinformatics analyses, such as pathway enrichment analysis and multiple sequence alignment, show that our method can classify interacting protein pairs, grouping together proteins with functional association, disease association, and sequence similarity. Overall, this paper makes three essential contributions: (1) we present a new model to overcome the astronomically large demand for vertices in applying SRL and achieve protein classification; (2) we reconstruct a weighted dense PPI network that makes SRL workable and mines potential interactions more efficiently; and (3) proteins in the same class partitioned by our method not only have sequence similarity but also have functional associations.
The spectrum of a graph is the set of all eigenvalues of the Laplacian matrix of the graph. There is a close relationship between the Laplacian spectrum of a graph and some of its properties, such as connectivity. In recent years the Laplacian spectrum of graphs has been widely applied in many fields. This paper reviews the application of the Laplacian spectrum of graphs to circuit partitioning problems. A new criterion for circuit partitioning is proposed, and bounds on the partition ratio for weighted graphs are also presented. Moreover, the deficiency of graph-partitioning algorithms based on Laplacian eigenvectors is addressed, and an algorithm based on the minimal spanning tree of a graph is proposed. By taking the graph structure into consideration, this algorithm can fulfill general requirements of circuit partitioning.
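The MST-based algorithm is only named in this review; one common way a minimum spanning tree yields a bipartition, splitting at the tree edge whose removal balances the two sides best, is sketched below (networkx; illustrative, not necessarily the authors' algorithm):

```python
import networkx as nx

def mst_bipartition(graph):
    """Bipartition a weighted graph via its minimum spanning tree.

    Removing any single MST edge splits the tree into two components;
    we pick the edge giving the most balanced split. Unlike eigenvector
    methods, the tree keeps the graph's connectivity structure explicit.
    """
    mst = nx.minimum_spanning_tree(graph, weight="weight")
    best_edge, best_balance = None, -1
    for u, v in list(mst.edges()):
        mst.remove_edge(u, v)
        side = nx.node_connected_component(mst, u)
        balance = min(len(side), graph.number_of_nodes() - len(side))
        if balance > best_balance:
            best_edge, best_balance = (u, v), balance
        mst.add_edge(u, v)
    u, v = best_edge
    mst.remove_edge(u, v)
    part_a = nx.node_connected_component(mst, u)
    part_b = set(graph) - part_a
    return part_a, part_b

g = nx.Graph()
g.add_weighted_edges_from([(0, 1, 1), (1, 2, 1), (2, 3, 5), (3, 4, 1), (4, 5, 1)])
print(mst_bipartition(g))  # ({0, 1, 2}, {3, 4, 5})
```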