With the increasing popularity of blockchain applications, the security of data sources on the blockchain is gradually receiving attention. Providing reliable data for the blockchain safely and efficiently has become a research hotspot, and the security of the oracle responsible for providing reliable data has attracted much attention. The most widely used centralized oracles in blockchain, such as Provable and Town Crier, all rely on a single oracle to obtain data, which suffers from a single point of failure and limits the large-scale development of blockchain. To this end, the distributed oracle scheme is put forward, but the existing distributed oracle schemes such as Chainlink and Augur generally have low execution efficiency and high communication overhead, which leads to their poor applicability. To solve the above problems, this paper proposes a trusted distributed oracle scheme based on a share recovery threshold signature. First, a data verification method of distributed oracles is designed based on threshold signature. By aggregating the signatures of oracles, data from different data sources can be mutually verified, leading to a more efficient data verification and aggregation process. Then, a credibility-based cluster head election algorithm is designed, which reduces the communication overhead by clarifying the function distribution and building a hierarchical structure. Considering the good performance of the BLS threshold signature in large-scale applications, this paper combines it with distributed oracle technology and proposes a BLS threshold signature algorithm that supports share recovery in distributed oracles. The share recovery mechanism enables the proposed scheme to solve the key loss issue, and the setting of the threshold value enables the proposed scheme to complete signature aggregation with only a threshold number of oracles, making the scheme more robust. Finally, experimental results indicate that, by using the threshold signature technology and the cluster head election algorithm, our scheme effectively improves the execution efficiency of oracles and solves the problem of a single point of failure, leading to higher scalability and robustness.
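As a concrete illustration of the share-recovery and threshold ideas (though not of the paper's BLS construction, which requires pairing-friendly curves), the following minimal Python sketch splits a secret into n Shamir shares over a prime field and reconstructs it from any t of them; all names and parameter values are illustrative.

```python
# Minimal (t, n) threshold share generation and recovery via Lagrange
# interpolation over a prime field. Illustrates the share-recovery idea only;
# the paper's scheme uses BLS signatures on pairing-friendly curves.
import random

P = 2**127 - 1  # Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which can recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares):
    """Recover the secret f(0) from any t shares by Lagrange interpolation."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shares = make_shares(secret=123456789, t=3, n=5)
    # Any 3 of the 5 oracles' shares suffice; a lost share can likewise be
    # re-derived by the remaining holders, which is the essence of recovery.
    print(recover(random.sample(shares, 3)))  # -> 123456789
```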
After a century of relative stability in the electricity sector, the widespread adoption of distributed energy resources, along with recent advancements in computing and communication technologies, has fundamentally altered how energy is consumed, traded, and utilized. This change signifies a crucial shift as the power system evolves from its traditional hierarchical organization to a more decentralized approach. At the heart of this transformation are innovative energy distribution models, like peer-to-peer (P2P) sharing, which enable communities to collaboratively manage their energy resources. Effective P2P sharing not only improves the economic prospects for prosumers, who both generate and consume energy, but also enhances energy resilience and sustainability. This allows communities to better leverage local resources while fostering a sense of collective responsibility and collaboration in energy management. However, such sharing models have not yet seen extensive implementation in today's electricity markets. Research on distributed energy P2P trading is still in the exploratory stage, and it is particularly important to comprehensively understand and analyze the existing distributed energy P2P trading market. This paper contributes an overview of P2P markets that covers the network framework, market structure, technical approaches to trading mechanisms, and blockchain technology, before moving to the outlook for this field.
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is the lack of utilization of historical information, which could lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP targeted at the minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search, by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability curve-based acceptance criterion is proposed by combining a cube root function with custom rules. At last, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments are utilized to verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we have discovered that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well-suited for real-world distributed flow shop scheduling.
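To make the baseline concrete, here is a minimal Python sketch of the generic iterated greedy loop (destruction, greedy reconstruction, acceptance) for the permutation flow shop makespan; the memory mechanism, reinforcement-learning parameter control, and probability-curve acceptance criterion of MLIGA are deliberately omitted, and all parameter values are illustrative.

```python
# Generic iterated greedy (IG) skeleton for permutation flow shop makespan.
import random

def makespan(perm, p):
    """p[j][m] = processing time of job j on machine m."""
    m = len(p[0])
    c = [0.0] * m
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def best_insertion(partial, job, p):
    """Insert `job` at the position that minimizes the makespan."""
    best = None
    for pos in range(len(partial) + 1):
        cand = partial[:pos] + [job] + partial[pos:]
        cm = makespan(cand, p)
        if best is None or cm < best[1]:
            best = (cand, cm)
    return best[0]

def iterated_greedy(p, d=3, iters=200, temp=1.0):
    jobs = list(range(len(p)))
    cur = sorted(jobs, key=lambda j: -sum(p[j]))     # simple seed ordering
    best = cur[:]
    for _ in range(iters):
        removed = random.sample(cur, d)              # destruction
        partial = [j for j in cur if j not in removed]
        for j in removed:                            # greedy reconstruction
            partial = best_insertion(partial, j, p)
        if makespan(partial, p) <= makespan(cur, p) + temp:  # crude acceptance
            cur = partial
        if makespan(cur, p) < makespan(best, p):
            best = cur[:]
    return best, makespan(best, p)

p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [2, 2, 2]]     # 4 jobs x 3 machines
print(iterated_greedy(p, d=2, iters=100))
```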
In this paper, we investigate the problem of periodic traveling wave solutions for a single population model with advection and distributed delay. By the bifurcation analysis method, we obtain periodic traveling wave solutions for this model under the influence of the advection term and distributed delay. The obtained results indicate that both the weak kernel and the strong kernel can lead to the existence of periodic traveling wave solutions. Finally, we apply the main results of this paper to the Logistic model and Nicholson's blowflies model.
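For orientation, the distributed-delay terminology here usually refers to Gamma-type kernels; the LaTeX block below records a generic form of such a model and the standard weak and strong kernels, which may differ in detail from the paper's exact equations.

```latex
% A generic single-species model with advection and distributed delay
% (illustrative form only; the paper's exact equation may differ):
\[
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x}
  = d\,\frac{\partial^{2} u}{\partial x^{2}}
  + f\!\Big(u(x,t),\ \int_{0}^{\infty} G(s)\,u(x,t-s)\,\mathrm{d}s\Big),
\qquad \int_{0}^{\infty} G(s)\,\mathrm{d}s = 1 .
\]
% The weak and strong kernels are usually the Gamma-type kernels
\[
G_{\mathrm{weak}}(s) = \frac{1}{\tau}\,e^{-s/\tau},
\qquad
G_{\mathrm{strong}}(s) = \frac{s}{\tau^{2}}\,e^{-s/\tau}.
\]
```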
Fraction repetition (FR) codes are integral to distributed storage systems (DSS) with exact repair-by-transfer, while pliable fraction repetition codes are vital for DSSs in which both the per-node storage and the repetition degree can easily be adjusted simultaneously. This paper introduces a new type of pliable FR codes, called absolute balanced pliable FR (ABPFR) codes, in which access balancing in the DSS is considered. Additionally, the equivalence between pliable FR codes and resolvable transversal packings in combinatorial design theory is presented. Then, constructions of pliable FR codes and ABPFR codes based on resolvable transversal packings are presented.
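A tiny illustrative layout (not taken from the paper, and not showing the pliable or access-balancing refinements) shows the basic FR structure that such constructions build on; the Python check below verifies that every symbol is replicated exactly ρ = 2 times and notes how exact repair-by-transfer works.

```python
# Illustrative FR-style layout: theta = 6 coded symbols, n = 4 storage nodes,
# per-node storage alpha = 3, repetition degree rho = 2.
from collections import Counter

nodes = {
    "N1": {1, 2, 3},
    "N2": {4, 5, 6},
    "N3": {1, 4, 5},
    "N4": {2, 3, 6},
}

rho = Counter(sym for blk in nodes.values() for sym in blk)
assert set(rho.values()) == {2}, "each symbol must appear exactly rho = 2 times"
# Exact repair-by-transfer: if N3 fails, symbol 1 is copied from N1 and
# symbols 4 and 5 are copied from N2, with no decoding required.
print(rho)
```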
Reconfiguration, together with the optimal utilization of distributed generation sources and capacitor banks, is a highly effective way to reduce losses and improve the voltage profile, in other words, the power quality of the power distribution system. Researchers have increasingly considered the use of distributed generation resources in recent years. There are numerous advantages to utilizing these resources, the most significant of which are the reduction of network losses and the enhancement of voltage stability. In this paper, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multi-Objective Particle Swarm Optimization (MOPSO), and Intersect Mutation Differential Evolution (IMDE) algorithms are used to perform optimal reconfiguration and the simultaneous siting and sizing of distributed generation resources and capacitor banks. Three scenarios were considered in the studies. In one scenario, the reconfiguration of the switches, as well as the location and optimal capacity of the capacitor banks, was investigated, whereas in the third scenario, reconfiguration and the determination of the location and capacity of the Distributed Generation (DG) resources and capacitor banks were carried out simultaneously. Finally, the simulation results of the three algorithms are compared. The results indicate that the proposed NSGA-II algorithm outperformed the other two multi-objective algorithms and was capable of maintaining smaller objective function values in all scenarios. Specifically, the energy losses were reduced from 211 kW to 51.35 kW (a 75.66% reduction), 119.13 kW (a 43.54% reduction), and 23.13 kW (an 89.04% reduction), while the voltage stability index (VSI) decreased from 6.96 to 2.105, 1.239, and 1.257, respectively, demonstrating a significant improvement in the voltage profile.
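As a minimal illustration of the multi-objective machinery these algorithms share, the Python sketch below filters the non-dominated (Pareto-optimal) solutions for two minimized objectives; the pairing of loss and VSI values into candidate solutions is hypothetical.

```python
# Pareto-dominance filtering of the kind used inside NSGA-II / MOPSO when
# trading off, e.g., energy losses against a voltage stability index (both
# treated as minimized objectives here).
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the Pareto front of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (losses_kW, VSI) pairs for candidate reconfigurations:
candidates = [(211.0, 6.96), (51.35, 2.105), (119.13, 1.239), (23.13, 1.257)]
print(non_dominated(candidates))   # only the non-dominated pairs remain
```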
Understanding the evolutionary processes that influence the distribution of genetic diversity in natural populations is a key issue in evolutionary biology. Both species' distribution ranges and environmental gradients can influence this diversity through mechanisms such as gene flow, selection, and genetic drift. To explore how these forces interact, we assessed neutral and adaptive genetic variation in three widely distributed and two narrowly distributed bird species co-occurring along the Cauca River canyon in Antioquia, Colombia, a region of pronounced environmental heterogeneity. We sampled individuals across eight sites spanning the canyon's gradient and analyzed genetic diversity and structure using microsatellites and toll-like receptors (TLRs), a gene family involved in innate immunity. Widely distributed species consistently exhibited higher genetic diversity at both marker types compared to their narrowly distributed counterparts. Although we did not find a significant relationship between microsatellite heterozygosity and TLR heterozygosity, we observed a negative trend for widely distributed species and a positive trend for narrowly distributed species, which suggests a stronger effect of genetic drift in narrowly distributed species. Our results highlight the role of distribution range in maintaining genetic diversity and suggest that environmental gradients, by interacting with gene flow and selection, may influence patterns of adaptive variation.
Curved geostructures, such as tunnels, are commonly encountered in geotechnical engineering and are critical to maintaining structural stability. Ensuring their proper performance through field monitoring during their service life is essential for the overall functionality of geotechnical infrastructure. Distributed Brillouin sensing (DBS) is increasingly applied in geotechnical projects due to its ability to acquire spatially continuous strain and temperature distributions over distances of up to 150 km using a single optical fibre. However, limited by the complex operations of distributed fibre optic sensing (DFOS) sensors in curved structures, previous reports on exploiting DBS in geotechnical structural health monitoring (SHM) have mostly focused on flat surfaces. The lack of suitable DFOS installation methods matched to the spatial characteristics of continuous monitoring is one of the major factors hindering the further application of this technique in curved structures. This review paper starts with a brief introduction to the fundamental working principle of DBS and the inherent limitations of using DBS to monitor curved surfaces. Subsequently, the state-of-the-art installation methods of optical fibres in curved structures are reviewed and compared to identify the most suitable scenario for each method and their respective advantages and disadvantages. The installation challenges of optical fibres that can strongly affect measurement accuracy are also discussed in the paper.
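For reference, distributed Brillouin sensing typically relies on the linear strain-temperature dependence of the Brillouin frequency shift sketched below; the coefficients are fibre-specific, and the quoted values are only typical magnitudes rather than figures from this review.

```latex
% Commonly used linear model relating the Brillouin frequency shift to strain
% and temperature changes; the coefficients are fibre-specific, and the values
% below are only typical orders of magnitude for standard single-mode fibre.
\[
\Delta\nu_{B} = C_{\varepsilon}\,\Delta\varepsilon + C_{T}\,\Delta T,
\qquad
C_{\varepsilon} \approx 0.05\ \mathrm{MHz}/\mu\varepsilon,
\quad
C_{T} \approx 1\ \mathrm{MHz}/^{\circ}\mathrm{C}.
\]
```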
Dear Editor, This letter deals with the distributed state and fault estimation of the whole physical layer for cyber-physical systems (CPSs) when the cyber layer suffers from DoS attacks. With the advancement of embedded computing, communication, and related hardware technologies, CPSs have attracted extensive attention and have been widely used in power systems, traffic networks, refrigeration systems, and other fields.
A distributed bearing-only target tracking algorithm based on variational Bayesian inference (VBI) under random measurement anomalies is proposed to address the adverse effect of random measurement anomalies on the state estimation accuracy of moving targets in bearing-only tracking scenarios. Firstly, the measurement information of each sensor is complemented by triangulation under the distributed framework. Secondly, the Student-t distribution is selected to model the measurement likelihood probability density function, and the joint posterior probability density function of the estimated variables is approximately decoupled by VBI. Finally, the estimation results of each local filter are sent to the fusion center and fed back to each local filter. The simulation results show that, in the presence of abnormal measurement noise, the proposed distributed bearing-only target tracking algorithm based on VBI comprehensively accounts for the influence of system nonlinearity and random measurement-noise anomalies, and achieves higher estimation accuracy and robustness than other existing algorithms in the above scenarios.
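A minimal Python sketch of the triangulation step, assuming bearings measured from the x-axis at known 2D sensor positions and a plain least-squares intersection of the bearing lines; the VBI filtering and Student-t modeling described above are not shown.

```python
# Each sensor i at (xi, yi) measures a bearing theta_i to the target, which
# constrains the target to the line
#   -sin(theta_i) * x + cos(theta_i) * y = -sin(theta_i) * xi + cos(theta_i) * yi.
# The target position is recovered as the least-squares intersection.
import numpy as np

def triangulate(sensors, bearings):
    sensors = np.asarray(sensors, dtype=float)
    bearings = np.asarray(bearings, dtype=float)
    A = np.column_stack([-np.sin(bearings), np.cos(bearings)])
    b = A[:, 0] * sensors[:, 0] + A[:, 1] * sensors[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

target = np.array([30.0, 40.0])
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
bearings = [np.arctan2(target[1] - y, target[0] - x) for x, y in sensors]
print(triangulate(sensors, bearings))  # ~ [30., 40.]
```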
Waveform generation and digitization play essential roles in numerous physics experiments. In traditional distributed systems for large-scale experiments, each frontend node contains an FPGA for data preprocessing, which interfaces with various data converters and exchanges data with a backend central processor. However, the streaming readout architecture has become a new paradigm for several experiments, benefiting from advancements in data transmission and computing technologies. This paper proposes a scalable distributed waveform generation and digitization system that utilizes fiber optical connections for data transmission between frontend nodes and a central processor. By utilizing transparent transmission on top of the data link layer, the clock and data ports of the converters in the frontend nodes are directly mapped to the FPGA firmware at the backend. This streaming readout architecture reduces the complexity of frontend development and maintains the data conversion in proximity to the detector. Each frontend node uses a local clock for waveform digitization. To translate the timing information of events in each channel into the system clock domain within the backend central processing FPGA, a novel method is proposed and evaluated using a demonstrator system.
In today’s complex and rapidly changing business environment, the traditional single-organization service model can no longer meet the needs of multi-organization collaborative processing. Based on existing business process engine technologies, this paper proposes a distributed heterogeneous process engine collaboration method for cross-organizational scenarios. The core of this method lies in achieving unified access and management of heterogeneous engines through a business process model adapter and a common operation interface. The key technologies include: the Meta-Process Control Architecture, where the central engine (meta-process scheduler) decomposes the original process into fine-grained sub-processes and schedules their execution in a unified order, ensuring consistency with the original process logic; the Process Model Adapter, which addresses the BPMN 2.0 model differences among heterogeneous engines such as Flowable and Activiti through a matching-and-replacement mechanism, providing a unified process model standard for different engines; and the Common Operation Interface, which encapsulates the REST APIs of heterogeneous engines and offers a single, standardized interface for process deployment, instance management, and status synchronization. This method integrates multiple techniques to address API differences, process model incompatibilities, and execution order consistency issues among heterogeneous engines, delivering a unified, flexible, and scalable solution for cross-organizational process collaboration.
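A minimal Python sketch of what such a common operation interface could look like; the method names are hypothetical, and the engine-specific REST endpoints of Flowable and Activiti are intentionally left out since their actual APIs differ.

```python
# Sketch of a common operation interface over heterogeneous process engines.
# Method names and adapter internals are hypothetical; real Flowable/Activiti
# REST endpoint paths are deliberately not spelled out here.
from abc import ABC, abstractmethod

class ProcessEngineAdapter(ABC):
    """Uniform facade the meta-process scheduler talks to."""

    @abstractmethod
    def deploy(self, bpmn_xml: str) -> str:
        """Deploy a (possibly adapted) BPMN 2.0 model and return a deployment id."""

    @abstractmethod
    def start_instance(self, process_key: str, variables: dict) -> str:
        """Start a sub-process instance and return its instance id."""

    @abstractmethod
    def get_status(self, instance_id: str) -> str:
        """Return a normalized status such as 'running' or 'completed'."""

class FlowableAdapter(ProcessEngineAdapter):
    def __init__(self, base_url: str):
        self.base_url = base_url  # engine-specific REST base URL

    def deploy(self, bpmn_xml: str) -> str:
        # Here the adapter would rewrite vendor-specific BPMN extensions
        # (the matching-and-replacement step) and call the engine's own
        # deployment API; the concrete call is omitted in this sketch.
        raise NotImplementedError

    def start_instance(self, process_key: str, variables: dict) -> str:
        raise NotImplementedError

    def get_status(self, instance_id: str) -> str:
        raise NotImplementedError
```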
Photovoltaic (PV) power generation is undergoing significant growth and serves as a key driver of the global energy transition. However, its intermittent nature, which fluctuates with weather conditions, has raised concerns about grid stability. Accurate PV power prediction has been demonstrated to be crucial for power system operation and scheduling, enabling power slope control, fluctuation mitigation, grid stability enhancement, and reliable data support for secure grid operation. However, existing prediction models primarily target centralized PV plants, largely neglecting the spatiotemporal coupling dynamics and output uncertainties inherent to distributed PV systems. This study proposes a novel Spatio-Temporal Graph Neural Network (STGNN) architecture for distributed PV power generation prediction, designed to enhance forecasting accuracy and support regional grid scheduling. This approach models each PV power plant as a node in an undirected graph, with edges representing correlations between plants to capture spatial dependencies. The model comprises multiple Sparse Attention-based Adaptive Spatio-Temporal (SAAST) blocks, each of which includes sparse temporal attention, sparse spatial attention, an adaptive Graph Convolutional Network (GCN), and a Temporal Convolutional Network (TCN). These components eliminate weak temporal and spatial correlations, better represent dynamic spatial dependencies, and further enhance prediction accuracy. Finally, multi-dimensional comparative experiments between the STGNN and other models on the DKASC PV dataset demonstrate its superior performance in terms of accuracy and goodness-of-fit for distributed PV power generation prediction.
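To ground the graph-based part of the architecture, the NumPy sketch below applies one standard graph-convolution step over a small hypothetical plant graph; the SAAST blocks add sparse attention, a learned (adaptive) adjacency, and temporal convolutions on top of this basic operation.

```python
# One graph-convolution step over a PV plant graph: H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W).
# Adjacency, features, and weights are illustrative placeholders.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))        # D^{-1/2}
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)               # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],        # 4 PV plants; edges = correlated plants
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))        # per-plant features (e.g., recent power, irradiance)
W = rng.normal(size=(8, 16))       # learnable weights
print(gcn_layer(A, H, W).shape)    # (4, 16)
```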
Distributed computing is an important topic in the field of wireless communications and networking, and its high efficiency in handling large amounts of data is particularly noteworthy. Although distributed computing benefits from its ability to process data in parallel, a communication burden between different servers is incurred, which delays the computation process. Recent research has applied coding to distributed computing to reduce the communication burden, where repetitive computation is utilized to enable multicast opportunities so that the same coded information can be reused across different servers. To handle the computation tasks in practical heterogeneous systems, we propose a novel coding scheme to effectively mitigate the "straggling effect" in distributed computing. We assume that there are two types of servers in the system and the only difference between them is their computational capabilities; the servers with lower computational capabilities are called stragglers. Given any ratio of fast servers to slow servers and any gap in computational capabilities between them, we achieve approximately the same computation time for both fast and slow servers by assigning different amounts of computation tasks to them, thus reducing the overall computation time. Furthermore, we investigate the information-theoretic lower bound on the inter-communication load and show that the lower bound is within a constant multiplicative gap of the upper bound achieved by our scheme. Various simulations also validate the effectiveness of the proposed scheme.
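The core allocation idea can be sketched in a few lines: give each server work in proportion to its speed so that fast and slow servers finish at roughly the same time; the numbers below are illustrative, and the coding layer itself is not shown.

```python
# Split a workload between fast and slow servers proportionally to their
# computational speeds so that per-server completion times are equalized.
def split_rows(total_rows, n_fast, n_slow, speed_fast, speed_slow):
    """Return (rows per fast server, rows per slow server)."""
    capacity = n_fast * speed_fast + n_slow * speed_slow
    rows_fast = total_rows * speed_fast / capacity
    rows_slow = total_rows * speed_slow / capacity
    return rows_fast, rows_slow

rf, rs = split_rows(total_rows=12000, n_fast=4, n_slow=2,
                    speed_fast=3.0, speed_slow=1.0)
print(rf, rs)                 # ~2571.4 rows vs ~857.1 rows
print(rf / 3.0, rs / 1.0)     # equal per-server completion times
```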
A centralized-distributed scheduling strategy for distribution networks based on a multi-temporal and hierarchical cooperative game is proposed to address the issues of difficult operation control and energy optimization interaction in distribution network transformer areas, as well as the problem of significant photovoltaic curtailment due to the inability to consume photovoltaic power locally. A scheduling architecture combining multi-temporal scales with a three-level decision-making hierarchy is established: the overall approach adopts a centralized-distributed method, analyzing the operational characteristics and interaction relationships of the distribution network center layer, the cluster layer, and the transformer area layer, providing a “spatial foundation” for subsequent optimization. The optimization process is divided into two stages on the temporal scale: in the first stage, based on forecasted electricity load and demand response characteristics, time-of-use electricity prices are utilized to formulate day-ahead optimization strategies; in the second stage, based on the charging and discharging characteristics of energy storage vehicles and multi-agent cooperative game relationships, rolling electricity prices and optimal interactive energy solutions are determined among clusters and transformer areas using Nash bargaining theory. Finally, a distributed optimization algorithm based on the bisection method is employed to solve the constructed model. Simulation results demonstrate that the proposed optimization strategy can facilitate photovoltaic consumption in the distribution network and enhance grid economy.
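For readers unfamiliar with the term, the Nash bargaining step typically takes the form below; the utilities, disagreement points, and constraints here are generic placeholders rather than the paper's exact formulation.

```latex
% Standard Nash bargaining formulation of the kind invoked in the second
% stage (illustrative only). Each participant i has utility U_i(x) and
% disagreement point U_i^0, and the negotiated solution maximizes the
% product of utility gains:
\[
\max_{x \in \mathcal{X}} \; \prod_{i=1}^{N} \bigl( U_{i}(x) - U_{i}^{0} \bigr)
\quad \text{s.t.} \quad U_{i}(x) \ge U_{i}^{0}, \;\; i = 1,\dots,N .
\]
```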
Coal mining induces changes in the nature of rock and soil bodies, as well as in hydrogeological conditions, which can easily trigger geological disasters such as water inrush, movement of the coal seam roof and floor, and rock burst. Transparency in coal mine geological conditions provides technical support for intelligent coal mining and geological disaster prevention. In this sense, it is of great significance to address the requirements for informatizing coal mine geological conditions, dynamically adjust sensing parameters, and accurately identify disaster characteristics so as to prevent and control coal mine geological disasters. This paper examines the various action fields associated with geological disasters in mining faces and scrutinizes the types and sensing parameters of geological disasters resulting from coal seam mining. On this basis, it summarizes a distributed fiber-optic sensing technology framework for transparent geology in coal mines. Combined with the multi-field monitoring characteristics of the strain field, the temperature field, and the vibration field of distributed optical fiber sensing technology, parameters such as the strain increment ratio, the aquifer temperature gradient, and the acoustic wave amplitude are extracted as eigenvalues for identifying rock breaking, aquifer water level, and water cut range, and a multi-field sensing method is established for identifying the characteristics of mining-induced rock mass disasters. The development direction of transparent geology based on optical fiber sensing technology is proposed in terms of sensing optical fiber structures for large deformation monitoring, the identification accuracy of optical fiber acoustic signals, multi-parameter monitoring, and early warning methods.
The Internet of Things (IoT) is a smart infrastructure where devices share captured data with the respective server or edge modules. However, secure and reliable communication is among the most challenging tasks in these networks, as shared channels are used to transmit packets. In this paper, a decision tree is integrated with other metrics to form a secure distributed communication strategy for IoT. Initially, every device works collaboratively to form a distributed network. In this model, if a device is deployed outside the coverage area of the nearest server, it communicates indirectly through the neighboring devices. For this purpose, every device collects data from its neighboring devices, such as hop count, average packet transmission delay, criticality factor, link reliability, and RSSI value. These parameters are used to find an optimal route from the source to the destination. Secondly, the proposed approach enables devices to learn from the environment and adjust the optimal route-finding formula accordingly. Moreover, these devices and server modules must ensure that every packet is transmitted securely, which is possible only if it is encrypted with an encryption algorithm. For this purpose, a decision tree-enabled device-to-server authentication algorithm is presented in which every device and server must take part in the offline phase. Simulation results have verified that the proposed distributed communication approach has the potential to ensure the integrity and confidentiality of data during transmission. Moreover, the proposed approach outperforms the existing approaches in terms of communication cost, processing overhead, end-to-end delay, packet loss ratio, and throughput. Finally, the proposed approach is adoptable in different networking infrastructures.
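A minimal Python sketch of how the listed metrics might be combined into a next-hop score; the weights and normalizations are hypothetical, and the learning-based adjustment and decision-tree authentication described above are not shown.

```python
# Weighted next-hop scoring from per-neighbor metrics (all values illustrative).
def route_score(hop_count, avg_delay_ms, criticality, link_reliability, rssi_dbm,
                w=(0.25, 0.25, 0.15, 0.20, 0.15)):
    """Higher score = more attractive next hop. Inputs are crudely normalized."""
    hop_term = 1.0 / (1.0 + hop_count)
    delay_term = 1.0 / (1.0 + avg_delay_ms / 100.0)
    crit_term = 1.0 - criticality          # prefer relays that are not critical
    rel_term = link_reliability            # already in [0, 1]
    rssi_term = (rssi_dbm + 100.0) / 70.0  # map roughly [-100, -30] dBm -> [0, 1]
    return sum(wi * t for wi, t in
               zip(w, (hop_term, delay_term, crit_term, rel_term, rssi_term)))

neighbors = {
    "dev_a": route_score(2, 40.0, 0.2, 0.95, -60.0),
    "dev_b": route_score(1, 120.0, 0.7, 0.80, -85.0),
}
print(max(neighbors, key=neighbors.get))   # pick the best-scoring neighbor
```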
This research introduces a unique approach to segmenting breast cancer images using a U-Net-based architecture. However, the computational demand for image processing is very high. Therefore, we have conducted this research to build a system that enables image segmentation training with low-power machines. To accomplish this, all data are divided into several segments, each being trained separately. For prediction, an initial output is predicted from each trained model for an input, and the ultimate output is selected based on pixel-wise majority voting over the predicted outputs, which also ensures data privacy. In addition, this kind of distributed training system allows different computers to be used simultaneously, which is why the training process takes comparatively less time than typical training approaches. Even after training is complete, the proposed prediction system allows a newly trained model to be included in the system, so the prediction becomes consistently more accurate. We evaluated the effectiveness of the ultimate output based on four performance metrics: average pixel accuracy, mean absolute error, average specificity, and average balanced accuracy. The experimental results show that the scores of average pixel accuracy, mean absolute error, average specificity, and average balanced accuracy are 0.9216, 0.0687, 0.9477, and 0.8674, respectively. In addition, the proposed method was compared with four other state-of-the-art models in terms of total training time and usage of computational resources, and it outperformed all of them in these aspects.
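The fusion rule is simple enough to state directly; the NumPy sketch below performs the pixel-wise majority vote over binary masks predicted by the separately trained models.

```python
# Pixel-wise majority vote over binary segmentation masks.
import numpy as np

def majority_vote(masks):
    """masks: array of shape (n_models, H, W) with values in {0, 1}.
    A pixel is foreground if more than half of the models mark it."""
    masks = np.asarray(masks)
    return (masks.sum(axis=0) * 2 > masks.shape[0]).astype(np.uint8)

preds = np.array([
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[0, 1], [1, 1]],
])
print(majority_vote(preds))
# [[1 0]
#  [1 1]]
```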
The Tactile Internet of Things (TIoT) promises transformative applications, ranging from remote surgery to industrial robotics, by incorporating haptic feedback into traditional IoT systems. Yet TIoT's stringent requirements for ultra-low latency, high reliability, and robust privacy present significant challenges. Conventional centralized Federated Learning (FL) architectures struggle with latency and privacy constraints, while fully distributed FL (DFL) faces scalability and non-IID data issues as client populations expand and datasets become increasingly heterogeneous. To address these limitations, we propose a Clustered Distributed Federated Learning (CDFL) architecture tailored for a 6G-enabled TIoT environment. Clients are grouped into clusters based on data similarity and/or geographical proximity, enabling local intra-cluster aggregation before inter-cluster model sharing. This hierarchical, peer-to-peer approach reduces communication overhead, mitigates non-IID effects, and eliminates single points of failure. By offloading aggregation to the network edge and leveraging dynamic clustering, CDFL enhances both computational and communication efficiency. Extensive analysis and simulation demonstrate that CDFL outperforms both centralized FL and DFL as the number of clients grows. Specifically, CDFL demonstrates up to a 30% reduction in training time under highly heterogeneous data distributions, indicating faster convergence. It also reduces communication overhead by approximately 40% compared to DFL. These improvements and enhanced network performance metrics highlight CDFL's effectiveness for practical TIoT deployments. These results validate CDFL as a scalable, privacy-preserving solution for next-generation TIoT applications.
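A minimal NumPy sketch of the two-level aggregation idea, assuming a weighted FedAvg within each cluster followed by a plain average across cluster models; clustering, peer-to-peer exchange, and the 6G/TIoT specifics are omitted, and all weights and shapes are illustrative.

```python
# Two-level (clustered) federated aggregation: intra-cluster weighted FedAvg,
# then an inter-cluster average of the resulting cluster models.
import numpy as np

def fedavg(models, sample_counts):
    """Weighted average of parameter vectors by local sample counts."""
    w = np.asarray(sample_counts, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, models))

def clustered_aggregate(clusters):
    """clusters: list of (list_of_models, list_of_sample_counts)."""
    cluster_models = [fedavg(models, counts) for models, counts in clusters]
    return np.mean(cluster_models, axis=0)   # inter-cluster aggregation

rng = np.random.default_rng(1)
cluster_a = ([rng.normal(size=4) for _ in range(3)], [100, 50, 150])
cluster_b = ([rng.normal(size=4) for _ in range(2)], [200, 80])
print(clustered_aggregate([cluster_a, cluster_b]))
```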
In this paper, we consider the distributed online optimization problem on a time-varying network, where each agent on the network has its own time-varying objective function and the goal is to minimize the overall accumulated loss. Moreover, we focus on distributed algorithms that use neither gradient information nor projection operators, in order to improve applicability and computational efficiency. By introducing deterministic differences and randomized differences to substitute for the gradient information of the objective functions, and by removing the projection operator used in traditional algorithms, we design two kinds of gradient-free distributed online optimization algorithms without a projection step, which save considerable computational resources and place fewer limitations on applicability. We prove that both algorithms achieve consensus of the estimates and regrets of \(O\left(\log(T)\right)\) for strongly convex local objectives. Finally, a simulation example is provided to verify the theoretical results.
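For concreteness, one common two-point randomized-difference estimator that such gradient-free methods use is recorded below; the paper's exact construction and step sizes may differ.

```latex
% A common two-point randomized-difference surrogate for the unavailable
% gradient (illustrative; the paper's exact construction may differ). With
% u_t drawn uniformly from the unit sphere in R^d and a small smoothing
% radius delta_t > 0, agent i uses
\[
\hat{g}_{i,t} \;=\; \frac{d}{2\,\delta_{t}}
\Bigl[ f_{i,t}\bigl(x_{i,t} + \delta_{t} u_{t}\bigr)
     - f_{i,t}\bigl(x_{i,t} - \delta_{t} u_{t}\bigr) \Bigr]\, u_{t}
\]
% in place of the gradient inside the consensus-based update.
```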
基金supported by the National Natural Science Foundation of China(Grant No.62102449)the Central Plains Talent Program under Grant No.224200510003.
文摘With the increasing popularity of blockchain applications, the security of data sources on the blockchain is gradually receiving attention. Providing reliable data for the blockchain safely and efficiently has become a research hotspot, and the security of the oracle responsible for providing reliable data has attracted much attention. The most widely used centralized oracles in blockchain, such as Provable and Town Crier, all rely on a single oracle to obtain data, which suffers from a single point of failure and limits the large-scale development of blockchain. To this end, the distributed oracle scheme is put forward, but the existing distributed oracle schemes such as Chainlink and Augur generally have low execution efficiency and high communication overhead, which leads to their poor applicability. To solve the above problems, this paper proposes a trusted distributed oracle scheme based on a share recovery threshold signature. First, a data verification method of distributed oracles is designed based on threshold signature. By aggregating the signatures of oracles, data from different data sources can be mutually verified, leading to a more efficient data verification and aggregation process. Then, a credibility-based cluster head election algorithm is designed, which reduces the communication overhead by clarifying the function distribution and building a hierarchical structure. Considering the good performance of the BLS threshold signature in large-scale applications, this paper combines it with distributed oracle technology and proposes a BLS threshold signature algorithm that supports share recovery in distributed oracles. The share recovery mechanism enables the proposed scheme to solve the key loss issue, and the setting of the threshold value enables the proposed scheme to complete signature aggregation with only a threshold number of oracles, making the scheme more robust. Finally, experimental results indicate that, by using the threshold signature technology and the cluster head election algorithm, our scheme effectively improves the execution efficiency of oracles and solves the problem of a single point of failure, leading to higher scalability and robustness.
基金funded by the National Natural Science Foundation of China(52167013)the Key Program of Natural Science Foundation of Gansu Province(24JRRA225)Natural Science Foundation of Gansu Province(23JRRA891).
文摘After a century of relative stability in the electricity sector,the widespread adoption of distributed energy resources,along with recent advancements in computing and communication technologies,has fundamentally altered how energy is consumed,traded,and utilized.This change signifies a crucial shift as the power system evolves from its traditional hierarchical organization to a more decentralized approach.At the heart of this transformation are innovative energy distribution models,like peer-to-peer(P2P)sharing,which enable communities to collaboratively manage their energy resources.The effectiveness of P2P sharing not only improves the economic prospects for prosumers,who generate and consume energy,but also enhances energy resilience and sustainability.This allows communities to better leverage local resources while fostering a sense of collective responsibility and collaboration in energy management.However,there is still no extensive implementation of such sharing models in today’s electricitymarkets.Research on distributed energy P2P trading is still in the exploratory stage,and it is particularly important to comprehensively understand and analyze the existing distributed energy P2P trading market.This paper contributes with an overview of the P2P markets that starts with the network framework,market structure,technical approach for trading mechanism,and blockchain technology,moving to the outlook in this field.
基金supported in part by the National Key Research and Development Program of China under Grant No.2021YFF0901300in part by the National Natural Science Foundation of China under Grant Nos.62173076 and 72271048.
文摘The distributed permutation flow shop scheduling problem(DPFSP)has received increasing attention in recent years.The iterated greedy algorithm(IGA)serves as a powerful optimizer for addressing such a problem because of its straightforward,single-solution evolution framework.However,a potential draw-back of IGA is the lack of utilization of historical information,which could lead to an imbalance between exploration and exploitation,especially in large-scale DPFSPs.As a consequence,this paper develops an IGA with memory and learning mechanisms(MLIGA)to efficiently solve the DPFSP targeted at the mini-malmakespan.InMLIGA,we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search,by extending,reconstructing,and reinforcing the information from previous solutions.In addition,we design a twolayer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism.Meanwhile,to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA,a probability curve-based acceptance criterion is proposed by combining a cube root function with custom rules.At last,a discrete adaptive learning rate is employed to enhance the stability of the memory and learningmechanisms.Complete ablation experiments are utilized to verify the effectiveness of the memory mechanism,and the results show that this mechanism is capable of improving the performance of IGA to a large extent.Furthermore,through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks,we have discovered that MLI-GA demonstrates significant potential for solving large-scale DPFSPs.This indicates that MLIGA is well-suited for real-world distributed flow shop scheduling.
基金Supported by the National Natural Science Foundation of China(12261050)Science and Technology Project of Department of Education of Jiangxi Province(GJJ2201612 and GJJ211027)Natural Science Foundation of Jiangxi Province of China(20212BAB202021)。
文摘In this paper,we investigate the periodic traveling wave solutions problem for a single population model with advection and distributed delay.By the bifurcation analysis method,we can obtain periodic traveling wave solutions for this model under the influence of advection term and distributed delay.The obtained results indicate that weak kernel and strong kernel can both deduce the existence of periodic traveling wave solutions.Finally,we apply the main results in this paper to Logistic model and Nicholson’s blowflies model.
基金Supported in part by the National Key R&D Program of China(No.2020YFA0712300)NSFC(No.61872353)。
文摘Fraction repetition(FR)codes are integral in distributed storage systems(DSS)with exact repair-by-transfer,while pliable fraction repetition codes are vital for DSSs in which both the per-node storage and repetition degree can easily be adjusted simultaneously.This paper introduces a new type of pliable FR codes,called absolute balanced pliable FR(ABPFR)codes,in which the access balancing in DSS is considered.Additionally,the equivalence between pliable FR codes and resolvable transversal packings in combinatorial design theory is presented.Then constructions of pliable FR codes and ABPFR codes based on resolvable transversal packings are presented.
文摘Reconfiguration,as well as optimal utilization of distributed generation sources and capacitor banks,are highly effective methods for reducing losses and improving the voltage profile,or in other words,the power quality in the power distribution system.Researchers have considered the use of distributed generation resources in recent years.There are numerous advantages to utilizing these resources,the most significant of which are the reduction of network losses and enhancement of voltage stability.Non-dominated Sorting Genetic Algorithm II(NSGA-II),Multi-Objective Particle Swarm Optimization(MOPSO),and Intersect Mutation Differential Evolution(IMDE)algorithms are used in this paper to perform optimal reconfiguration,simultaneous location,and capacity determination of distributed generation resources and capacitor banks.Three scenarios were used to replicate the studies.The reconfiguration of the switches,as well as the location and determination of the capacitor bank’s optimal capacity,were investigated in this scenario.However,in the third scenario,reconfiguration,and determining the location and capacity of the Distributed Generation(DG)resources and capacitor banks have been carried out simultaneously.Finally,the simulation results of these three algorithms are compared.The results indicate that the proposed NSGAII algorithm outperformed the other two multi-objective algorithms and was capable of maintaining smaller objective functions in all scenarios.Specifically,the energy losses were reduced from 211 to 51.35 kW(a 75.66%reduction),119.13 kW(a 43.54%reduction),and 23.13 kW(an 89.04%reduction),while the voltage stability index(VSI)decreased from 6.96 to 2.105,1.239,and 1.257,respectively,demonstrating significant improvement in the voltage profile.
基金funded by the Empresas Públicas de Medellín and Universidad de Antioquia.
文摘Understanding the evolutionary processes that influence the distribution of genetic diversity in natural populations is a key issue in evolutionary biology. Both species' distribution ranges and environmental gradients can influence this diversity through mechanisms such as gene flow, selection, and genetic drift. To explore how these forces interact, we assessed neutral and adaptive genetic variation in three widely distributed and two narrowly distributed bird species co-occurring along the Cauca River canyon in Antioquia, Colombia—a region of pronounced environmental heterogeneity. We sampled individuals across eight sites spanning the canyon's gradient and analyzed genetic diversity and structure using microsatellites and toll-like receptors (TLRs), a gene family involved in innate immunity. Widely distributed species consistently exhibited higher genetic diversity at both marker types compared to their narrowly distributed counterparts. Although we did not find a significant relationship between microsatellite heterozygosity and TLR heterozygosity, we evidenced a negative trend for widely distributed species and a positive trend for narrowly distributed species. This result suggests that there is a stronger effect of genetic drift in narrowly distributed species. Our results highlight the role of distribution range in maintaining genetic diversity and suggest that environmental gradients, by interacting with gene flow and selection, may influence patterns of adaptive variation.
基金support provided by Science Foundation Ireland Frontiers for the Future Programme,21/FFP-P/10090.
文摘Curved geostructures,such as tunnels,are commonly encountered in geotechnical engineering and are critical to maintaining structural stability.Ensuring their proper performance through field monitoring during their service life is essential for the overall functionality of geotechnical infrastructure.Distributed Brillouin sensing(DBS)is increasingly applied in geotechnical projects due to its ability to acquire spatially continuous strain and temperature distributions over distances of up to 150 km using a single optical fibre.However,limited by the complex operations of distributed optic fibre sensing(DFOS)sensors in curved structures,previous reports about exploiting DBS in geotechnical structural health monitoring(SHM)have mostly been focused on flat surfaces.The lack of suitable DFOS installation methods matched to the spatial characteristics of continuous monitoring is one of the major factors that hinder the further application of this technique in curved structures.This review paper starts with a brief introduction of the fundamental working principle of DBS and the inherent limitations of DBS being used on monitoring curved surfaces.Subsequently,the state-of-the-art installation methods of optical fibres in curved structures are reviewed and compared to address the most suitable scenario of each method and their advantages and disadvantages.The installation challenges of optical fibres that can highly affect measurement accuracy are also discussed in the paper.
基金supported by the National Natural Science Foundation of China(62303273,62373226)the National Research Foundation,Singapore through the Medium Sized Center for Advanced Robotics Technology Innovation(WP2.7)
文摘Dear Editor,The letter deals with the distributed state and fault estimation of the whole physical layer for cyber-physical systems(CPSs) when the cyber layer suffers from DoS attacks. With the advancement of embedded computing, communication and related hardware technologies, CPSs have attracted extensive attention and have been widely used in power system, traffic network, refrigeration system and other fields.
基金Supported by the Science and Technology Key Project of Science and Technology Department of Henan Province(No.252102211041)the Key Research and Development Projects of Henan Province(No.231111212500).
文摘A distributed bearing-only target tracking algorithm based on variational Bayesian inference(VBI)under random measurement anomalies is proposed for the problem of adverse effect of random measurement anomalies on the state estimation accuracy of moving targets in bearing-only tracking scenarios.Firstly,the measurement information of each sensor is complemented by using triangulation under the distributed framework.Secondly,the Student-t distribution is selected to model the measurement likelihood probability density function,and the joint posteriori probability density function of the estimated variables is approximately decoupled by VBI.Finally,the estimation results of each local filter are sent to the fusion center and fed back to each local filter.The simulation results show that the proposed distributed bearing-only target tracking algorithm based on VBI in the presence of abnormal measurement noise comprehensively considers the influence of system nonlinearity and random anomaly of measurement noise,and has higher estimation accuracy and robustness than other existing algorithms in the above scenarios.
基金supported by the National Key Research and Development Program of China(No.2022YFA1604703)the National Natural Science Foundation of China(No.12375189)the National Key Research and Development Program of China(No.2021YFA1601300)。
文摘Waveform generation and digitization play essential roles in numerous physics experiments.In traditional distributed systems for large-scale experiments,each frontend node contains an FPGA for data preprocessing,which interfaces with various data converters and exchanges data with a backend central processor.However,the streaming readout architecture has become a new paradigm for several experiments benefiting from advancements in data transmission and computing technologies.This paper proposes a scalable distributed waveform generation and digitization system that utilizes fiber optical connections for data transmission between frontend nodes and a central processor.By utilizing transparent transmission on top of the data link layer,the clock and data ports of the converters in the frontend nodes are directly mapped to the FPGA firmware at the backend.This streaming readout architecture reduces the complexity of frontend development and maintains the data conversion in proximity to the detector.Each frontend node uses a local clock for waveform digitization.To translate the timing information of events in each channel into the system clock domain within the backend central processing FPGA,a novel method is proposed and evaluated using a demonstrator system.
文摘In today’s complex and rapidly changing business environment,the traditional single-organization service model can no longer meet the needs of multi-organization collaborative processing.Based on existing business process engine technologies,this paper proposes a distributed heterogeneous process engine collaboration method for crossorganizational scenarios.The core of this method lies in achieving unified access and management of heterogeneous engines through a business process model adapter and a common operation interface.The key technologies include:Meta-Process Control Architecture,where the central engine(meta-process scheduler)decomposes the original process into fine-grained sub-processes and schedules their execution in a unified order,ensuring consistency with the original process logic;Process Model Adapter,which addresses the BPMN2.0 model differences among heterogeneous engines such as Flowable and Activiti through a matching-and-replacement mechanism,providing a unified process model standard for different engines;Common Operation Interface,which encapsulates the REST APIs of heterogeneous engines and offers a single,standardized interface for process deployment,instance management,and status synchronization.This method integrates multiple techniques to address API differences,process model incompatibilities,and execution order consistency issues among heterogeneous engines,delivering a unified,flexible,and scalable solution for cross-organizational process collaboration.
基金supported by the State Grid Corporation of China Headquarters Science and Technology Project“Research on Key Technologies for Power System Source-Load Forecasting and Regulation Capacity Assessment Oriented towards Major Weather Processes”(4000-202355381A-2-3-XG).
文摘Photovoltaic(PV)power generation is undergoing significant growth and serves as a key driver of the global energy transition.However,its intermittent nature,which fluctuates with weather conditions,has raised concerns about grid stability.Accurate PV power prediction has been demonstrated as crucial for power system operation and scheduling,enabling power slope control,fluctuation mitigation,grid stability enhancement,and reliable data support for secure grid operation.However,existing prediction models primarily target centralized PV plants,largely neglecting the spatiotemporal coupling dynamics and output uncertainties inherent to distributed PV systems.This study proposes a novel Spatio-Temporal Graph Neural Network(STGNN)architecture for distributed PV power generation prediction,designed to enhance distributed photovoltaic(PV)power generation forecasting accuracy and support regional grid scheduling.This approach models each PV power plant as a node in an undirected graph,with edges representing correlations between plants to capture spatial dependencies.The model comprises multiple Sparse Attention-based Adaptive Spatio-Temporal(SAAST)blocks.The SAAST blocks include sparse temporal attention,sparse spatial attention,an adaptive Graph Convolutional Network(GCN),and a temporal convolution network(TCN).These components eliminate weak temporal and spatial correlations,better represent dynamic spatial dependencies,and further enhance prediction accuracy.Finally,multi-dimensional comparative experiments between the STGNN and other models on the DKASC PV dataset demonstrate its superior performance in terms of accuracy and goodness-of-fit for distributed PV power generation prediction.
基金supported by NSF China(No.T2421002,62061146002,62020106005)。
文摘Distributed computing is an important topic in the field of wireless communications and networking,and its high efficiency in handling large amounts of data is particularly noteworthy.Although distributed computing benefits from its ability of processing data in parallel,the communication burden between different servers is incurred,thereby the computation process is detained.Recent researches have applied coding in distributed computing to reduce the communication burden,where repetitive computation is utilized to enable multicast opportunities so that the same coded information can be reused across different servers.To handle the computation tasks in practical heterogeneous systems,we propose a novel coding scheme to effectively mitigate the "straggling effect" in distributed computing.We assume that there are two types of servers in the system and the only difference between them is their computational capabilities,the servers with lower computational capabilities are called stragglers.Given any ratio of fast servers to slow servers and any gap of computational capabilities between them,we achieve approximately the same computation time for both fast and slow servers by assigning different amounts of computation tasks to them,thus reducing the overall computation time.Furthermore,we investigate the informationtheoretic lower bound of the inter-communication load and show that the lower bound is within a constant multiplicative gap to the upper bound achieved by our scheme.Various simulations also validate the effectiveness of the proposed scheme.
基金funded by the Jilin Province Science and Technology Development Plan Project(20230101344JC).
文摘A centralized-distributed scheduling strategy for distribution networks based on multi-temporal and hierarchical cooperative game is proposed to address the issues of difficult operation control and energy optimization interaction in distribution network transformer areas,as well as the problem of significant photovoltaic curtailment due to the inability to consume photovoltaic power locally.A scheduling architecture combiningmulti-temporal scales with a three-level decision-making hierarchy is established:the overall approach adopts a centralized-distributed method,analyzing the operational characteristics and interaction relationships of the distribution network center layer,cluster layer,and transformer area layer,providing a“spatial foundation”for subsequent optimization.The optimization process is divided into two stages on the temporal scale:in the first stage,based on forecasted electricity load and demand response characteristics,time-of-use electricity prices are utilized to formulate day-ahead optimization strategies;in the second stage,based on the charging and discharging characteristics of energy storage vehicles and multi-agent cooperative game relationships,rolling electricity prices and optimal interactive energy solutions are determined among clusters and transformer areas using the Nash bargaining theory.Finally,a distributed optimization algorithm using the bisection method is employed to solve the constructed model.Simulation results demonstrate that the proposed optimization strategy can facilitate photovoltaic consumption in the distribution network and enhance grid economy.
基金National Natural Science Foundation of China,Grant/Award Number:42130706。
文摘Coal mining induces changes in the nature of rock and soil bodies,as well as hydrogeological conditions,which can easily trigger the occurrence of geological disasters such as water inrush,movement of the coal seam roof and floor,and rock burst.Transparency in coal mine geological conditions provides technical support for intelligent coal mining and geological disaster prevention.In this sense,it is of great significance to address the requirements for informatizing coal mine geological conditions,dynamically adjust sensing parameters,and accurately identify disaster characteristics so as to prevent and control coal mine geological disasters.This paper examines the various action fields associated with geological disasters in mining faces and scrutinizes the types and sensing parameters of geological disasters resulting from coal seam mining.On this basis,it summarizes a distributed fiber-optic sensing technology framework for transparent geology in coal mines.Combined with the multi-field monitoring characteristics of the strain field,the temperature field,and the vibration field of distributed optical fiber sensing technology,parameters such as the strain increment ratio,the aquifer temperature gradient,and the acoustic wave amplitude are extracted as eigenvalues for identifying rock breaking,aquifer water level,and water cut range,and a multi-field sensing method is established for identifying the characteristics of mining-induced rock mass disasters.The development direction of transparent geology based on optical fiber sensing technology is proposed in terms of the aspects of sensing optical fiber structure for large deformation monitoring,identification accuracy of optical fiber acoustic signals,multi-parameter monitoring,and early warning methods.
Funding: Supported by Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, through Project Number PNURSP2025R235.
Abstract: The Internet of Things (IoT) is a smart infrastructure in which devices share captured data with the respective server or edge modules. However, secure and reliable communication remains a challenging task in these networks, as packets are transmitted over shared channels. In this paper, a decision tree is integrated with other metrics to form a secure distributed communication strategy for IoT. Initially, every device works collaboratively to form a distributed network. In this model, if a device is deployed outside the coverage area of the nearest server, it communicates indirectly through neighboring devices. For this purpose, every device collects data from its neighbors, such as hop count, average packet transmission delay, criticality factor, link reliability, and RSSI value. These parameters are used to find an optimal route from source to destination. Second, the proposed approach enables devices to learn from the environment and adjust the route-finding formula accordingly. Moreover, devices and server modules must ensure that every packet is transmitted securely, which requires encryption; for this purpose, a decision tree-enabled device-to-server authentication algorithm is presented in which every device and server takes part in an offline phase. Simulation results verify that the proposed distributed communication approach can ensure the integrity and confidentiality of data during transmission. Moreover, it outperforms existing approaches in terms of communication cost, processing overhead, end-to-end delay, packet loss ratio, and throughput. Finally, the proposed approach can be adopted in different networking infrastructures.
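As a hedged illustration of how a decision tree might rank candidate next hops from the listed metrics, the sketch below trains scikit-learn's DecisionTreeClassifier on a few made-up labelled samples; the feature order, labels, and candidate values are assumptions for demonstration, not the paper's dataset or exact strategy.

```python
# A minimal sketch: score candidate next-hop neighbours with a decision
# tree trained on synthetic samples of the metrics listed in the abstract
# (hop count, delay, criticality, link reliability, RSSI).

from sklearn.tree import DecisionTreeClassifier

# Features: [hop_count, avg_delay_ms, criticality, link_reliability, rssi_dbm]
train_X = [
    [2, 15.0, 0.2, 0.95, -60],   # good neighbour
    [5, 80.0, 0.9, 0.40, -92],   # poor neighbour
    [3, 25.0, 0.3, 0.90, -65],
    [6, 120.0, 0.8, 0.30, -95],
]
train_y = [1, 0, 1, 0]           # 1 = suitable next hop, 0 = unsuitable

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(train_X, train_y)

candidates = {
    "node_A": [2, 18.0, 0.25, 0.93, -62],
    "node_B": [4, 70.0, 0.70, 0.50, -88],
}
# Rank neighbours by the tree's probability of being a suitable next hop.
scores = {n: tree.predict_proba([f])[0][1] for n, f in candidates.items()}
print(max(scores, key=scores.get), scores)
```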
Funding: Supported by the Researchers Supporting Project, King Saud University, Saudi Arabia, through Project No. RSPD2025R951.
Abstract: This research introduces a unique approach to segmenting breast cancer images using a U-Net-based architecture. Because the computational demand of image processing is very high, this work builds a system that enables segmentation training on low-power machines. To accomplish this, the data are divided into several partitions, each used to train a separate model. For prediction, each trained model produces an initial output for a given input, and the final output is selected by pixel-wise majority voting over these outputs, which also helps preserve data privacy. In addition, this distributed training setup allows different computers to be used simultaneously, so training takes considerably less time than typical approaches. Even after training is complete, the proposed prediction system allows a newly trained model to be added, so the prediction becomes consistently more accurate. The effectiveness of the final output is evaluated with four performance metrics: average pixel accuracy, mean absolute error, average specificity, and average balanced accuracy. The experimental results show scores of 0.9216, 0.0687, 0.9477, and 0.8674, respectively. In addition, the proposed method was compared with four other state-of-the-art models in terms of total training time and computational resource usage, and it outperformed all of them in these respects.
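The pixel-wise majority voting step can be sketched in a few lines; the mask shapes, the binary labels, and the three toy predictions below are assumptions for demonstration only, not the paper's trained models.

```python
# A minimal sketch of pixel-wise majority voting: several independently
# trained models each produce a binary segmentation mask, and the final
# mask keeps the label chosen by the majority of models at every pixel.

import numpy as np

def majority_vote(masks):
    """Combine binary masks of shape (n_models, H, W) by per-pixel majority."""
    masks = np.asarray(masks)
    votes = masks.sum(axis=0)                   # number of models voting "1"
    return (votes * 2 > masks.shape[0]).astype(np.uint8)

# Three hypothetical 4x4 predictions from separately trained models.
m1 = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]])
m2 = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 1]])
m3 = np.array([[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 1, 1, 1]])

print(majority_vote([m1, m2, m3]))
```

A newly trained model can simply be appended to the list of masks, which matches the extensibility described above.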
Funding: Supported by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant No. GPIP: 2040-611-2024.
Abstract: The Tactile Internet of Things (TIoT) promises transformative applications, ranging from remote surgery to industrial robotics, by incorporating haptic feedback into traditional IoT systems. Yet TIoT's stringent requirements for ultra-low latency, high reliability, and robust privacy present significant challenges. Conventional centralized Federated Learning (FL) architectures struggle with latency and privacy constraints, while fully distributed FL (DFL) faces scalability and non-IID data issues as client populations expand and datasets become increasingly heterogeneous. To address these limitations, we propose a Clustered Distributed Federated Learning (CDFL) architecture tailored for a 6G-enabled TIoT environment. Clients are grouped into clusters based on data similarity and/or geographical proximity, enabling local intra-cluster aggregation before inter-cluster model sharing. This hierarchical, peer-to-peer approach reduces communication overhead, mitigates non-IID effects, and eliminates single points of failure. By offloading aggregation to the network edge and leveraging dynamic clustering, CDFL enhances both computational and communication efficiency. Extensive analysis and simulation demonstrate that CDFL outperforms both centralized FL and DFL as the number of clients grows. Specifically, CDFL achieves up to a 30% reduction in training time under highly heterogeneous data distributions, indicating faster convergence, and reduces communication overhead by approximately 40% compared with DFL. These improvements, together with enhanced network performance metrics, highlight CDFL's effectiveness for practical TIoT deployments and validate it as a scalable, privacy-preserving solution for next-generation TIoT applications.
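A minimal sketch of the two-level aggregation idea (intra-cluster averaging followed by inter-cluster averaging) is given below; the client parameter vectors, dataset sizes, and plain FedAvg-style weighting are illustrative assumptions rather than the CDFL protocol itself.

```python
# A minimal sketch (with made-up client weights) of hierarchical
# aggregation: clients are averaged inside each cluster first, then the
# cluster models are shared and averaged across clusters, instead of
# every client synchronising with a central server.

import numpy as np

def fedavg(models, sizes):
    """Weighted average of model parameter vectors by local dataset size."""
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(models), axis=0, weights=sizes)

# Two hypothetical clusters, each with a few clients' parameter vectors.
cluster_1 = ([np.array([1.0, 2.0]), np.array([1.2, 1.8])], [100, 80])
cluster_2 = ([np.array([0.8, 2.2]), np.array([1.1, 2.1]), np.array([0.9, 2.0])], [60, 60, 90])

# Stage 1: intra-cluster aggregation at the network edge.
cluster_models = [fedavg(m, s) for m, s in (cluster_1, cluster_2)]
cluster_sizes = [sum(cluster_1[1]), sum(cluster_2[1])]

# Stage 2: peer-to-peer inter-cluster aggregation of the cluster models.
global_model = fedavg(cluster_models, cluster_sizes)
print(global_model)
```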
Abstract: In this paper, we consider the distributed online optimization problem on a time-varying network, where each agent has its own time-varying objective function and the goal is to minimize the accumulated overall loss. We focus on distributed algorithms that use neither gradient information nor projection operators, so as to improve applicability and computational efficiency. By introducing deterministic differences and randomized differences as substitutes for the gradients of the objective functions and removing the projection operator of traditional algorithms, we design two kinds of gradient-free, projection-free distributed online optimization algorithms, which save considerable computational resources and impose fewer restrictions on applicability. We prove that both algorithms achieve consensus of the estimates and regrets of \(O(\log(T))\) for local strongly convex objectives. Finally, a simulation example is provided to verify the theoretical results.
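To illustrate the randomized-difference idea in isolation, the sketch below replaces the gradient of a drifting quadratic loss with a two-point finite-difference estimate along a random direction and runs a projection-free online update for a single agent; the loss, step sizes, and smoothing radius are illustrative assumptions, not the algorithms analyzed in the paper.

```python
# A minimal single-agent sketch of a randomized-difference, projection-free
# online update: the gradient is approximated from two function values
# along a random direction, so only zeroth-order information is used.

import numpy as np

rng = np.random.default_rng(0)

def randomized_difference(f, x, delta=1e-2):
    """Two-point gradient estimate of f at x along a random unit direction."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

def make_loss(c_t):
    """Time-varying local loss f_t(x) = ||x - c_t||^2 (illustrative)."""
    return lambda x: float(np.sum((x - c_t) ** 2))

x = np.zeros(2)
for t in range(1, 201):
    c_t = np.array([np.cos(0.01 * t), np.sin(0.01 * t)])  # drifting target
    f_t = make_loss(c_t)
    g_hat = randomized_difference(f_t, x)
    x -= (1.0 / t) * g_hat          # diminishing step size, no projection
print(x)
```

In the distributed setting of the paper, each agent would additionally average its estimate with those of its time-varying neighbors to reach consensus; that consensus step is omitted here.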