In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The analysis shows that the technical solution of the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and can adapt to the integration of computing power and network in various scenarios, such as user-oriented services, government and enterprise services, and open computing power services.
With the further development of science and technology, computer hardware and the Internet are advancing rapidly, and information technology has been widely applied in all fields, so that complex problems can be solved simply. Driven by these needs, software has begun to integrate with complex power networks, greatly increasing the scale of software systems. This growth pushes software development beyond general understanding and control, so that a complex system is formed. It is therefore necessary to strengthen research on complex network theory, which offers a new way to study the complexity of software systems. In this paper, the development of complex dynamic networks is briefly introduced, and the use of complex power networks in software engineering is summarized. Hopefully, this paper can support future crossover studies of complex power networks and software engineering.
With the acceleration of the intelligent transformation of power systems, the requirements for communication technology are increasingly stringent. The application of 5G mobile communication technology in power communication is analyzed. In this study, 5G technology features, application principles, and practical strategies are discussed, and methods such as network slicing, customized deployment, collaborative edge computing, communication equipment integration and upgrading, and multi-technology collaboration and complementation are proposed. The aim is to effectively improve the efficiency, reliability, and security of power communication, to address the difficulty traditional communication technology has in meeting the diversified needs of the power business, and to optimize the power communication network in support of the intelligent development of the power system.
Social computing and online groups have ushered in a new age of the network, in which information, networking, and communication technologies enable organized human effort in fundamentally new ways. Social network communities working in various domains face different hurdles, including new research challenges in social computing. Researchers should broaden the scope and draw on ideas and methods from other disciplines to address these challenges. The field has diverse academic associations, social links, and technical characteristics, offering an excellent opportunity for researchers to identify issues in social computing and provide innovative solutions for conveying information between online social groups over network computing. In this paper we investigate issues in social media such as users' privacy and security, network reliability, availability of desired data, users' awareness of social networks, and problems faced by academic domains. A huge number of users rely on social networks to retrieve and disseminate real-time and offline information to various places; the information may travel over local or global networks. The main concerns of users on social media are secure and fast communication channels. Both Facebook and YouTube claim efficient security mechanisms and fast communication channels for multimedia data. In this research a survey was conducted in the most populated cities, where large numbers of Facebook and YouTube users were found.
During the survey, several regular users pointed out potential issues that continually occur on these social websites' interfaces, for example unwanted advertisements, fake IDs, uncensored videos, and unknown friend requests, which lead to poor channel communication speed, slow uploading and downloading, channel interference, and problems with data security, user privacy, and the integrity and reliability of user communication on these social sites. The major issues faced by active users of Facebook and YouTube are highlighted in this research.
Network information communication technology in power systems is key to ensuring the safe and efficient operation of power grids. Because of its advantages in automated operation and information transmission, it is widely applied in power systems. Provided the power system is made compatible with network information communication technology, control of the power system can be strengthened and its operational efficiency improved. This paper mainly analyzes the specific applications of network information communication technology in power systems.
In recent years, the concept of the "cloud" in the construction of electric power enterprise information systems has become a hot topic, sought after by electric power information enterprises. Cloud computing technology is becoming a core focus of the development of China's IT industry. With the popularization and implementation of this technology, many barriers have been broken down, and computing resources have moved from the data center of the computer room to the "cloud", removing the barriers between people and information technology. Intelligent technology represented by cloud computing is becoming a new driving force for the transformation of the power industry. Amid the wave of digital and smart industries, cloud services continue to grow rapidly in the power market. Starting from this reality, this paper expounds the application status of cloud computing technology in the construction of electric power information systems and discusses how to better promote its application.
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend toward ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources because of the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm has been proposed: the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to enable flexible computing power scheduling. In this survey, we present an exhaustive review of state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, we elaborate on computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform. A computing power network testbed is built and evaluated, and applications and use cases are discussed. We then introduce the key enabling technologies for computing power networks. Finally, open challenges and future research directions are presented.
Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy that determines both paths and computing nodes in a distributed computing power network. Unlike traditional routing protocols, the proposed policy takes additional computing-related metrics into consideration. Based on multi-attribute decision theory and fuzzy logic theory, we propose two routing selection algorithms: the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (lPMADM) algorithm. Simulation results show that the proposed policy achieves better average processing delay, user satisfaction, and load balancing than existing works.
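The fuzzy-logic details of FLBR go beyond an abstract, but the underlying multi-attribute idea can be sketched as a weighted-sum score over normalized path metrics. All attribute names, weights, and values below are invented for illustration, not taken from the paper:

```python
# Score candidate paths on several attributes (delay, hop count, computing
# load on the candidate node) and pick the best. Lower raw values are
# better for every attribute here, so normalization inverts them.
def normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(hi - v) / (hi - lo) for v in values]   # best value maps to 1.0

def select_route(candidates, weights):
    """candidates: list of dicts of raw metrics; weights: metric -> weight."""
    keys = list(weights)
    norm = {k: normalize([c[k] for c in candidates]) for k in keys}
    scores = [sum(weights[k] * norm[k][i] for k in keys)
              for i in range(len(candidates))]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return best, scores

paths = [
    {"delay": 12.0, "hops": 3, "cpu_load": 0.7},
    {"delay": 20.0, "hops": 2, "cpu_load": 0.2},
    {"delay": 15.0, "hops": 4, "cpu_load": 0.4},
]
best, scores = select_route(paths, {"delay": 0.5, "hops": 0.2, "cpu_load": 0.3})
# best == 0: path 0 wins on the heavily weighted delay attribute
```

A real fuzzy-logic scheme would replace the linear normalization with membership functions and rule-based inference; the weighted sum is only the simplest stand-in.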
Medical image analysis has become a cornerstone of modern healthcare, driven by the exponential growth of data from imaging modalities such as MRI, CT, PET, ultrasound, and X-ray. Traditional machine learning methods made early contributions; however, recent advances in deep learning (DL) have revolutionized the field, offering state-of-the-art performance in image classification, segmentation, detection, fusion, registration, and enhancement. This comprehensive review presents an in-depth analysis of deep learning methodologies applied across medical image analysis tasks, highlighting both foundational models and recent innovations. The article begins by introducing conventional techniques and their limitations, setting the stage for DL-based solutions. Core DL architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Vision Transformers (ViTs), and hybrid models, are discussed in detail, together with their advantages and domain-specific adaptations. Advanced learning paradigms such as semi-supervised learning, self-supervised learning, and few-shot learning are explored for their potential to mitigate data annotation challenges in clinical datasets. The review further categorizes major tasks in medical image analysis, elaborating on how DL techniques have enabled precise tumor segmentation, lesion detection, modality fusion, super-resolution, and robust classification across diverse clinical settings. Emphasis is placed on applications in oncology, cardiology, neurology, and infectious diseases, including COVID-19. Challenges such as data scarcity, label imbalance, model generalizability, interpretability, and integration into clinical workflows are critically examined. Ethical considerations, explainable AI (XAI), federated learning, and regulatory compliance are discussed as essential components of real-world deployment. Benchmark datasets, evaluation metrics, and comparative performance analyses are presented to support future research. The article concludes with a forward-looking perspective on the role of foundation models, multimodal learning, edge AI, and bio-inspired computing in the future of medical imaging. Overall, this review serves as a valuable resource for researchers, clinicians, and developers aiming to harness deep learning for intelligent, efficient, and clinically viable medical image analysis.
In the smart city paradigm, the deployment of Internet of Things (IoT) services and solutions requires extensive communication and computing resources to place and process IoT applications in real time, which consumes a lot of energy and increases operational costs. Usually, IoT applications are placed in the cloud to provide high-quality services and scalable resources. However, the existing cloud-based approach should consider the above constraints to place and process IoT applications efficiently. In this paper, an efficient optimization approach for placing IoT applications in a multi-layer fog-cloud environment is proposed using a mathematical model (Mixed-Integer Linear Programming, MILP). This approach takes into account IoT application requirements, available resource capacities, and the geographical locations of servers, helping to optimize IoT application placement decisions against multiple objectives such as data transmission, power consumption, and cost. Simulation experiments were conducted with various IoT applications (e.g., augmented reality, infotainment, healthcare, and compute-intensive workloads) to simulate realistic scenarios. The results showed that the proposed approach outperformed the existing cloud-based approach, reducing data transmission by 64% and the associated processing and networking power consumption costs by up to 78%. Finally, a heuristic approach was developed to validate and imitate the presented approach. It showed comparable outcomes to the proposed model, with the gap between them reaching a maximum of 5.4% of the total power consumption.
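The MILP formulation itself is not reproduced in the abstract; as a toy stand-in, a brute-force search over placements illustrates the same objective, minimizing combined transmission-plus-power cost under capacity constraints. The application names, capacities, and cost figures below are invented for illustration:

```python
from itertools import product

# Toy exhaustive placement search standing in for a fog-cloud placement
# MILP: assign each app to a node, respect CPU capacity, minimize cost.
apps = {"ar": 2, "health": 1, "batch": 4}            # CPU units required
nodes = {"fog1": 3, "fog2": 3, "cloud": 10}          # CPU capacity
# cost[app][node]: combined transmission + power cost of that placement
cost = {
    "ar":     {"fog1": 1, "fog2": 2, "cloud": 8},
    "health": {"fog1": 1, "fog2": 1, "cloud": 6},
    "batch":  {"fog1": 5, "fog2": 5, "cloud": 3},
}

def best_placement():
    names = list(apps)
    best, best_cost = None, float("inf")
    for assign in product(nodes, repeat=len(names)):
        used = {n: 0 for n in nodes}
        for a, n in zip(names, assign):
            used[n] += apps[a]
        if any(used[n] > nodes[n] for n in nodes):
            continue                                  # capacity violated
        total = sum(cost[a][n] for a, n in zip(names, assign))
        if total < best_cost:
            best, best_cost = dict(zip(names, assign)), total
    return best, best_cost

placement, total_cost = best_placement()
# The heavy "batch" app cannot fit on a fog node, so it lands in the cloud.
```

A real MILP solver replaces this exponential enumeration with branch-and-bound, which is what makes the approach viable at smart-city scale.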
As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. Digital Twins (DT), by contrast, can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting smooth operation of WCPN services. In this paper, we propose a DT for blockchain-empowered WCPN architecture that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated annealing-based near-optimal placement algorithm (ISAPA) to achieve the minimum average DT synchronization latency under a DT error constraint. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
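As a hedged illustration of the simulated-annealing side of such a placement problem (not the paper's ISAPA itself), the following minimal sketch anneals an assignment of digital twins to hosts against a synthetic latency matrix; every number and parameter is made up:

```python
import math, random

random.seed(42)
latency = [[5, 9, 2], [4, 3, 8], [7, 1, 6], [2, 6, 4]]   # twin x host (ms)

def avg_latency(assign):
    return sum(latency[t][h] for t, h in enumerate(assign)) / len(assign)

def anneal(n_twins, n_hosts, steps=2000, temp=5.0, cooling=0.995):
    """Randomly perturb one twin's host per step; cool the temperature."""
    assign = [random.randrange(n_hosts) for _ in range(n_twins)]
    cur = avg_latency(assign)
    best, best_cost = assign[:], cur
    for _ in range(steps):
        cand = assign[:]
        cand[random.randrange(n_twins)] = random.randrange(n_hosts)
        new = avg_latency(cand)
        # accept improvements always; worse moves with Boltzmann probability
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            assign, cur = cand, new
            if cur < best_cost:
                best, best_cost = assign[:], cur
        temp *= cooling
    return best, best_cost

placement, cost = anneal(n_twins=4, n_hosts=3)
```

Tracking the best-so-far assignment (rather than returning the final state) is a standard annealing safeguard; the paper's ISAPA additionally enforces the DT error constraint, which this sketch omits.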
In many IIoT architectures, various devices connect to the edge cloud via gateway systems, and large volumes of data are delivered to the edge cloud for processing. Delivering data to an appropriate edge cloud is critical to improving IIoT service efficiency. There are two types of cost in this kind of IoT network: a communication cost and a computing cost. For service efficiency, both the communication cost of data transmission and the computing cost in the edge cloud should be minimized. Therefore, in this paper, the communication cost of data transmission is defined as a delay factor, and the computing cost in the edge cloud is defined as the waiting time given the computing intensity. The proposed method selects the edge cloud that minimizes the total of the communication and computing costs; that is, a device chooses a routing path to the selected edge cloud based on these costs. The proposed method controls the data flows in a mesh-structured network and distributes the data processing load appropriately. Its performance is validated through extensive computer simulation. When the transition probability from good to bad is 0.3 and from bad to good is 0.7 in the wireless and edge cloud states, the proposed method reduces both the average delay and the service pause count to about 25% of the existing method.
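The selection rule described above, minimize communication cost plus computing cost, can be sketched as a one-line argmin. The cost model and all figures are illustrative assumptions, not the paper's:

```python
# Choose the edge cloud minimizing (link delay for the payload) +
# (expected computing wait in that cloud's queue).
def select_edge_cloud(clouds, data_size):
    """clouds: list of (name, link_delay_per_unit, queue_wait_s) tuples."""
    def total_cost(cloud):
        _, delay_per_unit, queue_wait = cloud
        return delay_per_unit * data_size + queue_wait   # comm + comp cost
    return min(clouds, key=total_cost)[0]

clouds = [
    ("edge-a", 0.02, 1.5),   # fast link, busy queue
    ("edge-b", 0.05, 0.2),   # slow link, idle queue
]
small = select_edge_cloud(clouds, 10)    # idle queue wins for small payloads
large = select_edge_cloud(clouds, 100)   # fast link wins for large payloads
```

The crossover between the two choices is exactly the communication/computing trade-off the paper's method balances per flow.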
The Computing Power Network (CPN) is emerging as an important research interest in beyond-5G (B5G) and 6G. This paper constructs a CPN based on Federated Learning (FL), in which all Multi-access Edge Computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train learning models using local data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate in the FL process efficiently and to ensure their energy efficiency. To address these issues, we first introduce an incentive mechanism using the Stackelberg game framework to motivate MEC servers. We then formulate a comprehensive algorithm to jointly optimize the communication resource allocation (wireless bandwidth and transmission power) and the computation resource allocation (computation capacity of MEC servers) while ensuring the local training accuracy of each MEC server. The numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
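A minimal Stackelberg sketch of such an incentive mechanism (not the paper's model; the utility functions and all values are assumptions): the computing power center (leader) announces a unit reward r, and each MEC server (follower) best-responds with effort r/(2c_i) under an assumed quadratic cost c_i x^2. For this toy model the leader's optimum is r* = value/2, which a grid search recovers:

```python
# Leader announces reward r per unit of contributed computation; each
# follower i maximizes r*x - c_i*x^2, giving best response x_i = r/(2*c_i).
def follower_effort(r, c):
    return r / (2 * c)

def leader_utility(r, value, costs):
    # Leader earns `value` per unit of total effort and pays r per unit.
    total = sum(follower_effort(r, c) for c in costs)
    return (value - r) * total

def best_reward(value, costs, step=0.01):
    # Grid search; analytically the optimum is r* = value / 2 here.
    grid = [i * step for i in range(1, int(value / step))]
    return max(grid, key=lambda r: leader_utility(r, value, costs))

costs = [1.0, 2.0, 4.0]          # illustrative per-server cost coefficients
r_star = best_reward(10.0, costs)
```

The backward-induction structure (solve followers first, then optimize the leader against their responses) is the essence of the Stackelberg framework the paper uses.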
Federated Learning (FL) is a distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed: the server clusters clients by task processing delay to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order in which clients participate in training. Finally, based on the table lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the sorting results of GAS are written into the SRv6 packet header to control the order in which clients participate in model training. We conduct extensive experiments on two datasets, CIFAR-10 and MNIST, and the results demonstrate that the proposed algorithm offers improved accuracy, lower communication costs, and reduced network delays.
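The GAS step can be illustrated with a generic permutation genetic algorithm that minimizes the sum of chained (cumulative) client delays; the delays, fitness function, and GA parameters below are invented for illustration and are not the paper's:

```python
import random

random.seed(7)
delays = [4.0, 1.0, 3.0, 2.0]          # per-client processing delay (s)

def chained_cost(order):
    """Sum of cumulative delays: each client waits for its predecessors."""
    t, total = 0.0, 0.0
    for c in order:
        t += delays[c]
        total += t
    return total

def crossover(a, b):
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [c for c in b if c not in head]   # keeps a valid permutation

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

def ga_sort(pop_size=20, generations=60):
    pop = [random.sample(range(len(delays)), len(delays))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=chained_cost)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            child = crossover(*random.sample(parents, 2))
            if random.random() < 0.2:
                mutate(child)
            children.append(child)
        pop = parents + children
    return min(pop, key=chained_cost)

best_order = ga_sort()
```

For this separable cost the known optimum is the shortest-delay-first order, which gives the GA an easy sanity check; the real GAS optimizes against measured network and training delays.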
A star-topology network for the solar radio dynamic spectrograph's large-scale data acquisition, storage, and processing system (meter-wave band 230–300 MHz and decimeter band 700 MHz–1.4 GHz) and the auto-control system for the 10-meter antenna has been set up at Yunnan Observatory. Resource sharing among the optical disk drive, laser printer, modem, and other devices is realized, providing a path to large-scale data processing and storage for solar radio observations of high time and frequency resolution. A homepage has been published on the Internet, and a remote-control test of the telescope was successful.
The mutual interference among multiple applications delivered as services through a Cloud Services Delivery Network (CSDN) seriously affects their QoS. In order to deploy multiple applications dependably and efficiently, we propose the Multiple Applications Co-Exist (MACE) method. MACE classifies applications into different types and deploys them with a degree of isolation. Meanwhile, static resource allocation, dynamic supplement, and a resource reservation mechanism are designed to minimize mutual interference and maximize resource utilization. After applying MACE to a real large-scale CSDN and evaluating it through a 6-month measurement, we find that the CSDN load is more balanced, bandwidth utilization increases by about 20%, the applications' potential statistical multiplexing ratio decreases from 12% to 5%, and the number of complaint events affecting the dependability of CSDN services caused by mutual interference drops to zero. MACE thus offers a trade-off and an improvement for the dependability and efficiency goals of a CSDN.
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, and method (3) on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced work such as shallow networks with infinite width. The review does not address only experts: readers are assumed to be familiar with computational mechanics but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
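Of the architectures such a review covers, scaled dot-product attention is compact enough to sketch in a few lines of pure Python (a single query over two keys; all values are illustrative):

```python
import math

# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V, written out
# for plain nested lists so no array library is needed.
def attention(Q, K, V):
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        m = max(scores)                      # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
y = attention(Q, K, V)
# The query aligns with the first key, so the output leans toward V[0].
```

Because the softmax weights sum to one, the output row is a convex combination of the value rows, which is the property that makes attention interpretable as soft lookup.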
Accurate brain tumor classification in medical imaging requires real-time processing and efficient computation, making hardware acceleration essential. Field Programmable Gate Arrays (FPGAs) offer parallelism and reconfigurability, making them well suited to such tasks. In this study, we propose a hardware-accelerated Convolutional Neural Network (CNN) for brain cancer classification, implemented on the PYNQ-Z2 FPGA. Our approach optimizes the first Conv2D layer using different numerical representations: 8-bit fixed-point (INT8), 16-bit fixed-point (FP16), and 32-bit fixed-point (FP32), while the remaining layers run on an ARM Cortex-A9 processor. Experimental results demonstrate that FPGA acceleration significantly outperforms the CPU (Central Processing Unit)-based approach. The results emphasize the critical importance of selecting the appropriate numerical representation for hardware acceleration in medical imaging. On the PYNQ-Z2 FPGA, INT8 achieves a 16.8% reduction in latency and 22.2% power savings compared with FP32, making it ideal for real-time and energy-constrained applications. FP16 offers a strong balance, with only a 0.1% drop in accuracy compared with FP32 (94.1% vs. 94.2%) while improving latency by 5% and reducing power consumption by 11.1%. Compared with prior works, the proposed FPGA-based CNN model achieves the highest classification accuracy (94.2%) with a throughput of up to 1.562 FPS, outperforming GPU-based and traditional CPU methods in both accuracy and hardware efficiency. These findings demonstrate the effectiveness of FPGA-based AI acceleration for real-time, power-efficient, and high-performance brain tumor classification, showing its practical potential in next-generation medical imaging systems.
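Symmetric quantization of the kind behind an INT8 deployment can be sketched as follows; the weight values are invented, and real deployments involve per-layer calibration well beyond this toy:

```python
# Symmetric per-tensor quantization: map weights to signed integers of the
# given bit width using a single scale derived from the largest magnitude.
def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1                    # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.42, -0.17, 0.05, -0.31]                    # illustrative conv weights
q8, s8 = quantize(w, 8)
w8 = dequantize(q8, s8)
max_err = max(abs(a - b) for a, b in zip(w, w8))  # bounded by scale / 2
```

The accuracy/latency trade-off reported above comes from exactly this rounding error: narrower formats shrink multipliers and memory traffic on the FPGA at the cost of a bounded per-weight error.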
The Computing Power Network (CPN) is a new paradigm that integrates communication, computing, and storage resources to provide services for tasks. However, tasks composed of non-independent subtasks have preferences for the resources required at each stage, which increases the difficulty of heterogeneous resource allocation and degrades the latency performance of CPN services. Motivated by this, this paper jointly optimizes the full service cycle of tasks, including transmission, task partitioning, and offloading. First, the transmission bandwidth is dynamically configured based on the delay sensitivity of tasks. Second, using real-time information from edge resource clusters and state resource clusters in the network, the optimal partitioning of a computation task is derived. Third, personalized resource allocation schemes are customized for computation and storage tasks, respectively. Finally, the impact of resource parameter configuration on the latency violation probability of the CPN is revealed. Compared with the benchmark schemes, the proposed scheme reduces the network latency violation probability by a factor of up to 1.17 in the same network setting.
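The first step above, configuring transmission bandwidth by delay sensitivity, might be sketched as a simple weighted proportional split; the task names and sensitivity weights are assumptions for illustration, not the paper's scheme:

```python
# Split total link bandwidth among tasks in proportion to their delay
# sensitivity, so more sensitive tasks get a larger share.
def allocate_bandwidth(total_mbps, tasks):
    """tasks: {name: delay_sensitivity_weight}."""
    total_w = sum(tasks.values())
    return {name: total_mbps * w / total_w for name, w in tasks.items()}

shares = allocate_bandwidth(100.0, {"urgent": 3.0, "normal": 1.0, "bulk": 1.0})
# "urgent" receives three times the share of each other task
```

A real scheme would recompute these weights online from task deadlines and queue state; the proportional split only shows the shape of the configuration step.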
Sparse matrix operations are widely used in computational science and engineering applications such as quantum chemistry and finite element analysis, as well as modern machine learning scenarios such as social networks and compressed deep neural networks. In the well-known article "A View of the Parallel Computing Landscape", Asanovic et al. of the University of California, Berkeley...
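The workhorse kernel behind most of the sparse workloads mentioned above is the sparse matrix-vector product; a minimal CSR (compressed sparse row) version in pure Python:

```python
# CSR stores only nonzeros: `values` holds them row by row, `col_idx` their
# columns, and `row_ptr[r]:row_ptr[r+1]` delimits row r's entries.
def csr_matvec(values, col_idx, row_ptr, x):
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# The matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]] in CSR form:
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 2.0, 3.0])  # → [5.0, 6.0, 19.0]
```

The irregular, indirection-heavy inner loop is precisely why sparse kernels are hard to parallelize efficiently, which is the concern raised in the parallel-computing-landscape discussion.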
Fund: This work was supported by the National Key R&D Program of China (No. 2019YFB1802800).
文摘In 6G era,service forms in which computing power acts as the core will be ubiquitous in the network.At the same time,the collaboration among edge computing,cloud computing and network is needed to support edge computing service with strong demand for computing power,so as to realize the optimization of resource utilization.Based on this,the article discusses the research background,key techniques and main application scenarios of computing power network.Through the demonstration,it can be concluded that the technical solution of computing power network can effectively meet the multi-level deployment and flexible scheduling needs of the future 6G business for computing,storage and network,and adapt to the integration needs of computing power and network in various scenarios,such as user oriented,government enterprise oriented,computing power open and so on.
文摘Along with the further development of science and technology, computer hardware and the Intemet are in a rapid development, and information technology has been widely used in all fields so that complex problems are simply solved. Because of the needs for the development, software starts to mutually integrate with complex power network, making the scale of software increase greatly. Such a growing trend of software promotes soft-ware development to go beyond a general understanding and control and thus a complex system is formed. It is necessary to strengthen the research of complex network theory, and this is a new way to help people study the complexity of software systems. In this paper, the development course of complex dynamic network is introduced simply and the use of complex power network in the software engineering is summarized. Hopefully, this paper can help the crossover study of complex power network and software engineering in the future.
文摘With the acceleration of the intelligent transformation of power systems,the requirements for communication technology are increasingly stringent.The application of 5G mobile communication technology in power communication is analyzed.In this study,5G technology features,application principles,and practical strategies are discussed,and methods such as network slicing,customized deployment,edge computing collaborative application,communication equipment integration and upgrading,and multi-technology collaboration and complementation are proposed.It aims to effectively improve the efficiency,reliability,and security of power communication,solve the problem that traditional communication technology is difficult to meet the diversified needs of power business,and achieve the effect of optimizing the power communication network and supporting the intelligent development of the power system.
Abstract: Social computing and online groups have ushered in a new age of the network, in which information, networking, and communication technologies enable systematized human efforts in fundamentally innovative ways. Social network communities working across social network domains face different hurdles, including new research studies and challenges in social computing. Researchers should try to expand the scope and establish new ideas and methods, even from other disciplines, to address these challenges. This idea has diverse academic associations, social links, and technical characteristics. It thus offers an ideal opportunity for researchers to identify the issues in social computing and provide innovative solutions for conveying information between social online groups on network computing. In this research paper we investigate different issues in social media, such as users' privacy and security, network reliability, availability of desired data, users' awareness of social networks, and problems faced by academic domains. A huge number of users operate social networks to retrieve and disseminate their real-time and offline information to various places. The information may be transmitted on local networks or on global networks. The main concerns of users on social media are secure and fast communication channels. Facebook and YouTube both claim efficient security mechanisms and fast communication channels for multimedia data. In this research a survey was conducted in the most populated cities, where a large number of Facebook and YouTube users were found. During the survey, several regular users indicated certain potential issues that continuously occur on these social websites' interfaces, for example unwanted advertisements, fake IDs, uncensored videos, and unknown friend requests, which cause poor channel communication speed, poor uploading and downloading speeds, channel interference, and problems with the security of data, the privacy of users, and the integrity and reliability of user communication on these social sites. The major issues faced by active users of Facebook and YouTube are highlighted in this research.
Abstract: Network information communication technology in power systems is key to ensuring the safe and efficient operation of power grids. Network information communication technology has inherent advantages in automated operation and information transmission, and is therefore widely applied in power systems. By ensuring that the power system is compatible with network information communication technology, control of the power system can be strengthened and its operational efficiency improved. This paper analyzes the specific applications of network information communication technology in power systems.
Abstract: In recent years, the concept of the "cloud" in the construction of electric power enterprise information systems has become a hot topic, sought after by electric power information enterprises. Cloud computing technology is becoming the core focus of the development of China's IT industry. With the popularization and implementation of this technology, many high barriers have been broken down, and computing resources have been transferred from data center machine rooms to the "cloud", eliminating the barriers between people and information technology. Intelligent technology represented by cloud computing is becoming a new driving force for the transformation of the power industry. Under the wave of digital and smart industries, cloud services continue to show rapid growth in the power market. Based on current practice, this paper describes the application status of cloud computing technology in the construction of electric power information systems and discusses how to better promote its application.
Funding: Supported by the National Science Foundation of China under Grants 62271062 and 62071063, and by the Zhijiang Laboratory Open Project Fund 2020LCOAB01.
Abstract: With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm is proposed, i.e., the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of the state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues in computing power modeling, information awareness and announcement, resource allocation, network forwarding, computing power transaction platforms, and resource orchestration platforms is presented. A computing power network testbed is built and evaluated, and applications and use cases of the computing power network are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
Abstract: Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy that determines both paths and computing nodes in a distributed computing power network. Unlike traditional routing protocols, the proposed policy also takes computing-related metrics into consideration. Based on multi-attribute decision theory and fuzzy logic theory, we propose two routing selection algorithms: the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (lPMADM) algorithm. Simulation results show that the proposed policy achieves better performance in average processing delay, user satisfaction, and load balancing than existing works.
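To illustrate the core idea of computing-aware routing, the following is a simplified weighted-sum sketch. It is not the paper's FLBR or lPMADM algorithm; the attributes and weights are assumptions chosen only to show how computing-related metrics can enter the route decision alongside delay.

```python
# Simplified multi-attribute route scoring: each candidate (path,
# computing node) pair is scored by a weighted sum of normalized
# attributes; the lowest score wins.

def score(path, weights):
    """Lower score = more attractive path/computing-node combination."""
    return sum(weights[k] * path[k] for k in weights)

# Normalized attributes: transmission delay, node load, and inverse
# computing capacity (all in [0, 1]; smaller is better). The weights
# are illustrative, not taken from the paper.
weights = {"delay": 0.5, "load": 0.3, "inv_capacity": 0.2}
paths = {
    "p1": {"delay": 0.2, "load": 0.8, "inv_capacity": 0.5},
    "p2": {"delay": 0.4, "load": 0.3, "inv_capacity": 0.2},
}
best = min(paths, key=lambda p: score(paths[p], weights))  # "p2"
```

Here the shortest-delay path "p1" loses to "p2" because its computing node is heavily loaded, which is exactly the effect of adding computing metrics to the routing decision.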
Abstract: Medical image analysis has become a cornerstone of modern healthcare, driven by the exponential growth of data from imaging modalities such as MRI, CT, PET, ultrasound, and X-ray. Traditional machine learning methods made early contributions; however, recent advancements in deep learning (DL) have revolutionized the field, offering state-of-the-art performance in image classification, segmentation, detection, fusion, registration, and enhancement. This comprehensive review presents an in-depth analysis of deep learning methodologies applied across medical image analysis tasks, highlighting both foundational models and recent innovations. The article begins by introducing conventional techniques and their limitations, setting the stage for DL-based solutions. Core DL architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Vision Transformers (ViTs), and hybrid models, are discussed in detail, including their advantages and domain-specific adaptations. Advanced learning paradigms such as semi-supervised learning, self-supervised learning, and few-shot learning are explored for their potential to mitigate data annotation challenges in clinical datasets. This review further categorizes the major tasks in medical image analysis, elaborating on how DL techniques have enabled precise tumor segmentation, lesion detection, modality fusion, super-resolution, and robust classification across diverse clinical settings. Emphasis is placed on applications in oncology, cardiology, neurology, and infectious diseases, including COVID-19. Challenges such as data scarcity, label imbalance, model generalizability, interpretability, and integration into clinical workflows are critically examined. Ethical considerations, explainable AI (XAI), federated learning, and regulatory compliance are discussed as essential components of real-world deployment. Benchmark datasets, evaluation metrics, and comparative performance analyses are presented to support future research. The article concludes with a forward-looking perspective on the role of foundation models, multimodal learning, edge AI, and bio-inspired computing in the future of medical imaging. Overall, this review serves as a valuable resource for researchers, clinicians, and developers aiming to harness deep learning for intelligent, efficient, and clinically viable medical image analysis.
Abstract: In the smart city paradigm, the deployment of Internet of Things (IoT) services and solutions requires extensive communication and computing resources to place and process IoT applications in real time, which consumes a lot of energy and increases operational costs. Usually, IoT applications are placed in the cloud to provide high-quality services and scalable resources. However, the existing cloud-based approach should consider the above constraints to efficiently place and process IoT applications. In this paper, an efficient optimization approach for placing IoT applications in a multi-layer fog-cloud environment is proposed using a mathematical model (Mixed-Integer Linear Programming (MILP)). This approach takes into account IoT application requirements, available resource capacities, and the geographical locations of servers, which helps optimize IoT application placement decisions under multiple objectives such as data transmission, power consumption, and cost. Simulation experiments were conducted with various IoT applications (e.g., augmented reality, infotainment, healthcare, and compute-intensive) to simulate realistic scenarios. The results showed that the proposed approach outperformed the existing cloud-based approach, reducing data transmission by 64% and the associated processing and networking power consumption costs by up to 78%. Finally, a heuristic approach was developed to validate and imitate the presented approach. It showed outcomes comparable to the proposed model, with the gap between them reaching a maximum of 5.4% of the total power consumption.
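The flavor of capacity-constrained placement can be conveyed with a small greedy heuristic. This is not the paper's MILP formulation or its heuristic; the layer names, capacities, and unit costs below are invented purely for demonstration.

```python
# Illustrative greedy placement: each IoT application is assigned to
# the cheapest fog/cloud layer that still has enough capacity, taking
# larger applications first.

def place(apps, layers):
    """apps: list of (name, demand); layers: {name: [capacity, unit_cost]}.
    Returns {app: layer}."""
    assignment = {}
    for app, demand in sorted(apps, key=lambda a: -a[1]):
        feasible = [l for l in layers if layers[l][0] >= demand]
        best = min(feasible, key=lambda l: demand * layers[l][1])
        layers[best][0] -= demand          # consume the layer's capacity
        assignment[app] = best
    return assignment

layers = {"fog": [10, 1.0], "cloud": [100, 3.0]}   # cloud is pricier
plan = place([("ar", 6), ("healthcare", 5)], layers)
# "ar" fits in the cheap fog layer; "healthcare" then no longer fits
# there and falls back to the cloud.
```

A MILP solver, as used in the paper, would instead optimize all placements jointly and could find assignments a greedy pass misses.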
Funding: Supported by the National Natural Science Foundation of China under Grant 62272391, and in part by the Key Industry Innovation Chain of Shaanxi under Grant 2021ZDLGY05-08.
Abstract: As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for achieving efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. In contrast, Digital Twins (DT) can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting the smooth operation of WCPN services. In this paper, we propose a DT architecture for blockchain-empowered WCPN that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated-annealing-based near-optimal placement algorithm (ISAPA) to achieve minimum average DT synchronization latency under a DT error constraint. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2021R1C1C1013133), by the Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korea Government (MSIT) (RS-2022-00167197, Development of Intelligent 5G/6G Infrastructure Technology for The Smart City), and by the Soonchunhyang University Research Fund.
Abstract: In many IIoT architectures, various devices connect to the edge cloud via gateway systems, and large volumes of data are delivered to the edge cloud for processing. Delivering data to an appropriate edge cloud is critical to improving IIoT service efficiency. There are two types of costs in this kind of IoT network: a communication cost and a computing cost. For service efficiency, the communication cost of data transmission should be minimized, and the computing cost in the edge cloud should also be minimized. Therefore, in this paper, the communication cost of data transmission is defined as the delay factor, and the computing cost in the edge cloud is defined as the waiting time of the computing intensity. The proposed method selects the edge cloud that minimizes the total of the communication and computing costs; that is, a device chooses a routing path to the selected edge cloud based on these costs. The proposed method controls the data flows in a mesh-structured network and appropriately distributes the data processing load. The performance of the proposed method is validated through extensive computer simulation. When the transition probability from good to bad is 0.3 and the transition probability from bad to good is 0.7 in the wireless and edge cloud states, the proposed method reduces both the average delay and the service pause count to about 25% of those of the existing method.
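The selection rule described above reduces to minimizing a two-term sum per candidate. A minimal sketch, with hypothetical cost numbers (the paper's delay factor and computing-intensity waiting time would be measured quantities):

```python
# Cost-based edge cloud selection: each candidate edge cloud has a
# communication cost (delay) and a computing cost (waiting time); the
# device picks the cloud minimizing their sum.

def select_edge_cloud(clouds):
    """clouds: {name: (comm_delay, compute_wait)}; returns cheapest name."""
    return min(clouds, key=lambda c: sum(clouds[c]))

clouds = {
    "edge-A": (12.0, 30.0),  # close by, but heavily loaded
    "edge-B": (25.0, 10.0),  # farther away, lightly loaded
    "edge-C": (20.0, 20.0),
}
best = select_edge_cloud(clouds)  # "edge-B", total cost 35.0
```

Note how the nearest cloud ("edge-A") is not chosen: its queueing cost dominates, which is the load-distribution effect the method exploits.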
Funding: Partly funded by the MOST Major Research and Development Project (Grant No. 2021YFB2900204), the Natural Science Foundation of China (Grant No. 62132004), the Sichuan Major R&D Project (Grant No. 22QYCX0168), and the Key Research and Development Program of Zhejiang Province (Grant No. 2022C01093).
Abstract: The Computing Power Network (CPN) is emerging as an important research interest in beyond-5G (B5G) and 6G. This paper constructs a CPN based on Federated Learning (FL), in which all Multi-access Edge Computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train learning models using local data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate in the FL process efficiently and difficult to ensure their energy efficiency. To address these issues, we first introduce an incentive mechanism using the Stackelberg game framework to motivate MEC servers. We then formulate a comprehensive algorithm to jointly optimize the communication resource allocations (wireless bandwidth and transmission power) and the computation resource allocations (computation capacity of MEC servers) while ensuring the local training accuracy of each MEC server. Numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
Funding: Supported by the National Key R&D Program of China (No. 2021YFB2900200).
Abstract: Federated Learning (FL) is a distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed: the server clusters clients by task processing delay to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order in which clients participate in training. Finally, based on the table lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the sorting results of GAS are written into the SRv6 packet header to control the order in which clients participate in model training. We conduct extensive experiments on the CIFAR-10 and MNIST datasets, and the results demonstrate that the proposed algorithm offers improved accuracy, diminished communication costs, and reduced network delays.
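The delay-based clustering step (CCS) can be illustrated with a toy grouping rule. The bucket width and grouping logic below are assumptions for demonstration, not the paper's exact strategy.

```python
# Toy delay-based client clustering: clients with similar task
# processing delays are grouped, so that within a cluster no client
# waits long for a straggler before aggregation.

def cluster_by_delay(delays, width=10.0):
    """delays: {client_id: processing delay (ms)}; returns {bucket: ids}."""
    clusters = {}
    for cid, delay in delays.items():
        clusters.setdefault(int(delay // width), []).append(cid)
    return clusters

delays = {"c1": 3.0, "c2": 8.0, "c3": 17.0, "c4": 12.0}
groups = cluster_by_delay(delays)
# {0: ["c1", "c2"], 1: ["c3", "c4"]}
```

Within each cluster, the GAS step of the paper would then further optimize the participation order before it is encoded into the SRv6 header.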
Abstract: A star-topology network for a solar radio dynamic spectrograph large-scale data acquisition, storage, and processing system (metric-wave band 230–300 MHz and decimetric band 700 MHz–1.4 GHz), together with an auto-control system for the 10-meter antenna, has been set up at the Yunnan Observatory. Resource sharing among the optical disk drive, laser printer, MODEM, and other devices is realized, and a way to process and store large-scale solar radio observations of high time and frequency resolution has been found. A homepage was made on the Internet, and a test of remote control of the telescope was successful.
Funding: Supported by the National Basic Research Program of China under Grant No. 2011CB302600, the National Natural Science Foundation of China under Grants No. 90818028 and No. 61003226, and the National Science Fund for Distinguished Young Scholars under Grant No. 60625203.
Abstract: The mutual-interference phenomenon among multiple applications delivered as services through a Cloud Services Delivery Network (CSDN) seriously degrades their QoS. In order to deploy multiple applications dependably and efficiently, we propose the Multiple Applications Co-Exist (MACE) method. MACE classifies applications into different types and deploys them with a degree of isolation. Meanwhile, static resource allocation, dynamic supplement, and resource reservation mechanisms are designed to minimize mutual interference and maximize resource utilization. After applying MACE to a real large-scale CSDN and evaluating it through six months of measurement, we find that the CSDN load is more balanced, bandwidth utilization increases by about 20%, the applications' potential statistical multiplexing ratio decreases from 12% to 5%, and the number of complaint events affecting the dependability of CSDN services caused by mutual interference among applications has dropped to zero. MACE thus offers a tradeoff and improvement for the dependability and efficiency goals of CSDN.
Abstract: Three recent AI breakthroughs in the arts and sciences serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with attention mechanisms to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced work such as shallow networks with infinite width. The review does not address only experts: readers are assumed familiar with computational mechanics but not with DL, whose concepts and applications are built up from the basics, aiming at bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
Funding: Supported by the Northern Border University Researchers Supporting Project number (NBU-FFR-2025-432-03), Northern Border University, Arar, Saudi Arabia.
Abstract: Accurate brain tumor classification in medical imaging requires real-time processing and efficient computation, making hardware acceleration essential. Field Programmable Gate Arrays (FPGAs) offer parallelism and reconfigurability, making them well suited for such tasks. In this study, we propose a hardware-accelerated Convolutional Neural Network (CNN) for brain cancer classification, implemented on the PYNQ-Z2 FPGA. Our approach optimizes the first Conv2D layer using different numerical representations: 8-bit fixed-point (INT8), 16-bit fixed-point (FP16), and 32-bit fixed-point (FP32), while the remaining layers run on an ARM Cortex-A9 processor. Experimental results demonstrate that FPGA acceleration significantly outperforms the CPU (Central Processing Unit) based approach. The results emphasize the critical importance of selecting the appropriate numerical representation for hardware acceleration in medical imaging. On the PYNQ-Z2 FPGA, INT8 achieves a 16.8% reduction in latency and 22.2% power savings compared with FP32, making it ideal for real-time and energy-constrained applications. FP16 offers a strong balance, delivering only a 0.1% drop in accuracy compared with FP32 (94.1% vs. 94.2%) while improving latency by 5% and reducing power consumption by 11.1%. Compared with prior works, the proposed FPGA-based CNN model achieves the highest classification accuracy (94.2%) with a throughput of up to 1.562 FPS, outperforming GPU-based and traditional CPU methods in both accuracy and hardware efficiency. These findings demonstrate the effectiveness of FPGA-based AI acceleration for real-time, power-efficient, and high-performance brain tumor classification, showcasing its practical potential in next-generation medical imaging systems.
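The precision/efficiency trade-off at the heart of this study can be sketched with a symmetric 8-bit quantization of layer weights. This is a hedged illustration of the general technique; the exact fixed-point formats and rounding used on the authors' FPGA are not reproduced here.

```python
# Symmetric 8-bit quantization: float weights become signed 8-bit
# integers plus a single scale factor, cutting storage to 1/4 of FP32
# at the cost of a small rounding error.

def quantize_int8(weights):
    """Map float weights to int8 values and one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03]
q, scale = quantize_int8(w)       # q = [50, -127, 3]
approx = dequantize(q, scale)     # close to w
```

On hardware, the int8 multiplies map onto narrow DSP/LUT resources, which is the source of the latency and power gains the abstract reports for INT8 over FP32.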
Funding: Supported in part by the Chongqing Postgraduate Research and Innovation Project (CYB22250), the National Natural Science Foundation of China (62271096, U20A20157), the Natural Science Foundation of Chongqing, China (CSTB2023NSCQ-LZX0134, CSTB2024NSCQ-LZX0124), the University Innovation Research Group of Chongqing (CXQT20017), and the Youth Innovation Group Support Program of the ICE Discipline of CQUPT (SCIE-QN-2022-04).
Abstract: The Computing Power Network (CPN) is a new paradigm that integrates communication, computing, and storage resources to provide services for tasks. However, tasks composed of non-independent subtasks have preferences for the resources required at each stage, which increases the difficulty of heterogeneous resource allocation and reduces the latency performance of CPN services. Motivated by this, this paper jointly optimizes the full service cycle of tasks, including transmission, task partitioning, and offloading. First, the transmission bandwidth is dynamically configured based on the delay sensitivity of tasks. Second, using real-time information from the edge resource clusters and state resource clusters in the network, the optimal partitioning of a computation task is derived. Third, personalized resource allocation schemes are customized for computation and storage tasks respectively. Finally, the impact of resource parameter configuration on the latency violation probability of the CPN is revealed. Compared with the benchmark schemes, our proposed scheme reduces the network latency violation probability by up to 1.17× in the same network setting.
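The first step, delay-sensitivity-based bandwidth configuration, can be sketched with a simple proportional rule: tasks with tighter delay budgets receive a larger share. The inverse-proportional rule is an assumed simplification for illustration, not the paper's exact configuration scheme.

```python
# Delay-sensitivity-based bandwidth split: each task's share is
# proportional to the inverse of its delay budget, so more urgent
# tasks get more bandwidth.

def allocate_bandwidth(total, deadlines):
    """deadlines: {task: delay budget (ms)}; returns {task: bandwidth}."""
    urgency = {t: 1.0 / d for t, d in deadlines.items()}
    norm = sum(urgency.values())
    return {t: total * u / norm for t, u in urgency.items()}

# A 10 ms-budget task is 4x as urgent as a 40 ms-budget task,
# so it receives 4x the bandwidth: 80 vs. 20 out of 100 units.
bw = allocate_bandwidth(100.0, {"urgent": 10.0, "relaxed": 40.0})
```

The paper's scheme would additionally couple this configuration with the partitioning and offloading decisions of the later steps.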
Abstract: Sparse matrix operations are widely used in computational science and engineering applications such as quantum chemistry and finite element analysis, as well as in modern machine learning scenarios such as social networks and compressed deep neural networks. In the famous article 'A View of the Parallel Computing Landscape' from the University of California, Berkeley, Asanovic et al.