Journal Articles
39 articles found
1. On an Ultra-Dense LEO-Satellite-Based Computing Network Constellation Design
Authors: Yijing Sun, Boya Di, Ruoqi Deng, Lingyang Song. Engineering, 2025, No. 11, pp. 103-114.
Abstract: Commercial ultra-dense low-Earth-orbit (LEO) satellite constellations have recently been deployed to provide seamless global Internet services. To improve satellite network transmission efficiency and provide robust wide-coverage computing services for future sixth-generation (6G) users, growing attention has been focused on LEO-satellite-based computing networks, to which ground users can offload computation tasks. However, how to design a LEO satellite constellation for computing networks, while considering discrepancies in the computing requirements of different regions, remains an open question. In this paper, we investigate an ultra-dense LEO-satellite-based computing network in which ground user terminals (UTs) offload part of their computing tasks to satellites. We formulate the ultra-dense constellation design problem as a multi-objective optimization problem (MOOP) to maximize the average coverage rate, transmission capacity, and computational capability, while minimizing the number of satellites. To depict the connectivity characteristics of satellite-based computing networks, we propose a terrestrial-satellite connectivity model to determine the coverage rate in different regions. We design a priority-adaptive algorithm to obtain the optimal inclined-orbit constellation by solving this MOOP. Simulation results verify the accuracy of our theoretical connectivity model and show the optimal constellation deployment given quality-of-service (QoS) requirements. For the same number of deployed LEO satellites, the proposed constellation outperforms its existing counterparts; in particular, it achieves 25%-45% performance improvements in the average coverage rate.
Keywords: low-Earth-orbit satellite constellation; satellite-based computing network; multi-objective optimization
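The paper's priority-adaptive MOOP algorithm is not spelled out in the abstract, so the sketch below illustrates only the generic scalarization step such a design loop relies on: scoring candidate constellations on normalized coverage, capacity, computing capability, and satellite count under priority weights. All candidate names and numbers are invented for illustration.

```python
def scalarize(candidate, weights):
    """Weighted-sum score of one candidate constellation.

    Benefit objectives (coverage, capacity, compute) are added;
    the cost objective (normalized satellite count) is subtracted.
    All attribute values are assumed pre-normalized to [0, 1].
    """
    w_cov, w_cap, w_cmp, w_sat = weights
    return (w_cov * candidate["coverage"]
            + w_cap * candidate["capacity"]
            + w_cmp * candidate["compute"]
            - w_sat * candidate["satellites"])

def best_constellation(candidates, weights):
    """Pick the candidate with the highest scalarized score."""
    return max(candidates, key=lambda c: scalarize(c, weights))

# Invented candidates; priorities favour coverage and lightly
# penalize constellation size.
candidates = [
    {"name": "walker-60", "coverage": 0.82, "capacity": 0.70,
     "compute": 0.60, "satellites": 0.60},
    {"name": "walker-90", "coverage": 0.95, "capacity": 0.85,
     "compute": 0.80, "satellites": 0.90},
    {"name": "inclined-72", "coverage": 0.93, "capacity": 0.80,
     "compute": 0.78, "satellites": 0.72},
]
best = best_constellation(candidates, (0.4, 0.25, 0.25, 0.3))
print(best["name"])
```

A real MOOP solver would trace the Pareto front rather than fix one weight vector; varying the weights here is a crude way to explore the same trade-off.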
2. Computing Power Network: The Architecture of Convergence of Computing and Networking towards 6G Requirement (Cited by 55)
Authors: Xiongyan Tang, Chang Cao, Youxiang Wang, Shuai Zhang, Ying Liu, Mingxuan Li, Tao He. China Communications (SCIE, CSCD), 2021, No. 2, pp. 175-185.
Abstract: In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The demonstration shows that the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and can adapt to the integration needs of computing power and network in various scenarios, such as user-oriented, government/enterprise-oriented, and open computing power scenarios.
Keywords: 6G; edge computing; cloud computing; convergence of cloud and network; computing power network
3. Computing Power Network: A Survey (Cited by 23)
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. China Communications (SCIE, CSCD), 2024, No. 9, pp. 109-145.
Abstract: With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm, the Computing Power Network (CPN), has been proposed. A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues concerning computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. Applications and use cases of computing power networks are discussed. Then, key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
4. Joint Resource Allocation Using Evolutionary Algorithms in Heterogeneous Mobile Cloud Computing Networks (Cited by 10)
Authors: Weiwei Xia, Lianfeng Shen. China Communications (SCIE, CSCD), 2018, No. 8, pp. 189-204.
Abstract: The problem of joint radio and cloud resource allocation is studied for heterogeneous mobile cloud computing networks. The objective of the proposed joint resource allocation schemes is to maximize the total utility of users while satisfying the required quality of service (QoS), such as the end-to-end response latency experienced by each user. We formulate the problem of joint resource allocation as a combinatorial optimization problem. Three evolutionary approaches are considered to solve it: the genetic algorithm (GA), ant colony optimization with genetic algorithm (ACO-GA), and the quantum genetic algorithm (QGA). To decrease the time complexity, we propose a mapping process between the resource allocation matrix and the chromosome for GA, ACO-GA, and QGA; search the available radio and cloud resource pairs based on the resource availability matrices for ACO-GA; and encode the difference between the allocated resources and the minimum resource requirement for QGA. Extensive simulation results show that our proposed methods greatly outperform existing algorithms in terms of running time, accuracy of the final results, total utility, resource utilization, and end-to-end response latency guarantees.
Keywords: heterogeneous mobile cloud computing networks; resource allocation; genetic algorithm; ant colony optimization; quantum genetic algorithm
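The abstract's key trick is encoding the resource allocation matrix directly as a GA chromosome: each gene maps one user to an index in a list of (radio, cloud) resource pairs. The sketch below illustrates that encoding in a minimal GA with elitism; the conflict-based utility model, population size, and mutation rate are invented stand-ins, not the paper's settings.

```python
import random

def fitness(chrom, utility):
    """utility[u][p]: utility of user u on resource pair p. A pair claimed
    by an earlier user yields zero for later users (simple conflict model)."""
    total, used = 0.0, set()
    for u, p in enumerate(chrom):
        if p not in used:
            total += utility[u][p]
            used.add(p)
    return total

def ga_allocate(utility, pop_size=20, gens=50, seed=0):
    """Evolve chromosomes = flattened allocation matrices (user -> pair)."""
    rng = random.Random(seed)
    n_users, n_pairs = len(utility), len(utility[0])
    pop = [[rng.randrange(n_pairs) for _ in range(n_users)]
           for _ in range(pop_size)]
    best = max(pop, key=lambda c: fitness(c, utility))
    for _ in range(gens):
        nxt = [best[:]]                       # elitism keeps the best so far
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)
            cut = rng.randrange(1, n_users) if n_users > 1 else 0
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.2:            # point mutation
                child[rng.randrange(n_users)] = rng.randrange(n_pairs)
            nxt.append(child)
        pop = nxt
        best = max(pop, key=lambda c: fitness(c, utility))
    return best
```

Because the elite chromosome is carried into every generation, the best fitness found is non-decreasing over generations.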
5. Federated learning based QoS-aware caching decisions in fog-enabled internet of things networks (Cited by 2)
Authors: Xiaoge Huang, Zhi Chen, Qianbin Chen, Jie Zhang. Digital Communications and Networks (SCIE, CSCD), 2023, No. 2, pp. 580-589.
Abstract: Quality of Service (QoS) in 6G application scenarios is an important issue given massive data transmission. Edge caching based on fog computing networks is considered a potential solution to effectively reduce the content fetch delay for latency-sensitive services of Internet of Things (IoT) devices. In time-varying scenarios, machine learning techniques can further reduce the content fetch delay by optimizing caching decisions. In this paper, to minimize the content fetch delay and ensure network QoS, a Device-to-Device (D2D)-assisted fog computing network architecture is introduced, which supports federated learning and QoS-aware caching decisions based on time-varying user preferences. To relieve network congestion and the risk of user privacy leakage, federated learning is enabled in the D2D-assisted fog computing network. Specifically, it has been observed that federated learning yields suboptimal results under Non-Independent and Identically Distributed (Non-IID) local user data. To address this issue, a distributed cluster-based user preference estimation algorithm is proposed to optimize content caching placement and improve the cache hit rate, content fetch delay, and convergence rate; clustering effectively mitigates the impact of the Non-IID data. Simulation results show that the proposed algorithm provides a considerable performance improvement with better learning results compared with existing algorithms.
Keywords: fog computing network; IoT; D2D communication; deep neural network; federated learning
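The paper's distributed preference estimator is not reproduced here; the sketch below only illustrates the idea it rests on: grouping users with similar (Non-IID) content preferences and caching each cluster's most popular items. The greedy cosine-threshold clustering and all numbers are invented simplifications.

```python
def cosine(a, b):
    """Cosine similarity of two preference vectors."""
    num = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return num / (na * nb) if na and nb else 0.0

def cluster_users(prefs, threshold=0.8):
    """Greedy clustering: a user joins the first cluster whose founder is
    cosine-similar enough, otherwise opens a new cluster."""
    clusters = []
    for u, p in enumerate(prefs):
        for c in clusters:
            if cosine(prefs[c[0]], p) >= threshold:
                c.append(u)
                break
        else:
            clusters.append([u])
    return clusters

def cache_placement(prefs, clusters, cache_size):
    """Per-cluster cache: the top-`cache_size` contents by summed preference."""
    placement = []
    for c in clusters:
        totals = [sum(prefs[u][i] for u in c) for i in range(len(prefs[0]))]
        top = sorted(range(len(totals)), key=lambda i: -totals[i])[:cache_size]
        placement.append(set(top))
    return placement
```

Clustering first means each cache serves users whose preferences actually agree, which is exactly what raises the hit rate under Non-IID demand.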
6. Numerical simulation of neuronal spike patterns in a retinal network model (Cited by 1)
Authors: Lei Wang, Shenquan Liu, Shanxing Ou. Neural Regeneration Research (SCIE, CAS, CSCD), 2011, No. 16, pp. 1254-1260.
Abstract: This study utilized a neuronal compartment model and the NEURON software to study the effects of external light stimulation on retinal photoreceptors and the spike patterns of neurons in a retinal network. Following light stimulation of different shapes and sizes, changes in the spike features of ganglion cells indicated that different shapes of light stimulation elicited different retinal responses. By manipulating the shape of the light stimulation, we investigated the effects of the large number of electrical synapses existing between retinal neurons. Model simulation and analysis suggested that interplexiform cells play an important role in visual signal processing in the retina, and the findings indicated that our constructed retinal network model was reliable and feasible. In addition, the simulation results demonstrated that ganglion cells exhibited a variety of spike patterns under different light stimulation sizes and shapes, reflecting the functions of the retina in signal transmission and processing.
Keywords: computational network model; retina; light stimulation; ganglion cell; spike pattern
7. A novel routing method for dynamic control in distributed computing power networks (Cited by 2)
Authors: Lujie Guo, Fengxian Guo, Mugen Peng. Digital Communications and Networks (CSCD), 2024, No. 6, pp. 1644-1652.
Abstract: Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy that determines both paths and computing nodes in a distributed computing power network. Unlike traditional routing protocols, the proposed policy takes additional computing-related metrics into consideration. Based on multi-attribute decision theory and fuzzy logic theory, we propose two route selection algorithms: the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (lPMADM) algorithm. Simulation results show that the proposed policy achieves better performance in average processing delay, user satisfaction, and load balancing compared with existing works.
Keywords: computing power networks; routing; fuzzy logic; multi-attribute decision making
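FLBR and lPMADM are only named in the abstract, so the sketch below shows just the underlying multi-attribute step they share: scoring each candidate (path, computing node) option on normalized transmission delay, node load, and computing capacity via simple additive weighting. The weights and attribute values are invented.

```python
def normalize(values, benefit):
    """Min-max normalize; invert cost attributes so higher is always better."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if benefit else [1.0 - s for s in scaled]

def select_route(options, weights):
    """options: list of (delay_ms, load, compute_gflops) per candidate route.
    Returns the index of the best-scoring option."""
    delay = normalize([o[0] for o in options], benefit=False)
    load = normalize([o[1] for o in options], benefit=False)
    comp = normalize([o[2] for o in options], benefit=True)
    scores = [weights[0] * d + weights[1] * l + weights[2] * c
              for d, l, c in zip(delay, load, comp)]
    return max(range(len(options)), key=lambda i: scores[i])
```

A fuzzy-logic variant would replace the crisp min-max normalization with membership functions ("low delay", "high load") before aggregating, but the decision skeleton stays the same.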
8. A Novel Stateful PCE-Cloud Based Control Architecture of Optical Networks for Cloud Services (Cited by 1)
Authors: Qin Panke, Chen Xue, Wang Lei, Wang Liqian. China Communications (SCIE, CSCD), 2015, No. 10, pp. 117-127.
Abstract: The next-generation optical network is a service-oriented network, which can be delivered by utilizing a generalized multiprotocol label switching (GMPLS)-based control plane to realize intelligent features such as rapid provisioning, automated protection and restoration (P&R), efficient resource allocation, and support for different quality of service (QoS) requirements. In this paper, we propose a novel stateful PCE-cloud (SPC)-based architecture of GMPLS optical networks for cloud services. Cloud computing technologies (e.g., virtualization and parallel computing) are applied to the construction of the SPC to improve reliability and maximize resource utilization. The functions of the SPC and the GMPLS-based control plane are expanded according to the features of cloud services with different QoS requirements. The architecture and a detailed description of the components of the SPC are provided. Different potential cooperation relationships between the public stateful PCE cloud (PSPC) and the region stateful PCE cloud (RSPC) are investigated. Moreover, we present a policy-enabled, constraint-based routing scheme based on the cooperation of the PSPC and RSPC. Simulation results verifying routing performance and control plane reliability are analyzed.
Keywords: optical networks; control plane; GMPLS; stateful PCE; cloud computing; QoS
9. Efficient Broadcast Retransmission Based on Network Coding for InterPlaNetary Internet (Cited by 1)
Authors: 苟亮, 边东明, 张更新, 徐志平, 申振. China Communications (SCIE, CSCD), 2013, No. 8, pp. 111-124.
Abstract: In traditional wireless broadcast networks, a corrupted packet must be retransmitted even if it has been lost by only one receiver. Obviously, this is not bandwidth-efficient for receivers that already hold the retransmitted packet. Therefore, it is important to develop a method that realizes efficient broadcast transmission. Network coding is a promising technique in this scenario. However, none of the proposed schemes so far achieves both high transmission efficiency and low computational complexity simultaneously. To address this problem, a novel Efficient Opportunistic Network Coding Retransmission (EONCR) scheme is proposed in this paper. This scheme employs a new packet scheduling algorithm that uses a Packet Distribution Matrix (PDM) directly to select the coded packets. The analysis and simulation results indicate that under some simulation conditions the transmission efficiency of EONCR exceeds that of previously proposed schemes by over 0.1, while the computational overhead is reduced substantially. Hence, it has great application prospects in wireless broadcast networks, especially energy- and bandwidth-limited systems such as satellite broadcast systems and Planetary Networks (PNs).
Keywords: wireless broadcast retransmission; opportunistic network coding; packet scheduling; transmission efficiency; computational complexity; PN
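EONCR's exact scheduling rule is only summarized above, so this sketch shows just the core opportunistic-coding idea it builds on: from a packet distribution matrix of per-receiver losses, greedily group packets no two of which were lost by the same receiver, and XOR each group into one retransmission. Every receiver then misses at most one packet per coded transmission and can decode it from the packets it already holds. The greedy grouping and the example PDM are illustrative, not the paper's algorithm.

```python
def build_coded_groups(pdm):
    """pdm[r][p] == 1 iff receiver r lost packet p.
    Returns groups of packet indices; the packets in one group are
    XOR-ed together into a single retransmitted coded packet."""
    n_pkts = len(pdm[0])
    remaining = [p for p in range(n_pkts) if any(row[p] for row in pdm)]
    groups = []
    while remaining:
        group, losers = [], set()
        for p in list(remaining):
            p_losers = {r for r, row in enumerate(pdm) if row[p]}
            # Only add a packet if its losers are disjoint from the group's:
            # then no receiver is missing two packets of the same XOR.
            if p_losers.isdisjoint(losers):
                group.append(p)
                losers |= p_losers
                remaining.remove(p)
        groups.append(group)
    return groups
```

With three receivers losing packets {0}, {1}, and {1, 2} respectively, two coded transmissions suffice instead of three plain retransmissions.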
10. Joint Optimization of Energy Consumption and Network Latency in Blockchain-Enabled Fog Computing Networks
Authors: Huang Xiaoge, Yin Hongbo, Cao Bin, Wang Yongsheng, Chen Qianbin, Zhang Jie. China Communications (SCIE, CSCD), 2024, No. 4, pp. 104-119.
Abstract: Fog computing is considered a solution to accommodate the booming requirements of a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of fog nodes (FN), which is used to select blockchain nodes (BN) from the FNs to participate in the consensus process. With the Rivest-Shamir-Adleman (RSA) encryption algorithm applied in the blockchain system, FNs can verify a node's identity through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithm and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithm efficiently reduces the network overhead and obtains a considerable performance improvement compared to related algorithms in the previous literature.
Keywords: blockchain; energy consumption; fog computing network; Internet of Things; latency
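The reputation model itself is not specified in the abstract, so the sketch below uses a generic exponentially weighted update as a stand-in, then selects the most credible fog nodes as blockchain nodes; the update rule, `alpha`, and the example reputations are assumptions, not the paper's R-DBFT design.

```python
def update_reputation(rep, behaved_well, alpha=0.3):
    """EWMA update: move credibility toward 1 on honest behaviour in a
    consensus round, toward 0 on detected misbehaviour."""
    target = 1.0 if behaved_well else 0.0
    return (1 - alpha) * rep + alpha * target

def select_blockchain_nodes(reputations, k):
    """Return the indices of the k most credible fog nodes, which form the
    (smaller) consensus committee."""
    return sorted(range(len(reputations)),
                  key=lambda i: -reputations[i])[:k]
```

Shrinking the committee to the top-k credible nodes is what cuts the consensus message complexity while keeping misbehaving nodes out.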
11. Efficient Digital Twin Placement for Blockchain-Empowered Wireless Computing Power Network
Authors: Wei Wu, Liang Yu, Liping Yang, Yadong Zhang, Peng Wang. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 587-603.
Abstract: As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for achieving efficient and secure resource management, owing to issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. Digital Twins (DT), by contrast, can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of the WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting smooth operation of WCPN services. In this paper, we propose a DT for blockchain-empowered WCPN architecture that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated-annealing-based near-optimal placement algorithm (ISAPA) to achieve the minimum average DT synchronization latency under a DT error constraint. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
Keywords: wireless computing power network; blockchain; digital twin placement; minimum synchronization latency
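A minimal sketch of the enumeration step behind an EOPA-style placement: try every assignment of digital twins to edge servers and keep the one with the lowest average synchronization latency that respects server capacity. The latency matrix and capacities are invented, and the paper's DT error constraint is not modelled here; enumeration is exact but only tractable for small instances, which is why the paper pairs it with simulated annealing.

```python
import itertools

def eopa(latency, capacity):
    """latency[d][s]: sync latency of twin d if hosted on server s.
    capacity[s]: max number of twins server s can host.
    Returns (best_assignment, best_average_latency)."""
    n_dt, n_srv = len(latency), len(latency[0])
    best, best_avg = None, float("inf")
    for assign in itertools.product(range(n_srv), repeat=n_dt):
        # Skip assignments that overload any server.
        if any(assign.count(s) > capacity[s] for s in range(n_srv)):
            continue
        avg = sum(latency[d][assign[d]] for d in range(n_dt)) / n_dt
        if avg < best_avg:
            best, best_avg = assign, avg
    return best, best_avg
```

The search space is `n_srv ** n_dt` assignments, so an annealing-based near-optimal method (like the paper's ISAPA) takes over once either count grows.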
12. A game incentive mechanism for energy efficient federated learning in computing power networks
Authors: Xiao Lin, Ruolin Wu, Haibo Mei, Kun Yang. Digital Communications and Networks (CSCD), 2024, No. 6, pp. 1741-1747.
Abstract: The Computing Power Network (CPN) is emerging as one of the important research interests in beyond-5G (B5G) and 6G. This paper constructs a CPN based on Federated Learning (FL), in which all Multi-access Edge Computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train learning models using local data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate in the FL process efficiently and to ensure their energy efficiency. To address these issues, we first introduce an incentive mechanism based on the Stackelberg game framework to motivate MEC servers. We then formulate a comprehensive algorithm to jointly optimize the communication resource (wireless bandwidth and transmission power) allocation and the computation resource (computation capacity of MEC servers) allocation while ensuring the local training accuracy of each MEC server. Numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
Keywords: computing power network; federated learning; energy efficiency; Stackelberg game; resource allocation
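The abstract does not give the utility functions, so this sketch uses a textbook linear-reward / quadratic-cost Stackelberg model as a stand-in. Each follower (an MEC server) picks a training effort `x_i` maximizing `r*x_i - c_i*x_i**2`, which has the closed-form best response `x_i = r / (2*c_i)`; the leader (the computing power center) then grid-searches the reward rate `r`, anticipating those responses. All functional forms and the unit value of effort are assumptions.

```python
def best_response(r, c):
    """Follower's effort maximizing r*x - c*x**2 (from d/dx = 0)."""
    return r / (2 * c)

def leader_utility(r, costs, value=1.0):
    """Leader earns `value` per unit of total training effort and pays
    the reward rate r per unit of effort."""
    total_effort = sum(best_response(r, c) for c in costs)
    return (value - r) * total_effort

def optimal_reward(costs, grid=None):
    """Backward induction: leader picks r anticipating best responses."""
    grid = grid or [i / 100 for i in range(1, 100)]
    return max(grid, key=lambda r: leader_utility(r, costs))
```

In this linear-quadratic case the leader's utility is `r * (value - r)` times a constant, so the grid search lands on `r = value / 2` regardless of the cost profile.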
13. FedACT: An adaptive chained training approach for federated learning in computing power networks
Authors: Min Wei, Qianying Zhao, Bo Lei, Yizhuo Cai, Yushun Zhang, Xing Zhang, Wenbo Wang. Digital Communications and Networks (CSCD), 2024, No. 6, pp. 1576-1589.
Abstract: Federated Learning (FL) is a novel distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed: the server clusters clients by task processing delay to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order in which clients participate in training. Finally, based on the table lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the sorting results of GAS are written into the SRv6 packet header to control the order in which clients participate in model training. We conduct extensive experiments on the CIFAR-10 and MNIST datasets, and the results demonstrate that the proposed algorithm offers improved accuracy, reduced communication costs, and reduced network delays.
Keywords: computing power network (CPN); federated learning (FL); segment routing over IPv6 (SRv6); communication overheads; model accuracy
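CCS is described only at a high level, so the sketch below shows one plausible reading of it: sort clients by measured task-processing delay and cut them into k contiguous clusters, so that clients chained together in a training round finish at similar times and the central server's waiting delay shrinks. The delays and the contiguous-cut rule are illustrative assumptions.

```python
def cluster_by_delay(delays, k):
    """Group client ids into k clusters of similar processing delay.

    Clients are sorted by delay and split into contiguous chunks, so each
    chunk's slowest member (its straggler) is close to its fastest."""
    order = sorted(range(len(delays)), key=lambda i: delays[i])
    size = -(-len(order) // k)           # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]
```

In the full FedACT pipeline the order produced here would then be refined by the GA-based sorting step and encoded into the SRv6 segment list.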
14. Going beyond Computation and Its Limits: Injecting Cognition into Computing
Authors: Rao Mikkilineni. Applied Mathematics, 2012, No. 11, pp. 1826-1835.
Abstract: Cognition is the ability to process information, apply knowledge, and change the circumstance. Cognition is associated with intent and its accomplishment through various processes that monitor and control a system and its environment, and with a sense of "self" (the observer) and the systems with which it interacts (the environment, or the "observed"). Cognition extensively uses time and history in executing and regulating the tasks that constitute a cognitive process. Whether cognition is computation in the strict sense of adhering to the Turing-Church thesis or needs additional constructs is a very relevant question for the design of self-managing (autonomous) distributed computing systems. In this paper we argue that cognition requires more than the mere book-keeping provided by Turing machines, and that certain aspects of cognition such as self-identity, self-description, self-monitoring, and self-management can be implemented using parallel extensions to current serial von Neumann stored-program-control (SPC) Turing machine implementations. We argue that the new DIME (Distributed Intelligent Computing Element) computing model, recently introduced as the building block of the DIME network architecture, is an analogue of Turing's O-machine and extends it to implement a recursive managed distributed computing network, which can be viewed as an interconnected group of such specialized Oracle machines, referred to as a DIME network. The DIME network architecture provides the architectural resiliency often associated with cellular organisms, through auto-failover, auto-scaling, live migration, and end-to-end transaction security assurance in a distributed system. We argue that the self-identity and self-management processes of a DIME network inject the elements of cognition into Turing-machine-based computing, as demonstrated by two prototypes that eliminate the complexity introduced by hypervisors, virtual machines, and other layers of ad hoc management software in today's distributed computing environments.
Keywords: cognition; cognitive process; computationalism; Turing machine; Turing O-machine; DIME; DIME network architecture
15. Cellular Computational Networks Based Hierarchical Data-driven Dynamic State Estimation Method Considering Uncertainties
Authors: Lili Wu, Yi Wang, Yaoqiang Wang, Jikai Si. Protection and Control of Modern Power Systems, 2025, No. 2, pp. 150-161.
Abstract: Accurate generator information is crucial for the efficient control and operation of a power system. This study proposes a hierarchical data-driven approach for dynamic state estimation (DSE) of generators using a cellular computational network (CCN) structure. The proposed method first divides the dynamic state estimation problem into multiple layers through a hierarchical architecture. In the prediction layer, CCNs are employed to reduce the system scale by considering only relevant generators. In the correction layer, a novel adaptive filter is utilized to increase data abundance. Simulation results demonstrate that the proposed hierarchical data-driven method can accurately estimate states using PMU data alone while maintaining high computational efficiency. Additionally, it offers easy scalability and strong robustness against uncertainties. The proposed method has potential applications in online dynamic state estimation and real-time security monitoring.
Keywords: cellular computational networks; data-driven; dynamic state estimation; hierarchical; model uncertainty
16. Neural circuit and its functional roles in cerebellar cortex (Cited by 1)
Authors: 汪雷, 刘深泉. Neuroscience Bulletin (SCIE, CAS, CSCD), 2011, No. 3, pp. 173-184.
Abstract: Objective: To investigate the spike activities of cerebellar cortical cells in a computational network model constructed based on the anatomical structure of the cerebellar cortex. Methods and Results: A multicompartment neuron model and the NEURON software were used to study external influences on cerebellar cortical cells, and various potential spike patterns in these cells were obtained. By analyzing the impacts of different incoming stimuli on the potential spikes of Purkinje cells, the temporal focusing exerted on Purkinje cells by the granule cell-Golgi cell feedback inhibitory loop and the spatial focusing exerted by the parallel fiber-basket/stellate cell local inhibitory loop were discussed. Finally, the motor learning process of the rabbit eye-blink conditioned reflex was demonstrated in this model. The simulation results showed that, when afferent input from the climbing fiber existed, rabbit adaptation to eye blinking gradually became stable under the Spike-Timing-Dependent Plasticity (STDP) learning rule. Conclusion: The constructed cerebellar cortex network is a reliable and feasible model. The simulation results confirmed the stability of the cerebellar cortex output signal after STDP learning, and the network can execute the functions of spatial and temporal focusing.
Keywords: computational network model; cerebellar cortex; temporal focusing; spatial focusing; Spike-Timing-Dependent Plasticity; eye-blink conditioned reflex
17. Exploring the Complexity of Virtual Networks under Computationalism (Cited by 1)
Authors: 景卉, 周维刚. 《系统科学学报》 (Journal of Systems Science), 2008, No. 1, pp. 31-34, 40.
Abstract: This paper briefly reviews the three stages that computationalism has passed through as a new ontological philosophy, and makes a preliminary inquiry into the complexity characteristics exhibited by virtual network space viewed through computationalism. It points out that virtual network space possesses complexity features such as self-evolution, self-organization, emergence, and self-similarity, and attempts to show the profound influence that the computationalist current has exerted on the development of contemporary philosophy, science, and technology.
Keywords: computationalism; virtual network space; complexity; ontology
18. Computational Approaches for Prioritizing Candidate Disease Genes Based on PPI Networks (Cited by 5)
Authors: Wei Lan, Jianxin Wang, Min Li, Wei Peng, Fangxiang Wu. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2015, No. 5, pp. 500-512.
Abstract: With the continuing development and improvement of genome-wide techniques, a great number of candidate genes have been discovered. How to identify the most likely disease genes among a large number of candidates has become a fundamental challenge for human health. A common view is that genes related to a specific or similar disease tend to reside in the same neighbourhood of biomolecular networks. Based on such observations, many methods have recently been developed to tackle this challenge. In this review, we first introduce the concept of disease genes, their properties, and the available data for identifying them. Then we review recent computational approaches for prioritizing candidate disease genes based on Protein-Protein Interaction (PPI) networks and investigate their advantages and disadvantages. Furthermore, existing software packages and network resources are summarized. Finally, we discuss key issues in prioritizing candidate disease genes and point out some future research directions.
Keywords: candidate disease-gene prioritization; protein-protein interaction network; human disease; computational tools
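Many of the neighbourhood-based prioritization methods this review covers reduce to a random walk with restart (RWR) from the known disease genes; the sketch below is a minimal dense implementation on a toy network, not any single surveyed method. The toy graph, restart probability, and iteration count are all illustrative.

```python
def rwr(adj, seeds, restart=0.3, iters=200):
    """Random walk with restart on a symmetric 0/1 adjacency matrix.
    seeds: set of known disease genes. Returns per-gene visiting
    probabilities; higher means closer, in network terms, to the seeds."""
    n = len(adj)
    deg = [sum(row) or 1 for row in adj]
    p0 = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    p = p0[:]
    for _ in range(iters):
        # One walk step: probability mass flows from j to its neighbours,
        # with a `restart` fraction teleporting back to the seed genes.
        p = [(1 - restart) * sum(adj[j][i] * p[j] / deg[j] for j in range(n))
             + restart * p0[i] for i in range(n)]
    return p

def prioritize(adj, seeds):
    """Rank non-seed candidate genes by their RWR score, best first."""
    scores = rwr(adj, seeds)
    return sorted((g for g in range(len(adj)) if g not in seeds),
                  key=lambda g: -scores[g])
```

On a path network 0-1-2-3 seeded at gene 0, the ranking falls off with network distance, which is exactly the "guilt by proximity" assumption stated in the abstract.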
19. Process of Petri Nets Extension
Authors: Zhou Guofu, He Yanxiang, Du Zhuomin. Wuhan University Journal of Natural Sciences (EI, CAS), 2006, No. 2, pp. 351-354.
Abstract: To describe the dynamic semantics of network computing, the concept of process is presented, based on a semantic model with variables, resources, and relations. Accordingly, the formal definition of process and the mapping rules from the specification of the Petri nets extension to processes are discussed in detail. Based on the collective concepts of process, the specification of dynamic semantics is also constructed as a net system. Finally, to illustrate process intuitively, an example is specified completely.
Keywords: network computing; computing model; process; Petri nets
20. A Differential Evolution Algorithm Alternately Assisted by Global and Local Surrogate Models (Cited by 6)
Authors: 于成龙, 付国霞, 孙超利, 张国晨. 《计算机工程》 (Computer Engineering) (CAS, CSCD, PKU Core), 2022, No. 3, pp. 115-123.
Abstract: To solve high-dimensional, computationally expensive optimization problems arising in complex engineering applications, a differential evolution algorithm alternately assisted by global and local surrogate models is proposed. Historical samples are used to train global and local surrogate models; by alternately searching the two surrogates, the model optimum is obtained and then evaluated with the true objective function, balancing exploration and exploitation so as to reduce the number of true objective evaluations, while the targeted selection of individuals for true evaluation helps the algorithm quickly find good solutions. Experimental results on 15 low-dimensional and 14 high-dimensional test problems show that, under a limited computational budget, the proposed algorithm outperforms surrogate-assisted social learning particle swarm optimization with an optimal restart strategy and active-learning-based surrogate-assisted particle swarm optimization on 12 of the low-dimensional problems, and finds better solutions than Gaussian-process-assisted evolutionary algorithms, surrogate-assisted hierarchical particle swarm optimization, and surrogate-assisted multi-swarm optimization for high-dimensional expensive problems on 7 of the high-dimensional problems.
Keywords: global surrogate model; local surrogate model; differential evolution; computationally expensive optimization; radial basis function network
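The paper uses RBF-network surrogates; to stay dependency-free, the sketch below substitutes inverse-distance-weighted (IDW) interpolation over an archive of truly evaluated samples inside a plain DE/rand/1 loop, so only the surrogate's most promising trial per generation costs a true evaluation. The surrogate choice, DE parameters, and screening rule are all simplifying assumptions, not the paper's algorithm.

```python
import random

def idw(archive, x, eps=1e-9):
    """Predict f(x) from an archive of (point, value) pairs by
    inverse-distance weighting; exact on archived points."""
    num = den = 0.0
    for pt, val in archive:
        d = sum((a - b) ** 2 for a, b in zip(pt, x)) ** 0.5
        if d < eps:
            return val
        num += val / d
        den += 1.0 / d
    return num / den

def surrogate_de(f, dim, bounds, gens=30, pop=12, seed=1):
    """DE/rand/1 where the surrogate screens trials; the expensive f is
    called once per generation, on the surrogate's best trial."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    archive = [(x, f(x)) for x in X]              # true evaluations
    for _ in range(gens):
        trials = []
        for _ in range(pop):
            a, b, c = rng.sample(range(pop), 3)
            trials.append([min(hi, max(lo, X[a][j] + 0.5 * (X[b][j] - X[c][j])))
                           for j in range(dim)])
        best_trial = min(trials, key=lambda t: idw(archive, t))
        archive.append((best_trial, f(best_trial)))
        worst = max(range(pop), key=lambda i: idw(archive, X[i]))
        X[worst] = best_trial                      # replace predicted worst
    return min(archive, key=lambda e: e[1])
```

With a budget of `pop + gens` true evaluations, the surrogate absorbs the remaining `pop * gens` candidate screenings; the alternation between global and local models in the paper refines exactly this screening step.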