Journal Literature
14,511 articles found
1. A Comprehensive Study of Resource Provisioning and Optimization in Edge Computing
Authors: Sreebha Bhaskaran, Supriya Muthuraman. Computers, Materials & Continua, 2025, No. 6, pp. 5037-5070 (34 pages)
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes, and proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and offers essential guidance for SDN-enabled computing environments as new management opportunities arise.
Keywords: cloud computing; edge computing; fog computing; resource provisioning; resource allocation; computation offloading; optimization techniques; software defined network
2. Comparative study of IoT- and AI-based computing disease detection approaches
Authors: Wasiur Rhmann, Jalaluddin Khan, Ghufran Ahmad Khan, Zubair Ashraf, Babita Pandey, Mohammad Ahmar Khan, Ashraf Ali, Amaan Ishrat, Abdulrahman Abdullah Alghamdi, Bilal Ahamad, Mohammad Khaja Shaik. Data Science and Management, 2025, No. 1, pp. 94-106 (13 pages)
The emergence of different computing methods such as cloud-, fog-, and edge-based Internet of Things (IoT) systems has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they show better results than shallow learning on large volumes of data. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluates machine learning and deep learning algorithms, together with their hybrid and optimized variants, for IoT-based disease detection, drawing on the most recent papers on IoT-based disease detection systems that involve computing approaches such as cloud, edge, and fog. The analysis focuses on IoT deep learning architectures suitable for disease detection and identifies the factors that require researchers' attention in order to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning with hybrid algorithms.
Keywords: deep learning; Internet of Things (IoT); cloud computing; fog computing; edge computing
3. Computing over Space: Status, Challenges, and Opportunities
Authors: Yaoqi Liu, Yinhe Han, Hongxin Li, Shuhao Gu, Jibing Qiu, Ting Li. Engineering, 2025, No. 11, pp. 20-25 (6 pages)
From the introduction: The rapid expansion of satellite constellations in recent years has resulted in the generation of massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails the deployment of computational resources on platforms such as satellites to process large-scale data under constraints such as high radiation exposure, restricted power consumption, and minimized weight.
Keywords: satellite constellations; deployment; computational resources; data processing; space computing; radiation exposure; space; high performance computing; power consumption
4. Joint Cooperative Task Offloading and Computing Resource Allocation for Low Earth Orbit Satellites
Authors: Zhang Yuexia, Zhang Siyu, Zheng Hui. China Communications, 2025, No. 10, pp. 88-100 (13 pages)
Multispectral low earth orbit (LEO) satellites are characterized by a large volume of captured data and high spatial resolution, which can provide rich image information and data support for a variety of fields, but their limited computing resources make it difficult to satisfy low-delay, low-energy task processing requirements. To address this problem, this paper presents the LEO satellites cooperative task offloading and computing resource allocation (LEOC-TC) algorithm. First, a LEO satellites cooperative task offloading system is designed in which the multispectral LEO satellites can either process their tasks locally or offload them to other LEO satellites equipped with servers, thus providing high-quality information-processing services for multispectral LEO satellites. Second, an optimization problem is established that minimizes the weighted sum of the total task processing delay and total energy consumption for the multispectral LEO satellites, and this problem is split into an offloading-ratio subproblem and a computing-resource subproblem. Finally, a Bernoulli mapping tuna swarm optimization algorithm is used to solve the two subproblems separately in order to satisfy the system's low-delay and low-energy demands. Simulation results show that the total task processing cost of the LEOC-TC algorithm is reduced by 63.32%, 66.67%, and 80.72% compared to the random offloading ratio algorithm, the average resource offloading algorithm, and the local computing algorithm, respectively.
Keywords: computing resource allocation; interstellar collaboration; low earth orbit satellites; task offloading
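The weighted-sum objective described in this abstract has a standard shape; as an illustrative sketch (the symbols below, offloading ratios ρ_i, allocated computing resources f_i, weight ω, and capacity F_max, are assumed notation rather than the paper's own):

```latex
\min_{\boldsymbol{\rho},\,\boldsymbol{f}}\;
\omega\, T_{\mathrm{total}}(\boldsymbol{\rho},\boldsymbol{f})
+ (1-\omega)\, E_{\mathrm{total}}(\boldsymbol{\rho},\boldsymbol{f})
\qquad \text{s.t.}\quad 0 \le \rho_i \le 1,\;\; f_i \ge 0,\;\; \sum_i f_i \le F_{\max}
```

Fixing f and optimizing over ρ yields an offloading-ratio subproblem, and fixing ρ yields a computing-resource subproblem; this is the kind of split the abstract describes before each part is handed to the swarm optimizer.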
5. Modified Neural Network Used for Host Utilization Predication in Cloud Computing Environment
Authors: Arif Ullah, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Md Shohel Sayeed. Computers, Materials & Continua, 2025, No. 3, pp. 5185-5204 (20 pages)
Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. Host load prediction is an essential aspect of cloud computing that improves resource allocation techniques; without accurate prediction, hardware resource allocation suffers from host initialization issues that add several minutes to response times. To solve this issue and predict cloud capacity accurately, cloud data centers use prediction algorithms, which permit dynamic cloud scalability while maintaining superior service quality. For host prediction, we therefore present a hybrid convolutional neural network and long short-term memory (CNN-LSTM) model. First, the model's input is processed with the vector autoregression (VAR) technique, which filters the multivariate data to eliminate linear interdependencies prior to analysis. The remaining data are then fed into the convolutional neural network layer, which extracts intricate details about the utilization of each virtual machine and central processing unit. Next, the long short-term memory layer captures the temporal information of irregular trends in the time series components. A key element of the process is the choice of the most appropriate activation function for this type of model, a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing unpredictability in data centers, so two real load traces from typical distributed systems were used to assess performance. The experimental results demonstrate that, in comparison to CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, our approach offers state-of-the-art performance with higher accuracy on both datasets.
Keywords: cloud computing; data center; virtual machine (VM); predication algorithm
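To make the CNN-then-LSTM pipeline concrete, here is a minimal PyTorch sketch of a hybrid load predictor. All layer sizes are illustrative assumptions, the VAR preprocessing is assumed to have already been applied to the input, and ReLU stands in for the paper's scaled polynomial constant unit activation:

```python
import torch
import torch.nn as nn

class CNNLSTMForecaster(nn.Module):
    """Hypothetical sketch of a hybrid CNN-LSTM host-load predictor."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # 1-D convolution extracts local patterns in CPU/VM utilization
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        # LSTM captures temporal trends in the convolved sequence
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                              # x: (batch, time, features)
        z = torch.relu(self.conv(x.transpose(1, 2)))   # -> (batch, 32, time)
        out, _ = self.lstm(z.transpose(1, 2))          # -> (batch, time, hidden)
        return self.head(out[:, -1])                   # predict next-step load
```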
6. Blockchain-Enabled Edge Computing Techniques for Advanced Video Surveillance in Autonomous Vehicles
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa. Computers, Materials & Continua, 2025, No. 4, pp. 1239-1255 (17 pages)
Blockchain-based audiovisual transmission systems were built to create a distributed and flexible smart transport system (STS) that lets customers, video creators, and service providers connect with each other directly. Blockchain-based STS devices need substantial computing power to convert video feeds of varying quality and format into the different versions and structures that different users need. Existing blockchains, however, cannot support live streaming because their processing takes too long and they lack sufficient computing power, while the large amounts of video data being transmitted and analyzed place excessive stress on vehicular networks. This paper proposes a video surveillance method to improve the blockchain system's data performance and lower latency across a multiple access edge computing (MEC) system, within a proposed framework for the integration of MEC and blockchain for video surveillance in autonomous vehicles (IMEC-BVS). The joint optimization problem is modeled as a Markov decision process and solved using an asynchronous advantage actor-critic (ACAA) deep reinforcement learning method. Simulation results show that the suggested method converges quickly and improves the joint performance of MEC and blockchain for video surveillance in self-driving cars compared to other methods.
Keywords: blockchain; multiple access edge computing; video surveillance; autonomous vehicles
7. Intelligent Management of Resources for Smart Edge Computing in 5G Heterogeneous Networks Using Blockchain and Deep Learning
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa, Mohammad Shahid Husain, Abakar Ibraheem Abdalla Aadam, Mohammed Waseequ Sheraz, Mohammad Zunnun Khan. Computers, Materials & Continua, 2025, No. 7, pp. 1169-1187 (19 pages)
Smart edge computing (SEC) is a novel computing paradigm that can transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. A core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats, while deep learning algorithms analyze resource availability and dynamically optimize computation offloading decisions. IoT applications that require significant resources can benefit from SEC's broader coverage. However, because access conditions constantly change and network devices have heterogeneous resources, it is not easy to establish consistent, dependable, and instantaneous communication between edge devices and their processors, especially in 5G Heterogeneous Network (HN) situations. This paper therefore proposes an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) in 5G HNs. A dual schedule deep reinforcement learning (DS-DRL) technique is developed, consisting of a rapid schedule learning process and a slow schedule learning process, with the primary objective of minimizing overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
Keywords: smart edge computing; heterogeneous networks; blockchain; 5G network; internet of things; artificial intelligence
8. High-Throughput and Energy-Saving Blockchain for Untrusted IIoT Device Participation in Edge-to-End Collaborative Computing
Authors: Zhang Zhen, Huang Xiaowei, Li Chengjie, Li Aihua, Xiao Liqun. China Communications, 2025, No. 11, pp. 132-143 (12 pages)
The integration of blockchain and edge-to-end collaborative computing offers a solution to the trust issues arising from untrusted IIoT devices. However, ensuring efficiency and energy savings when applying blockchain to edge-to-end collaborative computing remains a significant challenge. To tackle this, this paper proposes an innovative task-oriented blockchain architecture comprising trusted Edge Computing (EC) servers and untrusted Industrial Internet of Things (IIoT) devices. Untrusted IIoT devices are organized into several clusters, each executing a task in the form of smart contracts, and the work logs of a task are packaged into a block. Executing a task with smart contracts within a cluster ensures the reliability of the task result, while reducing the scope of nodes involved in block consensus increases the overall throughput of the blockchain. Packaging task logs into blocks, and storing and propagating blocks through the corresponding EC servers, reduces network load and avoids computing-power competition. The paper also presents theoretical calculations of the proposed architecture's TPS (Transactions Per Second) and failure probability. Experimental results demonstrate that this architecture ensures computational security, improves TPS, and reduces resource consumption.
Keywords: blockchain technology; consensus mechanism; edge-to-end collaborative computing; untrusted IIoT devices
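As an illustration of the log-packaging step, the following sketch shows one way a cluster's task work logs could be hashed into a chained block; the field names and layout are assumptions for illustration, not the paper's on-chain format:

```python
import hashlib
import json
import time

def package_task_block(task_id: str, work_logs: list[dict], prev_hash: str) -> dict:
    """Package the work logs of one completed task into a hash-linked block.

    Illustrative sketch only. In the architecture described above, the
    cluster's EC server would store and propagate this block, keeping
    consensus local to the cluster rather than network-wide.
    """
    header = {
        "task_id": task_id,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
        # Digest of the task's work logs, committing the block to them
        "log_digest": hashlib.sha256(
            json.dumps(work_logs, sort_keys=True).encode()
        ).hexdigest(),
    }
    block_hash = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "logs": work_logs, "hash": block_hash}
```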
9. Privacy Preserving Federated Anomaly Detection in IoT Edge Computing Using Bayesian Game Reinforcement Learning
Authors: Fatima Asiri, Wajdan Al Malwi, Fahad Masood, Mohammed S. Alshehri, Tamara Zhukabayeva, Syed Aziz Shah, Jawad Ahmad. Computers, Materials & Continua, 2025, No. 8, pp. 3943-3960 (18 pages)
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes, but the rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation, since the widespread connectivity of IoT edge networks exposes them to various security threats and necessitates robust strategies for detecting malicious activities. This research presents a privacy-preserving federated anomaly detection framework that combines Bayesian game theory (BGT) and double deep Q-learning (DDQL). The framework integrates BGT to model attacker and defender interactions under uncertainty, adapting dynamically to threat levels and resource availability. DDQL is incorporated to optimize decision-making and to learn optimal defense policies at the edge. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves detection accuracy of up to 98% while maintaining low resource consumption, demonstrating the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
Keywords: IoT; edge computing; smart homes; anomaly detection; Bayesian game theory; reinforcement learning
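The DDQL component rests on the standard double deep Q-learning target, in which the online network selects the next action and the target network evaluates it, reducing overestimation bias. A minimal PyTorch sketch (the function signature and `done` masking convention are illustrative, not from the paper):

```python
import torch

def double_dqn_target(online_q, target_q, reward, next_state, done, gamma=0.99):
    """Compute the double DQN bootstrap target y = r + gamma * Q_t(s', a*).

    online_q / target_q: networks mapping states to per-action Q-values.
    """
    with torch.no_grad():
        # Online network chooses the greedy next action...
        best_action = online_q(next_state).argmax(dim=1, keepdim=True)
        # ...target network evaluates that action's value.
        next_value = target_q(next_state).gather(1, best_action).squeeze(1)
        next_value = next_value * (1.0 - done)  # zero out terminal states
    return reward + gamma * next_value
```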
10. G-Phenomena as a Base of Scalable Distributed Computing—G-Phenomena in Moore's Law
Authors: Karolj Skala, Davor Davidovic, Tomislav Lipic, Ivan Sovic. International Journal of Internet and Distributed Systems, 2014, No. 1, pp. 1-4 (4 pages)
Today we witness the exponential growth of scientific research. This fast growth has been possible thanks to the rapid development of computing systems, from their first days in 1947 with the invention of the transistor to today's high-performance, scalable distributed computing systems. This growth was first observed by Gordon E. Moore in 1965 and postulated as Moore's Law. For the development of scalable distributed computing systems, the year 2000 was very special: the first GHz-speed processor, GB-size memory, and GB/s data transmission through networks were achieved. Interestingly, in the same year usable Grid computing systems emerged, giving a strong impulse to the rapid development of distributed computing systems. This paper recognizes these facts that occurred in the year 2000 as the G-phenomena, a millennium cornerstone for the rapid development of scalable distributed systems that evolved around the Grid and Cloud computing paradigms.
Keywords: historical development of computing; G-phenomena; Moore's law; distributed computing; scalability; grid computing; cloud computing; component
11. Computing for power system operation and planning: Then, now, and the future
Authors: Yousu Chen, Zhenyu Huang, Shuangshuang Jin, Ang Li. iEnergy, 2022, No. 3, pp. 315-324 (10 pages)
With the global trend toward clean energy and decarbonization, power systems have been evolving at a pace never before seen in the history of electrification. This evolution makes the power system more dynamic and more distributed, with higher uncertainty. These new behaviors bring significant challenges in power system modeling and simulation, as more data must be analyzed for larger systems and more complex models must be solved in shorter time periods; conventional computing approaches will not be sufficient for future power systems. This paper provides a historical review of computing for power system operation and planning, discusses technology advancements in high performance computing (HPC), and describes the drivers for employing HPC techniques. Application examples using different HPC techniques, including the latest quantum computing, are also presented to show how HPC can help us prepare to meet the computing requirements of power systems in a clean energy future.
Keywords: power system computing; high performance computing; quantum computing; contingency analysis; state estimation; dynamic simulation; machine learning; optimization; exascale computing
12. Distributed service caching with deep reinforcement learning for sustainable edge computing in large-scale AI
Authors: Wei Liu, Muhammad Bilal, Yuzhe Shi, Xiaolong Xu. Digital Communications and Networks, 2025, No. 5, pp. 1447-1456 (10 pages)
Increasing reliance on large-scale AI models has led to rising demand for intelligent services. The centralized cloud computing approach has limitations in data transfer efficiency and response time, so many service providers have begun to deploy edge servers that cache intelligent services in order to reduce transmission delay and communication energy consumption. However, finding the optimal service caching strategy remains a significant challenge due to the stochastic nature of service requests and the bulky nature of intelligent services. To deal with this, we propose a distributed service caching scheme integrating deep reinforcement learning (DRL) with mobility prediction, which we refer to as DSDM. Specifically, we employ the D3QN (Deep Double Dueling Q-Network) framework, integrate mobile device locations predicted by a Long Short-Term Memory (LSTM) network into the service caching replacement algorithm, and adopt a distributed multi-agent approach for learning and training. Experimental results demonstrate that DSDM achieves significant reductions in communication energy consumption compared to traditional methods across various scenarios.
Keywords: intelligent service; edge caching; deep reinforcement learning; mobility prediction
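The "Dueling" part of D3QN refers to a Q-network head that decomposes the action value into a state value and per-action advantages, Q(s,a) = V(s) + A(s,a) - mean_a A(s,a). A minimal PyTorch sketch of that standard head (layer sizes are assumptions, not the paper's configuration):

```python
import torch.nn as nn

class DuelingHead(nn.Module):
    """Standard dueling Q-head: separate value and advantage streams."""

    def __init__(self, hidden: int, n_actions: int):
        super().__init__()
        self.value = nn.Linear(hidden, 1)           # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, h):  # h: (batch, hidden) shared features
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable
        return v + a - a.mean(dim=1, keepdim=True)
```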
13. On Privacy-Preserved Machine Learning Using Secure Multi-Party Computing: Techniques and Trends
Authors: Oshan Mudannayake, Amila Indika, Upul Jayasinghe, Gyu Myoung Lee, Janaka Alawatugoda. Computers, Materials & Continua, 2025, No. 11, pp. 2527-2578 (52 pages)
The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks and motivating the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computing (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, the paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and the comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
Keywords: cryptography; data privacy; machine learning; multi-party computation; privacy; SMPC; PPML
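A minimal example of the kind of SMPC primitive such surveys cover is additive secret sharing, in which parties jointly compute a sum while no party ever sees another's input. A sketch in Python (the modulus choice is an illustrative assumption):

```python
import secrets

PRIME = 2**61 - 1  # field modulus; illustrative choice

def share(value: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares over Z_p."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)  # last share completes the sum
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Secure addition: each party adds its own shares locally; only the
# final sum is ever reconstructed, never the individual inputs.
a_shares, b_shares = share(42, 3), share(100, 3)
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```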
14. Computing the ground state solution of Bose-Einstein condensates by an energy-minimizing normalized residual network
Authors: Ren-Tao Wu, Ji-Dong Gao, Yu-Han Wang, Zhen-Wei Deng, Ming-Jun Li, Rong-Pei Zhang. Chinese Physics B, 2025, No. 10, pp. 321-329 (9 pages)
This paper introduces a novel numerical method based on an energy-minimizing normalized residual network (EM-NormResNet) to compute the ground-state solution of Bose-Einstein condensates at zero or low temperatures. Starting from the three-dimensional Gross-Pitaevskii equation (GPE), we reduce it to 1D and 2D GPEs by exploiting radial and cylindrical symmetry. The ground-state solution is formulated by minimizing the energy functional under constraints, which is solved directly with the EM-NormResNet approach. The paper provides detailed solutions for the ground states in 1D, 2D (with radial symmetry), and 3D (with cylindrical symmetry). We use the Thomas-Fermi approximation as the target function to pre-train the neural network, and the formal network is then trained using the energy minimization method. In contrast to traditional numerical methods, our neural network approach introduces two key innovations: (i) a novel normalization technique designed for high-dimensional systems within an energy-based loss function; (ii) improved training efficiency and model robustness achieved by incorporating gradient stabilization techniques into the residual networks. Extensive numerical experiments validate the method's accuracy across different spatial dimensions.
Keywords: Bose-Einstein condensate; Gross-Pitaevskii equation; energy minimization; normalized residual network
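For reference, the constrained energy minimization described here typically takes the standard dimensionless GPE form (β denotes the interaction strength and V the trapping potential; the paper's exact scaling may differ):

```latex
\phi_g = \arg\min_{\|\phi\|_{L^2}=1} E[\phi], \qquad
E[\phi] = \int \Big[ \tfrac{1}{2}\,|\nabla \phi(\mathbf{x})|^{2}
  + V(\mathbf{x})\,|\phi(\mathbf{x})|^{2}
  + \tfrac{\beta}{2}\,|\phi(\mathbf{x})|^{4} \Big]\, \mathrm{d}\mathbf{x}
```

The normalization constraint ‖φ‖ = 1 is what the network's normalization technique must enforce while the energy functional serves as the training loss.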
15. Secure Malicious Node Detection in Decentralized Healthcare Networks Using Cloud and Edge Computing with Blockchain-Enabled Federated Learning
Authors: Raj Sonani, Reham Alhejaili, Pushpalika Chatterjee, Khalid Hamad Alnafisah, Jehad Ali. Computer Modeling in Engineering & Sciences, 2025, No. 9, pp. 3169-3189 (21 pages)
Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as insecure communication, privacy concerns, and the presence of malicious nodes. Existing machine and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. This study therefore proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND). The model trains locally within healthcare clusters, sharing only model updates instead of patient data, which preserves privacy and improves accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
Keywords: authentication; blockchain; deep learning; federated learning; healthcare network; machine learning; wearable sensor nodes
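The privacy mechanism described here, sharing model updates rather than patient records, follows the federated averaging pattern. A minimal sketch (weighting by cluster size is a common convention, assumed rather than taken from the paper):

```python
import numpy as np

def fed_avg(client_updates, client_sizes):
    """Weighted average of locally trained model parameters (FedAvg).

    client_updates: one parameter vector per healthcare cluster; only
    these updates leave the cluster -- raw patient data never does.
    client_sizes: number of local samples per cluster, used as weights.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * np.asarray(u) for w, u in zip(weights, client_updates))

# Example: three clusters contribute updates of a 4-parameter model.
global_params = fed_avg(
    [[0.1, 0.2, 0.3, 0.4], [0.2, 0.1, 0.4, 0.3], [0.0, 0.3, 0.2, 0.5]],
    client_sizes=[120, 80, 200],
)
```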
16. SATVPC: Secure-agent-based trustworthy virtual private cloud model in open computing environments (Cited by 2)
Authors: 徐小龙, 涂群, BESSIS Nik, 杨庚, 王新珩. Journal of Central South University (SCIE, EI, CAS), 2014, No. 8, pp. 3186-3196 (11 pages)
Private clouds and public clouds are merging into an open, integrated cloud computing environment that can aggregate and fully utilize the computing, storage, information, and other hardware and software resources of WAN and LAN networks, but this also brings a series of security, reliability, and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC is proposed for the integrated, open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe, and trustworthy virtual private computing platform for multi-tenant systems. To meet SATVPC's credibility requirements and establish trust relationships between each task execution agent and task executor node consistent with their security policies, a new dynamic composite credibility evaluation mechanism is presented, comprising a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibility of task execution agents and task executor nodes quantitatively, correctly, and operationally.
Keywords: cloud computing; trustworthy computing; virtualization; agent
17. A Survey of Mobile Cloud Computing (Cited by 7)
Authors: Xiaopeng Fan, Jiannong Cao, Haixia Mao. ZTE Communications, 2011, No. 1, pp. 4-8 (5 pages)
Mobile Cloud Computing (MCC) is emerging as one of the most important branches of cloud computing. In this paper, MCC is defined as cloud computing extended by mobility and a new ad-hoc infrastructure based on mobile devices. It provides mobile users with data storage and processing services on a cloud computing platform. Because mobile cloud computing is still in its infancy, we aim to clarify the confusion that has arisen from differing views. Existing works are reviewed, and an overview of recent advances in mobile cloud computing is provided. We investigate representative infrastructures of mobile cloud computing and analyze their key components. Moreover, emerging MCC models and services are discussed, and challenging issues that will need to be addressed in future work are identified.
Keywords: mobile cloud computing; cloud computing
18. Security Architecture of Trusted Virtual Machine Monitor for Trusted Computing (Cited by 2)
Authors: HUANG Qiang, SHEN Changxiang, FANG Yanxiang. Wuhan University Journal of Natural Sciences (CAS), 2007, No. 1, pp. 13-16 (4 pages)
Based on an analysis of the limitations the Trusted Computing Group (TCG) has encountered, we argue that the virtual machine monitor (VMM) is the appropriate architecture for implementing the TCG specification. Putting together the VMM architecture, TCG hardware, and an application-oriented "thin" virtual machine (VM), this paper presents a trusted VMM-based security architecture characterized by a reduced and distributed trusted computing base (TCB). It provides isolation and integrity guarantees on which general security requirements can be satisfied.
Keywords: trusted computing; virtual machine monitor (VMM); separation kernel; trusted computing base (TCB)
19. Programming for scientific computing on peta-scale heterogeneous parallel systems (Cited by 1)
Authors: 杨灿群, 吴强, 唐滔, 王锋, 薛京灵. Journal of Central South University (SCIE, EI, CAS), 2013, No. 5, pp. 1189-1203 (15 pages)
Peta-scale high-performance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenMP. This has led to domain-specific programming approaches that provide domain-specific programming interfaces while abstracting away some performance-critical architectural details. Based on experience in designing large-scale computing systems, this work proposes a hybrid programming framework for scientific computing on heterogeneous architectures. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architectural details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-1A, a peta-scale CPU-GPU heterogeneous system that was the 5th fastest supercomputer in the world at the time of writing. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.
Keywords: heterogeneous parallel system; programming framework; scientific computing; GPU computing; molecular dynamics
20. EPVCNet: Enhancing privacy and security for image authentication in computing-sensitive 6G environment
Authors: Muhammad Shafiq, Lijing Ren, Denghui Zhang, Thippa Reddy Gadekallu, Mohammad Mahtab Alam. Digital Communications and Networks, 2025, No. 5, pp. 1679-1688 (10 pages)
As the 5G architecture gains momentum, interest in 6G is growing. The proliferation of Internet of Things (IoT) devices capable of capturing sensitive images has increased the need for secure transmission and robust access control mechanisms. The vast amount of data generated by low-computing devices poses a challenge to traditional centralized access control, which relies on trusted third parties and complex computations, resulting in intricate interactions, higher hardware costs, and processing delays. To address these issues, this paper introduces a novel distributed access control approach that integrates a decentralized, lightweight encryption mechanism with image transmission. The method enhances data security and resource efficiency without imposing heavy computational and network burdens. Compared with the best existing approach, it achieves a 7% improvement in accuracy, effectively addressing existing gaps in lightweight encryption and recognition performance.
Keywords: ISAC; IoT; privacy and security; VC