Journal Articles
2,126 articles found
1. FedEPC: An Efficient and Privacy-Enhancing Clustering Federated Learning Method for Sensing-Computing Fusion Scenarios
Authors: Ning Tang, Wang Luo, Yiwei Wang, Bao Feng, Shuang Yang, Jiangtao Xu, Daohua Zhu, Zhechen Huang, Wei Liang. Computers, Materials & Continua, 2025, No. 11, pp. 4091–4113 (23 pages).
With the deep integration of edge computing, 5G and Artificial Intelligence of Things (AIoT) technologies, the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios. Traditional federated learning (FL) algorithms face significant limitations in practical applications due to client drift, model bias, and resource constraints under non-independent and identically distributed (Non-IID) data, as well as the computational overhead and utility loss caused by privacy-preserving techniques. To address these issues, this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method (FedEPC). The method introduces a dual-round client selection mechanism to optimize training. First, the Sparsity-based Privacy-preserving Representation Extraction Module (SPRE) and the Adaptive Isomorphic Devices Clustering Module (AIDC) cluster clients based on privacy-sensitive features. Second, the Context-aware In-cluster Client Selection Module (CICS) dynamically selects representative devices for training, ensuring heterogeneous data distributions are fully represented. By conducting federated training within clusters and aggregating personalized models, FedEPC effectively mitigates weight divergence caused by data heterogeneity and reduces the impact of client drift and straggler issues. Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods. While ensuring privacy security, FedEPC provides an efficient and robust solution for FL on resource-constrained devices in sensing-computing fusion scenarios, offering both theoretical value and engineering practicality.
Keywords: federated learning, edge computing, clustering, Non-IID, privacy
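The abstract does not detail the dual-round selection, so the following is only a minimal sketch of the general clustered-federated-averaging idea that FedEPC builds on: clients are grouped by k-means over their (privacy-preserving) representation vectors, and a FedAvg-style weighted average is taken within each cluster. All function and variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def cluster_clients(representations, k, iters=20, seed=0):
    """Group clients by k-means on their privacy-preserving representation vectors."""
    rng = np.random.default_rng(seed)
    reps = np.asarray(representations, dtype=float)
    centers = reps[rng.choice(len(reps), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((reps[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = reps[labels == c].mean(axis=0)
    return labels

def fedavg_within_cluster(client_weights, client_sizes, labels, cluster_id):
    """Weighted FedAvg over the clients belonging to one cluster."""
    idx = [i for i, l in enumerate(labels) if l == cluster_id]
    total = sum(client_sizes[i] for i in idx)
    return sum(client_sizes[i] / total * client_weights[i] for i in idx)

# Toy usage: 6 clients, 4-dimensional model weights, 2 clusters.
reps = np.random.rand(6, 3)
weights = [np.random.rand(4) for _ in range(6)]
sizes = [100, 80, 120, 90, 60, 110]
labels = cluster_clients(reps, k=2)
cluster_models = {c: fedavg_within_cluster(weights, sizes, labels, c) for c in np.unique(labels)}
```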
2. Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. Digital Communications and Networks, 2025, No. 1, pp. 60–70 (11 pages).
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and to interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: edge computing, network virtualization, container cluster, deep reinforcement learning, graph convolutional network
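The exact RL-GCN architecture is not specified in the abstract; below is a minimal sketch of a single graph-convolution propagation step over a container-dependency graph, the standard building block such a framework would use to embed the relationships between containers before feeding an actor-critic policy. All shapes and names are illustrative assumptions.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
    return np.maximum(0.0, a_norm @ features @ weight)

# Toy container cluster: 4 containers, edges encode inter-container traffic;
# node features could be resource demands, and the embeddings feed the policy network.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
features = np.random.rand(4, 3)     # e.g., (cpu, mem, bandwidth) per container
weight = np.random.rand(3, 8)       # learnable projection to an 8-dim embedding
embeddings = gcn_layer(adj, features, weight)
```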
3. Real-Time Monitoring Method for Cow Rumination Behavior Based on Edge Computing and Improved MobileNet v3
Authors: ZHANG Yu, LI Xiangting, SUN Yalin, XUE Aidi, ZHANG Yi, JIANG Hailong, SHEN Weizheng. Smart Agriculture (CSCD), 2024, No. 4, pp. 29–41 (13 pages).
[Objective] Real-time monitoring of cow rumination behavior is of paramount importance for promptly obtaining relevant information about cow health and predicting cow diseases. Currently, various strategies have been proposed for monitoring cow rumination behavior, including video surveillance, sound recognition, and sensor monitoring methods. However, the application of edge devices gives rise to the issue of inadequate real-time performance. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method based on edge computing was proposed. [Methods] Autonomously designed edge devices were utilized to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for the real-time recognition of cow rumination behavior. For the federated edge intelligence approach, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism, and a federated edge intelligence model was designed utilizing the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. For split edge intelligence, a model named MobileNet-LSTM was designed by integrating the MobileNet v3 network with a fusion collaborative attention mechanism and the Bi-LSTM network. [Results and Discussions] In comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved an average precision, recall, F1-score, specificity, and accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively, yielding the best recognition performance. [Conclusions] This work provides a real-time and effective method for monitoring cow rumination behavior, and the proposed federated edge intelligence model can be applied in practical settings.
Keywords: cow rumination behavior, real-time monitoring, edge computing, improved MobileNet v3, edge intelligence model, Bi-LSTM
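For reference, the figures reported above follow the standard definitions of the binary classification metrics; a minimal sketch of those definitions computed from confusion-matrix counts is shown below (the counts in the example are made up, not the paper's data).

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, specificity, accuracy

# Example with illustrative counts only:
print(classification_metrics(tp=485, fp=15, fn=10, tn=490))
```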
4. Priority-Based Scheduling and Orchestration in Edge-Cloud Computing: A Deep Reinforcement Learning-Enhanced Concurrency Control Approach
Authors: Mohammad A. Al Khaldy, Ahmad Nabot, Ahmad Al-Qerem, Mohammad Alauthman, Amina Salhi, Suhaila Abuowaida, Naceur Chihaoui. Computer Modeling in Engineering & Sciences, 2025, No. 10, pp. 673–697 (25 pages).
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. The approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements: a 40% latency reduction, a 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), a 40% reduction in energy consumption (300 vs. 500 J per task), and a 50% improvement in scalability factor (1.8 vs. 1.2 for EDF) compared to state-of-the-art heuristic and meta-heuristic approaches. These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
Keywords: edge computing, cloud computing, scheduling algorithms, orchestration strategies, deep reinforcement learning, concurrency control, real-time systems, IoT
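The paper's DRL design is only described at a high level; the sketch below shows the core tabular Q-learning update that a dynamic priority-assignment policy could be built on, where the state summarizes queue and deadline conditions and the action is a priority level. The environment, reward and names are all illustrative assumptions, not the authors' method.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
PRIORITY_LEVELS = [0, 1, 2]                    # low / medium / high
q_table = defaultdict(lambda: [0.0] * len(PRIORITY_LEVELS))

def choose_priority(state):
    """Epsilon-greedy selection of a priority level for the current task."""
    if random.random() < EPSILON:
        return random.choice(PRIORITY_LEVELS)
    values = q_table[state]
    return values.index(max(values))

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

# Toy interaction: state = (queue_length_bucket, deadline_slack_bucket).
state = (2, 1)
action = choose_priority(state)
reward = 1.0 if action == 2 else -0.2          # e.g., reward for meeting a tight deadline
update(state, action, reward, next_state=(1, 2))
```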
5. Real-Time Identification Technology for Encrypted DNS Traffic with Privacy Protection
Authors: Zhipeng Qin, Hanbing Yan, Biyang Zhang, Peng Wang, Yitao Li. Computers, Materials & Continua, 2025, No. 6, pp. 5811–5829 (19 pages).
With the widespread adoption of encrypted Domain Name System (DNS) technologies such as DNS over Hypertext Transfer Protocol Secure (HTTPS), traditional port- and protocol-based traffic analysis methods have become ineffective. Although encrypted DNS enhances user privacy protection, it also provides concealed communication channels for malicious software, compelling detection technologies to shift towards statistical feature-based and machine learning approaches. However, these methods still face challenges in real-time performance and privacy protection. This paper proposes a real-time identification technology for encrypted DNS traffic with privacy protection. Firstly, a hierarchical cloud-edge-end collaboration architecture is designed, incorporating task offloading strategies to balance privacy protection and identification efficiency. Secondly, a privacy-preserving federated learning mechanism based on Federated Robust Aggregation (FedRA) is proposed, utilizing Medoid aggregation and differential privacy techniques to ensure data privacy and enhance identification accuracy. Finally, an edge offloading strategy based on a dynamic priority scheduling algorithm (DPSA) is designed to alleviate the terminal burden and reduce latency. Simulation results demonstrate that the proposed technology significantly improves the accuracy and real-time performance of encrypted DNS traffic identification while protecting privacy, making it suitable for various network environments.
Keywords: encrypted DNS, edge computing, federated learning, real-time detection, privacy protection
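FedRA's details are not given in the abstract; below is a minimal sketch of the general idea of medoid-based robust aggregation with optional Gaussian noise for differential privacy, assuming each client sends a flattened update vector. The noise scale and helper names are illustrative assumptions.

```python
import numpy as np

def medoid_aggregate(client_updates, dp_noise_std=0.0, seed=0):
    """Pick the client update with the smallest total L2 distance to all others
    (the medoid), then optionally add Gaussian noise for differential privacy."""
    updates = np.asarray(client_updates, dtype=float)
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1)
    medoid_idx = int(np.argmin(dists.sum(axis=1)))
    aggregated = updates[medoid_idx].copy()
    if dp_noise_std > 0:
        rng = np.random.default_rng(seed)
        aggregated += rng.normal(0.0, dp_noise_std, size=aggregated.shape)
    return aggregated, medoid_idx

# Toy usage: 5 clients, one of which is an outlier the medoid naturally discounts.
updates = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.05, 0.95]),
           np.array([1.0, 1.0]), np.array([8.0, -5.0])]   # last client is anomalous
agg, idx = medoid_aggregate(updates, dp_noise_std=0.01)
```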
6. Integrating AI, Blockchain, and Edge Computing for Zero-Trust IoT Security: A Comprehensive Review of Advanced Cybersecurity Framework
Authors: Inam Ullah Khan, Fida Muhammad Khan, Zeeshan Ali Haider, Fahad Alturise. Computers, Materials & Continua, 2025, No. 12, pp. 4307–4344 (38 pages).
The rapid expansion of the Internet of Things (IoT) has introduced significant security challenges due to the scale, complexity, and heterogeneity of interconnected devices. Traditional centralized security models struggle to deal with these threats, especially in decentralized applications where IoT devices may operate with minimal resources. Emerging technologies, including Artificial Intelligence (AI), blockchain, edge computing, and Zero-Trust Architecture (ZTA), offer potential solutions by strengthening threat detection, data integrity, and system resilience in real time. AI provides sophisticated anomaly detection and predictive analytics, while blockchain delivers decentralized, tamper-proof assurance over device communication and information exchange. Edge computing enables low-latency processing by distributing computational workloads and moving them closer to the devices. ZTA enhances security by continuously verifying each device and user on the network, adhering to the "never trust, always verify" principle. This paper reviews these technologies, examining how they are used to secure IoT ecosystems, the issues raised by their integration, and the possibility of developing a multi-layered, adaptive security framework. Major concerns, such as scalability, resource limitations, and interoperability, are identified, and future directions for optimizing the application of AI, blockchain, and edge computing in zero-trust IoT systems are discussed.
Keywords: Internet of Things (IoT), artificial intelligence (AI), blockchain, edge computing, zero-trust architecture (ZTA), IoT security, real-time threat detection
7. A Multi-Objective Clustered Input Oriented Salp Swarm Algorithm in Cloud Computing
Authors: Juliet A. Murali, Brindha T. Computers, Materials & Continua (SCIE, EI), 2024, No. 12, pp. 4659–4690 (32 pages).
Infrastructure as a Service (IaaS) in cloud computing enables flexible resource distribution over the Internet, but achieving optimal scheduling remains a challenge. Effective resource allocation in cloud-based environments, particularly within the IaaS model, poses persistent difficulties. Existing methods often struggle with slow optimization, imbalanced workload distribution, and inefficient use of available assets. These limitations result in longer processing times, increased operational expenses, and inadequate resource deployment, particularly under fluctuating demands. To overcome these issues, a novel Clustered Input-Oriented Salp Swarm Algorithm (CIOSSA) is introduced. The approach combines two distinct strategies: Task Splitting Agglomerative Clustering (TSAC) with an Input-Oriented Salp Swarm Algorithm (IOSSA), which prioritizes tasks based on urgency, and a refined multi-leader model that accelerates the optimization process, enhancing both speed and accuracy. By continuously assessing system capacity before task distribution, the model ensures that assets are deployed effectively and costs are controlled. The dual-leader technique expands the potential solution space, leading to substantial gains in processing speed, cost-effectiveness, asset efficiency, and system throughput, as demonstrated by comprehensive tests. As a result, the proposed model outperforms existing approaches in terms of makespan, resource utilization, throughput, and convergence speed, demonstrating that CIOSSA is scalable, reliable, and appropriate for the dynamic settings found in cloud computing.
Keywords: cloud computing, clustering, resource allocation, scheduling, swarm algorithms, optimization
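CIOSSA's task-splitting and multi-leader refinements are not reproduced here; the sketch below only shows the update rules of the standard salp swarm algorithm that IOSSA builds on: a leader salp that explores around the best-known food source and follower salps that move to the midpoint with their predecessor. The bounds and objective are illustrative assumptions.

```python
import numpy as np

def salp_swarm(objective, lb, ub, n_salps=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    salps = rng.uniform(lb, ub, size=(n_salps, dim))
    fitness = np.array([objective(s) for s in salps])
    food = salps[np.argmin(fitness)].copy()            # best solution found so far
    for t in range(n_iter):
        c1 = 2 * np.exp(-(4 * (t + 1) / n_iter) ** 2)  # exploration/exploitation balance
        for j in range(dim):                            # leader salp update
            c2, c3 = rng.random(), rng.random()
            step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
            salps[0, j] = food[j] + step if c3 >= 0.5 else food[j] - step
        for i in range(1, n_salps):                     # follower salps: chain behaviour
            salps[i] = (salps[i] + salps[i - 1]) / 2.0
        salps = np.clip(salps, lb, ub)
        fitness = np.array([objective(s) for s in salps])
        if fitness.min() < objective(food):
            food = salps[np.argmin(fitness)].copy()
    return food

# Toy usage: minimize a 3-dimensional sphere function.
best = salp_swarm(lambda x: float(np.sum(x ** 2)),
                  lb=np.array([-5.0] * 3), ub=np.array([5.0] * 3))
```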
8. New multi-DSP parallel computing architecture for real-time image processing (cited 4 times)
Authors: Hu Junhong, Zhang Tianxu, Jiang Haoyang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2006, No. 4, pp. 883–889 (7 pages).
The flexibility of traditional image processing systems is limited because those systems are designed for specific applications. In this paper, a new TMS320C64x-based multi-DSP parallel computing architecture is presented. It has many promising characteristics, such as powerful computing capability, broad I/O bandwidth, topology flexibility, and expansibility. The parallel system's performance is evaluated through practical experiments.
Keywords: parallel computing, image processing, real-time, computer architecture
9. Stream-computing of High Accuracy On-board Real-time Cloud Detection for High Resolution Optical Satellite Imagery (cited 8 times)
Authors: Mi WANG, Zhiqi ZHANG, Zhipeng DONG, Shuying JIN, Hongbo SU. Journal of Geodesy and Geoinformation Science, 2019, No. 2, pp. 50–59 (10 pages).
This paper focuses on time efficiency for machine vision and intelligent photogrammetry, especially a high-accuracy on-board real-time cloud detection method. With the development of technology, data acquisition capability is growing continuously and the volume of raw data is increasing explosively. Meanwhile, because of higher data accuracy requirements, the computation load is also becoming heavier. This situation makes time efficiency extremely important. Moreover, the cloud cover rate of optical satellite imagery is up to approximately 50%, which seriously restricts the application of on-board intelligent photogrammetry services. To meet the on-board cloud detection requirements and offer valid input data to subsequent processing, this paper presents a stream-computing solution for high-accuracy on-board real-time cloud detection, which follows the "bottom-up" understanding strategy of machine vision and uses multiple embedded GPUs with significant potential for on-board application. Without external memory, the data-parallel pipeline system based on multiple processing modules supports "stream-in, processing, stream-out" real-time stream computing. In experiments, images from the GF-2 satellite are used to validate the accuracy and performance of this approach, and the experimental results show that the solution not only improves cloud detection accuracy but also meets on-board real-time processing requirements.
Keywords: machine vision, intelligent photogrammetry, cloud detection, stream computing, on-board real-time processing
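The on-board GPU pipeline itself is hardware-specific; purely as a schematic of the "stream-in, processing, stream-out" pattern described above, the following sketch chains processing stages through bounded queues so that each image tile is consumed as it arrives rather than buffered in external memory. The stage functions and names are illustrative assumptions.

```python
import queue
import threading

def stage(in_q, out_q, fn):
    """Consume items from in_q, apply fn, push to out_q; None signals end of stream."""
    while True:
        item = in_q.get()
        if item is None:
            out_q.put(None)
            break
        out_q.put(fn(item))

tiles_in, masks, results = queue.Queue(maxsize=4), queue.Queue(maxsize=4), queue.Queue()
threading.Thread(target=stage, args=(tiles_in, masks, lambda t: (t, "cloud_mask")), daemon=True).start()
threading.Thread(target=stage, args=(masks, results, lambda r: ("packed", *r)), daemon=True).start()

for tile_id in range(8):        # stream-in: tiles arrive one by one from the sensor
    tiles_in.put(tile_id)
tiles_in.put(None)              # end-of-stream marker

while (item := results.get()) is not None:
    pass                        # stream-out: forward each processed tile downstream
```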
10. RT-Notification: A Novel Real-Time Notification Protocol for Wireless Control in Fog Computing
Authors: Li Feng, Jie Yang, Huan Zhang. China Communications (SCIE, CSCD), 2017, No. 11, pp. 17–28 (12 pages).
Fog computing is an emerging paradigm with broad applications including storage, measurement, and control. In this paper, we propose a novel real-time notification protocol called RT-Notification for wireless control in fog computing. RT-Notification provides low-latency TDMA communication between an access point in the fog and a large number of portable monitoring devices equipped with sensors and actuators. RT-Notification differentiates two types of control: urgent downlink actuator-oriented control and normal uplink access & scheduling control. Different from existing protocols, RT-Notification has two salient features: (i) it supports real-time notification of control frames without interrupting other ongoing transmissions, and (ii) it supports on-demand channel allocation for normal uplink access & scheduling control. RT-Notification can be implemented on commercial off-the-shelf 802.11 hardware. Our extensive simulations verify that RT-Notification is very effective in supporting the above two features.
Keywords: fog computing, wireless control, real-time, notification
11. Granular classifier: Building traffic granules for encrypted traffic classification based on granular computing (cited 2 times)
Authors: Xuyang Jing, Jingjing Zhao, Zheng Yan, Witold Pedrycz, Xian Li. Digital Communications and Networks (CSCD), 2024, No. 5, pp. 1428–1438 (11 pages).
Accurate classification of encrypted traffic plays an important role in network management. However, current methods confront several problems: inability to characterize traffic that exhibits great dispersion, inability to classify traffic with multi-level features, and degradation due to limited training traffic size. To address these problems, this paper proposes a traffic granularity-based encrypted traffic classification method, called the Granular Classifier (GC). A novel Cardinality-based Constrained Fuzzy C-Means (CCFCM) clustering algorithm is proposed to address the problem caused by limited training traffic, considering the ratio of cardinality that must be linked between flows to achieve good traffic partitioning. Then, an original representation format of traffic is presented based on granular computing, named Traffic Granules (TG), to accurately describe traffic structure by capturing the dispersion of different traffic features. Each granule is a compact set of similar data with a refined boundary that excludes outliers. Based on TG, the GC is constructed to perform traffic classification based on multi-level features. The performance of the GC is evaluated on real-world encrypted network traffic data. Experimental results show that the GC achieves outstanding performance for encrypted traffic classification with a limited size of training traffic and maintains accurate classification under dynamic network conditions.
Keywords: encrypted traffic classification, semi-supervised clustering, granular computing, anomaly detection
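CCFCM adds a cardinality constraint that the abstract does not spell out; the sketch below implements only the underlying standard fuzzy C-means iteration (alternating membership and centre updates) on which such a traffic-granule construction could be based. Parameter names and the toy data are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Standard FCM: alternate fuzzy membership and cluster-centre updates."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        U = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy usage: cluster 2-D flow feature vectors into 3 fuzzy traffic granules.
X = np.random.rand(200, 2)
centers, memberships = fuzzy_c_means(X, n_clusters=3)
```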
12. Real-time hybrid simulation of structures equipped with viscoelastic-plastic dampers using a user-programmable computational platform (cited 2 times)
Authors: Jack Wen Wei Guo, Ali Ashasi-Sorkhabi, Oya Mercan, Constantin Christopoulos. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2017, No. 4, pp. 693–711 (19 pages).
A user-programmable computational/control platform that offers real-time hybrid simulation (RTHS) capabilities was developed at the University of Toronto. The platform was previously verified using several linear physical substructures. The study presented in this paper focuses on further validating the RTHS platform using a nonlinear viscoelastic-plastic damper that has displacement-, frequency- and temperature-dependent properties. The validation study includes damper component characterization tests, as well as RTHS of a series of single-degree-of-freedom (SDOF) systems equipped with viscoelastic-plastic dampers that represent different structural designs. From the component characterization tests, it was found that for a wide range of excitation frequencies and friction slip loads, the tracking errors are comparable to the errors in RTHS of linear spring systems. The hybrid SDOF results are compared to an independently validated thermal-mechanical viscoelastic model to further validate the platform's ability to test nonlinear systems. After the validation, as an application study, nonlinear SDOF hybrid tests were used to develop performance spectra to predict the response of structures equipped with damping systems that are more challenging to model analytically. The use of the experimental performance spectra is illustrated by comparing the predicted response to the hybrid test response of 2DOF systems equipped with viscoelastic-plastic dampers.
Keywords: real-time hybrid simulation, user-programmable computational/control platform, supplemental dampers, performance spectra
13. Mobility-driven user-centric AP clustering in mobile edge computing-based ultra-dense networks (cited 1 time)
Authors: Shuxin He, Tianyu Wang, Shaowei Wang. Digital Communications and Networks (SCIE), 2020, No. 2, pp. 210–216 (7 pages).
The Ultra-Dense Network (UDN) has been envisioned as a promising technology to provide high-quality wireless connectivity in dense urban areas, in which the density of Access Points (APs) is increased up to the point where it is comparable with or surpasses the density of active mobile users. In order to mitigate inter-AP interference and improve spectrum efficiency, APs in UDNs are usually clustered into multiple groups that serve different mobile users. However, as the number of APs increases, the computational capability within an AP group becomes the bottleneck of AP clustering. In this paper, we first propose a novel UDN architecture based on Mobile Edge Computing (MEC), in which each MEC server is associated with a user-centric AP cluster and acts as a mobile agent. In addition, in the context of MEC-based UDN, we leverage mobility prediction techniques to achieve a dynamic AP clustering scheme, in which the cluster structure automatically adapts to the dynamic distribution of user traffic in a specific area. Simulation results show that the proposed scheme greatly increases the average user throughput compared with a baseline algorithm using max-SINR user association and equal bandwidth allocation, while at the same time guaranteeing low transmission delay.
Keywords: AP clustering, dynamic user traffic, mobile edge computing, mobility-driven, ultra-dense networks
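The mobility-prediction component is not detailed in the abstract; the sketch below only illustrates the basic user-centric clustering step it builds on, where each user is served by the k access points with the strongest received power at its (predicted) position. The path-loss model and all names are illustrative assumptions.

```python
import numpy as np

def user_centric_clusters(user_pos, ap_pos, k=3, path_loss_exp=3.5):
    """For each (predicted) user position, pick the k APs with the strongest
    received power under a simple distance-based path-loss model."""
    users = np.asarray(user_pos, dtype=float)
    aps = np.asarray(ap_pos, dtype=float)
    dist = np.linalg.norm(users[:, None, :] - aps[None, :, :], axis=-1) + 1e-3
    rx_power = -10.0 * path_loss_exp * np.log10(dist)       # dB, up to a constant
    return np.argsort(-rx_power, axis=1)[:, :k]              # AP indices per user

# Toy usage: 5 users and 20 densely deployed APs in a 100 m x 100 m area.
rng = np.random.default_rng(1)
clusters = user_centric_clusters(rng.uniform(0, 100, (5, 2)), rng.uniform(0, 100, (20, 2)))
```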
14. Imprecise Computation Based Real-time Fault Tolerant Implementation for Model Predictive Control
Authors: Zhou Pingfang, Xie Jianying. Journal of Donghua University (English Edition) (EI, CAS), 2006, No. 1, pp. 148–150 (3 pages).
Model predictive control (MPC) is difficult to deploy in real-time control systems because its computation time is not well defined. A real-time fault-tolerant implementation algorithm based on imprecise computation is proposed for MPC, according to the solving process of the quadratic programming (QP) problem. In this algorithm, system stability is guaranteed even when computation resources are not sufficient to complete the optimization. Through this kind of graceful degradation, the behavior of the real-time control system remains predictable and deterministic. The algorithm is demonstrated by experiments on a servomotor, and the simulation results show its effectiveness.
Keywords: model predictive control, fault tolerance, imprecise computation, real-time control
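As an illustration of the imprecise-computation idea (not the authors' algorithm), the sketch below runs projected gradient descent on a box-constrained QP and simply returns the best iterate found when the time budget expires, so a usable if suboptimal control move is always available. The problem data, step-size rule and names are illustrative assumptions.

```python
import time
import numpy as np

def anytime_qp(H, f, lb, ub, x0, budget_s=0.001, step=None):
    """Minimize 0.5 x'Hx + f'x over the box [lb, ub] by projected gradient descent,
    stopping when the computation budget is exhausted (imprecise computation)."""
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2)          # safe step size for convex H
    x = np.clip(x0, lb, ub)
    deadline = time.perf_counter() + budget_s
    while time.perf_counter() < deadline:
        grad = H @ x + f
        x = np.clip(x - step * grad, lb, ub)       # every iterate stays feasible
    return x                                       # best-effort solution so far

# Toy usage: a small strictly convex QP with input bounds.
H = np.array([[2.0, 0.2], [0.2, 1.5]])
f = np.array([-1.0, 0.5])
u = anytime_qp(H, f, lb=np.array([-1.0, -1.0]), ub=np.array([1.0, 1.0]), x0=np.zeros(2))
```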
15. Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for Clustered IoT Driven Ubiquitous Computing System
Authors: Reda Salama, Mahmoud Ragab. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 9, pp. 2917–2932 (16 pages).
In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). UCS requires heterogeneity, management-level support, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS, and energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To achieve this, the BXAI-IDCUCS model first clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. A deep neural network (DNN) is then employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To verify the performance of the BXAI-IDCUCS model, a comprehensive experimental study was conducted and the outcomes were assessed under different aspects. The comparison study emphasizes the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
Keywords: blockchain, Internet of Things, ubiquitous computing, explainable artificial intelligence, clustering, deep learning
16. Heuristic file sorted assignment algorithm of parallel I/O on cluster computing system
Authors: Chen Zhigang, Zeng Biqing, Xiong Ce, Deng Xiaoheng, Zeng Zhiwen, Liu Anfeng. Journal of Central South University of Technology (EI), 2005, No. 5, pp. 572–577 (6 pages).
A new file assignment strategy for parallel I/O, named the heuristic file sorted assignment algorithm, is proposed for cluster computing systems. Based on load balancing, it assigns files with similar service times to the same disk. First, the files are sorted and stored in a set I in descending order of their service times; then one disk of a cluster node is selected at random when files are to be assigned; finally, consecutive files are taken in order from the set I and placed on that disk until the disk reaches its load maximum. The experimental results show that the new strategy improves performance by 20.2% when the system load is light and by 31.6% when the load is heavy. Moreover, the higher the data access rate, the more evident the performance improvement obtained by the heuristic file sorted assignment algorithm.
Keywords: cluster computing, parallel I/O, file sorted assignment, variance of service time
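Following the description in the abstract, a minimal sketch of the heuristic: sort files by expected service time in descending order, pick a disk at random among those that can still accept the next file, and keep assigning consecutive files to it until the disk reaches its load limit. The load limit, data structures and names are illustrative assumptions.

```python
import random

def heuristic_file_sorted_assignment(files, n_disks, disk_capacity, seed=0):
    """files: list of (file_id, service_time). Returns {disk_id: [file_id, ...]}."""
    random.seed(seed)
    ordered = sorted(files, key=lambda f: f[1], reverse=True)   # set I, descending
    assignment = {d: [] for d in range(n_disks)}
    load = {d: 0.0 for d in range(n_disks)}
    i = 0
    while i < len(ordered):
        fid, svc = ordered[i]
        candidates = [d for d in range(n_disks) if load[d] + svc <= disk_capacity]
        if not candidates:
            raise RuntimeError("no disk can accommodate file %s" % fid)
        disk = random.choice(candidates)            # pick a disk of a cluster node at random
        # take consecutive files (with similar service times) until the disk is full
        while i < len(ordered) and load[disk] + ordered[i][1] <= disk_capacity:
            assignment[disk].append(ordered[i][0])
            load[disk] += ordered[i][1]
            i += 1
    return assignment

# Toy usage: 8 files with estimated service times, spread across 3 disks.
files = [("f%d" % k, t) for k, t in enumerate([9, 8, 7, 5, 4, 3, 2, 1])]
print(heuristic_file_sorted_assignment(files, n_disks=3, disk_capacity=14))
```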
17. Extended Balanced Scheduler with Clustering and Replication for Data Intensive Scientific Workflow Applications in Cloud Computing
Authors: Satwinder Kaur, Mehak Aggarwal. Journal of Electronic Research and Application, 2018, No. 3, pp. 8–15 (8 pages).
Cloud computing is an advanced computing model through which numerous applications, data and IT services are provided over the Internet, and task scheduling plays a crucial role in such systems. The task scheduling problem can be viewed as finding an optimal mapping of the subtasks of different tasks onto the available set of resources so that the desired goals are achieved. As the number of cloud users grows, tasks must be scheduled efficiently, and the cloud's performance depends on the task scheduling algorithms used. Numerous algorithms have been proposed in the past to solve the task scheduling problem for heterogeneous networks of computers, and existing work offers energy- and deadline-aware scheduling methods for data-intensive applications. A scientific workflow combines fine-grained and coarse-grained tasks, and every task scheduled to a VM incurs system overhead; when many fine-grained tasks execute in a scientific workflow, the scheduling overhead grows accordingly. To overcome this overhead, multiple small tasks are combined into larger tasks, which decreases the scheduling overhead and improves the execution time of the workflow. Horizontal clustering is used to cluster the fine-grained tasks, and a replication technique is further combined with it. The proposed scheduling algorithm improves performance metrics such as execution time and cost. This research can be further extended with improved clustering techniques and replication methods.
Keywords: scientific workflow, cloud computing, replication, clustering, scheduling
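Horizontal clustering, as used here, merges fine-grained tasks that sit at the same workflow level into larger jobs so that fewer scheduling calls are needed. A minimal sketch of that grouping step is shown below, assuming each task carries a level (depth) attribute and a target cluster size; all names are illustrative assumptions.

```python
from collections import defaultdict

def horizontal_clustering(tasks, cluster_size):
    """tasks: list of (task_id, level). Merge tasks on the same workflow level
    into clustered jobs of at most `cluster_size` tasks to cut scheduling overhead."""
    by_level = defaultdict(list)
    for task_id, level in tasks:
        by_level[level].append(task_id)
    clustered_jobs = []
    for level in sorted(by_level):
        ids = by_level[level]
        for start in range(0, len(ids), cluster_size):
            clustered_jobs.append({"level": level, "tasks": ids[start:start + cluster_size]})
    return clustered_jobs

# Toy usage: six fine-grained tasks on level 1 become two jobs of three tasks each.
tasks = [("t1", 1), ("t2", 1), ("t3", 1), ("t4", 1), ("t5", 1), ("t6", 1), ("t7", 2)]
print(horizontal_clustering(tasks, cluster_size=3))
```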
18. Real-time prediction of ship motions based on the reservoir computing model
Authors: Yu Yang, Tao Peng, Shijun Liao, Jing Li. Journal of Ocean Engineering and Science, 2025, No. 3, pp. 379–395 (17 pages).
Real-time prediction of ship motions is crucial for ensuring the safety of offshore activities. In this study, we investigate the performance of the reservoir computing (RC) model in predicting the motions of a ship sailing in irregular waves, comparing it with long short-term memory (LSTM), bidirectional LSTM (BiLSTM), and gated recurrent unit (GRU) networks. Model tests are carried out in a towing tank to generate the datasets for training and testing the machine learning models. First, we explore the performance of machine learning models trained solely on motion data. It is found that the RC model outperforms the LSTM, BiLSTM, and GRU networks in both accuracy and efficiency for predicting ship motions. We then investigate the performance of the RC model trained on historical motion and wave elevation data. It is shown that, compared with the RC model trained solely on motion data, the RC model trained on both motion and wave elevation data can significantly improve motion prediction accuracy. This study validates the effectiveness and efficiency of the RC model in ship motion prediction during sailing and highlights the utility of wave elevation data in enhancing the RC model's prediction accuracy.
Keywords: ship motion, real-time prediction, machine learning, reservoir computing model
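The specific RC configuration used in the paper is not given in the abstract; the sketch below shows a minimal echo state network, the usual form of a reservoir computing model: a fixed random reservoir driven by past motion (and, optionally, wave elevation) samples, with only a ridge-regression readout trained. Reservoir size, spectral radius, regularization and the toy signal are illustrative assumptions.

```python
import numpy as np

class EchoStateNetwork:
    def __init__(self, n_in, n_res=200, spectral_radius=0.9, ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.w = w * (spectral_radius / max(abs(np.linalg.eigvals(w))))  # echo state property
        self.ridge = ridge
        self.w_out = None

    def _run(self, inputs):
        states = np.zeros((len(inputs), self.w.shape[0]))
        x = np.zeros(self.w.shape[0])
        for t, u in enumerate(inputs):
            x = np.tanh(self.w_in @ u + self.w @ x)      # reservoir update
            states[t] = x
        return states

    def fit(self, inputs, targets):
        S = self._run(inputs)
        reg = self.ridge * np.eye(S.shape[1])
        self.w_out = np.linalg.solve(S.T @ S + reg, S.T @ targets)  # ridge readout

    def predict(self, inputs):
        return self._run(inputs) @ self.w_out

# Toy usage: predict the next roll-angle sample from the current (motion, wave) pair.
t = np.linspace(0, 60, 600)
motion = np.sin(0.8 * t) + 0.1 * np.random.randn(len(t))
wave = np.sin(0.8 * t + 0.5)
inputs = np.stack([motion[:-1], wave[:-1]], axis=1)
esn = EchoStateNetwork(n_in=2)
esn.fit(inputs, motion[1:])
pred = esn.predict(inputs)
```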
19. Explicit ARL Computational for a Modified EWMA Control Chart in Autocorrelated Statistical Process Control Models
Authors: Yadpirun Supharakonsakun, Yupaporn Areepong, Korakoch Silpakob. Computer Modeling in Engineering & Sciences, 2025, No. 10, pp. 699–720 (22 pages).
This study presents an innovative development of the exponentially weighted moving average (EWMA) control chart, explicitly adapted for the examination of time series data characterized by seasonal autoregressive moving average behavior, SARMA(1,1)_L, under exponential white noise. Unlike previous works that rely on simplified models such as AR(1) or assume independence, this research derives, for the first time, an exact two-sided Average Run Length (ARL) formula for the Modified EWMA chart under SARMA(1,1)_L conditions, using a mathematically rigorous Fredholm integral approach. The derived formulas are validated against numerical integral equation (NIE) solutions, showing strong agreement and a significantly reduced computational burden. Additionally, a performance comparison index (PCI) is introduced to assess the chart's detection capability. The results demonstrate that the proposed method exhibits superior sensitivity to mean shifts in autocorrelated environments, outperforming existing approaches. The findings offer a new, efficient framework for real-time quality control in complex seasonal processes, with potential applications in environmental monitoring and intelligent manufacturing systems.
Keywords: statistical process control, average run length, modified EWMA control chart, autocorrelated data, SARMA process, computational modeling, real-time monitoring
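The explicit ARL formula for the SARMA(1,1)_L case cannot be reproduced from the abstract alone; purely as a baseline for what the ARL measures, the sketch below estimates the average run length of an ordinary EWMA chart by Monte Carlo simulation on i.i.d. normal observations. The modified chart statistic and the seasonal ARMA structure would replace the statistic and the data generator; all parameters are illustrative assumptions.

```python
import numpy as np

def ewma_run_length(lmbda, L, mu0=0.0, sigma=1.0, shift=0.0, max_n=100000, rng=None):
    """Run one EWMA chart Z_t = lambda*X_t + (1-lambda)*Z_{t-1} until it leaves
    the control limits mu0 +/- L*sigma*sqrt(lambda/(2-lambda)); return the run length."""
    rng = rng or np.random.default_rng()
    limit = L * sigma * np.sqrt(lmbda / (2.0 - lmbda))     # asymptotic control limits
    z = mu0
    for t in range(1, max_n + 1):
        x = rng.normal(mu0 + shift, sigma)
        z = lmbda * x + (1.0 - lmbda) * z
        if abs(z - mu0) > limit:
            return t
    return max_n

def average_run_length(n_reps=2000, **kwargs):
    rng = np.random.default_rng(0)
    return np.mean([ewma_run_length(rng=rng, **kwargs) for _ in range(n_reps)])

# In-control ARL (shift=0) should be large; a shifted process signals much sooner.
print(average_run_length(lmbda=0.1, L=2.7, shift=0.0))
print(average_run_length(lmbda=0.1, L=2.7, shift=1.0))
```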
20. Technique Development and Application: Construction of a Beowulf Cluster for Parallel Computing
Authors: FENG Kun, DONG Jiaqi, ZHANG Jinhua. Southwestern Institute of Physics Annual Report, 2004, No. 1, pp. 138–141 (4 pages).
Large-scale computations are often performed in science and engineering areas such as numerical weather forecasting, astrophysics, energy resource exploration, nuclear weapon design, and plasma fusion research. Many applications in these areas need supercomputing power. The traditional mode of sequential processing cannot meet the demands of such computations; thus, parallel processing (PP) is now the main approach to high-performance computing (HPC).
Keywords: parallel computing, Beowulf cluster, MPICH