Journal Articles
360,262 articles found
1. A Hierarchical-Based Sequential Caching Scheme in Named Data Networking
Authors: Zhang Junmin, Jin Jihuan, Hou Rui, Dong Mianxiong, Kaoru Ota, Zeng Deze. China Communications, 2025, Issue 5, pp. 48-60 (13 pages).
Named Data Networking (NDN) is an idealized deployment of information-centric networking (ICN) that has attracted attention from scientists and scholars worldwide. A distributed in-network caching scheme can efficiently realize load balancing. However, such a ubiquitous caching approach may cause problems including duplicate caching and low data diversity, thus reducing the caching efficiency of NDN routers. To mitigate these caching problems and improve NDN caching efficiency, this paper proposes a hierarchical-based sequential caching (HSC) scheme. In this scheme, the NDN routers on the data transmission path are divided into levels, and data with different request frequencies are cached at distinct router levels. The aim is to cache data with high request frequencies in the router closest to the content requester, so as to increase the response probability of nearby data, improve the caching efficiency of named data networks, shorten the response time, and reduce cache redundancy. Simulation results show that this scheme can effectively improve the cache hit rate (CHR) and reduce the average request delay (ARD) and average route hops (ARH).
Keywords: hierarchical router; named data networking; sequential caching
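The level-based placement idea in this abstract can be pictured with a small sketch. The function below (a hypothetical `assign_cache_level`, with thresholds chosen arbitrarily) maps a content item's observed request frequency to a router level on the delivery path, with level 0 closest to the requester; it illustrates the general idea only and is not the paper's HSC algorithm.

```python
# Minimal sketch of the hierarchy idea from the HSC abstract: routers on the
# delivery path are split into levels, and hotter content is cached at the level
# nearest the requester. Thresholds and names are illustrative assumptions.

def assign_cache_level(request_freq: float, level_thresholds: list[float]) -> int:
    """Return the router level (0 = closest to the requester) that should cache
    a content item with the given request frequency.

    level_thresholds must be sorted in descending order; a frequency at or above
    level_thresholds[i] maps to level i.
    """
    for level, threshold in enumerate(level_thresholds):
        if request_freq >= threshold:
            return level
    return len(level_thresholds)  # coldest content goes to the deepest level


if __name__ == "__main__":
    thresholds = [100.0, 10.0, 1.0]          # e.g. edge, aggregation, core routers
    for freq in (250.0, 42.0, 3.0, 0.2):
        print(f"freq={freq:>6}: cache at level {assign_cache_level(freq, thresholds)}")
```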
2. Query Acceleration of Graph Databases by ID Caching Technology (Cited by 1)
Authors: Wei Jiang, Hai-Bo Hu, Liu-Gen Xu. Journal of Electronic Science and Technology (CAS, CSCD), 2019, Issue 1, pp. 41-50 (10 pages).
In this paper, we approach the design of ID caching technology (IDCT) for graph databases, with the purpose of accelerating queries on graph database data and avoiding redundant graph database query operations, which consume significant computing resources. Traditional graph database caching technology (GDCT) needs a large memory to store data and suffers from serious data-consistency problems and low cache utilization. To address these issues, we propose a new technology that focuses on an ID allocation mechanism and high-speed ID queries on graph databases. Specifically, the IDs of query results are cached in memory, and data consistency is achieved through real-time synchronization and cache memory adaptation. In addition, we distinguish complex queries from simple queries to satisfy all query requirements and design a cache replacement mechanism based on query action time, query frequency, and memory capacity, further improving performance. Extensive experiments show the superiority of our techniques compared with the traditional query approach of graph databases.
Keywords: cache; graph database; query efficiency
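As a rough illustration of the ID-caching idea described above, the sketch below caches only the result IDs of a query and evicts entries by a combined recency-and-frequency score against a fixed capacity. The class name, score weights, and data layout are assumptions for illustration, not the paper's IDCT implementation.

```python
# Sketch of an ID cache: only result-set IDs are stored, and eviction weighs
# recency and hit count against a fixed capacity. Names/weights are assumptions.
import time


class IDCache:
    def __init__(self, capacity: int):
        self.capacity = capacity   # maximum number of cached query entries
        self.entries = {}          # query string -> (result_ids, last_used, hits)

    def get(self, query: str):
        entry = self.entries.get(query)
        if entry is None:
            return None            # miss: caller runs the real graph query
        ids, _, hits = entry
        self.entries[query] = (ids, time.time(), hits + 1)
        return ids

    def put(self, query: str, result_ids: list):
        if query not in self.entries and len(self.entries) >= self.capacity:
            # Evict the entry with the lowest score; each past hit counts as
            # 10 extra seconds of recency.
            victim = min(self.entries,
                         key=lambda q: self.entries[q][1] + 10.0 * self.entries[q][2])
            del self.entries[victim]
        self.entries[query] = (result_ids, time.time(), 1)
```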
3. Proposed Caching Scheme for Optimizing Trade-off between Freshness and Energy Consumption in Name Data Networking Based IoT (Cited by 1)
Authors: Rahul Shrimali, Hemal Shah, Riya Chauhan. Advances in Internet of Things, 2017, Issue 2, pp. 11-24 (14 pages).
Over the last few years, the Internet of Things (IoT) has become an omnipresent term. The IoT expands the existing common concepts, anytime and anyplace, to connectivity for anything. The proliferation of IoT offers opportunities but may also bear risks. A hitherto neglected aspect is the possible increase in power consumption, as smart devices in IoT applications are expected to be reachable by other devices at all times. This implies that a device consumes electrical energy even when it is not in use for its primary function. Many research communities have started addressing the storage capability (cache memory) of smart devices using Named Data Networking (NDN) to achieve a more energy-efficient communication model. In NDN, memory or buffer overflow is a common challenge, especially when the internal memory of a node exceeds its limit: data with the highest degree of freshness may not be accommodated, and the entire scenario behaves like a traditional network. In such cases, data caching is not performed by intermediate nodes to guarantee the highest degree of freshness. Given the periodic updates sent by data producers, data consumers demand up-to-date information at the lowest possible energy cost. Consequently, there is a challenge in maintaining the trade-off between freshness and energy consumption during publisher-subscriber interaction. In our work, we propose an architecture that overcomes the cache-strategy issue with a Smart Caching Algorithm for improved memory management and data freshness. The smart caching strategy updates the data at precise intervals while taking garbage data into consideration. It is also observed from experiments that data redundancy can be reduced by ignoring/dropping data packets carrying information that is not of interest to other participating nodes in the network, ultimately optimizing the trade-off between freshness and the energy required.
Keywords: Internet of Things (IoT); Named Data Networking; Smart Caching Table (SCT); Pending Interest Table; Forwarding Information Base; Content Store; Content-Centric Networking; Information-Centric Networking; Data & Interest Packets
4. FlyCache: Recommendation-driven edge caching architecture for full life cycle of video streaming
Authors: Shaohua Cao, Quancheng Zheng, Zijun Zhan, Yansheng Yang, Huaqi Lv, Danyang Zheng, Weishan Zhang. Digital Communications and Networks, 2025, Issue 4, pp. 961-973 (13 pages).
With the rapid development of 5G technology, the proportion of video traffic on the Internet is increasing, bringing pressure on the network infrastructure. Edge computing technology provides a feasible solution for optimizing video content distribution. However, the limited cache capacity of edge nodes and dynamic user requests make edge caching more complex. Therefore, we propose FlyCache, a recommendation-driven edge caching network architecture for the full life cycle of video streaming, designed to improve users' Quality of Experience (QoE) and reduce backhaul traffic consumption. FlyCache implements intelligent caching management across three key stages: before-playback, during-playback, and after-playback. Specifically, we introduce a cache placement policy for the before-playback stage, a dynamic prefetching and cache admission policy for the during-playback stage, and a progressive cache eviction policy for the after-playback stage. To validate the effectiveness of FlyCache, we developed a user behavior-driven edge caching simulation framework incorporating recommendation mechanisms. Experiments conducted on the MovieLens and synthetic datasets demonstrate that FlyCache outperforms other caching strategies in terms of byte hit rate, backhaul traffic, and delayed startup rate.
Keywords: edge caching; cache architecture; cache placement; cache admission; cache eviction
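The three-stage life-cycle management described for FlyCache can be pictured as three hooks around playback. The sketch below is a toy interpretation under assumed policies (pre-caching the first chunks of recommended videos, prefetching one chunk ahead, and halving priorities after playback); it is not the paper's architecture.

```python
# Toy sketch of a full-life-cycle cache: placement before playback, prefetch and
# admission during playback, progressive eviction after playback. All policies
# and names here are illustrative assumptions.

class LifecycleCache:
    def __init__(self, capacity_chunks: int):
        self.capacity = capacity_chunks
        self.store = {}                      # (video_id, chunk_idx) -> priority

    def before_playback(self, video_id: str, recommendation_score: float):
        # Placement: pre-cache the first chunks of videos the recommender ranks highly.
        for chunk in range(2):
            self._admit((video_id, chunk), priority=recommendation_score)

    def during_playback(self, video_id: str, current_chunk: int):
        # Prefetch/admission: keep one chunk ahead of the playhead cached.
        self._admit((video_id, current_chunk + 1), priority=1.0)

    def after_playback(self, video_id: str):
        # Progressive eviction: decay the priority of the finished video's chunks.
        for key in list(self.store):
            if key[0] == video_id:
                self.store[key] *= 0.5

    def _admit(self, key, priority: float):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self.store.get)   # drop lowest priority
            del self.store[victim]
        self.store[key] = max(priority, self.store.get(key, 0.0))
```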
5. Resource Allocation of UAV-Assisted Mobile Edge Computing Systems with Caching
Authors: Pu Dan, Feng Wenjiang, Zhang Juntao. China Communications, 2025, Issue 10, pp. 269-279 (11 pages).
In this paper, an unmanned aerial vehicle (UAV) serves as an aerial base station (ABS) and mobile edge computing (MEC) platform for wireless communication systems. When Internet of Things devices (IoTDs) cannot cope with computation-intensive and/or time-sensitive tasks, part of each task is offloaded to the UAV, which processes it with its own computing and caching resources. Thus, the burden on the IoTDs is relieved while the quality of service (QoS) requirements are satisfied. However, owing to the limited resources of the UAV, the cost of the whole system, defined as the weighted sum of energy consumption and time delay with caching, should be further optimized, while the objective function and the constraints are non-convex. Therefore, we first jointly optimize the communication resources B, computing resources F, and offloading rates X with an alternating-iteration and convex optimization method, and then determine the caching decision Y with a branch-and-bound (BB) algorithm. Numerical results show that UAV-assisted partial task offloading with content caching is superior to local computing and to a full-offloading mechanism without caching, and that the cost of the whole system is further reduced with our proposed scheme.
Keywords: caching; MEC; resource allocation; UAV
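The abstract defines the system cost as a weighted sum of energy consumption and time delay under the caching decision. A plausible form of that objective, written here with assumed notation (weight $\omega$, per-device energy $E_i$ and delay $T_i$), is:

```latex
% Illustrative weighted-sum cost; the symbols and exact weighting are assumptions.
% Only B, F, X, Y (bandwidth, computing, offloading-rate, caching decisions)
% come from the abstract.
\[
  \min_{\mathbf{B},\,\mathbf{F},\,\mathbf{X},\,\mathbf{Y}}\;
  C = \sum_{i=1}^{N}\Big(\omega\,E_i(\mathbf{B},\mathbf{F},\mathbf{X},\mathbf{Y})
      + (1-\omega)\,T_i(\mathbf{B},\mathbf{F},\mathbf{X},\mathbf{Y})\Big),
  \qquad 0 \le \omega \le 1 ,
\]
```

Here $E_i$ and $T_i$ stand for the energy consumption and delay of IoT device $i$; the alternating-iteration step mentioned in the abstract would then optimize B, F, X for a fixed Y, with branch-and-bound choosing Y.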
6. Utility-Driven Edge Caching Optimization with Deep Reinforcement Learning under Uncertain Content Popularity
Authors: Mingoo Kwon, Kyeongmin Kim, Minseok Song. Computers, Materials & Continua, 2025, Issue 10, pp. 519-537 (19 pages).
Efficient edge caching is essential for maximizing utility in video streaming systems, especially under constraints such as limited storage capacity and dynamically fluctuating content popularity. Utility, defined as the benefit obtained per unit of cache bandwidth usage, degrades when static or greedy caching strategies fail to adapt to changing demand patterns. To address this, we propose a deep reinforcement learning (DRL)-based caching framework built upon the proximal policy optimization (PPO) algorithm. Our approach formulates edge caching as a sequential decision-making problem and introduces a reward model that balances cache hit performance and utility by prioritizing high-demand, high-quality content while penalizing degraded-quality delivery. We construct a realistic synthetic dataset that captures both temporal variations and shifting content popularity to validate our model. Experimental results demonstrate that our proposed method improves utility by up to 135.9% and achieves an average improvement of 22.6% compared to traditional greedy algorithms and long short-term memory (LSTM)-based prediction models. Moreover, our method consistently performs well across a variety of utility functions, workload distributions, and storage limitations, underscoring its adaptability and robustness in dynamic video caching environments.
Keywords: edge caching; video-on-demand; reinforcement learning; utility optimization
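The reward described in the abstract, balancing cache-hit performance against utility (benefit per unit of cache bandwidth) while penalizing degraded quality, might look roughly like the hedged sketch below; the weights, names, and normalizations are assumptions, not the paper's reward model.

```python
# Hedged sketch of a utility-aware caching reward: reward hits on popular,
# high-quality content per unit of cache bandwidth used, and penalize serving a
# lower quality than requested. All weights and names are assumptions.

def caching_reward(hit: bool, demand: float, served_quality: float,
                   requested_quality: float, bandwidth_used: float,
                   alpha: float = 1.0, beta: float = 0.5) -> float:
    """demand: normalized content popularity in [0, 1];
    served_quality / requested_quality: normalized bitrate levels;
    bandwidth_used: cache bandwidth consumed by this delivery (> 0)."""
    hit_benefit = alpha * demand * served_quality if hit else 0.0
    utility = hit_benefit / max(bandwidth_used, 1e-6)   # benefit per unit bandwidth
    degradation_penalty = beta * max(0.0, requested_quality - served_quality)
    return utility - degradation_penalty


if __name__ == "__main__":
    print(caching_reward(hit=True, demand=0.8, served_quality=0.75,
                         requested_quality=1.0, bandwidth_used=0.75))
```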
7. An Efficient Content Caching Strategy for Fog-Enabled Road Side Units in Vehicular Networks
Authors: Faareh Ahmed, Babar Mansoor, Muhammad Awais Javed, Abdul Khader Jilani Saudagar. Computer Modeling in Engineering & Sciences, 2025, Issue 9, pp. 3783-3804 (22 pages).
Vehicular networks enable seamless connectivity for exchanging emergency and infotainment content. However, retrieving infotainment data from remote servers often introduces high delays, degrading the Quality of Service (QoS). To overcome this, caching frequently requested content at fog-enabled Road Side Units (RSUs) reduces communication latency. Yet the limited caching capacity of RSUs makes it impractical to store all contents of varying sizes and popularity. This research proposes an efficient content caching algorithm that adapts to dynamic vehicular demands on highways to maximize request satisfaction. The scheme is evaluated against Intelligent Content Caching (ICC) and Random Caching (RC). The results show that our proposed scheme serves more content-requesting vehicles than ICC and RC, delivering 33% and 41% more downloaded data in 28% and 35% less time than the ICC and RC schemes, respectively.
Keywords: vehicular networks; fog computing; content caching; infotainment services
8. Distributed service caching with deep reinforcement learning for sustainable edge computing in large-scale AI
Authors: Wei Liu, Muhammad Bilal, Yuzhe Shi, Xiaolong Xu. Digital Communications and Networks, 2025, Issue 5, pp. 1447-1456 (10 pages).
Increasing reliance on large-scale AI models has led to rising demand for intelligent services. The centralized cloud computing approach has limitations in terms of data transfer efficiency and response time, and as a result many service providers have begun to deploy edge servers to cache intelligent services in order to reduce transmission delay and communication energy consumption. However, finding the optimal service caching strategy remains a significant challenge due to the stochastic nature of service requests and the bulky nature of intelligent services. To deal with this, we propose a distributed service caching scheme integrating deep reinforcement learning (DRL) with mobility prediction, which we refer to as DSDM. Specifically, we employ the D3QN (Deep Double Dueling Q-Network) framework to integrate Long Short-Term Memory (LSTM)-predicted mobile device locations into the service caching replacement algorithm and adopt the distributed multi-agent approach for learning and training. Experimental results demonstrate that DSDM achieves significant performance improvements in reducing communication energy consumption compared to traditional methods across various scenarios.
Keywords: intelligent service; edge caching; deep reinforcement learning; mobility prediction
9. A knowledge graph-based reinforcement learning approach for cooperative caching in MEC-enabled heterogeneous networks
Authors: Dan Wang, Yalu Bai, Bin Song. Digital Communications and Networks, 2025, Issue 4, pp. 1236-1244 (9 pages).
Existing wireless networks are flooded with video data transmissions, and the demand for high-speed and low-latency video services continues to surge. This has brought challenges to networks in the form of congestion, as well as the need for more resources and more dedicated caching schemes. Recently, Multi-access Edge Computing (MEC)-enabled heterogeneous networks, which leverage edge caches for proximity delivery, have emerged as a promising solution to these problems. Designing an effective edge caching scheme is critical to its success, however, in the face of limited resources. We propose a novel Knowledge Graph (KG)-based Dueling Deep Q-Network (KG-DDQN) for cooperative caching in MEC-enabled heterogeneous networks. The KG-DDQN scheme leverages a KG to uncover video relations, providing valuable insights into user preferences for the caching scheme. Specifically, the KG guides the selection of related videos as caching candidates (i.e., actions in the DDQN), thus providing a rich reference for implementing a personalized caching scheme while also improving the decision efficiency of the DDQN. Extensive simulation results validate the convergence of the KG-DDQN, which also outperforms baselines in terms of cache hit rate and service delay.
Keywords: multi-access edge computing; cooperative caching; resource allocation; knowledge graph; reinforcement learning
10. Mobility-Aware Edge Caching with Transformer-DQN in D2D-Enabled Heterogeneous Networks
Authors: Yiming Guo, Hongyu Ma. Computers, Materials & Continua, 2025, Issue 11, pp. 3485-3505 (21 pages).
In dynamic 5G network environments, user mobility and heterogeneous network topologies pose dual challenges to improving the performance of mobile edge caching. Existing studies often overlook the dynamic nature of user locations and the potential of device-to-device (D2D) cooperative caching, limiting the reduction of transmission latency. To address this issue, this paper proposes a joint optimization scheme for edge caching that integrates user mobility prediction with deep reinforcement learning. First, a Transformer-based geolocation prediction model is designed, leveraging multi-head attention mechanisms to capture correlations in historical user trajectories for accurate future location prediction. Then, within a three-tier heterogeneous network, we formulate a latency minimization problem under a D2D cooperative caching architecture and develop a mobility-aware Deep Q-Network (DQN) caching strategy. This strategy takes predicted location information as state input and dynamically adjusts the content distribution across small base stations (SBSs) and mobile users (MUs) to reduce end-to-end delay in multi-hop content retrieval. Simulation results show that the proposed DQN-based method outperforms other baseline strategies across various metrics, achieving a 17.2% reduction in transmission delay compared to DQN methods without mobility integration, thus validating the effectiveness of the joint optimization of location prediction and caching decisions.
Keywords: mobile edge caching; D2D; heterogeneous networks; deep reinforcement learning; transformer model; transmission delay optimization
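One way to picture the "predicted location as state input" idea from this abstract is the sketch below, which concatenates distances from the predicted user position to each SBS with the current cache map and content popularity. The feature choices and array shapes are assumptions for illustration, not the paper's state design.

```python
# Sketch of folding a predicted user location into a DQN caching agent's state
# vector. Feature choices and shapes are illustrative assumptions.
import numpy as np


def build_state(predicted_xy: np.ndarray,       # (2,) predicted user coordinates
                sbs_positions: np.ndarray,      # (num_sbs, 2) small-base-station coords
                cache_map: np.ndarray,          # (num_sbs, num_contents) 0/1 occupancy
                content_popularity: np.ndarray  # (num_contents,) request frequencies
                ) -> np.ndarray:
    """Concatenate distance-to-SBS features, the cache map, and popularity into
    one flat state vector."""
    distances = np.linalg.norm(sbs_positions - predicted_xy, axis=1)
    return np.concatenate([distances, cache_map.ravel(), content_popularity])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state = build_state(rng.uniform(0, 100, 2),
                        rng.uniform(0, 100, (3, 2)),
                        rng.integers(0, 2, (3, 5)).astype(float),
                        rng.random(5))
    print(state.shape)   # (23,) = 3 distances + 15 cache flags + 5 popularity values
```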
11. A Composite Loss-Based Autoencoder for Accurate and Scalable Missing Data Imputation
Authors: Thierry Mugenzi, Cahit Perkgoz. Computers, Materials & Continua, 2026, Issue 1, pp. 1985-2005 (21 pages).
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area-under-the-curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
Keywords: missing data imputation; autoencoder; deep learning; missing mechanisms
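A minimal numpy sketch of a composite loss in the spirit of the three terms listed in the abstract (masked MSE on missing entries, a noise-robustness term, and a variance penalty) is given below; the weights `lam_noise` and `lam_var` and the exact form of each term are assumptions, not the paper's loss.

```python
# Composite imputation loss sketch: masked MSE on hidden entries, a stability
# term under input noise, and a variance penalty. Weights/forms are assumptions.
import numpy as np


def composite_loss(x_true: np.ndarray, x_recon: np.ndarray, miss_mask: np.ndarray,
                   x_recon_noisy: np.ndarray,
                   lam_noise: float = 0.1, lam_var: float = 0.01) -> float:
    """miss_mask is 1 where a value was hidden from the autoencoder and must be
    imputed; x_recon_noisy is the reconstruction from a noise-corrupted input."""
    masked_mse = np.sum(miss_mask * (x_true - x_recon) ** 2) / max(miss_mask.sum(), 1)
    noise_term = np.mean((x_recon - x_recon_noisy) ** 2)          # noise robustness
    var_penalty = np.mean((x_recon.var(axis=0) - x_true.var(axis=0)) ** 2)
    return float(masked_mse + lam_noise * noise_term + lam_var * var_penalty)
```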
12. Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization
Authors: Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan. Computers, Materials & Continua, 2026, Issue 1, pp. 1804-1818 (15 pages).
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class intrusion detection using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized, model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Keywords: intrusion detection; XAI; machine learning; ensemble method; cybersecurity; imbalanced data
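The SMOTE-plus-stacking stage described in the abstract could be wired up roughly as below, using scikit-learn, imbalanced-learn, and xgboost as stand-in libraries; the meta-learner, hyperparameters, and the omitted HHO feature-selection step are assumptions, not the paper's configuration.

```python
# Rough sketch of the rebalancing + stacked-ensemble stage: SMOTE oversamples
# minority attack classes, then XGBoost, SVM, and Random Forest are stacked.
# Library choices, meta-learner, and hyperparameters are assumptions.
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier


def build_stacked_ids(X_train, y_train):
    # Rebalance minority attack classes with synthetic samples.
    X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

    # Stack the three base learners named in the abstract under a simple meta-learner.
    stack = StackingClassifier(
        estimators=[("xgb", XGBClassifier(eval_metric="mlogloss")),
                    ("svm", SVC(probability=True)),
                    ("rf", RandomForestClassifier(n_estimators=200))],
        final_estimator=LogisticRegression(max_iter=1000))
    return stack.fit(X_bal, y_bal)
```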
13. Enhanced Capacity Reversible Data Hiding Based on Pixel Value Ordering in Triple Stego Images
Authors: Kim Sao Nguyen, Ngoc Dung Bui. Computers, Materials & Continua, 2026, Issue 1, pp. 1571-1586 (16 pages).
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used in multi-stego images provides good image quality but often results in low embedding capability. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is also applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
Keywords: RDH; reversible data hiding; PVO; RDH based on three stego images
14. Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim. Computers, Materials & Continua, 2026, Issue 1, pp. 247-274 (28 pages).
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted-traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three Deep Learning (DL) models from various perspectives. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (Encrypted) dataset by 20.26%, showing a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
15. Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa S. Alaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab. Computers, Materials & Continua, 2026, Issue 1, pp. 2274-2301 (28 pages).
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES by combining text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, where training data portions increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R² of 85.49%, emphasizing an effective trade-off between performance and computational costs. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
16. Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. Computers, Materials & Continua, 2026, Issue 1, pp. 389-411 (23 pages).
Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge in the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization; transformer-based models; natural language processing; augmented data; project management tool; skill classification
17. A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson's Disease Detection with Small-Scale and Imbalanced Datasets
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. Computers, Materials & Continua, 2026, Issue 1, pp. 1410-1432 (23 pages).
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, this is challenging due to the nature of small-scale and imbalanced PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using a CNN and extend the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our consideration). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. For performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves the sensitivity and specificity by 4.05%-4.72% and 4.96%-5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves the sensitivity by 1.24%-57.4% and the specificity by 1.04%-163% and reduces biased detection towards the majority class. The ablation experiments confirm the effectiveness of the individual components. Two future research directions have also been suggested.
Keywords: convolutional neural network; data generation; deep support vector machine; feature extraction; generative artificial intelligence; imbalanced dataset; medical diagnosis; Parkinson's disease; small-scale dataset
18. Implementation of a Streaming-Media Caching Proxy Server for Local Area Networks (Cited by 2)
Authors: 谭劲, 余胜生, 周敬利. 《计算机科学》 (Computer Science), CSCD, PKU Core, 2003, Issue 10, pp. 141-143 (3 pages).
1. Overview: With the widespread use of streaming-media applications on the Internet, the load on the Internet is bound to change enormously. The packet-switched Internet was not designed for real-time, uninterrupted streaming-media transmission, so streaming-media systems on the Internet are subject to limitations in the following four respects.
Keywords: local area network; streaming media; caching; proxy server; network load
19. Web Caching and CDN Technologies and Their Comparative Analysis (Cited by 15)
Authors: 霍耀森, 盛大同. 《计算机应用研究》 (Application Research of Computers), CSCD, PKU Core, 2003, Issue 5, pp. 135-137 (3 pages).
This paper briefly introduces two technologies, one "pull" and one "push", that emerged to improve network service quality: Web Caching and CDN. Starting from the typical architectures of each, it compares and analyzes the performance and characteristics of the two technologies, concluding that CDN outperforms pure Web Caching in aspects such as reducing the load on origin servers. Finally, directions for further research are pointed out.
Keywords: Internet; computer networks; quality of service; network bandwidth; Web Caching; CDN
20. GCaching: A Grid Cooperative Caching System (Cited by 3)
Authors: 李文中, 顾铁成, 李春洪, 陆桑璐, 陈道蓄. 《计算机研究与发展》 (Journal of Computer Research and Development), EI, CSCD, PKU Core, 2004, Issue 12, pp. 2211-2217 (7 pages).
Cooperative caching lets multiple proxy cache servers work together, making full use of each server's cache space and improving the cache hit rate. Grid technology makes it convenient to share and integrate heterogeneous server resources. Combining grid and cooperation techniques, this paper proposes GCaching, a grid cooperative caching system that organizes geographically distributed proxy cache servers into a cache pool so that caching resources are fully utilized and the servers cooperate to provide better service to users. The GCaching system designs and implements a cache placement and replacement algorithm, GCPR, which uses a periodic cache-update strategy and adaptively adjusts the distribution of cached data according to user access patterns. Simulation experiments show ...
Keywords: grid; cooperative caching