Journal Articles
330 articles found
1. MWaOA: A Bio-Inspired Metaheuristic Algorithm for Resource Allocation in Internet of Things
Authors: Rekha Phadke, Abdul Lateef Haroon Phulara Shaik, Dayanidhi Mohapatra, Doaa Sami Khafaga, Eman Abdullah Aldakheel, N. Sathyanarayana. Computers, Materials & Continua, 2026, Issue 2, pp. 1285-1310 (26 pages)
Recently, Internet of Things (IoT) technology has been utilized in a wide range of services and applications, significantly transforming digital ecosystems through seamless interconnectivity between various smart devices. Furthermore, the IoT plays a key role in multiple domains, including industrial automation, smart homes, and intelligent transportation systems. However, the increasing number of connected devices presents significant challenges related to efficient resource allocation and system responsiveness. To address these issues, this research proposes a Modified Walrus Optimization Algorithm (MWaOA) for effective resource management in smart IoT systems. In the proposed MWaOA, a crowding process is incorporated to maintain diversity and avoid premature convergence, thereby enhancing the global search capability. During resource allocation, the MWaOA prevents early convergence, which aids in achieving a better balance between the exploration and exploitation phases during optimization. Empirical evaluations show that the MWaOA reduces energy consumption by approximately 4% to 34% and minimizes the response time by 6% to 33% across different service arrival rates. Compared to traditional optimization algorithms, MWaOA reduces energy consumption by 5% to 30% and minimizes the response time by 4% to 28% across different simulation epochs. The proposed MWaOA provides adaptive and robust resource allocation, thereby minimizing transmission cost while considering network constraints and real-time performance parameters.
Keywords: delay, gateway, Internet of Things, resource allocation, resource management, walrus optimization algorithm
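The crowding step described in the abstract, keeping the swarm diverse by re-initializing one of any pair of near-identical candidates, can be sketched as follows. This is a hypothetical illustration of the general technique; the function name, restart rule, and distance threshold are assumptions, not the authors' MWaOA code:

```python
import random

def crowding_restart(population, fitness, min_dist=0.1, bounds=(0.0, 1.0)):
    """Crowding step for a swarm optimizer (minimization): whenever two
    candidates are closer than min_dist, the worse one is re-initialized
    randomly so the swarm keeps exploring instead of converging early."""
    pop = [list(p) for p in population]
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            dist = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j])) ** 0.5
            if dist < min_dist:
                # higher fitness value = worse candidate under minimization
                worse = i if fitness(pop[i]) > fitness(pop[j]) else j
                pop[worse] = [random.uniform(*bounds) for _ in pop[worse]]
    return pop
```

A crowded pair costs two extra fitness evaluations, so in practice this check is usually run once per iteration rather than per move.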
2. Computerized Detection of Limbal Stem Cell Deficiency from Digital Cornea Images
Authors: Hanan A. Hosni Mahmoud, Doaa S. Khafga, Amal H. Alharbi. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 2, pp. 805-821 (17 pages)
Limbal Stem Cell Deficiency (LSCD) is an eye disease that can cause corneal opacity and vascularization. In its advanced stage it can lead to a degree of visual impairment. It involves a change in the semispherical shape of the cornea to a downward-drooping shape. LSCD is hard to diagnose at early stages. The color and texture of the cornea surface can provide significant information about a cornea affected by LSCD. Parameters such as shape and texture are crucial to differentiate a normal cornea from an LSCD cornea. Although several medical approaches exist, most require complicated procedures and medical devices. Therefore, in this paper, we pursued the development of an LSCD detection technique (LDT) utilizing image processing methods. Early diagnosis of LSCD is crucial for physicians to arrange effective treatment. In the proposed technique, we developed a method for LSCD detection utilizing frontal eye images. A dataset of 280 frontal and lateral eye images of LSCD and normal patients was used in this research. First, the cornea region of both frontal and lateral images is segmented, and the geometric features are extracted through the automated active contour model and the spline curve, while the texture features are extracted using a feature selection algorithm. The experimental results show that the combined geometric and texture features yield an accuracy of 95.95%, sensitivity of 97.91%, and specificity of 94.05% with a random forest classifier of n = 40. As a result, this research developed a Limbal Stem Cell Deficiency detection system utilizing feature fusion and image processing techniques for frontal and lateral digital eye images.
Keywords: feature extraction, corneal opacity, geometric features, computerized detection, image processing
3. Multi-Step Clustering of Smart Meters Time Series: Application to Demand Flexibility Characterization of SME Customers
Authors: Santiago Bañales, Raquel Dormido, Natividad Duro. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 869-907 (39 pages)
Customer segmentation according to load-shape profiles using smart meter data is an increasingly important application, vital to the planning and operation of energy systems and to enabling citizens' participation in the energy transition. This study proposes an innovative multi-step clustering procedure to segment customers based on load-shape patterns at the daily and intra-daily time horizons. Smart meter data is split between daily and hourly normalized time series to assess monthly, weekly, daily, and hourly seasonality patterns separately. The dimensionality reduction implicit in the splitting allows a direct approach to clustering raw daily energy time series data. The intraday clustering procedure sequentially identifies representative hourly day-unit profiles for each customer and the entire population. For the first time, a step-function approach is applied to reduce time series dimensionality. Customer attributes embedded in surveys are employed to build external clustering validation metrics using Cramer's V correlation factors and to identify statistically significant determinants of load shape in energy usage. In addition, a time series feature engineering approach is used to extract 16 relevant demand flexibility indicators that characterize customers and corresponding clusters along four different axes: available Energy (E), Temporal patterns (T), Consistency (C), and Variability (V). The methodology is implemented on a real-world electricity consumption dataset of 325 Small and Medium-sized Enterprise (SME) customers, identifying 4 daily and 6 hourly easy-to-interpret, well-defined clusters. The application of the methodology includes selecting key parameters via grid search and a thorough comparison of clustering distances and methods to ensure the robustness of the results. Further research can test the scalability of the methodology to larger datasets from various customer segments (households and large commercial) and locations with different weather and socioeconomic conditions.
Keywords: electric load clustering, load profiling, smart meters, machine learning, data mining, demand flexibility, demand response
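The step-function dimensionality reduction mentioned in the abstract can be illustrated with a minimal sketch: a 24-point normalized daily profile is collapsed into a few equal-width plateaus, each at the mean level of its block. The equal-width blocks and block-mean levels are assumptions for illustration; the paper's exact step placement may differ:

```python
import numpy as np

def step_profile(hourly, n_steps=4):
    """Collapse a 24-point daily profile into n_steps equal-width plateaus
    (each plateau at the mean of its block), then expand back so the
    reduced profile aligns with the original hourly time axis."""
    hourly = np.asarray(hourly, dtype=float)
    blocks = np.array_split(hourly, n_steps)
    return np.concatenate([np.full(len(b), b.mean()) for b in blocks])
```

Clustering the n_steps plateau levels instead of the raw 24 values shrinks each day from 24 dimensions to n_steps while preserving the coarse load shape.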
4. Internet of Things Software Engineering Model Validation Using Knowledge-Based Semantic Learning
Authors: Mahmood Alsaadi, Mohammed E. Seno, Mohammed I. Khalaf. Intelligent Automation & Soft Computing, 2025, Issue 1, pp. 29-52 (24 pages)
The agility of Internet of Things (IoT) software engineering is benchmarked based on its systematic insights for wide application-support infrastructure developments. Such developments are focused on reducing the interfacing complexity with heterogeneous devices through applications. To handle the interfacing complexity problem, this article introduces a Semantic Interfacing Obscuration Model (SIOM) for IoT software-engineered platforms. The interfacing obscuration between heterogeneous devices and application interfaces, from testing to real-time validations, is accounted for in this model. Based on the level of obscuration between the infrastructure hardware and the end-user software, modifications through device replacement, capacity amendments, or interface bug fixes are performed. These modifications are based on the level of semantic obscurations observed during the application service intervals. The obscuration level is determined using knowledge learning as a progression from hardware to software semantics. The results reported were computed using specific metrics obtained from the experimental evaluations: an 8.94% reduction in interfacing complexity and a 15.04% improvement in integration progression. The knowledge of obscurations maps the modifications appropriately to reinstate the agility testing of the hardware/software integrations. This modification-based semantics is verified using semantics error, modification time, and complexity.
Keywords: interfacing complexity, IoT, semantics assessment, software engineering
5. Harnessing Machine Learning for Superior Prediction of Uniaxial Compressive Strength in Reinforced Soilcrete
Authors: Ala'a R. Al-Shamasneh, Faten Khalid Karim, Arsalan Mahmoodzadeh. Computers, Materials & Continua, 2025, Issue 7, pp. 281-303 (23 pages)
Soilcrete is a composite material of soil and cement that is highly valued in the construction industry. Accurate measurement of its mechanical properties is essential, but laboratory testing methods are expensive, time-consuming, and prone to inaccuracies. Machine learning (ML) algorithms provide a more efficient alternative for this purpose, so after assessment with a statistical extraction method, ML algorithms including back-propagation neural network (BPNN), K-nearest neighbor (KNN), radial basis function (RBF), feed-forward neural networks (FFNN), and support vector regression (SVR) were proposed in this study for predicting the uniaxial compressive strength (UCS) of soilcrete. The developed models were optimized using gradient descent (GD) throughout the analysis (direct optimization for the neural networks and indirect optimization of the hyperparameters for the other models). After laboratory analysis, data pre-processing, and data-processing analysis, a database of 600 soilcrete specimens was gathered, covering two different soil types (clay and limestone) and metakaolin as a mineral additive. 80% of the database was used for the training set and 20% for testing, considering eight input parameters: metakaolin content, soil type, superplasticizer content, water-to-binder ratio, shrinkage, binder, density, and ultrasonic velocity. The analysis showed that most algorithms performed well in the prediction, with BPNN, KNN, and RBF having higher accuracy than the others (R² = 0.95, 0.95, 0.92, respectively). Based on this evaluation, all models show an acceptable accuracy rate in prediction (RMSE: BPNN = 0.11, FFNN = 0.24, KNN = 0.05, SVR = 0.06, RBF = 0.05; MAD: BPNN = 0.006, FFNN = 0.012, KNN = 0.008, SVR = 0.006, RBF = 0.009). The ML importance-ranking sensitivity analysis indicated that all input parameters influence the UCS of soilcrete, especially the water-to-binder ratio and density, which have the most impact.
Keywords: soilcrete, laboratory analysis, uniaxial compressive strength, machine learning, sensitivity analysis
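The error metrics reported above can be computed as follows. This is a generic sketch; in particular, MAD is taken here to mean the mean absolute deviation of the residuals, which is an assumption about the paper's definition:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error of predictions against ground truth."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def mad(y_true, y_pred):
    """Mean absolute deviation of the prediction residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

RMSE penalizes large residuals quadratically, which is why a model can rank differently on RMSE and MAD, as in the table above where FFNN's RMSE gap is larger than its MAD gap.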
6. A Novel Malware Detection Framework for Internet of Things Applications
Authors: Muhammad Adil, Mona M. Jamjoom, Zahid Ullah. Computers, Materials & Continua, 2025, Issue 9, pp. 4363-4380 (18 pages)
In today's digital world, the Internet of Things (IoT) plays an important role in both local and global economies due to its widespread adoption in different applications. This technology has the potential to offer several advantages over conventional technologies in the near future. However, the potential growth of this technology also attracts attention from hackers, which introduces new challenges for the research community, ranging from hardware and software security to user privacy and authentication. Therefore, we focus on a particular security concern associated with malware detection. The literature presents many countermeasures, but inconsistent results on identical datasets and algorithms raise concerns about model biases, training quality, and complexity. This highlights the need for an adaptive, real-time learning framework that can effectively mitigate malware threats in IoT applications. To address these challenges, (i) we propose an intelligent framework based on Two-step Deep Reinforcement Learning (TwStDRL) that is capable of learning and adapting in real time to counter malware threats in IoT applications. This framework uses exploration and exploitation during both the training and testing phases by storing results in a replay memory. The stored knowledge allows the model to effectively navigate the environment and maximize cumulative rewards. (ii) To demonstrate the superiority of the TwStDRL framework, we implement and evaluate several machine learning algorithms for comparative analysis, including Support Vector Machines (SVM), Multi-Layer Perceptron, Random Forests, and k-means Clustering. The selection of these algorithms is driven by the inconsistent results reported in the literature, which create doubt about their robustness and reliability in real-world IoT deployments. (iii) Finally, we provide a comprehensive evaluation to justify why the TwStDRL framework outperforms them in mitigating security threats. During analysis, we noted that our proposed TwStDRL scheme achieves an average performance of 99.45% across accuracy, precision, recall, and F1-score, an absolute improvement of roughly 3% over existing malware-detection models.
Keywords: IoT applications, security, malware detection, advanced machine learning algorithms, data privacy challenges
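The replay memory the abstract mentions is a standard component of deep reinforcement learning: transitions are stored in a fixed-size buffer and sampled uniformly to decorrelate training batches. A minimal generic sketch (the TwStDRL specifics are not given in the abstract, so this is the textbook DQN-style buffer, not the authors' code):

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-size experience buffer with uniform random sampling."""

    def __init__(self, capacity=1000):
        # deque with maxlen silently evicts the oldest transition when full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        """Store one (s, a, r, s', done) transition."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Draw a uniform random batch (smaller if the buffer is short)."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

Uniform sampling from the buffer breaks the temporal correlation of consecutive IoT traffic observations, which stabilizes Q-value updates.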
7. Automated Gleason Grading of Prostate Cancer from Low-Resolution Histopathology Images Using an Ensemble Network of CNN and Transformer Models
Authors: Md Shakhawat Hossain, Md Sahilur Rahman, Munim Ahmed, Anowar Hussen, Zahid Ullah, Mona Jamjoom. Computers, Materials & Continua, 2025, Issue 8, pp. 3193-3215 (23 pages)
One in every eight men in the US is diagnosed with prostate cancer, making it the most common cancer in men. Gleason grading is one of the most essential diagnostic and prognostic factors for planning the treatment of prostate cancer patients. Traditionally, urological pathologists perform the grading by scoring the morphological pattern, known as the Gleason pattern, in histopathology images. However, this manual grading is highly subjective, suffers from intra- and inter-pathologist variability, and lacks reproducibility. An automated grading system could be more efficient, with no subjectivity and higher accuracy and reproducibility. Automated methods presented previously failed to achieve sufficient accuracy, lacked reproducibility, and depended on high-resolution images such as 40×. This paper proposes an automated Gleason grading method, ProGENET, to accurately predict the grade using low-resolution images such as 10×. This method first divides the patient's histopathology whole slide image (WSI) into patches. Then, it detects artifacts and tissue-less regions and predicts the patch-wise grade using an ensemble network of CNN and transformer models. The proposed method adapted the International Society of Urological Pathology (ISUP) grading system and achieved 90.8% accuracy in classifying the patches into healthy and Gleason grades 1 through 5 using 10× WSI, outperforming the state-of-the-art accuracy by 27%. Finally, the patient's grade was determined by combining the patch-wise results. The method was also demonstrated for 4-class grading and binary classification of prostate cancer, achieving 93.0% and 99.6% accuracy, respectively. The reproducibility was over 90%. Since the proposed method determines the grades with higher accuracy and reproducibility using low-resolution images, it is more reliable and effective than existing methods and can potentially improve subsequent therapy decisions.
Keywords: Gleason grading, prostate cancer, whole slide image, ensemble learning, digital pathology
8. Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization
Authors: Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan. Computers, Materials & Continua, 2026, Issue 1, pp. 1804-1818 (15 pages)
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Critical dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% for UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting the attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Keywords: intrusion detection, XAI, machine learning, ensemble method, cybersecurity, imbalanced data
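The SMOTE step described above synthesizes minority-class samples by interpolating between a minority sample and one of its minority-class neighbors. A simplified sketch of that idea, not the reference SMOTE implementation; the neighbor count k and the function name are illustrative choices:

```python
import numpy as np

def smote_like(X_min, n_new, k=2, rng=None):
    """Generate n_new synthetic minority samples: pick a minority point,
    pick one of its k nearest minority neighbors, and place a new sample
    at a random position on the segment between them."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip index i itself (distance 0)
        j = rng.choice(nbrs)
        lam = rng.random()                 # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(out)
```

Because synthetic points lie between real minority samples, the augmented training set enlarges the minority region without duplicating rows, which is why the abstract reports improved attack detection after balancing.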
9. A Distributed Dual-Network Meta-Adaptive Framework for Scalable and Privacy-Aware Multi-Agent Coordination
Authors: Atef Gharbi, Mohamed Ayari, Nasser Albalawi, Ahmad Alshammari, Nadhir Ben Halima, Zeineb Klai. Computers, Materials & Continua, 2026, Issue 5, pp. 1456-1476 (21 pages)
This paper presents Dual Adaptive Neural Topology (Dual ANT), a distributed dual-network meta-adaptive framework that enhances ant-colony-based multi-agent coordination with online introspection, adaptive parameter control, and privacy-preserving interactions. This approach improves standard Ant Colony Optimization (ACO) with two lightweight neural components: a forward network that estimates swarm efficiency in real time and an inverse network that converts these descriptors into parameter adaptations. To preserve the privacy of individual trajectories in shared pheromone maps, we introduce a locally differentially private pheromone update mechanism that adds calibrated noise to each agent's pheromone deposit while preserving the efficacy of the global pheromone signal. The resulting system enables agents to dynamically and autonomously adapt their coordination strategies under challenging and dynamic conditions, including varying obstacle layouts, uncertain target locations, and time-varying disturbances. Extensive simulations of large grid-based search tasks demonstrated that Dual ANT achieved faster convergence, higher robustness, and improved scalability compared to advanced baselines such as Multi-Strategy ACO and Hierarchical ACO. The meta-adaptive feedback loop compensates for the performance degradation caused by privacy noise and prevents premature stagnation by triggering Levy flight exploration only when necessary.
Keywords: ant colony optimization, multi-agent systems, deep neural networks, meta-adaptive learning, Levy flight, differential privacy, swarm intelligence
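A locally differentially private pheromone update of the kind described above can be sketched by adding Laplace noise, scaled to a sensitivity/epsilon budget, to each agent's deposit. This is a generic local-DP sketch; the paper's exact calibration, clipping, and evaporation rules are not given in the abstract, and all parameter names here are illustrative:

```python
import numpy as np

def private_deposit(pheromone, cell, amount, epsilon=1.0, sensitivity=1.0,
                    rng=None):
    """Deposit pheromone on a grid cell with Laplace noise calibrated to
    the privacy budget epsilon, so no single deposit reveals an agent's
    trajectory. Pheromone is clamped at zero to stay physically valid."""
    rng = np.random.default_rng(rng)
    noise = rng.laplace(0.0, sensitivity / epsilon)
    pheromone[cell] = max(0.0, pheromone.get(cell, 0.0) + amount + noise)
    return pheromone
```

Smaller epsilon means larger noise and stronger privacy; the abstract's meta-adaptive loop exists precisely to compensate for the search degradation that this noise causes.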
10. Advanced Meta-Heuristic Optimization for Accurate Photovoltaic Model Parameterization: A High-Accuracy Estimation Using Spider Wasp Optimization
Authors: Sarah M. Alhammad, Diaa Salama AbdElminaam, Asmaa Rizk, Ibrahim Ahmed Taha. Computers, Materials & Continua, 2026, Issue 3, pp. 2269-2303 (35 pages)
Accurate parameter extraction of photovoltaic (PV) models plays a critical role in enabling precise performance prediction, optimal system sizing, and effective operational control under diverse environmental conditions. While a wide range of metaheuristic optimisation techniques have been applied to this problem, many existing methods are hindered by slow convergence rates, susceptibility to premature stagnation, and reduced accuracy when applied to complex multi-diode PV configurations. These limitations can lead to suboptimal modelling, reducing the efficiency of PV system design and operation. In this work, we propose an enhanced hybrid optimisation approach, the modified Spider Wasp Optimization (mSWO) with Opposition-Based Learning algorithm, which integrates the exploration and exploitation capabilities of the Spider Wasp Optimization (SWO) metaheuristic with the diversity-enhancing mechanism of Opposition-Based Learning (OBL). The hybridisation is designed to dynamically expand the search space coverage, avoid premature convergence, and improve both convergence speed and precision in high-dimensional optimisation tasks. The mSWO algorithm is applied to three well-established PV configurations: the single diode model (SDM), the double diode model (DDM), and the triple diode model (TDM). Real experimental current-voltage (I-V) datasets from a commercial PV module under standard test conditions (STC) are used for evaluation. Comparative analysis is conducted against eighteen advanced metaheuristic algorithms, including BSDE, RLGBO, GWOCS, MFO, EO, TSA, and SCA. Performance metrics include minimum, mean, and maximum root mean square error (RMSE), standard deviation (SD), and convergence behaviour over 30 independent runs. The results reveal that mSWO consistently delivers superior accuracy and robustness across all PV models, achieving the lowest RMSE values of 0.000986022 (SDM), 0.000982884 (DDM), and 0.000982529 (TDM), with minimal SD values, indicating remarkable repeatability. Convergence analyses further show that mSWO reaches optimal solutions more rapidly and with fewer oscillations than all competing methods, with the performance gap widening as model complexity increases. These findings demonstrate that mSWO provides a scalable, computationally efficient, and highly reliable framework for PV parameter extraction. Its adaptability to models of growing complexity suggests strong potential for broader applications in renewable energy systems, including performance monitoring, fault detection, and intelligent control, thereby contributing to the optimisation of next-generation solar energy solutions.
Keywords: modified Spider Wasp Optimizer (mSWO), photovoltaic (PV) modeling, meta-heuristic optimization, solar energy, parameter estimation, renewable energy technologies
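The Opposition-Based Learning mechanism can be illustrated in isolation: for each candidate x drawn from [lo, hi], the opposite point lo + hi - x is also evaluated, and the better of the two seeds the population. This is the standard OBL seeding step, shown here as a sketch under a minimization objective; it is not the authors' mSWO code:

```python
import random

def opposite(x, lo, hi):
    """Classic OBL opposite point: lo + hi - x in every dimension."""
    return [l + h - v for v, l, h in zip(x, lo, hi)]

def obl_population(size, lo, hi, fitness):
    """Random initialization, then keep the better (lower-fitness) of each
    candidate and its opposite, doubling initial coverage for free."""
    pop = [[random.uniform(l, h) for l, h in zip(lo, hi)]
           for _ in range(size)]
    return [min((p, opposite(p, lo, hi)), key=fitness) for p in pop]
```

Evaluating each candidate and its mirror image roughly doubles the chance that at least one seed lands near the optimum, which is the diversity-enhancing effect the abstract attributes to OBL.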
11. ECSA-Net: A Lightweight Attention-Based Deep Learning Model for Eye Disease Detection
Authors: Sara Tehsin, Muhammad John Abbas, Inzamam Mashood Nasir, Fadwa Alrowais, Reham Abualhamayel, Abdulsamad Ebrahim Yahya, Radwa Marzouk. Computers, Materials & Continua, 2026, Issue 5, pp. 1290-1323 (34 pages)
Globally, diabetes and glaucoma account for a high number of people suffering from severe vision loss and blindness. To treat these vision disorders effectively, proper diagnosis must occur in a timely manner, and with conventional methods such as fundus photography, optical coherence tomography (OCT), and slit-lamp imaging, much depends on an expert's interpretation of the images, making these systems very labor-intensive to operate. Moreover, clinical settings face difficulties with inter-observer variability and the limited scalability of these diagnostic devices. To solve these problems, we have developed the Efficient Channel-Spatial Attention Network (ECSA-Net), a new deep learning-based methodology that integrates lightweight channel- and spatial-attention modules into a convolutional neural network. Ultimately, ECSA-Net improves the efficiency of computational resource use while enhancing discriminative feature extraction from retinal images. The ECSA-Net methodology was validated through a series of classification accuracy tests on two publicly available eye disease datasets and was benchmarked against a number of different pretrained convolutional neural network (CNN) architectures. The results showed that ECSA-Net achieved classification accuracies of 60.00% and 69.92%, respectively, while using only a compact architecture with 0.56 million parameters. This represents a reduction in parameter size by a factor of 14× to 247× compared to other pretrained models. Additionally, the attention modules added to the architecture significantly increased sensitivity to disease-relevant regions of the retina while maintaining low computational cost, making ECSA-Net a viable option for real-time clinical use. ECSA-Net is both efficient and accurate in automating the classification of eye diseases, combining high performance with the ethical considerations of medical artificial intelligence (AI) deployment. The ECSA-Net framework mitigates algorithmic bias in training datasets and protects individuals' privacy and transparency in decision-making, thereby facilitating human-AI collaboration. Both technical performance and ethical integration are needed for the responsible and scalable use of ECSA-Net in a variety of ophthalmic care settings.
Keywords: channel-spatial attention, explainable AI, eye disease classification, fairness in diagnostics, lightweight deep learning, transparency in healthcare
12. Q-ALIGNer: A Quantum Entanglement-Driven Multimodal Framework for Robust Fake News Detection
Authors: Sara Tehsin, Inzamam Mashood Nasir, Wiem Abdelbaki, Fadwa Alrowais, Reham Abualhamayel, Abdulsamad Ebrahim Yahya, Radwa Marzouk. Computers, Materials & Continua, 2026, Issue 5, pp. 1670-1700 (31 pages)
The rapid proliferation of multimodal misinformation on social media demands detection frameworks that are not only accurate but also robust to noise, adversarial manipulation, and semantic inconsistency between modalities. Existing multimodal fake news detection approaches often rely on deterministic fusion strategies, which limits their ability to model uncertainty and complex cross-modal dependencies. To address these challenges, we propose Q-ALIGNer, a quantum-inspired multimodal framework that integrates classical feature extraction with quantum-state encoding, learnable cross-modal entanglement, and robustness-aware training objectives. The proposed framework adopts quantum formalism as a representational abstraction, enabling probabilistic modeling of multimodal alignment while remaining fully executable on classical hardware. Q-ALIGNer is evaluated on four widely used benchmark datasets (FakeNewsNet, Fakeddit, Weibo, and MediaEval VMU) covering diverse platforms, languages, and content characteristics. Experimental results demonstrate consistent performance improvements over strong text-only, vision-only, multimodal, and quantum-inspired baselines, including BERT, RoBERTa, XLNet, ResNet, EfficientNet, ViT, Multimodal-BERT, ViLBERT, and QEMF. Q-ALIGNer achieves accuracies of 91.2%, 92.9%, 91.7%, and 92.1% on FakeNewsNet, Fakeddit, Weibo, and MediaEval VMU, respectively, with F1-score gains of 3-4 percentage points over QEMF. Robustness evaluation shows a reduced adversarial accuracy gap of 2.6%, compared to 7%-9% for baseline models, while calibration analysis indicates improved reliability with an expected calibration error of 0.031. In addition, computational analysis shows that Q-ALIGNer reduces training time to 19.6 h compared to 48.2 h for QEMF at a comparable parameter scale. These results indicate that quantum-inspired alignment and entanglement can enhance robustness, uncertainty awareness, and efficiency in multimodal fake news detection, positioning Q-ALIGNer as a principled and practical content-centric framework for misinformation analysis.
Keywords: machine learning, fake news detection, multimodal learning, quantum natural language processing, cross-modal entanglement, adversarial robustness, uncertainty calibration
13. FAIR-DQL: Fairness-Aware Deep Q-Learning for Enhanced Resource Allocation and RIS Optimization in High-Altitude Platform Networks
Authors: Muhammad Ejaz, Muhammad Asim, Mudasir Ahmad Wani, Kashish Ara Shakil. Computers, Materials & Continua, 2026, Issue 3, pp. 758-779 (22 pages)
The integration of High-Altitude Platform Stations (HAPS) with Reconfigurable Intelligent Surfaces (RIS) represents a critical advancement for next-generation wireless networks, offering unprecedented opportunities for ubiquitous connectivity. However, existing research reveals significant gaps in dynamic resource allocation, joint optimization, and equitable service provisioning under varying channel conditions, limiting the practical deployment of these technologies. This paper addresses these challenges by proposing a novel Fairness-Aware Deep Q-Learning (FAIR-DQL) framework for joint resource management and phase configuration in HAPS-RIS systems. Our methodology employs a comprehensive three-tier algorithmic architecture integrating adaptive power control, priority-based user scheduling, and dynamic learning mechanisms. The FAIR-DQL approach utilizes advanced reinforcement learning with experience replay and fairness-aware reward functions to balance competing objectives while adapting to dynamic environments. Key findings demonstrate substantial improvements: a 9.15 dB SINR gain, 12.5 bps/Hz capacity, 78% power efficiency, and a 0.82 fairness index. The framework achieves rapid 40-episode convergence with consistent delay performance. These contributions establish new benchmarks for fairness-aware resource allocation in aerial communications, enabling practical HAPS-RIS deployments in rural connectivity, emergency communications, and urban networks.
Keywords: Wireless communication, high-altitude platform station, reconfigurable intelligent surfaces, deep Q-learning
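The abstract above reports a fairness index of 0.82; Jain's index is the standard metric of that shape, so a minimal sketch of a fairness-aware reward can be built around it. The blending weight and the `fairness_aware_reward` form are illustrative assumptions, not the paper's actual reward function.

```python
def jain_fairness(throughputs):
    """Jain's fairness index: ranges from 1/N (one user gets everything)
    up to 1.0 (perfectly equal allocation)."""
    n = len(throughputs)
    s = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return (s * s) / (n * sq) if sq > 0 else 0.0

def fairness_aware_reward(throughputs, weight=0.5):
    """Hypothetical reward blending total capacity with fairness;
    `weight` trades off the two objectives."""
    total = sum(throughputs)
    return (1 - weight) * total + weight * jain_fairness(throughputs) * total
```

A DQL agent maximizing only `sum(throughputs)` tends to starve weak users; scaling the reward by the fairness term penalizes exactly those allocations.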
Concrete Strength Prediction Using Machine Learning and Somersaulting Spider Optimizer
14
Authors: Marwa M. Eid, Amel Ali Alhussan, Ebrahim A. Mattar, Nima Khodadadi, El-Sayed M. El-Kenawy 《Computer Modeling in Engineering & Sciences》 2026, Issue 1, pp. 465-493 (29 pages)
Accurate prediction of concrete compressive strength is fundamental for optimizing mix designs, improving material utilization, and ensuring structural safety in modern construction. Traditional empirical methods often fail to capture the non-linear relationships among concrete constituents, especially with the growing use of supplementary cementitious materials and recycled aggregates. This study presents an integrated machine learning framework for concrete strength prediction, combining advanced regression models, namely CatBoost, with metaheuristic optimization algorithms, with a particular focus on the Somersaulting Spider Optimizer (SSO). A comprehensive dataset encompassing diverse mix proportions and material types was used to evaluate baseline machine learning models, including CatBoost, XGBoost, ExtraTrees, and RandomForest. Among these, CatBoost demonstrated superior accuracy across multiple performance metrics. To further enhance predictive capability, several bio-inspired optimizers were employed for hyperparameter tuning. The SSO-CatBoost hybrid achieved the lowest mean squared error and highest correlation coefficients, outperforming other metaheuristic approaches such as the Genetic Algorithm, Particle Swarm Optimization, and the Grey Wolf Optimizer. Statistical significance was established through Analysis of Variance and Wilcoxon signed-rank testing, confirming the robustness of the optimized models. The proposed methodology not only delivers improved predictive performance but also offers a transparent framework for mix design optimization, supporting data-driven decision making in sustainable and resilient infrastructure development.
Keywords: Concrete strength, machine learning, CatBoost, metaheuristic optimization, somersaulting spider optimizer, ensemble models
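The SSO update equations are not given in the abstract, so the metaheuristic hyperparameter tuning it describes can only be sketched generically: a population-based search that samples candidate hyperparameters, keeps the best, and perturbs around it. The `tune` skeleton, the `mse` objective, and the bounds below are all illustrative assumptions, not the paper's method.

```python
import random

def tune(objective, bounds, pop_size=8, iters=60, seed=0):
    """Generic population-based hyperparameter search: sample an initial
    population, keep the best candidate, then hill-climb with Gaussian
    perturbations clamped to the search bounds."""
    rng = random.Random(seed)
    def sample():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    best = min((sample() for _ in range(pop_size)), key=objective)
    for _ in range(iters):
        cand = [min(max(x + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                for x, (lo, hi) in zip(best, bounds)]
        if objective(cand) < objective(best):
            best = cand
    return best

# Hypothetical objective standing in for validation MSE of a regressor,
# minimized at (learning_rate=0.1, depth=6).
mse = lambda p: (p[0] - 0.1) ** 2 + (p[1] - 6.0) ** 2
best = tune(mse, [(0.01, 0.3), (2.0, 10.0)])
```

In the paper's setup, `objective` would evaluate a CatBoost model's cross-validated error for each candidate hyperparameter vector.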
HMA-DER: A Hierarchical Attention and Expert Routing Framework for Accurate Gastrointestinal Disease Diagnosis
15
Authors: Sara Tehsin, Inzamam Mashood Nasir, Wiem Abdelbaki, Fadwa Alrowais, Khalid A. Alattas, Sultan Almutairi, Radwa Marzouk 《Computers, Materials & Continua》 2026, Issue 4, pp. 701-736 (36 pages)
Objective: Deep learning is increasingly employed in gastroenterology (GI) endoscopy computer-aided diagnostics for polyp segmentation and multi-class disease detection. Real-world implementation requires high accuracy, therapeutically relevant explanations, strong calibration, domain generalization, and efficiency. Current Convolutional Neural Network (CNN) and transformer models compromise border precision and global context, generate attention maps that fail to align with expert reasoning, deteriorate under cross-center shifts, and exhibit inadequate calibration, hence diminishing clinical trust. Methods: HMA-DER is a hierarchical multi-attention architecture that uses dilation-enhanced residual blocks and an explainability-aware Cognitive Alignment Score (CAS) regularizer to directly align attribution maps with expert reasoning signals. The framework includes robustness-oriented additions and an evaluation protocol covering accuracy, macro-averaged F1 score, Area Under the Receiver Operating Characteristic Curve (AUROC), calibration (Expected Calibration Error (ECE), Brier Score), explainability (CAS, insertion/deletion AUC), cross-dataset transfer, and throughput. Results: HMA-DER achieves Dice Similarity Coefficient scores of 89.5% and 86.0% on Kvasir-SEG and CVC-ClinicDB, beating the strongest baseline by +1.9 and +1.7 points. It reaches 86.4% and 85.3% macro-F1 and 94.0% and 93.4% AUROC on HyperKvasir and GastroVision, exceeding the baseline by +1.4/+1.6 macro-F1 and +1.2/+1.1 AUROC. The ablation study shows that hierarchical attention contributes the most (+3.0), followed by CAS regularization (+2-3), dilation (+1.5-2.0), and residual connections (+2-3). Cross-dataset validation demonstrates competitive zero-shot transfer (e.g., KS→CVC Dice 82.7%), whereas multi-dataset training diminishes the domain gap, yielding an 88.1% primary-metric average. HMA-DER's mixed-precision inference handles 155 images per second, supporting well-calibrated real-time use. Conclusion: HMA-DER strikes a balance between accuracy, explainability, robustness, and efficiency for reliable GI computer-aided diagnosis in real-world clinical settings.
Keywords: Gastrointestinal image analysis, polyp segmentation, multi-attention deep learning, explainable AI, cognitive alignment score, cross-dataset generalization
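The abstract above evaluates calibration via Expected Calibration Error (ECE). A minimal sketch of the standard binned ECE computation (equal-width confidence bins, population-weighted |accuracy - confidence| gap) is shown below; the bin count and the list-based interface are illustrative choices, not taken from the paper.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |accuracy - confidence| over equal-width confidence
    bins, weighted by the fraction of samples landing in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Bins are half-open (lo, hi]; bin 0 also catches exact zeros.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece
```

A perfectly calibrated model (e.g., 80% confidence with 80% empirical accuracy) scores 0; a model that is always 100% confident yet always wrong scores 1.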
Improving Real-Time Animal Detection Using Group Sparsity in YOLOv8: A Solution for Animal-Toy Differentiation
16
Authors: Zia Ur Rehman, Ahmad Syed, Abu Tayab, Ghanshyam G. Tejani, Doaa Sami Khafaga, El-Sayed M. El-kenawy 《Computers, Materials & Continua》 2026, Issue 2, pp. 1726-1750 (25 pages)
Object detection, a major challenge in computer vision and pattern recognition, plays a significant part in many applications spanning artificial intelligence, face recognition, and autonomous driving. It involves the detection, localization, and categorization of targets in images. A particularly important emerging task is distinguishing real animals from toy replicas in real time, mostly for smart camera systems in both urban and natural environments. This task is complicated by factors such as viewing angle, occlusion, lighting variations, and texture differences. To tackle these challenges, this paper proposes Group Sparse YOLOv8 (You Only Look Once version 8), an improved real-time object detection algorithm that enhances YOLOv8 by integrating group sparsity regularization. This adjustment improves efficiency and accuracy while reducing computational cost and power consumption, aided by a frame selection approach and a hybrid parallel processing method that merges pipelining with dataflow strategies. The method is evaluated on a custom dataset of toy and real animal images along with well-known datasets, namely ImageNet, MS COCO, and CIFAR-10/100. The combination of group sparsity with YOLOv8 achieves high detection accuracy with lower latency, providing a practical and resource-efficient solution for intelligent camera systems that must differentiate between real and toy animals in real time.
Keywords: YOLOv8, sparsity, group sparsity, group sparse representation (GSR), CNNs, object detection
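Group sparsity regularization, as named in the abstract above, is usually realized as a group-lasso (L2,1) penalty added to the training loss: the sum over weight groups (e.g., whole convolution filters) of each group's L2 norm, which drives entire groups toward zero so they can be pruned. How exactly the paper attaches this term to the YOLOv8 loss is not stated; the sketch below shows only the regularizer itself on plain weight lists.

```python
import math

def group_l21_penalty(groups):
    """Group-lasso (L2,1) penalty: sum over groups of each group's L2 norm.
    Unlike plain L1, it zeroes whole groups (e.g., entire conv filters),
    which is what enables structured pruning."""
    return sum(math.sqrt(sum(w * w for w in group)) for group in groups)

# Two weight groups standing in for two conv filters; the second is
# already zero and contributes nothing, so it could be pruned outright.
penalty = group_l21_penalty([[3.0, 4.0], [0.0, 0.0]])  # sqrt(9 + 16) + 0 = 5.0
```

During training this penalty would be scaled by a regularization coefficient and added to the detection loss.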
Hybrid Quantum Gate Enabled CNN Framework with Optimized Features for Human-Object Detection and Recognition
17
Authors: Nouf Abdullah Almujally, Tanvir Fatima Naik Bukht, Shuaa S. Alharbi, Asaad Algarni, Ahmad Jalal, Jeongmin Park 《Computers, Materials & Continua》 2026, Issue 4, pp. 2254-2271 (18 pages)
Recognising human-object interactions (HOI) is a challenging task for traditional machine learning models, including convolutional neural networks (CNNs). Existing models show limited transferability across complex datasets such as D3D-HOI and SYSU 3D HOI. The conventional architecture of CNNs restricts their ability to handle HOI scenarios with high complexity, and HOI recognition requires improved feature extraction methods to overcome the current limitations in accuracy and scalability. This work proposes a novel quantum gate-enabled hybrid CNN (QEH-CNN) for effective HOI recognition. The model enhances CNN performance by integrating quantum computing components. The framework begins with bilateral image filtering, followed by multi-object tracking (MOT) and Felzenszwalb superpixel segmentation. A watershed algorithm refines object boundaries by cleaning merged superpixels. Feature extraction combines a histogram of oriented gradients (HOG), Global Image Statistics for Texture (GIST) descriptors, and a novel 23-joint keypoint extraction method using relative joint angles and joint proximity measures. A fuzzy optimization process refines the extracted features before feeding them into the QEH-CNN model. The proposed model achieves 95.06% accuracy on the D3D-HOI dataset and 97.29% on the SYSU 3D HOI dataset. The integration of quantum computing enhances feature optimization, leading to improved accuracy and overall model efficiency.
Keywords: Pattern recognition, image segmentation, computer vision, object detection
Traffic Vision: UAV-Based Vehicle Detection and Traffic Pattern Analysis via Deep Learning Classifier
18
Authors: Mohammed Alnusayri, Ghulam Mujtaba, Nouf Abdullah Almujally, Shuaa S. Alharbi, Asaad Algarni, Ahmad Jalal, Jeongmin Park 《Computers, Materials & Continua》 2026, Issue 3, pp. 266-284 (19 pages)
This paper presents a unified Unmanned Aerial Vehicle-based (UAV-based) traffic monitoring framework that integrates vehicle detection, tracking, counting, motion prediction, and classification in a modular and co-optimized pipeline. Unlike prior works that address these tasks in isolation, our approach combines You Only Look Once (YOLO) v10 detection, ByteTrack tracking, optical-flow density estimation, Long Short-Term Memory-based (LSTM-based) trajectory forecasting, and hybrid Speeded-Up Robust Feature (SURF) + Gray-Level Co-occurrence Matrix (GLCM) feature engineering with VGG16 classification. Validated across the UAVDT and UAVID datasets, our framework achieved a detection accuracy of 94.2%, and 92.3% detection accuracy in a real-time UAV field validation. Our comprehensive evaluations, including multi-metric analyses, ablation studies, and cross-dataset validations, confirm the framework's accuracy, efficiency, and generalizability. These results highlight the novelty of integrating complementary methods into a single framework, offering a practical solution for accurate and efficient UAV-based traffic monitoring.
Keywords: Smart traffic system, drone devices, machine learner, dynamic complex scenes, VGG-16 classifier
A Multi-Objective Adaptive Car-Following Framework for Autonomous Connected Vehicles with Deep Reinforcement Learning
19
Authors: Abu Tayab, Yanwen Li, Ahmad Syed, Ghanshyam G. Tejani, Doaa Sami Khafaga, El-Sayed M. El-kenawy, Amel Ali Alhussan, Marwa M. Eid 《Computers, Materials & Continua》 2026, Issue 2, pp. 1311-1337 (27 pages)
Autonomous connected vehicles (ACV) require advanced control strategies to effectively balance safety, efficiency, energy consumption, and passenger comfort. This research introduces a deep reinforcement learning (DRL)-based car-following (CF) framework employing the Deep Deterministic Policy Gradient (DDPG) algorithm, which integrates a multi-objective reward function that balances the four goals while maintaining safe policy learning. Utilizing real-world driving data from the highD dataset, the proposed model learns adaptive speed control policies suitable for dynamic traffic scenarios. The performance of the DRL-based model is evaluated against a traditional model predictive control-adaptive cruise control (MPC-ACC) controller. Results show that the DRL model significantly enhances safety, achieving zero collisions and a higher average time-to-collision (TTC) of 8.45 s, compared to 5.67 s for MPC and 6.12 s for human drivers. For efficiency, the model demonstrates 89.2% headway compliance and maintains speed tracking errors below 1.2 m/s in 90% of cases. In terms of energy optimization, the proposed approach reduces fuel consumption by 5.4% relative to MPC. Additionally, it enhances passenger comfort by lowering jerk values by 65%, achieving 0.12 m/s³ vs. 0.34 m/s³ for human drivers. The multi-objective reward function ensures stable policy convergence while simultaneously balancing the four key performance metrics. These findings underscore the potential of DRL in advancing autonomous vehicle control, offering a robust and sustainable solution for safer, more efficient, and more comfortable transportation systems.
Keywords: Car-following model, DDPG, multi-objective framework, autonomous connected vehicles
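The abstract above balances four objectives (safety via TTC, efficiency via headway compliance, energy, and comfort via jerk) in one reward. A minimal weighted-sum sketch is shown below; the weights, normalizers (`ttc_safe`, the 3 m/s² and 2 m/s³ caps), and the function signature are illustrative assumptions, not the paper's actual reward.

```python
def cf_reward(ttc, headway_ok, accel, jerk,
              w=(0.4, 0.3, 0.15, 0.15), ttc_safe=8.0):
    """Hypothetical multi-objective car-following reward.
    Each term is normalized to [0, 1], then combined as a weighted sum:
      safety     - time-to-collision, saturating at ttc_safe seconds
      efficiency - binary headway compliance
      energy     - penalizes |acceleration| as a fuel-use proxy
      comfort    - penalizes |jerk|
    """
    safety = min(ttc / ttc_safe, 1.0)
    efficiency = 1.0 if headway_ok else 0.0
    energy = max(0.0, 1.0 - abs(accel) / 3.0)
    comfort = max(0.0, 1.0 - abs(jerk) / 2.0)
    return w[0] * safety + w[1] * efficiency + w[2] * energy + w[3] * comfort
```

In a DDPG loop this scalar would be returned at each control step; tuning the weight vector `w` shifts the learned policy along the safety/efficiency/energy/comfort trade-off.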
Intelligent Human Interaction Recognition with Multi-Modal Feature Extraction and Bidirectional LSTM
20
Authors: Muhammad Hamdan Azhar, Yanfeng Wu, Nouf Abdullah Almujally, Shuaa S. Alharbi, Asaad Algarni, Ahmad Jalal, Hui Liu 《Computers, Materials & Continua》 2026, Issue 4, pp. 1632-1649 (18 pages)
Recognizing human interactions in RGB videos is a critical task in computer vision, with applications in video surveillance. Existing deep learning-based architectures have achieved strong results, but are computationally intensive, sensitive to video resolution changes, and often fail in crowded scenes. We propose a novel hybrid system that is computationally efficient, robust to degraded video quality, and able to filter out irrelevant individuals, making it suitable for real-life use. The system leverages multi-modal handcrafted features for interaction representation and a deep learning classifier for capturing complex dependencies. Using Mask R-CNN and YOLO11-Pose, we extract grayscale silhouettes and keypoint coordinates of interacting individuals, while filtering out irrelevant individuals using a proposed algorithm. From these, we extract silhouette-based features (local ternary pattern and histogram of optical flow) and keypoint-based features (distances, angles, and velocities) that capture distinct spatial and temporal information. A Bidirectional Long Short-Term Memory network (BiLSTM) then classifies the interactions. Extensive experiments on the UT Interaction, SBU Kinect Interaction, and ISR-UOL 3D social activity datasets demonstrate that our system achieves competitive accuracy. They also validate the effectiveness of the chosen features and classifier, along with the proposed system's computational efficiency and robustness to occlusion.
Keywords: Human interaction recognition, keypoint coordinates, grayscale silhouettes, bidirectional long short-term memory network
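The keypoint-based features named in the abstract above (distances, angles, velocities) are standard geometric quantities over 2D joint coordinates. The sketch below shows how such features are typically computed from two consecutive frames of keypoints; the function names and the frame representation are illustrative, not the paper's implementation.

```python
import math

def joint_angle(a, b, c):
    """Angle in radians at joint b, formed by the segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, dot / n)))

def keypoint_features(prev, curr, dt=1.0):
    """Distances between consecutive keypoints within a frame, plus
    per-joint displacement speed between two frames."""
    dists = [math.hypot(px - qx, py - qy)
             for (px, py), (qx, qy) in zip(curr, curr[1:])]
    vels = [math.hypot(cx - px, cy - py) / dt
            for (px, py), (cx, cy) in zip(prev, curr)]
    return dists, vels
```

Stacking these per-frame feature vectors over time yields the sequences a BiLSTM consumes for interaction classification.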