Journal Articles
685 articles found
1. An Effective and Secure Quality Assurance System for a Computer Science Program (Cited: 1)
Authors: Mohammad Alkhatib. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 6, pp. 975-995 (21 pages)
Improving the quality assurance (QA) processes and acquiring accreditation are top priorities for academic programs. The learning outcomes (LOs) assessment and continuous quality improvement represent core components of the quality assurance system (QAS). Current assessment methods suffer deficiencies related to accuracy and reliability, and they lack well-organized processes for continuous improvement planning. Moreover, the absence of automation and integration in QA processes forms a major obstacle towards developing an efficient quality system. There is a pressing need to adopt security protocols that provide the required security services to safeguard the valuable information processed by the QAS as well. This research proposes an effective methodology for LOs assessment and continuous improvement processes. The proposed approach ensures more accurate and reliable LOs assessment results and provides a systematic way for utilizing those results in continuous quality improvement. These systematic and well-specified QA processes were then utilized to model and implement an automated and secure QAS that efficiently performs quality-related processes. The proposed system adopts two security protocols that provide confidentiality, integrity, and authentication for quality data and reports. The security protocols avoid source repudiation, which is important in the quality reporting system. This is achieved through implementing powerful cryptographic algorithms. The QAS enables efficient data collection and processing required for analysis and interpretation. It also prepares for the development of datasets that can be used in future artificial intelligence (AI) research to support decision making and improve the quality of academic programs. The proposed approach is implemented in a successful real case study for a computer science program. The current study serves scientific programs struggling to achieve academic accreditation, and gives rise to fully automating and integrating the QA processes and adopting modern AI and security technologies to develop an effective QAS.
Keywords: quality assurance; information security; cryptographic algorithms; education programs
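The LO assessment and continuous-improvement loop described in this abstract can be illustrated with a minimal sketch: aggregate direct-assessment scores into per-LO attainment, then flag LOs below a target for the improvement plan. The weighting scheme, 0-100 score scale, and 70% target below are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's actual algorithm): per-LO attainment
# as a weighted average of assessment scores, with below-target LOs flagged
# as candidates for the continuous-improvement plan.

def lo_attainment(scores, weights):
    """Weighted average attainment for one learning outcome (0-100 scale)."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def improvement_candidates(assessments, target=70.0):
    """Return LOs whose attainment falls below the target threshold."""
    flagged = {}
    for lo, (scores, weights) in assessments.items():
        attained = lo_attainment(scores, weights)
        if attained < target:
            flagged[lo] = round(attained, 1)
    return flagged

assessments = {
    # LO id: ([exam, project, quiz scores], [assessment weights])
    "LO1": ([82, 75, 90], [0.5, 0.3, 0.2]),
    "LO2": ([55, 68, 60], [0.5, 0.3, 0.2]),
}
print(improvement_candidates(assessments))  # only LO2 misses the 70% target
```

A real QAS would feed the flagged LOs into documented corrective actions and re-assess in the next cycle; this sketch only shows the aggregation step.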
2. Scale-invariant 3D face recognition using computer-generated holograms and the Mellin transform
Authors: Yongwei Yao, Yaping Zhang, Huanrong He, Xianfeng David Gu, Daping Chu, Ting-Chung Poon. Opto-Electronic Advances, 2025, Issue 11, pp. 43-55 (13 pages)
We present a novel method for scale-invariant 3D face recognition by integrating computer-generated holography with the Mellin transform. This approach leverages the scale-invariance property of the Mellin transform to address challenges related to variations in 3D facial sizes during recognition. By applying the Mellin transform to computer-generated holograms and performing correlation between them, which, to the best of our knowledge, is being done for the first time, we have developed a robust recognition framework capable of managing significant scale variations without compromising recognition accuracy. Digital holograms of 3D faces are generated from a face database, and the Mellin transform is employed to enable robust recognition across scale factors ranging from 0.4 to 2.0. Within this range, the method achieves 100% recognition accuracy, as confirmed by both simulation-based and hybrid optical/digital experimental validations. Numerical calculations demonstrate that our method significantly enhances the accuracy and reliability of 3D face recognition, as evidenced by the sharp correlation peaks and higher peak-to-noise ratio (PNR) values than those obtained using conventional holograms without the Mellin transform. Additionally, the hybrid optical/digital joint transform correlation hardware further validates the method's effectiveness, demonstrating its capability to accurately identify and distinguish 3D faces at various scales. This work provides a promising solution for advanced biometric systems, especially those which require 3D scale-invariant recognition.
Keywords: 3D face recognition; computer-generated holography; Mellin transform; scale invariance; biometrics
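The scale-invariance idea behind the Mellin transform can be demonstrated numerically: sampling a signal on a logarithmic axis turns a scale change x → a·x into a pure shift, which a correlator can then absorb. This 1-D sketch is illustrative only; the paper applies the transform to 2-D computer-generated holograms, and the signal and sampling grid below are invented.

```python
# Minimal numeric sketch of Mellin-style scale invariance: on a log axis,
# scaling a signal by a factor a becomes a translation by log(a).
import math

def sample_log_axis(f, n=65, x_min=1.0, x_max=65536.0):
    """Sample f at n points spaced uniformly in log(x)."""
    step = (math.log(x_max) - math.log(x_min)) / (n - 1)
    return [f(math.exp(math.log(x_min) + i * step)) for i in range(n)]

h = lambda x: math.exp(-((math.log(x) - 2.0) ** 2))  # test signal
h2 = lambda x: h(2.0 * x)                            # same signal, scale factor a = 2

a = sample_log_axis(h)
b = sample_log_axis(h2)

# With x_max = 2**16 and 65 samples, step = ln(2)/4, so scaling by 2
# corresponds to an exact shift of 4 samples on the log axis.
step = math.log(65536.0) / 64
shift = round(math.log(2.0) / step)
err = max(abs(b[i] - a[i + shift]) for i in range(65 - shift))
print(f"shift = {shift} samples, max mismatch = {err:.2e}")
```

Because a shift only changes the phase of a Fourier spectrum, correlating the magnitude spectra of log-resampled signals is insensitive to scale, which is the property the paper exploits for holographic correlation.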
3. Enhancing Military Visual Communication in Harsh Environments Using Computer Vision Techniques
Authors: Shitharth Selvarajan, Hariprasath Manoharan, Taher Al-Shehari, Nasser A Alsadhan, Subhav Singh. Computers, Materials & Continua, 2025, Issue 8, pp. 3541-3557 (17 pages)
This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations. Thus, it is possible to reduce the occurrence of overlapping circumstances in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers additional robust feature representations by imposing a limiting factor to reduce overall scattering values. This is achieved by visualizing a graphical function. Moreover, to derive valuable insights from a series of photos, both the separation and inversion processes are conducted. This involves analyzing comparison results across four different scenarios. The results of the comparative analysis show that the proposed method effectively reduces the time and space complexities to 1 s and 3%, respectively. In contrast, the existing strategy exhibits higher complexities of 3 s and 9.1%, respectively.
Keywords: image enhancement; visual information; harsh environment; computer vision
4. TeachSecure-CTI: Adaptive Cybersecurity Curriculum Generation Using Threat Dynamics and AI
Authors: Alaa Tolah. Computers, Materials & Continua, 2026, Issue 4, pp. 1698-1734 (37 pages)
The rapidly evolving cybersecurity threat landscape exposes a critical flaw in traditional educational programs where static curricula cannot adapt swiftly to novel attack vectors. This creates a significant gap between theoretical knowledge and the practical defensive capabilities needed in the field. To address this, we propose TeachSecure-CTI, a novel framework for adaptive cybersecurity curriculum generation that integrates real-time Cyber Threat Intelligence (CTI) with AI-driven personalization. Our framework employs a layered architecture featuring a CTI ingestion and clustering module, natural language processing for semantic concept extraction, and a reinforcement learning agent for adaptive content sequencing. By dynamically aligning learning materials with both the evolving threat environment and individual learner profiles, TeachSecure-CTI ensures content remains current, relevant, and tailored. A 12-week study with 150 students across three institutions demonstrated that the framework improves learning gains by 34%, significantly exceeding the 12%-21% reported in recent literature. The system achieved 84.8% personalization accuracy, 85.9% recognition accuracy for MITRE ATT&CK tactics, and a 31% faster competency development rate compared to static curricula. These findings have implications beyond academia, extending to workforce development, cyber range training, and certification programs. By bridging the gap between dynamic threats and static educational materials, TeachSecure-CTI offers an empirically validated, scalable solution for cultivating cybersecurity professionals capable of responding to modern threats.
Keywords: adaptive learning; cybersecurity education; threat intelligence; artificial intelligence; curriculum generation; personalised learning
5. PhishNet: A Real-Time, Scalable Ensemble Framework for Smishing Attack Detection Using Transformers and LLMs
Authors: Abeer Alhuzali, Qamar Al-Qahtani, Asmaa Niyazi, Lama Alshehri, Fatemah Alharbi. Computers, Materials & Continua, 2026, Issue 1, pp. 2194-2212 (19 pages)
The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA 3.3 70B, and Qwen3 32B) to enhance smishing detection performance significantly. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental results show an average accuracy improvement from 96% to 98.5% compared to the best standalone transformer, and from 93% to 98.5% when compared to LLMs across datasets. Furthermore, we present a real-time, user-friendly application to operationalize our detection model for practical use. PhishNet demonstrates superior scalability, usability, and detection accuracy, filling critical gaps in current smishing detection methodologies.
Keywords: smishing attack detection; phishing attacks; ensemble learning; cybersecurity; deep learning; transformer-based models; large language models
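A dual-layer voting scheme of the kind this abstract describes can be sketched as follows. The model names, weights, and tie-break rule below are placeholders for illustration, not PhishNet's actual configuration.

```python
# Hedged sketch of dual-layer voting: (1) weighted majority vote across LLM
# judges, (2) a second weighted vote combining that consensus with a
# transformer classifier's label. All weights here are invented.
from collections import Counter

def weighted_majority(votes, weights):
    """votes: {model: label}; weights: {model: float} -> winning label."""
    tally = Counter()
    for model, label in votes.items():
        tally[label] += weights.get(model, 1.0)
    return tally.most_common(1)[0][0]

def final_ensemble(llm_votes, llm_weights, transformer_label, tf_weight=1.5):
    """Layer 1: weighted LLM vote; layer 2: weighted vote with the transformer."""
    consensus = weighted_majority(llm_votes, llm_weights)
    return weighted_majority(
        {"llms": consensus, "transformer": transformer_label},
        {"llms": 1.0, "transformer": tf_weight},
    )

llm_votes = {"llm_a": "smishing", "llm_b": "smishing", "llm_c": "ham"}
llm_weights = {"llm_a": 0.9, "llm_b": 0.8, "llm_c": 0.6}
print(final_ensemble(llm_votes, llm_weights, "smishing"))  # -> smishing
```

In practice the per-model weights would be calibrated on validation accuracy, which is what makes weighted voting outperform a plain majority when judge quality varies.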
6. LinguTimeX: A Framework for Multilingual CTC Detection Using Explainable AI and Natural Language Processing
Authors: Omar Darwish, Shorouq Al-Eidi, Abdallah Al-Shorman, Majdi Maabreh, Anas Alsobeh, Plamen Zahariev, Yahya Tashtoush. Computers, Materials & Continua, 2026, Issue 1, pp. 2231-2251 (21 pages)
Covert timing channels (CTC) exploit network resources to establish hidden communication pathways, posing significant risks to data security and policy compliance. Therefore, detecting such hidden and dangerous threats remains one of the security challenges. This paper proposes LinguTimeX, a new framework that combines natural language processing with artificial intelligence, along with explainable Artificial Intelligence (AI), not only to detect CTC but also to provide insights into the decision process. LinguTimeX performs multidimensional feature extraction by fusing linguistic attributes with temporal network patterns to identify covert channels precisely. LinguTimeX demonstrates strong effectiveness in detecting CTC across multiple languages, namely English, Arabic, and Chinese. Specifically, the LSTM and RNN models achieved F1 scores of 90% on the English dataset, 89% on the Arabic dataset, and 88% on the Chinese dataset, showcasing their superior performance and ability to generalize across multiple languages. This highlights their robustness in detecting CTCs within security systems, regardless of the language or cultural context of the data. In contrast, the DeepForest model produced F1-scores ranging from 86% to 87% across the same datasets, further confirming its effectiveness in CTC detection. Although other algorithms also showed reasonable accuracy, the LSTM and RNN models consistently outperformed them in multilingual settings, suggesting that deep learning models might be better suited for this particular problem.
Keywords: Arabic language; Chinese language; covert timing channel; cybersecurity; deep learning; English language; language processing; machine learning
7. Optimized Deep Learning Framework for Robust Detection of GAN-Induced Hallucinations in Medical Imaging
Authors: Jarrar Amjad, Muhammad Zaheer Sajid, Mudassir Khalil, Ayman Youssef, Muhammad Fareed Hamid, Imran Qureshi, Haya Aldossary, Qaisar Abbas. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1185-1213 (29 pages)
Generative Adversarial Networks (GANs) have become valuable tools in medical imaging, enabling realistic image synthesis for enhancement, augmentation, and restoration. However, their integration into clinical workflows raises concerns, particularly the risk of subtle distortions or hallucinations that may undermine diagnostic accuracy and weaken trust in AI-assisted decision-making. To address this challenge, we propose a hybrid deep learning framework designed to detect GAN-induced artifacts in medical images, thereby reinforcing the reliability of AI-driven diagnostics. The framework integrates low-level statistical descriptors, including high-frequency residuals and Gray-Level Co-occurrence Matrix (GLCM) texture features, with high-level semantic representations extracted from a pre-trained ResNet18. This dual-stream approach enables detection of both pixel-level anomalies and structural inconsistencies introduced by GAN-based manipulation. We validated the framework on a curated dataset of 10,000 medical images, evenly split between authentic and GAN-generated samples across four modalities: MRI, CT, X-ray, and fundus photography. To improve generalizability to real-world clinical settings, we incorporated domain adaptation strategies such as adversarial training and style transfer, reducing domain shift by 15%. Experimental results demonstrate robust performance, achieving 92.6% accuracy and an F1-score of 0.91 on synthetic test data, and maintaining strong performance on real-world GAN-modified images with 87.3% accuracy and an F1-score of 0.85. Additionally, the model attained an AUC of 0.96 and an average precision of 0.92, outperforming conventional GAN detection pipelines and baseline Convolutional Neural Network (CNN) architectures. These findings establish the proposed framework as an effective and reliable solution for detecting GAN-induced hallucinations in medical imaging, representing an important step toward building trustworthy and clinically deployable AI systems.
Keywords: GAN-induced hallucinations; medical image detection; AI-driven diagnostics; domain adaptation; synthetic medical images; GAN artifacts; trustworthiness in AI
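One of the low-level descriptors this abstract cites, GLCM texture features, can be computed from scratch in a few lines. The sketch below builds a co-occurrence matrix for horizontally adjacent pixels and derives the classic "contrast" statistic; it is a textbook illustration, not the paper's pipeline, which fuses such features with ResNet18 embeddings.

```python
# Minimal from-scratch Gray-Level Co-occurrence Matrix (GLCM) for the
# horizontal (dx=1, dy=0) offset, plus the contrast texture statistic.

def glcm_horizontal(image, levels):
    """Co-occurrence counts P[i][j] for pixel pairs (x, x+1) in each row."""
    P = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            P[a][b] += 1
    return P

def glcm_contrast(P):
    """Contrast = sum_{i,j} (i - j)^2 * p(i, j), with normalized counts."""
    total = sum(sum(r) for r in P)
    return sum((i - j) ** 2 * P[i][j] / total
               for i in range(len(P)) for j in range(len(P)))

smooth = [[0, 0, 1, 1], [0, 0, 1, 1]]   # gentle gradient: low contrast
noisy  = [[0, 3, 0, 3], [3, 0, 3, 0]]   # alternating levels: high contrast
print(glcm_contrast(glcm_horizontal(smooth, 4)))
print(glcm_contrast(glcm_horizontal(noisy, 4)))
```

GAN upsampling tends to leave periodic micro-texture, which shifts statistics like contrast relative to authentic scans; that is why GLCM features complement the semantic CNN stream.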
8. Predicting Immunotherapy Outcomes in Colorectal Cancer Using Machine Learning and Multi-Omic Biomarkers: Development of a Real-Time Predictive Web Application
Authors: Thomas Kidu, Harini Kethar, Haben Gebrekidan, Haleem Farman, Ahmed Sedik, Walid El-Shafai, Jawad Khan. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1166-1184 (19 pages)
Colorectal cancer is the third most diagnosed cancer worldwide, and immune checkpoint inhibitors have shown promising therapeutic outcomes in selected patient groups. This study performed a comprehensive analysis of multi-omics data from The Cancer Genome Atlas colorectal adenocarcinoma cohort (TCGA-COADREAD), accessed through cBioPortal, to develop machine learning models for predicting progression-free survival (PFS) following immunotherapy. The dataset included clinical variables, genomic alterations in Kirsten Rat Sarcoma Viral Oncogene Homolog (KRAS), B-Raf Proto-Oncogene (BRAF), and Neuroblastoma RAS Viral Oncogene Homolog (NRAS), microsatellite instability (MSI) status, tumor mutation burden (TMB), and expression of immune checkpoint genes. Kaplan–Meier analysis showed that KRAS mutations were significantly associated with reduced PFS, while BRAF and NRAS mutations had no significant impact. MSI-high tumors exhibited elevated TMB and increased immune checkpoint expression, reflecting their immunologically active phenotype. We developed both survival and classification models, with the Extra Trees classifier achieving the best performance (accuracy = 0.86, precision = 0.67, recall = 0.70, F1-score = 0.68, AUC = 0.84). These findings highlight the potential of combining genomic and immune biomarkers with machine learning to improve patient stratification and guide personalized immunotherapy decisions. An interactive web application was also developed to enable clinicians to input patient-specific molecular and clinical data and visualize individualized PFS predictions, supporting timely, data-driven treatment planning.
Keywords: colorectal cancer; immunotherapy; microsatellite instability; tumor mutation burden; immune checkpoint inhibitors; multi-omics; machine learning; survival analysis; progression-free survival; clinical decision support
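The Kaplan–Meier analysis mentioned in this abstract rests on the product-limit estimator: S(t) is the product over event times t_i ≤ t of (1 − d_i/n_i), where d_i counts events and n_i counts subjects still at risk. A minimal pure-Python sketch, with invented follow-up data (not the study's cohort):

```python
# Product-limit (Kaplan-Meier) estimator; events: 1 = progression, 0 = censored.

def kaplan_meier(times, events):
    """Return [(event_time, survival_probability)] step points."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0
        while i < len(order) and times[order[i]] == t:  # group ties at t
            d += events[order[i]]
            n_t += 1
            i += 1
        if d:                                           # step only at event times
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= n_t                                  # censored subjects leave too
    return curve

times  = [3, 5, 5, 8, 10, 12]   # invented follow-up months
events = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(times, events))
```

Censored subjects (events = 0) never trigger a step but still shrink the risk set, which is exactly what distinguishes Kaplan–Meier from a naive fraction-surviving curve.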
9. Energy Aware Task Scheduling of IoT Application Using a Hybrid Metaheuristic Algorithm in Cloud Computing
Authors: Ahmed Awad Mohamed, Eslam Abdelhakim Seyam, Ahmed R. Elsaeed, Laith Abualigah, Aseel Smerat, Ahmed M. AbdelMouty, Hosam E. Refaat. Computers, Materials & Continua, 2026, Issue 3, pp. 1786-1803 (18 pages)
In recent years, fog computing has become an important environment for dealing with the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified artificial ecosystem optimization (AEO) combined with negative swarm operators from the Salp Swarm Algorithm (SSA), in order to competitively optimize their capabilities during the exploitation phase of the optimal search process. In addition, the proposed strategy, Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the multi-objective task scheduling optimization problem that combines cost and energy. A knapsack problem formulation is also added to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
Keywords: energy-efficient tasks; Internet of Things (IoT); cloud-fog computing; artificial ecosystem-based optimization; salp swarm algorithm; cloud computing
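The multi-objective goal this abstract describes, jointly minimizing execution cost and energy, can be sketched as a weighted fitness function over a task-to-node assignment. The rates, weights, and the greedy baseline below are invented placeholders, not the EAEOSSA metaheuristic itself, which would search over many candidate schedules with this kind of fitness as its objective.

```python
# Hedged sketch: a combined cost/energy fitness (lower is better) plus a
# naive greedy assignment that a metaheuristic like EAEOSSA would improve on.

def fitness(schedule, tasks, nodes, w_cost=0.5, w_energy=0.5):
    """schedule: {task index: node index}; returns weighted cost+energy."""
    cost = sum(tasks[t] * nodes[n]["cost_per_mi"] for t, n in schedule.items())
    energy = sum(tasks[t] * nodes[n]["joules_per_mi"] for t, n in schedule.items())
    return w_cost * cost + w_energy * energy

def greedy_schedule(tasks, nodes):
    """Baseline: send every task to the node with the cheapest combined rate."""
    best = min(range(len(nodes)),
               key=lambda n: nodes[n]["cost_per_mi"] + nodes[n]["joules_per_mi"])
    return {t: best for t in range(len(tasks))}

tasks = [100, 250, 40]   # task lengths in millions of instructions (invented)
nodes = [
    {"cost_per_mi": 0.02, "joules_per_mi": 1.5},   # cheap but power-hungry
    {"cost_per_mi": 0.05, "joules_per_mi": 0.9},   # costlier but efficient
]
schedule = greedy_schedule(tasks, nodes)
print(schedule, fitness(schedule, tasks, nodes))
```

Changing `w_cost`/`w_energy` trades the two objectives against each other, which is how a single scalar fitness can drive a multi-objective search.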
10. Multimodal Trajectory Generation for Robotic Motion Planning Using Transformer-Based Fusion and Adversarial Learning
Authors: Shtwai Alsubai, Ahmad Almadhor, Abdullah Al Hejaili, Najib Ben Aoun, Tahani Alsubait, Vincent Karovic. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 848-869 (22 pages)
In Human–Robot Interaction (HRI), generating robot trajectories that accurately reflect user intentions while ensuring physical realism remains challenging, especially in unstructured environments. In this study, we develop a multimodal framework that integrates symbolic task reasoning with continuous trajectory generation. The approach employs transformer models and adversarial training to map high-level intent to robotic motion. Information from multiple data sources, such as voice traits, hand and body keypoints, visual observations, and recorded paths, is integrated simultaneously. These signals are mapped into a shared representation that supports interpretable reasoning while enabling smooth and realistic motion generation. Based on this design, two different learning strategies are investigated. In the first, grammar-constrained Linear Temporal Logic (LTL) expressions are created from multimodal human inputs. These expressions are subsequently decoded into robot trajectories. The second method generates trajectories directly from symbolic intent and linguistic data, bypassing an intermediate logical representation. Transformer encoders combine multiple types of information, and autoregressive transformer decoders generate motion sequences. Adding smoothness and speed limits during training increases the likelihood of physical feasibility. To improve the realism and stability of the generated trajectories during training, an adversarial discriminator is also included to guide them toward the distribution of actual robot motion. Tests on the NATSGLD dataset indicate that the complete system exhibits stable training behaviour and performance. In normalised coordinates, the logic-based pipeline has an Average Displacement Error (ADE) of 0.040 and a Final Displacement Error (FDE) of 0.036. The adversarial generator makes substantially more progress, reducing ADE to 0.021 and FDE to 0.018. Visual examination confirms that the generated trajectories closely align with observed motion patterns while preserving smooth temporal dynamics.
Keywords: multimodal trajectory generation; robotic motion planning; transformer networks; sensor fusion; reinforcement learning; generative adversarial networks
11. Identifying Materials of Photographic Images and Photorealistic Computer Generated Graphics Based on Deep CNNs (Cited: 15)
Authors: Qi Cui, Suzanne McIntosh, Huiyu Sun. Computers, Materials & Continua (SCIE, EI), 2018, Issue 5, pp. 229-241 (13 pages)
Currently, some photorealistic computer graphics are very similar to photographic images. Photorealistic computer generated graphics can be forged as photographic images, causing serious security problems. The aim of this work is to use a deep neural network to detect photographic images (PI) versus computer generated graphics (CG). In existing approaches, image feature classification is computationally intensive and fails to achieve real-time analysis. This paper presents an effective approach to automatically identify PI and CG based on deep convolutional neural networks (DCNNs). Compared with some existing methods, the proposed method achieves real-time forensic tasks by deepening the network structure. Experimental results show that this approach can effectively identify PI and CG with an average detection accuracy of 98%.
Keywords: image identification; CNN; DNN; DCNNs; computer generated graphics
12. Enhance Intrusion Detection in Computer Networks Based on Deep Extreme Learning Machine (Cited: 3)
Authors: Muhammad Adnan Khan, Abdur Rehman, Khalid Masood Khan, Mohammed A. Al Ghamdi, Sultan H. Almotiri. Computers, Materials & Continua (SCIE, EI), 2021, Issue 1, pp. 467-480 (14 pages)
Networks provide a significant function in everyday life, and cybersecurity has therefore developed into a critical field of study. The intrusion detection system (IDS) is becoming an essential information protection strategy that tracks the state of the software and hardware operating on the network. Notwithstanding these advancements, current intrusion detection systems still experience difficulties in enhancing detection precision, reducing false alarm levels, and identifying suspicious activities. To address the above-mentioned issues, several researchers have concentrated on designing intrusion detection systems that rely on machine learning approaches. Machine learning models can accurately identify the underlying variations between regular and irregular information with impressive efficiency. Artificial intelligence, particularly machine learning methods, can be used to develop an intelligent intrusion detection framework. To achieve this objective, we propose in this article an intrusion detection system focused on a deep extreme learning machine (DELM), which first establishes the assessment of safety features that lead to their prominence and then constructs an adaptive intrusion detection system focusing on the important features. We then researched the viability of our suggested DELM-based intrusion detection system by conducting dataset assessments and evaluating the performance factors to validate the system's reliability. The experimental results illustrate that the suggested framework outclasses traditional algorithms. In fact, the suggested framework is not only of interest to scientific research but also of functional importance.
Keywords: intrusion detection system; DELM; network security; machine learning
13. Computer Vision Technology for Fault Detection Systems Using Image Processing
Authors: Abed Saif Alghawli. Computers, Materials & Continua (SCIE, EI), 2022, Issue 10, pp. 1961-1976 (16 pages)
In the era of Industry 4.0, cyber-physical systems (CPSs) are a major study area. Such systems frequently occur in manufacturing processes and people's everyday lives; they communicate intensely among physical elements, which can lead to inconsistency. Due to the magnitude and importance of the systems they support, the cyber-physical models must function effectively. In this paper, an image-processing-based anomalous mobility detection approach is suggested that may be added to systems at any time. The expense of glitches, failures, or destroyed products is decreased when anomalous activities are detected and unplanned scenarios are avoided. The presently offered techniques are not well suited to these operations, which necessitate information systems for issue treatment and classification at a degree of complexity that is distinct from the technology. To overcome such challenges in industrial cyber-physical systems, the Image Processing aided Computer Vision Technology for Fault Detection System (IM-CVFD) is proposed in this research. An uncertainty management technique is introduced in addition to achieving optimum performance in terms of latency and effectiveness. A thorough simulation was performed in an appropriate processing facility. The study results suggest that the IM-CVFD achieves high performance, low error frequency, low energy consumption, and low delay. In comparison to traditional approaches, the IM-CVFD produces a more efficient outcome.
Keywords: cyber-physical system; image processing; computer vision; fault detection
14. Human-Computer Interaction Using Deep Fusion Model-Based Facial Expression Recognition System
Authors: Saiyed Umer, Ranjeet Kumar Rout, Shailendra Tiwari, Ahmad Ali AlZubi, Jazem Mutared Alanazi, Kulakov Yurii. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 5, pp. 1165-1185 (21 pages)
A deep fusion model is proposed for a facial expression-based human-computer interaction system. Initially, image preprocessing, i.e., the extraction of the facial region from the input image, is utilized. Thereafter, the extraction of more discriminative and distinctive deep learning features is achieved using the extracted facial regions. To prevent overfitting, in-depth features of facial images are extracted and assigned to the proposed convolutional neural network (CNN) models. Various CNN models are then trained. Finally, the performance of each CNN model is fused to obtain the final decision for the seven basic classes of facial expressions, i.e., fear, disgust, anger, surprise, sadness, happiness, and neutral. For experimental purposes, three benchmark datasets, i.e., SFEW, CK+, and KDEF, are utilized. The performance of the proposed system is compared with some state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competitive methods in terms of various performance metrics. Finally, the proposed deep fusion model is utilized to control a music player using the recognized emotions of the users.
Keywords: deep learning; facial expression; emotions; recognition; CNN
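The decision-level fusion step this abstract describes, combining several CNNs' outputs into one prediction over the seven expression classes, is commonly implemented by averaging per-class probabilities. The sketch below assumes that common variant (the abstract does not specify the exact fusion rule), and the probability vectors are invented.

```python
# Hedged sketch of decision-level fusion: average the softmax outputs of
# several CNN branches, then take the argmax over the seven expression classes.

EMOTIONS = ["fear", "disgust", "anger", "surprise", "sadness", "happiness", "neutral"]

def fuse(prob_lists):
    """Average class probabilities from several CNNs and pick the argmax."""
    n = len(prob_lists)
    avg = [sum(p[k] for p in prob_lists) / n for k in range(len(prob_lists[0]))]
    return avg.index(max(avg)), avg

# Invented softmax outputs from two hypothetical CNN branches:
m1 = [0.05, 0.05, 0.10, 0.10, 0.10, 0.50, 0.10]
m2 = [0.05, 0.05, 0.05, 0.20, 0.05, 0.40, 0.20]
idx, avg = fuse([m1, m2])
print(EMOTIONS[idx])  # -> happiness
```

Averaging tends to be more robust than hard voting when the branches disagree, since a confident branch can outweigh several uncertain ones.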
15. RESTful API in Life Science Research Systems and Data Integration Challenges: Linking Metabolic Pathway, Metabolic Network, Gene and Publication
Authors: Etienne Z. Gnimpieba, Brent S. Anderson, Abalo Chango, Carol M. Lushbough. 《通讯和计算机(中英文版)》, 2013, Issue 9, pp. 1196-1199 (4 pages)
Keywords: data integration; metabolic pathways; REST; life sciences; biological systems; scientific research; API; publications
16. Enhanced Adaptive Brain-Computer Interface Approach for Intelligent Assistance to Disabled Peoples
Authors: Ali Usman, Javed Ferzund, Ahmad Shaf, Muhammad Aamir, Samar Alqhtani, Khlood M. Mehdar, Hanan Talal Halawani, Hassan A. Alshamrani, Abdullah A. Asiri, Muhammad Irfan. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 8, pp. 1355-1369 (15 pages)
Assistive devices for disabled people based on Brain-Computer Interaction (BCI) technology are becoming vital in bio-medical engineering. People with physical disabilities need assistive devices to perform their daily tasks, and in these devices, latency factors need to be addressed appropriately. Therefore, the main goal of this research is to implement a real-time BCI architecture with minimum latency for command actuation. The proposed architecture is capable of communicating between different modules of the system by adopting an automotive, intelligent data processing and classification approach. A NeuroSky MindWave device has been used to transfer the data to our implemented server for command propulsion. A Think-Net Convolutional Neural Network (TN-CNN) architecture has been proposed to recognize the brain signals and classify them into six primary mental states. Data collection and processing are the responsibility of the central integrated server for system load minimization. Testing of the implemented architecture and deep learning model shows excellent results: the proposed system achieved a high integrity level, with minimal data loss and an accurate command-processing mechanism. The training and testing accuracies are 99% and 93% for the custom model implementation based on TN-CNN. The proposed real-time architecture provides intelligent data processing with fewer errors, and it will benefit assistive devices working on local and cloud servers.
Keywords: disabled person; electroencephalogram; convolutional neural network; brain signal classification
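The abstract describes mapping classified brain signals to one of six mental states before actuating a command. A minimal sketch of that final decision step is shown below; the state names, the softmax head, and the confidence threshold are illustrative assumptions, not details from the paper.

```python
import math

# Hypothetical labels for the six primary mental states (the paper does not list them).
STATES = ["rest", "focus", "left", "right", "up", "down"]

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits, threshold=0.5):
    """Map six classifier logits to a mental state; return None (no command)
    when the top probability is below the confidence threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return STATES[best] if probs[best] >= threshold else None
```

Rejecting low-confidence predictions like this is one common way to keep a real-time command pipeline from actuating on noisy signals.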
Educational System for the Holy Quran and Its Sciences for Blind and Handicapped People Based on Google Speech API
17
Authors: Samir A. Elsagheer Mohamed, Allam Shehata Hassanin, Mohamed Tahar Ben Othman 《Journal of Software Engineering and Applications》 2014, Issue 3, pp. 150-161 (12 pages)
There is a great need to provide educational environments for blind and handicapped people. There are many Islamic websites and applications on the Internet dedicated to educational services for the Holy Quran and its sciences (Quran recitations, interpretations, etc.). Unfortunately, blind and handicapped people cannot use these services: they cannot operate the keyboard and the mouse, and the ability to read and write is essential to benefit from them. In this paper, we present an educational environment that allows these people to take full advantage of the scientific materials. This is done by interacting with the system through voice commands, speaking directly without the need to write or to use the mouse. Google Speech API is used for the speech recognition, with preprocessing and post-processing phases to improve accuracy. For blind people, the responses to these commands are played back through the audio device instead of being displayed on the screen; the text is also displayed on the screen to help other people make use of the system.
Keywords: blind, illiterate, and manually disabled people; Quran sciences; speech recognition; learning systems
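The system described above maps recognized utterances to actions after a post-processing phase. A minimal sketch of such a dispatch step is given below; the English command vocabulary and the fuzzy-matching cutoff are illustrative assumptions (the actual system handles spoken commands through Google Speech API, not this toy vocabulary).

```python
import re
import difflib

# Hypothetical command vocabulary mapped to action codes.
COMMANDS = {
    "play recitation": "PLAY",
    "stop recitation": "STOP",
    "next verse": "NEXT",
    "previous verse": "PREV",
}

def normalize(text):
    """Post-processing: lowercase and strip punctuation and stray characters."""
    return re.sub(r"[^a-z ]", "", text.lower()).strip()

def resolve(recognized, cutoff=0.6):
    """Fuzzy-match the recognized utterance against the command vocabulary,
    tolerating small recognition errors; return None when nothing is close."""
    match = difflib.get_close_matches(normalize(recognized), COMMANDS, n=1, cutoff=cutoff)
    return COMMANDS[match[0]] if match else None
```

Tolerant matching of this kind is one way a voice interface can absorb minor recognizer mistakes instead of forcing the user to repeat a command.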
Modeling of Computer Virus Propagation with Fuzzy Parameters
18
Authors: Reemah M. Alhebshi, Nauman Ahmed, Dumitru Baleanu, Umbreen Fatima, Fazal Dayan, Muhammad Rafiq, Ali Raza, Muhammad Ozair Ahmad, Emad E. Mahmoud 《Computers, Materials & Continua》 SCIE EI 2023, Issue 3, pp. 5663-5678 (16 pages)
Typically, a computer becomes infectious as soon as it is infected. It is a reality that no antivirus software can identify and eliminate all kinds of viruses, suggesting that infections will persist on the Internet. To better understand the dynamics of virus propagation, a computer virus spread model with fuzzy parameters is presented in this work. It is assumed that not all infected computers contribute equally to the virus transmission process and that each computer has a different degree of infectivity, depending on the quantity of virus it carries. Accordingly, the parameters β and γ, being functions of the computer virus load, are treated as fuzzy numbers. Using fuzzy theory helps us understand the spread of computer viruses more realistically, since these parameters have fixed values in classical models. The essential features of the model, such as the reproduction number and the equilibrium analysis, are discussed in the fuzzy sense. Moreover, under fuzziness, two numerical methods, the forward Euler technique and a nonstandard finite difference (NSFD) scheme, are developed and analyzed. As evidenced by the numerical simulations, the proposed NSFD method preserves the main features of the dynamical system and can be considered a reliable tool for predicting such solutions.
Keywords: SIR model; fuzzy parameters; computer virus; NSFD scheme; stability
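The abstract contrasts a forward Euler discretization with an NSFD scheme whose key property is preserving the dynamics (notably positivity) for any step size. The sketch below shows one standard NSFD update for an SIR-type virus model with triangular fuzzy β and γ evaluated at their vertices; the exact scheme and fuzzy representation in the paper may differ.

```python
def triangular(lo, mid, hi):
    """A triangular fuzzy number, represented here simply by its three vertices."""
    return (lo, mid, hi)

def nsfd_sir(S0, I0, R0, beta, gamma, h, steps):
    """Nonstandard finite-difference update for S' = -bSI, I' = bSI - gI, R' = gI.
    Putting the loss terms in denominators keeps S, I, R nonnegative for any h."""
    S, I, R = S0, I0, R0
    for _ in range(steps):
        S = S / (1 + h * beta * I)
        I = (I + h * beta * S * I) / (1 + h * gamma)
        R = R + h * gamma * I
    return S, I, R

def fuzzy_nsfd(S0, I0, R0, beta_fz, gamma_fz, h, steps):
    """Run the crisp scheme at each vertex of the triangular parameters,
    yielding lower/core/upper trajectories of the fuzzy solution."""
    return [nsfd_sir(S0, I0, R0, b, g, h, steps)
            for b, g in zip(beta_fz, gamma_fz)]
```

Unlike forward Euler, which can produce negative compartments for large h, this update stays in the feasible region by construction, which is the dynamic-consistency property the abstract attributes to the NSFD method.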
A Time-Series Classification Method Based on Multi-Scale Gated Convolution and Deep Attention (Cited: 1)
19
Authors: 杨瑞, 张海清, 李代伟, Rattasit Sukhahuta, 于曦, 唐聃 《软件导刊》 2025, Issue 2, pp. 33-39 (7 pages)
To address the difficulty existing time-series classification methods have in fully capturing deep features in sequences, and their insufficient feature learning, this paper proposes MGDA-Net, a time-series classification network based on multi-scale gated convolution and deep attention, which effectively improves classification accuracy. MGDA-Net uses a multi-scale gated convolution module to capture information at multiple scales, and strengthens feature extraction by filtering and regulating the feature flow through a gating mechanism. A deep attention module further captures spatial relationships between features while preserving inter-channel relationships, improving the model's ability to learn important features; residual connections are introduced to promote feature reuse and information flow. Experimental results show that MGDA-Net achieved the highest ranking and the lowest average error across 20 time-series datasets, with classification accuracy gains of 2.3%-10.5% on several high-dimensional datasets, demonstrating its effectiveness.
Keywords: time-series classification; multi-scale gated convolution; deep attention; residual network
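The core operation the abstract describes, convolving at several kernel sizes and gating each feature map with a learned sigmoid gate, can be sketched in plain Python for a 1-D sequence. This is a minimal GLU-style illustration under assumed kernel values, not the MGDA-Net module itself (which also includes deep attention and residual connections).

```python
import math

def conv1d(x, kernel):
    """Valid 1-D cross-correlation of sequence x with a kernel."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gated_multiscale(x, kernels_feat, kernels_gate):
    """For each scale, modulate the feature convolution elementwise by a
    sigmoid-activated gate convolution, and return one output per scale."""
    out = []
    for kf, kg in zip(kernels_feat, kernels_gate):
        feat = conv1d(x, kf)
        gate = [sigmoid(v) for v in conv1d(x, kg)]
        out.append([f * g for f, g in zip(feat, gate)])
    return out
```

Because the gate is a function of the input itself, it can suppress features at scales that are uninformative for a given sequence, which is the "filtering and regulating the feature flow" role described above.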
MARIE: One-Stage Object Detection Mechanism for Real-Time Identifying of Firearms (Cited: 1)
20
Authors: Diana Abi-Nader, Hassan Harb, Ali Jaber, Ali Mansour, Christophe Osswald, Nour Mostafa, Chamseddine Zaki 《Computer Modeling in Engineering & Sciences》 SCIE EI 2025, Issue 1, pp. 279-298 (20 pages)
Security and safety remain paramount concerns for both governments and individuals worldwide. In today's context, the frequency of crimes and terrorist attacks is alarmingly increasing and has become intolerable to society. Consequently, there is a pressing need for swift identification of potential threats to preemptively alert law enforcement and security forces, thereby preventing attacks or violent incidents. Recent advancements in big data analytics and deep learning have significantly enhanced the capabilities of computer vision in object detection, particularly in identifying firearms. This paper introduces a novel automatic firearm detection surveillance system using a one-stage detection approach named MARIE (Mechanism for Real-time Identification of Firearms). MARIE incorporates the Single Shot Multibox Detector (SSD) model, specifically optimized to balance the speed-accuracy trade-off critical in firearm detection applications. The SSD model was further refined by integrating the MobileNetV2 and InceptionV2 architectures for superior feature extraction. The experimental results demonstrate that this modified SSD configuration performs highly satisfactorily, surpassing existing methods trained on the same dataset in terms of the critical speed-accuracy trade-off. Through these innovations, MARIE sets a new standard in surveillance technology, offering a robust solution for enhancing public safety.
Keywords: firearm and gun detection; single-shot multi-box detector; deep learning; one-stage detector; MobileNet; Inception; convolutional neural network
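One-stage detectors such as SSD emit many overlapping candidate boxes per object, which are pruned with intersection-over-union (IoU) and non-maximum suppression (NMS). The sketch below shows that standard post-processing step in plain Python; the box coordinates and the 0.5 threshold are illustrative, not values from the MARIE system.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and discard any remaining box that overlaps it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

In a real-time firearm detector, this pruning runs on every frame, so its cost (and the IoU threshold) feeds directly into the speed-accuracy trade-off the abstract emphasizes.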