Journal Articles
1,190 articles found
1. Several Improved Models of the Mountain Gazelle Optimizer for Solving Optimization Problems
Authors: Farhad Soleimanian Gharehchopogh, Keyvan Fattahi Rishakan. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 727-780 (54 pages)
Optimization algorithms are crucial for solving NP-hard problems in engineering and computational sciences. Metaheuristic algorithms, in particular, have proven highly effective in complex optimization scenarios characterized by high dimensionality and intricate variable relationships. The Mountain Gazelle Optimizer (MGO) is notably effective but struggles to balance local search refinement and global space exploration, often leading to premature convergence and entrapment in local optima. This paper presents the Improved MGO (IMGO), which integrates three synergistic enhancements: dynamic chaos mapping using piecewise chaotic sequences to boost exploration diversity; Opposition-Based Learning (OBL) with adaptive, diversity-driven activation to speed up convergence; and structural refinements to the position update mechanisms to enhance exploitation. The IMGO underwent a comprehensive evaluation using 52 standardised benchmark functions and seven engineering optimization problems. Benchmark evaluations showed that IMGO achieved the highest rank in best solution quality for 31 functions, the highest rank in mean performance for 18 functions, and the highest rank in worst-case performance for 14 functions among 11 competing algorithms. Statistical validation using Wilcoxon signed-rank tests confirmed that IMGO outperformed individual competitors across 16 to 50 functions, depending on the algorithm. At the same time, Friedman ranking analysis placed IMGO with an average rank of 4.15, compared to the baseline MGO's 4.38, establishing the best overall performance. The evaluation of engineering problems revealed consistent improvements, including an optimal cost of 1.6896 for the welded beam design vs. MGO's 1.7249, a minimum cost of 5885.33 for the pressure vessel design vs. MGO's 6300, and a minimum weight of 2964.52 kg for the speed reducer design vs. MGO's 2990.00 kg. Ablation studies identified OBL as the strongest individual contributor, whereas complete integration achieved superior performance through synergistic interactions among components. Computational complexity analysis established an O(T×N×5×f(P)) time complexity, representing a 1.25× increase in fitness evaluation relative to the baseline MGO, validating the favorable accuracy-efficiency trade-offs for practical optimization applications.
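Two of the enhancements named above (piecewise chaos mapping and opposition-based learning) have compact standard formulations. The sketch below is an illustrative assumption of how they could be combined, not the paper's exact update rules; `lb`/`ub` are the search-space bounds.

```python
def opposite(x, lb, ub):
    """Opposition-Based Learning: mirror a candidate within [lb, ub]."""
    return [lo + hi - xi for xi, lo, hi in zip(x, lb, ub)]

def piecewise_chaotic(z, p=0.4):
    """One step of the standard piecewise chaotic map on (0, 1),
    used here as a source of diverse exploration coefficients."""
    if z < p:
        return z / p
    if z < 0.5:
        return (z - p) / (0.5 - p)
    if z < 1 - p:
        return (1 - p - z) / (0.5 - p)
    return (1 - z) / p

def obl_refine(pop, lb, ub, fitness):
    """Keep each candidate or its opposite, whichever minimizes fitness."""
    return [min(x, opposite(x, lb, ub), key=fitness) for x in pop]
```

In a full optimizer, `obl_refine` would be triggered adaptively (the paper uses a diversity-driven activation) rather than on every iteration.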
Keywords: metaheuristic algorithm; dynamic chaos integration; opposition-based learning; mountain gazelle optimizer; optimization
2. A Comparative Benchmark of Deep Learning Architectures for AI-Assisted Breast Cancer Detection in Mammography Using the MammosighTR Dataset: A Nationwide Turkish Screening Study (2016–2022)
Authors: Nuh Azginoglu. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 1151-1173 (23 pages)
Breast cancer screening programs rely heavily on mammography for early detection; however, diagnostic performance is strongly affected by inter-reader variability, breast density, and the limitations of conventional computer-aided detection systems. Recent advances in deep learning have enabled more robust and scalable solutions for large-scale screening, yet a systematic comparison of modern object detection architectures on nationally representative datasets remains limited. This study presents a comprehensive quantitative comparison of prominent deep learning–based object detection architectures for Artificial Intelligence-assisted mammography analysis using the MammosighTR dataset, developed within the Turkish National Breast Cancer Screening Program. The dataset comprises 12,740 patient cases collected between 2016 and 2022, annotated with BI-RADS categories, breast density levels, and lesion localization labels. A total of 31 models were evaluated, including One-Stage, Two-Stage, and Transformer-based architectures, under a unified experimental framework at both patient and breast levels. The results demonstrate that Two-Stage architectures consistently outperform One-Stage models, achieving approximately 2%–4% higher Macro F1-Scores and more balanced precision–recall trade-offs, with Double-Head R-CNN and Dynamic R-CNN yielding the highest overall performance (Macro F1 ≈ 0.84–0.86). This advantage is primarily attributed to the region proposal mechanism and improved class balance inherent to Two-Stage designs. One-Stage detectors exhibited higher sensitivity and faster inference, reaching Recall values above 0.88, but experienced minor reductions in Precision and overall accuracy (≈1%–2%) compared with Two-Stage models. Among Transformer-based architectures, Deformable DEtection TRansformer demonstrated strong robustness and consistency across datasets, achieving Macro F1-Scores comparable to CNN-based detectors (≈0.83–0.85) while exhibiting minimal performance degradation under distributional shifts. Breast density–based analysis revealed increased misclassification rates in medium-density categories (types B and C), whereas Transformer-based architectures maintained more stable performance in high-density type D tissue. These findings quantitatively confirm that both architectural design and tissue characteristics play a decisive role in diagnostic accuracy. Overall, the study provides a reproducible benchmark and highlights the potential of hybrid approaches that combine the accuracy of Two-Stage detectors with the contextual modeling capability of Transformer architectures for clinically reliable breast cancer screening systems.
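The Macro F1-Score used as the headline metric above averages per-class F1 so that rare classes weigh as much as common ones. A minimal reference computation (the class counts in the usage note are illustrative, not from the paper):

```python
def f1(tp, fp, fn):
    """Per-class F1 from true-positive, false-positive, false-negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(per_class_counts):
    """Unweighted mean of per-class F1 scores: the 'Macro' average."""
    scores = [f1(tp, fp, fn) for tp, fp, fn in per_class_counts]
    return sum(scores) / len(scores)
```

For example, a detector that is perfect on one class but misses a second entirely, `macro_f1([(5, 0, 0), (0, 0, 5)])`, scores 0.5, whereas a frequency-weighted average would hide the failure on the rare class.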
Keywords: deep learning; mammography; breast cancer detection; object detection; BI-RADS classification
3. Subtle Micro-Tremor Fusion: A Cross-Modal AI Framework for Early Detection of Parkinson's Disease from Voice and Handwriting Dynamics
Authors: H. Ahmed, Naglaa E. Ghannam, H. Mancy, Esraa A. Mahareek. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1070-1099 (30 pages)
Parkinson's disease remains a major clinical issue in terms of early detection, especially during its prodromal stage, when symptoms are not evident or not distinct. To address this problem, we proposed a new deep learning-based approach for detecting Parkinson's disease before any of the overt symptoms develop during the prodromal stage. We used 5 publicly accessible datasets, including UCI Parkinson's Voice, Spiral Drawings, PaHaW, NewHandPD, and PPMI, and implemented a dual-stream CNN–BiLSTM architecture with Fisher-weighted feature merging and SHAP-based explanation. The findings reveal that the model's performance was superior, achieving 98.2% accuracy, an F1-score of 0.981, and an AUC of 0.991 on the UCI Voice dataset. The model's performance on the remaining datasets was also comparable, with up to a 2–7 percent improvement in accuracy compared to existing strong models such as CNN–RNN–MLP, ILN–GNet, and CASENet. Across the evidence, the findings back the diagnostic promise of micro-tremor assessment and demonstrate that combining temporal and spatial features with a scatter-based segment for a multi-modal approach can be an effective and scalable platform for an "early," interpretable PD screening system.
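The Fisher-weighted feature merging named in the abstract can be approximated by scoring each feature with the two-class Fisher criterion (between-class separation over within-class spread) and normalizing the scores into merge weights. The formulation below is an illustrative assumption, not the paper's exact scheme:

```python
from statistics import mean, pvariance

def fisher_score(pos, neg):
    """Fisher criterion for one feature: (mu1 - mu2)^2 / (var1 + var2)."""
    num = (mean(pos) - mean(neg)) ** 2
    den = pvariance(pos) + pvariance(neg)
    return num / den if den else 0.0

def fisher_weights(pos_rows, neg_rows):
    """Per-feature Fisher scores, normalized to sum to 1, for weighted merging."""
    n_feat = len(pos_rows[0])
    scores = [fisher_score([r[j] for r in pos_rows],
                           [r[j] for r in neg_rows]) for j in range(n_feat)]
    total = sum(scores) or 1.0
    return [s / total for s in scores]
```

A feature whose values barely differ between patients and controls gets weight near zero, so it contributes little to the fused representation.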
Keywords: early Parkinson diagnosis; explainable AI (XAI); feature-level fusion; handwriting analysis; micro-tremor detection; multimodal fusion; Parkinson's disease; prodromal detection; voice signal processing
4. Planning a Course of a Computer Engineering Program by Bloom's Taxonomy
Authors: Valfredo Pilla Jr, Giancarlo F. Aguiar. Journal of Mechanics Engineering and Automation, 2015, Issue 4, pp. 263-267 (5 pages)
The planning of teaching for a course that belongs to an undergraduate program usually begins with the definition of its contents, which are derived from the syllabus of a political-pedagogical project. The contents listed are organized in a sequence considered logical. A set of actions is planned, such as lectures and laboratories, among others, through which the content will be developed. The previous training of the student is considered, as well as the concurrent and subsequent courses, the context of the course inside the program, and the specific and general objectives of the program. A set of assessments is also defined as part of this planning, along with the associated methodologies, techniques, and teaching objectives. In this context, this paper focuses on the sequencing of content, methodologies, and teaching techniques in a course. For this purpose, Bloom's Taxonomy of Educational Objectives is applied, which provides a hierarchical structure for the cognitive process. The value of this hierarchy of knowledge lies in giving the teacher greater awareness of the approaches to be adopted in the teaching process.
Keywords: teaching organization; Bloom's Taxonomy of Educational Objectives
5. An Internet-Enabled Integration of Concurrent Engineering with Co-design
Authors: WEN Quan, HE Jianmin. Journal of Wuhan University of Technology (武汉理工大学学报; indexed in CAS, CSCD, PKU Core), 2006, Supplement S2, pp. 468-473 (6 pages)
In this paper, a Web-based integration methodology and framework have been developed to facilitate collaborative and concurrent engineering design in distributed manufacturing environments. Distributed concurrent engineering and co-design are discussed as key components of the mechanism. The related integration system is presented, which includes four functional modules: co-design, Web-based visualization, manufacturing analysis, and look-up service. It can be used by a geographically distributed design team to organize collaborative and concurrent engineering design effectively. In particular, the collaborative mechanism, incorporated with Java-based and Internet-enabled technologies, can generate extended strategies for design and planning. Thus, the proposed integration architecture enables the system to be generic, open, and scalable. Finally, in view of the trend toward global manufacturing, a case study of Internet-enabled collaborative optimization is introduced and a discussion on teamwork capability is made.
Keywords: manufacturing industry; concurrent engineering; co-design; Internet-enabled integration; distributed system
6. Integration of data science with the intelligent IoT (IIoT): Current challenges and future perspectives (Cited by 4)
Authors: Inam Ullah, Deepak Adhikari, Xin Su, Francesco Palmieri, Celimuge Wu, Chang Choi. Digital Communications and Networks, 2025, Issue 2, pp. 280-298 (19 pages)
The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies by collecting data from these "things" and using intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses independently on either data science or the IIoT, rather than exploring their integration. Therefore, to address this gap, this article provides a comprehensive survey on the advances and integration of data science with the Intelligent IoT (IIoT) system by classifying the existing IoT-based data science techniques and presenting a summary of their various characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are visualized in the context of data science for IoT. In addition, this study reveals current opportunities to enhance data science and IoT market development. The current gaps and challenges faced in the integration of data science and IoT are comprehensively presented, followed by the future outlook and possible solutions.
Keywords: data science; Internet of Things (IoT); big data; communication systems; networks; security; data science analytics
7. Neural Networks and the Study of Time Series: An Application in Engineering Education
Authors: Jose Tarcisio Franco de Camargo, Estefano Vizconde Veraszto, Gilmar Barreto, Sergio Ferreira do Amaral. Journal of Mechanics Engineering and Automation, 2015, Issue 3, pp. 153-160 (8 pages)
Time series are an important object of study in the sciences, engineering, and business, especially in cases where it is expected to know, predict, and optimize behaviors. In this context, we intend to show the feasibility of using artificial neural networks in the study of several time series in an engineering course, especially those that have no overt behavior or cannot be modeled mathematically in a simple way, and that have direct application in the education of future engineers.
Keywords: engineering education; time series; mathematical modeling
8. Enhancing Evapotranspiration Estimation: A Bibliometric and Systematic Review of Hybrid Neural Networks in Water Resource Management (Cited by 1)
Authors: Moein Tosan, Mohammad Reza Gharib, Nasrin Fathollahzadeh Attar, Ali Maroosi. Computer Modeling in Engineering & Sciences, 2025, Issue 2, pp. 1109-1154 (46 pages)
Accurate estimation of evapotranspiration (ET) is crucial for efficient water resource management, particularly in the face of climate change and increasing water scarcity. This study performs a bibliometric analysis of 352 articles and a systematic review of 35 peer-reviewed papers, selected according to PRISMA guidelines, to evaluate the performance of Hybrid Artificial Neural Networks (HANNs) in ET estimation. The findings demonstrate that HANNs, particularly those combining Multilayer Perceptrons (MLPs), Recurrent Neural Networks (RNNs), and Convolutional Neural Networks (CNNs), are highly effective in capturing the complex nonlinear relationships and temporal dependencies characteristic of hydrological processes. These hybrid models, often integrated with optimization algorithms and fuzzy logic frameworks, significantly improve the predictive accuracy and generalization capabilities of ET estimation. The growing adoption of advanced evaluation metrics, such as Kling-Gupta Efficiency (KGE) and Taylor diagrams, highlights the increasing demand for more robust performance assessments beyond traditional methods. Despite the promising results, challenges remain, particularly regarding model interpretability, computational efficiency, and data scarcity. Future research should prioritize the integration of interpretability techniques, such as attention mechanisms, Local Interpretable Model-Agnostic Explanations (LIME), and feature importance analysis, to enhance model transparency and foster stakeholder trust. Additionally, improving the scalability and computational efficiency of HANN models is crucial, especially for large-scale, real-world applications. Approaches such as transfer learning, parallel processing, and hyperparameter optimization will be essential in overcoming these challenges. This study underscores the transformative potential of HANN models for precise ET estimation, particularly in water-scarce and climate-vulnerable regions. By integrating CNNs for automatic feature extraction and leveraging hybrid architectures, HANNs offer considerable advantages for optimizing water management, particularly in agriculture. Addressing challenges related to interpretability and scalability will be vital to ensuring the widespread deployment and operational success of HANNs in global water resource management.
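The Kling-Gupta Efficiency mentioned above decomposes model skill into correlation, variability ratio, and bias ratio; a perfect simulation scores 1. A minimal implementation of the 2009 formulation (assuming that is the variant the reviewed papers use):

```python
from statistics import mean, pstdev

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    ms, mo = mean(sim), mean(obs)
    ss, so = pstdev(sim), pstdev(obs)
    # Pearson correlation between simulated and observed series
    r = (sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
         / (len(sim) * ss * so))
    alpha = ss / so  # variability ratio
    beta = ms / mo   # bias ratio
    return 1 - ((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2) ** 0.5
```

Unlike plain correlation, KGE penalizes a model that doubles every observed value even though its correlation is perfect, which is why reviews favor it over traditional single-number metrics.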
Keywords: artificial neural networks; bibliometric analysis; evapotranspiration; hybrid models; research trends; systematic literature review; water resources management
9. Congruent Feature Selection Method to Improve the Efficacy of Machine Learning-Based Classification in Medical Image Processing
Authors: Mohd Anjum, Naoufel Kraiem, Hong Min, Ashit Kumar Dutta, Yousef Ibrahim Daradkeh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 357-384 (28 pages)
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and lead to an increase in computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve feature selection congruency. The more congruent pixels are sorted in descending order of selection, which identifies better regions than the distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
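The abstract's correlation-then-sort idea can be illustrated loosely: score each candidate feature by its absolute Pearson correlation with a target signal and keep the strongest first. This is a generic sketch of correlation-based ranking, not the paper's Congruent Feature Selection Method; the data shapes below are assumptions.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

def rank_features(features, target):
    """Sort feature names by |correlation with target|, strongest first."""
    scored = {name: abs(pearson(vals, target)) for name, vals in features.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

Irrelevant features fall to the bottom of the ranking, which is exactly the pruning that reduces computation time in the abstract's argument.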
Keywords: computer vision; feature selection; machine learning; region detection; texture analysis; image classification; medical images
10. Dynamic Multi-Graph Spatio-Temporal Graph Traffic Flow Prediction in Bangkok: An Application of a Continuous Convolutional Neural Network
Authors: Pongsakon Promsawat, Weerapan Sae-dan, Marisa Kaewsuwan, Weerawat Sudsutad, Aphirak Aphithana. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 579-607 (29 pages)
The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. The comparative analysis of DMST-GNODE and baseline models indicates that the DMST-GNODE model demonstrates superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values, alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend across 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, consistently maintaining superiority across all time intervals. These numerical highlights indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
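The RMSE and MAE figures quoted above are the standard point-forecast error measures; for reference, a minimal computation:

```python
def rmse(pred, actual):
    """Root Mean Square Error: penalizes large errors quadratically."""
    n = len(pred)
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / n) ** 0.5

def mae(pred, actual):
    """Mean Absolute Error: average magnitude of the errors."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)
```

Because RMSE squares each error, a model with RMSE well above its MAE (as with the PeMS08 figures here) is making a few large mistakes rather than many small ones.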
Keywords: graph neural networks; convolutional neural network; deep learning; dynamic multi-graph; spatio-temporal
11. A systematic mapping to investigate the application of machine learning techniques in requirement engineering activities
Authors: Shoaib Hassan, Qianmu Li, Khursheed Aurangzeb, Affan Yasin, Javed Ali Khan, Muhammad Shahid Anwar. CAAI Transactions on Intelligence Technology, 2024, Issue 6, pp. 1412-1434 (23 pages)
Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing capacity. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, in Requirement Engineering (RE) activities to solve the problems that occur in RE activities. The authors focus on a systematic mapping of past work to investigate those studies that focused on the application of supervised learning techniques in RE activities between 2002 and 2023. The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. Forty-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 algorithms. Among those algorithms, researchers mostly used the following five ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-nearest neighbour Classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. Our selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
Keywords: data sources; machine learning; requirement engineering; supervised learning algorithms
12. A Bayesian Optimized Stacked Long Short-Term Memory Framework for Real-Time Predictive Condition Monitoring of Heavy-Duty Industrial Motors
Authors: Mudasir Dilawar, Muhammad Shahbaz. Computers, Materials & Continua, 2025, Issue 6, pp. 5091-5114 (24 pages)
In the era of Industry 4.0, condition monitoring has emerged as an effective solution for process industries to optimize their operational efficiency. Condition monitoring helps minimize unplanned downtime, extend equipment lifespan, reduce maintenance costs, and improve production quality and safety. This research focuses on utilizing Bayesian search-based machine learning and deep learning approaches for the condition monitoring of industrial equipment. The study aims to enhance predictive maintenance for industrial equipment by forecasting vibration values based on domain-specific feature engineering. Early prediction of vibration enables proactive interventions to minimize downtime and extend the lifespan of critical assets. A dataset of load information and vibration values from a heavy-duty industrial slip ring induction motor (4600 kW) and gearbox equipped with vibration sensors is used as a case study. The study implements and compares six machine learning models with the proposed Bayesian-optimized stacked Long Short-Term Memory (LSTM) model. The hyperparameters used in the implementation of the models are selected based on the Bayesian optimization technique. Comparative analysis reveals that the proposed Bayesian-optimized stacked LSTM outperforms the other models, showcasing its capability to learn temporal features as well as long-term dependencies in time series information. The implemented machine learning models, Linear Regression (LR), Random Forest (RF), Gradient Boosting Regressor (GBR), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Support Vector Regressor (SVR), displayed mean squared errors of 0.9515, 0.4654, 0.1849, 0.0295, 0.2127, and 0.0273, respectively. The proposed model predicts the future vibration characteristics with a mean squared error of 0.0019 on the dataset containing motor load information and vibration characteristics. The results demonstrate that the proposed model outperforms the other models in terms of other evaluation metrics, with a mean absolute error of 0.0263 and a coefficient of determination of 0.882. The current research not only contributes to the comparative performance of machine learning models in condition monitoring but also showcases the practical implications of employing these techniques. By transitioning from reactive to proactive maintenance strategies, industries can minimize downtime, reduce costs, and prolong the lifespan of crucial assets. This study demonstrates the practical advantages of transitioning from reactive to proactive maintenance strategies using ML-based condition monitoring.
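Forecasting vibration from historical load and vibration readings, as above, starts by framing the series as supervised (window, next-value) pairs that an LSTM or regressor can consume. The window length below is an illustrative choice, and the model itself is deliberately omitted:

```python
def make_windows(series, lookback):
    """Turn a univariate series into (window, next_value) training pairs."""
    pairs = []
    for i in range(len(series) - lookback):
        pairs.append((series[i:i + lookback], series[i + lookback]))
    return pairs
```

For example, `make_windows([1, 2, 3, 4], 2)` yields `([1, 2], 3)` and `([2, 3], 4)`; in practice each window would also carry the motor-load features described in the abstract.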
Keywords: machine learning; deep learning; predictive maintenance; condition monitoring; Industry 4.0; domain-specific features
13. Evaluation and Benchmarking of Cybersecurity DDoS Attacks Detection Models through the Integration of FWZIC and MABAC Methods
Authors: Alaa Mahmood, Isa Avcı. Computer Systems Science & Engineering, 2025, Issue 1, pp. 401-417 (17 pages)
A Distributed Denial-of-Service (DDoS) attack poses a significant challenge in the digital age, disrupting online services with operational and financial consequences. Detecting such attacks requires innovative and effective solutions. The primary challenge lies in selecting the best among several DDoS detection models. This study presents a framework that combines several DDoS detection models with Multiple-Criteria Decision-Making (MCDM) techniques to compare and select the most effective models. The framework integrates a decision matrix from training several models on the CiC-DDoS2019 dataset with the Fuzzy Weighted Zero Inconsistency Criterion (FWZIC) and Multi-Attribute Boundary Approximation Area Comparison (MABAC) methodologies. FWZIC assigns weights to the evaluation criteria, while MABAC compares the detection models based on the assessed criteria. The results indicate that the FWZIC approach assigns weights to criteria reliably, with time complexity receiving the highest weight (0.2585) and F1 score receiving the lowest weight (0.14644). Among the models evaluated using the MABAC approach, the Support Vector Machine (SVM) ranked first with a score of 0.0444, making it the most suitable for this work. In contrast, Naive Bayes (NB) ranked lowest with a score of 0.0018. Objective validation and sensitivity analysis proved the reliability of the framework. This study provides a practical approach and insights for cybersecurity practitioners and researchers to evaluate DDoS detection models.
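MABAC scores each alternative by its distance from a border approximation area. The compact sketch below follows the standard method for benefit-type criteria (min-max normalization, weighted shift, geometric-mean border); the toy decision matrix in the test is an assumption, not the paper's data:

```python
from math import prod

def mabac_scores(matrix, weights):
    """Score alternatives (rows) against benefit criteria (columns) via MABAC."""
    n_alt = len(matrix)
    # 1. Min-max normalize each criterion column to [0, 1].
    cols = list(zip(*matrix))
    norm = [[(x - min(c)) / ((max(c) - min(c)) or 1) for x, c in zip(row, cols)]
            for row in matrix]
    # 2. Weighted matrix: v_ij = w_j * (r_ij + 1).
    v = [[w * (r + 1) for w, r in zip(weights, row)] for row in norm]
    # 3. Border approximation area: geometric mean of each column of v.
    g = [prod(col) ** (1 / n_alt) for col in zip(*v)]
    # 4. Score = sum of distances from the border; higher is better.
    return [sum(vij - gj for vij, gj in zip(row, g)) for row in v]
```

An alternative with a positive total sits above the border area (a strong candidate, like the SVM here), while a negative total sits below it.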
Keywords: cybersecurity attack; DDoS attacks; DDoS detection; MABAC; FWZIC
14. Structural Health Monitoring Using Image Processing and Advanced Technologies for the Identification of Deterioration of Building Structure: A Review
Authors: Kavita Bodke, Sunil Bhirud, Keshav Kashinath Sangle. Structural Durability & Health Monitoring, 2025, Issue 6, pp. 1547-1562 (16 pages)
Structural Health Monitoring (SHM) systems play a key role in managing buildings and infrastructure by delivering vital insights into their strength and structural integrity. There is a need for more efficient techniques to detect defects, as traditional methods are often prone to human error; this issue is also addressed through image processing (IP). In addition to IP, automated, accurate, and real-time detection of structural defects, such as cracks, corrosion, and material degradation that conventional inspection techniques may miss, is made possible by Artificial Intelligence (AI) technologies such as Machine Learning (ML) and Deep Learning (DL). This review examines the integration of computer vision and AI techniques in SHM, investigating their effectiveness in detecting various forms of structural deterioration. It also evaluates ML and DL models in SHM for their accuracy in identifying and assessing structural damage, ultimately enhancing safety, durability, and maintenance practices in the field. Key findings reveal that AI-powered approaches, especially those utilizing IP and DL models such as CNNs, significantly improve detection efficiency and accuracy, with high accuracies reported across various SHM tasks. However, significant research gaps remain, including challenges with the consistency, quality, and environmental resilience of image data; a notable lack of standardized models and datasets for training across diverse structures; and concerns regarding computational costs, model interpretability, and seamless integration with existing systems. Future work should focus on developing more robust models through data augmentation, transfer learning, and hybrid approaches, standardizing protocols, and fostering interdisciplinary collaboration to overcome these limitations and achieve more reliable, scalable, and affordable SHM systems.
关键词 Structural health monitoring artificial intelligence machine learning image processing cracks and damage detection
HybridEdge: A Lightweight and Secure Hybrid Communication Protocol for the Edge-Enabled Internet of Things
15
Authors: Amjad Khan, Rahim Khan, Fahad Alturise, Tamim Alkhalifah. Computers, Materials & Continua, 2025, Issue 2, pp. 3161–3178 (18 pages)
The Internet of Things (IoT) and edge-assisted networking infrastructures are capable of bringing data processing and accessibility services locally to the respective edge rather than to a centralized module. These infrastructures are very effective in providing a fast response to the queries of requesting modules, but their distributed nature has introduced other problems, such as security and privacy. To address these problems, various security-assisted communication mechanisms have been developed to safeguard every active module, i.e., devices and edges, from every possible vulnerability in the IoT. However, these methodologies have neglected one of the critical issues: the prediction of fraudulent devices, i.e., adversaries, preferably as early as possible in the IoT. In this paper, a hybrid communication mechanism is presented where a Hidden Markov Model (HMM) predicts the legitimacy of the requesting device (both source and destination), and the Advanced Encryption Standard (AES) safeguards the reliability of the data transmitted over a shared communication medium, preferably through a secret shared key and timestamp information. A device becomes trusted if it has passed both evaluation levels, i.e., HMM and message decryption, within a stipulated time interval. The proposed hybrid mechanism, along with existing state-of-the-art approaches, has been simulated in a realistic IoT environment to verify the security measures. These evaluations were carried out in the presence of intruders capable of launching various attacks simultaneously, such as man-in-the-middle, device impersonation, and masquerading attacks. Moreover, the proposed approach has proven more effective than existing state-of-the-art approaches due to its exceptional performance in communication, processing, and storage overheads, i.e., 13%, 19%, and 16%, respectively. Finally, the proposed hybrid approach is evaluated against well-known security attacks in the IoT.
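The two-stage trust decision described above (an HMM-based legitimacy check combined with a freshness-bounded timestamp) can be sketched in a few lines. Everything below is illustrative: the two-state behaviour model, its probabilities, and the trust threshold are invented for demonstration, and the AES message-decryption stage is omitted.

```python
import time

def forward_likelihood(obs, start_p, trans_p, emit_p):
    """Forward algorithm: P(observation sequence | HMM) for a discrete HMM."""
    states = range(len(start_p))
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[q] * trans_p[q][s] for q in states) * emit_p[s][o]
                 for s in states]
    return sum(alpha)

# Toy two-state "legitimate device" model over behaviour symbols
# (0 = normal request, 1 = suspicious request). All numbers are invented.
LEGIT = dict(start_p=[0.9, 0.1],
             trans_p=[[0.95, 0.05], [0.4, 0.6]],
             emit_p=[[0.9, 0.1], [0.3, 0.7]])

def is_trusted(obs, timestamp, now, max_skew=5.0, threshold=1e-4):
    """Trust a device only if its timestamp is fresh AND its observed
    behaviour is plausible under the legitimate-device HMM."""
    if abs(now - timestamp) > max_skew:
        return False
    return forward_likelihood(obs, **LEGIT) > threshold

now = time.time()
print(is_trusted([0, 0, 0, 0], now, now))        # mostly normal behaviour
print(is_trusted([1, 1, 1, 1], now - 60, now))   # suspicious and stale
```

In a full system, a device passing this check would still have to decrypt an AES-protected challenge with the shared secret key before being admitted.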
Keywords: Internet of Things; information security; authentication; hidden Markov model; multimedia
Exploring the Effectiveness of Machine Learning and Deep Learning Algorithms for Sentiment Analysis: A Systematic Literature Review
16
Authors: Jungpil Shin, Wahidur Rahman, Tanvir Ahmed Bakhtiar Mazrur, Md. Mohsin Mia, Romana Idress Ekfa, Md. Sajib Rana, Pankoo Kim. Computers, Materials & Continua, 2025, Issue 9, pp. 4105–4153 (49 pages)
Sentiment analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting the predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
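As a concrete example of the lexicon-based family of methods surveyed here, a minimal polarity scorer can be written in a few lines. The mini-lexicon and the single-token negation rule below are invented for illustration; real systems use resources such as SentiWordNet or VADER and far richer linguistic handling.

```python
# Hypothetical mini-lexicon mapping words to polarity scores.
LEXICON = {"good": 1, "great": 2, "excellent": 2,
           "bad": -1, "terrible": -2, "awful": -2}

def lexicon_sentiment(text):
    """Sum word polarities; a bare 'not' flips the sign of the next word."""
    score, negate = 0, False
    for tok in text.lower().split():
        tok = tok.strip(".,!?")
        if tok == "not":
            negate = True
            continue
        polarity = LEXICON.get(tok, 0)
        score += -polarity if negate else polarity
        negate = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(lexicon_sentiment("The service was great and the food excellent"))  # positive
print(lexicon_sentiment("This was not good, truly terrible"))             # negative
```

ML and DL approaches learn these polarity cues from labelled data instead of a fixed word list, which is why the reviewed studies find them more robust to domain shift and sarcasm.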
Keywords: Natural Language Processing (NLP); Machine Learning (ML); sentiment analysis; deep learning; textual data
Artificial intelligence and the impact of multiomics on the reporting of case reports
17
Authors: Aishwarya Boini, Vincent Grasso, Heba Taher, Andrew A Gumbs. World Journal of Clinical Cases, 2025, Issue 15, pp. 1–6 (6 pages)
The integration of artificial intelligence (AI) and multiomics has transformed clinical and life sciences, enabling precision medicine and redefining disease understanding. Scientific publications grew significantly, from 2.1 million in 2012 to 3.3 million in 2022, with AI research tripling during this period. Multiomics fields, including genomics and proteomics, also advanced, exemplified by the Human Proteome Project achieving a 90% complete blueprint by 2021. This growth highlights both opportunities and challenges in integrating AI and multiomics into clinical reporting. A review of studies and case reports was conducted to evaluate AI and multiomics integration. Key areas analyzed included diagnostic accuracy, predictive modeling, and personalized treatment approaches driven by AI tools. Case examples were studied to assess impacts on clinical decision-making. AI and multiomics enhanced data integration, predictive insights, and treatment personalization. Fields like radiomics, genomics, and proteomics improved diagnostics and guided therapy. For instance, the "AI radiomics, genomics, oncopathomics, and surgomics project" combined radiomics and genomics for surgical decision-making, enabling preoperative, intraoperative, and postoperative interventions. AI applications in case reports predicted conditions like postoperative delirium and monitored cancer progression using genomic and imaging data. AI and multiomics enable standardized data analysis, dynamic updates, and predictive modeling in case reports. Traditional reports often lack objectivity, but AI enhances reproducibility and decision-making by processing large datasets. Challenges include data standardization, biases, and ethical concerns. Overcoming these barriers is vital for optimizing AI applications and advancing personalized medicine. AI and multiomics integration is revolutionizing clinical research and practice. Standardizing data reporting and addressing challenges in ethics and data quality will unlock their full potential. Emphasizing collaboration and transparency is essential for leveraging these tools to improve patient care and scientific communication.
Keywords: Artificial intelligence; multiomics; precision medicine; genomics; proteomics; metabolomics; radiomics; pathomics; surgomics; predictive modeling
GPU Usage Time-Based Ordering Management Technique for Tasks Execution to Prevent Running Failures of GPU Tasks in Container Environments
18
Authors: Joon-Min Gil, Hyunsu Jeong, Jihun Kang. Computers, Materials & Continua, 2025, Issue 2, pp. 2199–2213 (15 pages)
In a cloud environment, graphics processing units (GPUs) are the primary devices used for high-performance computation. They exploit flexible resource utilization, a key advantage of cloud environments. Multiple users share GPUs, which serve as coprocessors of central processing units (CPUs) and are activated only when tasks demand GPU computation. In a container environment, where resources can be shared among multiple users, GPU utilization can be increased by minimizing idle time, because the tasks of many users run on a single GPU. However, unlike CPUs and memory, GPUs cannot logically multiplex their resources. Additionally, GPU memory does not support over-utilization: when it runs out, tasks will fail. Therefore, it is necessary to regulate the order of execution of concurrently running GPU tasks to avoid such task failures and to ensure equitable GPU sharing among users. In this paper, we propose a GPU task execution order management technique that controls GPU usage via time-based containers. The technique seeks to ensure equal GPU time among users in a container environment and to prevent task failures. We use a deferred processing method to prevent GPU memory shortages when GPU tasks are executed simultaneously and to determine the execution order based on GPU usage time. As the order of GPU tasks cannot be arbitrarily adjusted from outside once a task commences, a GPU task is indirectly paused by pausing its container. In addition, because container pause/unpause decisions are based on information about the available GPU memory capacity, overuse of GPU memory can be prevented at the source. As a result, the strategy prevents task failures, and experiments show that GPU tasks are processed in an appropriate order.
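The ordering idea above (start the container with the least accumulated GPU time first, and defer any task whose memory demand exceeds the free GPU memory rather than letting it fail) can be sketched as a round-based simulation. The class, its parameters, and the round model are assumptions for illustration, not the authors' actual container-pause implementation.

```python
class GpuScheduler:
    """Toy sketch of time-fair, memory-aware GPU task ordering."""

    def __init__(self, total_mem_mb):
        self.total_mem = total_mem_mb
        self.used_time = {}          # container -> accumulated GPU seconds
        self.pending = []            # (container, mem_mb, runtime_s)

    def submit(self, container, mem_mb, runtime_s):
        self.used_time.setdefault(container, 0)
        self.pending.append((container, mem_mb, runtime_s))

    def schedule(self):
        """Return the container start order. Each round, containers with the
        least accumulated GPU time go first; a task whose memory demand
        exceeds the free GPU memory is deferred (its container 'paused')
        instead of started, so it cannot hit an out-of-memory failure."""
        order = []
        while self.pending:
            free, deferred, started = self.total_mem, [], False
            self.pending.sort(key=lambda t: self.used_time[t[0]])
            for cont, mem, secs in self.pending:
                if mem <= free:
                    free -= mem
                    self.used_time[cont] += secs
                    order.append(cont)
                    started = True
                else:
                    deferred.append((cont, mem, secs))
            if not started:          # remaining tasks can never fit: stop
                break
            self.pending = deferred
        return order

sched = GpuScheduler(total_mem_mb=8000)
sched.submit("A", 6000, 10)   # heavy container A
sched.submit("B", 3000, 2)    # light container B
sched.submit("A", 6000, 10)
print(sched.schedule())       # A's second task is deferred so B runs first
```

The fairness sort corresponds to usage-time-based ordering, and the `deferred` list plays the role of paused containers awaiting free GPU memory.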
Keywords: Cloud computing; container; GPGPU; resource management
Enhancing User Experience in AI-Powered Human-Computer Communication with Vocal Emotions Identification Using a Novel Deep Learning Method
19
Authors: Ahmed Alhussen, Arshiya Sajid Ansari, Mohammad Sajid Mohammadi. Computers, Materials & Continua, 2025, Issue 2, pp. 2909–2929 (21 pages)
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires, as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demands be understood. Voice emotion recognition has become an essential component of modern HCC networks. Integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques were developed in the past. With the development of artificial intelligence (AI), and especially Deep Learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the proposed SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used, respectively, to remove noise and extract features from the data. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. Python software was used to implement the proposed framework. In the evaluation phase, numerous metrics are used to assess the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The proposed approach is tested on a Python platform, and the SHO-LSTM's outcomes are compared with those of previously conducted research. Based on these comparative assessments, our approach outperforms current approaches in vocal emotion recognition.
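The role SHO plays here, tuning LSTM hyperparameters against a validation objective, can be illustrated with a toy herd-style optimizer. Both the objective function and the movement rule below are hypothetical simplifications: the real SHO models herd roles (leaders, followers, deserters), and the real objective would involve training the LSTM on MFCC features extracted from RAVDESS.

```python
import random

def toy_validation_loss(lr, units):
    # Hypothetical stand-in for "train an LSTM with these hyperparameters
    # and return validation loss"; minimised at lr=0.01, units=128.
    return (lr - 0.01) ** 2 * 1e4 + (units - 128) ** 2 / 1e4

def herd_optimize(objective, n_agents=20, iters=50, seed=0):
    """Loose herd-style metaheuristic: each agent drifts halfway toward the
    best-known solution with small random perturbations (an illustrative
    simplification, not the paper's SHO algorithm)."""
    rng = random.Random(seed)
    herd = [(rng.uniform(1e-4, 0.1), rng.uniform(16, 512)) for _ in range(n_agents)]
    best = min(herd, key=lambda p: objective(*p))
    for _ in range(iters):
        moved = []
        for lr, units in herd:
            lr += 0.5 * (best[0] - lr) + rng.gauss(0, 0.001)
            units += 0.5 * (best[1] - units) + rng.gauss(0, 4)
            moved.append((min(max(lr, 1e-4), 0.1), min(max(units, 16), 512)))
        herd = moved
        cand = min(herd, key=lambda p: objective(*p))
        if objective(*cand) < objective(*best):
            best = cand
    return best

lr, units = herd_optimize(toy_validation_loss)
print(f"tuned lr={lr:.4f}, units={units:.0f}")
```

The tuned hyperparameters would then configure the LSTM used for the final emotion-classification run.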
Keywords: Human-computer communication (HCC); vocal emotions; live vocal; artificial intelligence (AI); deep learning (DL); selfish herd optimization-tuned long short-term memory (SHO-LSTM)
Secure monitoring of Internet of vehicles in 6G networks through intelligent reflecting surfaces leveraging AI
20
Authors: Sharanya Selvaraj, Balasubramanian Prabhu Kavin, Priyan Malarvizhi Kumar, Mohammed J. F. Alenazi, Zaid Bin Faheem, Jehad Ali. Digital Communications and Networks, 2025, Issue 6, pp. 2003–2015 (13 pages)
The ensemble of Information and Communication Technology (ICT) and Artificial Intelligence (AI) has catalysed many developments and innovations in the automotive industry. 6G networks emerge as a promising technology for realising Intelligent Transport Systems (ITS), which benefit drivers and society. As the network is highly heterogeneous and robust, the physical-layer security and node reliability of the vehicles hold paramount significance. This work presents a novel methodology that integrates computer vision techniques and a Lightweight Super Learning Ensemble (LSLE) of Machine Learning (ML) algorithms to predict the presence of intruders in the network. Furthermore, our work utilizes a Deep Convolutional Neural Network (DCNN) to detect obstacles by identifying the Region of Interest (ROI) in images. As the network utilizes mm-waves with shorter wavelengths, Intelligent Reflecting Surfaces (IRS) are employed to redirect signals to legitimate nodes, thereby mitigating the malicious activity of intruders. The experimental simulation shows that the proposed LSLE outperforms state-of-the-art techniques in terms of accuracy, False Positive Rate (FPR), recall, F1-score, and precision. The model achieves a consistent performance improvement, with an average FPR of 85.08% and an accuracy of 92.01%. In the future, detecting moving obstacles and real-time network traffic monitoring can be included to achieve more realistic results.
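A super learning ensemble of the kind the LSLE builds on can be illustrated with a weighted stacking sketch. The three threshold "detectors", the feature tuple they inspect, and the accuracy-weighted meta-rule are all hypothetical placeholders; the paper's lightweight base learners and meta-learner are not reproduced here.

```python
def base_learners():
    """Three hypothetical weak intruder detectors, each thresholding one
    feature of a network-flow record (e.g. rate, payload entropy, signal
    deviation). 1 = flagged as intruder."""
    return [
        lambda x: 1 if x[0] > 0.7 else 0,
        lambda x: 1 if x[1] > 0.5 else 0,
        lambda x: 1 if x[2] > 0.6 else 0,
    ]

def super_learner(models, val_X, val_y):
    """Weight each base model by its validation accuracy, a lightweight
    stand-in for the meta-learner in a super learning ensemble."""
    weights = [sum(m(x) == y for x, y in zip(val_X, val_y)) / len(val_y)
               for m in models]
    def predict(x):
        score = sum(w * m(x) for w, m in zip(weights, models))
        return 1 if score >= sum(weights) / 2 else 0
    return predict

# Tiny synthetic validation set: label 1 = intruder.
val_X = [(0.9, 0.8, 0.9), (0.1, 0.2, 0.1), (0.8, 0.4, 0.7), (0.2, 0.6, 0.3)]
val_y = [1, 0, 1, 0]
predict = super_learner(base_learners(), val_X, val_y)
print(predict((0.95, 0.9, 0.8)))  # clearly intruder-like record
```

The meta-learner's weights let accurate base models dominate the vote, which is the core idea behind super learning's improvement over any single base classifier.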
Keywords: Intelligent reflecting surface; 6G; AI; deep convolutional neural network; super learning; meta learner; intelligent transport systems