Journal Articles
4,682 articles found
1. Harmonization of Heart Disease Dataset for Accurate Diagnosis: A Machine Learning Approach Enhanced by Feature Engineering
Authors: Ruhul Amin, Md. Jamil Khan, Tonway Deb Nath, Md. Shamim Reza, Jungpil Shin. Computers, Materials & Continua, 2025, Issue 3, pp. 3907-3919 (13 pages).
Heart disease includes a multiplicity of medical conditions that affect the structure, blood vessels, and general operation of the heart. Numerous researchers have made progress in detecting and predicting early heart disease, but more remains to be accomplished. The diagnostic accuracy of many current studies is inadequate because they attempt to predict heart disease using traditional approaches. By fusing data from several regions of the country, we intend to increase the accuracy of heart disease prediction. We adopt a statistical approach that promotes insights triggered by feature interactions, revealing intricate patterns in the data that cannot be adequately captured by any single feature. We processed the data using techniques including feature scaling, outlier detection and replacement, null and missing value imputation, and more to improve data quality. Furthermore, the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to construct feature interactions. To reduce dimensionality, we subsequently applied PCA retaining 95% of the variance. To identify patients with heart disease, hyperparameter-tuned machine learning algorithms such as RF, XGBoost, Gradient Boosting, LightGBM, CatBoost, SVM, and MLP are utilized, along with ensemble models. The models' overall prediction performance ranges from 88% to 92%. To attain state-of-the-art results, we then used a 1D CNN model, which significantly enhanced the prediction with an accuracy of 96.36%, precision of 96.45%, recall of 96.36%, specificity of 99.51%, and F1 score of 96.34%. Without feature interaction, the RF model produces the best results among all the classifiers, with accuracy of 90.21%, precision of 90.40%, recall of 90.86%, specificity of 90.91%, and F1 score of 90.63%. With the proposed feature engineering, the 1D CNN model is about 7% better than its counterpart without feature engineering. This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
Keywords: heart disease; harmonization; feature interaction; PCA; model hyper tuning; machine learning
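The interaction-aware screening and PCA steps described in this abstract can be sketched with SciPy and scikit-learn. This is a minimal illustration only, assuming a pandas DataFrame with a binary target column; the column handling, thresholds, and screening logic are assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch of correlation / chi-square feature screening followed by
# PCA at 95% retained variance (illustrative, not the paper's exact pipeline).
import pandas as pd
from scipy.stats import chi2_contingency, pearsonr
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def screen_features(df: pd.DataFrame, target: str, num_cols, cat_cols, alpha=0.05):
    """Keep numerical features correlated with the target and categorical
    features that pass a chi-square independence test."""
    kept = []
    for col in num_cols:
        _, p = pearsonr(df[col], df[target])
        if p < alpha:
            kept.append(col)
    for col in cat_cols:
        table = pd.crosstab(df[col], df[target])
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            kept.append(col)
    return kept

def reduce_with_pca(df: pd.DataFrame, cols):
    """Standardize the screened features and keep components explaining 95% variance."""
    X = StandardScaler().fit_transform(df[cols])
    return PCA(n_components=0.95).fit_transform(X)
```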
2. A Metamodeling Approach to Enforcing the No-Cloning Theorem in Quantum Software Engineering
Authors: Dae-Kyoo Kim. Computers, Materials & Continua, 2025, Issue 8, pp. 2549-2572 (24 pages).
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken for violations of the no-cloning theorem but are proven compliant under formal analysis, serving as constructive validations of the metamodel's expressiveness and correctness. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
Keywords: metamodeling; no-cloning theorem; quantum software; software engineering
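The conditional-copying edge case mentioned in the abstract can be illustrated numerically. The sketch below uses plain NumPy (not the paper's UML/OCL metamodel): the linear map that copies the computational basis states into a blank register is exactly the CNOT gate, and on a superposition it produces an entangled state rather than a clone, which is the content of the no-cloning theorem.

```python
# NumPy illustration (not the paper's metamodel): basis-state copying extended
# linearly is CNOT, and CNOT applied to |+>|0> yields a Bell state, not |+>|+>.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Columns of U are its action on |00>, |01>, |10>, |11>; copying the basis
# states (|x>|0> -> |x>|x>) and completing unitarily gives exactly CNOT.
U = np.column_stack([np.kron(ket0, ket0),   # |00> -> |00>
                     np.kron(ket0, ket1),   # |01> -> |01>
                     np.kron(ket1, ket1),   # |10> -> |11>
                     np.kron(ket1, ket0)])  # |11> -> |10>

plus = (ket0 + ket1) / np.sqrt(2)
true_clone = np.kron(plus, plus)            # what a perfect cloner would output
cnot_output = U @ np.kron(plus, ket0)       # what the linear (CNOT) map outputs

print(np.allclose(cnot_output, true_clone))  # False: |+> is not cloned
```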
3. Computer Modeling Approaches for Blockchain-Driven Supply Chain Intelligence: A Review on Enhancing Transparency, Security, and Efficiency
Authors: Puranam Revanth Kumar, Gouse Baig Mohammad, Pallati Narsimhulu, Dharnisha Narasappa, Lakshmana Phaneendra Maguluri, Subhav Singh, Shitharth Selvarajan. Computer Modeling in Engineering & Sciences, 2025, Issue 9, pp. 2779-2818 (40 pages).
Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from problems such as data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain's decentralized and irreversible ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper examines computer-modeling-based blockchain frameworks for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of the process, we review the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and how they connect to other cutting-edge innovations like Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions using smart contracts, increases security by using cryptographic techniques, and improves transparency in the supply chain by providing immutable records. Regulatory concerns, challenges with interoperability, and scalability all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
Keywords: blockchain; supply chain management; transparency; security; smart contracts; decentralization; efficiency
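The tamper-evidence property the review attributes to blockchain can be illustrated with a toy append-only, hash-chained ledger. This is a sketch in plain Python, not any specific blockchain platform or the paper's analytical models; the event fields are invented.

```python
# Toy append-only ledger: each record stores the hash of the previous record,
# so tampering with an earlier supply-chain event breaks verification later.
import hashlib
import json

class Ledger:
    def __init__(self):
        self.blocks = []

    def append(self, event: dict) -> None:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
        self.blocks.append({"event": event, "prev": prev_hash,
                            "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for block in self.blocks:
            body = json.dumps({"event": block["event"], "prev": prev_hash},
                              sort_keys=True)
            if block["prev"] != prev_hash or \
               block["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev_hash = block["hash"]
        return True

ledger = Ledger()
ledger.append({"sku": "A-100", "step": "shipped", "by": "supplier"})
ledger.append({"sku": "A-100", "step": "received", "by": "warehouse"})
print(ledger.verify())                      # True
ledger.blocks[0]["event"]["step"] = "lost"  # tamper with history
print(ledger.verify())                      # False
```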
4. Digital Twins and Cyber-Physical Systems: A New Frontier in Computer Modeling
Authors: Vidyalakshmi G, S Gopikrishnan, Wadii Boulila, Anis Koubaa, Gautam Srivastava. Computer Modeling in Engineering & Sciences, 2025, Issue 4, pp. 51-113 (63 pages).
Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
Keywords: cyber-physical systems; digital twin; efficiency; Industry 4.0; robustness and intelligence
5. Attention U-Net for Precision Skeletal Segmentation in Chest X-Ray Imaging: Advancing Person Identification Techniques in Forensic Science
Authors: Hazem Farah, Akram Bennour, Hama Soltani, Mouaaz Nahas, Rashiq Rafiq Marie, Mohammed Al-Sarem. Computers, Materials & Continua, 2025, Issue 11, pp. 3335-3348 (14 pages).
This study presents an advanced method for post-mortem person identification using the segmentation of skeletal structures from chest X-ray images. The proposed approach employs the Attention U-Net architecture, enhanced with gated attention mechanisms, to refine segmentation by emphasizing spatially relevant anatomical features while suppressing irrelevant details. By isolating skeletal structures, which remain stable over time compared to soft tissues, this method leverages bones as reliable biometric markers for identity verification. The model integrates custom-designed encoder and decoder blocks with attention gates, achieving high segmentation precision. To evaluate the impact of architectural choices, we conducted an ablation study comparing Attention U-Net with and without attention mechanisms, alongside an analysis of data augmentation effects. Training and evaluation were performed on a curated chest X-ray dataset, with segmentation performance measured using the Dice score, precision, and loss functions, achieving over 98% precision and a 94% Dice score. The extracted bone structures were further processed to derive unique biometric patterns, enabling robust and privacy-preserving person identification. Our findings highlight the effectiveness of attention mechanisms in improving segmentation accuracy and underscore the potential of chest bone-based biometrics in forensic and medical imaging. This work paves the way for integrating artificial intelligence into real-world forensic workflows, offering a non-invasive and reliable solution for post-mortem identification.
Keywords: bone extraction; segmentation of skeletal structures; chest X-ray images; person identification; deep learning; attention mechanisms; U-Net
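A gated attention block of the kind used in Attention U-Net can be sketched in PyTorch as below. Channel sizes are illustrative and the usual stride-2 resampling of the gating signal is omitted; this is not the paper's exact architecture.

```python
# Minimal additive attention gate in the style of Attention U-Net
# (illustrative sizes; gating and skip maps assumed to share spatial size).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # gating signal
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # skip connection
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, g, x):
        # g: decoder feature map, x: encoder skip feature map
        attn = self.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * attn   # suppress irrelevant regions of the skip connection

gate = AttentionGate(gate_ch=64, skip_ch=32, inter_ch=16)
g = torch.randn(1, 64, 128, 128)
x = torch.randn(1, 32, 128, 128)
print(gate(g, x).shape)  # torch.Size([1, 32, 128, 128])
```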
6. Feature Engineering Methods for Analyzing Blood Samples for Early Diagnosis of Hepatitis Using Machine Learning Approaches
Authors: Mohamed A. G. Hazber, Ebrahim Mohammed Senan, Hezam Saud Alrashidi. Computer Modeling in Engineering & Sciences, 2025, Issue 3, pp. 3229-3254 (26 pages).
Hepatitis is an infection that affects the liver through contaminated food or blood transfusions, and it has many types, ranging from mild to serious. Hepatitis is diagnosed through many blood tests and factors; Artificial Intelligence (AI) techniques have played an important role in early diagnosis and help physicians make decisions. This study evaluated the performance of Machine Learning (ML) algorithms on a hepatitis dataset. The dataset contains missing values, which were processed, and outliers were removed. The dataset was balanced with the Synthetic Minority Over-sampling Technique (SMOTE). The features of the dataset were processed in two ways: first, the Recursive Feature Elimination (RFE) algorithm was applied to rank the percentage contribution of each feature to the diagnosis of hepatitis, followed by selection of important features using the t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) algorithms. Second, the SelectKBest function was applied to score each attribute, again followed by the t-SNE and PCA algorithms. Finally, the classification algorithms K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT), and Random Forest (RF) were fed the dataset after the features had been processed with the different methods (RFE with t-SNE and PCA, and SelectKBest with t-SNE and PCA). All algorithms yielded promising results for diagnosing the hepatitis dataset. The RF with the RFE and PCA methods achieved accuracy, precision, recall, and AUC of 97.18%, 96.72%, 97.29%, and 94.2%, respectively, during the training phase. During the testing phase, it reached accuracy, precision, recall, and AUC of 96.31%, 95.23%, 97.11%, and 92.67%, respectively.
Keywords: hepatitis; machine learning; PCA; RFE; SelectKBest; t-SNE
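The balancing, feature-ranking, and reduction steps outlined in the abstract map naturally onto imbalanced-learn and scikit-learn. The sketch below is illustrative only; the estimator choices, feature counts, and split settings are assumptions rather than the study's configuration.

```python
# Illustrative SMOTE + (RFE or SelectKBest) + PCA preparation pipeline
# (settings are assumptions, not the study's exact configuration).
from imblearn.over_sampling import SMOTE
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def prepare(X, y, method="rfe", n_features=10, n_components=5):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    # Balance only the training split, as the abstract describes using SMOTE.
    X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

    if method == "rfe":   # rank features by recursive elimination with a forest
        selector = RFE(RandomForestClassifier(random_state=0),
                       n_features_to_select=n_features)
    else:                 # or score each attribute with a univariate test
        selector = SelectKBest(f_classif, k=n_features)
    selector.fit(X_tr, y_tr)

    scaler = StandardScaler().fit(selector.transform(X_tr))
    pca = PCA(n_components=n_components).fit(scaler.transform(selector.transform(X_tr)))

    def transform(A):
        return pca.transform(scaler.transform(selector.transform(A)))

    return transform(X_tr), y_tr, transform(X_te), y_te
```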
7. Type-I Heavy-Tailed Burr XII Distribution with Applications to Quality Control, Skewed Reliability Engineering Systems and Lifetime Data
Authors: Okechukwu J. Obulezi, Hatem E. Semary, Sadia Nadir, Chinyere P. Igbokwe, Gabriel O. Orji, A. S. Al-Moisheer, Mohammed Elgarhy. Computer Modeling in Engineering & Sciences, 2025, Issue 9, pp. 2991-3027 (37 pages).
This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration", "Groundwater Contaminant Measurements", and "Dominica COVID-19 Mortality", further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
Keywords: acceptance sampling; heavy-tailed models; parameter estimation; reliability engineering
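The maximum-likelihood step can be illustrated on the base Burr XII distribution, which SciPy ships as stats.burr12; the TIHTBXII density itself is not reproduced here, so this is only a sketch of the estimation machinery under that simplification.

```python
# Illustrative MLE on the base Burr XII distribution (not the TIHTBXII density).
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = stats.burr12.rvs(c=2.0, d=1.5, size=500, random_state=rng)  # synthetic sample

def neg_log_lik(params, x):
    c, d = params
    if c <= 0 or d <= 0:
        return np.inf
    return -np.sum(stats.burr12.logpdf(x, c, d))

res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
print("MLE estimates (c, d):", res.x)

# SciPy's built-in fitter gives the same kind of estimate (loc/scale held fixed).
print(stats.burr12.fit(data, floc=0, fscale=1))
```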
8. Bat algorithm based on kinetic adaptation and elite communication for engineering problems
Authors: Chong Yuan, Dong Zhao, Ali Asghar Heidari, Lei Liu, Shuihua Wang, Huiling Chen, Yudong Zhang. CAAI Transactions on Intelligence Technology, 2025, Issue 4, pp. 1174-1200 (27 pages).
The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, this algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, thereby proposing a variant called PKEBA. A projection screening strategy is implemented to mitigate its sensitivity to initial solutions, thereby enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction to keep the algorithm from being trapped in local optima. Subsequently, the effectiveness of the proposed PKEBA is rigorously evaluated. Testing encompasses 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems are employed as further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
Keywords: Bat algorithm; engineering optimization; global optimization; metaheuristic algorithms
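For orientation, the baseline Bat algorithm (frequency-tuned velocity update, loudness A, and pulse rate r) can be written compactly as below. This is the standard BA, not the PKEBA variant with its projection screening, kinetic adaptation, and elite communication strategies, and the parameter values are typical defaults rather than the paper's settings.

```python
# Compact baseline Bat algorithm shown minimizing the sphere function.
import numpy as np

def bat_algorithm(obj, dim=10, n_bats=30, iters=500,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9,
                  lower=-10.0, upper=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, (n_bats, dim))   # positions
    v = np.zeros((n_bats, dim))                    # velocities
    loudness = np.ones(n_bats)                     # A_i
    pulse = np.zeros(n_bats)                       # r_i, grows over time
    fit = np.apply_along_axis(obj, 1, x)
    best = x[fit.argmin()].copy()

    for t in range(1, iters + 1):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lower, upper)
            if rng.random() > pulse[i]:            # local walk around the best bat
                cand = np.clip(best + 0.01 * rng.normal(size=dim) * loudness.mean(),
                               lower, upper)
            f_cand = obj(cand)
            if f_cand <= fit[i] and rng.random() < loudness[i]:
                x[i], fit[i] = cand, f_cand
                loudness[i] *= alpha                  # quieter as it homes in
                pulse[i] = 1.0 - np.exp(-gamma * t)   # emits pulses more often
            if f_cand <= obj(best):
                best = cand.copy()
    return best, obj(best)

best, value = bat_algorithm(lambda z: np.sum(z ** 2))
print(value)
```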
9. Enhancing Military Visual Communication in Harsh Environments Using Computer Vision Techniques
Authors: Shitharth Selvarajan, Hariprasath Manoharan, Taher Al-Shehari, Nasser A Alsadhan, Subhav Singh. Computers, Materials & Continua, 2025, Issue 8, pp. 3541-3557 (17 pages).
This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations. Thus, it is possible to reduce the occurrence of overlapping circumstances in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers additional robust feature representations by imposing a limiting factor to reduce overall scattering values. This is achieved by visualizing a graphical function. Moreover, to derive valuable insights from a series of photos, both the separation and inversion processes are conducted. This involves analyzing comparison results across four different scenarios. The results of the comparative analysis show that the proposed method effectively reduces the time and space complexities to 1 s and 3%, respectively. In contrast, the existing strategy exhibits higher complexities of 3 s and 9.1%, respectively.
Keywords: image enhancement; visual information; harsh environment; computer vision
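One plausible realization of histogram-based enhancement with an adjustable threshold uses OpenCV's CLAHE and adaptive thresholding, as sketched below; the filter choice and parameter values are assumptions, not the authors' exact method.

```python
# Histogram-based enhancement, a locally adaptive threshold, and a pseudo-colour
# rendering of the grey-scale image (illustrative parameters only).
import cv2

def enhance(path: str, clip_limit: float = 2.0, block_size: int = 31, c: int = 5):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)

    # Contrast-limited adaptive histogram equalization to recover detail.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    eq = clahe.apply(gray)

    # Locally adaptive threshold: the cutoff follows the neighbourhood mean,
    # playing the role of the abstract's "flexible threshold value".
    mask = cv2.adaptiveThreshold(eq, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, block_size, c)

    # Map grey levels to distinct colours ("colorized factors").
    colour = cv2.applyColorMap(eq, cv2.COLORMAP_JET)
    return eq, mask, colour
```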
10. Complex adaptive systems science in the era of global sustainability crisis
Authors: Li An, B. L. Turner II, Jianguo Liu, Volker Grimm, Qi Zhang, Zhangyang Wang, Ruihong Huang. Geography and Sustainability, 2025, Issue 1, pp. 14-24 (11 pages).
A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (people mostly) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents and the outcomes of their actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science combined with agent-based modeling promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
Keywords: social-environmental systems; complex adaptive systems; sustainability science; agent-based models; artificial intelligence; data science
11. A Study on Outlier Detection and Feature Engineering Strategies in Machine Learning for Heart Disease Prediction (cited: 2)
Authors: Varada Rajkumar Kukkala, Surapaneni Phani Praveen, Naga Satya Koti Mani Kumar Tirumanadham, Parvathaneni Naga Srinivasu. Computer Systems Science & Engineering, 2024, Issue 5, pp. 1085-1112 (28 pages).
This paper investigates the application of machine learning to develop a response model for cardiovascular problems, using AdaBoost combined with outlier detection methodologies, namely Z-Score incorporated with Grey Wolf Optimization (GWO) and Interquartile Range (IQR) coupled with Ant Colony Optimization (ACO). Using a performance index, it is shown that IQR and ACO with AdaBoost are less accurate than Z-Score and GWO with AdaBoost (86.0% vs. 89.0%) and less discriminative (Area Under the Curve (AUC) score of 91.0% vs. 93.0%). The Z-Score and GWO methods also outperformed the others in terms of precision, scoring 89.0%, and the recall was also found to be satisfactory, scoring 90.0%. Thus, the paper reveals specific benefits and drawbacks associated with different outlier detection and feature selection techniques, which can be important for further improving various aspects of diagnostics in cardiovascular health. Collectively, these findings can enhance heart disease prediction and patient treatment through the use of cutting-edge and improved machine learning (ML) techniques. This work lays the groundwork for more precise diagnosis models by highlighting the benefits of combining multiple optimization methodologies. Future studies should focus on maximizing patient outcomes and model efficacy through research on these combinations.
Keywords: grey wolf optimization; ant colony optimization; Z-score; interquartile range (IQR); AdaBoost; outlier
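The two outlier rules can be written directly, with AdaBoost fitted on the retained rows; the GWO/ACO hyperparameter search is omitted here, and the thresholds and estimator settings are illustrative.

```python
# Z-score and IQR (Tukey fence) outlier filters plus an AdaBoost fit
# (the metaheuristic tuning from the paper is not reproduced).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def zscore_mask(X: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Rows whose every feature lies within `threshold` standard deviations."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    return (z < threshold).all(axis=1)

def iqr_mask(X: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Rows whose every feature lies inside the fences Q1 - k*IQR and Q3 + k*IQR."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    return ((X >= q1 - k * iqr) & (X <= q3 + k * iqr)).all(axis=1)

def fit_adaboost(X, y, mask_fn=zscore_mask):
    keep = mask_fn(X)
    clf = AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=0)
    return clf.fit(X[keep], y[keep])
```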
12. Automatic Fetal Segmentation Designed on Computer-Aided Detection with Ultrasound Images
Authors: Mohana Priya Govindarajan, Sangeetha Subramaniam Karuppaiya Bharathi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 2967-2986 (20 pages).
In the present research, we describe a computer-aided detection (CAD) method aimed at automatic fetal head circumference (HC) measurement in 2D ultrasonography pictures during all trimesters of pregnancy. The HC can be used to determine gestational age and track fetal development. This automated approach is particularly valuable in low-resource settings where access to trained sonographers is limited. The CAD system is divided into two steps: to begin, Haar-like characteristics were extracted from ultrasound pictures in order to train a random forest classifier to find the fetal skull. We then identified the HC using dynamic programming, an elliptical fit, and a Hough transform. The CAD program was trained on 999 pictures (HC18 challenge data source) and then verified on 335 photos from all trimesters in an independent test set. A skilled sonographer and a medical expert manually annotated the test set. We used the crown-rump length (CRL) measurement to calculate the reference gestational age (GA). In the first, second, and third trimesters, the median difference between the reference GA and the GA calculated by the skilled sonographer stayed at 0.7±2.7, 0.0±4.5, and 2.0±12.0 days, respectively. The corresponding differences between the reference GA and the medical expert's GA remained 1.5±3.0, 1.9±5.0, and 4.0±14 days. The mean difference between the reference GA and the CAD system's GA remained between 0.5 and 5.0 days, with an additional variation of 2.9 to 12.5 days. The outcomes reveal that the CAD program outperforms an expert sonographer. When paired with the classifications reported in the literature, the provided system achieves results that are comparable or even better. We have assessed and scheduled this computerized approach for HC evaluation, which includes information from all trimesters of gestation.
Keywords: fetal growth; segmentation; ultrasound images; computer-aided detection; gestational age; crown-rump length; head circumference
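The final geometric step, fitting an ellipse to a detected skull outline and converting it to a circumference, can be sketched with OpenCV; the Haar-feature and random-forest detection stages are omitted, the pixel spacing is an assumed input, and Ramanujan's approximation is used for the ellipse perimeter.

```python
# Fit an ellipse to a binary skull mask and estimate HC in millimetres
# (illustrative final step only; detection stages are not shown).
import cv2
import numpy as np

def head_circumference(mask: np.ndarray, mm_per_pixel: float) -> float:
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    skull = max(contours, key=cv2.contourArea)          # largest connected outline
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(skull)   # axes returned as diameters
    a, b = (d1 / 2) * mm_per_pixel, (d2 / 2) * mm_per_pixel

    # Ramanujan's approximation of the ellipse perimeter.
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))
```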
13. Exploring Deep Learning Methods for Computer Vision Applications across Multiple Sectors: Challenges and Future Trends
Authors: Narayanan Ganesh, Rajendran Shankar, Miroslav Mahdal, Janakiraman SenthilMurugan, Jasgurpreet Singh Chohan, Kanak Kalita. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 103-141 (39 pages).
Computer vision (CV) was developed so that computers and other systems can act or make recommendations based on visual inputs, such as digital photos, movies, and other media. Deep learning (DL) methods are more successful than other traditional machine learning (ML) methods in CV. DL techniques can produce state-of-the-art results for difficult CV problems like picture categorization, object detection, and face recognition. In this review, a structured discussion of the history, methods, and applications of DL methods for CV problems is presented. The sector-wise presentation of applications in this paper may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV. This review will provide readers with context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets and a brief description of them are also included for the benefit of readers.
Keywords: neural network; machine vision; classification; object detection; deep learning
14. Early Detection of Colletotrichum Kahawae Disease in Coffee Cherry Based on Computer Vision Techniques
Authors: Raveena Selvanarayanan, Surendran Rajendran, Youseef Alotaibi. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 759-782 (24 pages).
Colletotrichum kahawae (coffee berry disease, CBD) spreads through spores that can be carried by wind, rain, and insects, affecting coffee plantations and causing 80% yield losses and poor-quality coffee beans. The deadly disease is hard to control because wind, rain, and insects carry its spores. Colombian researchers utilized a deep learning system to identify CBD in coffee cherries at three growth stages and classify photographs of infected and uninfected cherries with 93% accuracy using a random forest method. If the dataset is too small and noisy, the algorithm may not learn the data patterns and generate accurate predictions. To overcome this challenge, early detection of Colletotrichum kahawae disease in coffee cherries requires automated processes, prompt recognition, and accurate classification. The proposed methodology selects CBD image datasets through four different stages for training and testing. XGBoost is used to train a model on datasets of coffee berries, with each image labeled as healthy or diseased. Once the model is trained, the SHAP algorithm is used to determine which features were essential for making predictions with the proposed model. Some of these characteristics were the cherry's colour, whether it had spots or other damage, and how big the lesions were. Visualization is important for classification, as it shows that the colour of the berry is correlated with the presence of disease. To evaluate the model's performance and mitigate overfitting, a 10-fold cross-validation approach is employed. This involves partitioning the dataset into ten subsets, training the model on each subset, and evaluating its performance. In comparison to other contemporary methodologies, the proposed model achieved an accuracy of 98.56%.
Keywords: computer vision; coffee berry disease; Colletotrichum kahawae; XGBoost; Shapley additive explanations
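The training and explanation loop described in the abstract can be sketched with xgboost and shap; the hyperparameters and feature handling here are assumptions, not the study's settings.

```python
# Illustrative XGBoost training with 10-fold cross-validation and SHAP
# feature attribution (settings are assumed, not the study's).
import shap
import xgboost as xgb
from sklearn.model_selection import cross_val_score

def train_and_explain(X, y):
    model = xgb.XGBClassifier(n_estimators=300, max_depth=4,
                              learning_rate=0.1, eval_metric="logloss")
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

    model.fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)   # per-feature contribution per sample
    return model, shap_values
```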
15. Developing Lexicons for Enhanced Sentiment Analysis in Software Engineering: An Innovative Multilingual Approach for Social Media Reviews
Authors: Zohaib Ahmad Khan, Yuanqing Xia, Ahmed Khan, Muhammad Sadiq, Mahmood Alam, Fuad A. Awwad, Emad A. A. Ismail. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2771-2793 (23 pages).
Sentiment analysis is becoming increasingly important in today's digital age, with social media being a significant source of user-generated content. The development of sentiment lexicons that can support languages other than English is a challenging task, especially for sentiment analysis of social media reviews. Most existing sentiment analysis systems focus on English, leaving a significant research gap in other languages due to limited resources and tools. This research aims to address this gap by building a sentiment lexicon for local languages, which is then used with a machine learning algorithm for efficient sentiment analysis. In the first step, a lexicon is developed that includes five languages: Urdu, Roman Urdu, Pashto, Roman Pashto, and English. The sentiment scores from SentiWordNet are associated with each word in the lexicon to produce an effective sentiment score. In the second step, a naive Bayesian algorithm is applied to the developed lexicon for efficient sentiment analysis of Roman Pashto. Both the sentiment lexicon and sentiment analysis steps were evaluated using information retrieval metrics, with an accuracy score of 0.89 for the sentiment lexicon and 0.83 for the sentiment analysis. The results showcase the potential for improving software engineering tasks related to user feedback analysis and product development.
Keywords: emotional assessment; regional dialects; SentiWordNet; naive Bayesian technique; lexicons; software engineering; user feedback
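A toy version of the lexicon-plus-classifier idea is sketched below; the lexicon entries, example reviews, and scoring rule are invented placeholders, not the paper's resource or its SentiWordNet mapping.

```python
# Toy multilingual lexicon lookup plus a naive Bayes classifier over word counts
# (placeholder data only; not the paper's lexicon or corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# word -> sentiment score in [-1, 1]; entries are invented for illustration.
LEXICON = {"kha": 0.8, "acha": 0.7, "bad": -0.6, "kharab": -0.8, "good": 0.7}

def lexicon_score(text: str) -> float:
    words = text.lower().split()
    return sum(LEXICON.get(w, 0.0) for w in words) / max(len(words), 1)

# Supervised step: naive Bayes over word counts for labelled reviews.
reviews = ["app kha good", "service kharab bad", "acha experience", "bad update"]
labels = [1, 0, 1, 0]
clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(reviews, labels)
print(lexicon_score("kha acha bad"), clf.predict(["kharab service"]))
```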
16. Gaussian Backbone-Based Spherical Evolutionary Algorithm with Cross-search for Engineering Problems
Authors: Yupeng Li, Dong Zhao, Ali Asghar Heidari, Shuihua Wang, Huiling Chen, Yudong Zhang. Journal of Bionic Engineering (SCIE, EI, CSCD), 2024, Issue 2, pp. 1055-1091 (37 pages).
In recent years, with the increasing demand for social production, engineering design problems have gradually become more and more complex. Many novel and well-performing meta-heuristic algorithms have been studied and developed to cope with this problem. Among them, the Spherical Evolutionary Algorithm (SE) is one of the classical representative methods proposed in recent years with admirable optimization performance. However, it tends to stagnate prematurely in local optima when solving some specific problems. Therefore, this paper proposes an SE variant integrating Cross-search Mutation (CSM) and a Gaussian Backbone Strategy (GBS), called CGSE. In this study, the CSM enhances its social learning ability, which strengthens SE's utilization of effective information; the GBS cooperates with the original rules of SE to further improve the convergence of SE. To objectively demonstrate the core advantages of CGSE, this paper designs a series of global optimization experiments based on IEEE CEC2017, and CGSE is used to solve six engineering design problems with constraints. The final experimental results fully showcase that, compared with existing well-known methods, CGSE has a very significant competitive advantage in global tasks and has practical value in real applications. Therefore, the proposed CGSE is a promising and first-rate algorithm with good potential strength in the field of engineering design.
Keywords: meta-heuristic algorithms; engineering optimization; spherical evolution algorithm; global optimization
17. Axonal Conduction Velocity: A Computer Study
Authors: Arthur D. Snider, Aman Chawla, Salvatore D. Morgera. Journal of Applied Mathematics and Physics, 2024, Issue 1, pp. 60-71 (12 pages).
This paper derives rigorous statements concerning the propagation velocity of action potentials in axons. The authors use the Green's function approach to approximate the action potential and find a relation between conduction velocity and the impulse profile. Computer simulations are used to bolster the analysis.
Keywords: neuron; axon; action potential; conduction velocity; internode
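For background, analyses of this kind typically start from the passive cable equation; the paper's Green's-function treatment is not reproduced here, so the following is only the standard form with the usual length and time constants.

```latex
% Standard passive cable equation (background only, not the paper's model):
% \lambda = \sqrt{r_m / r_i} is the length constant and \tau_m = r_m c_m the
% membrane time constant.
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  = \tau_m\,\frac{\partial V}{\partial t} + V
```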
18. A systematic mapping to investigate the application of machine learning techniques in requirement engineering activities
Authors: Shoaib Hassan, Qianmu Li, Khursheed Aurangzeb, Affan Yasin, Javed Ali Khan, Muhammad Shahid Anwar. CAAI Transactions on Intelligence Technology, 2024, Issue 6, pp. 1412-1434 (23 pages).
Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing capacity. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, in Requirement Engineering (RE) activities to solve the problems that occur in RE activities. The authors present a systematic mapping of past work to investigate the studies that focused on the application of supervised learning techniques in RE activities during the period 2002-2023. The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. Forty-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 algorithms. Among those algorithms, researchers mostly used the five following ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-nearest neighbour Classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. Our selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
Keywords: data sources; machine learning; requirement engineering; supervised learning algorithms
19. Early identification of stroke through deep learning with multi-modal human speech and movement data (cited: 4)
Authors: Zijun Ou, Haitao Wang, Bin Zhang, Haobang Liang, Bei Hu, Longlong Ren, Yanjuan Liu, Yuhu Zhang, Chengbo Dai, Hejun Wu, Weifeng Li, Xin Li. Neural Regeneration Research (SCIE, CAS), 2025, Issue 1, pp. 234-241 (8 pages).
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration is dependent on specialized training. In this study, we proposed a novel multi-modal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had a higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early identification of stroke, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Keywords: artificial intelligence; deep learning; diagnosis; early detection; FAST; screening; stroke
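A minimal late-fusion pattern for combining a video branch and an audio branch can be sketched in PyTorch; the feature dimensions and small projection heads are illustrative, not the paper's architecture or the compared backbones.

```python
# Minimal late-fusion skeleton for video and audio features (illustrative only).
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, hidden=256, n_classes=2):
        super().__init__()
        self.video_branch = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)   # concatenate, then classify

    def forward(self, video_feat, audio_feat):
        fused = torch.cat([self.video_branch(video_feat),
                           self.audio_branch(audio_feat)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
video = torch.randn(4, 512)   # e.g., pooled embeddings from a video backbone
audio = torch.randn(4, 128)   # e.g., pooled embeddings from a speech encoder
print(model(video, audio).shape)  # torch.Size([4, 2])
```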
20. SEFormer: A Lightweight CNN-Transformer Based on Separable Multiscale Depthwise Convolution and Efficient Self-Attention for Rotating Machinery Fault Diagnosis (cited: 1)
Authors: Hongxing Wang, Xilai Ju, Hua Zhu, Huafeng Li. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1417-1437 (21 pages).
Traditional data-driven fault diagnosis methods depend on expert experience to manually extract effective fault features from signals, which has certain limitations. Conversely, deep learning techniques have gained prominence as a central focus of research in the field of fault diagnosis, owing to their strong fault feature extraction ability and end-to-end fault diagnosis efficiency. Recently, by exploiting the respective advantages of the convolutional neural network (CNN) and the Transformer in local and global feature extraction, research on combining the two has demonstrated promise in the field of fault diagnosis. However, the cross-channel convolution mechanism in the CNN and the self-attention calculations in the Transformer contribute to excessive complexity in the cooperative model. This complexity results in high computational costs and limited industrial applicability. To tackle the above challenges, this paper proposes a lightweight CNN-Transformer named SEFormer for rotating machinery fault diagnosis. First, a separable multiscale depthwise convolution block is designed to extract and integrate multiscale feature information from different channel dimensions of vibration signals. Then, an efficient self-attention block is developed to capture critical fine-grained features of the signal from a global perspective. Finally, experimental results on a planetary gearbox dataset and a motor roller bearing dataset prove that the proposed framework can balance robustness, generalization, and light weight compared to recent state-of-the-art fault diagnosis models based on the CNN and Transformer. This study presents a feasible strategy for developing a lightweight rotating machinery fault diagnosis framework aimed at economical deployment.
Keywords: CNN-Transformer; separable multiscale depthwise convolution; efficient self-attention; fault diagnosis
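The depthwise-separable convolution that such lightweight blocks build on can be sketched for 1D vibration signals as below; this shows a single scale with assumed channel sizes, not SEFormer's separable multiscale block or its efficient self-attention.

```python
# Basic depthwise-separable 1D convolution block for vibration signals
# (single scale, assumed sizes; not SEFormer's exact design).
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=7):
        super().__init__()
        # Depthwise: one filter per input channel (groups == in_ch).
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.GELU()

    def forward(self, x):          # x: (batch, channels, signal_length)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv1d(in_ch=8, out_ch=32)
print(block(torch.randn(2, 8, 1024)).shape)  # torch.Size([2, 32, 1024])
```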