Journal Articles
366 articles found
1. Advancing Android Ransomware Detection with Hybrid AutoML and Ensemble Learning Approaches
Authors: Kirubavathi Ganapathiyappan, Chahana Ravikumar, Raghul Alagunachimuthu, Ranganayaki, Ayman Altameem, Ateeq Ur Rehman, Ahmad Almogren. Computers, Materials & Continua, 2026(4): 737-766 (30 pages)
Android smartphones have become an integral part of our daily lives and, with that, targets for ransomware attacks. Such attacks encrypt user information and demand payment to recover it. Conventional detection mechanisms, such as signature-based and heuristic techniques, often fail to detect new and polymorphic ransomware samples. To address this challenge, we employed various ensemble classifiers, such as Random Forest, Gradient Boosting, Bagging, and AutoML models. We aimed to showcase how AutoML can automate processes such as model selection, feature engineering, and hyperparameter optimization, minimizing manual effort while matching or exceeding the performance of traditional approaches. We tested this framework on a publicly available dataset from the Kaggle repository, which contains features for Android ransomware network traffic. The dataset comprises 392,024 flow records divided into eleven groups: ten classes for various ransomware types, including SVpeng, PornDroid, Koler, WannaLocker, and Lockerpin, plus one class for regular traffic. We applied a three-step procedure to select the most relevant features: filter, wrapper, and embedded methods. The Bagging classifier was highly accurate, classifying correctly 99.84% of the time; the FLAML AutoML framework was even more accurate, at 99.85%. This indicates how well AutoML improves results with minimal human assistance. Our findings indicate that AutoML is an efficient, scalable, and flexible method for discovering Android ransomware, and it will facilitate the development of next-generation intrusion detection systems.
Keywords: automated machine learning (AutoML), ensemble learning, intrusion detection system (IDS), ransomware traffic analysis, Android ransomware detection
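As a hedged illustration of the kind of Bagging ensemble this abstract reports (not the actual 392,024-record Kaggle pipeline; the data below is synthetic), a scikit-learn sketch:

```python
# Sketch: a Bagging ensemble over decision trees, like the classifier the
# abstract reports, trained on synthetic multi-class stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for ransomware network-flow features.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagging with scikit-learn's default base estimator (a decision tree).
clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.3f}")
```

The same fit/predict interface would accept the real flow-record features after the paper's filter/wrapper/embedded selection steps.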
2. Unveiling optimal molecular features for hERG insights with automatic machine learning
Authors: Congying Xu, Youjun Xu, Ziang Hu, Xinyi Zhao, Weixin Xie, Weiren Chen, Jianfeng Pei. Journal of Pharmaceutical Analysis, 2025(12): 2913-2926 (14 pages)
We developed MaxQsaring, a novel universal framework integrating molecular descriptors, fingerprints, and deep-learning pretrained representations to predict the properties of compounds. Applied to a case study of human ether-à-go-go-related gene (hERG) blockage prediction, MaxQsaring achieved state-of-the-art performance on two challenging external datasets through automatic optimal feature combinations, and successfully identified the top 10 important interpretable features, which could be used to model a high-accuracy decision tree. The models' predictions align well with empirical hERG optimization strategies, demonstrating their interpretability for practical use. Deep-learning pretrained representations were shown to exert a moderate influence on predictive performance; their impact on the generalizability of these models, particularly for compounds with novel scaffolds, appears comparatively minimal. MaxQsaring excelled in the Therapeutics Data Commons (TDC) benchmarks, ranking first in 19 of 22 tasks, showcasing its potential for universally accurate compound property prediction to raise the success rate of early drug discovery, which remains a formidable challenge.
Keywords: hERG blockage prediction, automatic machine learning, pretrained representations, feature combination, XGBoost
3. Automatic diagnosis of extraocular muscle palsy based on machine learning and diplopia images
Authors: Xiao-Lu Jin, Xue-Mei Li, Tie-Juan Liu, Ling-Yun Zhou. International Journal of Ophthalmology (English edition), 2025(5): 757-764 (8 pages)
AIM: To develop different machine learning models to train and test diplopia images and data generated by the computerized diplopia test. METHODS: Diplopia images and data generated by computerized diplopia tests, along with patient medical records, were retrospectively collected from 3244 cases. Diagnostic models were constructed using logistic regression (LR), decision tree (DT), support vector machine (SVM), extreme gradient boosting (XGBoost), and deep learning (DL) algorithms. A total of 2757 diplopia images were randomly selected as training data, while the test dataset contained 487 diplopia images. The optimal diagnostic model was evaluated using test set accuracy, the confusion matrix, and the precision-recall curve (P-R curve). RESULTS: The test set accuracy of the LR, SVM, DT, XGBoost, DL (64 categories), and DL (6 binary classifications) algorithms was 0.762, 0.811, 0.818, 0.812, 0.858, and 0.858, respectively. The accuracy in the training set was 0.785, 0.815, 0.998, 0.965, 0.968, and 0.967, respectively. The weighted precision of the LR, SVM, DT, XGBoost, DL (64 categories), and DL (6 binary classifications) algorithms was 0.74, 0.77, 0.83, 0.80, 0.85, and 0.85, respectively; weighted recall was 0.76, 0.81, 0.82, 0.81, 0.86, and 0.86, respectively; and the weighted F1 score was 0.74, 0.79, 0.82, 0.80, 0.85, and 0.85, respectively. CONCLUSION: In this study, all of the machine learning algorithms achieved automatic diagnosis of extraocular muscle palsy. The DL (64 categories) and DL (6 binary classifications) algorithms have a significant advantage over the other machine learning algorithms in diagnostic accuracy on the test set, with a high level of consistency with clinical diagnoses made by physicians, and can therefore serve as a diagnostic reference.
Keywords: machine learning, extraocular muscle paralysis, automatic diagnosis, diplopia images
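The "weighted" precision, recall, and F1 figures reported above are support-weighted averages of per-class scores. A minimal sketch of that averaging, with invented labels (not the study's data):

```python
# Weighted F1 = per-class F1 scores averaged with class support as weights.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2])

# scikit-learn's support-weighted F1 ...
weighted = f1_score(y_true, y_pred, average="weighted")

# ... equals the manual support-weighted mean of per-class F1 scores.
per_class = f1_score(y_true, y_pred, average=None)  # one F1 per class
support = np.bincount(y_true)                       # class frequencies
manual = np.average(per_class, weights=support)
print(round(weighted, 4), round(manual, 4))  # both 0.8333 here
```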
4. Machine learning guided automatic recognition of crystal boundaries in bainitic/martensitic alloy and relationship between boundary types and ductile-to-brittle transition behavior (cited 12 times)
Authors: X.C. Li, J.X. Zhao, J.H. Cong, R.D.K. Misra, X.M. Wang, X.L. Wang, C.J. Shang. Journal of Materials Science & Technology (SCIE, EI, CAS, CSCD), 2021(25): 49-58 (10 pages)
The gradient boosting decision tree (GBDT) machine learning (ML) method was adopted for the first time to automatically recognize and conduct quantitative statistical analysis of boundaries in bainitic microstructure using electron back-scatter diffraction (EBSD) data. In spite of the lack of large sets of EBSD data, we succeeded in achieving the desired accuracy and recognizing the boundaries. Compared with a low model accuracy of <50% when using Euler angles or the axis-angle pair as characteristic features, the accuracy of the model was significantly enhanced to about 88% when the Euler angle was converted to the overall misorientation angle (OMA) and specific misorientation angle (SMA) and these were used as the important features. In this model, the recall score for the prior austenite grain (PAG) boundary was ~93%, for the high-angle packet boundary (OMA > 40°) ~97%, and for the block boundary ~96%. The derived ML outcomes were used to gain insights into the ductile-to-brittle transition (DBTT) behavior. Interestingly, the ML modeling approach suggested that DBTT was not determined by the density of high-angle grain boundaries but was significantly influenced by the density of PAG and packet boundaries. The study underscores that ML has great potential for detailed recognition of complex multi-hierarchical microstructures such as bainite and martensite and for relating them to material performance.
Keywords: machine learning, feature engineering, automatic recognition, lath structure, crystallography
5. Automatic Speaker Recognition Using Mel-Frequency Cepstral Coefficients Through Machine Learning (cited 2 times)
Authors: Uğur Ayvaz, Hüseyin Gürüler, Faheem Khan, Naveed Ahmed, Taegkeun Whangbo, Abdusalomov Akmalbek Bobomirzaevich. Computers, Materials & Continua (SCIE, EI), 2022(6): 5511-5521 (11 pages)
Automatic speaker recognition (ASR) systems belong to the field of human-machine interaction, and scientists have been using feature extraction and feature matching methods to analyze and synthesize these signals. One of the most commonly used methods for feature extraction is Mel-Frequency Cepstral Coefficients (MFCCs). Recent research shows that MFCCs are successful in processing the voice signal with high accuracy. MFCCs represent a sequence of voice-signal-specific features. This experimental analysis is proposed to distinguish Turkish speakers by extracting the MFCCs from speech recordings. Since human perception of sound is not linear, after the filterbank step in the MFCC method we converted the obtained log filterbanks into decibel (dB)-based spectrograms without applying the Discrete Cosine Transform (DCT). A new dataset was created by converting the spectrograms into 2-D arrays. Several learning algorithms were implemented with a 10-fold cross-validation method to detect the speaker. The highest accuracy of 90.2% was achieved using a Multi-layer Perceptron (MLP) with the tanh activation function. The most important output of this study is the inclusion of the human voice as a new feature set.
Keywords: automatic speaker recognition, human voice recognition, spatial pattern recognition, MFCCs, spectrogram, machine learning, artificial intelligence
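This abstract's departure from standard MFCCs (keep the filterbank energies as a dB spectrogram, skip the DCT) reduces, at its core, to a power-to-decibel mapping. A sketch with a toy "filterbank output"; the mel filterbank itself is omitted, and the values are invented:

```python
# Power-to-decibel step used when a dB spectrogram replaces DCT-based MFCCs.
import numpy as np

def power_to_db(power, ref=1.0, eps=1e-10):
    """Convert power values to decibels: 10 * log10(power / ref)."""
    return 10.0 * np.log10(np.maximum(power, eps) / ref)

# Toy stand-in for filterbank output: 4 bands x 3 frames.
power = np.array([[1.0, 10.0, 100.0],
                  [0.1, 1.0, 10.0],
                  [0.01, 0.1, 1.0],
                  [1e-12, 1.0, 1.0]])  # tiny value exercises the eps floor
db = power_to_db(power)
print(db[0])  # a power of 1.0 maps to 0 dB, 10.0 to 10 dB, 100.0 to 20 dB
```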
6. Automatic Sentiment Classification of News Using Machine Learning Methods
Author: Yuhan Wang. Modern Electronic Technology, 2022(1): 7-11 (5 pages)
With the rapid development of the social economy, society has entered a new stage of development. Against the background of rapidly developing new media in particular, news and information have become comprehensively more important, and to further distinguish positive from negative news, machine learning methods should be fully used to classify news automatically by sentiment, thereby improving the efficiency of news classification. The article therefore first outlines the basics of news sentiment classification. Second, the specific means of automatic sentiment classification of news are analyzed in depth. On this basis, the paper puts forward concrete measures for automatic sentiment classification of news using machine learning.
Keywords: machine learning, automatic sentiment classification of news, specific measures
7. Applications of advanced signal processing and machine learning in the neonatal hypoxic-ischemic electroencephalography (cited 6 times)
Authors: Hamid Abbasi, Charles P. Unsworth. Neural Regeneration Research (SCIE, CAS, CSCD), 2020(2): 222-231 (10 pages)
Perinatal hypoxic-ischemic-encephalopathy significantly contributes to neonatal death and life-long disability such as cerebral palsy. Advances in signal processing and machine learning have provided the research community with an opportunity to develop automated real-time identification techniques to detect the signs of hypoxic-ischemic-encephalopathy in larger electroencephalography/amplitude-integrated electroencephalography data sets more easily. This review details the recent achievements, performed by a number of prominent research groups across the world, in the automatic identification and classification of hypoxic-ischemic epileptiform neonatal seizures using advanced signal processing and machine learning techniques. This review also addresses the clinical challenges that current automated techniques face in order to be fully utilized by clinicians, and highlights the importance of upgrading the current clinical bedside sampling frequencies to higher sampling rates in order to provide better hypoxic-ischemic biomarker detection frameworks. Additionally, the article highlights that current clinical automated epileptiform detection strategies for human neonates have been concerned only with seizure detection after the therapeutic latent phase of injury. In contrast, recent animal studies have demonstrated that the latent phase of opportunity is critically important for early diagnosis of hypoxic-ischemic-encephalopathy electroencephalography biomarkers; although difficult, detection strategies could utilize biomarkers in the latent phase to also predict the onset of future seizures.
Keywords: advanced signal processing, aEEG, automatic detection, classification, clinical EEG, fetal, HIE, hypoxic-ischemic encephalopathy, machine learning, neonatal seizure, real-time identification, review
8. Auto machine learning-based modelling and prediction of excavation-induced tunnel displacement (cited 9 times)
Authors: Dongmei Zhang, Yiming Shen, Zhongkai Huang, Xiaochuang Xie. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2022(4): 1100-1114 (15 pages)
The influence of a deep excavation on existing shield tunnels nearby is a vital issue in tunnelling engineering, yet robust methods to predict excavation-induced tunnel displacements are lacking. In this study, an auto machine learning (AutoML)-based approach is proposed to solve the issue precisely. Seven input parameters are considered in the database, covering two physical aspects: soil properties and the spatial characteristics of the deep excavation. The 10-fold cross-validation method is employed to overcome the scarcity of data and promote the model's robustness. Six genetic algorithm (GA)-ML models are established as well for comparison. The results indicate that the proposed AutoML model is a comprehensive model that integrates efficiency and robustness. Importance analysis reveals that the ratio of the average shear strength to the vertical effective stress E_ur/σ′_v, the excavation depth H, and the excavation width B are the most influential variables for the displacements. Finally, the AutoML model is further validated against practical engineering; the prediction results are in good agreement with monitoring data, signifying that the model can be applied in real projects.
Keywords: soil-structure interaction, auto machine learning (AutoML), displacement prediction, robust model, geotechnical engineering
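The 10-fold cross-validation this study uses to offset a small database can be sketched as below. The gradient-boosting regressor and the synthetic seven-feature data are stand-in assumptions, not the paper's AutoML or GA-ML pipelines:

```python
# 10-fold cross-validation: every sample is used for validation exactly once,
# which helps when the database is too small for a single large hold-out set.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic data with seven features, echoing the paper's seven inputs.
X, y = make_regression(n_samples=200, n_features=7, noise=5.0, random_state=0)
scores = cross_val_score(GradientBoostingRegressor(random_state=0), X, y,
                         cv=10, scoring="r2")  # one R^2 per held-out fold
print(f"mean R^2 over 10 folds: {scores.mean():.3f}")
```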
9. Design of Machine Learning Based Smart Irrigation System for Precision Agriculture (cited 2 times)
Authors: Khalil Ibrahim Mohammad Abuzanouneh, Fahd N. Al-Wesabi, Amani Abdulrahman Albraikan, Mesfer Al Duhayyim, M. Al-Shabi, Anwer Mustafa Hilal, Manar Ahmed Hamza, Abu Sarwar Zamani, K. Muthulakshmi. Computers, Materials & Continua (SCIE, EI), 2022(7): 109-124 (16 pages)
Agriculture 4.0, as the future of farming technology, comprises numerous key enabling technologies for sustainable agriculture. The use of state-of-the-art technologies such as the Internet of Things transforms traditional cultivation practices, like irrigation, into modern solutions of precision agriculture. To achieve effective water-resource usage and automated irrigation in precision agriculture, recent technologies like machine learning (ML) can be employed. With this motivation, this paper designs an IoT- and ML-enabled smart irrigation system (IoTML-SIS) for precision agriculture. The proposed IoTML-SIS technique senses the parameters of the farmland and makes appropriate irrigation decisions. The proposed IoTML-SIS model involves different IoT-based sensors for soil moisture, humidity, temperature, and light. The sensed data are transmitted to the cloud server for processing and decision making. Moreover, the artificial algae algorithm (AAA) with a least-squares support vector machine (LS-SVM) model is employed for the classification process to determine the need for irrigation. Furthermore, the AAA is applied to optimally tune the parameters involved in the LS-SVM model, thereby significantly increasing classification efficiency. The performance validation of the proposed IoTML-SIS technique showed better performance than the compared methods, with a maximum accuracy of 0.975.
Keywords: automatic irrigation, precision agriculture, smart farming, machine learning, cloud computing, decision making, internet of things
10. Machine Learning of Weather Forecasting Rules from Large Meteorological Data Bases (cited 1 time)
Author: Honghua Dai, Department of Computer Science, Monash University, Australia (dai@brucc.cs.monash.edu.au). Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 1996(4): 471-488 (18 pages)
The discovery of useful forecasting rules from observational weather data is a topic of outstanding interest. The traditional methods of acquiring forecasting knowledge are manual analysis and investigation performed by human scientists. This paper presents the experimental results of an automatic machine learning system which derives forecasting rules from real observational data. We tested the system on two large real data sets from the areas of central China and Victoria, Australia. The experimental results show that the forecasting rules discovered by the system are very competitive with human experts, with forecasting accuracy rates of 86.4% and 78% on the two data sets, respectively.
Keywords: weather forecasting, machine learning, machine discovery, meteorological expert system, meteorological knowledge processing, automatic forecasting
11. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation (cited 1 time)
Author: Tian Dongping. High Technology Letters (EI, CAS), 2017(4): 367-374 (8 pages)
In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Because image features with different magnitudes result in different annotation performance, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.
Keywords: automatic image annotation, semi-supervised learning, probabilistic latent semantic analysis (PLSA), transductive support vector machine (TSVM), image segmentation, image retrieval
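The PLSA-with-EM core this abstract builds on can be sketched in a few lines of NumPy. This toy version (invented term-document counts, two topics) shows only the E-step and M-step updates, not the paper's semi-supervised TSVM enhancement or Gaussian normalization:

```python
# Toy PLSA: fit P(z|d) and P(w|z) to a count matrix n(d, w) with EM.
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_words, n_topics = 6, 8, 2
counts = rng.integers(1, 5, size=(n_docs, n_words)).astype(float)  # n(d, w)

p_z_d = rng.dirichlet(np.ones(n_topics), size=n_docs)   # P(z|d), shape (d, z)
p_w_z = rng.dirichlet(np.ones(n_words), size=n_topics)  # P(w|z), shape (z, w)

def log_likelihood():
    # sum_{d,w} n(d,w) * log P(w|d), with P(w|d) = sum_z P(z|d) P(w|z)
    return float(np.sum(counts * np.log(p_z_d @ p_w_z + 1e-12)))

prev = log_likelihood()
for _ in range(30):
    # E-step: responsibilities P(z|d,w), proportional to P(z|d) * P(w|z)
    joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # shape (d, z, w)
    resp = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate both distributions from expected counts
    expected = counts[:, None, :] * resp                 # n(d,w) * P(z|d,w)
    p_z_d = expected.sum(axis=2)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = expected.sum(axis=0)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
curr = log_likelihood()
print(f"log-likelihood: {prev:.2f} -> {curr:.2f}")
```

EM guarantees the log-likelihood never decreases, which is a useful sanity check for any implementation.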
12. Development of a machine learning model for predicting abnormalities of commercial airplanes (cited 1 time)
Authors: Rossi Passarella, Siti Nurmaini, Muhammad Naufal Rachmatullah, Harumi Veny, Fara Nissya Nur Hafidzoh. Data Science and Management, 2024(3): 256-265 (10 pages)
Airplanes are a social necessity for the movement of humans, goods, and more. They are generally a safe mode of transportation; however, incidents and accidents occasionally occur. To prevent aviation accidents, it is necessary to develop a machine-learning model that detects and predicts abnormalities in commercial flights using automatic dependent surveillance-broadcast data. This study combined data-quality detection, anomaly detection, and abnormality-classification-model development. The research methodology involved the following stages: problem statement, data selection and labeling, prediction-model development, deployment, and testing. The data labeling process was based on the rules framed by the International Civil Aviation Organization for commercial jet-engine flights and validated by expert commercial pilots. The results showed that the best prediction model, quadratic discriminant analysis, was 93% accurate, indicating a "good fit". Moreover, the model's area-under-the-curve results for abnormal and normal detection were 0.97 and 0.96, respectively, confirming this "good fit".
Keywords: automatic dependent surveillance-broadcast data, commercial airplane accidents, data labeling, machine learning, prediction model
13. Spatio-temporal change and driving mechanisms of land use/cover in the Qarhan Salt Lake area from 2000 to 2020, based on machine learning
Authors: Chao Yue, ZiTao Wang, JianPing Wang. Research in Cold and Arid Regions (CSCD), 2024(5): 239-249 (11 pages)
The significance of land use classification has garnered attention due to its implications for climate and ecosystems. This paper establishes a connection by introducing and applying automatic machine learning (AutoML) techniques to a salt lake landscape, with a specific focus on the Qarhan Salt Lake area. Utilizing Landsat-5 Thematic Mapper (TM) and Landsat-8 Operational Land Imager (OLI) imagery, six machine learning algorithms were employed to classify eight land use types from 2000 to 2020. Results show that XGBLD performed best, with 77% accuracy. Over the two decades, salt fields, construction land, and water areas increased due to transformations of saline land and salt flats. The exposed-lake area rose and then declined, mainly transforming into salt flats. Agricultural land areas slightly increased, influenced by both human activities and climate. Our analysis reveals a strong correlation between salt fields and precipitation, while exposed lakes demonstrate a significant negative correlation with evaporation and temperature, highlighting their vulnerability to climate change. Additionally, human water usage was identified as a significant factor impacting land use change, emphasizing the dual influence of anthropogenic activities and natural factors. This paper addresses the void in the application of AutoML in salt lake environments and provides valuable insights into the dynamic evolution of land use types in the Qarhan Salt Lake region.
Keywords: automatic machine learning, Qarhan Salt Lake, land use classification, transformation
14. Robust signal recognition algorithm based on machine learning in heterogeneous networks
Authors: Xiaokai Liu, Rong Li, Chenglin Zhao, Pengbiao Wang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2016(2): 333-342 (10 pages)
There are various heterogeneous networks for terminals to deliver a better quality of service. Signal system recognition and classification contribute a lot to the process. However, in low signal-to-noise ratio (SNR) circumstances or under time-varying multipath channels, the majority of the existing algorithms for signal recognition already face limitations. In this series, we present a robust signal recognition method based upon the original and latest updated version of the extreme learning machine (ELM) to help users switch between networks. The ELM utilizes signal characteristics to distinguish systems. The superiority of this algorithm lies in the random choice of hidden nodes and in the fact that it determines the output weights analytically, which results in lower complexity. Theoretically, the algorithm offers good generalization performance at an extremely fast learning speed. Moreover, we implement the GSM/WCDMA/LTE models in the Matlab environment using the Simulink tools. The simulations reveal that the signals can be recognized successfully with 95% accuracy in a low-SNR (0 dB) environment in the time-varying multipath Rayleigh fading channel.
Keywords: heterogeneous networks, automatic signal classification, extreme learning machine (ELM), extracted features, Rayleigh fading channel
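The ELM mechanism this abstract credits with low complexity (random hidden nodes, output weights solved analytically) can be sketched as below. The two-Gaussian data is an invented stand-in, not the paper's GSM/WCDMA/LTE signal features:

```python
# Basic ELM: an untrained random hidden layer plus a least-squares readout.
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in 2-D as stand-in "signal features".
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
T = np.eye(2)[y]                      # one-hot targets

# Random, never-trained hidden layer (the "extreme" part of ELM).
n_hidden = 40
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                # hidden-layer activations

# Output weights in closed form via the pseudoinverse: beta = pinv(H) @ T.
beta = np.linalg.pinv(H) @ T
pred = np.argmax(H @ beta, axis=1)
train_acc = float((pred == y).mean())
print(f"training accuracy: {train_acc:.2f}")
```

Because only the linear readout is solved, there is no iterative gradient descent, which is where the abstract's speed claim comes from.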
15. Investigation on Analog and Digital Modulations Recognition Using Machine Learning Algorithms
Authors: Jean Ndoumbe, Ivan Basile Kabeina, Gaelle Patricia Talotsing, Soubiel-Noël Nkomo Biloo. World Journal of Engineering and Technology, 2024(4): 867-884 (18 pages)
In the field of radiocommunication, modulation type identification is one of the most important tasks in signal processing. This study aims to implement a modulation recognition system with two machine learning approaches, K-Nearest Neighbors (KNN) and Artificial Neural Networks (ANN). From a statistical and spectral analysis of signals, nine key differentiating features are extracted and used as input vectors for each trained model. The feature extraction is performed using the Hilbert transform and the forward and inverse Fourier transforms. The experiments with the AMC Master dataset classify ten types of analog and digital modulations: AM_DSB_FC, AM_DSB_SC, AM_USB, AM_LSB, FM, MPSK, 2PSK, MASK, 2ASK, and MQAM. For the simulation of the chosen model, signals are polluted by Additive White Gaussian Noise (AWGN). The simulation results show that the best identification rate comes from the MLP neural method, with 90.5% accuracy above a 10 dB signal-to-noise ratio, more than 15% ahead of the k-nearest neighbors algorithm.
Keywords: automatic recognition, artificial neural networks, K-Nearest Neighbors, machine learning, analog modulations, digital modulations
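One assumed-typical use of the Hilbert transform mentioned above is envelope recovery: the magnitude of the analytic signal of an AM waveform gives its instantaneous amplitude, from which features for a KNN/ANN classifier can be derived. The signal parameters below are invented:

```python
# Envelope detection via the analytic signal (FFT-based Hilbert transform).
import numpy as np

def analytic_signal(x):
    """Zero the negative frequencies and double the positive ones (even n)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(spectrum * h)

fs = 8000.0
t = np.arange(0, 0.1, 1 / fs)                        # 800 samples
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 50 * t)    # modulating signal
am = envelope * np.cos(2 * np.pi * 1000 * t)         # AM with full carrier

recovered = np.abs(analytic_signal(am))              # instantaneous amplitude
err = float(np.max(np.abs(recovered - envelope)))
print(f"max envelope error: {err:.2e}")
```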
16. Developing a diagnostic support system for audiogram interpretation using deep learning-based object detection
Authors: Titipat Achakulvisut, Suchanon Phanthong, Thanawut Timpitak, Kanpat Vesessook, Sirinan Junthong, Withita Utainrat, Kanokrat Bunnag. Journal of Otology, 2025(1): 26-32 (7 pages)
Objective: To develop and evaluate an automated system for digitizing audiograms and classifying hearing loss levels, and to compare its performance with traditional methods and otolaryngologists' interpretations. Design and Methods: We conducted a retrospective diagnostic study using 1,959 audiogram images from patients aged 7 years and older at the Faculty of Medicine, Vajira Hospital, Navamindradhiraj University. We employed an object detection approach to digitize audiograms and developed multiple machine learning models to classify six hearing loss levels. The dataset was split into 70% training (1,407 images) and 30% testing (352 images) sets. We compared our model's performance with classifications based on manually extracted audiogram values and otolaryngologists' interpretations. Results: Our object detection-based model achieved an F1-score of 94.72% in classifying hearing loss levels, comparable to the 96.43% F1-score obtained using manually extracted values. The Light Gradient Boosting Machine (LGBM) model, used as the classifier for the manually extracted data, achieved top performance with 94.72% accuracy, F1-score, recall, and precision. In the object detection-based model, the Random Forest Classifier (RFC) showed the highest accuracy of 96.43% in predicting hearing loss level, with an F1-score of 96.43%, recall of 96.43%, and precision of 96.45%. Conclusion: Our proposed automated approach for audiogram digitization and hearing loss classification performs comparably to traditional methods and otolaryngologists' interpretations. This system can potentially assist otolaryngologists in providing more timely and effective treatment by quickly and accurately classifying hearing loss.
Keywords: audiogram, deep machine learning, training set, validation set, testing set, automatic machine learning (AutoML), Random Forest Classifier (RFC), Support Vector Machine (SVM), XGBoost
17. A survey of meta-learning-based hyperparameter optimization methods
Authors: 吴佳, 刘析远, 陈森朋. 《哈尔滨工业大学学报》 (Journal of Harbin Institute of Technology, Peking University Core), 2026(1): 77-91 (15 pages)
Hyperparameter optimization is one of the key technologies in automated machine learning, aiming to automate hyperparameter tuning and relieve machine learning practitioners of that workload. In robotic systems, hyperparameter optimization plays a key role in training the neural networks of perception modules, tuning controller parameters, and improving the performance of multimodal data-fusion algorithms. Despite significant progress, however, efficiency remains the main bottleneck limiting its wide application. In recent years, the rapid development of meta-learning has opened a new path toward more efficient hyperparameter optimization; the technique shows particular advantages when robotic systems must adapt quickly to dynamic environments and new tasks. The core of meta-learning is to let a model automatically absorb and apply knowledge from a large number of prior tasks, significantly improving its learning efficiency on unseen tasks. Accordingly, many researchers are exploring how meta-learning can strengthen the search capability of hyperparameter optimization. This paper systematically reviews that line of work: it first formally defines the hyperparameter optimization problem and surveys current mainstream methods; second, it summarizes hyperparameter optimization strategies based on meta-learning theory and analyzes the characteristics of mainstream meta-learning algorithms; third, it introduces benchmark datasets in the hyperparameter optimization field and compares the experimental performance of mainstream methods on them; finally, it discusses future trends in hyperparameter optimization technology.
Keywords: automated machine learning, hyperparameter optimization, meta-learning, data mining, machine learning
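As a concrete baseline for the hyperparameter optimization problem this survey formalizes, a minimal random search over a log-scaled search space might look like the sketch below. The quadratic "validation loss" surrogate and the two hyperparameters are invented for illustration:

```python
# Random-search baseline: sample configurations, keep the best by val loss.
import numpy as np

rng = np.random.default_rng(0)

def val_loss(lr, reg):
    # Toy surrogate for a real validation run; minimized at lr=0.1, reg=0.01.
    return (np.log10(lr) + 1) ** 2 + (np.log10(reg) + 2) ** 2

best_cfg, best_loss = None, np.inf
for _ in range(200):
    lr = 10 ** rng.uniform(-4, 0)    # log-uniform over [1e-4, 1]
    reg = 10 ** rng.uniform(-4, 0)
    loss = val_loss(lr, reg)
    if loss < best_loss:
        best_cfg, best_loss = (lr, reg), loss
print("best config:", best_cfg, "loss:", round(best_loss, 4))
```

Meta-learning approaches aim to beat such uninformed baselines by transferring knowledge from prior tuning tasks into the search.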
18. Intelligent visualized determination of chemical oxygen demand by the potassium dichromate reflux method
Authors: 周跃明, 邱新, 周馨, 万潇天, 张末凡, 李丰, 邵鑫鑫, 丁鹏, 梁喜珍. 《大学化学》 (University Chemistry), 2026(1): 85-94 (10 pages)
The potassium dichromate reflux method for determining chemical oxygen demand is the standard method for water environmental quality monitoring (HJ 828-2017). The method is hazardous, costly, and heavily polluting, which limits its adoption in basic chemistry laboratory teaching. In this work, a color-sensitive camera captures images of the reacting solution, the OpenCV library in Python extracts RGB data, and machine-learning cluster analysis enables automated monitoring of the reflux heating and titration processes. Digital-twin visualization is innovatively integrated into the potassium dichromate reflux experiment, and interactive operation improves the effectiveness of simulation-based laboratory teaching.
Keywords: digital twin, potassium dichromate reflux method, chemical oxygen demand, machine learning, automated monitoring
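A hypothetical sketch of the color-monitoring idea described above: cluster per-frame RGB readings so that two color regimes of the solution separate without manual thresholds. The RGB values below are invented; the paper captures real camera frames via OpenCV:

```python
# Cluster per-frame RGB readings into two color regimes with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Invented RGB samples for two solution colors (e.g. before/after a change).
regime_a = rng.normal([200, 120, 30], 5, size=(50, 3))
regime_b = rng.normal([60, 160, 70], 5, size=(50, 3))
rgb = np.vstack([regime_a, regime_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(rgb)
# The two regimes should land in different clusters.
print("regime A clusters:", set(labels[:50]), "regime B:", set(labels[50:]))
```

In a live setup, the cluster assignment of each new frame would drive the automated monitoring decision.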
19. AutoML methods for semantic segmentation machine vision (cited 6 times)
Authors: 刘桂雄, 黄坚, 刘思洋, 廖普. 《激光杂志》 (Laser Journal, Peking University Core), 2019(6): 1-9 (9 pages)
Automatic machine learning (AutoML) can realize semantic segmentation and automates most steps of machine learning. This paper examines the algorithmic ideas, optimization targets, implementation techniques, technical indicators, application effects, and application scenarios of methods oriented toward hyperparameter optimization, transfer learning, and neural architecture search. Considering that machine learning for semantic segmentation involves many hyperparameters, relatively small datasets, and heavy annotation workloads, it points out that hyperparameter optimization, transfer learning, and neural architecture search respectively help improve training efficiency, reduce sample annotation workload, and automatically construct task-specific convolutional neural networks. Combining AutoML with machine vision can endow systems with self-learning, rapid switching between inspection targets, and the ability to solve especially complex tasks.
Keywords: machine vision, semantic segmentation, automatic machine learning, hyperparameter optimization, transfer learning, neural architecture search
20. Artificial intelligence assisted 3D in robotic uro-oncology? A systematic review and narrative synthesis of current applications, challenges and future directions
Authors: Bara Barakat, Bilal Al-Absi, Boris Hadaschik, Christian Rehme, Samer Schakaki, Joerg Bauer. The Canadian Journal of Urology, 2026(1): 105-116 (12 pages)
Background: Artificial intelligence (AI)-assisted three-dimensional (3D) surgical platforms, integrated with augmented reality, have the potential to improve intraoperative anatomical recognition and provide surgeons with an immersive, dynamic operating environment during uro-oncological procedures. This review examines the current applications of AI in robotic uro-oncology, with a particular focus on its role in facilitating intraoperative navigation during complex surgeries. Methods: A systematic literature search was performed across PubMed, the National Library of Medicine, MEDLINE, the Cochrane Central Register of Controlled Trials (CENTRAL), ClinicalTrials.gov, and Google Scholar to identify relevant studies published up to July 2025. The search strategy incorporated a predefined set of keywords, including AI, machine learning, radical prostatectomy (RP), robotic-assisted radical prostatectomy (RARP), robot-assisted partial nephrectomy (RAPN), and robot-assisted radical cystectomy (RARC). Only clinical trials, full-text peer-reviewed publications, and original research articles were included. Studies were eligible for inclusion if they evaluated or described applications of AI in RARP, RAPN, or RARC. Results: Technological advancements have substantially transformed the field of uro-oncologic surgery. In particular, AI and AI-assisted intraoperative navigation in RARP demonstrate considerable potential to objectively assess surgical performance and predict clinical outcomes. In RAPN, the adoption of preoperative, interactive 3D virtual models for surgical planning has influenced surgical decisions; the enhanced precision in resection planning correlates with superior nephron-sparing outcomes and optimized selective clamping. Among AI applications in RARC, techniques such as augmented reality (AR) can overlay critical information on the surgical field, facilitating navigation through complex anatomical planes and enhancing identification of critical structures. Conclusion: AI appears to enhance robotic uro-oncologic procedures by increasing operative precision and supporting individualised surgical treatment strategies.
Keywords: artificial intelligence, robot-assisted surgery, machine learning, deep learning, automatic three-dimensional surgical navigation, intuitive surgical, systematic review