BACKGROUND Ischemic heart disease(IHD)impacts the quality of life and has the highest mortality rate of cardiovascular diseases globally.AIM To compare variations in the parameters of the single-lead electrocardiogram...BACKGROUND Ischemic heart disease(IHD)impacts the quality of life and has the highest mortality rate of cardiovascular diseases globally.AIM To compare variations in the parameters of the single-lead electrocardiogram(ECG)during resting conditions and physical exertion in individuals diagnosed with IHD and those without the condition using vasodilator-induced stress computed tomography(CT)myocardial perfusion imaging as the diagnostic reference standard.METHODS This single center observational study included 80 participants.The participants were aged≥40 years and given an informed written consent to participate in the study.Both groups,G1(n=31)with and G2(n=49)without post stress induced myocardial perfusion defect,passed cardiologist consultation,anthropometric measurements,blood pressure and pulse rate measurement,echocardiography,cardio-ankle vascular index,bicycle ergometry,recording 3-min single-lead ECG(Cardio-Qvark)before and just after bicycle ergometry followed by performing CT myocardial perfusion.The LASSO regression with nested cross-validation was used to find the association between Cardio-Qvark parameters and the existence of the perfusion defect.Statistical processing was performed with the R programming language v4.2,Python v.3.10[^R],and Statistica 12 program.RESULTS Bicycle ergometry yielded an area under the receiver operating characteristic curve of 50.7%[95%confidence interval(CI):0.388-0.625],specificity of 53.1%(95%CI:0.392-0.673),and sensitivity of 48.4%(95%CI:0.306-0.657).In contrast,the Cardio-Qvark test performed notably better with an area under the receiver operating characteristic curve of 67%(95%CI:0.530-0.801),specificity of 75.5%(95%CI:0.628-0.88),and sensitivity of 51.6%(95%CI:0.333-0.695).CONCLUSION The single-lead ECG has a relatively higher diagnostic accuracy compared with bicycle ergometry by using machine learning models,but the difference was not statistically significant.However,further investigations are required to uncover the hidden capabilities of single-lead ECG in IHD diagnosis.展开更多
Sporadic E(Es)layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km.Because they can significantly influence radio communications and navigation systems,...Sporadic E(Es)layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km.Because they can significantly influence radio communications and navigation systems,accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems.In this study,we present Es predictions made by an empirical model and by a deep learning model,and analyze their differences comprehensively by comparing the model predictions to satellite RO measurements and ground-based ionosonde observations.The deep learning model exhibited significantly better performance,as indicated by its high coefficient of correlation(r=0.87)with RO observations and predictions,than did the empirical model(r=0.53).This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally,and into predicting Es layer occurrences and characteristics,in particular.展开更多
The backwater effect caused by tributary inflow can significantly elevate the water level profile upstream of a confluence point.However,the influence of mainstream and confluence discharges on the backwater effect in...The backwater effect caused by tributary inflow can significantly elevate the water level profile upstream of a confluence point.However,the influence of mainstream and confluence discharges on the backwater effect in a river reach remains unclear.In this study,various hydrological data collected from the Jingjiang Reach of the Yangtze River in China were statistically analyzed to determine the backwater degree and range with three representative mainstream discharges.The results indicated that the backwater degree increased with mainstream discharge,and a positive relationship was observed between the runoff ratio and backwater degree at specific representative mainstream discharges.Following the operation of the Three Gorges Project,the backwater effect in the Jingjiang Reach diminished.For instance,mean backwater degrees for low,moderate,and high mainstream discharges were recorded as 0.83 m,1.61 m,and 2.41 m during the period from 1990 to 2002,whereas these values decreased to 0.30 m,0.95 m,and 2.08 m from 2009 to 2020.The backwater range extended upstream as mainstream discharge increased from 7000 m3/s to 30000 m3/s.Moreover,a random forest-based machine learning model was used to quantify the backwater effect with varying mainstream and confluence discharges,accounting for the impacts of mainstream discharge,confluence discharge,and channel degradation in the Jingjiang Reach.At the Jianli Hydrological Station,a decrease in mainstream discharge during flood seasons resulted in a 7%–15%increase in monthly mean backwater degree,while an increase in mainstream discharge during dry seasons led to a 1%–15%decrease in monthly mean backwater degree.Furthermore,increasing confluence discharge from Dongting Lake during June to July and September to November resulted in an 11%–42%increase in monthly mean backwater degree.Continuous channel degradation in the Jingjiang Reach contributed to a 6%–19%decrease in monthly mean backwater degree.Under the influence of these factors,the monthly mean backwater degree in 2017 varied from a decrease of 53%to an increase of 37%compared to corresponding values in 1991.展开更多
Understanding spatial heterogeneity in groundwater responses to multiple factors is critical for water resource management in coastal cities.Daily groundwater depth(GWD)data from 43 wells(2018-2022)were collected in t...Understanding spatial heterogeneity in groundwater responses to multiple factors is critical for water resource management in coastal cities.Daily groundwater depth(GWD)data from 43 wells(2018-2022)were collected in three coastal cities in Jiangsu Province,China.Seasonal and Trend decomposition using Loess(STL)together with wavelet analysis and empirical mode decomposition were applied to identify tide-influenced wells while remaining wells were grouped by hierarchical clustering analysis(HCA).Machine learning models were developed to predict GWD,then their response to natural conditions and human activities was assessed by the Shapley Additive exPlanations(SHAP)method.Results showed that eXtreme Gradient Boosting(XGB)was superior to other models in terms of prediction performance and computational efficiency(R^(2)>0.95).GWD in Yancheng and southern Lianyungang were greater than those in Nantong,exhibiting larger fluctuations.Groundwater within 5 km of the coastline was affected by tides,with more pronounced effects in agricultural areas compared to urban areas.Shallow groundwater(3-7 m depth)responded immediately(0-1 day)to rainfall,primarily influenced by farmland and topography(slope and distance from rivers).Rainfall recharge to groundwater peaked at 50%farmland coverage,but this effect was suppressed by high temperatures(>30℃)which intensified as distance from rivers increased,especially in forest and grassland.Deep groundwater(>10 m)showed delayed responses to rainfall(1-4 days)and temperature(10-15 days),with GDP as the primary influence,followed by agricultural irrigation and population density.Farmland helped to maintain stable GWD in low population density regions,while excessive farmland coverage(>90%)led to overexploitation.In the early stages of GDP development,increased industrial and agricultural water demand led to GWD decline,but as GDP levels significantly improved,groundwater consumption pressure gradually eased.This methodological framework is applicable not only to coastal cities in China but also could be extended to coastal regions worldwide.展开更多
This research investigates the influence of indoor and outdoor factors on photovoltaic(PV)power generation at Utrecht University to accurately predict PV system performance by identifying critical impact factors and i...This research investigates the influence of indoor and outdoor factors on photovoltaic(PV)power generation at Utrecht University to accurately predict PV system performance by identifying critical impact factors and improving renewable energy efficiency.To predict plant efficiency,nineteen variables are analyzed,consisting of nine indoor photovoltaic panel characteristics(Open Circuit Voltage(Voc),Short Circuit Current(Isc),Maximum Power(Pmpp),Maximum Voltage(Umpp),Maximum Current(Impp),Filling Factor(FF),Parallel Resistance(Rp),Series Resistance(Rs),Module Temperature)and ten environmental factors(Air Temperature,Air Humidity,Dew Point,Air Pressure,Irradiation,Irradiation Propagation,Wind Speed,Wind Speed Propagation,Wind Direction,Wind Direction Propagation).This study provides a new perspective not previously addressed in the literature.In this study,different machine learning methods such as Multilayer Perceptron(MLP),Multivariate Adaptive Regression Spline(MARS),Multiple Linear Regression(MLR),and Random Forest(RF)models are used to predict power values using data from installed PVpanels.Panel values obtained under real field conditions were used to train the models,and the results were compared.The Multilayer Perceptron(MLP)model was achieved with the highest classification accuracy of 0.990%.The machine learning models used for solar energy forecasting show high performance and produce results close to actual values.Models like Multi-Layer Perceptron(MLP)and Random Forest(RF)can be used in diverse locations based on load demand.展开更多
Well logging technology has accumulated a large amount of historical data through four generations of technological development,which forms the basis of well logging big data and digital assets.However,the value of th...Well logging technology has accumulated a large amount of historical data through four generations of technological development,which forms the basis of well logging big data and digital assets.However,the value of these data has not been well stored,managed and mined.With the development of cloud computing technology,it provides a rare development opportunity for logging big data private cloud.The traditional petrophysical evaluation and interpretation model has encountered great challenges in the face of new evaluation objects.The solution research of logging big data distributed storage,processing and learning functions integrated in logging big data private cloud has not been carried out yet.To establish a distributed logging big-data private cloud platform centered on a unifi ed learning model,which achieves the distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via the unifi ed logging learning model integrating physical simulation and data models in a large-scale functional space,thus resolving the geo-engineering evaluation problem of geothermal fi elds.Based on the research idea of“logging big data cloud platform-unifi ed logging learning model-large function space-knowledge learning&discovery-application”,the theoretical foundation of unified learning model,cloud platform architecture,data storage and learning algorithm,arithmetic power allocation and platform monitoring,platform stability,data security,etc.have been carried on analysis.The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms.The feasibility of constructing a well logging big data cloud platform based on a unifi ed learning model of physics and data is analyzed in terms of the structure,ecology,management and security of the cloud platform.The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery method,data software and results sharing,accuracy,speed and complexity.展开更多
The prediction of fundus fluorescein angiography(FFA)images from fundus structural images is a cutting-edge research topic in ophthalmological image processing.Prediction comprises estimating FFA from fundus camera im...The prediction of fundus fluorescein angiography(FFA)images from fundus structural images is a cutting-edge research topic in ophthalmological image processing.Prediction comprises estimating FFA from fundus camera imaging,single-phase FFA from scanning laser ophthalmoscopy(SLO),and three-phase FFA also from SLO.Although many deep learning models are available,a single model can only perform one or two of these prediction tasks.To accomplish three prediction tasks using a unified method,we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network.The three prediction tasks are processed as follows:data preparation,network training under FFA supervision,and FFA image prediction from fundus structure images on a test set.By comparing the FFA images predicted by our model,pix2pix,and CycleGAN,we demonstrate the remarkable progress achieved by our proposal.The high performance of our model is validated in terms of the peak signal-to-noise ratio,structural similarity index,and mean squared error.展开更多
The Indian Himalayan region is frequently experiencing climate change-induced landslides.Thus,landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard.This paper ...The Indian Himalayan region is frequently experiencing climate change-induced landslides.Thus,landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard.This paper makes an attempt to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region.It examined the effectiveness of random forest(RF),multilayer perceptron(MLP),sequential minimal optimization regression(SMOreg)and bagging ensemble(B-RF,BSMOreg,B-MLP)models.A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training(70%)and testing(30%)datasets.The site-specific influencing factors were selected by employing a multicollinearity test.The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method.The effectiveness of machine learning models was verified through performance assessors.The landslide susceptibility maps were validated by the area under the receiver operating characteristic curves(ROC-AUC),accuracy,precision,recall and F1-score.The key performance metrics and map validation demonstrated that the BRF model(correlation coefficient:0.988,mean absolute error:0.010,root mean square error:0.058,relative absolute error:2.964,ROC-AUC:0.947,accuracy:0.778,precision:0.819,recall:0.917 and F-1 score:0.865)outperformed the single classifiers and other bagging ensemble models for landslide susceptibility.The results show that the largest area was found under the very high susceptibility zone(33.87%),followed by the low(27.30%),high(20.68%)and moderate(18.16%)susceptibility zones.The factors,namely average annual rainfall,slope,lithology,soil texture and earthquake magnitude have been identified as the influencing factors for very high landslide susceptibility.Soil texture,lineament density and elevation have been attributed to high and moderate susceptibility.Thus,the study calls for devising suitable landslide mitigation measures in the study area.Structural measures,an immediate response system,community participation and coordination among stakeholders may help lessen the detrimental impact of landslides.The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.展开更多
BACKGROUND Colorectal cancer significantly impacts global health,with unplanned reoperations post-surgery being key determinants of patient outcomes.Existing predictive models for these reoperations lack precision in ...BACKGROUND Colorectal cancer significantly impacts global health,with unplanned reoperations post-surgery being key determinants of patient outcomes.Existing predictive models for these reoperations lack precision in integrating complex clinical data.AIM To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients.METHODS Data of patients treated for colorectal cancer(n=2044)at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected.Patients were divided into an experimental group(n=60)and a control group(n=1984)according to unplanned reoperation occurrence.Patients were also divided into a training group and a validation group(7:3 ratio).We used three different machine learning methods to screen characteristic variables.A nomogram was created based on multifactor logistic regression,and the model performance was assessed using receiver operating characteristic curve,calibration curve,Hosmer-Lemeshow test,and decision curve analysis.The risk scores of the two groups were calculated and compared to validate the model.RESULTS More patients in the experimental group were≥60 years old,male,and had a history of hypertension,laparotomy,and hypoproteinemia,compared to the control group.Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation(P<0.05):Prognostic Nutritional Index value,history of laparotomy,hypertension,or stroke,hypoproteinemia,age,tumor-node-metastasis staging,surgical time,gender,and American Society of Anesthesiologists classification.Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility.CONCLUSION This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer,which can improve treatment decisions and prognosis.展开更多
BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are p...BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT,marking a significant evolution in the field of prognostication.展开更多
Objective:To analyze the effect of using a problem-based(PBL)independent learning model in teaching cerebral ischemic stroke(CIS)first aid in emergency medicine.Methods:90 interns in the emergency department of our ho...Objective:To analyze the effect of using a problem-based(PBL)independent learning model in teaching cerebral ischemic stroke(CIS)first aid in emergency medicine.Methods:90 interns in the emergency department of our hospital from May 2022 to May 2023 were selected for the study.They were divided into Group A(45,conventional teaching method)and Group B(45 cases,PBL independent learning model)by randomized numerical table method to compare the effects of the two groups.Results:The teaching effect indicators and student satisfaction scores in Group B were higher than those in Group A(P<0.05).Conclusion:The use of the PBL independent learning model in the teaching of CIS first aid can significantly improve the teaching effect and student satisfaction.展开更多
AIM:To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings.METHODS:A total of 188 patients from outpatient clinic at He Eye Specialist Shenyang Hos...AIM:To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings.METHODS:A total of 188 patients from outpatient clinic at He Eye Specialist Shenyang Hospital from Spetember to December 2022 were included,and 13470 infrared pupil images were collected for the study.All infrared images for pupil segmentation were labeled using the Labelme software.The computation of pupil diameter is divided into four steps:image pre-processing,pupil identification and localization,pupil segmentation,and diameter calculation.Two major models are used in the computation process:the modified YoloV3 and Deeplabv 3+models,which must be trained beforehand.RESULTS:The test dataset included 1348 infrared pupil images.On the test dataset,the modified YoloV3 model had a detection rate of 99.98% and an average precision(AP)of 0.80 for pupils.The DeeplabV3+model achieved a background intersection over union(IOU)of 99.23%,a pupil IOU of 93.81%,and a mean IOU of 96.52%.The pupil diameters in the test dataset ranged from 20 to 56 pixels,with a mean of 36.06±6.85 pixels.The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels,with a mean absolute error(MAE)of 1.06±0.96 pixels.CONCLUSION:This study successfully demonstrates a robust infrared image-based pupil diameter measurement algorithm,proven to be highly accurate and reliable for clinical application.展开更多
The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage,impacting the aero-dynamic performance of the blades.To address the challenge of detecting and quantifying surface defects on ...The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage,impacting the aero-dynamic performance of the blades.To address the challenge of detecting and quantifying surface defects on wind turbine blades,a blade surface defect detection and quantification method based on an improved Deeplabv3+deep learning model is proposed.Firstly,an improved method for wind turbine blade surface defect detection,utilizing Mobilenetv2 as the backbone feature extraction network,is proposed based on an original Deeplabv3+deep learning model to address the issue of limited robustness.Secondly,through integrating the concept of pre-trained weights from transfer learning and implementing a freeze training strategy,significant improvements have been made to enhance both the training speed and model training accuracy of this deep learning model.Finally,based on segmented blade surface defect images,a method for quantifying blade defects is proposed.This method combines image stitching algorithms to achieve overall quantification and risk assessment of the entire blade.Test results show that the improved Deeplabv3+deep learning model reduces training time by approximately 43.03%compared to the original model,while achieving mAP and MIoU values of 96.87%and 96.93%,respectively.Moreover,it demonstrates robustness in detecting different surface defects on blades across different back-grounds.The application of a blade surface defect quantification method enables the precise quantification of dif-ferent defects and facilitates the assessment of risk levels associated with defect measurements across the entire blade.This method enables non-contact,long-distance,high-precision detection and quantification of surface defects on the blades,providing a reference for assessing surface defects on wind turbine blades.展开更多
To perform landslide susceptibility prediction(LSP),it is important to select appropriate mapping unit and landslide-related conditioning factors.The efficient and automatic multi-scale segmentation(MSS)method propose...To perform landslide susceptibility prediction(LSP),it is important to select appropriate mapping unit and landslide-related conditioning factors.The efficient and automatic multi-scale segmentation(MSS)method proposed by the authors promotes the application of slope units.However,LSP modeling based on these slope units has not been performed.Moreover,the heterogeneity of conditioning factors in slope units is neglected,leading to incomplete input variables of LSP modeling.In this study,the slope units extracted by the MSS method are used to construct LSP modeling,and the heterogeneity of conditioning factors is represented by the internal variations of conditioning factors within slope unit using the descriptive statistics features of mean,standard deviation and range.Thus,slope units-based machine learning models considering internal variations of conditioning factors(variant slope-machine learning)are proposed.The Chongyi County is selected as the case study and is divided into 53,055 slope units.Fifteen original slope unit-based conditioning factors are expanded to 38 slope unit-based conditioning factors through considering their internal variations.Random forest(RF)and multi-layer perceptron(MLP)machine learning models are used to construct variant Slope-RF and Slope-MLP models.Meanwhile,the Slope-RF and Slope-MLP models without considering the internal variations of conditioning factors,and conventional grid units-based machine learning(Grid-RF and MLP)models are built for comparisons through the LSP performance assessments.Results show that the variant Slopemachine learning models have higher LSP performances than Slope-machine learning models;LSP results of variant Slope-machine learning models have stronger directivity and practical application than Grid-machine learning models.It is concluded that slope units extracted by MSS method can be appropriate for LSP modeling,and the heterogeneity of conditioning factors within slope units can more comprehensively reflect the relationships between conditioning factors and landslides.The research results have important reference significance for land use and landslide prevention.展开更多
BACKGROUND Bleeding is one of the major complications after endoscopic submucosal dissection(ESD)in early gastric cancer(EGC)patients.There are limited studies on estimating the bleeding risk after ESD using an artifi...BACKGROUND Bleeding is one of the major complications after endoscopic submucosal dissection(ESD)in early gastric cancer(EGC)patients.There are limited studies on estimating the bleeding risk after ESD using an artificial intelligence system.AIM To derivate and verify the performance of the deep learning model and the clinical model for predicting bleeding risk after ESD in EGC patients.METHODS Patients with EGC who underwent ESD between January 2010 and June 2020 at the Samsung Medical Center were enrolled,and post-ESD bleeding(PEB)was investigated retrospectively.We split the entire cohort into a development set(80%)and a validation set(20%).The deep learning and clinical model were built on the development set and tested in the validation set.The performance of the deep learning model and the clinical model were compared using the area under the curve and the stratification of bleeding risk after ESD.RESULTS A total of 5629 patients were included,and PEB occurred in 325 patients.The area under the curve for predicting PEB was 0.71(95%confidence interval:0.63-0.78)in the deep learning model and 0.70(95%confidence interval:0.62-0.77)in the clinical model,without significant difference(P=0.730).The patients expected to the low-(<5%),intermediate-(≥5%,<9%),and high-risk(≥9%)categories were observed with actual bleeding rate of 2.2%,3.9%,and 11.6%,respectively,in the deep learning model;4.0%,8.8%,and 18.2%,respectively,in the clinical model.CONCLUSION A deep learning model can predict and stratify the bleeding risk after ESD in patients with EGC.展开更多
In recent years evidence has emerged suggesting that Mini-basketball training program(MBTP)can be an effec-tive intervention method to improve social communication(SC)impairments and restricted and repetitive beha-vio...In recent years evidence has emerged suggesting that Mini-basketball training program(MBTP)can be an effec-tive intervention method to improve social communication(SC)impairments and restricted and repetitive beha-viors(RRBs)in preschool children suffering from autism spectrum disorder(ASD).However,there is a considerable degree if interindividual variability concerning these social outcomes and thus not all preschool chil-dren with ASD profit from a MBTP intervention to the same extent.In order to make more accurate predictions which preschool children with ASD can benefit from an MBTP intervention or which preschool children with ASD need additional interventions to achieve behavioral improvements,further research is required.This study aimed to investigate which individual factors of preschool children with ASD can predict MBTP intervention out-comes concerning SC impairments and RRBs.Then,test the performance of machine learning models in predict-ing intervention outcomes based on these factors.Participants were 26 preschool children with ASD who enrolled in a quasi-experiment and received MBTP intervention.Baseline demographic variables(e.g.,age,body,mass index[BMI]),indicators of physicalfitness(e.g.,handgrip strength,balance performance),performance in execu-tive function,severity of ASD symptoms,level of SC impairments,and severity of RRBs were obtained to predict treatment outcomes after MBTP intervention.Machine learning models were established based on support vector machine algorithm were implemented.For comparison,we also employed multiple linear regression models in statistics.Ourfindings suggest that in preschool children with ASD symptomatic severity(r=0.712,p<0.001)and baseline SC impairments(r=0.713,p<0.001)are predictors for intervention outcomes of SC impair-ments.Furthermore,BMI(r=-0.430,p=0.028),symptomatic severity(r=0.656,p<0.001),baseline SC impair-ments(r=0.504,p=0.009)and baseline RRBs(r=0.647,p<0.001)can predict intervention outcomes of RRBs.Statistical models predicted 59.6%of variance in post-treatment SC impairments(MSE=0.455,RMSE=0.675,R2=0.596)and 58.9%of variance in post-treatment RRBs(MSE=0.464,RMSE=0.681,R2=0.589).Machine learning models predicted 83%of variance in post-treatment SC impairments(MSE=0.188,RMSE=0.434,R2=0.83)and 85.9%of variance in post-treatment RRBs(MSE=0.051,RMSE=0.226,R2=0.859),which were better than statistical models.Ourfindings suggest that baseline characteristics such as symptomatic severity of 144 IJMHP,2022,vol.24,no.2 ASD symptoms and SC impairments are important predictors determining MBTP intervention-induced improvements concerning SC impairments and RBBs.Furthermore,the current study revealed that machine learning models can successfully be applied to predict the MBTP intervention-related outcomes in preschool chil-dren with ASD,and performed better than statistical models.Ourfindings can help to inform which preschool children with ASD are most likely to benefit from an MBTP intervention,and they might provide a reference for the development of personalized intervention programs for preschool children with ASD.展开更多
Stock market trends forecast is one of the most current topics and a significant research challenge due to its dynamic and unstable nature.The stock data is usually non-stationary,and attributes are non-correlative to...Stock market trends forecast is one of the most current topics and a significant research challenge due to its dynamic and unstable nature.The stock data is usually non-stationary,and attributes are non-correlative to each other.Several traditional Stock Technical Indicators(STIs)may incorrectly predict the stockmarket trends.To study the stock market characteristics using STIs and make efficient trading decisions,a robust model is built.This paper aims to build up an Evolutionary Deep Learning Model(EDLM)to identify stock trends’prices by using STIs.The proposed model has implemented the Deep Learning(DL)model to establish the concept of Correlation-Tensor.The analysis of the dataset of three most popular banking organizations obtained from the live stock market based on the National Stock exchange(NSE)-India,a Long Short Term Memory(LSTM)is used.The datasets encompassed the trading days from the 17^(th) of Nov 2008 to the 15^(th) of Nov 2018.This work also conducted exhaustive experiments to study the correlation of various STIs with stock price trends.The model built with an EDLM has shown significant improvements over two benchmark ML models and a deep learning one.The proposed model aids investors in making profitable investment decisions as it presents trend-based forecasting and has achieved a prediction accuracy of 63.59%,56.25%,and 57.95%on the datasets of HDFC,Yes Bank,and SBI,respectively.Results indicate that the proposed EDLA with a combination of STIs can often provide improved results than the other state-of-the-art algorithms.展开更多
Every day,websites and personal archives create more and more photos.The size of these archives is immeasurable.The comfort of use of these huge digital image gatherings donates to their admiration.However,not all of ...Every day,websites and personal archives create more and more photos.The size of these archives is immeasurable.The comfort of use of these huge digital image gatherings donates to their admiration.However,not all of these folders deliver relevant indexing information.From the outcomes,it is dif-ficult to discover data that the user can be absorbed in.Therefore,in order to determine the significance of the data,it is important to identify the contents in an informative manner.Image annotation can be one of the greatest problematic domains in multimedia research and computer vision.Hence,in this paper,Adap-tive Convolutional Deep Learning Model(ACDLM)is developed for automatic image annotation.Initially,the databases are collected from the open-source system which consists of some labelled images(for training phase)and some unlabeled images{Corel 5 K,MSRC v2}.After that,the images are sent to the pre-processing step such as colour space quantization and texture color class map.The pre-processed images are sent to the segmentation approach for efficient labelling technique using J-image segmentation(JSEG).Thefinal step is an auto-matic annotation using ACDLM which is a combination of Convolutional Neural Network(CNN)and Honey Badger Algorithm(HBA).Based on the proposed classifier,the unlabeled images are labelled.The proposed methodology is imple-mented in MATLAB and performance is evaluated by performance metrics such as accuracy,precision,recall and F1_Measure.With the assistance of the pro-posed methodology,the unlabeled images are labelled.展开更多
This study employs nine distinct deep learning models to categorize 12,444 blood cell images and automatically extract from them relevant information with an accuracy that is beyond that achievable with traditional te...This study employs nine distinct deep learning models to categorize 12,444 blood cell images and automatically extract from them relevant information with an accuracy that is beyond that achievable with traditional techniques.The work is intended to improve current methods for the assessment of human health through measurement of the distribution of four types of blood cells,namely,eosinophils,neutrophils,monocytes,and lymphocytes,known for their relationship with human body damage,inflammatory regions,and organ illnesses,in particular,and with the health of the immune system and other hazards,such as cardiovascular disease or infections,more in general.The results of the experiments show that the deep learning models can automatically extract features from the blood cell images and properly classify them with an accuracy of 98%,97%,and 89%,respectively,with regard to the training,verification,and testing of the corresponding datasets.展开更多
Forecasting the movement of stock market is a long-time attractive topic. This paper implements different statistical learning models to predict the movement of S&P 500 index. The S&P 500 index is influenced b...Forecasting the movement of stock market is a long-time attractive topic. This paper implements different statistical learning models to predict the movement of S&P 500 index. The S&P 500 index is influenced by other important financial indexes across the world such as commodity price and financial technical indicators. This paper systematically investigated four supervised learning models, including Logistic Regression, Gaussian Discriminant Analysis (GDA), Naive Bayes and Support Vector Machine (SVM) in the forecast of S&P 500 index. After several experiments of optimization in features and models, especially the SVM kernel selection and feature selection for different models, this paper concludes that a SVM model with a Radial Basis Function (RBF) kernel can achieve an accuracy rate of 62.51% for the future market trend of the S&P 500 index.展开更多
基金Supported by Government Assignment,No.1023022600020-6RSF Grant,No.24-15-00549Ministry of Science and Higher Education of the Russian Federation within the Framework of State Support for the Creation and Development of World-Class Research Center,No.075-15-2022-304.
文摘BACKGROUND Ischemic heart disease(IHD)impacts the quality of life and has the highest mortality rate of cardiovascular diseases globally.AIM To compare variations in the parameters of the single-lead electrocardiogram(ECG)during resting conditions and physical exertion in individuals diagnosed with IHD and those without the condition using vasodilator-induced stress computed tomography(CT)myocardial perfusion imaging as the diagnostic reference standard.METHODS This single center observational study included 80 participants.The participants were aged≥40 years and given an informed written consent to participate in the study.Both groups,G1(n=31)with and G2(n=49)without post stress induced myocardial perfusion defect,passed cardiologist consultation,anthropometric measurements,blood pressure and pulse rate measurement,echocardiography,cardio-ankle vascular index,bicycle ergometry,recording 3-min single-lead ECG(Cardio-Qvark)before and just after bicycle ergometry followed by performing CT myocardial perfusion.The LASSO regression with nested cross-validation was used to find the association between Cardio-Qvark parameters and the existence of the perfusion defect.Statistical processing was performed with the R programming language v4.2,Python v.3.10[^R],and Statistica 12 program.RESULTS Bicycle ergometry yielded an area under the receiver operating characteristic curve of 50.7%[95%confidence interval(CI):0.388-0.625],specificity of 53.1%(95%CI:0.392-0.673),and sensitivity of 48.4%(95%CI:0.306-0.657).In contrast,the Cardio-Qvark test performed notably better with an area under the receiver operating characteristic curve of 67%(95%CI:0.530-0.801),specificity of 75.5%(95%CI:0.628-0.88),and sensitivity of 51.6%(95%CI:0.333-0.695).CONCLUSION The single-lead ECG has a relatively higher diagnostic accuracy compared with bicycle ergometry by using machine learning models,but the difference was not statistically significant.However,further investigations are required to uncover the hidden capabilities of single-lead ECG in IHD diagnosis.
基金supported by the Project of Stable Support for Youth Team in Basic Research Field,CAS(grant No.YSBR-018)the National Natural Science Foundation of China(grant Nos.42188101,42130204)+4 种基金the B-type Strategic Priority Program of CAS(grant no.XDB41000000)the National Natural Science Foundation of China(NSFC)Distinguished Overseas Young Talents Program,Innovation Program for Quantum Science and Technology(2021ZD0300301)the Open Research Project of Large Research Infrastructures of CAS-“Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project”.The project was supported also by the National Key Laboratory of Deep Space Exploration(Grant No.NKLDSE2023A002)the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection(Grant No.APKLIUD23KF01)the China National Space Administration(CNSA)pre-research Project on Civil Aerospace Technologies No.D010305,D010301.
文摘Sporadic E(Es)layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km.Because they can significantly influence radio communications and navigation systems,accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems.In this study,we present Es predictions made by an empirical model and by a deep learning model,and analyze their differences comprehensively by comparing the model predictions to satellite RO measurements and ground-based ionosonde observations.The deep learning model exhibited significantly better performance,as indicated by its high coefficient of correlation(r=0.87)with RO observations and predictions,than did the empirical model(r=0.53).This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally,and into predicting Es layer occurrences and characteristics,in particular.
基金supported by the National Key Research and Development Program of China(Grant No.2023YFC3209504)the National Natural Science Foundation of China(Grants No.U2040215 and 52479075)the Natural Science Foundation of Hubei Province(Grant No.2021CFA029).
文摘The backwater effect caused by tributary inflow can significantly elevate the water level profile upstream of a confluence point.However,the influence of mainstream and confluence discharges on the backwater effect in a river reach remains unclear.In this study,various hydrological data collected from the Jingjiang Reach of the Yangtze River in China were statistically analyzed to determine the backwater degree and range with three representative mainstream discharges.The results indicated that the backwater degree increased with mainstream discharge,and a positive relationship was observed between the runoff ratio and backwater degree at specific representative mainstream discharges.Following the operation of the Three Gorges Project,the backwater effect in the Jingjiang Reach diminished.For instance,mean backwater degrees for low,moderate,and high mainstream discharges were recorded as 0.83 m,1.61 m,and 2.41 m during the period from 1990 to 2002,whereas these values decreased to 0.30 m,0.95 m,and 2.08 m from 2009 to 2020.The backwater range extended upstream as mainstream discharge increased from 7000 m3/s to 30000 m3/s.Moreover,a random forest-based machine learning model was used to quantify the backwater effect with varying mainstream and confluence discharges,accounting for the impacts of mainstream discharge,confluence discharge,and channel degradation in the Jingjiang Reach.At the Jianli Hydrological Station,a decrease in mainstream discharge during flood seasons resulted in a 7%–15%increase in monthly mean backwater degree,while an increase in mainstream discharge during dry seasons led to a 1%–15%decrease in monthly mean backwater degree.Furthermore,increasing confluence discharge from Dongting Lake during June to July and September to November resulted in an 11%–42%increase in monthly mean backwater degree.Continuous channel degradation in the Jingjiang Reach contributed to a 6%–19%decrease in monthly mean backwater degree.Under the influence of these factors,the monthly mean backwater degree in 2017 varied from a decrease of 53%to an increase of 37%compared to corresponding values in 1991.
基金supported by the Natural Science Foundation of Jiangsu province,China(BK20240937)the Belt and Road Special Foundation of the National Key Laboratory of Water Disaster Prevention(2022491411,2021491811)the Basal Research Fund of Central Public Welfare Scientific Institution of Nanjing Hydraulic Research Institute(Y223006).
文摘Understanding spatial heterogeneity in groundwater responses to multiple factors is critical for water resource management in coastal cities.Daily groundwater depth(GWD)data from 43 wells(2018-2022)were collected in three coastal cities in Jiangsu Province,China.Seasonal and Trend decomposition using Loess(STL)together with wavelet analysis and empirical mode decomposition were applied to identify tide-influenced wells while remaining wells were grouped by hierarchical clustering analysis(HCA).Machine learning models were developed to predict GWD,then their response to natural conditions and human activities was assessed by the Shapley Additive exPlanations(SHAP)method.Results showed that eXtreme Gradient Boosting(XGB)was superior to other models in terms of prediction performance and computational efficiency(R^(2)>0.95).GWD in Yancheng and southern Lianyungang were greater than those in Nantong,exhibiting larger fluctuations.Groundwater within 5 km of the coastline was affected by tides,with more pronounced effects in agricultural areas compared to urban areas.Shallow groundwater(3-7 m depth)responded immediately(0-1 day)to rainfall,primarily influenced by farmland and topography(slope and distance from rivers).Rainfall recharge to groundwater peaked at 50%farmland coverage,but this effect was suppressed by high temperatures(>30℃)which intensified as distance from rivers increased,especially in forest and grassland.Deep groundwater(>10 m)showed delayed responses to rainfall(1-4 days)and temperature(10-15 days),with GDP as the primary influence,followed by agricultural irrigation and population density.Farmland helped to maintain stable GWD in low population density regions,while excessive farmland coverage(>90%)led to overexploitation.In the early stages of GDP development,increased industrial and agricultural water demand led to GWD decline,but as GDP levels significantly improved,groundwater consumption pressure gradually eased.This methodological framework is applicable not only to coastal cities in China but also could be extended to coastal regions worldwide.
文摘This research investigates the influence of indoor and outdoor factors on photovoltaic(PV)power generation at Utrecht University to accurately predict PV system performance by identifying critical impact factors and improving renewable energy efficiency.To predict plant efficiency,nineteen variables are analyzed,consisting of nine indoor photovoltaic panel characteristics(Open Circuit Voltage(Voc),Short Circuit Current(Isc),Maximum Power(Pmpp),Maximum Voltage(Umpp),Maximum Current(Impp),Filling Factor(FF),Parallel Resistance(Rp),Series Resistance(Rs),Module Temperature)and ten environmental factors(Air Temperature,Air Humidity,Dew Point,Air Pressure,Irradiation,Irradiation Propagation,Wind Speed,Wind Speed Propagation,Wind Direction,Wind Direction Propagation).This study provides a new perspective not previously addressed in the literature.In this study,different machine learning methods such as Multilayer Perceptron(MLP),Multivariate Adaptive Regression Spline(MARS),Multiple Linear Regression(MLR),and Random Forest(RF)models are used to predict power values using data from installed PVpanels.Panel values obtained under real field conditions were used to train the models,and the results were compared.The Multilayer Perceptron(MLP)model was achieved with the highest classification accuracy of 0.990%.The machine learning models used for solar energy forecasting show high performance and produce results close to actual values.Models like Multi-Layer Perceptron(MLP)and Random Forest(RF)can be used in diverse locations based on load demand.
基金supported By Grant (PLN2022-14) of State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University)。
文摘Well logging technology has accumulated a large amount of historical data through four generations of technological development,which forms the basis of well logging big data and digital assets.However,the value of these data has not been well stored,managed and mined.With the development of cloud computing technology,it provides a rare development opportunity for logging big data private cloud.The traditional petrophysical evaluation and interpretation model has encountered great challenges in the face of new evaluation objects.The solution research of logging big data distributed storage,processing and learning functions integrated in logging big data private cloud has not been carried out yet.To establish a distributed logging big-data private cloud platform centered on a unifi ed learning model,which achieves the distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via the unifi ed logging learning model integrating physical simulation and data models in a large-scale functional space,thus resolving the geo-engineering evaluation problem of geothermal fi elds.Based on the research idea of“logging big data cloud platform-unifi ed logging learning model-large function space-knowledge learning&discovery-application”,the theoretical foundation of unified learning model,cloud platform architecture,data storage and learning algorithm,arithmetic power allocation and platform monitoring,platform stability,data security,etc.have been carried on analysis.The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms.The feasibility of constructing a well logging big data cloud platform based on a unifi ed learning model of physics and data is analyzed in terms of the structure,ecology,management and security of the cloud platform.The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery method,data software and results sharing,accuracy,speed and complexity.
基金supported in part by the Gusu Innovation and Entrepreneurship Leading Talents in Suzhou City,grant numbers ZXL2021425 and ZXL2022476Doctor of Innovation and Entrepreneurship Program in Jiangsu Province,grant number JSSCBS20211440+6 种基金Jiangsu Province Key R&D Program,grant number BE2019682Natural Science Foundation of Jiangsu Province,grant number BK20200214National Key R&D Program of China,grant number 2017YFB0403701National Natural Science Foundation of China,grant numbers 61605210,61675226,and 62075235Youth Innovation Promotion Association of Chinese Academy of Sciences,grant number 2019320Frontier Science Research Project of the Chinese Academy of Sciences,grant number QYZDB-SSW-JSC03Strategic Priority Research Program of the Chinese Academy of Sciences,grant number XDB02060000.
文摘The prediction of fundus fluorescein angiography(FFA)images from fundus structural images is a cutting-edge research topic in ophthalmological image processing.Prediction comprises estimating FFA from fundus camera imaging,single-phase FFA from scanning laser ophthalmoscopy(SLO),and three-phase FFA also from SLO.Although many deep learning models are available,a single model can only perform one or two of these prediction tasks.To accomplish three prediction tasks using a unified method,we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network.The three prediction tasks are processed as follows:data preparation,network training under FFA supervision,and FFA image prediction from fundus structure images on a test set.By comparing the FFA images predicted by our model,pix2pix,and CycleGAN,we demonstrate the remarkable progress achieved by our proposal.The high performance of our model is validated in terms of the peak signal-to-noise ratio,structural similarity index,and mean squared error.
文摘The Indian Himalayan region is frequently experiencing climate change-induced landslides.Thus,landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard.This paper makes an attempt to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region.It examined the effectiveness of random forest(RF),multilayer perceptron(MLP),sequential minimal optimization regression(SMOreg)and bagging ensemble(B-RF,BSMOreg,B-MLP)models.A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training(70%)and testing(30%)datasets.The site-specific influencing factors were selected by employing a multicollinearity test.The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method.The effectiveness of machine learning models was verified through performance assessors.The landslide susceptibility maps were validated by the area under the receiver operating characteristic curves(ROC-AUC),accuracy,precision,recall and F1-score.The key performance metrics and map validation demonstrated that the BRF model(correlation coefficient:0.988,mean absolute error:0.010,root mean square error:0.058,relative absolute error:2.964,ROC-AUC:0.947,accuracy:0.778,precision:0.819,recall:0.917 and F-1 score:0.865)outperformed the single classifiers and other bagging ensemble models for landslide susceptibility.The results show that the largest area was found under the very high susceptibility zone(33.87%),followed by the low(27.30%),high(20.68%)and moderate(18.16%)susceptibility zones.The factors,namely average annual rainfall,slope,lithology,soil texture and earthquake magnitude have been identified as the influencing factors for very high landslide susceptibility.Soil texture,lineament density and elevation have been attributed to high and moderate susceptibility.Thus,the study calls for devising suitable landslide mitigation measures in the study area.Structural measures,an immediate response system,community participation and coordination among stakeholders may help lessen the detrimental impact of landslides.The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
基金This study has been reviewed and approved by the Clinical Research Ethics Committee of Wenzhou Central Hospital and the First Hospital Affiliated to Wenzhou Medical University,No.KY2024-R016.
文摘BACKGROUND Colorectal cancer significantly impacts global health,with unplanned reoperations post-surgery being key determinants of patient outcomes.Existing predictive models for these reoperations lack precision in integrating complex clinical data.AIM To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients.METHODS Data of patients treated for colorectal cancer(n=2044)at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected.Patients were divided into an experimental group(n=60)and a control group(n=1984)according to unplanned reoperation occurrence.Patients were also divided into a training group and a validation group(7:3 ratio).We used three different machine learning methods to screen characteristic variables.A nomogram was created based on multifactor logistic regression,and the model performance was assessed using receiver operating characteristic curve,calibration curve,Hosmer-Lemeshow test,and decision curve analysis.The risk scores of the two groups were calculated and compared to validate the model.RESULTS More patients in the experimental group were≥60 years old,male,and had a history of hypertension,laparotomy,and hypoproteinemia,compared to the control group.Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation(P<0.05):Prognostic Nutritional Index value,history of laparotomy,hypertension,or stroke,hypoproteinemia,age,tumor-node-metastasis staging,surgical time,gender,and American Society of Anesthesiologists classification.Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility.CONCLUSION This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer,which can improve treatment decisions and prognosis.
文摘BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT,marking a significant evolution in the field of prognostication.
文摘Objective:To analyze the effect of using a problem-based(PBL)independent learning model in teaching cerebral ischemic stroke(CIS)first aid in emergency medicine.Methods:90 interns in the emergency department of our hospital from May 2022 to May 2023 were selected for the study.They were divided into Group A(45,conventional teaching method)and Group B(45 cases,PBL independent learning model)by randomized numerical table method to compare the effects of the two groups.Results:The teaching effect indicators and student satisfaction scores in Group B were higher than those in Group A(P<0.05).Conclusion:The use of the PBL independent learning model in the teaching of CIS first aid can significantly improve the teaching effect and student satisfaction.
文摘AIM:To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings.METHODS:A total of 188 patients from outpatient clinic at He Eye Specialist Shenyang Hospital from Spetember to December 2022 were included,and 13470 infrared pupil images were collected for the study.All infrared images for pupil segmentation were labeled using the Labelme software.The computation of pupil diameter is divided into four steps:image pre-processing,pupil identification and localization,pupil segmentation,and diameter calculation.Two major models are used in the computation process:the modified YoloV3 and Deeplabv 3+models,which must be trained beforehand.RESULTS:The test dataset included 1348 infrared pupil images.On the test dataset,the modified YoloV3 model had a detection rate of 99.98% and an average precision(AP)of 0.80 for pupils.The DeeplabV3+model achieved a background intersection over union(IOU)of 99.23%,a pupil IOU of 93.81%,and a mean IOU of 96.52%.The pupil diameters in the test dataset ranged from 20 to 56 pixels,with a mean of 36.06±6.85 pixels.The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels,with a mean absolute error(MAE)of 1.06±0.96 pixels.CONCLUSION:This study successfully demonstrates a robust infrared image-based pupil diameter measurement algorithm,proven to be highly accurate and reliable for clinical application.
基金supported by the National Science Foundation of China(Grant Nos.52068049 and 51908266)the Science Fund for Distinguished Young Scholars of Gansu Province(No.21JR7RA267)Hongliu Outstanding Young Talents Program of Lanzhou University of Technology.
文摘The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage,impacting the aero-dynamic performance of the blades.To address the challenge of detecting and quantifying surface defects on wind turbine blades,a blade surface defect detection and quantification method based on an improved Deeplabv3+deep learning model is proposed.Firstly,an improved method for wind turbine blade surface defect detection,utilizing Mobilenetv2 as the backbone feature extraction network,is proposed based on an original Deeplabv3+deep learning model to address the issue of limited robustness.Secondly,through integrating the concept of pre-trained weights from transfer learning and implementing a freeze training strategy,significant improvements have been made to enhance both the training speed and model training accuracy of this deep learning model.Finally,based on segmented blade surface defect images,a method for quantifying blade defects is proposed.This method combines image stitching algorithms to achieve overall quantification and risk assessment of the entire blade.Test results show that the improved Deeplabv3+deep learning model reduces training time by approximately 43.03%compared to the original model,while achieving mAP and MIoU values of 96.87%and 96.93%,respectively.Moreover,it demonstrates robustness in detecting different surface defects on blades across different back-grounds.The application of a blade surface defect quantification method enables the precise quantification of dif-ferent defects and facilitates the assessment of risk levels associated with defect measurements across the entire blade.This method enables non-contact,long-distance,high-precision detection and quantification of surface defects on the blades,providing a reference for assessing surface defects on wind turbine blades.
基金funded by the Natural Science Foundation of China(Grant Nos.41807285,41972280 and 52179103).
文摘To perform landslide susceptibility prediction(LSP),it is important to select appropriate mapping unit and landslide-related conditioning factors.The efficient and automatic multi-scale segmentation(MSS)method proposed by the authors promotes the application of slope units.However,LSP modeling based on these slope units has not been performed.Moreover,the heterogeneity of conditioning factors in slope units is neglected,leading to incomplete input variables of LSP modeling.In this study,the slope units extracted by the MSS method are used to construct LSP modeling,and the heterogeneity of conditioning factors is represented by the internal variations of conditioning factors within slope unit using the descriptive statistics features of mean,standard deviation and range.Thus,slope units-based machine learning models considering internal variations of conditioning factors(variant slope-machine learning)are proposed.The Chongyi County is selected as the case study and is divided into 53,055 slope units.Fifteen original slope unit-based conditioning factors are expanded to 38 slope unit-based conditioning factors through considering their internal variations.Random forest(RF)and multi-layer perceptron(MLP)machine learning models are used to construct variant Slope-RF and Slope-MLP models.Meanwhile,the Slope-RF and Slope-MLP models without considering the internal variations of conditioning factors,and conventional grid units-based machine learning(Grid-RF and MLP)models are built for comparisons through the LSP performance assessments.Results show that the variant Slopemachine learning models have higher LSP performances than Slope-machine learning models;LSP results of variant Slope-machine learning models have stronger directivity and practical application than Grid-machine learning models.It is concluded that slope units extracted by MSS method can be appropriate for LSP modeling,and the heterogeneity of conditioning factors within slope units can more comprehensively reflect the relationships between conditioning factors and landslides.The research results have important reference significance for land use and landslide prevention.
文摘BACKGROUND Bleeding is one of the major complications after endoscopic submucosal dissection(ESD)in early gastric cancer(EGC)patients.There are limited studies on estimating the bleeding risk after ESD using an artificial intelligence system.AIM To derivate and verify the performance of the deep learning model and the clinical model for predicting bleeding risk after ESD in EGC patients.METHODS Patients with EGC who underwent ESD between January 2010 and June 2020 at the Samsung Medical Center were enrolled,and post-ESD bleeding(PEB)was investigated retrospectively.We split the entire cohort into a development set(80%)and a validation set(20%).The deep learning and clinical model were built on the development set and tested in the validation set.The performance of the deep learning model and the clinical model were compared using the area under the curve and the stratification of bleeding risk after ESD.RESULTS A total of 5629 patients were included,and PEB occurred in 325 patients.The area under the curve for predicting PEB was 0.71(95%confidence interval:0.63-0.78)in the deep learning model and 0.70(95%confidence interval:0.62-0.77)in the clinical model,without significant difference(P=0.730).The patients expected to the low-(<5%),intermediate-(≥5%,<9%),and high-risk(≥9%)categories were observed with actual bleeding rate of 2.2%,3.9%,and 11.6%,respectively,in the deep learning model;4.0%,8.8%,and 18.2%,respectively,in the clinical model.CONCLUSION A deep learning model can predict and stratify the bleeding risk after ESD in patients with EGC.
基金supported by grants from the National Natural Science Foundation of China(31771243)the Fok Ying Tong Education Foundation(141113)to Aiguo Chen.
文摘In recent years evidence has emerged suggesting that Mini-basketball training program(MBTP)can be an effec-tive intervention method to improve social communication(SC)impairments and restricted and repetitive beha-viors(RRBs)in preschool children suffering from autism spectrum disorder(ASD).However,there is a considerable degree if interindividual variability concerning these social outcomes and thus not all preschool chil-dren with ASD profit from a MBTP intervention to the same extent.In order to make more accurate predictions which preschool children with ASD can benefit from an MBTP intervention or which preschool children with ASD need additional interventions to achieve behavioral improvements,further research is required.This study aimed to investigate which individual factors of preschool children with ASD can predict MBTP intervention out-comes concerning SC impairments and RRBs.Then,test the performance of machine learning models in predict-ing intervention outcomes based on these factors.Participants were 26 preschool children with ASD who enrolled in a quasi-experiment and received MBTP intervention.Baseline demographic variables(e.g.,age,body,mass index[BMI]),indicators of physicalfitness(e.g.,handgrip strength,balance performance),performance in execu-tive function,severity of ASD symptoms,level of SC impairments,and severity of RRBs were obtained to predict treatment outcomes after MBTP intervention.Machine learning models were established based on support vector machine algorithm were implemented.For comparison,we also employed multiple linear regression models in statistics.Ourfindings suggest that in preschool children with ASD symptomatic severity(r=0.712,p<0.001)and baseline SC impairments(r=0.713,p<0.001)are predictors for intervention outcomes of SC impair-ments.Furthermore,BMI(r=-0.430,p=0.028),symptomatic severity(r=0.656,p<0.001),baseline SC impair-ments(r=0.504,p=0.009)and baseline RRBs(r=0.647,p<0.001)can predict intervention outcomes of RRBs.Statistical models predicted 59.6%of variance in post-treatment SC impairments(MSE=0.455,RMSE=0.675,R2=0.596)and 58.9%of variance in post-treatment RRBs(MSE=0.464,RMSE=0.681,R2=0.589).Machine learning models predicted 83%of variance in post-treatment SC impairments(MSE=0.188,RMSE=0.434,R2=0.83)and 85.9%of variance in post-treatment RRBs(MSE=0.051,RMSE=0.226,R2=0.859),which were better than statistical models.Ourfindings suggest that baseline characteristics such as symptomatic severity of 144 IJMHP,2022,vol.24,no.2 ASD symptoms and SC impairments are important predictors determining MBTP intervention-induced improvements concerning SC impairments and RBBs.Furthermore,the current study revealed that machine learning models can successfully be applied to predict the MBTP intervention-related outcomes in preschool chil-dren with ASD,and performed better than statistical models.Ourfindings can help to inform which preschool children with ASD are most likely to benefit from an MBTP intervention,and they might provide a reference for the development of personalized intervention programs for preschool children with ASD.
基金Funding is provided by Taif University Researchers Supporting Project Number(TURSP-2020/10),Taif University,Taif,Saudi Arabia.
文摘Stock market trends forecast is one of the most current topics and a significant research challenge due to its dynamic and unstable nature.The stock data is usually non-stationary,and attributes are non-correlative to each other.Several traditional Stock Technical Indicators(STIs)may incorrectly predict the stockmarket trends.To study the stock market characteristics using STIs and make efficient trading decisions,a robust model is built.This paper aims to build up an Evolutionary Deep Learning Model(EDLM)to identify stock trends’prices by using STIs.The proposed model has implemented the Deep Learning(DL)model to establish the concept of Correlation-Tensor.The analysis of the dataset of three most popular banking organizations obtained from the live stock market based on the National Stock exchange(NSE)-India,a Long Short Term Memory(LSTM)is used.The datasets encompassed the trading days from the 17^(th) of Nov 2008 to the 15^(th) of Nov 2018.This work also conducted exhaustive experiments to study the correlation of various STIs with stock price trends.The model built with an EDLM has shown significant improvements over two benchmark ML models and a deep learning one.The proposed model aids investors in making profitable investment decisions as it presents trend-based forecasting and has achieved a prediction accuracy of 63.59%,56.25%,and 57.95%on the datasets of HDFC,Yes Bank,and SBI,respectively.Results indicate that the proposed EDLA with a combination of STIs can often provide improved results than the other state-of-the-art algorithms.
文摘Every day,websites and personal archives create more and more photos.The size of these archives is immeasurable.The comfort of use of these huge digital image gatherings donates to their admiration.However,not all of these folders deliver relevant indexing information.From the outcomes,it is dif-ficult to discover data that the user can be absorbed in.Therefore,in order to determine the significance of the data,it is important to identify the contents in an informative manner.Image annotation can be one of the greatest problematic domains in multimedia research and computer vision.Hence,in this paper,Adap-tive Convolutional Deep Learning Model(ACDLM)is developed for automatic image annotation.Initially,the databases are collected from the open-source system which consists of some labelled images(for training phase)and some unlabeled images{Corel 5 K,MSRC v2}.After that,the images are sent to the pre-processing step such as colour space quantization and texture color class map.The pre-processed images are sent to the segmentation approach for efficient labelling technique using J-image segmentation(JSEG).Thefinal step is an auto-matic annotation using ACDLM which is a combination of Convolutional Neural Network(CNN)and Honey Badger Algorithm(HBA).Based on the proposed classifier,the unlabeled images are labelled.The proposed methodology is imple-mented in MATLAB and performance is evaluated by performance metrics such as accuracy,precision,recall and F1_Measure.With the assistance of the pro-posed methodology,the unlabeled images are labelled.
基金supported by National Natural Science Foundation of China(NSFC)(Nos.61806087,61902158).
文摘This study employs nine distinct deep learning models to categorize 12,444 blood cell images and automatically extract from them relevant information with an accuracy that is beyond that achievable with traditional techniques.The work is intended to improve current methods for the assessment of human health through measurement of the distribution of four types of blood cells,namely,eosinophils,neutrophils,monocytes,and lymphocytes,known for their relationship with human body damage,inflammatory regions,and organ illnesses,in particular,and with the health of the immune system and other hazards,such as cardiovascular disease or infections,more in general.The results of the experiments show that the deep learning models can automatically extract features from the blood cell images and properly classify them with an accuracy of 98%,97%,and 89%,respectively,with regard to the training,verification,and testing of the corresponding datasets.
文摘Forecasting the movement of stock market is a long-time attractive topic. This paper implements different statistical learning models to predict the movement of S&P 500 index. The S&P 500 index is influenced by other important financial indexes across the world such as commodity price and financial technical indicators. This paper systematically investigated four supervised learning models, including Logistic Regression, Gaussian Discriminant Analysis (GDA), Naive Bayes and Support Vector Machine (SVM) in the forecast of S&P 500 index. After several experiments of optimization in features and models, especially the SVM kernel selection and feature selection for different models, this paper concludes that a SVM model with a Radial Basis Function (RBF) kernel can achieve an accuracy rate of 62.51% for the future market trend of the S&P 500 index.