Machine learning, especially deep learning, has been highly successful in data-intensive applications; however, the performance of these models drops significantly when the amount of training data does not meet the requirement. This leads to the so-called few-shot learning (FSL) problem, which requires the model to generalize rapidly to new tasks containing only a few labeled samples. In this paper, we propose a new deep model, called deep convolutional meta-learning networks, to address the poor generalization under limited data for bearing fault diagnosis. The essence of our approach is to learn a base model from multiple learning tasks using a support dataset and fine-tune the learned parameters using few-shot tasks before the model adapts to a new learning task based on limited training data. The proposed method was compared to several FSL methods, including methods with and without pre-training of the embedding mapping, and methods that fine-tune the classifier or the whole model using the few-shot data from the target domain. The comparisons are carried out on 1-shot and 10-shot tasks using the Case Western Reserve University bearing dataset and a cylindrical roller bearing dataset. The experimental results show that our method performs well on bearing fault diagnosis across various few-shot conditions. In addition, we found that the pre-training process does not always improve the prediction accuracy.
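The learn-then-adapt step this abstract describes can be sketched in a few lines. This is an illustrative toy only, not the authors' convolutional meta-learning network: a linear classifier whose base parameters are adapted by plain gradient steps on a tiny labeled support set; all names and hyperparameters here are our own assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune(w, X, y, lr=0.5, steps=50):
    """Adapt base parameters w with a few gradient steps on the
    small support set (X, y) -- the core of few-shot adaptation."""
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

# Toy 1-shot-per-class task: two labeled samples, base params start at zero.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0.0, 1.0])
w_new = finetune(np.zeros(2), X, y)
```

After a handful of steps on the two support samples, the adapted classifier separates the toy classes, which is the behavior the 1-shot/10-shot experiments test at a much larger scale.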
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high correlation coefficient (r = 0.87) between its predictions and the RO observations, than the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling in general, and into predicting Es layer occurrences and characteristics in particular.
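The r values quoted above are Pearson correlation coefficients between model predictions and observations. A minimal sketch of that score, on made-up numbers rather than the study's Es data:

```python
import numpy as np

def pearson_r(pred, obs):
    """Pearson correlation coefficient between predictions and
    observations, the score used to compare the two Es models."""
    p = np.asarray(pred, float).copy()
    o = np.asarray(obs, float).copy()
    p -= p.mean()
    o -= o.mean()
    return float((p @ o) / np.sqrt((p @ p) * (o @ o)))

obs  = [1.0, 2.0, 3.0, 4.0]
good = [1.1, 2.0, 2.9, 4.2]   # tracks the observations closely -> r near 1
```

A model whose predictions track the observations closely scores near 1, as the deep learning model does (0.87); a weaker model, like the empirical one here, scores lower.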
It is fundamental and useful to investigate how deep learning forecasting models (DLMs) perform compared to operational oceanography forecast systems (OFSs). However, few studies have intercompared their performances using an identical reference. In this study, three physically reasonable DLMs are implemented for the forecasting of the sea surface temperature (SST), sea level anomaly (SLA), and sea surface velocity in the South China Sea. The DLMs are validated against both the testing dataset and the "OceanPredict" Class 4 dataset. Results show that the DLMs' RMSEs against the latter increase by 44%, 245%, 302%, and 109% for SST, SLA, current speed, and direction, respectively, compared to those against the former. Therefore, different references have significant influences on the validation, and it is necessary to use an identical and independent reference to intercompare the DLMs and OFSs. Against the Class 4 dataset, the DLMs present significantly better performance for SLA than the OFSs, and slightly better performances for the other variables. The error patterns of the DLMs and OFSs show a high degree of similarity, which is reasonable from the viewpoint of predictability and facilitates further applications of the DLMs. For extreme events, the DLMs and OFSs both present large but similar forecast errors for SLA and current speed, while the DLMs are likely to give larger errors for SST and current direction. This study provides an evaluation of the forecast skills of commonly used DLMs and an example of how to intercompare different DLMs objectively.
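The study's central point, that the same forecast scores differently against different references, is easy to demonstrate with the RMSE metric itself. The numbers below are invented toy values, not the paper's SST data:

```python
import numpy as np

def rmse(forecast, reference):
    """Root-mean-square error of a forecast against a reference series."""
    f = np.asarray(forecast, float)
    r = np.asarray(reference, float)
    return float(np.sqrt(np.mean((f - r) ** 2)))

forecast    = np.array([20.1, 20.4, 21.0])   # toy SST forecasts (degC)
testing_ref = np.array([20.0, 20.5, 21.1])   # reference A: testing dataset
class4_ref  = np.array([19.8, 20.9, 20.6])   # reference B: independent dataset
# The identical forecast yields a different RMSE against each reference,
# which is why an identical, independent reference is needed for intercomparison.
```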
Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted-feature-based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, which outperforms the existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.
Protein subcellular localization prediction is important for studying the function of proteins. Recently, as significant progress has been witnessed in the field of microscopic imaging, automatically determining the subcellular localization of proteins from bio-images is becoming a new research hotspot. One of the central themes in this field is to determine what features are suitable for describing the protein images. Existing feature extraction methods are usually hand-crafted, by which only one layer of features is extracted, which may not be sufficient to represent the complex protein images. To this end, we propose a deep-model-based descriptor (DMD) to extract high-level features from protein images. Specifically, in order to make the extracted features more generic, we first trained a convolutional neural network (i.e., AlexNet) using a natural image set with millions of labels, and then used a partial parameter transfer strategy to fine-tune the parameters from natural images to protein images. After that, we applied the Lasso model to select the most distinguishing features from the last fully connected layer of the CNN (convolutional neural network), and used these selected features for the final classification. Experimental results on a protein image dataset validate the efficacy of our method.
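The Lasso selection step mentioned above keeps only features with nonzero coefficients. A minimal sketch of that idea, using a small proximal-gradient (ISTA) Lasso on synthetic features rather than real CNN activations; dimensions and the regularization strength are our assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam=0.1, lr=0.01, steps=2000):
    """Proximal-gradient (ISTA) Lasso: the sparsity of w marks which
    features are 'most distinguishing', as in the last-layer selection."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # stand-in for 5 deep features on 100 images
y = 3.0 * X[:, 0]               # only feature 0 is informative
w = lasso_ista(X, y)            # weights for the 4 noise features shrink to ~0
```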
Leveraging the power of artificial intelligence to facilitate automatic analysis and monitoring of heart sounds has attracted tremendous efforts in the past decade. Nevertheless, the lack of a standard open-access database made it difficult to maintain sustainable and comparable research before the first release of the PhysioNet CinC Challenge Dataset. However, inconsistent standards for data collection, annotation, and partition are still restraining a fair and efficient comparison between different works. To this end, we introduced and benchmarked a first version of the Heart Sounds Shenzhen (HSS) corpus. Motivated and inspired by previous works based on HSS, we redefined the tasks and conducted a comprehensive investigation of shallow and deep models in this study. First, we segmented the heart sound recordings into shorter recordings (10 s), which makes the setting more similar to the human auscultation case. Second, we redefined the classification tasks. Besides using the three class categories (normal, moderate, and mild/severe) adopted in HSS, we added a binary classification task in this study, i.e., normal and abnormal. In this work, we provide detailed benchmarks based on both classic machine learning and state-of-the-art deep learning technologies, which are reproducible using open-source toolkits. Last but not least, we analyze the feature contributions of the best-performing benchmark to make the results more convincing and interpretable.
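The first preprocessing step, cutting each recording into fixed 10 s segments, can be sketched directly. The paper does not publish its segmentation code, so this is a plain illustrative implementation; the sample rate and the choice to drop a short trailing remainder are our assumptions:

```python
def segment(signal, sample_rate, window_s=10):
    """Split a heart-sound recording into fixed 10 s segments,
    dropping a trailing remainder shorter than one window."""
    win = window_s * sample_rate
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]

# A 35 s recording at 1 kHz yields three full 10 s segments.
sig = [0.0] * 35_000
chunks = segment(sig, sample_rate=1000)
```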
This review presents a comprehensive technical analysis of deep learning (DL) methodologies in biomedical signal processing, focusing on architectural innovations, experimental validation, and evaluation frameworks. We systematically evaluate key deep learning architectures including convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformer-based models, and hybrid systems across critical tasks such as arrhythmia classification, seizure detection, and anomaly segmentation. The study dissects preprocessing techniques (e.g., wavelet denoising, spectral normalization) and feature extraction strategies (time-frequency analysis, attention mechanisms), demonstrating their impact on model accuracy, noise robustness, and computational efficiency. Experimental results underscore the superiority of deep learning over traditional methods, particularly in automated feature extraction, real-time processing, and cross-modal generalization, achieving up to a 15% increase in classification accuracy and enhanced noise resilience across electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG) signals. Performance is rigorously benchmarked using precision, recall, F1-scores, area under the receiver operating characteristic curve (AUC-ROC), and computational complexity metrics, providing a unified framework for comparing model efficacy. The survey addresses persistent challenges: synthetic data generation mitigates limited training samples, interpretability tools (e.g., Gradient-weighted Class Activation Mapping (Grad-CAM), Shapley values) resolve model opacity, and federated learning ensures privacy-compliant deployments. Distinguished from prior reviews, this work offers a structured taxonomy of deep learning architectures, integrates emerging paradigms like transformers and domain-specific attention mechanisms, and evaluates preprocessing pipelines for spectral-temporal trade-offs. It advances the field by bridging technical advancements with clinical needs, such as scalability in real-world settings (e.g., wearable devices) and regulatory alignment with the Health Insurance Portability and Accountability Act (HIPAA) and General Data Protection Regulation (GDPR). By synthesizing technical rigor, ethical considerations, and actionable guidelines for model selection, this survey establishes a holistic reference for developing robust, interpretable biomedical artificial intelligence (AI) systems, accelerating their translation into personalized and equitable healthcare solutions.
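The precision, recall, and F1 scores used for benchmarking throughout the review reduce to a few lines given confusion-matrix counts. A minimal sketch with made-up counts:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion counts (true positives,
    false positives, false negatives), the core benchmarking metrics."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: a classifier with 80 true positives, 20 false positives,
# and 20 false negatives scores 0.8 on all three metrics.
p, r, f = prf1(tp=80, fp=20, fn=20)
```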
This paper proposes a new deep structure model, called Densely Connected Cascade Forest-Weighted K Nearest Neighbors (DCCF-WKNNs), to implement corrosion data modelling and corrosion knowledge mining. First, we collect 409 outdoor atmospheric corrosion samples of low-alloy steels as experimental datasets. Then, we describe the proposed method's process, including random forests-weighted K nearest neighbors (RF-WKNNs) and DCCF-WKNNs. Finally, we use the collected datasets to verify the performance of the proposed method. The results show that, compared with commonly used and advanced machine-learning algorithms such as artificial neural networks (ANN), support vector regression (SVR), random forests (RF), and cascade forests (cForest), the proposed method obtains the best prediction results. In addition, the method can predict the corrosion rates under variations of any single environmental variable, such as pH, temperature, relative humidity, SO2, rainfall, or Cl-. In this way, the threshold of each variable, beyond which the corrosion rate may change substantially, can be further obtained.
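The weighted-KNN building block named above can be sketched as distance-weighted nearest-neighbor regression. This is a generic textbook WKNN on toy numbers, not the paper's DCCF cascade; the 1/distance weighting is a common choice we assume here:

```python
import numpy as np

def wknn_predict(X_train, y_train, x, k=3, eps=1e-9):
    """Distance-weighted k-nearest-neighbour regression: closer
    samples get larger weights (weight = 1 / distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)       # eps guards against zero distance
    return float(np.sum(w * y_train[idx]) / np.sum(w))

# Toy 1-D data: corrosion rate grows with the environmental variable.
X_train = np.array([[0.0], [1.0], [10.0]])
y_train = np.array([0.0, 1.0, 10.0])
pred = wknn_predict(X_train, y_train, np.array([0.1]), k=2)
```

A query near a training point is dominated by that point's label, which is what makes WKNN useful for tracing the response to a single environmental variable.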
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
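The peak signal-to-noise ratio used for validation is derived directly from the mean squared error. A minimal sketch on a toy image pair (the 8-bit peak value of 255 is an assumption, not stated in the abstract):

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between a predicted image and its
    ground truth: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((np.asarray(pred, float) - np.asarray(target, float)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

a = np.full((4, 4), 100.0)
b = a + 10.0   # a uniform error of 10 grey levels gives MSE = 100
# psnr(b, a) is about 28.13 dB; smaller errors push PSNR higher.
```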
BACKGROUND Bleeding is one of the major complications after endoscopic submucosal dissection (ESD) in early gastric cancer (EGC) patients. There are limited studies on estimating the bleeding risk after ESD using an artificial intelligence system. AIM To derive and verify the performance of a deep learning model and a clinical model for predicting bleeding risk after ESD in EGC patients. METHODS Patients with EGC who underwent ESD between January 2010 and June 2020 at the Samsung Medical Center were enrolled, and post-ESD bleeding (PEB) was investigated retrospectively. We split the entire cohort into a development set (80%) and a validation set (20%). The deep learning and clinical models were built on the development set and tested on the validation set. The performance of the deep learning model and the clinical model was compared using the area under the curve and the stratification of bleeding risk after ESD. RESULTS A total of 5629 patients were included, and PEB occurred in 325 patients. The area under the curve for predicting PEB was 0.71 (95% confidence interval: 0.63-0.78) for the deep learning model and 0.70 (95% confidence interval: 0.62-0.77) for the clinical model, without significant difference (P = 0.730). Patients assigned to the low- (<5%), intermediate- (≥5%, <9%), and high-risk (≥9%) categories had actual bleeding rates of 2.2%, 3.9%, and 11.6%, respectively, in the deep learning model, and 4.0%, 8.8%, and 18.2%, respectively, in the clinical model. CONCLUSION A deep learning model can predict and stratify the bleeding risk after ESD in patients with EGC.
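The two evaluation ideas in this abstract, the area under the ROC curve and the three risk bands, can both be sketched in a few lines. This is a generic illustration on toy scores, not the study's model:

```python
def auc(scores_pos, scores_neg):
    """Probability that a bleeding case outranks a non-bleeding one;
    this rank statistic equals the area under the ROC curve."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def risk_band(p):
    """The paper's three bands: low < 5%, intermediate 5-9%, high >= 9%."""
    return "low" if p < 0.05 else "intermediate" if p < 0.09 else "high"

# A model that ranks every bleeding case above every non-bleeding case
# has AUC 1.0; the study's models reached about 0.70-0.71.
perfect = auc([0.9, 0.8], [0.1, 0.2])
```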
Despite the rapid development of mobile and embedded hardware, directly executing computation-expensive and storage-intensive deep learning algorithms on these devices' local side remains constrained for sensory data analysis. In this paper, we first summarize the layer compression techniques for state-of-the-art deep learning models in three categories: weight factorization and pruning, convolution decomposition, and special layer architecture design. For each category, we quantify the storage and computation tunable by layer compression techniques and discuss their practical challenges and possible improvements. Then, we implement Android projects using TensorFlow Mobile to test these 10 compression methods and compare their practical performance in terms of accuracy, parameter size, intermediate feature size, computation, processing latency, and energy consumption. To further discuss their advantages and bottlenecks, we test their performance over four standard recognition tasks on six resource-constrained Android smartphones. Finally, we survey two types of run-time neural network (NN) compression techniques that are orthogonal to the layer compression techniques: run-time resource management and cost optimization with special NN architectures.
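The simplest member of the first category, magnitude-based weight pruning, can be sketched directly. This is a generic illustration of the technique on a toy weight matrix, not any specific method benchmarked in the paper; the 50% sparsity target is our assumption:

```python
import numpy as np

def prune_by_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights -- the basic
    weight-pruning compression; ties at the threshold are also pruned."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w).ravel())[k - 1] if k > 0 else -np.inf
    mask = np.abs(w) > thresh
    return w * mask, mask

w = np.array([[0.1, -2.0], [0.05, 1.5]])
pruned, mask = prune_by_magnitude(w, sparsity=0.5)
# The two small weights (0.1, 0.05) are zeroed; -2.0 and 1.5 survive.
```

The surviving mask is what shrinks parameter size and, with sparse kernels, computation, which is exactly the storage/computation trade-off the survey quantifies per layer.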
This study employs nine distinct deep learning models to categorize 12,444 blood cell images and automatically extract from them relevant information with an accuracy beyond that achievable with traditional techniques. The work is intended to improve current methods for the assessment of human health through measurement of the distribution of four types of blood cells, namely, eosinophils, neutrophils, monocytes, and lymphocytes, known for their relationship with human body damage, inflammatory regions, and organ illnesses in particular, and with the health of the immune system and other hazards, such as cardiovascular disease or infections, more generally. The results of the experiments show that the deep learning models can automatically extract features from the blood cell images and properly classify them with accuracies of 98%, 97%, and 89% on the training, validation, and test datasets, respectively.
Stock market trend forecasting is one of the most current topics and a significant research challenge due to the market's dynamic and unstable nature. Stock data are usually non-stationary, and attributes are non-correlative to each other. Several traditional Stock Technical Indicators (STIs) may incorrectly predict stock market trends. To study stock market characteristics using STIs and make efficient trading decisions, a robust model is built. This paper aims to build an Evolutionary Deep Learning Model (EDLM) to identify stock price trends using STIs. The proposed model implements a Deep Learning (DL) model to establish the concept of a Correlation-Tensor. For the analysis of the datasets of the three most popular banking organizations, obtained from the live stock market of the National Stock Exchange (NSE)-India, a Long Short-Term Memory (LSTM) network is used. The datasets encompass the trading days from the 17th of Nov 2008 to the 15th of Nov 2018. This work also conducted exhaustive experiments to study the correlation of various STIs with stock price trends. The model built with the EDLM has shown significant improvements over two benchmark ML models and a deep learning one. The proposed model aids investors in making profitable investment decisions, as it presents trend-based forecasting and has achieved prediction accuracies of 63.59%, 56.25%, and 57.95% on the datasets of HDFC, Yes Bank, and SBI, respectively. Results indicate that the proposed EDLM with a combination of STIs can often provide better results than other state-of-the-art algorithms.
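One of the classic STIs fed into trend models of this kind is the simple moving average. The sketch below is a generic illustration on made-up closing prices, not the paper's indicator set:

```python
def sma(prices, window):
    """Simple moving average, a classic Stock Technical Indicator (STI):
    the mean of each trailing `window` of closing prices."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

closes = [10.0, 11.0, 12.0, 13.0, 14.0]
print(sma(closes, 3))   # → [11.0, 12.0, 13.0]
```

A rising SMA is a common trend signal; models like the EDLM take many such indicators as input features rather than raw prices alone.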
Every day, websites and personal archives create more and more photos. The size of these archives is immeasurable. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these collections provide relevant indexing information. As a result, it is difficult to discover the data that a user may be interested in. Therefore, in order to determine the significance of the data, it is important to identify the contents in an informative manner. Image annotation is one of the most problematic domains in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from open-source systems, consisting of some labelled images (for the training phase) and some unlabeled images {Corel 5K, MSRC v2}. After that, the images are sent to pre-processing steps such as colour space quantization and texture colour class mapping. The pre-processed images are sent to the segmentation approach for an efficient labelling technique using J-image segmentation (JSEG). The final step is automatic annotation using the ACDLM, which is a combination of a Convolutional Neural Network (CNN) and the Honey Badger Algorithm (HBA). Based on the proposed classifier, the unlabeled images are labelled. The proposed methodology is implemented in MATLAB, and performance is evaluated using metrics such as accuracy, precision, recall, and F1-measure.
Extensive transgression of lake water occurred during the Cretaceous Qingshankou Stage and the Nengjiang Stage in the Songliao basin, forming widespread deep-water deposits. Eleven types of microfacies of deep-water deposits have been recognized in the continuous core rocks from the SKII borehole, including mudstone of still water, marlite, dolostone, oil shale, volcanic ashes, turbidite, slump sediment, tempestite, seismite, ostracod limestone, and sparry carbonate. These are divided into two types: microfacies generated by gradually changing environments (Type I) and microfacies generated by geological events (Type II). Type I is composed of some special fine-grained sediments such as marlite, dolostone, and oil shale as well as mudstone, and Type II is composed of sediments related to geological events, such as volcanic ashes, turbidite, slump sediment, tempestite, seismite, and ostracod limestone. The formation of sparry carbonate may be controlled by factors related to both environments and events. Generally, the mudstone sediments of still water can be regarded as background sediments, and the remaining sediments are all event sediments, each with a unique forming model, which may reflect the controlling effects of climate and tectonics.
It is important to investigate the dynamic behaviors of deep rocks near an explosion cavity to reveal the mechanisms of deformation and fracture. Some improvements are made to the Grigorian model, with focus on the dilation effects and the relaxation effects of deep rocks, and high-pressure equations of state of Mie-Grüneisen form are also established. Numerical calculations of free-field parameters for deep underground explosions are carried out based on user subroutines compiled by means of the secondary development functions of the LS-DYNA 970 3D software. The histories of radial stress, radial velocity, and radial displacement of rock particles are obtained, and the calculation results are compared with those of the U.S. Hardhat nuclear test. It is indicated that the dynamic responses of the free field for deep underground explosions are well simulated by the improved Grigorian model, and the calculation results are in good agreement with the data of the U.S. Hardhat nuclear test. The peak values of particle velocities are consistent with those of the test, but the waveform widths and the rise times are obviously greater than those computed without dilation effects. The attenuation rates of particle velocities are greater than the calculation results with the classic plastic model, and they are consistent with the results of the Hardhat nuclear test. The attenuation behaviors and the rise times of stress waves are well reproduced by introducing dilation effects and relaxation effects into the calculation model. Therefore, the defects of the Grigorian model are avoided. It is also indicated that the initial stress has obvious influences on the waveforms of radial stress and the radial displacements of rock particles.
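A common textbook form of the Mie-Grüneisen equation of state with a linear-Hugoniot reference can be sketched as follows. The paper's fitted constants and exact formulation are not given in the abstract, so the functional form and the sample parameters below are illustrative assumptions only:

```python
def mie_gruneisen_pressure(rho, e, rho0, c0, s, gamma0):
    """Mie-Gruneisen EOS with a linear-Hugoniot reference curve:
        mu  = rho / rho0 - 1                       (compression)
        p_H = rho0 * c0^2 * mu * (1 + mu) / (1 - (s - 1) * mu)^2
        p   = p_H * (1 - gamma0 * mu / 2) + gamma0 * rho0 * e
    SI units assumed: rho [kg/m^3], c0 [m/s], e [J/kg], p [Pa]."""
    mu = rho / rho0 - 1.0
    p_h = rho0 * c0**2 * mu * (1.0 + mu) / (1.0 - (s - 1.0) * mu) ** 2
    return p_h * (1.0 - gamma0 * mu / 2.0) + gamma0 * rho0 * e

# Illustrative granite-like constants (assumed, not from the paper):
rho0, c0, s, gamma0 = 2650.0, 4500.0, 1.5, 1.2
p_ref = mie_gruneisen_pressure(rho0, 0.0, rho0, c0, s, gamma0)          # 0 at reference
p_cmp = mie_gruneisen_pressure(1.05 * rho0, 0.0, rho0, c0, s, gamma0)   # positive under compression
```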
Brain encoding and decoding via functional magnetic resonance imaging (fMRI) are two important aspects of visual perception neuroscience. Although previous researchers have made significant advances in brain encoding and decoding models, existing methods still require improvement using advanced machine learning techniques. For example, traditional methods usually build the encoding and decoding models separately and are prone to overfitting on a small dataset. In fact, effectively unifying the encoding and decoding procedures may allow for more accurate predictions. In this paper, we first review the existing encoding and decoding methods and discuss the potential advantages of a "bidirectional" modeling strategy. Next, we show that there are correspondences between deep neural networks and human visual streams in terms of architecture and computational rules. Furthermore, deep generative models (e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs)) have produced promising results in studies on brain encoding and decoding. Finally, we propose that the dual learning method, which was originally designed for machine translation tasks, could help to improve the performance of encoding and decoding models by leveraging large-scale unpaired data.
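A traditional encoding model of the kind this review contrasts with deep approaches is often a regularized linear map from stimulus features to voxel responses. The sketch below is a generic closed-form ridge regression on synthetic data, our stand-in illustration rather than any model from the review:

```python
import numpy as np

def ridge_fit(X, Y, lam=1e-3):
    """Closed-form ridge regression: W = (X^T X + lam I)^(-1) X^T Y,
    a standard linear 'encoding model' from stimulus features X
    to voxel responses Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))            # synthetic stimulus features
W_true = np.array([[1.0], [-2.0], [0.5]])
Y = X @ W_true                           # noiseless synthetic voxel response
W_hat = ridge_fit(X, Y)                  # recovers W_true up to shrinkage
```

Fitting the reverse map from Y to X would be a matching linear decoding model; building both from the same data is the "bidirectional" idea the review discusses.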
In this paper, the effects of frying time and of egg white (0%, 5% and 10% w/w) and chitosan (0%, 0.5% and 1.5% w/w) addition to the batter formulation on the quality of simulated crispy deep-fried Kurdish cheese nugget crusts were studied using a deep-fried crust model. Moisture content, oil content, color, and hardness of the samples were determined. Crust models were fried at 190°C for 60, 120 and 180 s. Batter formulation and frying time significantly (p < 0.01) affected the moisture, oil content, color, and hardness of the crust models. The batter formulation containing 10% egg white was found to be effective in decreasing the oil content of the crust models. The mean moisture and fat contents of crust models formed with batter containing 10% egg white, fried at 190°C for 180 s, were 6.207 ± 0.447 and 5.649 ± 0.394, respectively. Batters containing 5% egg white and 1.5% chitosan showed the lowest moisture content and the highest oil content among all the formulations. Crust models containing a combination of egg white and chitosan were the darkest. The hardness of samples containing chitosan was the highest, especially for the 1.5% chitosan formulation, whose mean hardness at 60, 120 and 180 s of frying was 21.518 ± 0.481, 36.871 ± 1.758 and 49.563 ± 1.847, respectively.
Funding: This research was funded by the RECLAIM project "Remanufacturing and Refurbishment of Large Industrial Equipment", which received funding from the European Commission Horizon 2020 research and innovation program under Grant Agreement No. 869884. The authors also acknowledge the support of The Efficiency and Performance Engineering Network International Collaboration Fund Award 2022 (TEPEN-ICF 2022) project "Intelligent Fault Diagnosis Method and System with Few-Shot Learning Technique under Small Sample Data Condition".
Funding: Supported by the Project of Stable Support for Youth Team in Basic Research Field, CAS (Grant No. YSBR-018); the National Natural Science Foundation of China (Grant Nos. 42188101, 42130204); the B-type Strategic Priority Program of CAS (Grant No. XDB41000000); the National Natural Science Foundation of China (NSFC) Distinguished Overseas Young Talents Program, Innovation Program for Quantum Science and Technology (2021ZD0300301); the Open Research Project of Large Research Infrastructures of CAS, "Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project"; the National Key Laboratory of Deep Space Exploration (Grant No. NKLDSE2023A002); the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection (Grant No. APKLIUD23KF01); and the China National Space Administration (CNSA) pre-research Project on Civil Aerospace Technologies Nos. D010305 and D010301.
Abstract: Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high coefficient of correlation (r = 0.87) between RO observations and its predictions, than did the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
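The scores quoted above (r = 0.87 vs. r = 0.53) are Pearson correlation coefficients between predictions and observations. A minimal, self-contained implementation of that metric (illustrative only, with toy data, not the study's evaluation code):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear prediction-observation pair gives r = 1.0:
obs = [1.0, 2.0, 3.0, 4.0]
pred = [2.0, 4.0, 6.0, 8.0]
r = pearson_r(obs, pred)
```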
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42375062 and 42275158); the National Key Scientific and Technological Infrastructure project "Earth System Science Numerical Simulator Facility" (EarthLab); and the Natural Science Foundation of Gansu Province (Grant No. 22JR5RF1080).
Abstract: It is fundamental and useful to investigate how deep learning forecasting models (DLMs) perform compared to operational oceanography forecast systems (OFSs). However, few studies have intercompared their performances using an identical reference. In this study, three physically reasonable DLMs are implemented for forecasting the sea surface temperature (SST), sea level anomaly (SLA), and sea surface velocity in the South China Sea. The DLMs are validated against both the testing dataset and the "OceanPredict" Class 4 dataset. Results show that the DLMs' RMSEs against the latter increase by 44%, 245%, 302%, and 109% for SST, SLA, current speed, and current direction, respectively, compared to those against the former. Therefore, different references have significant influences on the validation, and it is necessary to use an identical and independent reference to intercompare the DLMs and OFSs. Against the Class 4 dataset, the DLMs present significantly better performance for SLA than the OFSs, and slightly better performance for the other variables. The error patterns of the DLMs and OFSs show a high degree of similarity, which is reasonable from the viewpoint of predictability and facilitates further applications of the DLMs. For extreme events, the DLMs and OFSs both present large but similar forecast errors for SLA and current speed, while the DLMs are likely to give larger errors for SST and current direction. This study provides an evaluation of the forecast skills of commonly used DLMs and an example of how to objectively intercompare different DLMs.
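The central point above, that the same forecast scores differently against different references, can be shown with the RMSE metric itself. A minimal sketch (toy values, not the study's data):

```python
from math import sqrt

def rmse(forecast, reference):
    """Root-mean-square error of a forecast against a reference dataset."""
    assert len(forecast) == len(reference)
    return sqrt(sum((f - r) ** 2 for f, r in zip(forecast, reference))
                / len(forecast))

# One forecast, two different references -> two different scores,
# which is why an identical reference is needed for intercomparison.
forecast = [20.0, 21.0, 22.0]
ref_a = [20.0, 21.0, 22.0]   # e.g. the model's own testing split
ref_b = [21.0, 22.0, 23.0]   # e.g. an independent Class 4 dataset
```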
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U1564201, 61573171, 61403172, 51305167); the China Postdoctoral Science Foundation (Grant Nos. 2015T80511, 2014M561592); the Jiangsu Provincial Natural Science Foundation of China (Grant No. BK20140555); the Six Talent Peaks Project of Jiangsu Province, China (Grant Nos. 2015-JXQC-012, 2014-DZXX-040); the Jiangsu Postdoctoral Science Foundation, China (Grant No. 1402097C); and the Jiangsu University Scientific Research Foundation for Senior Professionals, China (Grant No. 14JDG028).
Abstract: Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted-feature-based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, which outperforms the existing state-of-the-art algorithms. More importantly, highly discriminative multi-scale features are generated by the deep sparse convolution network, which has broad application prospects in target recognition in the field of intelligent vehicles.
Funding: This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61422204, 61473149 and 61671288), the Jiangsu Natural Science Foundation for Distinguished Young Scholars (BK20130034), and the Science and Technology Commission of Shanghai Municipality (16JC1404300).
Abstract: Protein subcellular localization prediction is important for studying the function of proteins. Recently, as significant progress has been witnessed in the field of microscopic imaging, automatically determining the subcellular localization of proteins from bio-images is becoming a new research hotspot. One of the central themes in this field is to determine what features are suitable for describing the protein images. Existing feature extraction methods are usually hand-crafted, by which only one layer of features is extracted, which may not be sufficient to represent the complex protein images. To this end, we propose a deep model based descriptor (DMD) to extract high-level features from protein images. Specifically, in order to make the extracted features more generic, we first trained a convolutional neural network (i.e., AlexNet) on a natural image set with millions of labels, and then used a partial parameter transfer strategy to fine-tune the parameters from natural images to protein images. After that, we applied the Lasso model to select the most distinguishing features from the last fully connected layer of the CNN (Convolutional Neural Network), and used these selected features for the final classifications. Experimental results on a protein image dataset validate the efficacy of our method.
Funding: Partially supported by the Ministry of Science and Technology of the People's Republic of China with the STI2030-Major Projects (2021ZD0201900); the National Natural Science Foundation of China (Nos. 62227807 and 62272044); the Teli Young Fellow Program from the Beijing Institute of Technology, China; the Natural Science Foundation of Shenzhen University General Hospital (No. SUGH2018QD013), China; the Shenzhen Science and Technology Innovation Commission Project (No. JCYJ20190808120613189), China; and the Grants-in-Aid for Scientific Research (No. 20H00569) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
Abstract: Leveraging the power of artificial intelligence to facilitate automatic analysis and monitoring of heart sounds has attracted tremendous effort in the past decade. Nevertheless, the lack of a standard open-access database made it difficult to maintain sustainable and comparable research before the first release of the PhysioNet CinC Challenge Dataset. However, inconsistent standards for data collection, annotation, and partitioning are still restraining fair and efficient comparisons between different works. To this end, we introduced and benchmarked a first version of the Heart Sounds Shenzhen (HSS) corpus. Motivated and inspired by previous works based on HSS, we redefined the tasks and made a comprehensive investigation of shallow and deep models in this study. First, we segmented the heart sound recordings into shorter recordings (10 s), which makes the task more similar to the human auscultation case. Second, we redefined the classification tasks. Besides using the 3-class categorization (normal, moderate, and mild/severe) adopted in HSS, we added a binary classification task in this study, i.e., normal vs. abnormal. In this work, we provide detailed benchmarks based on both classic machine learning and state-of-the-art deep learning technologies, which are reproducible using open-source toolkits. Last but not least, we analyzed the feature contributions of the best-performing benchmark to make the results more convincing and interpretable.
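The first preprocessing step described above, cutting each recording into fixed 10 s windows, can be sketched as follows. This is an illustrative stand-in, not the benchmark's actual pipeline; the sample rate and data are hypothetical:

```python
def segment_recording(samples, sample_rate, window_s=10):
    """Split a 1-D recording into consecutive non-overlapping windows of
    window_s seconds, dropping any trailing remainder shorter than a window."""
    step = int(sample_rate * window_s)
    return [samples[i:i + step] for i in range(0, len(samples) - step + 1, step)]

# A 25 s recording at a toy rate of 8 samples/s yields two full 10 s segments;
# the trailing 5 s remainder is discarded.
recording = list(range(25 * 8))
segments = segment_recording(recording, sample_rate=8)
```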
Funding: The Natural Sciences and Engineering Research Council of Canada (NSERC) funded this review study.
Abstract: This review presents a comprehensive technical analysis of deep learning (DL) methodologies in biomedical signal processing, focusing on architectural innovations, experimental validation, and evaluation frameworks. We systematically evaluate key deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformer-based models, and hybrid systems, across critical tasks such as arrhythmia classification, seizure detection, and anomaly segmentation. The study dissects preprocessing techniques (e.g., wavelet denoising, spectral normalization) and feature extraction strategies (time-frequency analysis, attention mechanisms), demonstrating their impact on model accuracy, noise robustness, and computational efficiency. Experimental results underscore the superiority of deep learning over traditional methods, particularly in automated feature extraction, real-time processing, and cross-modal generalization, achieving up to a 15% increase in classification accuracy and enhanced noise resilience across electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG) signals. Performance is rigorously benchmarked using precision, recall, F1-scores, area under the receiver operating characteristic curve (AUC-ROC), and computational complexity metrics, providing a unified framework for comparing model efficacy. The survey addresses persistent challenges: synthetic data generation mitigates limited training samples, interpretability tools (e.g., Gradient-weighted Class Activation Mapping (Grad-CAM), Shapley values) resolve model opacity, and federated learning ensures privacy-compliant deployments. Distinguished from prior reviews, this work offers a structured taxonomy of deep learning architectures, integrates emerging paradigms like transformers and domain-specific attention mechanisms, and evaluates preprocessing pipelines for spectral-temporal trade-offs. It advances the field by bridging technical advancements with clinical needs, such as scalability in real-world settings (e.g., wearable devices) and regulatory alignment with the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR). By synthesizing technical rigor, ethical considerations, and actionable guidelines for model selection, this survey establishes a holistic reference for developing robust, interpretable biomedical artificial intelligence (AI) systems, accelerating their translation into personalized and equitable healthcare solutions.
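The precision, recall, and F1 metrics named in the benchmarking discussion above are straightforward to compute from label sequences. A minimal, self-contained sketch (toy labels, not data from the review):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall and F1 for one positive class of a label sequence."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy binary labels, e.g. arrhythmia (1) vs. normal (0):
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```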
Funding: Financially supported by the National Key R&D Program of China (No. 2017YFB0702100) and the National Natural Science Foundation of China (No. 51871024).
Abstract: This paper proposes a new deep structure model, called Densely Connected Cascade Forest-Weighted K Nearest Neighbors (DCCF-WKNNs), to implement corrosion data modelling and corrosion knowledge mining. First, we collect 409 outdoor atmospheric corrosion samples of low-alloy steels as experimental datasets. Then, we describe the proposed method's workflow, including random forests-weighted K nearest neighbors (RF-WKNNs) and DCCF-WKNNs. Finally, we use the collected datasets to verify the performance of the proposed method. The results show that, compared with commonly used and advanced machine-learning algorithms such as artificial neural networks (ANN), support vector regression (SVR), random forests (RF), and cascade forests (cForest), the proposed method obtains the best prediction results. In addition, the method can predict the corrosion rate under variation of any single environmental variable, such as pH, temperature, relative humidity, SO2, rainfall or Cl-. In this way, the threshold of each variable, beyond which the corrosion rate may change sharply, can be further obtained.
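The weighted K-nearest-neighbours building block named above can be sketched in a few lines. This is an illustrative 1-D toy with inverse-distance weights, not the paper's DCCF-WKNNs implementation, and all values are hypothetical:

```python
def wknn_predict(train, x, k=3, eps=1e-9):
    """Distance-weighted k-nearest-neighbour regression on (feature, target)
    pairs with scalar features; weights are inverse distances."""
    nearest = sorted(train, key=lambda ft: abs(ft[0] - x))[:k]
    weights = [1.0 / (abs(f - x) + eps) for f, _ in nearest]
    return sum(w * t for w, (_, t) in zip(weights, nearest)) / sum(weights)

# Toy corrosion rate vs. one environmental variable (e.g. pH):
train = [(4.0, 0.9), (5.0, 0.7), (6.0, 0.5), (7.0, 0.4)]
rate = wknn_predict(train, x=5.5, k=2)
```

Sweeping `x` over a range while holding the other variables fixed is the kind of single-variable scan the abstract uses to locate thresholds.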
Funding: Supported in part by the Gusu Innovation and Entrepreneurship Leading Talents in Suzhou City (Grant Nos. ZXL2021425 and ZXL2022476); the Doctor of Innovation and Entrepreneurship Program in Jiangsu Province (Grant No. JSSCBS20211440); the Jiangsu Province Key R&D Program (Grant No. BE2019682); the Natural Science Foundation of Jiangsu Province (Grant No. BK20200214); the National Key R&D Program of China (Grant No. 2017YFB0403701); the National Natural Science Foundation of China (Grant Nos. 61605210, 61675226, and 62075235); the Youth Innovation Promotion Association of the Chinese Academy of Sciences (Grant No. 2019320); the Frontier Science Research Project of the Chinese Academy of Sciences (Grant No. QYZDB-SSW-JSC03); and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02060000).
Abstract: The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our approach. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
Abstract: BACKGROUND: Bleeding is one of the major complications after endoscopic submucosal dissection (ESD) in early gastric cancer (EGC) patients. There are limited studies on estimating the bleeding risk after ESD using an artificial intelligence system. AIM: To derive and verify the performance of a deep learning model and a clinical model for predicting bleeding risk after ESD in EGC patients. METHODS: Patients with EGC who underwent ESD between January 2010 and June 2020 at the Samsung Medical Center were enrolled, and post-ESD bleeding (PEB) was investigated retrospectively. We split the entire cohort into a development set (80%) and a validation set (20%). The deep learning and clinical models were built on the development set and tested on the validation set. The performance of the deep learning model and the clinical model was compared using the area under the curve and the stratification of bleeding risk after ESD. RESULTS: A total of 5629 patients were included, and PEB occurred in 325 patients. The area under the curve for predicting PEB was 0.71 (95% confidence interval: 0.63-0.78) for the deep learning model and 0.70 (95% confidence interval: 0.62-0.77) for the clinical model, without significant difference (P = 0.730). Patients assigned to the low- (<5%), intermediate- (≥5%, <9%), and high-risk (≥9%) categories showed actual bleeding rates of 2.2%, 3.9%, and 11.6%, respectively, in the deep learning model, and 4.0%, 8.8%, and 18.2%, respectively, in the clinical model. CONCLUSION: A deep learning model can predict and stratify the bleeding risk after ESD in patients with EGC.
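The three-way risk stratification defined in the RESULTS section above is a simple thresholding of the predicted bleeding probability. A minimal sketch of that mapping (the cut-offs come from the abstract; the probabilities below are hypothetical):

```python
def stratify_bleeding_risk(prob):
    """Map a predicted post-ESD bleeding probability to the categories used
    above: low (<5%), intermediate (>=5% and <9%), high (>=9%)."""
    if prob < 0.05:
        return "low"
    if prob < 0.09:
        return "intermediate"
    return "high"

# Three hypothetical model outputs, one per category:
categories = [stratify_bleeding_risk(p) for p in (0.02, 0.06, 0.12)]
```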
Funding: Supported by the National Key Research and Development Program of China (No. 2018YFB1003605); the Foundations of CARCH (No. CARCH201704); the National Natural Science Foundation of China (No. 61472312); the Foundations of Shaanxi Province and the Xi'an Science and Technology Plan (Nos. B018230008 and BD34017020001); and the Foundations of Xidian University (No. JBZ171002).
Abstract: Despite the rapid development of mobile and embedded hardware, directly executing computation-expensive and storage-intensive deep learning algorithms on these devices' local side remains constrained for sensory data analysis. In this paper, we first summarize the layer compression techniques for state-of-the-art deep learning models in three categories: weight factorization and pruning, convolution decomposition, and special layer architecture design. For each category of layer compression techniques, we quantify the storage and computation savings they make tunable and discuss their practical challenges and possible improvements. Then, we implement Android projects using TensorFlow Mobile to test these 10 compression methods and compare their practical performance in terms of accuracy, parameter size, intermediate feature size, computation, processing latency, and energy consumption. To further discuss their advantages and bottlenecks, we test their performance over four standard recognition tasks on six resource-constrained Android smartphones. Finally, we survey two types of run-time Neural Network (NN) compression techniques, run-time resource management and cost optimization with special NN architectures, which are orthogonal to the layer compression techniques.
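The storage savings of the weight-factorization category mentioned above can be quantified directly: replacing an m x n weight matrix with a rank-r factorization (m x r)(r x n) stores r(m + n) parameters instead of mn. A sketch with hypothetical layer sizes, not figures from the survey:

```python
def factorized_params(m, n, rank):
    """Parameter count of an m x n weight matrix factorized as (m x r)(r x n),
    as in weight-factorization layer compression."""
    return rank * (m + n)

# A hypothetical 1024 x 1024 fully connected layer compressed at rank 64:
full = 1024 * 1024
compressed = factorized_params(1024, 1024, 64)
ratio = full / compressed
```

The rank is the tunable knob: lower ranks shrink storage and computation further at the cost of approximation accuracy.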
Funding: Supported by the National Natural Science Foundation of China (NSFC) (Nos. 61806087, 61902158).
Abstract: This study employs nine distinct deep learning models to categorize 12,444 blood cell images and automatically extract relevant information from them with an accuracy beyond that achievable with traditional techniques. The work is intended to improve current methods for assessing human health through measurement of the distribution of four types of blood cells, namely eosinophils, neutrophils, monocytes, and lymphocytes, known for their relationship with human body damage, inflammatory regions, and organ illnesses in particular, and with the health of the immune system and other hazards, such as cardiovascular disease or infections, more generally. The results of the experiments show that the deep learning models can automatically extract features from the blood cell images and properly classify them with accuracies of 98%, 97%, and 89% on the training, validation, and test datasets, respectively.
Funding: Provided by Taif University Researchers Supporting Project Number (TURSP-2020/10), Taif University, Taif, Saudi Arabia.
Abstract: Stock market trend forecasting is a topical and significant research challenge due to the market's dynamic and unstable nature. Stock data are usually non-stationary, and attributes are non-correlative with each other. Several traditional Stock Technical Indicators (STIs) may incorrectly predict stock market trends. To study stock market characteristics using STIs and make efficient trading decisions, a robust model is built. This paper aims to build an Evolutionary Deep Learning Model (EDLM) to identify stock price trends by using STIs. The proposed model implements a Deep Learning (DL) model to establish the concept of a correlation tensor. For the analysis of the datasets of the three most popular banking organizations obtained from the live stock market of the National Stock Exchange (NSE), India, a Long Short-Term Memory (LSTM) network is used. The datasets encompass the trading days from the 17th of Nov 2008 to the 15th of Nov 2018. This work also conducted exhaustive experiments to study the correlation of various STIs with stock price trends. The model built with an EDLM has shown significant improvements over two benchmark ML models and a deep learning one. The proposed model aids investors in making profitable investment decisions as it presents trend-based forecasting and has achieved prediction accuracies of 63.59%, 56.25%, and 57.95% on the datasets of HDFC, Yes Bank, and SBI, respectively. Results indicate that the proposed EDLM with a combination of STIs can often provide improved results over the other state-of-the-art algorithms.
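One of the simplest STIs of the kind the abstract studies is the Simple Moving Average. A minimal sketch of computing it over a closing-price series (the abstract does not specify which STIs were used; this indicator and the toy prices are illustrative):

```python
def sma(prices, window):
    """Simple Moving Average: the mean of the last `window` closing prices
    at each position where a full window exists."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

# Toy closing prices and a 3-day SMA:
closes = [10.0, 11.0, 12.0, 13.0, 14.0]
sma3 = sma(closes, 3)
```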
Abstract: Every day, websites and personal archives create more and more photos, and the size of these archives is immense. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these collections provide relevant indexing information, which makes it difficult to discover the data a user is interested in. Therefore, in order to determine the significance of the data, it is important to identify the contents in an informative manner. Image annotation is one of the most challenging domains in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from open-source systems consisting of some labelled images (for the training phase) and some unlabeled images (Corel 5K, MSRC v2). After that, the images are sent to pre-processing steps such as colour space quantization and texture colour class mapping. The pre-processed images are sent to the segmentation approach for an efficient labelling technique using J-image segmentation (JSEG). The final step is automatic annotation using the ACDLM, which is a combination of a Convolutional Neural Network (CNN) and the Honey Badger Algorithm (HBA). Based on the proposed classifier, the unlabeled images are labelled. The proposed methodology is implemented in MATLAB and its performance is evaluated by metrics such as accuracy, precision, recall and F1-measure.
Abstract: Extensive transgression of lake water occurred during the Cretaceous Qingshankou Stage and the Nengjiang Stage in the Songliao basin, forming widespread deep-water deposits. Eleven types of microfacies of deep-water deposits have been recognized in the continuous core rocks from the SKII, including mudstone of still water, marlite, dolostone, oil shale, volcanic ashes, turbidite, slump sediment, tempestite, seismite, ostracod limestone and sparry carbonate, which are divided into two types: microfacies generated by gradually changing environments (Ⅰ) and microfacies generated by geological events (Ⅱ). Type Ⅰ is composed of special fine-grained sediments such as marlite, dolostone and oil shale as well as mudstone, and Type Ⅱ is composed of sediments related to geological events, such as volcanic ashes, turbidite, slump sediment, tempestite, seismite and ostracod limestone. The formation of sparry carbonate may be controlled by factors related to both environments and events. Generally, the mudstone sediments of still water can be regarded as background sediments, and the remaining sediments are all event sediments with unique forming models, which may reflect the controlling effects of climate and tectonics.
Funding: Project (51378498) supported by the National Natural Science Foundation of China; Project (BK20141066) supported by the Natural Science Foundation of Jiangsu Province, China; Project (SKLGDUEK1208) supported by the State Key Laboratory for Geo-Mechanics and Deep Underground Engineering (China University of Mining & Technology), China; Project (DPMEIKF201301) supported by the State Key Laboratory of Disaster Prevention & Mitigation of Explosion & Impact (PLA University of Science and Technology), China.
Abstract: It is important to investigate the dynamic behaviors of deep rocks near an explosion cavity to reveal the mechanisms of deformation and fracture. Some improvements are made to the Grigorian model, with a focus on the dilation effects and the relaxation effects of deep rocks, and high-pressure equations of state of Mie-Grüneisen form are also established. Numerical calculations of free-field parameters for deep underground explosions are carried out based on user subroutines compiled by means of the secondary development functions of the LS-DYNA 970 3D software. The histories of radial stress, radial velocity and radial displacement of rock particles are obtained, and the calculation results are compared with those of the U.S. Hardhat nuclear test. It is indicated that the dynamic responses of the free field for deep underground explosions are well simulated based on the improved Grigorian model, and the calculation results are in good agreement with the data of the U.S. Hardhat nuclear test. The peak values of particle velocities are consistent with those of the test, but the waveform widths and the rising times are obviously greater than those without dilation effects. The attenuation rates of particle velocities are greater than the calculation results with the classic plastic model, and they are consistent with the results of the Hardhat nuclear test. The attenuation behaviors and the rising times of stress waves are well captured by introducing dilation effects and relaxation effects into the calculation model. Therefore, the defects of the Grigorian model are avoided. It is also indicated that the initial stress has obvious influences on the waveforms of radial stress and the radial displacements of rock particles.
Funding: This work was supported by the National Key Research and Development Program of China (2018YFC2001302); the National Natural Science Foundation of China (91520202); the Chinese Academy of Sciences Scientific Equipment Development Project (YJKYYQ20170050); the Beijing Municipal Science and Technology Commission (Z181100008918010); the Youth Innovation Promotion Association of the Chinese Academy of Sciences; and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32040200).
Abstract: Brain encoding and decoding via functional magnetic resonance imaging (fMRI) are two important aspects of visual perception neuroscience. Although previous researchers have made significant advances in brain encoding and decoding models, existing methods still require improvement using advanced machine learning techniques. For example, traditional methods usually build the encoding and decoding models separately, and are prone to overfitting on a small dataset. In fact, effectively unifying the encoding and decoding procedures may allow for more accurate predictions. In this paper, we first review the existing encoding and decoding methods and discuss the potential advantages of a "bidirectional" modeling strategy. Next, we show that there are correspondences between deep neural networks and human visual streams in terms of architecture and computational rules. Furthermore, deep generative models (e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs)) have produced promising results in studies on brain encoding and decoding. Finally, we propose that the dual learning method, which was originally designed for machine translation tasks, could help to improve the performance of encoding and decoding models by leveraging large-scale unpaired data.
Abstract: In this paper, the effects of frying time and of egg white (0%, 5% and 10% w/w) and chitosan (0%, 0.5% and 1.5% w/w) addition to the batter formulation on the quality of simulated crispy deep-fried Kurdish cheese nugget crusts were studied using a deep-fried crust model. The moisture content, oil content, colour and hardness of the samples were determined. Crust models were fried at 190℃ for 60, 120 and 180 s. Batter formulation and frying time significantly (p < 0.01) affected the moisture, oil content, colour and hardness of the crust models. The batter formulation containing 10% egg white was found to be effective in decreasing the oil content of the crust models. The mean moisture and fat contents of crust models formed with batter containing 10% egg white, fried at 190℃ for 180 s, were 6.207 ± 0.447 and 5.649 ± 0.394. Batters containing 5% egg white and 1.5% chitosan showed the lowest moisture content and the highest oil content among all the formulations. Crust models containing a combination of egg white and chitosan were the darkest. The hardness of samples containing chitosan was the highest, especially for the 1.5% chitosan formulation, whose mean hardness at 60, 120 and 180 s of frying was 21.518 ± 0.481, 36.871 ± 1.758 and 49.563 ± 1.847, respectively.