Journal Articles
6,346 articles found
1. Sentiment Analysis of Low-Resource Language Literature Using Data Processing and Deep Learning
Authors: Aizaz Ali, Maqbool Khan, Khalil Khan, Rehan Ullah Khan, Abdulrahman Aloraini. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 4, pp. 713-733 (21 pages)
Abstract: Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, and Roman Arabic, sentiment analysis of resource-poor languages such as Urdu remains a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Persian, Pashto, Turkish, Punjabi, and Saraiki. Urdu literature, with its distinct character set and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. This limited availability of resources has fueled increased interest among researchers, prompting deeper exploration of Urdu sentiment analysis. This research applies sophisticated deep learning models to an extensive Urdu dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions in Urdu despite the absence of well-curated datasets. To tackle this challenge, a comprehensive Urdu dataset was first created by aggregating data from sources such as newspapers, articles, and social media comments, followed by thorough cleaning and preprocessing to ensure data quality. The study leverages two well-known deep learning models, Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for training and evaluating sentiment analysis performance, and explores hyperparameter tuning to optimize the models' efficacy. Precision, recall, and the F1-score are employed to assess model effectiveness. The findings reveal that the RNN surpasses the CNN in Urdu sentiment analysis, achieving a significantly higher accuracy of 91%, which solidifies its status as a compelling option for sentiment analysis tasks in Urdu.
Keywords: Urdu sentiment analysis; convolutional neural networks; recurrent neural networks; deep learning; natural language processing; neural networks
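The abstract above evaluates its models with precision, recall, and the F1-score. As a minimal reference, these metrics can be computed directly from true-positive, false-positive, and false-negative counts; the counts below are synthetic illustrations, not the paper's results:

```python
# Precision, recall, and F1 from raw confusion counts (synthetic numbers,
# not results from the paper above).

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 90 correct positives, 10 false alarms, 20 misses.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=20)
```

The F1-score is the harmonic mean of precision and recall, so it penalizes a model that trades one heavily for the other.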
2. Microseismic signal processing and rockburst disaster identification: A multi-task deep learning and machine learning approach
Authors: Chunchi Ma, Weihao Xu, Xuefeng Ran, Tianbin Li, Hang Zhang, Dongwei Xing. 《Journal of Rock Mechanics and Geotechnical Engineering》, 2026, Issue 1, pp. 441-456 (16 pages)
Abstract: Underground engineering projects such as deep tunnel excavation often encounter rockburst disasters accompanied by numerous microseismic events. Rapid interpretation of microseismic signals is crucial for the timely identification of rockbursts. However, conventional processing involves multi-step workflows (classification, denoising, picking, locating, and computational analysis) coupled with manual intervention, which collectively compromise the reliability of early warnings. To address these challenges, this study proposes the "microseismic stethoscope", a multi-task machine learning and deep learning model designed for the automated processing of massive volumes of microseismic signals. The model efficiently extracts three key parameters necessary for recognizing rockburst disasters: rupture location, microseismic energy, and moment magnitude. Specifically, it feeds raw waveform features into three dedicated sub-networks: a classifier for source zone classification and two regressors for microseismic energy and moment magnitude estimation. The model is markedly more efficient than traditional and semi-automated processing, reducing per-event processing time from 0.71 s and 0.49 s, respectively, to merely 0.036 s. It concurrently achieves 98% accuracy in source zone classification, with microseismic energy and moment magnitude estimation errors of 0.13 and 0.05, respectively. The model has been applied and validated in the Daxiagu Tunnel in Sichuan, China. The application results indicate that it matches traditional methods in determining source parameters and can therefore be used to identify potential geomechanical processes underlying rockburst disasters. By enhancing the reliability of microseismic signal processing, the proposed model represents a significant advance in the identification of rockburst disasters.
Keywords: underground engineering; microseismic signal processing; deep learning; multi-task; rockburst identification
3. Harnessing deep learning for the discovery of latent patterns in multi-omics medical data
Authors: Okechukwu Paul-Chima Ugwu, Fabian C. Ogenyi, Chinyere Nkemjika Anyanwu, Melvin Nnaemeka Ugwu, Esther Ugo Alum, Mariam Basajja, Joseph Obiezu Chukwujekwu Ezeonwumelu, Daniel Ejim Uti, Ibe Michael Usman, Chukwuebuka Gabriel Eze, Simeon Ikechukwu Egba. 《Medical Data Mining》, 2026, Issue 1, pp. 32-45 (14 pages)
Abstract: With the rapid growth of biomedical data, particularly multi-omics data spanning genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed with traditional data analysis methods. Deep learning has consequently emerged as a strong tool for analysing omics data, owing to its ability to handle complex, non-linear relationships. This paper explores the fundamental concepts of deep learning and their application to multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has proven effective in disease classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements, and then discuss future directions: combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and the understanding of complex disorders.
Keywords: deep learning; multi-omics integration; biomedical data mining; precision medicine; graph neural networks; autoencoders and transformers
4. Human Activity Recognition Using Weighted Average Ensemble by Selected Deep Learning Models
Authors: Waseem Akhtar, Mahwish Ilyas, Romana Aziz, Ghadah Aldehim, Tassawar Iqbal, Muhammad Ramzan. 《Computer Modeling in Engineering & Sciences》, 2026, Issue 2, pp. 971-989 (19 pages)
Abstract: Human Activity Recognition (HAR) is a novel area of computer vision with great impact on healthcare, smart environments, and surveillance, as it can automatically detect human behavior. It plays a vital role in many applications, such as smart homes, healthcare, human-computer interaction, sports analysis, and especially intelligent surveillance. However, due to the diversity of human actions, varied environmental influences, and a lack of data and resources, high recognition accuracy remains elusive. In this paper, we propose a robust and efficient HAR system that leverages deep learning paradigms, including pre-trained models, CNN architectures, and their weighted-average fusion. A weighted average ensemble technique is employed to fuse three deep learning models: EfficientNet, ResNet50, and a custom CNN. The results indicate that a weighted average ensemble strategy is a promising approach to building more effective HAR models for the detection and classification of human activities. Experiments on the benchmark dataset show that the proposed weighted ensemble outperforms existing approaches in accuracy and other key performance measures. The combined weighted-average ensemble of pre-trained and CNN models obtained an accuracy of 98%, compared with 97%, 96%, and 95% for the custom CNN, EfficientNet, and ResNet50 models, respectively.
Keywords: artificial intelligence; computer vision; deep learning; recognition; human activity classification; image processing
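The weighted-average ensemble described above can be sketched in a few lines: each model's per-class probability vector is combined with a fixed weight, and the class with the highest fused probability wins. The weights and probability vectors below are illustrative placeholders, not values from the paper:

```python
# Hedged sketch of a weighted-average ensemble over per-class probability
# outputs from several classifiers. Weights and model outputs are synthetic.

def weighted_average_ensemble(prob_lists, weights):
    """Fuse per-class probability vectors from several models."""
    if len(prob_lists) != len(weights):
        raise ValueError("one weight per model is required")
    total = sum(weights)
    n_classes = len(prob_lists[0])
    fused = [0.0] * n_classes
    for probs, w in zip(prob_lists, weights):
        for i, p in enumerate(probs):
            fused[i] += (w / total) * p
    return fused

# Example: three hypothetical models scoring one clip over 4 activity classes.
cnn    = [0.10, 0.70, 0.15, 0.05]
effnet = [0.20, 0.60, 0.10, 0.10]
resnet = [0.05, 0.80, 0.10, 0.05]
fused = weighted_average_ensemble([cnn, effnet, resnet], weights=[0.5, 0.3, 0.2])
predicted = max(range(len(fused)), key=fused.__getitem__)
```

Normalizing the weights inside the function keeps the fused vector a valid probability distribution regardless of the raw weight scale.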
5. Efficient socket-based data transmission method and implementation in deep learning
Authors: Wei Xin-Jian, Li Shu-Ping, Yang Wu-Yang, Zhang Xiang-Yang, Li Hai-Shan, Xu Xin, Wang Nan, Fu Zhanbao. 《Applied Geophysics》, 2025, Issue 4, pp. 1341-1350, 1499, 1500 (12 pages)
Abstract: Deep learning algorithms, which are increasingly applied in petroleum geophysical prospecting, have achieved good results in improving efficiency and accuracy in test applications. To play a greater role in actual production, these algorithm modules must be integrated into software systems and used more widely in production projects. Deep learning frameworks such as TensorFlow and PyTorch take Python as their core architecture, whereas application programs are mainly written in Java, C#, and other programming languages. During integration, seismic data read by the Java and C# data interfaces must be transferred to the Python main program module. Existing data exchange mechanisms between Java, C#, and Python, such as shared memory and shared directories, suffer from low transmission efficiency and are unsuitable for asynchronous networks. Considering the large volume of seismic data and deep learning's need for network support, this paper proposes a socket-based method of transmitting seismic data. By exploiting the socket's cross-network, efficient long-distance transmission, the approach solves the problem of inefficient low-level data transfer when integrating deep learning modules into a software system. Production applications show that the method effectively overcomes the shortcomings of shared-memory, shared-directory, and similar modes while improving the transmission efficiency of massive seismic data across modules at the bottom of the software.
Keywords: socket; deep learning; data transfer; seismic data; thread pool; river prediction
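The socket exchange described above can be illustrated with a length-prefixed binary framing over TCP, the usual pattern when one process (e.g. a Java/C# reader) streams byte blocks to a Python module. The 8-byte big-endian length header is an assumption for illustration, not the paper's actual wire format:

```python
# Minimal sketch of length-prefixed binary transfer over a TCP socket,
# in the spirit of the socket-based exchange described above. The framing
# (8-byte big-endian length header) is an illustrative assumption.
import socket
import struct
import threading

def send_block(sock, payload: bytes):
    sock.sendall(struct.pack(">Q", len(payload)) + payload)

def _recv_exact(sock, n: int) -> bytes:
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf.extend(chunk)
    return bytes(buf)

def recv_block(sock) -> bytes:
    (length,) = struct.unpack(">Q", _recv_exact(sock, 8))
    return _recv_exact(sock, length)

# Loopback demo: a server thread echoes one block back to the client.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    send_block(conn, recv_block(conn))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()
client = socket.create_connection(("127.0.0.1", port))
send_block(client, b"seismic-trace-bytes")
echoed = recv_block(client)
t.join()
client.close()
server.close()
```

Because TCP is a byte stream, `recv` may return partial data; the `_recv_exact` loop is what makes the framing reliable.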
6. Battery pack capacity prediction using deep learning and data compression technique: A method for real-world vehicles
Authors: Yi Yang, Jibin Yang, Xiaohua Wu, Liyue Fu, Xinmei Gao, Xiandong Xie, Quan Ouyang. 《Journal of Energy Chemistry》, 2025, Issue 7, pp. 553-564 (12 pages)
Abstract: Accurate prediction of battery pack capacity in electric vehicles (EVs) is crucial for ensuring safety and optimizing performance. Despite extensive research on predicting cell capacity from laboratory data, predicting the capacity of onboard battery packs from field data remains challenging due to complex operating conditions and irregular EV usage in real-world settings. Most existing methods extract health feature parameters from raw data for capacity prediction; however, selecting specific parameters often discards critical information, which reduces prediction accuracy. To this end, this paper introduces a novel framework combining deep learning and data compression techniques to accurately predict onboard battery pack capacity. The proposed data compression method converts monthly EV charging data into feature maps that preserve essential data characteristics while reducing the volume of raw data. To address missing capacity labels in field data, a capacity labeling method is proposed that calculates monthly battery capacity by transforming the ampere-hour integration formula and applying linear regression. A deep learning model is then built to predict capacity, using feature maps from historical months to forecast the battery capacity of future months. Evaluated on field data from 20 EVs, the proposed framework achieves a mean absolute error of 0.79 Ah, a mean absolute percentage error of 0.65%, and a root mean square error of 1.02 Ah, highlighting its potential for real-world EV applications.
Keywords: lithium-ion battery; capacity prediction; real-world vehicle data; data compression; deep learning
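The capacity-labeling idea mentioned above rests on ampere-hour integration: integrate charging current over time and scale by the change in state of charge (SOC) to estimate pack capacity. The sketch below uses synthetic numbers and a simplified constant-sampling form; the paper's exact transformation and regression step may differ:

```python
# Hedged sketch of capacity estimation by ampere-hour counting over a
# partial charge: Q = (integral of I dt) / delta_SOC. Synthetic numbers.

def capacity_from_partial_charge(currents_a, dt_s, soc_start, soc_end):
    """Estimate capacity (Ah) from a constant-rate-sampled charge log."""
    charged_ah = sum(currents_a) * dt_s / 3600.0   # ampere-hour integration
    delta_soc = soc_end - soc_start
    if delta_soc <= 0:
        raise ValueError("SOC must increase during charging")
    return charged_ah / delta_soc

# Example: charging at 30 A for one hour raises SOC from 20% to 50%,
# implying a pack capacity of about 100 Ah.
cap = capacity_from_partial_charge([30.0] * 3600, dt_s=1.0,
                                   soc_start=0.2, soc_end=0.5)
```

In the paper's setting, such per-month estimates would then be smoothed with linear regression to serve as training labels.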
7. Dynamic UAV data fusion and deep learning for improved maize phenological-stage tracking
Authors: Ziheng Feng, Jiliang Zhao, Liunan Suo, Heguang Sun, Huiling Long, Hao Yang, Xiaoyu Song, Haikuan Feng, Bo Xu, Guijun Yang, Chunjiang Zhao. 《The Crop Journal》, 2025, Issue 3, pp. 961-974 (14 pages)
Abstract: Near real-time maize phenology monitoring is crucial for field management, cropping system adjustment, and yield estimation. Most phenological monitoring methods are post-seasonal and rely heavily on high-frequency time-series data; they are not applicable on unmanned aerial vehicle (UAV) platforms because of the high cost of acquiring time-series UAV images and the shortage of UAV-based phenological monitoring methods. To address these challenges, we employed the Synthetic Minority Oversampling Technique (SMOTE) for sample augmentation, aiming to resolve the small-sample modelling problem, and used enhanced "separation" and "compactness" feature selection methods to identify input features from multiple data sources. In this process, we incorporated dynamic multi-source data fusion strategies involving vegetation indices (VI), color indices (CI), and texture features (TF). A two-stage neural network combining a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM) is proposed to identify maize phenological stages (sowing, seedling, jointing, trumpet, tasseling, maturity, and harvesting) on UAV platforms. The results indicate that the dataset generated by SMOTE closely resembles the measured dataset. Among the dynamic data fusion strategies, the VI-TF combination proves most effective, followed by CI-TF and VI-CI. Notably, as more data sources are integrated, the model's demand for input features declines significantly. In particular, the CNN-LSTM model based on the fusion of all three data sources exhibited remarkable reliability across the three validation datasets. For Dataset 1 (Beijing Xiaotangshan, 2023: data from 12 UAV flight missions), the model achieved an overall accuracy (OA) of 86.53%, with precision, recall, F1 score, false acceptance rate, and false rejection rate of 0.89, 0.89, 0.87, 0.11, and 0.11, respectively. The model also showed strong generalizability on Dataset 2 (Beijing Xiaotangshan, 2023: data from 6 UAV flight missions) and Dataset 3 (Beijing Xiaotangshan, 2022: data from 4 UAV flight missions), with OAs of 89.4% and 85%, respectively. Meanwhile, the model has a low demand for input features, requiring only 54.55% (99) of all features. These findings not only offer novel insights into near real-time crop phenology monitoring but also provide technical support for agricultural field management and cropping system adaptation.
Keywords: near real-time; maize phenology; deep learning; UAV; multi-source data fusion
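SMOTE, used above for sample augmentation, creates each synthetic sample by interpolating between a minority-class sample and one of its nearest neighbours. The toy sketch below mimics that idea in pure Python on 2-D points; real usage would go through a library implementation (e.g. imbalanced-learn), and the point set here is invented:

```python
# Illustrative SMOTE-style oversampling: each synthetic point lies on the
# segment between a minority sample and one of its k nearest neighbours.
# Toy data; not the paper's implementation.
import random

def smote_like(samples, n_new, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(samples)
        # k nearest neighbours by squared Euclidean distance (excluding base)
        neighbours = sorted(
            (s for s in samples if s is not base),
            key=lambda s: sum((a - b) ** 2 for a, b in zip(base, s)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()   # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = smote_like(minority, n_new=5)
```

Because every synthetic point is a convex combination of two real samples, the augmented set stays inside the minority class's local geometry rather than adding arbitrary noise.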
8. A Comparative Study of Data Representation Techniques for Deep Learning-Based Classification of Promoter and Histone-Associated DNA Regions
Authors: Sarab Almuhaideb, Najwa Altwaijry, Isra Al-Turaiki, Ahmad Raza Khan, Hamza Ali Rizvi. 《Computers, Materials & Continua》, 2025, Issue 11, pp. 3095-3128 (34 pages)
Abstract: Many bioinformatics applications require determining the class of a newly sequenced deoxyribonucleic acid (DNA) sequence, making DNA sequence classification an integral step in bioinformatics analysis, where large biomedical datasets are transformed into valuable knowledge. Existing methods rely on a feature extraction step and suffer from high computational time requirements, whereas newer approaches leveraging deep learning have shown significant promise in enhancing accuracy and efficiency. In this paper, we investigate the performance of several deep learning architectures for DNA sequence classification: Convolutional Neural Network (CNN), CNN-Long Short-Term Memory (CNN-LSTM), CNN-Bidirectional Long Short-Term Memory (CNN-BiLSTM), Residual Network (ResNet), and InceptionV3. Various numerical and visual data representation techniques are used for the input datasets, including label encoding, k-mer sentence encoding, k-mer one-hot vectors, Frequency Chaos Game Representation (FCGR), and the 5-Color Map (ColorSquare). Three datasets are used for training: H3, H4, and the DNA Sequence Dataset (yeast, human, Arabidopsis thaliana). Experiments are performed to determine which combination of DNA representation and deep learning architecture yields the best classification performance. Our results indicate that a hybrid CNN-LSTM network trained on DNA sequences represented as one-hot encoded k-mer sequences performs best, achieving an accuracy of 92.1%.
Keywords: DNA sequence classification; deep learning; data visualization
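The winning representation above, one-hot encoded k-mers, splits a DNA sequence into overlapping length-k substrings and maps each to a one-hot vector over the 4^k possible k-mers. A minimal sketch (the lexicographic A < C < G < T vocabulary ordering is an assumption for illustration):

```python
# Sketch of a k-mer one-hot representation for DNA sequences: each of the
# len(seq)-k+1 overlapping k-mers becomes a one-hot vector of length 4**k.
from itertools import product

def kmer_one_hot(seq, k=3):
    # Vocabulary of all 4**k k-mers, in lexicographic order over "ACGT".
    vocab = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    vectors = []
    for i in range(len(seq) - k + 1):
        vec = [0] * len(vocab)
        vec[vocab[seq[i:i + k]]] = 1
        vectors.append(vec)
    return vectors

encoded = kmer_one_hot("ACGTAC", k=3)   # 4 overlapping 3-mers
```

The resulting matrix (one row per k-mer position) is the kind of input a CNN-LSTM can consume: the CNN sees local motifs, the LSTM the order in which they occur.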
9. A deep learning model for ocean surface latent heat flux based on transformer and data assimilation
Authors: Yahui Liu, Hengxiao Li, Jichao Wang. 《Acta Oceanologica Sinica》, 2025, Issue 5, pp. 115-130 (16 pages)
Abstract: Efficient and accurate prediction of ocean surface latent heat fluxes is essential for understanding and modeling climate dynamics, yet conventional estimation methods have low resolution and limited accuracy. The transformer model, with its self-attention mechanism, effectively captures long-range dependencies. However, due to the non-linearity and uncertainty of the underlying physical processes, the transformer encounters error accumulation, leading to a degradation of accuracy over time. To solve this problem, we combine the data assimilation (DA) technique with the transformer model, continuously correcting the model state to bring it closer to actual observations. In this paper, we propose a deep learning model called TransNetDA, which integrates a transformer, a convolutional neural network, and DA methods. By combining data-driven and DA methods for spatiotemporal prediction, TransNetDA effectively extracts multi-scale spatial features and significantly improves prediction accuracy. The experimental results indicate that TransNetDA surpasses traditional techniques in terms of root mean square error and R² metrics, showcasing superior performance in predicting latent heat fluxes at the ocean surface.
Keywords: climate dynamics; deep learning (DL); data assimilation (DA); transformer; ensemble Kalman filter; ocean surface latent heat flux
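The abstract above reports results in terms of root mean square error (RMSE) and R². For reference, both metrics reduce to short formulas over paired true and predicted values; the series below are synthetic examples, not the paper's data:

```python
# RMSE and coefficient of determination (R^2) from paired series
# (synthetic values for illustration).
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [2.0, 4.0, 6.0, 8.0]
y_pred = [2.5, 3.5, 6.5, 7.5]
err = rmse(y_true, y_pred)
fit = r2(y_true, y_pred)
```

R² compares the model's squared error against that of the trivial "always predict the mean" baseline, so 1.0 is perfect and 0.0 is no better than the mean.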
10. Deep Learning in Biomedical Image and Signal Processing: A Survey
Authors: Batyrkhan Omarov. 《Computers, Materials & Continua》, 2025, Issue 11, pp. 2195-2253 (59 pages)
Abstract: Deep learning now underpins many state-of-the-art systems for biomedical image and signal processing, enabling automated lesion detection, physiological monitoring, and therapy planning with accuracy that rivals expert performance. This survey reviews the principal model families (convolutional, recurrent, generative, reinforcement, autoencoder, and transfer-learning approaches), emphasising how their architectural choices map to tasks such as segmentation, classification, reconstruction, and anomaly detection. A dedicated treatment of multimodal fusion networks shows how imaging features can be integrated with genomic profiles and clinical records to yield more robust, context-aware predictions. To support clinical adoption, we outline post-hoc explainability techniques (Grad-CAM, SHAP, LIME) and describe emerging intrinsically interpretable designs that expose decision logic to end users. Regulatory guidance from the U.S. FDA, the European Medicines Agency, and the EU AI Act is summarised, linking transparency and lifecycle-monitoring requirements to concrete development practices. Remaining challenges, such as data imbalance, computational cost, privacy constraints, and cross-domain generalization, are discussed alongside promising solutions such as federated learning, uncertainty quantification, and lightweight 3-D architectures. The article therefore offers researchers, clinicians, and policymakers a concise, practice-oriented roadmap for deploying trustworthy deep-learning systems in healthcare.
Keywords: deep learning; biomedical imaging; signal processing; neural networks; image segmentation; disease classification; drug discovery; patient monitoring; robotic surgery; artificial intelligence in healthcare
11. An Enhanced Lung Cancer Detection Approach Using Dual-Model Deep Learning Technique (Cited: 1)
Authors: Sumaia Mohamed Elhassan, Saad Mohamed Darwish, Saleh Mesbah Elkaffas. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2025, Issue 1, pp. 835-867 (33 pages)
Abstract: Lung cancer continues to be a leading cause of cancer-related deaths worldwide, emphasizing the critical need for improved diagnostic techniques. Early detection of lung tumors significantly increases the chances of successful treatment and survival; however, current diagnostic methods often fail to detect tumors at an early stage or to accurately pinpoint their location within the lung tissue. Single-model deep learning approaches, while beneficial, cannot capture the full range of features present in medical imaging data, leading to incomplete or inaccurate detection, and they may not be robust enough to handle the wide variability in medical images arising from different imaging conditions, patient anatomy, and tumor characteristics. Dual-model or multi-model approaches can overcome these disadvantages. This research enhances lung cancer detection by combining two learning models: a Convolutional Neural Network (CNN) for classification and the You Only Look Once (YOLOv8) architecture for real-time identification and localization of tumors. The CNN automatically learns to extract hierarchical features from raw image data, capturing patterns such as edges, textures, and complex structures that are crucial for identifying lung cancer. YOLOv8 incorporates multiscale feature extraction, enabling the detection of tumors of varying sizes and scales within a single image, which is particularly beneficial for small or irregularly shaped tumors. Furthermore, cutting-edge data augmentation with Deep Convolutional Generative Adversarial Networks (DCGAN) addresses the issue of limited data and boosts the models' ability to learn from diverse and comprehensive datasets. The combined method not only improves accuracy and localization but also ensures efficient real-time processing, which is crucial for practical clinical applications. The CNN achieved 97.67% accuracy in classifying lung tissue into healthy and cancerous categories. The YOLOv8 model achieved an Intersection over Union (IoU) score of 0.85 for tumor localization, reflecting high precision in detecting and marking tumor boundaries within the images. Finally, incorporating synthetic images generated by DCGAN led to a 10% improvement in both CNN classification accuracy and YOLOv8 detection performance.
Keywords: lung cancer detection; dual-model deep learning technique; data augmentation; CNN; YOLOv8
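The IoU score reported above for tumor localization is the overlap area of the predicted and ground-truth bounding boxes divided by the area of their union. A minimal sketch for axis-aligned boxes given as (x1, y1, x2, y2), with invented example boxes:

```python
# Intersection-over-Union (IoU) for axis-aligned boxes (x1, y1, x2, y2).
# Example boxes are invented for illustration.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

score = iou((0, 0, 10, 10), (5, 5, 15, 15))   # overlap 25, union 175
```

An IoU of 0.85, as reported for YOLOv8 above, means the predicted box and the annotated tumor box share 85% of their combined area.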
12. Deep Learning-Driven Data Curation and Model Interpretation for Smart Manufacturing (Cited: 7)
Authors: Jianjing Zhang, Robert X. Gao. 《Chinese Journal of Mechanical Engineering》 (SCIE, EI, CAS, CSCD), 2021, Issue 3, pp. 52-72 (21 pages)
Abstract: Characterized by self-monitoring and agile adaptation to fast-changing dynamics in complex production environments, smart manufacturing as envisioned under Industry 4.0 aims to improve the throughput and reliability of production beyond the state of the art. While the widespread application of deep learning (DL) has opened new opportunities to accomplish this goal, data quality and model interpretability remain roadblocks to the widespread acceptance of DL in real-world applications. This has motivated research on two fronts: data curation, which aims to provide quality data as input for meaningful DL-based analysis, and model interpretation, which intends to reveal the physical reasoning underlying DL model outputs and promote trust from users. This paper summarizes key techniques in data curation, where breakthroughs in data denoising, outlier detection, imputation, balancing, and semantic annotation have demonstrated effectiveness in extracting information from noisy, incomplete, insufficient, and/or unannotated data. Also highlighted are model interpretation methods that address the "black-box" nature of DL and move toward model transparency.
Keywords: deep learning; data curation; model interpretation
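Of the curation steps the survey above lists, outlier detection is the most self-contained to illustrate. A z-score filter is one classic baseline (a sketch of the general idea, not a method the paper prescribes), flagging values that lie more than a chosen number of standard deviations from the mean:

```python
# Z-score outlier filter: flag values far from the mean in units of the
# population standard deviation. Data values are synthetic sensor readings.
import statistics

def zscore_outliers(values, threshold=3.0):
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []
    return [v for v in values if abs(v - mean) / sd > threshold]

data = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0]   # one corrupted reading
outliers = zscore_outliers(data, threshold=2.0)
```

One caveat visible even in this toy: a large outlier inflates the standard deviation itself, which is why robust variants (e.g. median-based scores) are often preferred in practice.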
13. Lensless complex amplitude demodulation based on deep learning in holographic data storage (Cited: 7)
Authors: Jianying Hao, Xiao Lin, Yongkun Lin, Mingyong Chen, Ruixian Chen, Guohai Situ, Hideyoshi Horimai, Xiaodi Tan. 《Opto-Electronic Advances》 (SCIE, EI, CAS, CSCD), 2023, Issue 3, pp. 42-56 (15 pages)
Abstract: To increase storage capacity in holographic data storage (HDS), the information to be stored is encoded into a complex amplitude, and fast, accurate retrieval of amplitude and phase from the reconstructed beam is necessary during data readout. In this study, we propose a complex amplitude demodulation method based on deep learning from a single-shot diffraction intensity image and verify it in a non-interferometric lensless experiment demodulating four-level amplitude and four-level phase. By analyzing the correlation between the diffraction intensity features and the amplitude- and phase-encoded data pages, the inverse problem is decomposed into two backward operators, realized as two convolutional neural networks (CNNs), that demodulate amplitude and phase respectively. The experimental system is simple, stable, and robust, requiring only a single diffraction image for direct demodulation of both amplitude and phase. To the best of our knowledge, this is the first experimental demonstration in HDS of multilevel complex amplitude demodulation from a single diffraction intensity image without iterations.
Keywords: holographic data storage; complex amplitude demodulation; deep learning; computational imaging
14. DeepSwarm: towards swarm deep learning with bi-directional optimization of data acquisition and processing
Authors: Sicong LIU, Bin GUO, Ziqi WANG, Lehao WANG, Zimu ZHOU, Xiaochen LI, Zhiwen YU. 《Frontiers of Computer Science》, 2025, Issue 3, pp. 125-127 (3 pages)
Excerpt (1 Introduction): On-device deep learning (DL) on mobile and embedded IoT devices drives various applications [1], such as robotics image recognition [2] and drone swarm classification [3]. Efficient local data processing preserves privacy, enhances responsiveness, and saves bandwidth. However, current on-device DL relies on predefined patterns, leading to accuracy and efficiency bottlenecks: it is difficult to provide feedback on data processing performance during the data acquisition stage, because processing typically occurs after data acquisition.
Keywords: drone swarm classification; efficient local data processing; on-device deep learning; bi-directional optimization; IoT devices; swarm deep learning
A machine learning framework for low-field NMR data processing 被引量:5
15
作者 Si-Hui Luo Li-Zhi Xiao +4 位作者 Yan Jin Guang-Zhi Liao Bin-Sen Xu Jun Zhou Can Liang 《Petroleum Science》 SCIE CAS CSCD 2022年第2期581-593,共13页
Low-field(nuclear magnetic resonance)NMR has been widely used in petroleum industry,such as well logging and laboratory rock core analysis.However,the signal-to-noise ratio is low due to the low magnetic field strengt... Low-field(nuclear magnetic resonance)NMR has been widely used in petroleum industry,such as well logging and laboratory rock core analysis.However,the signal-to-noise ratio is low due to the low magnetic field strength of NMR tools and the complex petrophysical properties of detected samples.Suppressing the noise and highlighting the available NMR signals is very important for subsequent data processing.Most denoising methods are normally based on fixed mathematical transformation or handdesign feature selectors to suppress noise characteristics,which may not perform well because of their non-adaptive performance to different noisy signals.In this paper,we proposed a“data processing framework”to improve the quality of low field NMR echo data based on dictionary learning.Dictionary learning is a machine learning method based on redundancy and sparse representation theory.Available information in noisy NMR echo data can be adaptively extracted and reconstructed by dictionary learning.The advantages and application effectiveness of the proposed method were verified with a number of numerical simulations,NMR core data analyses,and NMR logging data processing.The results show that dictionary learning can significantly improve the quality of NMR echo data with high noise level and effectively improve the accuracy and reliability of inversion results. 展开更多
Keywords: dictionary learning; low-field NMR; denoising; data processing; T₂ distribution
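The core idea in this entry is sparse reconstruction: noisy echo data are expressed as a sparse combination of dictionary atoms, and the residual noise is discarded. A hedged sketch of that step on a synthetic NMR-like decay; note the paper *learns* its dictionary adaptively, while this sketch uses a fixed dictionary of decaying exponentials to stay short:

```python
import numpy as np

# Denoise a synthetic multi-exponential echo decay by greedy sparse coding
# over an overcomplete dictionary of exp(-t/T2) atoms (fixed, not learned).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

T2_grid = np.logspace(-2, 0, 60)                 # grid of relaxation times
D = np.exp(-t[:, None] / T2_grid[None, :])
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms

clean = 1.0 * np.exp(-t / 0.3) + 0.5 * np.exp(-t / 0.05)
noisy = clean + rng.normal(0, 0.05, t.size)

def matching_pursuit(y, D, k=6):
    """Greedy sparse coding: select k atoms, refit coefficients jointly."""
    r, idx = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))      # most correlated atom
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                 # residual after refit
    return idx, coef

idx, coef = matching_pursuit(noisy, D)
denoised = D[:, idx] @ coef

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

Because only a handful of atoms are kept, the reconstruction captures the decays but very little of the noise, so `err_denoised` falls well below `err_noisy`.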
Novel Multi-Step Deep Learning Approach for Detection of Complex Defects in Solar Cells (Cited by 1)
16
Authors: JIANG Wenbo, ZHENG Hangbin, BAO Jinsong. Journal of Shanghai Jiaotong University (Science), 2025, Issue 5, pp. 1050-1064 (15 pages)
Solar cell defects exhibit significant variation and many types, with some defect data being difficult to acquire or only available at small scale, posing small-sample and small-target challenges for defect detection in solar cells. To address this issue, this paper proposes a multi-step approach for detecting complex defects in solar cells. First, individual cell plates are extracted from electroluminescence images for block-by-block detection. Then, StyleGAN2-Ada is used for generative-adversarial-network data augmentation to expand the number of samples for small-sample defect classes. Finally, the synthetic dataset is combined with the real dataset, and an improved YOLOv5 model is trained on this mixed dataset. Experimental results demonstrate that the proposed method achieves superior performance in detecting defects with small samples and small targets, with the final recall reaching 99.7%, an increase of 3.9% over the unimproved model. Additionally, precision and mean average precision increase by 3.4% and 3.5%, respectively. Moreover, the experiments demonstrate that training the improved network on the mixed dataset effectively enhances the detection performance of the model. The combination of these approaches significantly improves the network's ability to detect solar cell defects.
Keywords: intelligent manufacturing; intelligent defect recognition; deep learning; data augmentation; solar cells
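The "mixed dataset" step above amounts to topping up rare defect classes with GAN-generated samples before detector training. A minimal sketch of that bookkeeping; the file names, class labels, and the `min_per_class` threshold are toy placeholders, not the paper's actual data or parameters:

```python
# Combine real and GAN-generated defect samples so each rare class reaches
# a minimum count before training the detector. Contents are illustrative.
import random
from collections import Counter

random.seed(0)
real = [("crack", f"real_{i}.png") for i in range(40)] + \
       [("finger_break", f"real_f{i}.png") for i in range(5)]   # rare class
fake_pool = [("finger_break", f"fake_{i}.png") for i in range(100)]

def build_mixed(real, fake_pool, min_per_class=30):
    counts = Counter(label for label, _ in real)
    mixed = list(real)
    for label, n in counts.items():
        need = min_per_class - n
        if need > 0:                     # only top up under-represented classes
            candidates = [s for s in fake_pool if s[0] == label]
            mixed += random.sample(candidates, min(need, len(candidates)))
    return mixed

mixed = build_mixed(real, fake_pool)
```

Well-represented classes ("crack" here) are left untouched; only the rare class is padded with synthetic images, which mirrors the small-sample strategy the entry reports.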
A Comprehensive Review of Multimodal Deep Learning for Enhanced Medical Diagnostics (Cited by 1)
17
Authors: Aya M. Al-Zoghby, Ahmed Ismail Ebada, Aya S. Saleh, Mohammed Abdelhay, Wael A. Awad. Computers, Materials & Continua, 2025, Issue 9, pp. 4155-4193 (39 pages)
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration of, and learning from, diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation in federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and promising directions for further research.
Keywords: multimodal deep learning; medical diagnostics; multimodal healthcare fusion; healthcare data integration
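Reviews of this kind typically contrast two basic fusion strategies: early fusion (concatenate per-modality features into one vector) and late fusion (combine per-modality prediction scores). A hedged sketch of both on toy random features; the feature dimensions and the linear sigmoid "heads" are illustrative, not from any model in the review:

```python
import numpy as np

# Two common multimodal fusion strategies on toy data for 8 patients.
rng = np.random.default_rng(1)
img_feat  = rng.normal(size=(8, 16))   # e.g. imaging embedding
omic_feat = rng.normal(size=(8, 32))   # e.g. omics embedding

# Early fusion: one joint feature vector per patient for a downstream model.
early = np.concatenate([img_feat, omic_feat], axis=1)

# Late fusion: each modality produces its own score; scores are averaged.
def modality_score(x, w):
    return 1.0 / (1.0 + np.exp(-(x @ w)))      # sigmoid over a linear head

w_img, w_omic = rng.normal(size=16), rng.normal(size=32)
late = 0.5 * (modality_score(img_feat, w_img) + modality_score(omic_feat, w_omic))
```

Early fusion lets a single model learn cross-modal interactions but is sensitive to the heterogeneity issues the review flags; late fusion degrades more gracefully when one modality is noisy or missing.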
Deep learning technique for process fault detection and diagnosis in the presence of incomplete data (Cited by 4)
18
Authors: Cen Guo, Wenkai Hu, Fan Yang, Dexian Huang. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2020, Issue 9, pp. 2358-2367 (10 pages)
In modern industrial processes, timely detection and diagnosis of process abnormalities are critical for monitoring process operations. Various fault detection and diagnosis (FDD) methods have been proposed and implemented, the performance of which, however, can be drastically degraded by the common presence of incomplete or missing data in real industrial scenarios. This paper presents a new FDD approach based on an incomplete-data imputation technique for process fault recognition. It employs a modified stacked autoencoder, a deep learning structure, in the incomplete-data treatment phase, and classifies data representations rather than the imputed complete data in the fault identification phase. A benchmark process, the Tennessee Eastman process, is employed to illustrate the effectiveness and applicability of the proposed method.
Keywords: alarm configuration; deep learning; fault detection and diagnosis; incomplete data; stacked autoencoder
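The imputation idea in this entry is to reconstruct missing entries from a learned low-dimensional representation of the process data. The paper uses a modified stacked autoencoder; as a hedged stand-in, the sketch below iterates a rank-r SVD reconstruction (the simplest linear "encode-decode" step) to fill missing values in synthetic low-rank process data:

```python
import numpy as np

# Iterative low-rank imputation: encode-decode via truncated SVD, then
# overwrite only the missing entries, and repeat until the fill stabilizes.
rng = np.random.default_rng(2)
U = rng.normal(size=(50, 2))
V = rng.normal(size=(2, 10))
X = U @ V                                    # true rank-2 process data
mask = rng.random(X.shape) < 0.2             # ~20% of entries missing
X_obs = np.where(mask, np.nan, X)

def impute_lowrank(X_obs, rank=2, iters=50):
    miss = np.isnan(X_obs)
    filled = np.where(miss, 0.0, X_obs)               # init missing with 0
    for _ in range(iters):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        recon = (u[:, :rank] * s[:rank]) @ vt[:rank]  # encode-decode step
        filled[miss] = recon[miss]                    # update missing only
    return filled

X_hat = impute_lowrank(X_obs)
err = np.abs(X_hat - X)[mask].mean()
```

Observed entries are never overwritten, so the reconstruction is anchored to the measured data; only the gaps converge toward the low-rank model, mirroring how an autoencoder's reconstruction would be used for imputation.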
A generic and extensible model for the martensite start temperature incorporating thermodynamic data mining and a deep learning framework (Cited by 3)
19
Authors: Chenchong Wang, Kaiyu Zhu, Peter Hedström, Yong Li, Wei Xu. Journal of Materials Science & Technology (SCIE, EI, CAS, CSCD), 2022, Issue 33, pp. 31-43 (13 pages)
The martensite start (Ms) temperature is a critical parameter for steels with metastable austenite. Although numerous models have been developed to predict the Ms temperature, the complexity of the martensitic transformation greatly limits their performance and extensibility. In this work, we apply deep data mining of thermodynamic calculations and deep learning to develop a generic model for Ms prediction. Deep data mining was used to establish a hierarchical database with three levels of information. Then, a convolutional neural network model, which can accurately treat the hierarchical data structure, was used to obtain the final model. By integrating thermodynamic calculations, traditional machine learning, and deep learning modeling, the final predictor shows excellent generalizability and extensibility, i.e., good performance both within and beyond the composition range of the original database. The effects of 15 alloying elements were successfully considered using the proposed methodology. The work suggests that, with the help of deep data mining that accounts for the physical mechanisms, deep learning methods can partially mitigate the challenge of limited data in materials science and provide a means for solving complex problems with small databases.
Keywords: martensite transformation; data mining; deep learning; extensibility; small-sample problem
Automated deep learning system for power line inspection image analysis and processing: architecture and design issues (Cited by 4)
20
Authors: Daoxing Li, Xiaohui Wang, Jie Zhang, Zhixiang Ji. Global Energy Interconnection (EI, CSCD), 2023, Issue 5, pp. 614-633 (20 pages)
The continuous growth in the scale of unmanned aerial vehicle (UAV) applications in transmission line inspection has resulted in a corresponding increase in the demand for UAV inspection image processing. Owing to its excellent performance in computer vision, deep learning has been applied to UAV inspection image processing tasks such as power line identification and insulator defect detection. Despite their strong performance, electric power UAV inspection image processing models based on deep learning face several problems, such as a narrow application scope, the need for constant retraining and optimization, and high R&D monetary and time costs due to the black-box and scene-data-driven characteristics of deep learning. In this study, an automated deep learning system for electric power UAV inspection image analysis and processing is proposed as a solution to these problems. The system design is based on three critical principles: generalizability, extensibility, and automation. Pre-trained models, fine-tuning (downstream task adaptation), and automated machine learning, which are closely related to these design principles, are reviewed. In addition, an automated deep learning system architecture for electric power UAV inspection image analysis and processing is presented. A prototype system was constructed, and experiments were conducted on two electric power UAV inspection tasks: insulator self-detonation and bird nest recognition. The models constructed using the prototype system achieved 91.36% and 86.13% mAP for insulator self-detonation and bird nest recognition, respectively. This demonstrates that the system design concept is reasonable and the system architecture is feasible.
Keywords: transmission line inspection; deep learning; automated machine learning; image analysis and processing
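The automation principle in this entry is that a new inspection task should resolve to a pre-trained backbone plus a fine-tuning recipe, rather than a from-scratch training run. A hedged sketch of that dispatch as a task registry; every name here (task labels, config keys, the decorator) is an illustrative assumption, not the paper's API:

```python
# Task registry: map an inspection task name to a fine-tuning recipe built
# on a shared pre-trained backbone. All names are hypothetical.
REGISTRY = {}

def register(task):
    """Decorator that records a recipe builder under a task name."""
    def wrap(builder):
        REGISTRY[task] = builder
        return builder
    return wrap

@register("insulator_self_detonation")
def build_insulator():
    return {"backbone": "pretrained-detector", "frozen_layers": 40, "epochs": 20}

@register("bird_nest")
def build_bird_nest():
    return {"backbone": "pretrained-detector", "frozen_layers": 30, "epochs": 15}

def auto_configure(task):
    """Resolve a task to its recipe, or fail loudly for unknown tasks."""
    if task not in REGISTRY:
        raise KeyError(f"no recipe for task: {task}")
    return REGISTRY[task]()

cfg = auto_configure("bird_nest")
```

Adding a new inspection task then means registering one builder function, which is the extensibility property the entry's three design principles call for.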