Journal Articles
1,136,965 articles found
Photoacoustic-computed tomography 3D data compression method and system based on Wavelet-Transformer
1
Authors: Jialin Li, Tingting Li, Yiming Ma, Yi Shen, Mingjian Sun. Journal of Innovative Optical Health Sciences, 2026, Issue 1, pp. 110-125 (16 pages)
Photoacoustic-computed tomography is a novel imaging technique that combines high absorption contrast and deep tissue penetration capability, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging results in significant data volume, limiting the data storage, transmission and processing efficiency of the system. Therefore, there is an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic-computed tomography 3D data compression method and system based on Wavelet-Transformer. The method builds on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding. It combines the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers, achieving high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment using an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) of reconstructed images reached 38.60 and 0.9583, respectively, effectively overcoming detail loss and artifacts introduced by raw data compression. All the results suggest that the proposed system can significantly reduce storage requirements and hardware cost, enhancing computational efficiency and image quality. These advantages support the development of photoacoustic-computed tomography toward higher efficiency, real-time performance and intelligent functionality.
Keywords: photoacoustic-computed tomography; data compression; Transformer
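The abstract does not disclose the paper's wavelet basis or the Transformer decoder, but the "wavelet hard coding" idea can be illustrated with a one-level Haar transform plus detail-coefficient thresholding. This is a minimal sketch, not the authors' pipeline; the signal, threshold, and function names are assumptions for illustration.

```python
def haar_1d(signal):
    """One-level Haar transform: pairwise averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Exact inverse when no coefficients were discarded."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def compress(signal, threshold):
    """Zero out small detail coefficients -- the lossy 'hard coding' step."""
    approx, detail = haar_1d(signal)
    detail = [d if abs(d) >= threshold else 0.0 for d in detail]
    return approx, detail

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0]
approx, detail = compress(signal, threshold=1.5)
restored = inverse_haar_1d(approx, detail)
```

With `threshold=0.0` the round trip is lossless; raising the threshold trades reconstruction error for how many detail coefficients need storing, which is where a learned soft decoder can recover lost detail.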
The Missing Data Recovery Method Based on Improved GAN
2
Authors: Su Zhang, Song Deng, Qingsheng Liu. Computers, Materials & Continua, 2026, Issue 4, pp. 1111-1128 (18 pages)
Accurate and reliable power system data are fundamental for critical operations such as grid monitoring, fault diagnosis, and load forecasting, underpinned by increasing intelligentization and digitalization. However, data loss and anomalies frequently compromise data integrity in practical settings, significantly impacting system operational efficiency and security. Most existing data recovery methods require complete datasets for training, leading to substantial data and computational demands and limited generalization. To address these limitations, this study proposes a missing data imputation model based on an improved Generative Adversarial Network (BAC-GAN). Within the BAC-GAN framework, the generator utilizes Bidirectional Long Short-Term Memory (BiLSTM) networks and multi-head attention mechanisms to capture temporal dependencies and complex relationships within power system data. The discriminator employs a Convolutional Neural Network (CNN) architecture to integrate local features with global structures, effectively mitigating the generation of implausible imputations. Experimental results on two public datasets demonstrate that the BAC-GAN model achieves superior data recovery accuracy compared with five state-of-the-art and classical benchmark methods, with an average improvement of 17.7% in reconstruction accuracy. The proposed method significantly enhances the accuracy of grid fault diagnosis and provides reliable data support for the stable operation of smart grids, showing great potential for practical applications in power systems.
Keywords: power system data recovery; generative adversarial network; bidirectional long short-term memory network; multi-head attention mechanism; convolutional neural network
A high-accuracy particle-type labeling method for organic scintillator pulse waveform datasets
3
Authors: Lin-Jun Hou, Peng Xu, Zhi-Meng Hu, Jie Cheng. Nuclear Science and Techniques, 2026, Issue 2, pp. 143-155 (13 pages)
The pulse shape discrimination technique plays a pivotal role in neutron field measurements using organic scintillator detectors, and the particle-type labeling accuracy of the pulse waveform dataset has a significant impact on its performance, especially with the growing use of machine learning methods. In this study, a high-accuracy labeling method for pulse waveform datasets is proposed, based on the time-of-flight (TOF) filtering method, an improved charge comparison method (CCM), and the coincidence measurement method. The relationship between the experimental parameters and the chance coincidence proportion in the TOF measurement was derived to reduce contamination from chance coincidences at the experimental level. On this basis, an experiment was conducted to obtain raw data using a ²⁴¹AmBe source, and a pile-up identification algorithm based on reference waveform cross-correlation and differential analysis was designed to filter out piled-up pulses. To improve the labeling accuracy, the CCM was optimized, a simple method of selecting the TOF interval for a lower chance coincidence proportion was proposed, and a low-amplitude pulse waveform dataset construction method based on coincidence measurements was developed. To verify these methods, eight pulse waveform datasets were constructed using different combinations of the proposed approaches. Three neural network structures and a corresponding evaluation parameter were designed to test the quality of these datasets. The results showed that the particle identification performance of the CCM was significantly improved after optimization, with the neutron-to-gamma-ray misidentification rate reduced by more than 35%. The proposed accuracy improvement methods reduced ambiguous identification results from these artificial neural networks by more than 50%.
Keywords: pulse shape discrimination; organic liquid scintillator; time of flight; charge comparison method; machine learning
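The paper's optimized CCM is not specified in the abstract, but the standard charge comparison method reduces to a tail-over-total integral ratio: neutron pulses in organic scintillators carry a larger slow (tail) light component than gamma rays. A minimal sketch, with toy pulses and a gate position chosen purely for illustration:

```python
def psd_ratio(pulse, tail_start):
    """Charge comparison method: tail integral divided by total integral.
    A threshold on this ratio separates neutron from gamma-ray events."""
    total = sum(pulse)
    tail = sum(pulse[tail_start:])
    return tail / total if total else 0.0

# Toy digitized pulses (arbitrary units); the tail gate index is an assumption.
gamma_pulse = [0, 40, 100, 30, 8, 3, 1, 0]
neutron_pulse = [0, 35, 90, 35, 20, 12, 8, 4]
neutron_has_larger_tail = psd_ratio(gamma_pulse, 4) < psd_ratio(neutron_pulse, 4)
```

In practice the gate position and discrimination threshold are tuned per detector, which is exactly the optimization step the paper targets.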
Handling missing data in large-scale TBM datasets: Methods, strategies, and applications (Cited: 1)
4
Authors: Haohan Xiao, Ruilang Cao, Zuyu Chen, Chengyu Hong, Jun Wang, Min Yao, Litao Fan, Teng Luo. Intelligent Geoengineering, 2025, Issue 3, pp. 109-125 (17 pages)
Substantial advancements have been achieved in Tunnel Boring Machine (TBM) technology and monitoring systems, yet the presence of missing data impedes accurate analysis and interpretation of TBM monitoring results. This study investigates the issue of missing data in extensive TBM datasets. Through a comprehensive literature review, we analyze the mechanisms behind missing TBM data and compare different imputation methods, including statistical analysis and machine learning algorithms. We also examine the impact of various missing patterns and rates on the efficacy of these methods. Finally, we propose a dynamic interpolation strategy tailored for TBM engineering sites. The results show that the K-Nearest Neighbors (KNN) and Random Forest (RF) algorithms achieve good interpolation results; as the missing rate increases, the interpolation performance of all methods declines; block missing is handled worst, followed by mixed missing, while sporadic missing is interpolated best. On-site application results validate the proposed interpolation strategy's capability to achieve robust missing value interpolation, applicable in ML scenarios such as parameter optimization, attitude warning, and pressure prediction. These findings contribute to enhancing the efficiency of TBM missing data processing, offering more effective support for large-scale TBM monitoring datasets.
Keywords: tunnel boring machine (TBM); missing data imputation; machine learning (ML); time series interpolation; data preprocessing; real-time data stream
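The KNN imputation the study found effective can be sketched in a few lines: fill a missing value with the mean of that column over the k rows nearest in the observed features. The toy TBM log, column names, and k are assumptions for illustration, not the paper's setup.

```python
import math

def knn_impute(rows, target_idx, col, k=2):
    """Fill rows[target_idx][col] (None) with the mean of that column over the
    k rows nearest in Euclidean distance, computed on the remaining columns."""
    target = rows[target_idx]
    feats = [j for j in range(len(target)) if j != col and target[j] is not None]
    dists = []
    for i, r in enumerate(rows):
        if i == target_idx or r[col] is None:
            continue  # skip the target row and rows that are also missing this column
        d = math.sqrt(sum((r[j] - target[j]) ** 2 for j in feats))
        dists.append((d, r[col]))
    dists.sort(key=lambda t: t[0])
    neighbours = dists[:k]
    return sum(v for _, v in neighbours) / len(neighbours)

# Hypothetical TBM log: [thrust, torque, advance_rate]; one advance_rate is missing.
logs = [
    [10.0, 5.0, 2.0],
    [11.0, 5.5, 2.2],
    [30.0, 15.0, 6.0],
    [10.5, 5.2, None],
]
filled = knn_impute(logs, 3, 2, k=2)
```

Production pipelines would normalize features first and use an optimized implementation (e.g. a KD-tree), but the nearest-neighbour averaging idea is the same.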
Data-Model Fusion Methods and Applications Toward Smart Manufacturing and Digital Engineering (Cited: 1)
5
Authors: Fei Tao, Yilin Li, Yupeng Wei, Chenyuan Zhang, Ying Zuo. Engineering, 2025, Issue 12, pp. 36-50 (15 pages)
As pivotal supporting technologies for smart manufacturing and digital engineering, model-based and data-driven methods have been widely applied in many industrial fields, such as product design, process monitoring, and smart maintenance. While promising, both methods have issues that need to be addressed. For example, model-based methods are limited by low computational accuracy and a high computational burden, and data-driven methods often suffer from poor interpretability and redundant features. To address these issues, the concept of data-model fusion (DMF) emerges as a promising solution. DMF involves integrating model-based methods with data-driven methods by incorporating big data into model-based methods or embedding relevant domain knowledge into data-driven methods. Despite growing efforts in the field of DMF, a unanimous definition of DMF remains elusive, and a general framework for DMF has rarely been discussed. This paper aims to address this gap by providing a thorough overview and categorization of both data-driven and model-based methods. Subsequently, it presents a definition and categorization of DMF and discusses a general DMF framework. Moreover, the seven primary applications of DMF are reviewed within the context of smart manufacturing and digital engineering. Finally, the paper outlines future directions for DMF.
Keywords: data-model fusion; model-based methods; data-driven methods; smart manufacturing; digital engineering
A novel method for clustering cellular data to improve classification
6
Authors: Diek W. Wheeler, Giorgio A. Ascoli. Neural Regeneration Research (SCIE, CAS), 2025, Issue 9, pp. 2697-2705 (9 pages)
Many fields, such as neuroscience, are experiencing the vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters, based on the fundamental principle that cells must differ more between clusters than within them. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
Keywords: cellular data clustering; dendrogram; data classification; Levene's one-tailed statistical test; unsupervised hierarchical clustering
Multi-View Picture Fuzzy Clustering: A Novel Method for Partitioning Multi-View Relational Data (Cited: 1)
7
Authors: Pham Huy Thong, Hoang Thi Canh, Luong Thi Hong Lan, Nguyen Tuan Huy, Nguyen Long Giang. Computers, Materials & Continua, 2025, Issue 6, pp. 5461-5485 (25 pages)
Multi-view clustering is a critical research area in computer science aimed at effectively extracting meaningful patterns from complex, high-dimensional data that single-view methods cannot capture. Traditional fuzzy clustering techniques, such as Fuzzy C-Means (FCM), face significant challenges in handling uncertainty and the dependencies between different views. To overcome these limitations, we introduce a new multi-view fuzzy clustering approach, termed Multi-view Picture Fuzzy Clustering (MPFC), that integrates picture fuzzy sets with a dual-anchor graph method for multi-view data, aiming to enhance clustering accuracy and robustness. In particular, picture fuzzy set theory extends the capability to represent uncertainty by modeling three membership levels: membership degrees, neutral degrees, and refusal degrees. This allows for a more flexible representation of uncertain and conflicting data than traditional fuzzy models. Meanwhile, dual-anchor graphs exploit the similarity relationships between data points and integrate information across views. This combination improves stability, scalability, and robustness when handling noisy and heterogeneous data. Experimental results on several benchmark datasets demonstrate significant improvements in clustering accuracy and efficiency, outperforming traditional methods. Specifically, the MPFC algorithm delivers outstanding clustering performance on a variety of datasets, attaining a purity (PUR) score of 0.6440 and an accuracy (ACC) score of 0.6213 on the 3 Sources dataset, underscoring its robustness and efficiency. The proposed approach contributes significantly to fields such as pattern recognition, multi-view relational data analysis, and large-scale clustering problems. Future work will focus on extending the method to semi-supervised multi-view clustering, aiming to enhance adaptability, scalability, and performance in real-world applications.
Keywords: multi-view clustering; picture fuzzy sets; dual-anchor graph; fuzzy clustering; multi-view relational data
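In the usual formulation of picture fuzzy sets, each element carries several degrees whose sum is at most 1, with the remainder interpreted as a refusal degree. A minimal sketch of such an element, under that common formulation (the exact degree semantics used by MPFC may differ):

```python
def picture_fuzzy_element(membership, neutral, non_membership):
    """Build one picture fuzzy element: three degrees in [0, 1] summing to at
    most 1; the remainder is the refusal degree pi."""
    for d in (membership, neutral, non_membership):
        assert 0.0 <= d <= 1.0, "each degree must lie in [0, 1]"
    s = membership + neutral + non_membership
    assert s <= 1.0, "degrees must sum to at most 1"
    return {
        "mu": membership,       # degree of positive membership
        "eta": neutral,         # degree of neutrality
        "nu": non_membership,   # degree of negative membership
        "pi": 1.0 - s,          # refusal degree (what remains)
    }

e = picture_fuzzy_element(0.5, 0.2, 0.2)
```

Compared with an ordinary fuzzy membership (a single number), the extra degrees let a clustering objective distinguish "undecided" from "conflicting" evidence about a point's cluster assignment.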
Data credibility evaluation method for formation water in oil and gas fields and its influencing factors
8
Authors: LI Wei, XIE Wuren, WU Saijun, SHUAI Yanhua, MA Xingzhi. Petroleum Exploration and Development, 2025, Issue 2, pp. 361-376 (16 pages)
Formation water samples in oil and gas fields may be polluted during testing, trial production, collection, storage, transportation and analysis, so that the measured properties do not truly reflect the formation water. This paper discusses identification methods and a data credibility evaluation method for formation water in oil and gas fields of petroliferous basins within China. The results show that: (1) The identification methods for formation water include basic single-factor methods based on physical characteristics, water composition characteristics, water type characteristics, and characteristic coefficients, as well as a comprehensive data credibility evaluation method proposed on this basis, which relies mainly on correlation analysis of the sodium chloride coefficient and the desulfurization coefficient combined with geological background evaluation. (2) The basic identification methods enable preliminary identification of hydrochemical data and preliminary on-site screening of data; the proposed comprehensive method performs the evaluation by classifying CaCl2-type water into types A-I to A-VI and NaHCO3-type water into types B-I to B-IV, so that researchers can evaluate the credibility of hydrochemical data in depth and analyze influencing factors. (3) When the basic methods are used, formation water containing anions such as CO₃²⁻, OH⁻ and NO₃⁻, or formation water whose sodium chloride coefficient and desulfurization coefficient do not match the geological setting, has been invaded by surface water or polluted by working fluid. (4) When the comprehensive method is used, the data credibility of A-I, A-II, B-I and B-II formation water, although such water is generally considered highly credible, can be evaluated effectively and accurately only in combination with geological setting analysis of factors such as formation environment, sampling conditions, condensate water, acid fluid, leaching of ancient weathering crust, and ancient atmospheric fresh water.
Keywords: oil and gas field hydrogeology; formation water; hydrochemical data; data credibility evaluation method; hydrochemical characteristic indicator; influencing factor
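The two characteristic coefficients the evaluation relies on are conventionally computed on milliequivalent (meq/L) concentrations. A sketch of that arithmetic, using standard equivalent weights; the interpretive comments reflect general hydrogeochemistry conventions, not the paper's A/B classification thresholds:

```python
# Equivalent weights (g/eq): ion molar mass divided by charge.
EQ_WEIGHT = {"Na": 22.99, "Cl": 35.45, "SO4": 96.06 / 2}

def meq_per_l(ion, mg_per_l):
    """Convert a concentration in mg/L to milliequivalents per litre."""
    return mg_per_l / EQ_WEIGHT[ion]

def sodium_chloride_coefficient(na_mg_l, cl_mg_l):
    """rNa/rCl: low values are typical of concentrated, well-sealed formation
    water; values approaching or exceeding ~0.85 (seawater) may indicate
    surface-water invasion."""
    return meq_per_l("Na", na_mg_l) / meq_per_l("Cl", cl_mg_l)

def desulfurization_coefficient(so4_mg_l, cl_mg_l):
    """100 * rSO4/rCl: low values suggest reducing, well-preserved conditions;
    high values suggest oxidizing influence from invading water."""
    return 100 * meq_per_l("SO4", so4_mg_l) / meq_per_l("Cl", cl_mg_l)

rna_rcl = sodium_chloride_coefficient(2299.0, 3545.0)
```

Credibility evaluation then cross-checks these coefficients against the geological setting, as the abstract describes.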
General Improvement of Image Interpolation-Based Data Hiding Methods Using Multiple-Based Number Conversion
9
Authors: Da-Chun Wu, Bing-Han. Computer Modeling in Engineering & Sciences, 2025, Issue 7, pp. 535-580 (46 pages)
Data hiding methods embed secret messages into cover objects to enable covert communication that is difficult to detect. In data hiding methods based on image interpolation, the image size is reduced and then enlarged through interpolation, followed by the embedding of secret data into the newly generated pixels. A general improvement approach for embedding secret messages is proposed. The approach may be regarded as a general model for enhancing the data embedding capacity of various existing image interpolation-based data hiding methods. This enhancement is achieved by expanding the range of pixel values available for embedding secret messages, removing the limitation of many existing methods, where the range is restricted to powers of two to facilitate the direct embedding of bit-based messages. The improvement is accomplished through the application of multiple-based number conversion to the secret message data. The method converts the message bits into a multiple-based number and uses an algorithm to embed each digit of this number into an individual pixel, thereby enhancing the message embedding efficiency, as proved by a theorem derived in this study. The proposed improvement has been tested through experiments on three well-known image interpolation-based data hiding methods. The results show that it can enhance the three data embedding rates by approximately 14%, 13%, and 10%, respectively, create stego-images of good quality, and resist RS steganalysis attacks. These experimental results indicate that using the multiple-based number conversion technique to improve the three interpolation-based methods increases the number of message bits embedded in the images. For the many other image interpolation-based data hiding methods that use power-of-two pixel-value ranges for message embedding, the proposed improvement is also expected to be effective in enhancing their data embedding capabilities.
Keywords: data hiding; image interpolation; interpolation-based hiding methods; steganography; multiple-based number conversion
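The core conversion idea can be sketched independently of any particular interpolation scheme: treat the message bits as one integer, then peel off one digit per pixel using that pixel's own base (its embedding capacity), so capacities need not be powers of two. The bases below are hypothetical per-pixel capacities chosen for illustration; the paper's embedding algorithm itself is not reproduced here.

```python
def bits_to_mixed_radix(bits, bases):
    """Convert a bit string to a 'multiple-based number': one digit per pixel,
    where digit i lies in range(bases[i])."""
    value = int(bits, 2)
    digits = []
    for b in bases:
        value, digit = divmod(value, b)
        digits.append(digit)
    assert value == 0, "bases too small to hold the whole message"
    return digits

def mixed_radix_to_bits(digits, bases, width):
    """Inverse conversion, recovering the original bit string at extraction."""
    value = 0
    for digit, b in zip(reversed(digits), reversed(bases)):
        value = value * b + digit
    return format(value, f"0{width}b")

bases = [5, 7, 6, 9]   # hypothetical embedding capacities of four cover pixels
digits = bits_to_mixed_radix("10110011", bases)
recovered = mixed_radix_to_bits(digits, bases, 8)
```

Because a base-5 or base-7 pixel carries log2(5) or log2(7) bits instead of the 2 bits a power-of-two scheme would round down to, the total capacity rises, which is the mechanism behind the reported rate gains.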
Integrating Internet Search Data and Surveillance Data to Construct Influenza Epidemic Thresholds in Hubei Province: A Moving Epidemic Method Approach
10
Authors: Caixia Dang, Feng Liu, Hengliang Lyu, Ziqian Zhao, Sijin Zhu, Yang Wang, Yuanyong Xu, Yeqing Tong, Hui Chen. Biomedical and Environmental Sciences, 2025, Issue 9, pp. 1150-1154 (5 pages)
Influenza, an acute respiratory infectious disease caused by the influenza virus, exhibits distinct seasonal patterns in China, with peak activity occurring in winter and spring in northern regions, and in winter and summer in southern areas [1]. The World Health Organization (WHO) emphasizes that early warning and epidemic intensity assessments are critical public health strategies for influenza prevention and control. Internet-based flu surveillance, with real-time data and low costs, effectively complements traditional methods. The Baidu Search Index, which reflects flu-related queries, strongly correlates with influenza trends, aiding in regional activity assessment and outbreak tracking [2].
Keywords: internet search data; public health strategies; moving epidemic method; acute respiratory infectious disease; early warning; Hubei Province; epidemic intensity assessments; surveillance data
Data matching and association based on the arc-segment difference method
11
Authors: Jiannan Sun, Zhe Kang, Zhenwei Li, Cunbo Fan. Astronomical Techniques and Instruments, 2025, Issue 5, pp. 299-309 (11 pages)
In response to the problem of fuzzy matching and association when optical observation data are matched against the orbital elements in a catalog database, this paper proposes a matching and association strategy based on the arc-segment difference method. First, a matching error threshold is set to match the observation data with the known catalog database. Second, the matching results for the same day are sorted by target identity and observation residuals. Different matching error thresholds and arc-segment dynamic association thresholds are then applied to categorize the observation residuals of the same target across different arc segments, yielding matching results under various thresholds. Finally, the orbital residual is computed through orbit determination (OD), and the positional error is derived by comparing the OD results with the orbit track from the catalog database. The appropriate matching error threshold is then selected on the basis of these results, leading to the final matching and association of the fuzzy correlation data. Experimental results showed a correct matching rate of 92.34% for data arc segments when the matching error threshold is set to 720″, with the arc-segment difference method achieving an average matching rate of 97.62% within 8 days. The remaining 5.28% of the fuzzy correlation data are correctly matched and associated, enabling identification of orbital maneuver targets through further processing and analysis. This method substantially enhances the efficiency and accuracy of space target cataloging, offering robust technical support for dynamic maintenance of the space target database.
Keywords: optical data processing; space target identification; fuzzy correlation; arc-segment difference method; orbit determination
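The first step of the strategy, thresholded matching against catalog predictions, can be sketched in one dimension. This is a deliberately simplified toy: real matching compares angular positions over whole arc segments, and all identifiers and residuals below are invented; only the 720″ threshold comes from the abstract.

```python
def match_observations(observations, catalog, threshold_arcsec):
    """Associate each observed position with the closest catalog prediction,
    keeping the pair only when the residual is within the error threshold."""
    matches = []
    for obs_id, obs_pos in observations.items():
        target, predicted = min(catalog.items(), key=lambda kv: abs(kv[1] - obs_pos))
        residual = abs(predicted - obs_pos)
        if residual <= threshold_arcsec:
            matches.append((obs_id, target, residual))
    return matches

# Toy 1-D along-track positions in arcseconds (hypothetical values).
catalog = {"SAT-A": 1000.0, "SAT-B": 5000.0}
observations = {"obs1": 1300.0, "obs2": 5100.0, "obs3": 9000.0}
matched = match_observations(observations, catalog, 720)
```

Observations left unmatched at this stage (like `obs3` here) are the fuzzy-correlation cases that the arc-segment difference and orbit-determination steps then resolve.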
A review of test methods for uniaxial compressive strength of rocks: Theory, apparatus and data processing
12
Authors: Wei-Qiang Xie, Xiao-Li Liu, Xiao-Ping Zhang, Quan-Sheng Liu, En-Zhi Wang. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 3, pp. 1889-1905 (17 pages)
The uniaxial compressive strength (UCS) of rocks is a vital geomechanical parameter widely used for rock mass classification, stability analysis, and engineering design in rock engineering. Various UCS testing methods and apparatuses have been proposed over the past few decades. The objective of the present study is to summarize the status and development of the theories, test apparatuses, and data processing of existing testing methods for UCS measurement. It starts by elaborating the theories of these test methods. Then the test apparatus and development trends for UCS measurement are summarized, followed by a discussion of rock specimens for the test apparatus and of data processing methods. Next, recommendations are given for selecting a UCS measurement method. The review reveals that the rock failure mechanisms in UCS testing methods can be divided into compression-shear, compression-tension, composite failure mode, and no obvious failure mode. These apparatuses are trending toward automation, digitization, precision, and multi-modal testing. Two size correction methods are commonly used: one develops an empirical correlation between the measured indices and the specimen size; the other uses a standard specimen to calculate a size correction factor. Three to five input parameters are commonly utilized in soft computing models to predict the UCS of rocks. The test method for UCS measurement can be selected according to the testing scenario and the specimen size. Engineers can gain a comprehensive understanding of UCS testing methods and their potential developments in various rock engineering endeavors.
Keywords: uniaxial compressive strength (UCS); UCS testing methods; test apparatus; data processing
Nuclear data measurement and propagation in Back-n experiments:methodologies and instrumentation
13
Authors: Min-Hao Gu, Jie-Ming Xue, Ya-Kang Li, Ping Cao, Jie Ren, Yong-Hao Chen, Wei Jiang, Han Yi, Peng Hu, Rui-Rui Fan. Nuclear Science and Techniques, 2025, Issue 11, pp. 69-82 (14 pages)
This article introduces the methodologies and instrumentation for data measurement and propagation at the Back-n white neutron facility of the China Spallation Neutron Source. The Back-n facility employs backscattering techniques to generate a broad spectrum of white neutrons. Equipped with advanced detectors such as the light particle detector array and the fission ionization chamber detector, the facility achieves high-precision data acquisition through a general-purpose electronics system. Data are managed and stored in a hierarchical system supported by the National High Energy Physics Science Data Center, ensuring long-term preservation and efficient access. The data from Back-n experiments contribute significantly to nuclear physics, reactor design, astrophysics, and medical physics, enhancing the understanding of nuclear processes and supporting interdisciplinary research.
Keywords: nuclear physics; data acquisition; data storage and management; data sharing; neutron experiments; white neutron beam
Data Augmentation: A Multi-Perspective Survey on Data, Methods, and Applications
14
Authors: Canlin Cui, Junyu Yao, Heng Xia. Computers, Materials & Continua, 2025, Issue 12, pp. 4275-4306 (32 pages)
High-quality data is essential for the success of data-driven learning tasks. The characteristics, precision, and completeness of datasets critically determine the reliability, interpretability, and effectiveness of subsequent analyses and applications, such as fault detection, predictive maintenance, and process optimization. However, for many industrial processes, obtaining sufficient high-quality data remains a significant challenge due to high costs, safety concerns, and practical constraints. To overcome these challenges, data augmentation has emerged as a rapidly growing research area, attracting considerable attention across both academia and industry. By expanding datasets, data augmentation techniques enable greater generalization and more robust performance in real applications. This paper provides a comprehensive, multi-perspective review of data augmentation methods for industrial processes. For clarity and organization, existing studies are systematically grouped into four categories: small sample with low dimension, small sample with high dimension, large sample with low dimension, and large sample with high dimension. Within this framework, the review examines current research from both methodological and application-oriented perspectives, highlighting the main methods, their advantages, and their limitations. By synthesizing these findings, this review offers a structured overview for scholars and practitioners, serving as a valuable reference for newcomers and experienced researchers seeking to explore and advance data augmentation techniques in industrial processes.
Keywords: data-driven; data augmentation; big data; industrial application
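Two of the simplest augmentation techniques the survey's taxonomy covers for process time series are noise injection (jitter) and magnitude scaling. A minimal sketch; the series, sigma, and scaling range are illustrative assumptions, not values from the survey:

```python
import random

def jitter(series, sigma=0.05, seed=None):
    """Noise-injection augmentation: add a small Gaussian perturbation to each
    sample, yielding a new plausible series with the same overall shape."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in series]

def scale(series, low=0.9, high=1.1, seed=None):
    """Magnitude-scaling augmentation: multiply the whole series by one random factor."""
    rng = random.Random(seed)
    factor = rng.uniform(low, high)
    return [x * factor for x in series]

base = [1.0, 1.2, 1.1, 0.9]
augmented = [jitter(base, seed=i) for i in range(3)] + [scale(base, seed=7)]
```

Each call produces a slightly different copy of the original series, which is how a small-sample dataset is expanded before model training.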
Efficient socket-based data transmission method and implementation in deep learning
15
Authors: Wei Xin-Jian, Li Shu-Ping, Yang Wu-Yang, Zhang Xiang-Yang, Li Hai-Shan, Xu Xin, Wang Nan, Fu Zhanbao. Applied Geophysics, 2025, Issue 4, pp. 1341-1350, 1499-1500 (12 pages)
Deep learning algorithms, which are increasingly applied in petroleum geophysical prospecting, have achieved good results in improving efficiency and accuracy in test applications. To play a greater role in actual production, these algorithm modules must be integrated into software systems and used more often in production projects. Deep learning frameworks such as TensorFlow and PyTorch essentially take Python as the core architecture, while the application programs mainly use Java, C#, and other programming languages. During integration, the seismic data read by the Java and C# data interfaces must be transferred to the Python main program module. The data exchange methods between Java, C#, and Python include shared memory, shared directories, and so on. However, these methods suffer from low transmission efficiency and are unsuitable for asynchronous networks. Considering the large volume of seismic data and the need for network support in deep learning, this paper proposes a Socket-based method for transmitting seismic data. By fully exploiting Socket's cross-network capability and efficient long-distance transmission, this approach solves the problem of inefficient transmission of underlying data when integrating deep learning algorithm modules into a software system. Furthermore, actual production applications show that this method effectively overcomes the shortcomings of data transmission via shared memory, shared directories, and other modes, while improving the transmission efficiency of massive seismic data across modules at the bottom of the software.
Keywords: Socket; deep learning; data transfer; seismic data; thread pool; river prediction
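The abstract does not give the wire format, but a socket link between a Java/C# host and a Python module needs some framing so the receiver knows where each seismic trace ends. A common choice, sketched here on the Python side with a local socket pair standing in for the cross-language connection (the framing and trace contents are assumptions, not the paper's protocol):

```python
import socket
import struct
import threading

def send_block(sock, payload: bytes):
    """Length-prefixed framing: a 4-byte big-endian size, then the raw bytes,
    so the receiver knows exactly how much trace data to expect."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n):
    """Read exactly n bytes, looping because recv() may return fewer."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_block(sock) -> bytes:
    (size,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, size)

# Demonstrate over a local socket pair standing in for the Java/C# <-> Python link.
a, b = socket.socketpair()
trace = bytes(range(256)) * 10          # stand-in for one seismic trace
sender = threading.Thread(target=send_block, args=(a, trace))
sender.start()
received = recv_block(b)
sender.join()
a.close(); b.close()
```

The same framing works unchanged over a TCP connection between machines, which is what makes the socket approach suitable for the asynchronous-network setting the paper targets; a thread pool on the server side then handles concurrent transfers.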
Battery pack capacity prediction using deep learning and data compression technique: A method for real-world vehicles
16
Authors: Yi Yang, Jibin Yang, Xiaohua Wu, Liyue Fu, Xinmei Gao, Xiandong Xie, Quan Ouyang. Journal of Energy Chemistry, 2025, Issue 7, pp. 553-564 (12 pages)
The accurate prediction of battery pack capacity in electric vehicles (EVs) is crucial for ensuring safety and optimizing performance. Despite extensive research on predicting cell capacity using laboratory data, predicting the capacity of onboard battery packs from field data remains challenging due to complex operating conditions and irregular EV usage in real-world settings. Most existing methods rely on extracting health feature parameters from raw data for capacity prediction of onboard battery packs; however, selecting specific parameters often results in a loss of critical information, which reduces prediction accuracy. To this end, this paper introduces a novel framework combining deep learning and data compression techniques to accurately predict battery pack capacity onboard. The proposed data compression method converts monthly EV charging data into feature maps, which preserve essential data characteristics while reducing the volume of raw data. To address missing capacity labels in field data, a capacity labeling method is proposed, which calculates monthly battery capacity by transforming the ampere-hour integration formula and applying linear regression. Subsequently, a deep learning model is used to build a capacity predictor, using feature maps from historical months to predict the battery capacity of future months, thus facilitating accurate forecasts. The proposed framework, evaluated using field data from 20 EVs, achieves a mean absolute error of 0.79 Ah, a mean absolute percentage error of 0.65%, and a root mean square error of 1.02 Ah, highlighting its potential for real-world EV applications.
Keywords: Lithium-ion battery, Capacity prediction, Real-world vehicle data, Data compression, Deep learning
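One plausible reading of the capacity-labeling step ("transforming the ampere-hour integration formula and applying linear regression") is that the accumulated ampere-hours of each charging segment scale with its SOC change, so the least-squares slope of Ah against ΔSOC estimates full capacity. The helper names and sample values below are hypothetical, not taken from the paper.

```python
def accumulated_ah(currents_a, dt_s):
    # Ampere-hour integration over one charging segment (rectangle rule):
    # Q = sum(I * dt) / 3600, with currents in A and dt in seconds.
    return sum(i * dt_s for i in currents_a) / 3600.0

def monthly_capacity(segments):
    # segments: (delta_soc, accumulated_ah) pairs from one month's charges.
    # Since Q_full * delta_soc ~= accumulated_ah, the least-squares slope
    # of Ah against delta-SOC is a capacity estimate robust to single-event noise.
    n = len(segments)
    sx = sum(s for s, _ in segments)
    sy = sum(q for _, q in segments)
    sxx = sum(s * s for s, _ in segments)
    sxy = sum(s * q for s, q in segments)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# A constant 10 A charge held for one hour adds 10 Ah.
ah = accumulated_ah([10.0] * 3600, 1.0)
# Three charging events consistent with a 100 Ah pack.
cap = monthly_capacity([(0.1, 10.0), (0.2, 20.0), (0.5, 50.0)])
```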
Failure rate analysis and maintenance plan optimization method for civil aircraft parts based on data fusion
17
Authors: Kang CAO, Yongjie ZHANG, Jianfei FENG 《Chinese Journal of Aeronautics》 2025, Issue 1, pp. 306-324 (19 pages)
In the face of data scarcity in the optimization of maintenance strategies for civil aircraft, traditional failure data-driven methods are encountering challenges owing to the increasing reliability of aircraft design. This study addresses the issue by presenting a novel combined data fusion algorithm, which enhances the accuracy and reliability of failure rate analysis for a specific aircraft model by integrating historical failure data from similar models as supplementary information. Through a comprehensive analysis of two different maintenance projects, this study illustrates the application process of the algorithm. Building upon the analysis results, this paper introduces the innovative equal integral value method as a replacement for the conventional equal interval method in maintenance schedule optimization. A Monte Carlo simulation example validates that the equal integral value method surpasses the traditional method by over 20% in terms of inspection efficiency ratio. This finding indicates that the equal integral value method not only upholds maintenance efficiency but also substantially decreases workload and maintenance costs. The findings of this study open up novel perspectives for airlines grappling with data scarcity, offer fresh strategies for the optimization of aviation maintenance practices, and chart a new course toward more efficient and cost-effective maintenance schedule optimization through refined data analysis.
Keywords: Small sample data, Data fusion, Failure rate, Maintenance planning, Aircraft parts
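A natural interpretation of the "equal integral value method" is to space inspections so that each interval carries the same increment of the cumulative failure intensity, rather than the same calendar length. The sketch below assumes that interpretation, and uses a Weibull cumulative hazard purely for illustration; neither the function names nor the parameter values come from the paper.

```python
def equal_integral_inspections(cum_hazard_inv, total_hazard, n_checks):
    # Place n_checks inspections so every interval carries an equal slice
    # of the cumulative failure intensity (equal integral values), instead
    # of equal calendar spacing: more checks where failures concentrate.
    return [cum_hazard_inv(total_hazard * k / n_checks)
            for k in range(1, n_checks + 1)]

# Illustration with a Weibull cumulative hazard H(t) = (t / eta) ** beta,
# whose inverse is t = eta * H ** (1 / beta).
beta, eta, horizon = 2.0, 100.0, 100.0
total_h = (horizon / eta) ** beta
times = equal_integral_inspections(lambda h: eta * h ** (1 / beta), total_h, 4)
```

With an increasing hazard (beta > 1) the inspections bunch toward the end of the horizon, which is the intuition behind trading equal intervals for equal integral values.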
Cooperative Iteration Matching Method for Aligning Samples from Heterogeneous Industrial Datasets
18
Authors: LI Han, SHI Guohong, LIU Zhao, ZHU Ping 《Journal of Shanghai Jiaotong University (Science)》 2025, Issue 2, pp. 375-384 (10 pages)
Industrial data mining usually deals with data from different sources. These heterogeneous datasets describe the same object from different views. However, samples from some of the datasets may be lost, and the remaining samples then no longer correspond one-to-one. Mismatched datasets caused by missing samples make the industrial data unavailable for further machine learning. In order to align the mismatched samples, this article presents a cooperative iteration matching method (CIMM) based on modified dynamic time warping (DTW). The proposed method regards the sequentially accumulated industrial data as time series, and mismatched samples are aligned by DTW. In addition, dynamic constraints are applied to the warping distance of the DTW process to make the alignment more efficient. A series of models is then trained iteratively with the accumulated samples. Several groups of numerical experiments on different missing patterns and missing locations are designed and analyzed to prove the effectiveness and applicability of the proposed method.
Keywords: dynamic time warping, mismatched samples, sample alignment, industrial data, data missing
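The DTW core that CIMM builds on can be shown compactly. The sketch below is textbook DTW with a Sakoe-Chiba-style band as a simple stand-in for the paper's dynamic warping-distance constraints; it is not the authors' modified algorithm.

```python
import math

def dtw_align(a, b, window=None):
    # Classic dynamic time warping between sequences a and b, restricted to
    # a band of half-width `window` around the diagonal (a simple stand-in
    # for CIMM's dynamic constraints). Returns (distance, warping path).
    n, m = len(a), len(b)
    w = window if window is not None else max(n, m)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtrack the optimal alignment from (n, m) to (0, 0).
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda p: D[p[0]][p[1]])
    path.reverse()
    return D[n][m], path

# b repeats one value, mimicking a duplicated/missing-sample mismatch.
dist, path = dtw_align([1.0, 2.0, 3.0], [1.0, 2.0, 2.0, 3.0], window=2)
```

Here the warping path pairs the repeated sample in `b` with the same index of `a`, which is exactly the repair needed when one dataset has dropped or duplicated rows.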
Importance of data collection and subgroup analyses in research methodology
19
Author: Sunny Chi Lik Au 《World Journal of Meta-Analysis》 2025, Issue 2, pp. 98-101 (4 pages)
Data collection serves as the cornerstone in the study of clinical research questions. Two types of data are commonly utilized in medicine: (1) qualitative; and (2) quantitative. Several methods are commonly employed to gather data, regardless of whether retrospective or prospective studies are used: (1) interviews; (2) observational methods; (3) questionnaires; (4) investigation parameters; (5) medical records; and (6) electronic chart reviews. Each source type has its own advantages and disadvantages in terms of the accuracy and availability of the data to be extracted. We will focus on the important parts of research methodology: (1) data collection; and (2) subgroup analyses. Errors in research can arise from various sources, including investigators, instruments, and subjects, making the validation and reliability of research tools crucial for ensuring the credibility of findings. Subgroup analyses can either be planned before treatment or emerge afterwards (post hoc). The interpretation of subgroup effects should consider the interaction between treatment effect and various patient variables with caution.
Keywords: Data collection, Methodology, Research, Journal, Academic
SA-WGAN Based Data Enhancement Method for Industrial Internet Intrusion Detection
20
Authors: Yuan Feng, Yajie Si, Jianwei Zhang, Zengyu Cai, Hongying Zhao 《Computers, Materials & Continua》 2025, Issue 9, pp. 4431-4449 (19 pages)
With the rapid development of the industrial Internet, the network security environment has become increasingly complex and variable. Intrusion detection, a core technology for ensuring the security of industrial control systems, faces the challenge of unbalanced data samples, particularly low detection rates for minority-class attack samples. Therefore, this paper proposes a data enhancement method for intrusion detection in the industrial Internet based on a Self-Attention Wasserstein Generative Adversarial Network (SA-WGAN), addressing the low detection rates of minority-class attack samples in unbalanced intrusion detection scenarios. The proposed method integrates a self-attention mechanism with a Wasserstein Generative Adversarial Network (WGAN). The self-attention mechanism automatically learns important features from the input data and assigns different weights to emphasize the key features related to intrusion behaviors, providing strong guidance for subsequent data generation. The WGAN generates new data samples through adversarial training to expand the original dataset. In the SA-WGAN framework, the WGAN directs the data generation process based on the key features extracted by the self-attention mechanism, ensuring that the generated samples exhibit both diversity and similarity to real data. Experimental results demonstrate that the SA-WGAN-based data enhancement method significantly improves detection performance for minority-class attack samples, addresses issues of insufficient data and category imbalance, and enhances the generalization ability and overall performance of the intrusion detection model.
Keywords: Data enhancement, Intrusion detection, Industrial Internet, WGAN
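The feature-weighting role of the self-attention mechanism can be illustrated with a toy single-head scaled dot-product attention in pure Python. Identity Q/K/V projections and the sample feature vectors are assumptions for the sketch; SA-WGAN's actual attention layers are learned jointly with the WGAN.

```python
import math

def softmax(row):
    # Numerically stable softmax over one row of attention scores.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x):
    # x: list of feature vectors. Single-head scaled dot-product attention
    # with identity Q/K/V projections -- a toy stand-in for the learned
    # weighting that emphasizes intrusion-relevant features in SA-WGAN.
    d = len(x[0])
    scores = [[sum(q * k for q, k in zip(x[i], x[j])) / math.sqrt(d)
               for j in range(len(x))] for i in range(len(x))]
    weights = [softmax(row) for row in scores]
    attended = [
        [sum(w * xj[k] for w, xj in zip(row, x)) for k in range(d)]
        for row in weights
    ]
    return weights, attended

# Three hypothetical traffic-feature vectors.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attn_weights, attended = self_attention(features)
```

Each output row is a convex combination of the input vectors; in the full framework these attention weights steer the generator toward the features that distinguish minority-class attacks.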