Journal Articles
171 articles found
1. Drive-by spatial offset detection for high-speed railway bridges based on fusion analysis of multi-source data from comprehensive inspection train
Authors: Chuang Wang, Jiawang Zhan, Nan Zhang, Yujie Wang, Xinxiang Xu, Zhihang Wang, Zhen Ni 《Railway Engineering Science》, 2026, No. 1, pp. 128-148 (21 pages)
The spatial offset of bridges has a significant impact on the safety, comfort, and durability of high-speed railway (HSR) operations, so it is crucial to rapidly and effectively detect the spatial offset of operational HSR bridges. Drive-by monitoring of bridge uneven settlement demonstrates significant potential due to its practicality, cost-effectiveness, and efficiency. However, existing drive-by methods for detecting bridge offset have limitations such as reliance on a single data source, low detection accuracy, and the inability to identify lateral deformations of bridges. This paper proposes a novel drive-by inspection method for the spatial offset of HSR bridges based on multi-source data fusion from a comprehensive inspection train. First, dung beetle optimizer-variational mode decomposition was employed to adaptively decompose non-stationary dynamic signals and explore the hidden temporal relationships in the data. Subsequently, a long short-term memory neural network was developed to fuse features of the multi-source signals and accurately predict the spatial settlement of HSR bridges. A dataset of track irregularities and CRH380A high-speed train responses was generated using a 3D train-track-bridge interaction model, and the accuracy and effectiveness of the proposed hybrid deep learning model were numerically validated. Finally, the reliability of the proposed drive-by inspection method was further validated by analyzing actual measurement data obtained from a comprehensive inspection train. The findings indicate that the proposed approach enables rapid and accurate detection of spatial offset in HSR bridges, supporting the long-term operational safety of HSR bridges.
Keywords: high-speed railway bridge, drive-by inspection, spatial offset, multi-source data fusion, deep learning
2. Recent trends of machine learning applied to multi-source data of medicinal plants (Cited by 6)
Authors: Yanying Zhang, Yuanzhong Wang 《Journal of Pharmaceutical Analysis》 SCIE CAS CSCD, 2023, No. 12, pp. 1388-1407 (20 pages)
In traditional medicine and ethnomedicine, medicinal plants have long been recognized as the basis for materials in therapeutic applications worldwide. In particular, the remarkable curative effect of traditional Chinese medicine during the coronavirus disease 2019 (COVID-19) pandemic has attracted extensive attention globally. Medicinal plants have, therefore, become increasingly popular among the public. However, with increasing demand for and profit from medicinal plants, commercial fraud such as adulteration or counterfeiting sometimes occurs, which poses a serious threat to clinical outcomes and the interests of consumers. With rapid advances in artificial intelligence, machine learning can be used to mine information on various medicinal plants to establish an ideal resource database. We herein present a review that introduces common machine learning algorithms and discusses their application in multi-source data analysis of medicinal plants. The combination of machine learning algorithms and multi-source data analysis facilitates comprehensive analysis and aids in the effective evaluation of the quality of medicinal plants. The findings of this review provide new possibilities for promoting the development and utilization of medicinal plants.
Keywords: machine learning, medicinal plants, multi-source data, data fusion, application
3. A Dynamic Masking-Based Multi-Learning Framework for Sparse Classification
Authors: Woo Hyun Park, Dong Ryeol Shin 《Computers, Materials & Continua》, 2026, No. 3, pp. 1365-1380 (16 pages)
With the recent increase in data volume and diversity, traditional text representation techniques struggle to capture context, particularly in environments with sparse data. To address these challenges, this study proposes a new model, the Masked Joint Representation Model (MJRM). MJRM approximates the original hypothesis by leveraging multiple elements in a limited context. It dynamically adapts to changes in data characteristics through three main components. First, masking-based representation learning, termed selective dynamic masking, integrates topic modeling and sentiment clustering to generate and train multiple instances across different data subsets, whose predictions are then aggregated with optimized weights. This design alleviates sparsity, suppresses noise, and preserves contextual structure. Second, regularization-based improvements are applied. Third, techniques for addressing sparse data are used to perform final inference. As a result, MJRM improves performance by up to 4% compared with existing AI techniques. In our experiments, we analyzed the contribution of each factor, demonstrating that masking, dynamic learning, and the aggregation of multiple instances complement each other to improve performance. This shows that a masking-based multi-learning strategy is effective for context-aware sparse text classification and remains useful even in challenging situations such as data shortage or shifts in data distribution. We expect that the approach can be extended to diverse fields such as sentiment analysis, spam filtering, and domain-specific document classification.
Keywords: text classification, dynamic learning, contextual features, data sparsity, masking-based representation
4. Dynamic UAV data fusion and deep learning for improved maize phenological-stage tracking (Cited by 1)
Authors: Ziheng Feng, Jiliang Zhao, Liunan Suo, Heguang Sun, Huiling Long, Hao Yang, Xiaoyu Song, Haikuan Feng, Bo Xu, Guijun Yang, Chunjiang Zhao 《The Crop Journal》, 2025, No. 3, pp. 961-974 (14 pages)
Near real-time maize phenology monitoring is crucial for field management, cropping system adjustments, and yield estimation. Most phenological monitoring methods are post-seasonal and rely heavily on high-frequency time-series data. These methods are not applicable on unmanned aerial vehicle (UAV) platforms due to the high cost of acquiring time-series UAV images and the shortage of UAV-based phenological monitoring methods. To address these challenges, we employed the Synthetic Minority Oversampling Technique (SMOTE) for sample augmentation, aiming to resolve the small-sample modelling problem. Moreover, we utilized enhanced "separation" and "compactness" feature selection methods to identify input features from multiple data sources. In this process, we incorporated dynamic multi-source data fusion strategies involving vegetation indices (VI), color indices (CI), and texture features (TF). A two-stage neural network combining a convolutional neural network (CNN) and a long short-term memory network (LSTM) is proposed to identify maize phenological stages (including sowing, seedling, jointing, trumpet, tasseling, maturity, and harvesting) on UAV platforms. The results indicate that the dataset generated by SMOTE closely resembles the measured dataset. Among the dynamic data fusion strategies, the VI-TF combination proves most effective, with the CI-TF and VI-CI combinations following behind. Notably, as more data sources are integrated, the model's demand for input features declines significantly. In particular, the CNN-LSTM model based on the fusion of three data sources exhibited remarkable reliability across the three validation datasets. For Dataset 1 (Beijing Xiaotangshan, 2023: data from 12 UAV flight missions), the model achieved an overall accuracy (OA) of 86.53%. Its precision (Pre), recall (Rec), F1 score (F1), false acceptance rate (FAR), and false rejection rate (FRR) were 0.89, 0.89, 0.87, 0.11, and 0.11, respectively. The model also showed strong generalizability on Dataset 2 (Beijing Xiaotangshan, 2023: data from 6 UAV flight missions) and Dataset 3 (Beijing Xiaotangshan, 2022: data from 4 UAV flight missions), with OAs of 89.4% and 85%, respectively. Meanwhile, the model has a low demand for input features, requiring only 54.55% (99 of all features). The findings of this study not only offer novel insights into near real-time crop phenology monitoring but also provide technical support for agricultural field management and cropping system adaptation.
Keywords: near real-time, maize phenology, deep learning, UAV, multi-source data fusion
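The SMOTE augmentation step the authors describe works by interpolating between a minority-class sample and one of its nearest minority-class neighbours. A minimal NumPy sketch of the idea (our own illustration, not the paper's implementation; all names and values are ours):

```python
import numpy as np

def smote(minority, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples (basic SMOTE sketch).

    For each new sample: pick a random minority point, pick one of its
    k nearest minority neighbours, and interpolate at a random fraction.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(minority, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # never pick the point itself
    neighbours = np.argsort(d, axis=1)[:, :k]   # k nearest per point
    out = np.empty((n_new, X.shape[1]))
    for i in range(n_new):
        a = rng.integers(len(X))
        b = neighbours[a, rng.integers(k)]
        lam = rng.random()                      # interpolation fraction in [0, 1)
        out[i] = X[a] + lam * (X[b] - X[a])
    return out

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_syn = smote(X_min, n_new=10, rng=0)
```

Because each synthetic point is a convex combination of two real minority samples, the generated data stays inside the convex hull of the minority class, which is why SMOTE-generated samples "closely resemble" the measured distribution.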
5. Monitoring track irregularities using multi-source on-board measurement data
Authors: Qinglin Xie, Fei Peng, Gongquan Tao, Yu Ren, Fangbo Liu, Jizhong Yang, Zefeng Wen 《Railway Engineering Science》, 2025, No. 4, pp. 746-765 (20 pages)
Accurate monitoring of track irregularities helps improve vehicle operation quality and supports the formulation of appropriate track maintenance strategies. Existing methods rely on complex signal processing algorithms and lack multi-source data analysis. Driven by multi-source measurement data, including axle box, bogie frame, and carbody accelerations, this paper proposes a track irregularities monitoring network (TIMNet) based on deep learning methods. TIMNet uses the feature extraction capability of convolutional neural networks and the sequence mapping capability of the long short-term memory model to explore the mapping relationship between vehicle accelerations and track irregularities. The particle swarm optimization algorithm is used to optimize the network parameters so that both vertical and lateral track irregularities can be accurately identified in the time and spatial domains. The effectiveness and superiority of the proposed TIMNet are analyzed under different simulation conditions using a vehicle dynamics model. Field tests are conducted to prove the availability of the proposed TIMNet in quantitatively monitoring vertical and lateral track irregularities. Furthermore, comparative tests show that TIMNet achieves a better fit and timeliness in monitoring track irregularities (vertical R2 of 0.91, lateral R2 of 0.84, and a time cost of 10 ms) compared with other classical regression models. The tests also prove that TIMNet has a better anti-interference ability than other regression models.
Keywords: track irregularities, vehicle accelerations, on-board monitoring, multi-source data, deep learning
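The particle swarm optimization step used to tune the network parameters can be sketched in a few lines. This is a generic PSO minimizer, not the TIMNet training code; the objective, bounds, and coefficients are illustrative:

```python
import numpy as np

def pso(f, dim, n=20, iters=100, lb=-5.0, ub=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: minimizes f over a box."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# toy objective: a 3-parameter sphere function standing in for a training loss
best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```

In the paper's setting, `f` would evaluate the network's validation loss for a candidate parameter vector; the swarm mechanics are unchanged.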
6. Improved Density Peaking Algorithm for Community Detection Based on Graph Representation Learning
Authors: Jiaming Wang, Xiaolan Xie, Xiaochun Cheng, Yuhan Wang 《Computer Systems Science & Engineering》 SCIE EI, 2022, No. 12, pp. 997-1008 (12 pages)
There is a large amount of exploitable information in network data, yet classical community detection algorithms struggle to handle network data with sparse topology. Representation learning of network data is usually paired with clustering algorithms to solve the community detection problem. Meanwhile, the distribution of class clusters output by graph representation learning is always unpredictable. Therefore, we propose an improved density peak clustering algorithm (ILDPC) for the community detection problem, which improves the local density mechanism of the original algorithm and can better accommodate class clusters of different shapes. We study community detection in network data by pairing the algorithm with the benchmark model Graph sample and aggregate (GraphSAGE) to show the adaptability of ILDPC for community detection. The plotted decision diagram shows that the ILDPC algorithm is more discriminative in selecting density peak points than the original algorithm. Finally, the performance of K-means and other clustering algorithms on this benchmark model is compared, and the algorithm is shown to be more suitable for community detection in sparse networks under the F1-score evaluation criterion. The sensitivity of the ILDPC parameters to the low-dimensional vector set output by GraphSAGE is also analyzed.
Keywords: representation learning, data mining, low-dimensional embedding, community detection, density peak algorithm
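The density peak mechanism that ILDPC builds on assigns each point a local density rho and a distance delta to the nearest point of higher density; cluster centers stand out in the decision diagram with both values large. A toy sketch of the two quantities (the classic formulation, not the ILDPC variant; data and cutoff are ours):

```python
import numpy as np

def density_peaks(X, dc):
    """Compute the two quantities of density peak clustering: local
    density rho (Gaussian kernel with cutoff distance dc) and delta,
    the distance to the nearest point of higher density."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0   # subtract the self-term
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        # density maxima take the largest distance by convention
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    return rho, delta

# two tight blobs and one isolated outlier
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
                [10.0, 0.0]])
rho, delta = density_peaks(pts, dc=0.5)
```

In the decision diagram, the blob centers get large delta (no denser point nearby) while ordinary members get delta roughly equal to the within-blob spacing; the outlier has the lowest density of all.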
7. Airborne electromagnetic data denoising based on dictionary learning (Cited by 8)
Authors: Xue Shu-yang, Yin Chang-chun, Su Yang, Liu Yun-he, Wang Yong, Liu Cai-hua, Xiong Bin, Sun Huai-feng 《Applied Geophysics》 SCIE CSCD, 2020, No. 2, pp. 306-313, 317 (9 pages)
Time-domain airborne electromagnetic (AEM) data are frequently subject to interference from various types of noise, which can reduce data quality and affect data inversion and interpretation. Traditional denoising methods primarily deal with the data directly, without analyzing them in detail; thus, the results are not always satisfactory. In this paper, we propose a method based on dictionary learning for EM data denoising. This method uses dictionary learning to perform feature analysis and to extract and reconstruct the true signal. In the process of dictionary learning, the random noise is filtered out as residuals. To verify the effectiveness of this dictionary learning approach for denoising, we use a fixed overcomplete discrete cosine transform (ODCT) dictionary algorithm, the method-of-optimal-directions (MOD) dictionary learning algorithm, and the K-singular value decomposition (K-SVD) dictionary learning algorithm to denoise decay curves at single points and to denoise profile data for different time channels in time-domain AEM. The results show obvious differences among the three dictionaries for denoising AEM data, with the K-SVD dictionary achieving the best performance.
Keywords: time-domain AEM, data processing, denoising, dictionary learning, sparse representation
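The dictionary-based denoising idea, representing the signal sparsely over cosine atoms and discarding the residual as noise, can be illustrated with a fixed DCT dictionary and plain matching pursuit (a simpler greedy coder than MOD or K-SVD; the decay curve and parameters are ours):

```python
import numpy as np

def dct_dictionary(n, n_atoms):
    """Overcomplete DCT dictionary: n_atoms unit-norm cosine atoms of length n."""
    t = np.arange(n)
    D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n_atoms)) / n_atoms)
    return D / np.linalg.norm(D, axis=0)

def matching_pursuit(y, D, n_nonzero=5):
    """Greedy sparse coding: repeatedly pick the atom most correlated
    with the residual. What remains after coding is treated as noise."""
    r = y.copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = D.T @ r
        j = np.abs(corr).argmax()
        x[j] += corr[j]
        r -= corr[j] * D[:, j]
    return D @ x          # reconstructed (denoised) signal

rng = np.random.default_rng(1)
n = 128
clean = np.exp(-np.arange(n) / 30.0)          # decay-curve-like signal
noisy = clean + 0.05 * rng.standard_normal(n)
denoised = matching_pursuit(noisy, dct_dictionary(n, 256), n_nonzero=10)
```

The smooth decay is captured by a handful of low-frequency atoms, while the broadband noise, which correlates weakly with every atom, is largely left in the residual. K-SVD replaces the fixed DCT atoms with atoms learned from the data itself, which is why it performs best in the paper.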
8. Multi-source information fused generative adversarial network model and data assimilation based history matching for reservoir with complex geologies (Cited by 7)
Authors: Kai Zhang, Hai-Qun Yu, Xiao-Peng Ma, Jin-Ding Zhang, Jian Wang, Chuan-Jin Yao, Yong-Fei Yang, Hai Sun, Jun Yao, Jian Wang 《Petroleum Science》 SCIE CAS CSCD, 2022, No. 2, pp. 707-719 (13 pages)
For reservoirs with complex non-Gaussian geological characteristics, such as carbonate reservoirs or reservoirs with sedimentary facies distribution, it is difficult to implement history matching directly, especially for ensemble-based data assimilation methods. In this paper, we propose a multi-source information fused generative adversarial network (MSIGAN) model for the parameterization of complex geologies. In MSIGAN, various information, such as facies distribution, microseismic data, and inter-well connectivity, can be integrated to learn geological features. Two major generative models in deep learning, the variational autoencoder (VAE) and the generative adversarial network (GAN), are combined in our model. The proposed MSIGAN model is then integrated into the ensemble smoother with multiple data assimilation (ESMDA) method to conduct history matching. We tested the proposed method on two reservoir models with fluvial facies. The experimental results show that the proposed MSIGAN model can effectively learn complex geological features, which improves the accuracy of history matching.
Keywords: multi-source information, automatic history matching, deep learning, data assimilation, generative model
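The ESMDA assimilation step updates each ensemble member toward perturbed observations using ensemble cross-covariances, with the observation noise inflated by a factor alpha. A single-step sketch on a toy identity forward model (not the MSIGAN workflow; all names and values are ours):

```python
import numpy as np

def es_update(M, D, d_obs, obs_std, alpha=1.0, rng=None):
    """One ESMDA-style assimilation step.

    M: ensemble of model parameters, shape (n_ens, n_param)
    D: corresponding predicted data,  shape (n_ens, n_data)
    The Kalman-like gain is built from ensemble covariances; ESMDA runs
    several such steps with inflated noise, where sum(1/alpha_i) = 1.
    """
    rng = np.random.default_rng(rng)
    n_ens = M.shape[0]
    Mc = M - M.mean(axis=0)
    Dc = D - D.mean(axis=0)
    C_md = Mc.T @ Dc / (n_ens - 1)                 # param-data covariance
    C_dd = Dc.T @ Dc / (n_ens - 1)                 # data covariance
    R = alpha * obs_std ** 2 * np.eye(len(d_obs))  # inflated obs noise
    K = C_md @ np.linalg.inv(C_dd + R)
    # each member assimilates its own perturbed copy of the observation
    d_pert = d_obs + np.sqrt(alpha) * obs_std * rng.standard_normal(D.shape)
    return M + (d_pert - D) @ K.T

# toy example: one parameter observed directly through g(m) = m
rng = np.random.default_rng(0)
M = rng.normal(0.0, 1.0, (200, 1))     # prior ensemble, mean near 0
D = M.copy()                           # identity forward model
d_obs = np.array([2.0])
M_post = es_update(M, D, d_obs, obs_std=0.5, rng=0)
```

In the paper, the parameters being updated are the low-dimensional MSIGAN latent variables rather than grid-block properties, which keeps the updated geology non-Gaussian and geologically plausible.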
9. Reply to: Comment on "Machine learning enhanced analysis of EBSD data for texture representation"
Authors: J. Wanni, C. A. Bronkhorst, D. J. Thoma 《npj Computational Materials》, 2025, No. 1, pp. 769-771 (3 pages)
We respond to Schaeben et al.'s comment on our paper, "Machine Learning Enhanced Analysis of EBSD Data for Texture Representation." While their observations are factually correct, they do not disprove our results. Our method, TACS, preserves the full distribution of crystallographic orientations and is validated with real-world data. We emphasize the importance of empirical validation over theoretical constructs in assessing the practical performance of machine learning methods.
Keywords: crystallographic orientations, empirical validation, texture representation, EBSD data, machine learning methods, practical machine learning
10. MltAuxTSPP: a unified benchmark for deep learning-based traffic state prediction with multi-source auxiliary data
Authors: Yusong ZHOU, Xiaoyu JIANG, Shu SUN, Xinmin ZHANG, Yuanqiu MO, Zhihuan SONG 《Frontiers of Information Technology & Electronic Engineering》, 2025, No. 10, pp. 1984-1999 (16 pages)
Deep learning has empowered traffic prediction models to integrate diverse auxiliary data sources, such as weather and temporal features, for enhanced forecasting accuracy. However, existing approaches often suffer from limited generality and scalability, and the field lacks a unified benchmark for fair model comparison. This absence hinders consistent performance evaluation, slows the development of robust and adaptable models, and makes it challenging to quantify the incremental benefits of different auxiliary data sources. To address these issues, we present MltAuxTSPP, a unified benchmark framework for deep learning-based traffic state prediction with multi-source auxiliary data. The framework features a standardized data container and a fusion embedding module, enabling consistent utilization of heterogeneous data and improving scalability. It produces unified hidden representations that can be seamlessly adopted by various downstream models, ensuring fair and reproducible comparisons under identical conditions. Extensive experiments on real-world datasets demonstrate that MltAuxTSPP effectively leverages weather and temporal features to improve long-term forecasting performance and offers a practical and reproducible foundation for advancing research in traffic state prediction.
Keywords: traffic prediction, benchmark platform, deep learning, multi-source auxiliary data
11. Comment on "Machine learning enhanced analysis of EBSD data for texture representation"
Authors: Helmut Schaeben, K. Gerald van den Boogaart 《npj Computational Materials》, 2025, No. 1, pp. 783-787 (5 pages)
Our concerns apply to the inadequate ways statistical distributions of crystallographic orientations are compared and occasionally confirmed to agree sufficiently well. The authors of "Machine learning enhanced analysis of EBSD data for texture representation" suggest a method to replace an EBSD dataset of crystallographic orientations with a much smaller synthetic dataset that preserves the texture. They claim that their "texture adaptive clustering and sampling" algorithm generates datasets of a few hundred crystallographic orientations realizing an orientation distribution equivalent to that of the initial dataset. To prove the principle and substantiate their claim of equivalent orientation distributions, the authors content themselves with (i) a visual inspection of the crystallographic pole density function, in fact of three crystallographic "pole figures," and (ii) Kolmogorov-Smirnov tests for each of the three Euler angles of the crystallographic orientations individually. However, these criteria are insufficient to confirm the equivalence of orientation distributions; they do not provide scientific evidence to substantiate the authors' claim that "texture adaptive clustering and sampling" generates crystallographic orientations, in terms of their Euler angles, representing the same texture.
Keywords: texture representation, EBSD dataset, crystallographic orientations, synthetic dataset, EBSD data, clustering sampling algorithm, statistical distributions, machine learning
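The two-sample Kolmogorov-Smirnov statistic the comment refers to is the maximum gap between two empirical CDFs; applying it to each Euler angle separately only compares the three one-dimensional marginals, never the joint orientation distribution. A minimal sketch of the statistic itself (our illustration, with synthetic data):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs, evaluated at every observed value."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.abs(cdf_a - cdf_b).max())

rng = np.random.default_rng(0)
same = ks_statistic(rng.normal(0, 1, 500), rng.normal(0, 1, 500))       # small gap
shifted = ks_statistic(rng.normal(0, 1, 500), rng.normal(2, 1, 500))    # large gap
```

The statistic is sensitive to differences between two univariate distributions, but two multivariate distributions can share all their marginals while differing in their dependence structure, which is the core of the comment's objection.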
12. Predicting the daily return direction of the stock market using hybrid machine learning algorithms (Cited by 12)
Authors: Xiao Zhong, David Enke 《Financial Innovation》, 2019, No. 1, pp. 435-454 (20 pages)
Big data analytic techniques associated with machine learning algorithms are playing an increasingly important role in various application fields, including stock market investment. However, few studies have focused on forecasting daily stock market returns, especially when using powerful machine learning techniques, such as deep neural networks (DNNs), to perform the analyses. DNNs employ various deep learning algorithms based on the combination of network structure, activation function, and model parameters, with their performance depending on the format of the data representation. This paper presents a comprehensive big data analytics process to predict the daily return direction of the SPDR S&P 500 ETF (ticker symbol: SPY) based on 60 financial and economic features. DNNs and traditional artificial neural networks (ANNs) are then deployed over the entire preprocessed but untransformed dataset, along with two datasets transformed via principal component analysis (PCA), to predict the daily direction of future stock market index returns. While controlling for overfitting, a pattern in the classification accuracy of the DNNs is detected and demonstrated as the number of hidden layers increases gradually from 12 to 1000. Moreover, a set of hypothesis testing procedures is implemented on the classification, and the simulation results show that the DNNs using the two PCA-represented datasets give significantly higher classification accuracy than those using the entire untransformed dataset, as well as several other hybrid machine learning algorithms. In addition, the trading strategies guided by the DNN classification process based on PCA-represented data perform slightly better than the others tested, including in a comparison against two standard benchmarks.
Keywords: daily stock return forecasting, return direction classification, data representation, hybrid machine learning algorithms, deep neural networks (DNNs), trading strategies
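The PCA transformation applied before the DNN can be sketched via the SVD of the centered feature matrix; the components with the largest singular values carry most of the variance. A sketch with synthetic stand-in data (the 60-feature setup is mimicked, not reproduced; names are ours):

```python
import numpy as np

def pca_transform(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                 # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S ** 2 / (S ** 2).sum()     # variance ratio per component
    return Xc @ Vt[:n_components].T, explained[:n_components]

# 300 samples of 60 correlated "features" driven by 3 latent factors
rng = np.random.default_rng(0)
latent = rng.standard_normal((300, 3))
X = latent @ rng.standard_normal((3, 60)) + 0.01 * rng.standard_normal((300, 60))
Z, ratio = pca_transform(X, n_components=3)
```

When the features are highly correlated, a few components capture nearly all the variance, which is why the PCA-represented datasets in the paper can feed the classifiers a far smaller input without losing information.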
13. IoT Empowered Early Warning of Transmission Line Galloping Based on Integrated Optical Fiber Sensing and Weather Forecast Time Series Data (Cited by 1)
Authors: Zhe Li, Yun Liang, Jinyu Wang, Yang Gao 《Computers, Materials & Continua》 SCIE EI, 2025, No. 1, pp. 1171-1192 (22 pages)
Iced transmission line galloping poses a significant threat to the safety and reliability of power systems, leading directly to line tripping, disconnections, and power outages. Existing early warning methods for iced transmission line galloping suffer from issues such as reliance on a single data source, neglect of irregular time series, and lack of attention-based closed-loop feedback, resulting in high rates of missed and false alarms. To address these challenges, we propose an Internet of Things (IoT) empowered early warning method for transmission line galloping that integrates time-series data from optical fiber sensing and weather forecasts. Initially, the method applies a primary adaptive weighted fusion to the IoT-empowered optical fiber real-time sensing data and weather forecast data, followed by a secondary fusion based on a back propagation (BP) neural network, and uses the K-medoids algorithm to cluster the fused data. Furthermore, an adaptive irregular time series perception adjustment module is introduced into the traditional gated recurrent unit (GRU) network, and attention-based closed-loop feedback is employed to update network parameters through gradient feedback of the loss function, enabling closed-loop training and time-series prediction with the GRU network model. Subsequently, considering the various types of prediction data and the duration of icing, an iced transmission line galloping risk coefficient is established, and warnings are categorized based on this coefficient. Finally, using an IoT-driven realistic dataset of iced transmission line galloping, the effectiveness of the proposed method is validated through multi-dimensional simulation scenarios.
Keywords: optical fiber sensing, multi-source data fusion, early warning of galloping, time-series data, IoT, adaptive weighted learning, irregular time series perception, closed-loop attention mechanism
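The primary adaptive weighted fusion of the sensing and forecast channels can be illustrated with classic inverse-variance weighting, where noisier sources receive smaller weights (a simplified stand-in for the paper's scheme; the signals and variances below are ours):

```python
import numpy as np

def adaptive_weighted_fusion(sources, variances):
    """Fuse several estimates of the same quantity with inverse-variance
    weights: noisier sources contribute less to the fused signal."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()                                     # weights sum to 1
    return w @ np.asarray(sources, dtype=float), w

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 200))        # true line state
fiber = truth + 0.1 * rng.standard_normal(200)        # precise sensing channel
forecast = truth + 0.5 * rng.standard_normal(200)     # coarse forecast channel
fused, weights = adaptive_weighted_fusion([fiber, forecast], [0.1 ** 2, 0.5 ** 2])
```

In the paper this fused series then passes through the secondary BP-network fusion; the sketch only shows the first, weight-based stage.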
14. Join multiple Riemannian manifold representation and multi-kernel non-redundancy for image clustering
Authors: Mengyuan Zhang, Jinglei Liu 《CAAI Transactions on Intelligence Technology》, 2024, No. 5, pp. 1305-1319 (15 pages)
Image clustering has received significant attention due to the growing importance of image recognition. Researchers have explored Riemannian manifold clustering, which is capable of capturing the non-linear shapes found in real-world datasets. However, the complexity of image data poses substantial challenges for modelling and feature extraction. Traditional methods such as covariance matrices and linear subspaces have shown promise in image modelling, but they are still in their early stages and suffer from certain limitations. These include the uncertainty of representing data using only one Riemannian manifold, the limited feature extraction capacity of single kernel functions, and the resulting incomplete data representation and redundancy. To overcome these limitations, the authors propose a novel approach called join multiple Riemannian manifold representation and multi-kernel non-redundancy for image clustering (MRMNR-MKC). It combines covariance matrices with linear subspaces to represent data and applies multiple kernel functions to map the non-linear structural data into a reproducing kernel Hilbert space, enabling linear model analysis for image clustering. Additionally, the authors use matrix-induced regularisation to improve the clustering kernel selection process by reducing redundancy and assigning lower weights to identical kernels. Finally, the authors conducted numerous experiments to evaluate the performance of the approach, confirming its superiority over state-of-the-art methods on three benchmark datasets.
Keywords: clustering, data mining, image representation, machine learning, manifolds
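The multi-kernel construction rests on the fact that a non-negative weighted sum of positive semi-definite kernels is itself a valid kernel. A sketch combining three RBF kernels of different bandwidths (illustrative weights and data, not the MRMNR-MKC objective):

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the RBF kernel exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, gammas, weights):
    """Weighted sum of several RBF kernels; with non-negative weights
    the combination is still symmetric positive semi-definite."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * rbf_kernel(X, g) for wi, g in zip(w, gammas))

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
K = combined_kernel(X, gammas=[0.1, 1.0, 10.0], weights=[1.0, 1.0, 1.0])
```

The paper's matrix-induced regularisation would then adjust the weights to penalize highly similar (redundant) kernels; here the weights are simply uniform.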
15. A survey on trajectory representation learning methods
Authors: Xiangfu MENG, Shuonan SUN, Xiaoyan ZHANG, Qiangkui LENG, Jinfeng FANG 《Frontiers of Computer Science》, 2025, No. 12, pp. 47-68 (22 pages)
With the rapid development of the Global Positioning System (GPS) and the Global System for Mobile Communications (GSM), and the widespread application of mobile devices, a massive amount of trajectory data has been generated. Current trajectory data processing methods typically require input in the form of fixed-length vectors, making it crucial to convert variable-length trajectory data into fixed-length, low-dimensional embedding vectors. Trajectory representation learning aims to transform trajectory data into more expressive and interpretable representations. This paper provides a comprehensive review of the research progress, methodologies, and applications of trajectory representation learning. First, it categorizes and introduces the key techniques of trajectory representation learning and summarizes the available public trajectory datasets. Then, it classifies trajectory representation learning methods based on various downstream tasks, with a focus on their principles, advantages, limitations, and application scenarios in trajectory similarity computation, similar trajectory search, trajectory clustering, and trajectory prediction. Additionally, representative model structures and principles in each task are analyzed, along with the characteristics and advantages of the different methods. Last, the challenges faced by current trajectory representation learning methods are analyzed, including data sparsity, multimodality, model optimization, and privacy protection, while potential research directions and methodologies to address these challenges are explored.
Keywords: trajectory representation learning, trajectory data mining, trajectory similarity computation, similar trajectory search, trajectory clustering, trajectory prediction
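The conversion of a variable-length trajectory into a fixed-length vector that the survey highlights can be done, in its simplest form, by resampling the path at equal arc-length intervals and flattening (a hand-crafted baseline, not a learned representation):

```python
import numpy as np

def trajectory_to_vector(points, n_samples=16):
    """Resample a variable-length 2-D trajectory at n_samples points
    equally spaced along its arc length, then flatten to a fixed-length
    vector usable by models that require fixed-size input."""
    P = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(P, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])        # arc length per point
    t = np.linspace(0.0, s[-1], n_samples)
    x = np.interp(t, s, P[:, 0])
    y = np.interp(t, s, P[:, 1])
    return np.stack([x, y], axis=1).ravel()

# two recordings of the same straight path at different sampling rates
short = [(0, 0), (1, 0), (2, 0)]
long_ = [(0, 0), (0.5, 0), (1, 0), (1.5, 0), (2, 0)]
va, vb = trajectory_to_vector(short), trajectory_to_vector(long_)
```

Two recordings of the same path map to (nearly) the same vector regardless of how many points were logged, which is exactly the invariance that learned embeddings also aim for, with the added benefit of capturing semantics beyond geometry.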
16. A Multi-Branch Contrastive Learning Algorithm Based on Relation Consistency
Authors: 冯慧敏, 吕巧莉, 陈俊芬 《河北大学学报(自然科学版)》 (Journal of Hebei University, Natural Science Edition), PKU Core, 2026, No. 1, pp. 104-112 (9 pages)
Traditional contrastive learning algorithms tend to introduce false negative samples during instance discrimination, causing the model to converge to a suboptimal solution and degrading downstream task performance. To address this, a multi-branch contrastive learning algorithm based on relation consistency is proposed. The algorithm mines nearest-neighbor sets within the branch networks to provide semantically consistent positive samples and avoid generating false negatives. Combined with data-augmented multi-branch networks, it minimizes the KL divergence to pull semantically consistent positive samples together and push negative samples apart, improving the network's feature representation capability. Different temperatures across branches control the smoothness of the output distributions, ensuring faithful and reliable feature representations. Finally, the proposed algorithm is tested on five datasets and compared with other state-of-the-art methods, achieving satisfactory results in all cases.
Keywords: contrastive learning, relation consistency, feature representation, false negative pairs, data augmentation
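The KL-divergence term that pulls semantically consistent branches together, and the temperature that smooths each branch's output distribution, can be sketched as follows (a minimal illustration with made-up logits, not the authors' model):

```python
import numpy as np

def softmax(z, temperature):
    """Temperature-scaled softmax; higher temperature gives a smoother
    output distribution."""
    z = z / temperature
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float((p * np.log(p / q)).sum())

# two branches scoring the same instance against three candidates
logits_a = np.array([2.0, 1.0, 0.1])
logits_b = np.array([1.9, 1.1, 0.0])
# a higher temperature smooths both branch outputs, shrinking the divergence
kl_sharp = kl_divergence(softmax(logits_a, 0.5), softmax(logits_b, 0.5))
kl_smooth = kl_divergence(softmax(logits_a, 2.0), softmax(logits_b, 2.0))
```

Minimizing such a divergence between branches aligns their similarity distributions, so samples the branches agree are semantically close end up treated as positives rather than as false negatives.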
17. Intelligent Measurement of Classroom Learning Engagement: Current Research, Challenges, and Breakthroughs
Authors: 魏艳涛, 徐琦, 师亚飞, 刘清堂, 祝翔祺 《开放学习研究》 (Journal of Open Learning), 2026, No. 1, pp. 35-46 (12 pages)
The intelligent measurement of classroom learning engagement is an important basis for precise teaching evaluation and intervention. Realizing it requires answering three key questions: what to measure, what to measure with, and how to measure. Focusing on the current state of objective-data-driven intelligent measurement of classroom learning engagement, this paper uses a systematic literature review to survey the measurement modalities, data characteristics, and measurement methods involved. The review finds that the field currently faces several difficulties: the theory of non-intrusive measurement lags behind, multimodal feature fusion suffers from conflicts, dedicated generative measurement models are lacking, and causal explanations of complex engagement states remain vague. To move engagement measurement toward intelligent, precise, and interpretable practice, breakthroughs should focus on computable theoretical models of learning engagement, verifiable data representations, tunable generative measurement, and interpretable causal inference.
Keywords: classroom learning engagement, intelligent measurement, data representation, measurement methods
18. Survey on Encoding Schemes for Genomic Data Representation and Feature Learning: From Signal Processing to Machine Learning (Cited by 1)
Authors: Ning Yu, Zhihua Li, Zeng Yu 《Big Data Mining and Analytics》, 2018, No. 3, pp. 191-210 (20 pages)
Data-driven machine learning, especially deep learning technology, is becoming an important tool for handling big data issues in bioinformatics. In machine learning, DNA sequences are often converted to numerical values for data representation and feature learning in various applications. A similar conversion occurs in genomic signal processing (GSP), where genome sequences are transformed into numerical sequences for signal extraction and recognition. This kind of conversion is also called an encoding scheme. The diverse encoding schemes can greatly affect the performance of GSP applications and machine learning models. This paper aims to collect, analyze, discuss, and summarize the existing encoding schemes for genome sequences, particularly in GSP as well as other genome analysis applications, to provide a comprehensive reference for genomic data representation and feature learning in machine learning.
Keywords: encoding scheme; data representation; feature learning; deep learning; genomic signal processing; machine learning; genome analysis
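One of the simplest encoding schemes of the kind this survey covers is one-hot encoding of DNA bases. The sketch below is illustrative only (the function name and the choice to map unknown bases to a zero vector are assumptions, not details from the paper):

```python
def one_hot_encode(seq):
    """Map a DNA sequence to a list of 4-dimensional indicator vectors,
    one per base, in A/C/G/T order. Bases outside {A, C, G, T}
    (e.g. the ambiguity code 'N') map to the zero vector here."""
    table = {
        "A": [1, 0, 0, 0],
        "C": [0, 1, 0, 0],
        "G": [0, 0, 1, 0],
        "T": [0, 0, 0, 1],
    }
    return [table.get(base, [0, 0, 0, 0]) for base in seq.upper()]

encoded = one_hot_encode("ACGTN")
```

Numeric mappings like this are what let both GSP pipelines (which treat the result as a signal) and neural networks (which treat it as an input tensor) operate on the same sequence.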
VisuaLizations As Intermediate Representations (VLAIR): An approach for applying deep learning-based computer vision to non-image-based data
19
Authors Ai Jiang Miguel A. Nacenta Juan Ye 《Visual Informatics》 EI 2022, Issue 3, pp. 35-50 (16 pages)
Deep learning algorithms increasingly support automated systems in areas such as human activity recognition and purchase recommendation. We identify a current trend in which data is transformed first into abstract visualizations and then processed by a computer vision deep learning pipeline. We call this VisuaLization As Intermediate Representation (VLAIR) and believe that it can be instrumental in supporting accurate recognition in a number of fields while also enhancing humans' ability to interpret deep learning models for debugging purposes or for personal use. In this paper we describe the potential advantages of this approach and explore various visualization mappings and deep learning architectures. We evaluate several VLAIR alternatives for a specific problem (human activity recognition in an apartment) and show that VLAIR attains classification accuracy above classical machine learning algorithms and several other non-image-based deep learning algorithms with several data representations.
Keywords: information visualization; convolutional neural networks; human activity recognition; smart homes; data representation; intermediate representations; interpretability; machine learning; deep learning
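The core VLAIR step, as described in the abstract, is rendering non-image data onto a canvas that a vision model can consume. A toy version of that rasterization (all names and the fixed pixel-position mapping are hypothetical simplifications, not the paper's actual visualization mappings) might look like this:

```python
def rasterize_readings(readings, width, height):
    """Place sensor readings onto a blank grayscale 'canvas'.
    Each reading is (x, y, value); each sensor is assigned a fixed
    pixel position, so spatial structure in the image reflects the
    layout of the sensors rather than anything inherently visual."""
    canvas = [[0.0] * width for _ in range(height)]
    for (x, y, value) in readings:
        canvas[y][x] = value
    return canvas

# Two sensor readings drawn onto a 4x3 canvas; in VLAIR-style pipelines
# an image like this would then be fed to a CNN classifier.
image = rasterize_readings([(0, 0, 1.0), (2, 1, 0.5)], width=4, height=3)
```

The design point is that once the data is an image, the full toolbox of pretrained vision architectures applies, and the intermediate picture itself stays human-inspectable.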
A Review of Disentangled Representation Learning for Remote Sensing Data
20
Authors Mi Wang Huiwen Wang +1 co-author Jing Xiao Liang Liao 《CAAI Artificial Intelligence Research》 2022, Issue 2, pp. 172-190 (19 pages)
representation that can identify and isolate different potential variables hidden in the high-dimensional observations. Disentangled representation learning can capture information about a single change factor and control it by the corresponding potential subspace, providing a robust representation for complex changes in the data. In this paper, we first introduce and analyze the current status of research on disentangled representation and its causal mechanisms and summarize three crucial properties of disentangled representation. Then, disentangled representation learning algorithms are classified into four categories and outlined in terms of both mathematical description and applicability. Subsequently, the loss functions and objective evaluation metrics commonly used in existing work on disentangled representation are classified. Finally, the paper summarizes representative applications of disentangled representation learning in the field of remote sensing and discusses its future development.
Keywords: disentangled representation learning; latent representation; remote sensing data; deep learning
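Among the loss functions commonly used in disentanglement work of the kind this review classifies, a well-known ingredient is the beta-VAE-style KL regularizer on the latent code. The sketch below is a generic illustration of that term under the assumption of a diagonal Gaussian posterior and a standard normal prior; it is not taken from this review:

```python
import math

def gaussian_kl_per_dim(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) for each latent dimension,
    using the closed form 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2)."""
    return [0.5 * (math.exp(lv) + m * m - 1.0 - lv)
            for m, lv in zip(mu, log_var)]

def beta_vae_regularizer(mu, log_var, beta=4.0):
    """Weighting the summed KL by beta > 1 pressures each latent
    dimension toward matching the factorized prior, which is one
    route to encouraging disentangled factors."""
    return beta * sum(gaussian_kl_per_dim(mu, log_var))

# A latent dimension that exactly matches the prior contributes zero KL.
reg = beta_vae_regularizer([0.0, 0.5], [0.0, -0.2])
```

In a full model this regularizer is added to a reconstruction loss; the trade-off that beta controls between reconstruction fidelity and factorized latents is one of the tensions such reviews discuss.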