Journal Articles
657 articles found
1. Tomato Growth Height Prediction Method by Phenotypic Feature Extraction Using Multi-modal Data
Authors: GONG Yu, WANG Ling, ZHAO Rongqiang, YOU Haibo, ZHOU Mo, LIU Jie. 《智慧农业(中英文)》 (Smart Agriculture), 2025, Issue 1, pp. 97-110.
[Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that utilize either image data or environmental data; they fail to fully leverage multi-modal data to capture the diverse aspects of plant growth comprehensively. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural networks (RNN) and long short-term memory (LSTM) was developed. The model integrated environment and plant information to provide a holistic understanding of the growth process, employed phenotypic and temporal feature extractors to comprehensively capture both types of features, and enabled a deeper understanding of the interaction between tomato plants and their environment, ultimately leading to highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the two baseline models, with MAPEs of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
Keywords: tomato growth prediction, deep learning, phenotypic feature extraction, multi-modal data, recurrent neural network, long short-term memory, large language model
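The abstract reports accuracy as mean absolute percentage error (MAPE). For readers unfamiliar with the metric, a minimal sketch of its computation; the plant heights below are invented for illustration, not data from the paper:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("inputs must be equal-length, non-empty sequences")
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative tomato heights in cm (hypothetical values)
observed  = [52.0, 54.1, 56.3, 58.8]
predicted = [51.8, 54.5, 56.0, 59.1]
print(round(mape(observed, predicted), 2))  # 0.54
```

A MAPE of 0.40% (the paper's best result) means predictions deviate from observed heights by less than half a percent on average.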
2. Heterogeneous data-driven aerodynamic modeling based on physical feature embedding (Cited: 3)
Authors: Weiwei ZHANG, Xuhao PENG, Jiaqing KOU, Xu WANG. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 1-6.
Aerodynamic surrogate modeling mostly relies only on integrated loads data obtained from simulation or experiment, while neglecting the valuable distributed physical information on the surface. To make full use of both integrated and distributed loads, a modeling paradigm, called heterogeneous data-driven aerodynamic modeling, is presented. The essential concept is to incorporate the physical information of distributed loads as additional constraints within end-to-end aerodynamic modeling. For heterogeneous data, a novel and easily applicable physical feature embedding modeling framework is designed. This framework extracts low-dimensional physical features from the pressure distribution and then effectively enhances the modeling of the integrated loads via feature embedding. The proposed framework can be coupled with multiple feature extraction methods, and its generalization capabilities over different airfoils are verified through a transonic case. Compared with traditional direct modeling, the proposed framework can reduce testing errors by almost 50%. Given the same prediction accuracy, it can save more than half of the training samples. Furthermore, visualization analysis reveals a significant correlation between the discovered low-dimensional physical features and the heterogeneous aerodynamic loads, which shows the interpretability and credibility of the superior performance offered by the proposed deep learning framework.
Keywords: transonic flow, data-driven modeling, feature embedding, heterogeneous data, feature visualization
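The framework's core idea is to compress a distributed pressure signal into a few low-dimensional features that then augment the surrogate's inputs. A hedged sketch of one possible extractor (plain PCA via SVD; the paper couples its framework with multiple extraction methods, so this is a stand-in on synthetic data, not the authors' implementation):

```python
import numpy as np

def extract_pressure_features(cp_matrix, k=2):
    """Project each pressure distribution (rows of cp_matrix) onto its
    top-k principal components. Any low-dimensional encoder could play
    this role; PCA is only the simplest choice."""
    centered = cp_matrix - cp_matrix.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T   # (n_samples, k) low-dimensional features

# Toy "pressure distributions": 5 samples over 40 surface points (synthetic)
rng = np.random.default_rng(0)
cp = rng.normal(size=(5, 40))
features = extract_pressure_features(cp, k=2)
print(features.shape)  # (5, 2)
```

In the paper's setting, such features would be concatenated with geometric/flow inputs when regressing the integrated loads.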
3. Multi-modal face parts fusion based on Gabor feature for face recognition (Cited: 1)
Author: 相燕. High Technology Letters (EI, CAS), 2009, Issue 1, pp. 70-74.
A novel face recognition method, a fusion of multi-modal face parts based on Gabor features (MMP-GF), is proposed in this paper. Firstly, the bare face image detached from the normalized image was convolved with a family of Gabor kernels; then, according to the face structure and the key-point locations, the calculated Gabor images were divided into five parts: Gabor face, Gabor eyebrow, Gabor eye, Gabor nose, and Gabor mouth. After that, the multi-modal Gabor features were spatially partitioned into non-overlapping regions, and the averages of the regions were concatenated into a low-dimensional feature vector, whose dimension was further reduced by principal component analysis (PCA). In the decision-level fusion, match results calculated separately for the five parts were combined according to linear discriminant analysis (LDA), and a normalized matching algorithm was used to improve performance. Experiments on the FERET database show that the proposed MMP-GF method achieves good robustness to expression and age variations.
Keywords: Gabor filter, multi-modal Gabor features, principal component analysis (PCA), linear discriminant analysis (LDA), normalized matching algorithm
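The pipeline described (Gabor convolution, then region averages concatenated into a low-dimensional vector) can be sketched as follows. The kernel parameters and the toy image are illustrative choices, not values from the paper:

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    """Real part of a Gabor kernel with an isotropic Gaussian envelope
    (illustrative parameter values)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve_valid(img, kern):
    """Naive 'valid' 2D correlation, enough for a small demo."""
    kh, kw = kern.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kern).sum()
    return out

def region_average_features(response, grid=2):
    """Average the response over a grid x grid partition of
    non-overlapping regions and concatenate the means."""
    h, w = response.shape
    return np.array([response[i*h//grid:(i+1)*h//grid,
                              j*w//grid:(j+1)*w//grid].mean()
                     for i in range(grid) for j in range(grid)])

img = np.ones((16, 16))                    # stand-in for one face part
feats = region_average_features(convolve_valid(img, gabor_kernel()))
print(feats.shape)  # (4,)
```

In the paper, one such vector per face part (and per filter in the Gabor family) would be concatenated before the PCA step.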
4. Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module (Cited: 1)
Authors: 胡振涛, HU Chonghao, YANG Haoran, SHUAI Weiwei. High Technology Letters (EI, CAS), 2024, Issue 1, pp. 23-30.
Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, the advanced approaches available employ a multi-generator mechanism to model different domain mappings, which results in inefficient training of neural networks and pattern collapse, limiting the diversity of generated images. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, a domain code is first introduced to explicitly control the different generation tasks. Secondly, the paper brings in the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. Qualitative and quantitative experiments on multiple unpaired benchmark image translation datasets demonstrate the benefits of the proposed method over existing technologies. Overall, the experimental results show that the proposed method is versatile and scalable.
Keywords: multi-modal image translation, generative adversarial network (GAN), squeeze-and-excitation (SE) mechanism, feature attention (FA) module
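The squeeze-and-excitation mechanism named in the title can be sketched in a few lines: globally average-pool each channel ("squeeze"), pass the result through a small bottleneck network ("excitation"), and rescale the channels by the resulting 0-1 weights. The weights below are random stand-ins for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map.
    w1: (C/r, C) reduction weights, w2: (C, C/r) expansion weights --
    random here; learned in a real network."""
    squeezed = feat.mean(axis=(1, 2))                      # squeeze: global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))  # excitation: FC-ReLU-FC-sigmoid
    return feat * excite[:, None, None]                    # channel-wise recalibration

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 4, 4))          # toy feature map, 8 channels
w1 = rng.normal(size=(2, 8))               # reduction ratio r = 4
w2 = rng.normal(size=(8, 2))
out = squeeze_excite(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the excitation gate lies in (0, 1), each channel is attenuated in proportion to its estimated importance; how the paper wires SE into its generator is not specified in the abstract.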
5. Improving VQA via Dual-Level Feature Embedding Network (Cited: 1)
Authors: Yaru Song, Huahu Xu, Dikai Fang. Intelligent Automation & Soft Computing, 2024, Issue 3, pp. 397-416.
Visual Question Answering (VQA) has sparked widespread interest as a crucial task in integrating vision and language. VQA primarily uses attention mechanisms to associate relevant visual regions with input questions and answer them effectively. Detection-based features extracted by an object detection network capture the visual attention distribution over predetermined detection frames and provide object-level insights, answering questions about foreground objects more effectively. However, they cannot answer questions about background forms outside the detection boxes due to the lack of fine-grained details, which is the advantage of grid-based features. In this paper, we propose a Dual-Level Feature Embedding (DLFE) network, which effectively integrates grid-based and detection-based image features in a unified architecture to realize the complementary advantages of both. Specifically, in DLFE, a novel Dual-Level Self-Attention (DLSA) module is first proposed to mine the intrinsic properties of the two feature types, where Positional Relation Attention (PRA) is designed to model position information. Then, we propose Feature Fusion Attention (FFA) to address the semantic noise caused by fusing the two features and construct an alignment graph to enhance and align the grid and detection features. Finally, we use co-attention to learn the interactive features of the image and question and answer questions more accurately. Our method improves significantly over the baseline, increasing accuracy from 66.01% to 70.63% on the test-std set of VQA 1.0 and from 66.24% to 70.91% on the test-std set of VQA 2.0.
Keywords: visual question answering, multi-modal feature processing, attention mechanisms, cross-modal fusion
6. Robust Symmetry Prediction with Multi-Modal Feature Fusion for Partial Shapes
Authors: Junhua Xi, Kouquan Zheng, Yifan Zhong, Longjiang Li, Zhiping Cai, Jinjing Chen. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 3, pp. 3099-3111.
In geometry processing, symmetry research benefits from the global geometric features of complete shapes, but the shape of an object captured in real-world applications is often incomplete due to limited sensor resolution, a single viewpoint, and occlusion. Different from existing works that predict symmetry from the complete shape, we propose a learning approach for symmetry prediction based on a single RGB-D image. Instead of directly predicting symmetry from incomplete shapes, our method consists of two modules: a multi-modal feature fusion module and a detection-by-reconstruction module. Firstly, we build a channel-transformer network (CTN) to extract cross-fusion features from the RGB-D input as the multi-modal feature fusion module, which aggregates features from the color and depth channels separately. Then, our self-reconstruction network based on a 3D variational auto-encoder (3D-VAE) takes the global geometric features as input, followed by a symmetry prediction network to detect the symmetry. Our experiments are conducted on three public datasets, ShapeNet, YCB, and ScanNet, and demonstrate that our method produces reliable and accurate results.
Keywords: symmetry prediction, multi-modal feature fusion, partial shapes
7. Adaptive multi-modal feature fusion for far and hard object detection
Authors: LI Yang, GE Hongwei. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2021, Issue 2, pp. 232-241.
To address the difficult detection of far and hard objects caused by the sparseness and insufficient semantic information of LiDAR point clouds, a 3D object detection network with multi-modal data adaptive fusion is proposed, which makes use of multi-neighborhood voxel information and image information. Firstly, an improved ResNet is designed that maintains the structure information of far and hard objects in low-resolution feature maps, which is more suitable for the detection task; meanwhile, the semantics of each image feature map are enhanced by semantic information from all subsequent feature maps. Secondly, multi-neighborhood context information is extracted with different receptive field sizes to make up for the sparseness of the point cloud, improving the ability of voxel features to represent the spatial structure and semantic information of objects. Finally, a multi-modal feature adaptive fusion strategy is proposed that uses learnable weights to express the contribution of different modal features to the detection task, and voxel attention further enhances the fused feature expression of effective target objects. Experimental results on the KITTI benchmark show that this method outperforms VoxelNet by remarkable margins, increasing AP by 8.78% and 5.49% on the medium and hard difficulty levels. Meanwhile, our method achieves greater detection performance than many mainstream multi-modal methods, outperforming the AP of MVX-Net by 1% on the medium and hard difficulty levels.
Keywords: 3D object detection, adaptive fusion, multi-modal data fusion, attention mechanism, multi-neighborhood features
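The adaptive fusion strategy above (learnable weights expressing each modality's contribution) can be illustrated with a softmax-weighted combination of per-modality feature vectors. Whether the paper normalizes its weights with a softmax is an assumption, and the feature values are invented:

```python
import math

def adaptive_fuse(modal_feats, logits):
    """Fuse per-modality feature vectors with softmax-normalized weights.
    In a real network the logits would be trained; they are fixed here."""
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]          # softmax: weights sum to 1
    dim = len(modal_feats[0])
    fused = [sum(w * f[i] for w, f in zip(weights, modal_feats))
             for i in range(dim)]
    return fused, weights

voxel_feat = [1.0, 0.0, 2.0]   # hypothetical LiDAR-voxel features
image_feat = [0.0, 1.0, 1.0]   # hypothetical image features
fused, w = adaptive_fuse([voxel_feat, image_feat], logits=[1.0, 0.0])
print([round(x, 3) for x in fused])  # [0.731, 0.269, 1.731]
```

With a higher logit, the voxel branch dominates the fused vector; during training the logits would shift toward whichever modality helps detection most.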
8. Porting and Implementation of phoneME Feature Based on Qt/Embedded
Authors: 高攀, 杨斌, 刘建敏. 《计算机技术与发展》 (Computer Technology and Development), 2011, Issue 1, pp. 31-34.
phoneME Feature is a high-performance Java virtual machine, and Qt/Embedded is a C++ graphical interface library for embedded systems. To run phoneME Feature on an ARM-Linux target platform equipped with the Qt/Embedded graphics library, the relationship between phoneME Feature and the Qt/Embedded library must be studied in depth, along with the methods and steps for compiling and porting phoneME Feature with a Qt/Embedded graphics interface on the target platform. The porting process mainly comprises setting up the build environment, compiling the PCSL (Portable Common Services Library), compiling the CLDC (Connected Limited Device Configuration), compiling the MIDP (Mobile Information Device Profile), and downloading the Java virtual machine to the target.
Keywords: Qt/Embedded, phoneME Feature, Java virtual machine, compilation, porting
9. MMGC-Net: Deep neural network for classification of mineral grains using multi-modal polarization images (Cited: 1)
Authors: Jun Shu, Xiaohai He, Qizhi Teng, Pengcheng Yan, Haibo He, Honggang Chen. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 6, pp. 3894-3909.
The multi-modal characteristics of mineral particles play a pivotal role in enhancing classification accuracy, which is critical for a profound understanding of the Earth's composition and effective exploitation and utilization of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and challenges in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. Initially, MMGC-Net employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images and ensure feature alignment. Subsequently, a cross-polarized intra-modal feature fusion module is designed to refine the spatiotemporal features extracted from the cross-polarized sequence images. Ultimately, an inter-modal feature fusion module integrates the two types of modal features to enhance classification precision. Quantitative and qualitative experimental results indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net demonstrates marked superiority in mineral particle multi-modal feature learning and on four classification evaluation metrics, and it shows better stability than existing models.
Keywords: mineral particles, multi-modal image classification, shared parameters, feature fusion, spatiotemporal features
10. Lazy learner text categorization algorithm based on embedded feature selection (Cited: 1)
Authors: Yan Peng, Zheng Xuefeng, Zhu Jianyong, Xiao Yunhong. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2009, Issue 3, pp. 651-659.
To avoid the curse of dimensionality, text categorization (TC) algorithms based on machine learning (ML) have to use a feature selection (FS) method to reduce the dimensionality of the feature space. Although widely used, the FS process generally causes information loss and thus has significant side effects on the overall performance of TC algorithms. On the basis of the sparsity characteristic of text vectors, a new TC algorithm based on lazy feature selection (LFS) is presented. As a new type of embedded feature selection approach, the LFS method can greatly reduce the dimension of features without any information loss, improving both the efficiency and performance of the algorithm. The experiments show that the new algorithm simultaneously achieves much higher performance and efficiency than several classical TC algorithms.
Keywords: machine learning, text categorization, embedded feature selection, lazy learner, cosine similarity
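A "lazy learner" here means instance-based classification: no model is trained in advance, and a new document is compared against stored labelled vectors at query time. A minimal sketch using cosine similarity over sparse term-weight dicts (toy data, 1-nearest-neighbour for brevity; the paper's exact classifier is not specified in the abstract):

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors given as {term: weight}
    dicts -- only shared terms contribute to the dot product."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(doc, labelled):
    """1-nearest-neighbour 'lazy' categorization by cosine similarity."""
    return max(labelled, key=lambda item: cosine(doc, item[0]))[1]

train = [({"goal": 1.0, "match": 2.0}, "sports"),
         ({"stock": 2.0, "market": 1.0}, "finance")]
print(classify({"match": 1.0, "stock": 0.5}, train))  # sports
```

The sparse-dict representation is what makes laziness cheap: only the terms a pair of documents shares are ever touched.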
11. Multi-Modal Pre-Synergistic Fusion Entity Alignment Based on Mutual Information Strategy Optimization
Authors: Huayu Li, Xinxin Chen, Lizhuang Tan, Konstantin I. Kostromitin, Athanasios V. Vasilakos, Peiying Zhang. Computers, Materials & Continua, 2025, Issue 11, pp. 4133-4153.
To address the challenge of missing modal information in entity alignment and to mitigate information loss or bias arising from modal heterogeneity during fusion, while also capturing shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph-structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multilevel feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that MPSEA achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared with existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
Keywords: knowledge graph, multi-modal entity alignment, feature fusion, pre-synergistic fusion
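Hits@1 and MRR, the metrics quoted above, are standard for entity alignment: each test entity's correct counterpart receives a rank among the candidates, and the metrics summarize those ranks. A quick sketch with hypothetical ranks:

```python
def hits_at_k(ranks, k):
    """Fraction of gold entities ranked within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank of the gold entities."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Hypothetical ranks of the correct counterpart for 5 test pairs
ranks = [1, 2, 1, 4, 1]
print(hits_at_k(ranks, 1), round(mrr(ranks), 3))  # 0.6 0.75
```

A gain of "7% in Hits@1" therefore means 7 more correct top-ranked matches per 100 test entities.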
12. Tri-M2MT: Multi-modalities based effective acute bilirubin encephalopathy diagnosis through multi-transformer using neonatal Magnetic Resonance Imaging
Authors: Kumar Perumal, Rakesh Kumar Mahendran, Arfat Ahmad Khan, Seifedine Kadry. CAAI Transactions on Intelligence Technology, 2025, Issue 2, pp. 434-449.
Acute Bilirubin Encephalopathy (ABE) is a significant threat to neonates, leading to disability and high mortality rates. Detecting and treating ABE promptly is important to prevent further complications and long-term issues. Recent studies have explored ABE diagnosis but often face limitations in classification due to reliance on a single modality of Magnetic Resonance Imaging (MRI). To tackle this problem, the authors propose a Tri-M2MT model for precise ABE detection using tri-modality MRI scans. The scans include T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and apparent diffusion coefficient maps, providing in-depth information. Initially, the tri-modality MRI scans are collected and preprocessed using an advanced Gaussian filter for noise reduction and Z-score normalisation for data standardisation. An advanced capsule network is utilised to extract relevant features, with the Snake Optimization Algorithm selecting optimal features based on feature correlation to minimise complexity and enhance detection accuracy. Furthermore, a multi-transformer approach is used to fuse features and identify feature correlations effectively. Finally, accurate ABE diagnosis is achieved through a SoftMax layer. The performance of the proposed Tri-M2MT model is evaluated on various metrics, including accuracy, specificity, sensitivity, F1-score, and ROC curve analysis, and the proposed methodology provides better performance than existing methodologies.
Keywords: Acute Bilirubin Encephalopathy (ABE) diagnosis, feature extraction, MRI, multi-modality, multi-transformer, neonatal
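Z-score normalisation, used above for data standardisation, rescales inputs to zero mean and unit variance before they reach the network. A minimal version (population standard deviation; the intensity values are toy numbers, not MRI data):

```python
import math

def z_score(values):
    """Standardize values to zero mean and unit variance
    (population standard deviation)."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:
        return [0.0] * n          # constant input: leave everything at 0
    return [(v - mean) / std for v in values]

normalized = z_score([10.0, 12.0, 14.0, 16.0])
print([round(v, 3) for v in normalized])  # [-1.342, -0.447, 0.447, 1.342]
```

In an imaging pipeline the same transform is applied per volume (or per channel) so that scans acquired with different scanners share a comparable intensity scale.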
13. Enhanced Multimodal Sentiment Analysis via Integrated Spatial Position Encoding and Fusion Embedding
Authors: Chenquan Gan, Xu Liu, Yu Tang, Xianrong Yu, Qingyi Zhu, Deepak Kumar Jain. Computers, Materials & Continua, 2025, Issue 12, pp. 5399-5421.
Multimodal sentiment analysis aims to understand emotions from text, speech, and video data. However, current methods often overlook the dominant role of text and suffer from feature loss during integration. Given the varying importance of each modality across contexts, a central challenge in multimodal sentiment analysis lies in maximizing the use of rich intra-modal features while minimizing information loss during fusion. In response to these limitations, we propose a novel framework that integrates spatial position encoding and fusion embedding modules. In our model, text is treated as the core modality, while speech and video features are selectively incorporated through a position-aware fusion process. The spatial position encoding strategy preserves the internal structural information of the speech and visual modalities, enabling the model to capture localized intra-modal dependencies that are often overlooked. This design enhances the richness and discriminative power of the fused representation, enabling more accurate and context-aware sentiment prediction. Finally, we conduct comprehensive evaluations on two widely recognized standard datasets, CMU-MOSI and CMU-MOSEI, to validate the proposed model. The experimental results demonstrate its good performance and effectiveness on sentiment analysis tasks.
Keywords: multimodal sentiment analysis, spatial position encoding, fusion embedding, feature loss reduction
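The paper's spatial position encoding is not detailed in the abstract; the standard sinusoidal encoding below illustrates the general idea of injecting position information into features. It is an assumption-level stand-in, not the authors' exact formulation:

```python
import math

def sinusoidal_encoding(num_positions, dim):
    """Standard sinusoidal position encoding: even dimensions use sine,
    odd dimensions use cosine, with geometrically spaced wavelengths."""
    table = []
    for pos in range(num_positions):
        row = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        table.append(row)
    return table

pe = sinusoidal_encoding(num_positions=4, dim=6)
print(len(pe), len(pe[0]))  # 4 6
```

Each feature at position `pos` is then summed (or concatenated) with row `pos` of this table, so the fusion stage can distinguish where in the sequence a speech or video frame came from.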
14. Advanced Feature Selection Techniques in Medical Imaging: A Systematic Literature Review
Authors: Sunawar Khan, Tehseen Mazhar, Naila Sammar Naz, Fahed Ahmed, Tariq Shahzad, Atif Ali, Muhammad Adnan Khan, Habib Hamam. Computers, Materials & Continua, 2025, Issue 11, pp. 2347-2401.
Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional FS challenges, necessitating advanced feature fusion strategies; multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, with the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, Deep Learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
Keywords: feature selection, medical imaging, deep learning, hybrid approaches, multi-modal imaging, quantum computing, explainable AI, computational efficiency, dimensionality reduction
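Of the three classical FS families named above, the filter approach is the simplest to illustrate: score each feature independently of any classifier and keep the best-scoring ones. A minimal variance-based filter (the toy matrix is invented; real medical-imaging pipelines use richer criteria such as mutual information):

```python
def variance_filter(feature_matrix, top_k):
    """Rank features by variance (a basic filter-method criterion)
    and return the indices of the top_k most variable ones."""
    n = len(feature_matrix)
    dims = len(feature_matrix[0])
    variances = []
    for j in range(dims):
        col = [row[j] for row in feature_matrix]
        mean = sum(col) / n
        variances.append(sum((v - mean) ** 2 for v in col) / n)
    order = sorted(range(dims), key=lambda j: variances[j], reverse=True)
    return sorted(order[:top_k])

# 3 samples x 3 features; feature 1 is constant and carries no information
X = [[1.0, 5.0, 0.1],
     [2.0, 5.0, 0.1],
     [3.0, 5.0, 0.2]]
print(variance_filter(X, top_k=1))  # [0]
```

Wrapper methods would instead re-train a classifier per candidate subset, and embedded methods fold selection into training itself, which is why filters remain the cheapest baseline.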
15. Morphological features of allogenic nerve segment in rats after subcutaneous embedment
Authors: Mingtang Gao, Dianming Jiang. Neural Regeneration Research (SCIE, CAS, CSCD), 2006, Issue 1, pp. 50-52.
BACKGROUND: Some studies demonstrate that allogenic peripheral nerve segments embedded subcutaneously significantly reduce lymphocyte infiltration and decrease the immunological reaction. OBJECTIVE: To observe the gross shape and the optical and electron microscope findings of allogenic nerve segments in rats 2 weeks after subcutaneous embedment, and to compare them with subcutaneous embedment of autologous nerve segments. DESIGN: A randomized and controlled experiment. SETTING: Department of Orthopaedics, Fifth People's Hospital of Zhengzhou; Department of Orthopaedics, First Hospital Affiliated to Chongqing Medical University. MATERIALS: Thirty adult healthy male Wistar rats with a body mass of (200±20) g were enrolled. Ten rats were chosen as donors for allogenic nerve transplantation. The other 20 rats were randomly divided into two groups, an allogenic nerve embedment group and an autologous nerve embedment group, with 10 rats in each. A JEM-1220 transmission electron microscope (Japan) and an Olympus BX50 optical microscope (Japan) were used. METHODS: The experiment was carried out at the laboratory of the Orthopaedic Department, Chongqing Medical University, from October 2000 to April 2002. ① The sciatic nerve of donor rats was cut off 5 mm distal to the pelvic strait, and a 15 mm sciatic nerve segment was taken from the lateral part as the graft. In the allogenic nerve embedment group, a 15 mm donor sciatic nerve segment was embedded in the posterior part of the right leg; in the autologous nerve embedment group, a 15 mm segment of the animal's own left sciatic nerve was embedded in the posterior side of the right leg. ② The subcutaneously embedded nerve segments were taken out 2 weeks after surgery for gross observation; 5 randomly chosen samples from each group were given haematoxylin-eosin staining and observed under the optical microscope (×400), and the other 5 samples were made into ultrathin sections (0.5 μm) and observed under the transmission electron microscope (×17 000). MAIN OUTCOME MEASURES: Gross shape and optical and electron microscope findings of the nerve segments in the two groups 2 weeks after subcutaneous embedment. RESULTS: ① Gross observation: the appearance of the nerve segments was similar between the two groups. ② Optical observation: medullary sheath denaturation, axonotmesis, vascular engorgement, desmoplasia of the adventitia, and infiltration of inflammatory cells were found in both groups; the inflammatory reaction was slightly more severe in the allogenic nerve embedment group than in the autologous nerve embedment group. ③ Electron microscopy: similar cataplasia and denaturation of the medullary sheath and cataplasia of Schwann cells were found in both groups. CONCLUSION: Some inflammatory reaction occurs after allogenic nerve embedment, but the activity of Schwann cells is similar to that after autologous nerve embedment.
Keywords: morphological features of allogenic nerve segment in rats after subcutaneous embedment
16. Short-Text Feature Expansion and Classification Based on Word Embedding (Cited: 8)
Authors: 孟欣, 左万利. 《小型微型计算机系统》 (Journal of Chinese Computer Systems), 2017, Issue 8, pp. 1712-1717.
The recent surge of short texts has challenged traditional automatic text classification techniques. To address the sparse features and low feature coverage of short texts, a classification method that expands short-text features using word embeddings is proposed. A word embedding is a distributed representation of a word in the form of a low-dimensional continuous vector, and a well-trained word embedding model encodes many linguistic rules and patterns. This paper proposes a new text-feature expansion method that exploits the spatial distribution of word embeddings and the linear regularities they contain. With the expanded features, short-text classification experiments were conducted on two datasets, Google search snippets and China Daily news abstracts. Compared with a classification method using only bag-of-words text features, accuracy improved by 8.59% and 7.42%, respectively.
Keywords: word embedding, text features, semantic inference, short text classification
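The feature-expansion idea can be sketched: look up each short-text token's nearest neighbours in embedding space and append them as extra features, so that a sparse text gains coverage. The three 2-d vectors are invented for illustration; a real system would use trained embeddings such as word2vec:

```python
import math

def _cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def expand_features(tokens, embeddings, top_n=1):
    """Expand a short text's token list with each token's top_n nearest
    neighbours (by cosine similarity) in the embedding space."""
    expanded = list(tokens)
    for t in tokens:
        if t not in embeddings:
            continue                      # out-of-vocabulary: nothing to add
        neighbours = sorted((w for w in embeddings if w != t),
                            key=lambda w: _cos(embeddings[t], embeddings[w]),
                            reverse=True)
        expanded.extend(neighbours[:top_n])
    return expanded

# Toy 2-d "embeddings" (hypothetical values)
vectors = {"soccer": [0.9, 0.1], "football": [0.85, 0.2], "bank": [0.1, 0.9]}
print(expand_features(["soccer"], vectors))  # ['soccer', 'football']
```

After expansion, the classifier sees "football" alongside "soccer", so a training document mentioning only one of the two can still match.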
17. Detecting Local Manifold Structure for Unsupervised Feature Selection (Cited: 3)
Authors: FENG Ding-Cheng, CHEN Feng, XU Wen-Li. 《自动化学报》 (Acta Automatica Sinica, EI, CSCD), 2014, Issue 10, pp. 2253-2261.
Unsupervised feature selection is fundamental in statistical pattern recognition and has drawn persistent attention in the past several decades. Recently, much work has shown that feature selection can be formulated as nonlinear dimensionality reduction with discrete constraints. This line of research emphasizes manifold learning techniques, where feature selection and learning can be studied under the manifold assumption on the data distribution. Many existing feature selection methods, such as the Laplacian score, SPEC (spectrum decomposition of the graph Laplacian), the TR (trace ratio) criterion, MSFS (multi-cluster feature selection), and EVSC (eigenvalue sensitive criterion), apply basic properties of the graph Laplacian and select the optimal feature subsets that best preserve the manifold structure defined on it. In this paper, we propose a new feature selection perspective from locally linear embedding (LLE), another popular manifold learning method. The main difficulty of using LLE for feature selection is that its optimization involves quadratic programming and eigenvalue decomposition, both of which are continuous procedures, unlike discrete feature selection. We prove that the LLE objective can be decomposed with respect to data dimensionalities in the subset selection problem, which also facilitates constructing better coordinates from the data using principal component analysis (PCA). Based on these results, we propose a novel unsupervised feature selection algorithm, called locally linear selection (LLS), to select a feature subset representing the underlying data manifold. The local relationship among samples is computed from the LLE formulation and then used to estimate the contribution of each individual feature to the underlying manifold structure. These contributions, represented as LLS scores, are ranked and selected as the candidate solution to feature selection. We further develop a locally linear rotation-selection (LLRS) algorithm, which extends LLS to identify the optimal coordinate subset in a new space. Experimental results on real-world datasets show that our method can be more effective than Laplacian eigenmap based feature selection methods.
Keywords: Manifold learning; Laplacian eigenmap; locally linear embedding (LLE); feature selection
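The LLS scoring idea can be sketched as follows. This is a minimal, hypothetical reading of the criterion rather than the authors' implementation: LLE reconstruction weights are computed from each sample's neighborhood, and each feature is then scored by its reconstruction residual under those weights, where a lower residual means the feature better preserves the local manifold structure. The function name `lls_scores` and its parameters are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lls_scores(X, n_neighbors=5, reg=1e-3):
    """Score each feature by its LLE reconstruction residual (lower = better)."""
    n, d = X.shape
    # k+1 neighbors including the point itself; drop the self column.
    idx = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    idx = idx.kneighbors(X, return_distance=False)[:, 1:]

    W = np.zeros((n, n))
    for i in range(n):
        Z = X[idx[i]] - X[i]                 # center neighbors on x_i
        C = Z @ Z.T                          # local covariance
        C = C + reg * np.trace(C) * np.eye(n_neighbors)  # regularize
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, idx[i]] = w / w.sum()           # reconstruction weights sum to 1

    # Per-feature residual of reconstructing X from its neighbors.
    R = X - W @ X
    return (R ** 2).sum(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
scores = lls_scores(X)
selected = np.argsort(scores)[:3]  # keep the 3 best-preserving features
```

Ranking by ascending score and keeping the top subset mirrors the ranked-selection step the abstract describes.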
18. Image feature optimization based on nonlinear dimensionality reduction (cited 3 times)
Authors: Rong ZHU, Min YAO. 《Journal of Zhejiang University-Science A (Applied Physics & Engineering)》, SCIE, EI, CAS, CSCD, 2009, Issue 12, pp. 1720-1737 (18 pages)
Image feature optimization is an important means to deal with high-dimensional image data in image semantic understanding and its applications. We formulate image feature optimization as the establishment of a mapping between high- and low-dimensional space via a five-tuple model. Nonlinear dimensionality reduction based on manifold learning provides a feasible way for solving such a problem. We propose a novel globular neighborhood based locally linear embedding (GNLLE) algorithm using neighborhood update and an incremental neighbor search scheme, which not only can handle sparse datasets but also has strong anti-noise capability and good topological stability. Given that the distance measure adopted in nonlinear dimensionality reduction is usually based on pairwise similarity calculation, we also present a globular neighborhood and path clustering based locally linear embedding (GNPCLLE) algorithm based on path-based clustering. Due to its full consideration of correlations between image data, GNPCLLE can eliminate the distortion of the overall topological structure within the dataset on the manifold. Experimental results on two image sets show the effectiveness and efficiency of the proposed algorithms.
Keywords: Image feature optimization; nonlinear dimensionality reduction; manifold learning; locally linear embedding (LLE)
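The GNLLE and GNPCLLE variants are not available in common libraries, but standard LLE from scikit-learn illustrates the basic high- to low-dimensional mapping step they build on; the random data here is a stand-in for image feature vectors.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Standard LLE as a stand-in: the paper's globular-neighborhood and
# incremental-neighbor-search refinements are not implemented here.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))   # e.g. 200 image feature vectors

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
X_low = lle.fit_transform(X)     # high-dimensional -> 2-D embedding
```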
19. Multi-modality hierarchical fusion network for lumbar spine segmentation with magnetic resonance images (cited 1 time)
Authors: Han Yan, Guangtao Zhang, Wei Cui, Zhuliang Yu. 《Control Theory and Technology》, EI, CSCD, 2024, Issue 4, pp. 612-622 (11 pages)
For the analysis of spinal and disc diseases, automated tissue segmentation of the lumbar spine is vital. Due to the continuous and concentrated location of the target, the abundance of edge features, and individual differences, conventional automatic segmentation methods perform poorly. Since the success of deep learning in the segmentation of medical images has been shown in the past few years, it has been applied to this task in a number of ways. The multi-scale and multi-modal features of lumbar tissues, however, are rarely explored by deep learning methodologies. Because of the limited availability of medical images, it is crucial to effectively fuse various modes of data collection for model training to alleviate the problem of insufficient samples. In this paper, we propose a novel multi-modality hierarchical fusion network (MHFN) for improving lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images. An adaptive group fusion module (AGFM) is introduced in this paper to fuse features from various modes to extract cross-modality features that could be valuable. Furthermore, to combine features from low to high levels of cross-modality, we design a hierarchical fusion structure based on AGFM. Compared to the other feature fusion methods, AGFM is more effective based on experimental results on multi-modality MR images of the lumbar spine. To further enhance segmentation accuracy, we compare our network with baseline fusion structures. Compared to the baseline fusion structures (input-level: 76.27%, layer-level: 78.10%, decision-level: 79.14%), our network was able to segment fractured vertebrae more accurately (85.05%).
Keywords: Lumbar spine segmentation; deep learning; multi-modality fusion; feature fusion
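The AGFM in the paper is a learned network module; as a rough illustration of the cross-modality fusion idea only, the sketch below weights each modality's feature map by a softmax over its global average activation and sums them. The function name and the weighting rule are hypothetical stand-ins, not the paper's module.

```python
import numpy as np

def adaptive_group_fusion(feats):
    """Fuse per-modality feature maps with softmax attention weights.

    feats: list of arrays, each (C, H, W), one per MR modality.
    A hand-rolled stand-in for the paper's learned AGFM.
    """
    gap = np.array([f.mean() for f in feats])  # global average pooling
    w = np.exp(gap - gap.max())
    w = w / w.sum()                            # softmax over modalities
    return sum(wi * f for wi, f in zip(w, feats))

t1 = np.random.default_rng(2).normal(size=(8, 32, 32))  # e.g. T1-weighted
t2 = np.random.default_rng(3).normal(size=(8, 32, 32))  # e.g. T2-weighted
fused = adaptive_group_fusion([t1, t2])
```

In the real network such weights would be produced by learned layers and applied per channel group rather than globally.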
20. Graph-Based Feature Learning for Cross-Project Software Defect Prediction (cited 1 time)
Authors: Ahmed Abdu, Zhengjun Zhai, Hakim A. Abdo, Redhwan Algabri, Sungon Lee. 《Computers, Materials & Continua》, SCIE, EI, 2023, Issue 10, pp. 161-180 (20 pages)
Cross-project software defect prediction (CPDP) aims to enhance defect prediction in target projects with limited or no historical data by leveraging information from related source projects. The existing CPDP approaches rely on static metrics or dynamic syntactic features, which have shown limited effectiveness in CPDP due to their inability to capture higher-level system properties, such as complex design patterns, relationships between multiple functions, and dependencies in different software projects, that are important for CPDP. This paper introduces a novel approach, a graph-based feature learning model for CPDP (GB-CPDP), that utilizes NetworkX to extract features and learn representations of program entities from control flow graphs (CFGs) and data dependency graphs (DDGs). These graphs capture the structural and data dependencies within the source code. The proposed approach employs Node2Vec to transform CFGs and DDGs into numerical vectors and leverages Long Short-Term Memory (LSTM) networks to learn predictive models. The process involves graph construction, feature learning through graph embedding and LSTM, and defect prediction. Experimental evaluation using nine open-source Java projects from the PROMISE dataset demonstrates that GB-CPDP outperforms state-of-the-art CPDP methods in terms of F1-measure and Area Under the Curve (AUC). The results showcase the effectiveness of GB-CPDP in improving the performance of cross-project defect prediction.
Keywords: Cross-project defect prediction; graph features; deep learning; graph embedding
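The graph-construction step can be illustrated with NetworkX, which the paper names for feature extraction. The toy control-flow graph and the per-node degree features below are illustrative stand-ins: the paper feeds Node2Vec embeddings, not raw degrees, into its LSTM.

```python
import networkx as nx

# Toy CFG for a function with one if/else branch; node names are
# illustrative, not taken from the paper.
cfg = nx.DiGraph()
cfg.add_edges_from([
    ("entry", "cond"),
    ("cond", "then"), ("cond", "else"),
    ("then", "exit"), ("else", "exit"),
])

# Simple structural features per node (a crude stand-in for the
# Node2Vec vectors used in GB-CPDP).
features = {n: (cfg.in_degree(n), cfg.out_degree(n)) for n in cfg.nodes}
```

In the full pipeline these per-node vectors would be aggregated per program entity and passed to an LSTM-based classifier.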