Journal Articles
58 articles found
Reconstructing the 3D digital core with a fully convolutional neural network (Cited by: 1)
1
Authors: Li Qiong, Chen Zheng, He Jian-Jun, Hao Si-Yu, Wang Rui, Yang Hao-Tao, Sun Hua-Jun. Applied Geophysics, SCIE CSCD, 2020, No. 3, pp. 401-410 (10 pages)
In this paper, the complete process of constructing a 3D digital core with a fully convolutional neural network is described carefully. A large number of sandstone computed tomography (CT) images are used as training input for a fully convolutional neural network model. This model is used to reconstruct the three-dimensional (3D) digital core of Berea sandstone based on a small number of CT images. The Hamming distance together with the Minkowski functions for porosity, average volume specific surface area, average curvature, and connectivity of both the real core and the digital reconstruction are used to evaluate the accuracy of the proposed method. The results show that the reconstruction achieved relative errors of 6.26%, 1.40%, 6.06%, and 4.91% for the four Minkowski functions and a Hamming distance of 0.04479. This demonstrates that the proposed method can not only reconstruct the physical properties of real sandstone but can also restore the real characteristics of pore distribution in sandstone, providing a new way to characterize the internal microstructure of rocks.
Keywords: fully convolutional neural network; 3D digital core; numerical simulation; training set
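The evaluation metrics above can be illustrated on synthetic data. The sketch below computes porosity (the first Minkowski functional) and the normalized Hamming distance between two binary 3D volumes; the random arrays are stand-ins, not the paper's Berea sandstone data.

```python
import numpy as np

def porosity(vol):
    """Fraction of pore voxels (value 1) in a binary volume."""
    return float(vol.mean())

def hamming_distance(a, b):
    """Normalized Hamming distance: fraction of disagreeing voxels."""
    return float(np.mean(a != b))

# Synthetic stand-ins for a real core and its reconstruction.
rng = np.random.default_rng(0)
real = (rng.random((32, 32, 32)) < 0.2).astype(np.uint8)
recon = real.copy()
flip = rng.random(real.shape) < 0.05      # disturb ~5% of voxels
recon[flip] ^= 1

print(round(abs(porosity(real) - porosity(recon)), 3))
print(round(hamming_distance(real, recon), 3))
```

On such data, flipping about 5% of the voxels yields a Hamming distance near 0.05, the same scale as the 0.04479 reported above.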
Audiovisual speech recognition based on a deep convolutional neural network (Cited by: 2)
2
Authors: Shashidhar Rudregowda, Sudarshan Patilkulkarni, Vinayakumar Ravi, Gururaj H.L., Moez Krichen. Data Science and Management, 2024, No. 1, pp. 25-34 (10 pages)
Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying using visual information, primarily lip movements. In this study, we created a custom dataset for Indian English linguistics and categorized it into three main categories: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using the mel-frequency cepstral coefficient (MFCC), and classification was performed using a one-dimensional convolutional neural network. Visual features were extracted using Dlib, and visual speech was then classified using a long short-term memory (LSTM) recurrent neural network. Finally, integration was performed using a deep convolutional network. The audio speech of Indian English was successfully recognized with accuracies of 93.67% and 91.53%, respectively, using testing data after 200 epochs. The training accuracy for visual speech recognition on the Indian English dataset was 77.48% and the test accuracy was 76.19% after 60 epochs. After integration, the training and testing accuracies of audiovisual speech recognition on the Indian English dataset were 94.67% and 91.75%, respectively.
Keywords: audiovisual speech recognition; custom dataset; 1D convolutional neural network (CNN); deep CNN (DCNN); long short-term memory (LSTM); lipreading; Dlib; mel-frequency cepstral coefficient (MFCC)
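As a toy illustration of the audio branch described above, the sketch below applies a single 1D convolution layer (with ReLU) over a sequence of MFCC-like frames, the way a Conv1D classifier slides over time; the layer sizes and random inputs are illustrative, not the authors' architecture.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid 1D convolution over time: x is (T, F), kernels (K, W, F)."""
    K, W, F = kernels.shape
    T = x.shape[0] - W + 1
    out = np.empty((T, K))
    for t in range(T):
        window = x[t:t + W]                  # (W, F) slice of frames
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0)                # ReLU

# Toy "MFCC" input: 100 frames x 13 coefficients (random stand-in).
rng = np.random.default_rng(1)
mfcc = rng.standard_normal((100, 13))
feat = conv1d(mfcc, rng.standard_normal((8, 5, 13)) * 0.1, np.zeros(8))
logits = feat.mean(axis=0)                   # global average pooling over time
print(feat.shape)   # (96, 8)
```

A width-5 kernel over 100 frames yields 96 output steps per filter; global average pooling then gives one score per filter for the classifier head.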
Image-Based Flow Prediction of Vocal Folds Using 3D Convolutional Neural Networks
3
Authors: Yang Zhang, Tianmei Pu, Jiasen Xu, Chunhua Zhou. Journal of Bionic Engineering, SCIE EI CSCD, 2024, No. 2, pp. 991-1002 (12 pages)
In this work, a three-dimensional (3D) convolutional neural network (CNN) model based on image slices of various normal and pathological vocal folds is proposed for accurate and efficient prediction of glottal flows. The 3D CNN model is composed of a feature extraction block and a regression block. The feature extraction block is capable of learning low-dimensional features from the high-dimensional image data of the glottal shape, and the regression block is employed to flatten the output from the feature extraction block and obtain the desired glottal flow data. The input image data is the condensed set of 2D image slices captured in the axial plane of the 3D vocal folds, where these glottal shapes are synthesized based on the equations of normal vibration modes. The output flow data is the corresponding flow rate, averaged glottal pressure, and nodal pressure distributions over the glottal surface. The 3D CNN model is built to establish the mapping between the input image data and the output flow data. The ground-truth flow variables of each glottal shape in the training and test datasets are obtained by a high-fidelity sharp-interface immersed-boundary solver. The proposed model is trained to predict the concerned flow variables for glottal shapes in the test set. The present 3D CNN model is more efficient than traditional computational fluid dynamics (CFD) models while the accuracy can still be retained, and more powerful than previous data-driven prediction models because more details of the glottal flow can be provided. The prediction performance of the trained 3D CNN model in accuracy and efficiency indicates that this model could be promising for future clinical applications.
Keywords: vocal folds; computational fluid dynamics; machine learning; 3D convolutional neural network
Review of Artificial Intelligence for Oil and Gas Exploration: Convolutional Neural Network Approaches and the U-Net 3D Model
4
Author: Weiyan Liu. Open Journal of Geology, CAS, 2024, No. 4, pp. 578-593 (16 pages)
Deep learning, especially through convolutional neural networks (CNN) such as the U-Net 3D model, has revolutionized fault identification from seismic data, representing a significant leap over traditional methods. Our review traces the evolution of CNN, emphasizing the adaptation and capabilities of the U-Net 3D model in automating seismic fault delineation with unprecedented accuracy. We find: 1) The transition from basic neural networks to sophisticated CNN has enabled remarkable advancements in image recognition, which are directly applicable to analyzing seismic data. The U-Net 3D model, with its innovative architecture, exemplifies this progress by providing a method for detailed and accurate fault detection with reduced manual interpretation bias. 2) The U-Net 3D model has demonstrated its superiority over traditional fault identification methods in several key areas: it has enhanced interpretation accuracy, increased operational efficiency, and reduced the subjectivity of manual methods. 3) Despite these achievements, challenges such as the need for effective data preprocessing, acquisition of high-quality annotated datasets, and achieving model generalization across different geological conditions remain. Future research should therefore focus on developing more complex network architectures and innovative training strategies to refine fault identification performance further. Our findings confirm the transformative potential of deep learning, particularly CNN like the U-Net 3D model, in geosciences, advocating for its broader integration to revolutionize geological exploration and seismic analysis.
Keywords: deep learning; convolutional neural networks (CNN); seismic fault identification; U-Net 3D model; geological exploration
Using Neural Networks to Predict Secondary Structure for Protein Folding (Cited by: 1)
5
Authors: Ali Abdulhafidh Ibrahim, Ibrahim Sabah Yasseen. Journal of Computer and Communications, 2017, No. 1, pp. 1-8 (8 pages)
Protein Secondary Structure Prediction (PSSP) is considered one of the major challenging tasks in bioinformatics, so many solutions have been proposed to solve this problem by trying to achieve more accurate prediction results. The goal of this paper is to develop and implement an intelligent system to predict the secondary structure of a protein from its primary amino acid sequence using five models of Neural Network (NN). These models are the Feed Forward Neural Network (FNN), Learning Vector Quantization (LVQ), Probabilistic Neural Network (PNN), Convolutional Neural Network (CNN), and CNN fine-tuning for PSSP. To evaluate our approaches, two datasets have been used. The first one contains 114 protein samples, and the second one contains 1845 protein samples.
Keywords: protein secondary structure prediction (PSSP); neural network (NN); α-helix (H); β-sheet (E); coil (C); feed-forward neural network (FNN); learning vector quantization (LVQ); probabilistic neural network (PNN); convolutional neural network (CNN)
Prediction of O3 concentration in Linfen City based on causal analysis and a CNN model
6
Authors: Song Zhen, Ying Na, Wang Jingxu, Zhu Xiangzhe, Xue Zhigang. 中国环境监测 (Environmental Monitoring in China), Peking University Core, 2025, Suppl. 1, pp. 34-40 (7 pages)
In recent years, O3 concentrations in China have risen markedly. Linfen is both one of China's three major coking-coal production bases and a key region for air pollution prevention and control, so research on predicting O3 concentrations in Linfen is of great significance for O3 pollution control and further air quality improvement in the region. Based on pollutant monitoring data and meteorological data from the national control stations in Linfen during 2020-2022, a causal analysis method was used to study the spatial distribution of and connections between O3 levels across stations, and a convolutional neural network (CNN) model was used to predict future O3 concentrations. The results show that causal analysis can screen spatial features for the model, effectively improving the prediction accuracy of the CNN-1 model built with those spatial features. Significant O3 transport patterns exist among the Linfen stations: the Shiwei and Chengnan stations strongly influence the other stations, while the Lin'gang Hospital station is little affected by the others. The constructed CNN model fits better in summer and autumn. Reducing O3 concentrations at the Shiwei and Chengnan stations can effectively improve air quality at the Lin'gang Hospital station. The method can accurately identify O3 transport sources and provides technical support for accurately predicting and responding to O3 pollution in advance.
Keywords: O3; convergent cross mapping; spatial features; convolutional neural network; concentration prediction
An image segmentation method based on an adaptive multi-objective evolutionary CNN (Cited by: 11)
7
Authors: Wang Wei, Wang Xianpeng, Song Xiangman. 控制与决策 (Control and Decision), EI CSCD, Peking University Core, 2024, No. 4, pp. 1185-1193 (9 pages)
Convolutional neural networks have become powerful segmentation models, but they are usually designed by hand, which takes considerable time and may lead to large, complex networks. There is growing interest in automatically designing efficient network architectures that can accurately segment images in specific domains, yet most existing methods either do not consider building more flexible network architectures or do not optimize the model over multiple objectives. This paper therefore proposes an adaptive multi-objective evolutionary convolutional neural architecture search algorithm, called AdaMo-ECNAS, for domain-specific image segmentation, which considers multiple performance metrics during evolution and adapts to a specific dataset through multi-objective optimization of the model. AdaMo-ECNAS can construct flexible and varied segmentation models whose network architectures and hyperparameters are found by a multi-objective evolutionary algorithm; the algorithm solves a three-objective evolutionary problem based on adaptive PBI, namely improving the F1-score of the predicted segmentation, minimizing computational cost, and maximally exploiting additional training potential. AdaMo-ECNAS is evaluated on two real datasets, and the results show that the proposed method is highly competitive with, and in some cases surpasses, other state-of-the-art algorithms.
Keywords: convolutional neural network; neural architecture search; multi-objective optimization; decomposition-based multi-objective evolutionary algorithm; adaptivity; image segmentation
A CNN-based method for identifying encrypted C&C communication traffic (Cited by: 17)
8
Authors: Cheng Hua, Xie Jinxin, Chen Lihuang. 计算机工程 (Computer Engineering), CAS CSCD, Peking University Core, 2019, No. 8, pp. 31-34, 41 (5 pages)
To identify the encrypted command-and-control (C&C) traffic of malware accurately, the https communication processes of normal web browsing and of C&C communication are analyzed, the server-independence characteristic of malware C&C communication is identified, and a modeling method for https communication sequences is proposed. Given the behavioral characteristics of encrypted communication, the encrypted traffic is vectorized by representing the hexadecimal characters of the ciphertext as vectors, and a multi-window convolutional neural network extracts the features of the encrypted C&C communication pattern, enabling identification and classification of encrypted C&C data flows. Experimental results show that the method identifies encrypted malware C&C traffic with an accuracy of 91.07%.
Keywords: encrypted traffic; C&C communication; https communication; convolutional neural network; ciphertext character representation
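The vectorization step described above can be sketched in a few lines: each hexadecimal character of the ciphertext becomes a one-hot vector, giving a fixed-size matrix a multi-window 1D CNN could consume. The function name, payload, and sizes below are illustrative, not the paper's.

```python
# Hypothetical sketch: one-hot encode the hex characters of a payload.
HEX = "0123456789abcdef"

def hex_one_hot(payload: bytes, max_len: int = 32):
    """Map payload -> list of 16-dim one-hot vectors, padded/truncated."""
    chars = payload.hex()[:max_len]
    vecs = [[1.0 if c == h else 0.0 for h in HEX] for c in chars]
    vecs += [[0.0] * 16] * (max_len - len(vecs))   # zero-pad to fixed length
    return vecs

sample = bytes([0x16, 0x03, 0x01, 0x02])           # e.g. a TLS record prefix
mat = hex_one_hot(sample)
print(len(mat), len(mat[0]))   # 32 16
```

Stacking such matrices for many flows yields the fixed-shape input tensor a convolutional classifier expects.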
BTDGCNN: a BallTree dynamic graph convolutional neural network for the topological structure of 3D point clouds (Cited by: 4)
9
Authors: Zhang Xuedian, Fang Hui. 小型微型计算机系统 (Journal of Chinese Computer Systems), CSCD, Peking University Core, 2022, No. 11, pp. 2342-2347 (6 pages)
When point cloud convolutional networks segment and classify point clouds, they extract per-point features independently and ignore the geometric relations between points, losing many local features. Converting the sparse, unstructured, unordered point cloud into another input representation, meanwhile, makes the data far larger and convolution less efficient. To address this, a BallTree dynamic graph convolutional neural network oriented to the topological structure of 3D point clouds is constructed. A BallTree transformation network (Bat-Net) spatially transforms the initially unordered point cloud, recovering its topological structure and distance vectors and strengthening the relations among points; three BAT edge-convolution modules (BallTree edge convolution network) then enhance its representational power for classification and segmentation tasks. Experimental results show that the method outperforms five other methods in classification on the ModelNet40 dataset, improving accuracy by 4.4%, 2.9%, 1.3%, 2%, and 1.4%, respectively, and improves the mean intersection-over-union for part segmentation on the ShapeNet Parts dataset by 1.7%, 0.3%, 0.3%, 0.3%, and 0.3%, effectively improving 3D point cloud classification and segmentation performance.
Keywords: 3D point cloud; graph convolutional neural network; classification; segmentation
Fast 3D-CNN combined with depthwise separable convolution for hyperspectral image classification (Cited by: 2)
10
Authors: Wang Yan, Liang Qi. 计算机科学与探索 (Journal of Frontiers of Computer Science and Technology), CSCD, Peking University Core, 2022, No. 12, pp. 2860-2869 (10 pages)
To address the insufficient extraction of spatial-spectral features and the large parameter counts and computational complexity caused by too many network layers when convolutional neural networks extract features from and classify hyperspectral images, a lightweight convolutional model combining a fast three-dimensional convolutional neural network (3D-CNN) with depthwise separable convolution (DSC) is proposed. The input data are first reduced in dimension by incremental principal component analysis (IPCA). The input pixels are then split into small overlapping 3D patches, ground labels are formed on each patch from its center pixel, and 3D kernels convolve the patches into continuous 3D feature maps that preserve spatial-spectral features. The 3D-CNN extracts spatial and spectral features simultaneously, and depthwise separable convolution is added to the 3D convolutions to re-extract spatial features, enriching the spatial-spectral features while reducing the number of parameters, thereby reducing computation time and also improving classification accuracy. The proposed model is validated on the public Indian Pines, Salinas Scene, and University of Pavia datasets and compared with other classical classification methods. Experimental results show that it not only greatly reduces the number of learnable parameters and lowers model complexity but also performs well in classification, with overall accuracy (OA), average accuracy (AA), and the Kappa coefficient all above 99%.
Keywords: hyperspectral image classification; spatial-spectral feature extraction; 3D convolutional neural network (3D-CNN); depthwise separable convolution (DSC); deep learning
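The parameter saving from depthwise separable convolution can be seen with a quick count. The sketch below compares a standard k x k convolution with its depthwise-plus-pointwise factorization for illustrative channel sizes (not the paper's exact layers), ignoring biases:

```python
# Parameter count: standard vs. depthwise separable convolution.
def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    depthwise = c_in * k * k          # one k x k filter per input channel
    pointwise = c_in * c_out          # 1 x 1 convolution mixes channels
    return depthwise + pointwise

std = standard_conv_params(64, 128, 3)      # 73728
sep = separable_conv_params(64, 128, 3)     # 576 + 8192 = 8768
print(std, sep, round(std / sep, 1))        # ~8.4x fewer parameters
```

The same factorization applies per-slice in the 3D case, which is why adding DSC to a 3D-CNN cuts the learnable parameter count so sharply.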
Enhancing SS-OCT 3D image reconstruction: A real-time system with stripe artifact suppression and GPU parallel acceleration
11
Author: Dandan Liu. 虚拟现实与智能硬件 (Virtual Reality & Intelligent Hardware), 2026, No. 1, pp. 115-130 (16 pages)
Optical coherence tomography (OCT), particularly swept-source OCT, is widely employed in medical diagnostics and industrial inspections owing to its high-resolution imaging capabilities. However, swept-source OCT 3D imaging often suffers from stripe artifacts caused by unstable light sources, system noise, and environmental interference, posing challenges to real-time processing of large-scale datasets. To address this issue, this study introduces a real-time reconstruction system that integrates stripe-artifact suppression and parallel computing on a graphics processing unit (GPU). The approach employs a frequency-domain filtering algorithm with adaptive anti-suppression parameters, dynamically adjusted through an image quality evaluation function and optimized using a convolutional neural network for complex frequency-domain feature learning. Additionally, a GPU-integrated 3D reconstruction framework is developed, enhancing data-processing throughput and real-time performance via a dual-queue decoupling mechanism. Experimental results demonstrate significant improvements in structural similarity (0.92), peak signal-to-noise ratio (31.62 dB), and stripe suppression ratio (15.73 dB) compared with existing methods. On the RTX 4090 platform, the proposed system achieved an end-to-end delay of 94.36 ms, a frame rate of 10.3 frames per second, and a throughput of 121.5 million voxels per second, effectively suppressing artifacts while preserving image details and enhancing real-time 3D reconstruction performance.
Keywords: stripe artifact suppression; 3D reconstruction; GPU parallel computing; adaptive frequency-domain filtering; convolutional neural network
CurveNet: Curvature-Based Multitask Learning Deep Networks for 3D Object Recognition (Cited by: 4)
12
Authors: A.A.M. Muzahid, Wanggen Wan, Ferdous Sohel, Lianyao Wu, Li Hou. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2021, No. 6, pp. 1177-1187 (11 pages)
In computer vision, 3D object recognition is one of the most important tasks for many real-world applications. Three-dimensional convolutional neural networks (CNNs) have demonstrated their advantages in 3D object recognition. In this paper, we propose to use the principal curvature directions of 3D objects (using a CAD model) to represent the geometric features as inputs for the 3D CNN. Our framework, namely CurveNet, learns perceptually relevant salient features and predicts object class labels. Curvature directions incorporate complex surface information of a 3D object, which helps our framework to produce more precise and discriminative features for object recognition. Multitask learning is inspired by sharing features between two related tasks, where we consider pose classification as an auxiliary task to enable our CurveNet to better generalize object label classification. Experimental results show that our proposed framework using curvature vectors performs better than voxels as an input for 3D object classification. We further improved the performance of CurveNet by combining two networks with both curvature direction and voxels of a 3D object as the inputs. A Cross-Stitch module was adopted to learn effective shared features across multiple representations. We evaluated our methods using three publicly available datasets and achieved competitive performance in the 3D object recognition task.
Keywords: 3D shape analysis; convolutional neural network; DNNs; object classification; volumetric CNN
Partial discharge fault recognition from multi-source information based on CNN and D-S evidence theory (Cited by: 22)
13
Authors: Wang Lei, Zhang Lei, Niu Rongze, Sun Qian, Li Fengjun, Zhang Zhousheng. 电力工程技术 (Electric Power Engineering Technology), Peking University Core, 2022, No. 5, pp. 172-179 (8 pages)
Partial discharge fault recognition based on the fusion of multiple information sources is important for improving the accuracy and fault tolerance of fault identification. Taking typical partial discharge types in switchgear as the recognition targets, this paper sets up four typical discharge models (corona discharge, surface discharge, floating discharge, and void discharge) and collects the discharge signals they produce using the ultrasonic (Ultra) method, the very-high/ultra-high-frequency (V-UHF) method, and the pulse current method (PCM). A deep convolutional neural network (CNN) is first trained on the measurement data from each sensor, and Dempster-Shafer (D-S) evidence theory then fuses the recognition results from the multiple sources to make the final decision. The results show that, compared with single-source recognition, multi-source recognition is more accurate; when one of the sources misjudges, the discharge type can still be identified correctly, giving better fault tolerance and good overall recognition performance.
Keywords: partial discharge; fault recognition; deep convolutional neural network (CNN); Dempster-Shafer (D-S) evidence theory; multi-source information; information fusion
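As a minimal illustration of the fusion step, the sketch below applies Dempster's combination rule to two sensors' mass functions, restricted to singleton hypotheses (the full rule also handles composite sets); the numbers are toy values, not the paper's data.

```python
# Dempster's rule over singleton hypotheses: multiply agreeing masses,
# discard conflicting mass, and renormalize.
def dempster_combine(m1, m2):
    classes = m1.keys()
    joint = {c: m1[c] * m2[c] for c in classes}     # agreeing mass
    conflict = 1.0 - sum(joint.values())            # mass on disagreements
    return {c: joint[c] / (1.0 - conflict) for c in classes}

# Sensor A leans strongly "corona"; sensor B agrees but less confidently.
a = {"corona": 0.7, "surface": 0.2, "floating": 0.1}
b = {"corona": 0.5, "surface": 0.4, "floating": 0.1}
fused = dempster_combine(a, b)
print(max(fused, key=fused.get))   # corona
```

Because both sensors put most mass on the same class, the fused belief in it is sharper than either source alone, which is the behaviour the multi-source scheme relies on.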
Behavior recognition algorithm based on the improved R3D and LSTM network fusion (Cited by: 1)
14
Authors: Wu Jin, An Yiyuan, Dai Wei, Zhao Bo. High Technology Letters, EI CAS, 2021, No. 4, pp. 381-387 (7 pages)
Because behavior recognition is based on video frame sequences, this paper proposes a behavior recognition algorithm that combines a 3D residual convolutional neural network (R3D) and long short-term memory (LSTM). First, the residual module is extended to three dimensions, which can extract features in the time and space domains at the same time. Second, by changing the size of the pooling-layer window, the integrity of the time-domain features is preserved; at the same time, to overcome the difficulty of network training and the over-fitting problem, a batch normalization (BN) layer and a dropout layer are added. After that, because the global average pooling (GAP) layer is affected by the size of the feature map and the network cannot be further deepened, a convolution layer and a max-pooling layer are added to the R3D network. Finally, because LSTM has the ability to memorize information and can extract more abstract timing features, the LSTM network is introduced into the R3D network. Experimental results show that the R3D+LSTM network achieves a 91% recognition rate on the UCF-101 dataset.
Keywords: behavior recognition; three-dimensional residual convolutional neural network (R3D); long short-term memory (LSTM); dropout; batch normalization (BN)
A two-stage timing synchronization method for OFDM systems based on 1-D CNN (Cited by: 1)
15
Authors: Qing Chaojin, Yang Na, Tang Shuhai, Rao Chuangui. 计算机应用研究 (Application Research of Computers), CSCD, Peking University Core, 2023, No. 2, pp. 565-570 (6 pages)
To address the low timing synchronization accuracy of orthogonal frequency-division multiplexing (OFDM) systems under multipath interference, a two-stage OFDM timing synchronization method based on a one-dimensional convolutional neural network (1-D CNN) is proposed. In the first stage, a classical cross-correlation method performs initial path-feature extraction and captures auxiliary timing synchronization points on the resolvable paths. A 1-D CNN built on these auxiliary synchronization points then learns the timing offset in the second stage. Finally, combining the two stages yields the system's final timing offset estimate. Compared with a compressed-sensing-based timing synchronization method and an extreme-learning-machine-based one, the proposed two-stage method improves timing synchronization accuracy while effectively reducing computational complexity and processing delay.
Keywords: two-stage timing synchronization; one-dimensional convolutional neural network; orthogonal frequency-division multiplexing
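The first stage described above can be sketched as a plain cross-correlation search for a known preamble's position. The toy signal here is noiseless, whereas the paper's setting includes multipath and noise; all names and values are illustrative.

```python
# Estimate a timing offset by cross-correlating the received sequence
# with a known preamble and taking the lag with the highest score.
def xcorr_offset(rx, preamble):
    best, best_score = 0, float("-inf")
    for lag in range(len(rx) - len(preamble) + 1):
        score = sum(rx[lag + i] * p for i, p in enumerate(preamble))
        if score > best_score:
            best, best_score = lag, score
    return best

preamble = [1, -1, 1, 1, -1, -1, 1, -1]
rx = [0, 0, 0] + preamble + [0] * 5        # preamble starts at sample 3
print(xcorr_offset(rx, preamble))          # 3
```

In the two-stage scheme, such a coarse estimate would only seed the neural network, which then refines the offset.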
A CNN-based C-scan imaging inspection method under strong noise interference
16
Authors: Cheng Long, Zhou Shiyuan, Hu Yi, Yao Pengjiao, Liu Luhang, Fasil Kassa. 测控技术 (Measurement & Control Technology), 2020, No. 7, pp. 38-43 (6 pages)
In C-scan imaging inspection, if the A-scan signal contains strong noise whose amplitude is comparable to the defect echo, the gate method cannot image correctly. To address this problem, a new convolutional neural network architecture is proposed to recognize defect A-scan signals, enabling C-scan imaging inspection under strong noise. The architecture adopts residual modules, making it possible to extract more abstract features with a deep convolutional network; during training, the focal loss function and a joint accuracy measure are used to overcome the class imbalance of the training set, effectively raising classification accuracy. Experimental results show that under strong noise interference the method classifies A-scan signals with an accuracy close to 100%, more than 20 percentage points above the traditional gate method, achieving high-quality, high-precision C-scan imaging.
Keywords: ultrasonic C-scan imaging inspection; strong noise; convolutional neural network; gate
Mural Anomaly Region Detection Algorithm Based on Hyperspectral Multiscale Residual Attention Network
17
Authors: Bolin Guo, Shi Qiu, Pengchang Zhang, Xingjia Tang. Computers, Materials & Continua, SCIE EI, 2024, No. 10, pp. 1809-1833 (25 pages)
Mural paintings hold significant historical information and possess substantial artistic and cultural value. However, murals are inevitably damaged by natural environmental factors such as wind and sunlight, as well as by human activities. For this reason, the study of damaged areas is crucial for mural restoration. These damaged regions differ significantly from undamaged areas and can be considered abnormal targets. Traditional manual visual processing lacks strong characterization capabilities and is prone to omissions and false detections. Hyperspectral imaging can reflect material properties more effectively than visual characterization methods. Thus, this study employs hyperspectral imaging to obtain mural information and proposes a mural anomaly detection algorithm based on a hyperspectral multi-scale residual attention network (HM-MRANet). The innovations of this paper include: (1) constructing hyperspectral datasets of mural paintings; (2) proposing a multi-scale residual spectral-spatial feature extraction module based on a 3D CNN (convolutional neural network) to better capture multiscale information and improve performance on small-sample hyperspectral datasets; (3) proposing the Enhanced Residual Attention Module (ERAM) to address the feature redundancy problem, enhance the network's feature discrimination ability, and further improve abnormal-area detection accuracy. The experimental results show that the AUC (area under curve), specificity, and accuracy of this paper's algorithm reach 85.42%, 88.84%, and 87.65%, respectively, on this dataset. These results represent improvements of 3.07%, 1.11%, and 2.68% compared to the SSRN algorithm, demonstrating the effectiveness of this method for mural anomaly detection.
Keywords: murals; anomaly detection; hyperspectral; 3D CNN (convolutional neural networks); residual network
Short-term and long-term memory self-attention network for segmentation of tumours in 3D medical images
18
Authors: Mingwei Wen, Quan Zhou, Bo Tao, Pavel Shcherbakov, Yang Xu, Xuming Zhang. CAAI Transactions on Intelligence Technology, SCIE EI, 2023, No. 4, pp. 1524-1537 (14 pages)
Tumour segmentation in medical images (especially 3D tumour segmentation) is highly challenging due to the possible similarity between tumours and adjacent tissues, the occurrence of multiple tumours, and variable tumour shapes and sizes. The popular deep learning-based segmentation algorithms generally rely on the convolutional neural network (CNN) and the Transformer. The former cannot extract global image features effectively, while the latter lacks the inductive bias and involves complicated computation for 3D volume data. The existing hybrid CNN-Transformer networks can only provide limited performance improvement, or even poorer segmentation performance than a pure CNN. To address these issues, a short-term and long-term memory self-attention network is proposed. Firstly, a distinctive self-attention block uses the Transformer to explore the correlation among the region features at different levels extracted by the CNN. Then, the memory structure filters and combines the above information to exclude similar regions and detect multiple tumours. Finally, multi-layer reconstruction blocks predict the tumour boundaries. Experimental results demonstrate that our method outperforms other methods in terms of subjective visual and quantitative evaluation. Compared with the most competitive method, the proposed method provides Dice (82.4% vs. 76.6%) and 95% Hausdorff distance (HD95) (10.66 vs. 11.54 mm) on KiTS19, as well as Dice (80.2% vs. 78.4%) and HD95 (9.632 vs. 12.17 mm) on LiTS.
Keywords: 3D medical images; convolutional neural network; self-attention network; Transformer; tumor segmentation
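As background for the self-attention block mentioned above, the sketch below hand-rolls generic scaled dot-product attention on toy 2D vectors; it shows the standard mechanism only, not the paper's short-term/long-term memory architecture.

```python
import math

# Scaled dot-product attention: weight each value by the softmax of
# query-key similarity scores.
def attention(q, k, v):
    d = len(q[0])
    scores = [[sum(qi * ki for qi, ki in zip(qr, kr)) / math.sqrt(d)
               for kr in k] for qr in q]
    out = []
    for row in scores:
        m = max(row)
        e = [math.exp(s - m) for s in row]          # stable softmax
        w = [x / sum(e) for x in e]
        out.append([sum(wi * vr[j] for wi, vr in zip(w, v))
                    for j in range(len(v[0]))])
    return out

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, k, v)
print(out)   # weights favour the first key, so out[0][0] dominates
```

Stacking such blocks over CNN region features is what lets the Transformer side capture correlations the convolutions miss.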
MSF-Net: A Multilevel Spatiotemporal Feature Fusion Network Combines Attention for Action Recognition
19
Authors: Mengmeng Yan, Chuang Zhang, Jinqi Chu, Haichao Zhang, Tao Ge, Suting Chen. Computer Systems Science & Engineering, SCIE EI, 2023, No. 11, pp. 1433-1449 (17 pages)
An action recognition network that combines multi-level spatiotemporal feature fusion with an attention mechanism is proposed as a solution to the issues of single-scale spatiotemporal feature extraction, information redundancy, and insufficient extraction of frequency-domain information in channels in 3D convolutional neural networks. Firstly, based on 3D CNN, this paper designs a new multilevel spatiotemporal feature fusion (MSF) structure, embedded in the network model, which achieves the fusion of spatial perceptual fields and short-, medium-, and long-range time-series information at different scales, mainly through multilevel spatiotemporal feature separation, splicing, and fusion, with reduced network parameters. In the second step, a multi-frequency channel and spatiotemporal attention module (FSAM) is introduced to assign corresponding weights to the different frequency features and spatiotemporal features in the channels, reducing the information redundancy of the feature maps. Finally, we embed the proposed method into the R3D model, which replaces the 2D convolutional filters in the 2D ResNet with 3D convolutional filters, and conduct extensive experimental validation on the small and medium-sized dataset UCF101 and the large-sized dataset Kinetics-400. The findings revealed that our model increased the recognition accuracy on both datasets. Results on the UCF101 dataset, in particular, demonstrate that our model outperforms R3D with a maximum recognition accuracy improvement of 7.2% while using 34.2% fewer parameters. The MSF and FSAM modules were migrated to another traditional 3D action recognition model, C3D, for application testing. The test results based on UCF101 show that the recognition accuracy is improved by 8.9%, proving the strong generalization ability and universality of the method in this paper.
Keywords: 3D convolutional neural network; action recognition; MSF; FSAM
Action Recognition Using Multi-Scale Temporal Shift Module and Temporal Feature Difference Extraction Based on 2D CNN
20
Authors: Kun-Hsuan Wu, Ching-Te Chiu. Journal of Software Engineering and Applications, 2021, No. 5, pp. 172-188 (17 pages)
Convolutional neural networks, which have achieved outstanding performance in image recognition, have been extensively applied to action recognition. The mainstream approaches to video understanding can be categorized into two-dimensional and three-dimensional convolutional neural networks. Although three-dimensional convolutional filters can learn the temporal correlation between different frames by extracting the features of multiple frames simultaneously, they result in an explosive number of parameters and calculation cost. Methods based on two-dimensional convolutional neural networks use fewer parameters; they often incorporate optical flow to compensate for their inability to learn temporal relationships. However, calculating the corresponding optical flow results in additional calculation cost; further, it necessitates the use of another model to learn the features of optical flow. We proposed an action recognition framework based on the two-dimensional convolutional neural network; therefore, it was necessary to resolve the lack of temporal relationships. To expand the temporal receptive field, we proposed a multi-scale temporal shift module, which was then combined with a temporal feature difference extraction module to extract the difference between the features of different frames. Finally, the model was compressed to make it more compact. We evaluated our method on two major action recognition benchmarks: the HMDB51 and UCF-101 datasets. Before compression, the proposed method achieved an accuracy of 72.83% on the HMDB51 dataset and 96.25% on the UCF-101 dataset. Following compression, the accuracy was still impressive, at 95.57% and 72.19% on each dataset. The final model was more compact than most related works.
Keywords: action recognition; convolutional neural network; 2D CNN; temporal relationship
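A temporal shift of the kind the multi-scale module builds on can be sketched in a few lines: a fraction of channels is moved one frame forward or backward in time so that a 2D CNN sees neighbouring frames at zero parameter cost. The (T, C) list layout and fold size here are illustrative, not the paper's exact configuration.

```python
# Shift the first `fold` channels one step from the future, the next
# `fold` channels one step from the past, and leave the rest in place;
# out-of-range positions are zero-filled.
def temporal_shift(feat, fold=1):
    T, C = len(feat), len(feat[0])
    out = [[0.0] * C for _ in range(T)]
    for t in range(T):
        for c in range(C):
            if c < fold:                 # pull from the next frame
                src = t + 1
            elif c < 2 * fold:           # pull from the previous frame
                src = t - 1
            else:                        # untouched channels
                src = t
            if 0 <= src < T:
                out[t][c] = feat[src][c]
    return out

feat = [[1.0, 10.0, 100.0], [2.0, 20.0, 200.0], [3.0, 30.0, 300.0]]
shifted = temporal_shift(feat)
print(shifted)
```

After the shift, an ordinary per-frame 2D convolution mixes information from three consecutive frames, which is how the shift widens the temporal receptive field without extra parameters.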