Journal Articles
3 articles found
1. S^(2)ANet: Combining local spectral and spatial point grouping for point cloud processing
Authors: Yujie LIU, Xiaorui SUN, Wenbin SHAO, Yafu YUAN. Virtual Reality & Intelligent Hardware, EI, 2024, Issue 4, pp. 267-279 (13 pages)
Background: Despite recent progress in 3D point cloud processing using deep convolutional neural networks, the inability to extract local features remains a challenging problem. In addition, existing methods consider only the spatial domain in the feature extraction process. Methods: In this paper, we propose a spectral and spatial aggregation convolutional network (S^(2)ANet), which combines spectral and spatial features for point cloud processing. First, we calculate the local frequency of the point cloud in the spectral domain. Then, we use the local frequency to group points and provide a spectral aggregation convolution module to extract the features of the points grouped by the local frequency. We simultaneously extract the local features in the spatial domain to supplement the final features. Results: S^(2)ANet was applied to several point cloud analysis tasks; it achieved state-of-the-art classification accuracies of 93.8%, 88.0%, and 83.1% on the ModelNet40, ShapeNetCore, and ScanObjectNN datasets, respectively. For indoor scene segmentation, training and testing were performed on the S3DIS dataset, and the mean intersection over union was 62.4%. Conclusions: The proposed S^(2)ANet can effectively capture the local geometric information of point clouds, thereby improving accuracy on various tasks.
Keywords: local frequency; spectral and spatial aggregation convolution; spectral group convolution; point cloud representation learning; graph convolutional network
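The grouping idea in the abstract can be sketched in a few lines. The paper's exact local-frequency definition is not given here, so the score below (the spread of each point's k-nearest-neighbour distances) is only an illustrative stand-in; the function name and parameters are hypothetical.

```python
import numpy as np

def local_frequency_groups(points, k=8, n_groups=4):
    """Toy sketch: score each point by the spread of its k nearest
    neighbours (a crude stand-in for a spectral 'local frequency'),
    then bucket points into n_groups bins by that score."""
    # pairwise Euclidean distances between all points, shape (N, N)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # distances to the k nearest neighbours, excluding the point itself
    knn = np.sort(dist, axis=1)[:, 1:k + 1]
    score = knn.std(axis=1)  # high spread ~ high local variation
    # split the score range into equal-width bins -> group id per point
    bins = np.linspace(score.min(), score.max(), n_groups + 1)[1:-1]
    return np.digitize(score, bins)

points = np.random.default_rng(0).random((64, 3))
groups = local_frequency_groups(points)  # one group id in {0..3} per point
```

Points falling in the same bin would then be convolved together by the spectral aggregation module, while a parallel spatial branch groups by coordinates.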
2. FMCSNet: Mobile Devices-Oriented Lightweight Multi-Scale Object Detection via Fast Multi-Scale Channel Shuffling Network Model
Authors: Lijuan Huang, Xianyi Liu, Jinping Liu, Pengfei Xu. Computers, Materials & Continua, 2026, Issue 1, pp. 1292-1311 (20 pages)
The ubiquity of mobile devices has driven advancements in mobile object detection. However, challenges in multi-scale object detection in open, complex environments persist due to limited computational resources. Traditional approaches such as network compression, quantization, and lightweight design often sacrifice accuracy or the robustness of feature representations. This article introduces the Fast Multi-scale Channel Shuffling Network (FMCSNet), a novel lightweight detection model optimized for mobile devices. FMCSNet integrates a fully convolutional multilayer perceptron (MLP) module, offering global perception without significantly increasing parameters and effectively bridging the gap between CNNs and Vision Transformers. FMCSNet strikes a balance between computation and accuracy mainly through two key modules: the ShiftMLP module, comprising a shift operation and an MLP module, and a Partial Group Convolution (PGConv) module, which reduces computation while enhancing information exchange between channels. With a computational complexity of 1.4G FLOPs and 1.3M parameters, FMCSNet outperforms CNN-based and DWConv-based ShuffleNetv2 by 1% and 4.5% mAP on the Pascal VOC 2007 dataset, respectively. Additionally, FMCSNet achieves an mAP of 30.0 (0.5:0.95 IoU threshold) with only 2.5G FLOPs and 2.0M parameters, and it reaches 32 FPS on low-performance i5-series CPUs, meeting real-time detection requirements. The adaptability of the PGConv module across scenarios further highlights FMCSNet as a promising solution for real-time mobile object detection.
Keywords: object detection; lightweight network; partial group convolution; multilayer perceptron
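The channel-shuffle idea that FMCSNet builds on (popularised by ShuffleNet) can be sketched in NumPy. This is a minimal illustration of the generic operation, not the paper's PGConv module itself:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups so that information can flow
    between the groups of a grouped convolution. x: (N, C, H, W)."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    # (N, groups, C//groups, H, W) -> swap group/channel axes -> flatten back
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

# 8 channels in 2 groups: [0..3 | 4..7] interleaves to [0,4,1,5,2,6,3,7]
x = np.arange(8, dtype=np.float32).reshape(1, 8, 1, 1)
y = channel_shuffle(x, groups=2)
```

Because the shuffle is a pure reshape/transpose, it adds no parameters or FLOPs, which is why it suits lightweight mobile models.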
3. High-Precision Anime Conversion Model Based on Generative Adversarial Networks
Authors: Jing Li, Xuebin Liang. Proceedings of the International Conference on Computer Frontiers, 2024, Issue 3, pp. 268-279 (12 pages)
The application of anime image conversion is becoming increasingly widespread. However, in the task of converting real images to anime images, traditional convolution operations can cause information loss and blurring, leading to problems such as unstable network training and severe distortion and blurring of the generated images. This paper proposes an improved model, GI_CartoonGAN (Group convolution channel shuffle and Inception dilated convolution Cartoon Generative Adversarial Network), for anime image conversion. Building on the CartoonGAN model, the network improves the representational capability of the generator by introducing group convolution with channel shuffle, which enriches image features, improves the accuracy and expressiveness of feature extraction, and thereby improves image conversion accuracy. It also introduces the Inception structure and dilated convolution to expand the receptive field of the convolution kernel, effectively processing features at various scales of the image and improving the generator's ability to model features and perceive different styles and details, thus enhancing its generation of fine detail. Experimental results show that the FID score of images generated by this model improves by over 17% compared with other models, effectively improving the clarity and authenticity of the generated images.
Keywords: anime image conversion; CartoonGAN; group convolution; channel shuffle; Inception structure; dilated convolution
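The receptive-field expansion that dilated convolution provides can be seen in a naive 1-D sketch. This is a generic illustration of the operation the abstract names, not the model's actual layers; the function and kernel values are hypothetical:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Naive 1-D dilated convolution with 'valid' padding: kernel taps
    are spaced `dilation` samples apart, widening the receptive field to
    (len(kernel) - 1) * dilation + 1 without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one output sample
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i:i + span:dilation]  # every `dilation`-th input sample
        out[i] = np.dot(taps, kernel)
    return out

# a 3-tap kernel with dilation 2 covers 5 input samples per output
x = np.arange(10, dtype=float)
y = dilated_conv1d(x, kernel=np.array([1.0, 1.0, 1.0]), dilation=2)
```

Stacking such layers with growing dilation rates lets a generator aggregate context at multiple scales cheaply, which is the motivation the abstract gives for combining them with an Inception-style structure.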