Abstract: Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel Data Layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication—a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT)—this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique's broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a marked reduction in memory consumption and a 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.
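The dynamic programming algorithm this abstract targets is the classic matrix-chain-order recurrence. As a point of reference, here is a minimal CPU-side Python sketch of that baseline algorithm — not the paper's CUDA data-layout variant, whose memory restructuring is not detailed in the abstract:

```python
def matrix_chain_order(dims):
    """Minimum number of scalar multiplications needed to multiply a
    chain of matrices, where matrix i has shape dims[i] x dims[i+1].
    A chain of n matrices is therefore described by n + 1 dimensions."""
    n = len(dims) - 1
    # cost[i][j]: cheapest cost of multiplying matrices i..j inclusive
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # subchain length
        for i in range(n - length + 1):
            j = i + length - 1
            # try every split point k between i and j
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]
```

The triangular `cost` table filled diagonal by diagonal is exactly the access pattern whose locality a data-layout transformation can improve: consecutive cells on one diagonal are far apart in a naive row-major array.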
Abstract: Recognition of human activity based on convolutional neural networks (CNNs) has received the interest of researchers in recent years due to its significant improvement in accuracy. A large number of algorithms based on the deep learning approach have been proposed for activity recognition. However, with the increasing adoption of technologies that have limited computational resources, efficient deep learning-based approaches with improved utilization of those resources need to be designed. This paper presents a simple and efficient 2-dimensional CNN (2-D CNN) architecture with very small convolutional kernels for human activity recognition. The merit of the proposed CNN architecture over standard deep learning architectures is fewer trainable parameters and a lower memory requirement, which enables it to be trained on devices with low GPU memory and to work well with both smaller and larger datasets. The proposed approach consists of four main stages: (1) creation of the dataset and data augmentation, (2) design of the 2-D CNN architecture, (3) training of the proposed 2-D CNN architecture from scratch up to the optimum stage, and (4) evaluation of the trained 2-D CNN architecture. To illustrate the effectiveness of the proposed architecture, extensive experiments are conducted on three publicly available datasets, namely the IXMAS, YouTube, and UCF101 datasets. The results of the proposed method and its comparison with other state-of-the-art methods demonstrate its usefulness.
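The parameter savings the abstract attributes to very small kernels follow from the standard convolution-layer parameter count, which grows quadratically with kernel size. The helper below and its channel counts are illustrative assumptions, not values taken from the paper:

```python
def conv2d_params(in_channels, out_channels, kernel_size):
    """Trainable parameters of a standard 2-D convolution layer:
    one k x k x in_channels weight tensor per output channel,
    plus one bias per output channel."""
    k = kernel_size
    return (k * k * in_channels + 1) * out_channels

# Shrinking the kernel from 7x7 to 3x3 at the same channel widths
# cuts this layer's parameters by more than 5x.
small = conv2d_params(32, 64, kernel_size=3)
large = conv2d_params(32, 64, kernel_size=7)
```

This is the arithmetic behind training on low-GPU-memory devices: fewer weights mean smaller optimizer state and activation-gradient buffers per layer.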
Funding: supported by the Science and Technology Innovation Project, grant No. ZZKY20222304.
Abstract: To address the potential information noise introduced during the generation of ghost feature maps in GhostNet, this paper proposes a novel lightweight neural network model called ResghostNet. The model constructs the Resghost Module by combining residual connections and Adaptive-SE Blocks, which enhances the quality of the generated feature maps through direct propagation of the original input information and selection of important channels before the cheap operations. Specifically, ResghostNet introduces residual connections on top of the Ghost Module to optimize the information flow, and designs a weighted self-attention mechanism combined with SE blocks to enhance the feature expression capability of the cheap operations. Experimental results on the ImageNet dataset show that, compared to GhostNet, ResghostNet achieves higher accuracy while reducing the number of parameters by 52%. Although the computational complexity increases, the model's inference speed becomes faster through an optimized usage strategy for GPU cache memory. ResghostNet thus improves on both classification accuracy and model parameter count, and shows great potential for edge computing devices.
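The two ingredients described above — SE-style channel selection before the cheap operations, and a residual shortcut that propagates the original input — can be sketched structurally. The function names, plain-list "tensors", and fixed gate values below are illustrative assumptions; the actual Resghost Module operates on learned convolutional features with gates produced by a squeeze-and-excitation subnetwork:

```python
def se_reweight(feature_maps, channel_gates):
    """Scale each channel of a feature-map stack by a gate in [0, 1],
    as an SE block does after its squeeze-and-excite step.
    feature_maps: list of channels, each a 2-D list of values."""
    return [
        [[v * g for v in row] for row in channel]
        for channel, g in zip(feature_maps, channel_gates)
    ]

def resghost_combine(cheap_output, shortcut):
    """Residual connection: element-wise addition of the original
    input (shortcut) onto the cheaply generated ghost features."""
    return [
        [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(c1, c2)]
        for c1, c2 in zip(cheap_output, shortcut)
    ]

# Downweight a noisy channel, then add the shortcut back in.
gated = se_reweight([[[2.0, 4.0]], [[8.0, 8.0]]], [1.0, 0.25])
out = resghost_combine(gated, [[[1.0, 1.0]], [[1.0, 1.0]]])
```

The residual path is what lets the original input information bypass the cheap operations unchanged, which is the mechanism the abstract credits for suppressing ghost-map noise.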