Funding: This work was funded by the Major Humanities and Social Sciences Research Projects in Zhejiang Higher Education Institutions, grant number 2023QN082, awarded to Cheng Zhao. The National Natural Science Foundation of China also provided funding, grant number 61902349, awarded to Cheng Zhao.
Abstract: The present study examines the impact of short-term public opinion sentiment on the secondary market, with a focus on the potential for such sentiment to cause dramatic stock price fluctuations and increase investment risk. Quantifying investor sentiment indicators and analyzing their persistent impact has long been a complex and significant area of research. In this paper, a structured multi-head attention stock index prediction method based on an adaptive public opinion sentiment vector is proposed. The proposed method transforms the numerous investor comments posted on social platforms over time into public opinion sentiment vectors that express complex sentiments. It then analyzes the continuous impact of these vectors on the market by aggregating the public opinion data and feeding it through a structured multi-head attention mechanism. The experimental results demonstrate that the public opinion sentiment vector provides more comprehensive feedback on market sentiment than traditional sentiment polarity analysis. Furthermore, the multi-head attention mechanism improves prediction accuracy by converging attention on each type of input information separately. The mean absolute percentage error (MAPE) of the proposed method is 0.463%, a reduction of 0.294% compared to the benchmark attention algorithm. Additionally, the market backtesting results indicate a return of 24.560%, an improvement of 8.202% over the benchmark algorithm. These results suggest that a market trading strategy based on this method has the potential to improve trading profits.
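As an illustration of the "attend to each type of input information separately" idea described in this abstract, the sketch below runs one multi-head self-attention block over a price-feature sequence and a separate one over a sequence of aggregated sentiment vectors, then fuses the two contexts for a one-step index forecast. The module name, feature dimensions, and fusion rule are illustrative assumptions; the abstract does not specify the paper's exact architecture.

```python
# Minimal sketch of attending to price history and sentiment history separately,
# then fusing them for a next-step index prediction. All names and dimensions
# below are assumptions for illustration, not the paper's actual design.
import torch
import torch.nn as nn

class StructuredSentimentAttention(nn.Module):
    def __init__(self, price_dim=8, senti_dim=16, d_model=64, n_heads=4):
        super().__init__()
        self.price_proj = nn.Linear(price_dim, d_model)
        self.senti_proj = nn.Linear(senti_dim, d_model)
        # one attention block per input type, so attention converges on the
        # price stream and the sentiment stream independently
        self.price_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.senti_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU(),
                                  nn.Linear(d_model, 1))

    def forward(self, price_seq, senti_seq):
        # price_seq: (batch, T, price_dim) daily index features
        # senti_seq: (batch, T, senti_dim) aggregated public-opinion sentiment vectors
        p = self.price_proj(price_seq)
        s = self.senti_proj(senti_seq)
        p_ctx, _ = self.price_attn(p, p, p)   # self-attention within price stream
        s_ctx, _ = self.senti_attn(s, s, s)   # self-attention within sentiment stream
        fused = torch.cat([p_ctx[:, -1], s_ctx[:, -1]], dim=-1)  # last time step
        return self.head(fused).squeeze(-1)   # predicted next-step index value

model = StructuredSentimentAttention()
pred = model(torch.randn(2, 30, 8), torch.randn(2, 30, 16))
print(pred.shape)  # torch.Size([2])
```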
Abstract: Deep neural network-based relation extraction research has made significant progress in recent years, and it provides data support for many downstream natural language processing tasks such as knowledge graph construction, sentiment analysis, and question-answering systems. However, previous studies have ignored much unused structural information in sentences that could enhance the performance of the relation extraction task. Moreover, most existing dependency-based models use self-attention to distinguish the importance of context, which hardly handles multiple kinds of structural information. To leverage multiple kinds of structural information efficiently, this paper proposes a dynamic structure attention mechanism model based on textual structure information, which deeply integrates word embeddings, named entity recognition labels, part-of-speech tags, the dependency tree, and dependency types into a graph convolutional network. Specifically, our model extracts text features of different structures from the input sentence. The Textual Structure information Graph Convolutional Network employs the dynamic structure attention mechanism to learn multi-structure attention, effectively distinguishing important contextual features within the various kinds of structural information. In addition, multi-structure weights are carefully designed as a merging mechanism over the different structure attentions to dynamically adjust the final attention. This paper combines these features and trains a graph convolutional network for relation extraction. We experiment on supervised relation extraction datasets including SemEval 2010 Task 8, TACRED, TACREV, and Re-TACRED; the results significantly outperform previous methods.
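The sketch below illustrates one way the multi-structure fusion described above could look in code: each structural feature stream (word embedding, NER label, POS tag, dependency type) gets its own projection, a softmax-normalized weight vector plays the role of the dynamic structure weights, and the merged representation is propagated over the dependency adjacency with a single GCN step. The dimensions, merging rule, and layer count are assumptions for illustration, not values taken from the paper.

```python
# Illustrative sketch: weight several structure-specific feature streams with a
# learned softmax-normalized vector, then propagate them over the dependency
# graph with one degree-normalized GCN step. Sizes are assumed, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStructureGCNLayer(nn.Module):
    def __init__(self, dims=(100, 16, 16, 16), hidden=128):
        super().__init__()
        # separate projection per structure: word, NER label, POS tag, dependency type
        self.projs = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.structure_logits = nn.Parameter(torch.zeros(len(dims)))  # merging weights
        self.gcn = nn.Linear(hidden, hidden)

    def forward(self, feats, adj):
        # feats: list of (batch, n_tokens, dim_i) tensors, one per structure
        # adj:   (batch, n_tokens, n_tokens) dependency adjacency with self-loops
        w = F.softmax(self.structure_logits, dim=0)        # dynamic structure weights
        h = sum(w[i] * self.projs[i](x) for i, x in enumerate(feats))
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)     # degree normalization
        return F.relu(self.gcn(torch.bmm(adj, h) / deg))   # one GCN propagation step

layer = MultiStructureGCNLayer()
feats = [torch.randn(2, 10, d) for d in (100, 16, 16, 16)]
adj = torch.eye(10).repeat(2, 1, 1)
print(layer(feats, adj).shape)  # torch.Size([2, 10, 128])
```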
Funding: This work was supported by the Natural Science Foundation of Jiangsu Province of China under Grant No. BK20222003; the National Natural Science Foundation of China under Grant Nos. 61972196, 61832008, and 61832005; the Collaborative Innovation Center of Novel Software Technology and Industrialization; and the Sino-German Institutes of Social Computing.
Abstract: Filter pruning is an important technique for compressing convolutional neural networks (CNNs) to obtain lightweight, high-performance models for practical deployment. However, existing filter pruning methods suffer from sharp performance drops when the pruning ratio is large, probably due to the unrecoverable information loss caused by aggressive pruning. In this paper, we propose a dual-attention-based pruning approach called DualPrune to push the limit of network pruning at an ultra-high compression ratio. Firstly, it adopts a graph attention network (GAT) to automatically extract filter-level and layer-level features from CNNs based on the roles of their filters in the whole computation graph. Then the extracted comprehensive features are fed to a side-attention network, which generates sparse attention weights for individual filters to guide model pruning. To avoid layer collapse, the side-attention network adopts a side-path design that properly preserves the information flow through the CNN model, which allows the CNN model to be pruned at a high compression ratio at initialization and trained from scratch afterward. Extensive experiments based on several well-known CNN models and real-world datasets show that the proposed DualPrune method outperforms state-of-the-art methods with significant performance improvement, particularly for model compression at a high pruning ratio.
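As a rough illustration of the final pruning step described above, the sketch below turns per-filter importance scores into a keep/prune mask at a target pruning ratio and applies it multiplicatively to a convolution layer's output channels, so pruned filters contribute zero while the computation path stays intact. The GAT feature extractor and the side-attention network that would actually produce the scores are omitted, and the top-k threshold rule is an assumption rather than the paper's formulation.

```python
# Simplified sketch: derive a keep/prune mask from per-filter scores at a target
# ratio and gate a conv layer's output channels with it. The score generator
# (GAT + side-attention network) is omitted; this is not the paper's exact method.
import torch
import torch.nn as nn

def filter_mask_from_scores(scores: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    """Keep the top (1 - prune_ratio) fraction of filters by score."""
    k = max(1, int(round(len(scores) * (1.0 - prune_ratio))))
    keep_idx = scores.topk(k).indices
    mask = torch.zeros_like(scores)
    mask[keep_idx] = 1.0
    return mask

class MaskedConv(nn.Module):
    """Conv layer whose output channels are gated by a pruning mask (side path)."""
    def __init__(self, conv: nn.Conv2d, mask: torch.Tensor):
        super().__init__()
        self.conv = conv
        self.register_buffer("mask", mask.view(1, -1, 1, 1))

    def forward(self, x):
        return self.conv(x) * self.mask  # pruned filters contribute zero

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
scores = torch.rand(16)                        # stand-in for side-attention outputs
layer = MaskedConv(conv, filter_mask_from_scores(scores, prune_ratio=0.75))
print(layer(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```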
Funding: National Natural Science Foundation of China (No. 61806006); Jiangsu University Superior Discipline Construction Project.
Abstract: To address the problems of multi-scale variation of segmentation targets, noise interference, rough segmentation results, and slow training faced by medical image semantic segmentation, a multi-scale residual aggregation U-shaped attention network, MAAUNet (MultiRes aggregation attention UNet), is proposed based on MultiResUNet. Firstly, building on the original same-level feature aggregation, aggregate connections are introduced: the skip connections are redesigned to aggregate features of different semantic scales in the decoder subnet, further alleviating the semantic gaps that may exist across skip connections. Secondly, after the multi-scale convolution module, a convolutional block attention module is added to focus on and integrate features along the two attention directions of channel and space, adaptively optimizing the intermediate feature map. Finally, the original convolution block is improved: the convolution channels are expanded with a serial convolution structure so that they complement each other and extract richer spatial features, residual connections are retained, and the convolution block is turned into a multi-channel convolution block, enabling the model to extract multi-scale spatial features. The experimental results show that MAAUNet is highly competitive on challenging datasets and exhibits good segmentation performance and stability when dealing with multi-scale inputs and noise interference.
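The convolutional block attention module mentioned above is a standard channel-plus-spatial attention block; a minimal PyTorch sketch is given below. The reduction ratio and spatial kernel size are common defaults rather than values reported in the paper.

```python
# Minimal sketch of a convolutional block attention module (CBAM): channel
# attention followed by spatial attention over an intermediate feature map.
# Hyperparameters are common defaults, not taken from the paper.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # channel attention: shared MLP over global average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention: 2-channel map of per-pixel mean and max across channels
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 32, 64, 64)
print(CBAM(32)(feat).shape)  # torch.Size([2, 32, 64, 64])
```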