Funding: Supported by the Chongqing Technology Innovation and Application Development Project [grant number cstc2021jscx-dxwtBX0023] and by funding from Chongqing Changan Automobile Co., Ltd., Dongfeng Motor Corporation, and Dongfeng Changxing Tech Co., Ltd.
Abstract: Next point-of-interest (POI) recommendation has been adopted by many internet companies to enhance the user travel experience. Recent research advocates deep-learning methods that model long-term check-in sequences and mine people's mobility patterns to improve recommendation performance. Existing approaches model general user preferences based on historical check-ins and can be termed preference pattern models. The preference pattern differs from the intention pattern in that it does not emphasize users' tendency to revisit POIs, which is a common behavior and a kind of intention for users. An effective module is therefore needed to predict when and where users will make repeat visits. In this paper, we propose a Spatio-Temporal Intention Learning Self-Attention Network (STILSAN) for next POI recommendation. STILSAN employs a preference-intention module to capture users' long-term preferences and recognize their intention to revisit specific POIs at specific times. Meanwhile, we design a spatial encoder module as a pretrained model for learning POI spatial features by modeling the spatial clustering phenomenon and the spatial proximity of POIs. Experiments are conducted on two real-world check-in datasets. The experimental results demonstrate that each of the proposed modules effectively improves recommendation accuracy and that STILSAN yields substantial improvements over state-of-the-art models.
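As an illustration of the kind of architecture this abstract describes, the sketch below shows a minimal self-attention layer over a check-in sequence in which each check-in is the sum of a POI embedding and a time-slot embedding, and the last position scores candidate next POIs. All names, dimensions, and the causal-mask design are assumptions made for illustration, not the authors' STILSAN implementation.

import torch
import torch.nn as nn

class CheckInSelfAttention(nn.Module):
    """Minimal self-attention over a user's check-in history (illustrative only)."""
    def __init__(self, num_pois, num_time_slots, dim=64, num_heads=4):
        super().__init__()
        self.poi_emb = nn.Embedding(num_pois, dim)           # POI identity embedding
        self.time_emb = nn.Embedding(num_time_slots, dim)    # discretized visit-time embedding
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.out = nn.Linear(dim, num_pois)                  # scores for every candidate next POI

    def forward(self, poi_ids, time_ids):
        # poi_ids, time_ids: (batch, seq_len) integer tensors of historical check-ins
        x = self.poi_emb(poi_ids) + self.time_emb(time_ids)
        seq_len = poi_ids.size(1)
        # causal mask: each position attends only to earlier check-ins
        mask = torch.triu(torch.ones(seq_len, seq_len, device=poi_ids.device), diagonal=1).bool()
        h, _ = self.attn(x, x, x, attn_mask=mask)
        return self.out(h[:, -1])                            # logits for the next POI

# toy usage
model = CheckInSelfAttention(num_pois=1000, num_time_slots=24)
poi_ids = torch.randint(0, 1000, (2, 10))
time_ids = torch.randint(0, 24, (2, 10))
logits = model(poi_ids, time_ids)                            # shape (2, 1000)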
Abstract: Mathematical named entity recognition (MNER) is one of the fundamental tasks in the analysis of mathematical texts. To address the problems of local instability, fuzzy entity boundaries, and long-distance dependencies between entities that current neural networks face in Chinese mathematical entity recognition, we propose a series of optimization methods and construct an Adversarial Training and Bidirectional long short-term memory-Self-attention Conditional random field (AT-BSAC) model. In our model, the mathematical text is vectorized by a word embedding technique, small perturbations are added to the word vectors to generate adversarial samples, and local features are extracted by a Bi-directional Long Short-Term Memory (BiLSTM) network. A self-attention mechanism is incorporated to extract richer dependency features between entities. The experimental results demonstrate that the AT-BSAC model achieves a precision (P) of 93.88%, a recall (R) of 93.84%, and an F1-score of 93.74%, the F1-score being 8.73% higher than that of the previous Bi-directional Long Short-Term Memory Conditional Random Field (BiLSTM-CRF) model. These results confirm the effectiveness of the proposed model in mathematical named entity recognition.
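The adversarial-sample step in this abstract (small perturbations added to the word vectors) is commonly realized with the fast gradient method (FGM). The following sketch shows one plausible FGM-style training step around an arbitrary token-classification model; it is an assumption-based illustration, not the paper's released code, and the caller is expected to handle optimizer.step() and zero_grad().

import torch

def fgm_adversarial_step(model, embedding_layer, loss_fn, inputs, labels, epsilon=1.0):
    """One FGM-style step: clean loss plus loss on adversarially perturbed embeddings."""
    # 1. clean forward/backward to obtain gradients on the embedding weights
    loss = loss_fn(model(inputs), labels)
    loss.backward()

    grad = embedding_layer.weight.grad
    norm = torch.norm(grad)
    if norm != 0 and not torch.isnan(norm):
        # 2. perturb the embedding weights along the gradient direction
        backup = embedding_layer.weight.data.clone()
        embedding_layer.weight.data.add_(epsilon * grad / norm)

        # 3. adversarial forward/backward accumulates gradients on the same parameters
        adv_loss = loss_fn(model(inputs), labels)
        adv_loss.backward()

        # 4. restore the original embedding weights before the optimizer update
        embedding_layer.weight.data = backup
    return loss.item()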
Funding: Supported by the National Natural Science Foundation of China (Nos. 11975292, 12222512), the CAS "Light of West China" Program, the CAS Pioneer Hundred Talent Program, and the Guangdong Major Project of Basic and Applied Basic Research (No. 2020B0301030008).
Abstract: In this paper, we propose Hformer, a novel supervised learning model for low-dose computed tomography (LDCT) denoising. Hformer combines the strengths of convolutional neural networks for local feature extraction and transformer models for capturing global features. The performance of Hformer was verified and evaluated on the AAPM-Mayo Clinic LDCT Grand Challenge dataset. Compared with representative state-of-the-art (SOTA) models of different architectures, Hformer achieved the best metrics without requiring a large number of learnable parameters: a PSNR of 33.4405, an RMSE of 8.6956, and an SSIM of 0.9163. The experiments demonstrate that the proposed Hformer is a SOTA model for noise suppression, structure preservation, and lesion detection.
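A minimal sketch of the hybrid idea (convolutions for local features, self-attention for global context, residual noise prediction) is given below. The block structure, channel counts, and residual formulation are assumptions made for illustration and do not reproduce the published Hformer architecture.

import torch
import torch.nn as nn

class HybridDenoiseBlock(nn.Module):
    """Convolutional local features + self-attention global context (illustrative only)."""
    def __init__(self, channels=32, num_heads=4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.head = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x):
        # x: (batch, 1, H, W) low-dose CT patch
        f = self.local(x)                          # local features from convolutions
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)      # (batch, H*W, channels)
        g, _ = self.attn(tokens, tokens, tokens)   # global self-attention over positions
        g = g.transpose(1, 2).reshape(b, c, h, w)
        noise = self.head(f + g)                   # predicted noise map
        return x - noise                           # residual learning: subtract the noise

# toy usage on a 64x64 patch
model = HybridDenoiseBlock()
patch = torch.randn(2, 1, 64, 64)
denoised = model(patch)                            # shape (2, 1, 64, 64)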
Funding: This work was supported by the key project of the National Natural Science Foundation of China (Grant No. 61836007), the general project of the National Natural Science Foundation of China (Grant No. 61876118), and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Abstract: As one of the most important components in knowledge graph construction, entity linking has been drawing more and more attention in the last decade. In this paper, we propose two improvements towards better entity linking. On the one hand, we propose a simple but effective coarse-to-fine unsupervised knowledge base (KB) extraction approach to improve the quality of the KB, through which entity linking can be conducted more efficiently. On the other hand, we propose a highway network framework that bridges keyword features and sequential information captured with a self-attention mechanism, to better represent both local and global information. Detailed experiments on six public entity linking datasets verify the effectiveness of both approaches.
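One plausible reading of the highway bridging idea is a gated combination of a keyword-level (local) feature vector and a self-attention (global) feature vector. The sketch below is an assumption-based illustration of such a highway gate, not the authors' implementation; all layer names and dimensions are hypothetical.

import torch
import torch.nn as nn

class HighwayBridge(nn.Module):
    """Gated combination of a local (keyword) and a global (self-attention) feature vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.transform = nn.Linear(2 * dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, keyword_feat, attn_feat):
        # keyword_feat, attn_feat: (batch, dim) local and global representations
        combined = torch.cat([keyword_feat, attn_feat], dim=-1)
        h = torch.tanh(self.transform(combined))   # candidate transformed representation
        g = torch.sigmoid(self.gate(combined))     # highway gate in (0, 1)
        return g * h + (1 - g) * attn_feat         # carry the global signal where the gate is low

# toy usage
bridge = HighwayBridge(dim=128)
local_feat = torch.randn(4, 128)
global_feat = torch.randn(4, 128)
mention_repr = bridge(local_feat, global_feat)     # shape (4, 128)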