Funding: Supported by the National Natural Science Foundation of China (61603245).
Abstract: Semantic segmentation of remote sensing images is a critical research area in the field of remote sensing. Despite the success of Convolutional Neural Networks (CNNs), they often fail to capture inter-layer feature relationships and fully leverage contextual information, leading to the loss of important details. Additionally, due to significant intra-class variation and small inter-class differences in remote sensing images, CNNs may suffer from class confusion. To address these issues, we propose a novel Category-Guided Feature Collaborative Learning Network (CG-FCLNet), which enables fine-grained feature extraction and adaptive fusion. Specifically, we design a Feature Collaborative Learning Module (FCLM) to facilitate tight interaction among multi-scale features. We also introduce a Scale-Aware Fusion Module (SAFM), which iteratively fuses features from different layers using a spatial attention mechanism, enabling deeper feature fusion. Furthermore, we design a Category-Guided Module (CGM) to extract category-aware information that guides feature fusion, ensuring that the fused features more accurately reflect the semantic information of each category and thereby improving detailed segmentation. Experimental results show that CG-FCLNet achieves a Mean Intersection over Union (mIoU) of 83.46%, an mF1 of 90.87%, and an Overall Accuracy (OA) of 91.34% on the Vaihingen dataset. On the Potsdam dataset, it achieves an mIoU of 86.54%, an mF1 of 92.65%, and an OA of 91.29%. These results highlight the superior performance of CG-FCLNet compared with existing state-of-the-art methods.
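The abstract does not specify the internals of the SAFM, only that it fuses features from different layers with a spatial attention mechanism. As a generic, parameter-free illustration of that idea (the function name and the fixed pooling used in place of learned convolutions are assumptions, not the paper's design), one fusion step can be sketched as:

```python
import numpy as np

def spatial_attention_fuse(deep, shallow):
    """Fuse a deep (semantic) and a shallow (detail) feature map, both of
    shape (C, H, W), using a spatial attention mask derived from the deep map.
    A fixed pooling + sigmoid stands in for the learned attention layers."""
    # channel-wise average and max pooling give two (1, H, W) descriptors
    avg = deep.mean(axis=0, keepdims=True)
    mx = deep.max(axis=0, keepdims=True)
    # combine descriptors and squash to a (0, 1) spatial mask
    mask = 1.0 / (1.0 + np.exp(-(avg + mx) / 2.0))
    # the mask gates shallow details into the deep map, per spatial position
    return mask * shallow + (1.0 - mask) * deep
```

Because the output is a per-element convex combination of the two inputs, the fused map stays within the range spanned by the deep and shallow features at every position; iterating such a step across adjacent layer pairs would realize the "iterative fusion" the abstract describes.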
Funding: Fundamental Research Funds for the Central Universities, China (No. 2232018D3-17).
Abstract: Text accounts for most of the information resources on the Internet, which places ever-higher requirements on the accuracy of text classification. In this manuscript, we first design a hybrid model, Bidirectional Encoder Representations from Transformers-Hierarchical Attention Networks-Dilated Convolutional Networks (BERT_HAN_DCN), which is based on the BERT pre-trained model and its superior ability to extract features. The advantages of the HAN and DCN models are combined, helping the model gain abundant semantic information by fusing contextual semantic features with hierarchical characteristics. Second, because the traditional softmax increases the learning difficulty for samples of the same class, making similar features harder to distinguish, AM-softmax is introduced to replace it. Finally, the fused model is validated: it achieves superior accuracy and F1-score on two datasets compared with general single models such as HAN and DCN built on the BERT pre-trained model, and the improved AM-softmax network outperforms the general softmax network.
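AM-softmax (additive-margin softmax) is a published loss: it normalizes embeddings and class weights so logits become cosine similarities, subtracts a fixed margin m from the target-class cosine, and scales by s before cross-entropy. A minimal NumPy sketch (the function name and the default s, m values are illustrative choices, not the paper's reported hyperparameters):

```python
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """Additive-margin softmax loss.
    features: (N, D) embeddings; weights: (C, D) class vectors; labels: (N,).
    Both are L2-normalized so the logits are cosine similarities."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                   # (N, C) cosines in [-1, 1]
    # subtract the margin m from the target-class cosine only
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m
    logits = s * (cos - margin)
    # standard cross-entropy over the scaled logits, numerically stabilized
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

The margin makes the target class harder to satisfy during training, which forces embeddings of the same class to cluster more tightly; this is exactly the "distinguish similar features" benefit the abstract claims over plain softmax.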
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 62072135 and 61672181.
Abstract: Skin melanoma is one of the most common malignant tumors originating from melanocytes, and its incidence in the Chinese population shows a continuous increasing trend. Early and accurate diagnosis of melanoma is of great significance for guiding clinical treatment. However, the symptoms of malignant melanoma are not obvious in the early stage, making it difficult to diagnose by human observation alone; meanwhile, the disease spreads easily when a diagnosis is missed. To diagnose melanoma accurately, an end-to-end skin lesion attribute segmentation framework is presented in this paper, applied to facilitate the digitalization of attribute segmentation. The framework improves on the U-Net architecture by using a channel context feature fusion module between the encoder and decoder to further merge context information. A dual-domain attention module is proposed to extract more effective information from the feature map. Results show that the proposed method effectively segments the lesion attributes and achieves good results on the ISIC2018 Task 2 dataset.
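"Dual-domain attention" commonly means attending in the channel domain and then the spatial domain; the abstract does not detail this paper's variant. As a hedged, parameter-free sketch of that generic pattern (pooled statistics stand in for the learned squeeze-excite layers, and the function names are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_domain_attention(x):
    """Apply channel attention, then spatial attention, to x of shape (C, H, W).
    Pooled statistics replace the learned layers a real module would train."""
    # channel domain: global-average-pool each channel, gate channels in (0, 1)
    ch_gate = sigmoid(x.mean(axis=(1, 2)))          # shape (C,)
    x = x * ch_gate[:, None, None]
    # spatial domain: pool across channels, gate each spatial position
    sp_gate = sigmoid(x.mean(axis=0))               # shape (H, W)
    return x * sp_gate[None, :, :]
```

Since both gates lie strictly in (0, 1), the module can only attenuate responses, letting the network emphasize informative channels and lesion regions relative to the rest of the feature map.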
Abstract: Visual cognition, as one of the fundamental aspects of cognitive neuroscience, is generally associated with higher-order brain functions in animals and humans. Drosophila, as a model organism, shares certain features of visual cognition with mammals at the genetic, molecular, cellular, and even higher behavioral levels. From learning and memory to decision making, Drosophila covers a broad spectrum of higher cognitive behaviors beyond what had been expected. Armed with the powerful genetic-manipulation tools available in Drosophila, an increasing number of studies have been conducted to elucidate the neural circuit mechanisms underlying these cognitive behaviors from a genes-brain-behavior perspective. The goal of this review is to integrate the most important studies on visual cognition in Drosophila carried out in China's Mainland during the last decade into a body of knowledge encompassing both the basic neural operations and the circuitry of higher brain function in Drosophila. Here, we consider a series of higher cognitive behaviors beyond learning and memory, such as visual pattern recognition, feature and context generalization, different feature memory traces, salience-based decision making, attention-like behavior, and cross-modal learning and memory. We discuss a possible general gain-gating mechanism implemented by the dopamine-mushroom body circuit in the fly's visual cognition. We hope that this brief review will inspire further study of visual cognition in flies, and beyond.