Abstract: In the field of optoelectronics, certain types of data may be difficult to accurately annotate, such as high-resolution optoelectronic imaging or imaging in certain special spectral ranges. Weakly supervised learning can provide a more reliable approach in these situations. Current popular approaches mainly adopt classification-based class activation maps (CAM) as initial pseudo labels to solve the task.
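As a rough illustration of how a classification CAM is typically turned into an initial pseudo label, the sketch below computes a CAM from a classifier's final convolutional features and thresholds it. The function name, the single foreground threshold, and its value are illustrative assumptions, not details taken from the abstract above; in practice a second threshold is often used to mark uncertain pixels as "ignore".

```python
import torch
import torch.nn.functional as F

def cam_pseudo_labels(features, fc_weight, class_idx, size, fg_thresh=0.3):
    """Minimal CAM-style pseudo-label sketch (threshold is an assumption).

    features:  (B, C, h, w) final conv feature maps of a trained classifier
    fc_weight: (num_classes, C) weights of the classification layer
    class_idx: (B,) image-level class labels
    size:      (H, W) output resolution for the pseudo masks
    """
    w = fc_weight[class_idx]                                    # (B, C) per-image class weights
    cam = torch.einsum("bc,bchw->bhw", w, features)
    cam = F.relu(cam)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-6)    # normalise to [0, 1]
    cam = F.interpolate(cam.unsqueeze(1), size=size, mode="bilinear",
                        align_corners=False).squeeze(1)
    pseudo = (cam > fg_thresh).long()                           # 1 = foreground, 0 = background
    return cam, pseudo
```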
Funding: National Natural Science Foundation of China (U1904119); Research Programs of Henan Science and Technology Department (232102210054); Chongqing Natural Science Foundation (CSTB2023NSCQ-MSX0070); Henan Province Key Research and Development Project (231111212000); Aviation Science Foundation (20230001055002); supported by Henan Center for Outstanding Overseas Scientists (GZS2022011).
Abstract: The primary challenge in weakly supervised semantic segmentation is effectively leveraging weak annotations while minimizing the performance gap compared to fully supervised methods. End-to-end model designs have gained significant attention for improving training efficiency. Most current algorithms rely on Convolutional Neural Networks (CNNs) for feature extraction. Although CNNs are proficient at capturing local features, they often struggle with global context, leading to incomplete and false Class Activation Mapping (CAM). To address these limitations, this work proposes a Contextual Prototype-Based End-to-End Weakly Supervised Semantic Segmentation (CPEWS) model, which improves feature extraction by utilizing the Vision Transformer (ViT). By incorporating its intermediate feature layers to preserve semantic information, this work introduces the Intermediate Supervised Module (ISM) to supervise the final layer's output, reducing boundary ambiguity and mitigating issues related to incomplete activation. Additionally, the Contextual Prototype Module (CPM) generates class-specific prototypes, while the proposed Prototype Discrimination Loss (LPDL) and Superclass Suppression Loss (LSSL) guide the network's training, effectively addressing false activation without the need for extra supervision. The proposed CPEWS model achieves state-of-the-art performance in end-to-end weakly supervised semantic segmentation without additional supervision. On the PASCAL VOC 2012 dataset, it reaches a Mean Intersection over Union (MIoU) of 69.8% on the validation set and 72.6% on the test set; compared with ToCo (pre-trained weights from ImageNet-1k), its test-set MIoU is 2.1% higher. In addition, MIoU reaches 41.4% on the validation set of the MS COCO 2014 dataset.
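A minimal sketch of the general idea of deriving class-specific prototypes from dense features, assuming ViT patch embeddings and a CAM used as a soft mask; the masked-average formulation and function name below are illustrative assumptions rather than the paper's exact CPM design.

```python
import torch

def class_prototypes(features, cams, class_present):
    """CAM-weighted mean embedding per class (illustrative, not the exact CPM).

    features:      (B, D, H, W) dense patch/pixel embeddings
    cams:          (B, K, H, W) class activation maps in [0, 1]
    class_present: (B, K) binary image-level labels
    returns:       (B, K, D) one prototype vector per present class
    """
    f = features.flatten(2)                       # (B, D, HW)
    m = cams.flatten(2)                           # (B, K, HW)
    num = torch.einsum("bkn,bdn->bkd", m, f)
    den = m.sum(dim=2, keepdim=True) + 1e-6
    protos = num / den                            # weighted average over locations
    return protos * class_present.unsqueeze(-1)   # zero out absent classes
```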
Funding: the National Natural Science Foundation of China (42001408, 61806097).
Abstract: Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the image. Within this field, other research endeavors utilize weakly supervised methods. These approaches aim to reduce the expenses associated with annotation by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network adopts a three-branch architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the long-standing flaw that pseudo-labels are not updated as the network trains, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution demonstrates efficient operation by simultaneously considering both edge-mask aid and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
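A rough sketch of the kind of mixup-style blending the abstract describes, in which current predictions are mixed into the running pseudo labels so they keep evolving with training; the update rule, the Beta parameters, and the confidence threshold are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def update_pseudo_labels(pseudo, pred, alpha=0.5, conf_thresh=0.8, rng=None):
    """Blend current predictions into existing pseudo labels (illustrative).

    pseudo: (H, W) current soft pseudo-label map in [0, 1]
    pred:   (H, W) current network prediction (foreground probability)
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                   # mixup coefficient
    blended = lam * pseudo + (1.0 - lam) * pred
    # keep only confident pixels as hard supervision; mark the rest as ignore (-1)
    hard = np.full(blended.shape, -1, dtype=np.int64)
    hard[blended >= conf_thresh] = 1
    hard[blended <= 1.0 - conf_thresh] = 0
    return blended, hard
```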
Abstract: Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors because scribble annotation can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first performed on both the colours and the coordinates of the original annotations, and the same labels are then assigned to pixels whose colours are similar to a colour cluster centre and whose positions are near a coordinate cluster centre. Next, the same annotations are further assigned to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method can significantly improve performance and achieve state-of-the-art results.
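A small sketch of the colour-and-coordinate clustering idea described above, spreading scribble labels to unlabelled pixels that lie close to both a colour cluster centre and a coordinate cluster centre; the cluster count and the two tolerance thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def infer_labels(image, scribble, n_clusters=8, color_tol=15.0, pos_tol=20.0):
    """Spread scribble labels to unlabelled pixels via k-means (illustrative).

    image:    (H, W, 3) float RGB image
    scribble: (H, W) int array; 1 = foreground scribble, 0 = background scribble,
              -1 = unlabelled
    """
    labels = scribble.copy()
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([ys, xs], axis=-1).astype(float)

    for cls in (0, 1):                                   # background, then foreground
        mask = scribble == cls
        if mask.sum() < n_clusters:
            continue
        color_centres = KMeans(n_clusters, n_init=10).fit(image[mask]).cluster_centers_
        coord_centres = KMeans(n_clusters, n_init=10).fit(coords[mask]).cluster_centers_
        # distance of every pixel to its nearest colour / coordinate centre
        d_col = np.linalg.norm(image.reshape(-1, 1, 3) - color_centres, axis=2).min(1)
        d_pos = np.linalg.norm(coords.reshape(-1, 1, 2) - coord_centres, axis=2).min(1)
        near = (d_col.reshape(H, W) < color_tol) & (d_pos.reshape(H, W) < pos_tol)
        labels[near & (labels == -1)] = cls              # only fill unlabelled pixels
    return labels
```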
Funding: funded by the Open Foundation of Anhui Engineering Research Center of Intelligent Perception and Elderly Care, Chuzhou University (No. 2022OPA03), the Higher Education Natural Science Foundation of Anhui Province (No. KJ2021B01), and the Innovation Team Projects of Universities in Guangdong (No. 2022KCXTD057).
Abstract: The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making it challenging to diagnose the disease accurately. This study proposed a deep-learning disease diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fuses classification with segmentation. First, the data were preprocessed. An optimizable weakly supervised segmentation preprocessing method (O-WSSPM) was used to remove redundant data and solve the category imbalance problem. Second, a deep-learning fusion method was used for feature extraction and classification recognition. A dual asymmetric complementary bilinear feature extraction method (D-CBM) was used to fully extract complementary features, which solved the problem of insufficient feature extraction by a single deep learning network. Third, an unsupervised learning method based on Fuzzy C-Means (FCM) clustering was used to segment and visualize COVID-19 lesions, enabling physicians to accurately assess lesion distribution and disease severity. In this study, 5-fold cross-validation was used, and the results showed that the network had an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively provide physicians with automated diagnostic assistance, determining whether the disease is present and, for COVID-19 patients, further predicting the lesion area.
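For readers unfamiliar with the FCM step mentioned above, the following is a minimal NumPy fuzzy C-means sketch; the fuzzifier m=2, the cluster count, and the fixed iteration budget are generic defaults assumed here, not the paper's configuration.

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means (illustrative; not the paper's exact settings).

    x: (N, D) feature vectors, e.g. intensities of lung-region pixels
    returns cluster centres (C, D) and membership matrix (N, C)
    """
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                           # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]          # membership-weighted means
        d = np.linalg.norm(x[:, None, :] - centres[None], axis=2) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                      # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u
```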
Funding: supported by the National Natural Science Foundation of China (62276058, 61902057, 41774063), the Fundamental Research Funds for the Central Universities (N2217003), and the Joint Fund of Science & Technology Department of Liaoning Province and State Key Laboratory of Robotics, China (2020-KF-12-11).
Abstract: A large variety of complaint reports reflect subjective information expressed by citizens. A key challenge of text summarization for complaint reports is to ensure the factual consistency of the generated summary. Therefore, in this paper, a simple and weakly supervised framework considering factual consistency is proposed to generate a summary of city-based complaint reports without pre-labeled sentences/words. Furthermore, it considers the importance of entities in complaint reports to ensure the factual consistency of the summary. Experimental results on customer review datasets (Yelp and Amazon) and a complaint report dataset (complaint reports of Shenyang in China) show that the proposed framework outperforms state-of-the-art approaches in ROUGE scores and human evaluation. This demonstrates the effectiveness of our approach in helping to handle complaint reports.
Funding: supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62402490 and 62072334.
Abstract: We study the novel problem of weakly supervised instance action recognition (WSiAR) in multi-person (crowd) scenes. We specifically aim to recognize the action of each subject in the crowd, for which we propose a weakly supervised method, considering the expense of large-scale annotations for training. This problem is of great practical value for video surveillance and sports scene analysis. To this end, we investigated and designed a series of weak annotations for supervising WSiAR. We propose two categories of weak label settings, bag labels and sparse labels, to significantly reduce the number of labels. Based on the former, we propose a novel sub-block-aware multi-instance learning (MIL) loss to obtain more effective information from weak labels during training. With respect to the latter, we propose a pseudo label generation strategy for extending sparse labels. This enables our method to achieve results comparable to those of fully supervised methods but with significantly fewer annotations. The experimental results on two benchmarks verify the rationality of the problem definition and the effectiveness of the proposed weakly supervised training method in solving our problem.
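A small sketch of a generic multiple-instance-learning bag loss of the kind such bag-label settings build on, where only a bag-level (e.g., video- or block-level) label is available and instance scores are pooled before the loss; the max-pooling choice is a common baseline assumed here, not the paper's sub-block-aware formulation.

```python
import torch
import torch.nn.functional as F

def mil_bag_loss(instance_logits, bag_labels):
    """Generic MIL loss with max pooling over instances (illustrative baseline).

    instance_logits: (B, N, K) per-instance class logits, N instances per bag
    bag_labels:      (B, K) multi-hot bag-level labels
    """
    bag_logits = instance_logits.max(dim=1).values          # pool instances into a bag score
    return F.binary_cross_entropy_with_logits(bag_logits, bag_labels.float())
```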
Funding: support of the National Natural Science Foundation of China (Nos. 52008311, 51878499, and 52178433), the Science and Technology Commission of Shanghai Municipality (No. 21ZR1465700), and the Fundamental Research Funds for the Central Universities (No. 22120230196).
Abstract: Accurate and timely surveying of airfield pavement distress is crucial for cost-effective airport maintenance. Deep learning (DL) approaches, leveraging advancements in computer science and image acquisition techniques, have become the mainstream for automated airfield pavement distress detection. However, fully supervised DL methods require a large number of manually annotated ground truth labels to achieve high accuracy. To address the challenge of limited high-quality manual annotations, we propose a novel end-to-end distress detection model called class activation map informed weakly supervised distress detection (WSDD-CAM). Based on YOLOv5, WSDD-CAM consists of an efficient backbone, a classification branch, and a localization network. By utilizing class activation map (CAM) information, our model significantly reduces the need for manual annotations, automatically generating pseudo bounding boxes with a 71% overlap with the ground truth. To evaluate WSDD-CAM, we tested it on a self-made dataset and compared it with other weakly supervised and fully supervised models. The results show that our model achieves 49.2% mean average precision (mAP), outperforming other weakly supervised methods and even approaching state-of-the-art fully supervised methods. Additionally, ablation experiments confirm the effectiveness of our architecture design. In conclusion, our WSDD-CAM model offers a promising solution for airfield pavement distress detection, reducing manual annotation time while maintaining high accuracy. This efficient and effective approach can significantly contribute to cost-effective airport maintenance management.
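A sketch of one common way to turn CAM information into pseudo bounding boxes, by thresholding the map and boxing each connected component; the threshold, minimum area, and use of scipy's connected-component labelling are illustrative assumptions rather than the WSDD-CAM pipeline itself.

```python
import numpy as np
from scipy import ndimage

def cam_to_pseudo_boxes(cam, thresh=0.4, min_area=25):
    """Threshold a CAM and return pseudo boxes as (x1, y1, x2, y2) tuples.

    cam: (H, W) class activation map normalised to [0, 1]
    """
    mask = cam >= thresh
    labelled, n = ndimage.label(mask)                 # connected components
    boxes = []
    for comp in range(1, n + 1):
        ys, xs = np.where(labelled == comp)
        if ys.size < min_area:
            continue                                  # drop tiny spurious blobs
        boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes
```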
Funding: supported in part by the National Cancer Institute under award numbers R01CA268287A1, U01CA269181, R01CA26820701A1, R01CA249992-01A1, R01CA202752-01A1, R01CA208236-01A1, R01CA216579-01A1, R01CA220581-01A1, R01CA257612-01A1, 1U01CA239055-01, 1U01CA248226-01, 1U54CA254566-01; National Heart, Lung and Blood Institute 1R01HL15127701A1, R01HL15807101A1; National Institute of Biomedical Imaging and Bioengineering 1R43EB028736-01; VA Merit Review Award IBX004121A from the United States Department of Veterans Affairs Biomedical Laboratory Research and Development Service; the Office of the Assistant Secretary of Defense for Health Affairs, through the Breast Cancer Research Program (W81XWH-19-1-0668), the Prostate Cancer Research Program (W81XWH-20-1-0851), the Lung Cancer Research Program (W81XWH-18-1-0440, W81XWH-20-1-0595), and the Peer Reviewed Cancer Research Program (W81XWH-18-1-0404, W81XWH-21-1-0345, W81XWH-211-0160); the Kidney Precision Medicine Project (KPMP) Glue Grant and sponsored research agreements from Bristol Myers-Squibb, Boehringer-Ingelheim, Eli-Lilly and Astrazeneca; supported in part by the National Natural Science Foundation of China general program (No. 61571314), the Sichuan University-Yibin City Strategic Cooperation Special Fund (No. 2020CDYB-27), and the Support Program of Sichuan Science and Technology Department (No. 2023YFS0327-LH).
Abstract: Accurate prognosis prediction is essential for guiding cancer treatment and improving patient outcomes. While recent studies have demonstrated the potential of histopathological images in survival analysis, existing models are typically developed in a cancer-specific manner, lack extensive external validation, and often rely on molecular data that are not routinely available in clinical practice. To address these limitations, we present PROGPATH, a unified model capable of integrating histopathological image features with routinely collected clinical variables to achieve pan-cancer prognosis prediction. PROGPATH employs a weakly supervised deep learning architecture built upon a foundation model for image encoding. Morphological features are aggregated through an attention-guided multiple instance learning module and fused with clinical information via a cross-attention transformer. A router-based classification strategy further refines the prediction performance. PROGPATH was trained on 7,999 whole-slide images (WSIs) from 6,670 patients across 15 cancer types, and extensively validated on 17 external cohorts with a total of 7,374 WSIs from 4,441 patients, covering 12 cancer types from 8 consortia and institutions across three continents. PROGPATH achieved consistently superior performance compared with state-of-the-art multimodal prognosis prediction models. It demonstrated strong generalizability across cancer types and robustness in stratified subgroups, including early- and advanced-stage patients, treatment cohorts (radiotherapy and pharmaceutical therapy), and biomarker-defined subsets. We further provide model interpretability by identifying pathological patterns critical to PROGPATH's risk predictions, such as the degree of cell differentiation and extent of necrosis. Together, these results highlight the potential of PROGPATH to support pan-cancer outcome prediction and inform personalized cancer management strategies.
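A compact sketch of gated-attention multiple-instance pooling of the general kind used to aggregate patch features from a whole-slide image into one slide-level representation; the layer sizes and the gated-attention form are assumptions for illustration, not PROGPATH's exact module.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Gated-attention pooling over patch embeddings (illustrative sketch)."""

    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.v = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
        self.u = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid())
        self.w = nn.Linear(hidden, 1)

    def forward(self, patches):
        # patches: (N, dim) embeddings of all patches from one whole-slide image
        a = self.w(self.v(patches) * self.u(patches))   # (N, 1) attention logits
        a = torch.softmax(a, dim=0)                     # attention weights over patches
        return (a * patches).sum(dim=0)                 # (dim,) slide-level feature
```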
Funding: supported in part by the National Key R&D Program of China (2017YFB0502904) and the National Science Foundation of China (61876140).
Abstract: Recently, video object segmentation has received great attention in the computer vision community. Most of the existing methods heavily rely on pixel-wise human annotations, which are expensive and time-consuming to obtain. To tackle this problem, we make an early attempt to achieve video object segmentation with scribble-level supervision, which can alleviate large amounts of human labor for collecting the manual annotation. However, using conventional network architectures and learning objective functions under this scenario cannot work well, as the supervision information is highly sparse and incomplete. To address this issue, this paper introduces two novel elements to learn the video object segmentation model. The first one is the scribble attention module, which captures more accurate context information and learns an effective attention map to enhance the contrast between foreground and background. The other one is the scribble-supervised loss, which can optimize the unlabeled pixels and dynamically correct inaccurate segmented areas during the training stage. To evaluate the proposed method, we implement experiments on two video object segmentation benchmark datasets, YouTube video object segmentation (VOS) and densely annotated video segmentation (DAVIS) 2017. We first generate the scribble annotations from the original per-pixel annotations. Then, we train our model and compare its test performance with the baseline models and other existing works. Extensive experiments demonstrate that the proposed method works effectively and approaches the performance of methods that require dense per-pixel annotations.
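A minimal sketch of a partial cross-entropy loss, computed only on scribble-annotated pixels while unlabelled pixels are ignored; this is a standard baseline for scribble supervision and is an assumption here, not the paper's full scribble-supervised loss, which additionally optimizes the unlabelled pixels.

```python
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, scribble, ignore_index=-1):
    """Cross-entropy only on scribble-annotated pixels (illustrative baseline).

    logits:   (B, K, H, W) per-pixel class logits
    scribble: (B, H, W) integer labels; unlabelled pixels carry ignore_index
    """
    return F.cross_entropy(logits, scribble, ignore_index=ignore_index)
```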
Funding: Supported by the National Key Research and Development Programme of China (2018YFC0831201).
Abstract: Background In computer vision, simultaneously estimating human pose, shape, and clothing is a practical issue in real life, but remains a challenging task owing to the variety of clothing, the complexity of deformation, the shortage of large-scale datasets, and the difficulty of estimating clothing style. Methods We propose a multistage weakly supervised method that makes full use of data with less labeled information for learning to estimate human body shape, pose, and clothing deformation. In the first stage, the SMPL human-body model parameters were regressed using the multi-view 2D key points of the human body. Using multi-view information as weakly supervised information can avoid the depth ambiguity problem of a single view, obtain a more accurate human posture, and access supervisory information easily. In the second stage, clothing is represented by a PCA-based model that uses two-dimensional key points of clothing as supervised information to regress the parameters. In the third stage, we predefine an embedding graph for each type of clothing to describe the deformation. Then, the mask information of the clothing is used to further adjust the deformation of the clothing. To facilitate training, we constructed a multi-view synthetic dataset that includes BCNet and SURREAL. Results The experiments show that the accuracy of our method reaches the same level as that of SOTA methods using strong supervision information while only using weakly supervised information. Because this study uses only weakly supervised information, which is much easier to obtain, it has the advantage of utilizing existing data as training data. Experiments on the DeepFashion2 dataset show that our method can make full use of existing weak supervision information for fine-tuning on a dataset with little supervision information, in contrast to strong supervision information that cannot be trained or adjusted owing to the lack of exact annotation information. Conclusions Our weakly supervised method can accurately estimate human body size, pose, and several common types of clothing, and overcomes the current shortage of clothing data.
Funding: supported by the Guangdong Natural Science Foundation (2021B1515020085), Shenzhen Science and Technology Program (RCYX20210609103121030), National Natural Science Foundation of China (62322207, 61872250, U2001206, U21B2023), Department of Education of Guangdong Province Innovation Team (2022KCXTD025), Shenzhen Science and Technology Innovation Program (JCYJ20210324120213036), the Natural Sciences and Engineering Research Council of Canada (NSERC), and Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen).
Abstract: Since the preparation of labeled data for training semantic segmentation networks of point clouds is a time-consuming process, weakly supervised approaches have been introduced to learn from only a small fraction of data. These methods are typically based on learning with contrastive losses while automatically deriving per-point pseudo-labels from a sparse set of user-annotated labels. In this paper, our key observation is that the selection of which samples to annotate is as important as how these samples are used for training. Thus, we introduce a method for weakly supervised segmentation of 3D scenes that combines self-training with active learning. Active learning selects points for annotation that are likely to result in improvements to the trained model, while self-training makes efficient use of the user-provided labels for learning the model. We demonstrate that our approach leads to an effective method that provides improvements in scene segmentation over previous work and baselines, while requiring only a few user annotations.
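A small sketch of one common active-learning criterion, picking the points whose predicted class distribution has the highest entropy for annotation; entropy-based selection is a generic choice assumed here for illustration, not necessarily the selection rule used in the paper.

```python
import numpy as np

def select_points_for_annotation(probs, budget=50, already_labelled=None):
    """Pick the most uncertain points (highest predictive entropy) to annotate.

    probs: (N, K) per-point class probabilities from the current model
    """
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    if already_labelled is not None:
        entropy[already_labelled] = -np.inf            # never re-select labelled points
    return np.argsort(entropy)[-budget:]               # indices of `budget` points to label
```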
Funding: supported by the National Natural Science Foundation of China (No. 61672268).
Abstract: Temporal action localization (TAL) is the task of detecting the start and end timestamps of action instances and classifying them in an untrimmed video. As the number of action categories per video increases, existing weakly supervised TAL (W-TAL) methods with only video-level labels cannot provide sufficient supervision. Single-frame supervision has therefore attracted the interest of researchers. Existing paradigms model single-frame annotations from the perspective of video snippet sequences, neglect the action discrimination of annotated frames, and do not pay sufficient attention to their correlations within the same category. Within a given category, the annotated frames exhibit distinctive appearance characteristics or clear action patterns. Thus, a novel method to enhance action discrimination via category-specific frame clustering for W-TAL is proposed. Specifically, the K-means clustering algorithm is employed to aggregate the annotated discriminative frames of the same category, which are regarded as exemplars that exhibit the characteristics of the action category. Then, the class activation scores are obtained by calculating the similarities between a frame and the exemplars of various categories. Category-specific representation modeling can provide complementary guidance to the snippet sequence modeling in the mainline. As a result, a convex combination fusion mechanism is presented for annotated frames and snippet sequences to enhance the consistency properties of action discrimination, which can generate a robust class activation sequence for precise action classification and localization. Owing to the supplementary guidance of action-discrimination enhancement for video snippet sequences, our method outperforms existing single-frame-annotation-based methods. Experiments conducted on three datasets (THUMOS14, GTEA, and BEOID) show that our method achieves high localization performance compared with state-of-the-art methods.
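A rough sketch of the exemplar idea described above: annotated frames of each category are clustered with k-means, and a frame's class activation scores come from its similarity to each category's exemplars. The cosine similarity, cluster count, and max-over-exemplars pooling are assumptions for illustration; the convex combination fusion would then mix such scores with the snippet-sequence scores, e.g. lambda * s_exemplar + (1 - lambda) * s_snippet, where the weighting here is likewise assumed.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_exemplars(frame_feats_by_class, n_clusters=4):
    """Cluster annotated frame features per category; centroids act as exemplars."""
    return {c: KMeans(min(n_clusters, len(f)), n_init=10).fit(f).cluster_centers_
            for c, f in frame_feats_by_class.items()}

def class_activation_scores(feat, exemplars):
    """Score one frame feature against every category's exemplars (cosine, max-pooled)."""
    feat = feat / (np.linalg.norm(feat) + 1e-8)
    scores = {}
    for c, ex in exemplars.items():
        ex = ex / (np.linalg.norm(ex, axis=1, keepdims=True) + 1e-8)
        scores[c] = float((ex @ feat).max())          # best-matching exemplar of category c
    return scores
```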
Funding: This work was supported in part by the Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, China (No. 2022B1212010011).
Abstract: The problem of art forgery and infringement is becoming increasingly prominent, since diverse self-media contents with all kinds of art pieces are released on the Internet every day. For art paintings, object detection and localization provide an efficient and effective means of art authentication and copyright protection. However, the acquisition of a precise detector requires large amounts of expensive pixel-level annotations. To alleviate this, we propose a novel weakly supervised object localization (WSOL) method with background superposition erasing (BSE), which recognizes objects with inexpensive image-level labels. First, integrated adversarial erasing (IAE) for a vanilla convolutional neural network (CNN) drops out the most discriminative region by leveraging high-level semantic information. Second, a background suppression module (BSM) limits the activation area of the IAE to the object region through a self-guidance mechanism. Finally, in the inference phase, we utilize the refined importance map (RIM) of middle features to obtain class-agnostic localization results. Extensive experiments are conducted on paintings, CUB-200-2011, and ILSVRC to validate the effectiveness of our BSE.
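A brief sketch of the general adversarial-erasing step that approaches like IAE build on: the most highly activated region of the current CAM is masked out so that the classifier is forced to find complementary evidence elsewhere. The quantile threshold and the feature-level masking below are illustrative assumptions, not the exact BSE procedure.

```python
import torch

def erase_most_discriminative(features, cam, quantile=0.9):
    """Zero out feature locations where the CAM is in its top activation quantile.

    features: (B, C, H, W) intermediate feature maps
    cam:      (B, H, W) normalised class activation map in [0, 1]
    """
    thresh = torch.quantile(cam.flatten(1), quantile, dim=1).view(-1, 1, 1)
    keep = (cam < thresh).float().unsqueeze(1)        # 0 where most discriminative
    return features * keep                            # erased features for further training
```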
Funding: supported by the National Natural Science Foundation of China (No. 61871182, 61773160), the Natural Science Foundation of Hebei Province of China (No. F2021502013), the Fundamental Research Funds for the Central Universities (No. 2020MS153, 2021PT018), and the National Natural Science Foundation of China (No. 62371188).
Abstract: Due to the lack of annotations of target bounding boxes, most methods for weakly supervised target detection transform the problem of object detection into a classification problem over candidate regions, making weakly supervised target detectors prone to locating only salient and highly discriminative local areas of objects. We propose a weakly supervised method that combines attention and erasure mechanisms. The weakly supervised target detection method uses attention maps to search for areas with higher discrimination within candidate regions, and then uses an erasure mechanism to erase those regions, forcing the model to enhance its learning of features in areas with weaker discrimination. To improve the localization ability of the detector, we cascade the weakly supervised target detection network and the fully supervised target detection network, and jointly train them through multi-task learning. Based on the validation trials, the category mean average precision (mAP) and the correct localization (CorLoc) on the two datasets, i.e., VOC2007 and VOC2012, are 55.2% and 53.8%, respectively. In regard to mAP and CorLoc, this approach significantly outperforms previous approaches, which creates opportunities for additional investigations into weakly supervised target detection algorithms.
Abstract: In recent years, few-shot graph anomaly detection has attracted broad research interest across many fields; it aims to detect anomalous behaviour among a large number of unlabelled test nodes (the query set) under the guidance of a small number of labelled training nodes (the support set). However, existing few-shot graph anomaly detection algorithms typically assume that they can learn from training tasks (meta-training tasks) with abundant labelled nodes and thereby generalize effectively to test tasks (meta-testing tasks) with few labelled nodes, an assumption that does not match real-world application conditions. In practice, the meta-training tasks available for few-shot graph anomaly detection usually contain extremely limited labelled nodes, with a label ratio often no more than 0.1%, or even lower. Owing to the large task gap between meta-training and meta-testing tasks, existing few-shot graph anomaly detection algorithms are prone to overfitting. Moreover, existing algorithms exploit only the first-order neighbourhood between nodes (local structural information) to learn low-dimensional node embeddings while ignoring long-range dependencies between nodes (global structural information), leading to inaccurate and distorted embeddings. To address these challenges, this paper proposes EWS-FSGAD, a few-shot graph anomaly detection algorithm for extremely weakly supervised scenarios. Specifically, the method first proposes a simple and effective graph neural network framework, GLN (Global and Local Network), which simultaneously exploits the global and local structural information between nodes and further introduces an attention mechanism for information interaction between nodes, thereby learning more robust low-dimensional node embeddings. The method also introduces a self-supervised reconstruction loss from graph contrastive learning that keeps the mutual information between the low-dimensional embeddings of a node's original view and its augmented view as consistent as possible, providing additional effective self-supervision for optimizing the EWS-FSGAD model and improving its generalization. To improve the model's fast adaptation to few-shot graph anomaly detection tasks in real scenarios, the method introduces a cross-network meta-learning training mechanism that learns transferable meta-knowledge from multiple auxiliary networks and provides a good parameter initialization, enabling the model to generalize effectively by fine-tuning on a target network with very few, or even a single, labelled node. Extensive experiments on three real-world datasets (Flickr, PubMed, Yelp) show that the proposed method clearly outperforms existing graph anomaly detection algorithms; on the PubMed dataset in particular, AUC-PR improves by 28.8% to 35.4%. These results strongly demonstrate that, guided by meta-training tasks with extremely limited labels, the proposed method learns the essential characteristics of anomalous nodes better and thus improves the effectiveness of few-shot graph anomaly detection.