In the field of optoelectronics, certain types of data may be difficult to accurately annotate, such as high-resolution optoelectronic imaging or imaging in certain special spectral ranges. Weakly supervised learning can provide a more reliable approach in these situations. Current popular approaches mainly adopt classification-based class activation maps (CAM) as initial pseudo labels to solve the task.
Existing weakly supervised semantic segmentation (WSSS) methods based on image-level labels always rely on class activation maps (CAMs), which measure the relationships between features and classifiers. However, CAMs only focus on the most discriminative regions of images, resulting in poor coverage performance. We attribute this to the deficient recognition ability of a single classifier and the negative impact of magnitudes during the CAM normalisation process. To address these issues, we propose to construct selective multiple classifiers (SMC). During training, we extract multiple prototypes for each class and store them in the corresponding memory bank. These prototypes are divided into foreground and background prototypes, with the former used to identify foreground objects and the latter aimed at preventing the false activation of background pixels. At the inference stage, multiple prototypes are adaptively selected from the memory bank for each image as SMC. Subsequently, CAMs are generated by measuring the angle between SMC and features. Adaptively constructing multiple classifiers for each image enhances the recognition ability of the classifiers, while relying only on angle measurement to generate CAMs alleviates the suppression caused by magnitudes. Furthermore, SMC can be integrated into other WSSS approaches to help generate better CAMs. Extensive experiments conducted on standard WSSS benchmarks such as PASCAL VOC 2012 and MS COCO 2014 demonstrate the superiority of our proposed method.
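The angle-measurement idea above can be illustrated with a small numpy sketch. This is a toy illustration, not the paper's trained model: the feature map, the prototype values, and the max-fusion over prototypes are all assumptions for demonstration. Normalising both features and prototypes before the dot product means only the angle between them, never the magnitude, determines the activation:

```python
import numpy as np

def angle_cam(features, prototypes):
    """Generate a class activation map from the cosine of the angle between
    pixel features and class prototypes; feature magnitudes cancel out."""
    # features: (H, W, C) pixel embeddings; prototypes: (K, C), several
    # prototypes (classifiers) for one class.
    f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
    cams = f @ p.T                   # (H, W, K) cosine similarities in [-1, 1]
    return np.max(cams, axis=-1)     # fuse the multiple classifiers by max

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 8))   # toy 4x4 feature map, 8-dim features
protos = rng.normal(size=(3, 8))     # toy prototypes for a single class
cam = angle_cam(feats, protos)
print(cam.shape)  # (4, 4)
```

Because of the normalisation, rescaling the features (e.g., multiplying them by 10) leaves the CAM unchanged, which is the magnitude-suppression property the abstract refers to.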
The primary challenge in weakly supervised semantic segmentation is effectively leveraging weak annotations while minimizing the performance gap compared to fully supervised methods. End-to-end model designs have gained significant attention for improving training efficiency. Most current algorithms rely on Convolutional Neural Networks (CNNs) for feature extraction. Although CNNs are proficient at capturing local features, they often struggle with global context, leading to incomplete and false Class Activation Mapping (CAM). To address these limitations, this work proposes a Contextual Prototype-Based End-to-End Weakly Supervised Semantic Segmentation (CPEWS) model, which improves feature extraction by utilizing the Vision Transformer (ViT). By incorporating its intermediate feature layers to preserve semantic information, this work introduces the Intermediate Supervised Module (ISM) to supervise the final layer's output, reducing boundary ambiguity and mitigating issues related to incomplete activation. Additionally, the Contextual Prototype Module (CPM) generates class-specific prototypes, while the proposed Prototype Discrimination Loss (LPDL) and Superclass Suppression Loss (LSSL) guide the network's training, effectively addressing false activation without the need for extra supervision. The proposed CPEWS model achieves state-of-the-art performance in end-to-end weakly supervised semantic segmentation without additional supervision, reaching a Mean Intersection over Union (MIoU) of 69.8% on the validation set and 72.6% on the test set of the PASCAL VOC 2012 dataset. Compared with ToCo (pre-trained on ImageNet-1k), the MIoU on the test set is 2.1% higher. In addition, the MIoU reached 41.4% on the validation set of the MS COCO 2014 dataset.
Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research endeavors utilize weakly supervised methods, which aim to reduce annotation expenses by leveraging sparsely annotated data such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network is a three-branch architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One branch is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the historical flaw of pseudo-labels that are not updated with network training, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution operates efficiently by simultaneously considering both edge-mask aid and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
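The mixup-style pseudo-label update can be sketched in a few lines of numpy. The blending weight and the per-pixel soft-probability form of the labels are illustrative assumptions here; the paper's exact blending schedule is not specified in this abstract:

```python
import numpy as np

def update_pseudo_labels(pseudo_old, pred_new, lam=0.7):
    """Blend the current network prediction into the running pseudo-label,
    so the supervision signal keeps improving as training proceeds."""
    # pseudo_old, pred_new: (H, W) soft road-probability maps in [0, 1]
    return lam * pred_new + (1.0 - lam) * pseudo_old

old = np.zeros((2, 2))   # stale pseudo-label: no road anywhere
new = np.ones((2, 2))    # current prediction: road everywhere
print(update_pseudo_labels(old, new, lam=0.7))  # all entries 0.7
```

Each training step nudges the pseudo-label toward the latest prediction instead of freezing it at its initial value, which is the "dynamic" behaviour the abstract contrasts with static pseudo-labels.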
The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making it challenging to accurately diagnose the disease. This study proposed a deep-learning disease diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fused classification with segmentation. First, the data were preprocessed. An optimizable weakly supervised segmentation preprocessing method (O-WSSPM) was used to remove redundant data and solve the category imbalance problem. Second, a deep-learning fusion method was used for feature extraction and classification recognition. A dual asymmetric complementary bilinear feature extraction method (D-CBM) was used to fully extract complementary features, which solved the problem of insufficient feature extraction by a single deep learning network. Third, an unsupervised learning method based on Fuzzy C-Means (FCM) clustering was used to segment and visualize COVID-19 lesions, enabling physicians to accurately assess lesion distribution and disease severity. In this study, 5-fold cross-validation was used, and the results showed that the network had an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively provide physicians with automated diagnostic aid to determine whether the disease is present and, for COVID-19 patients, to further predict the lesion area.
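A minimal numpy sketch of plain fuzzy C-means shows the clustering step that the segmentation stage relies on. This is the textbook algorithm on toy 2-D points standing in for pixel features; the fuzzifier m, iteration count, and data are assumptions, not the paper's configuration:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means: returns soft memberships U (n, c) and centres (c, d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m                               # fuzzified memberships
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-10
        inv = d ** (-2.0 / (m - 1.0))             # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres

# Two well-separated 2-D blobs stand in for lesion vs. background pixel features.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5, 0.1, (20, 2))])
U, centres = fuzzy_c_means(X)
labels = U.argmax(axis=1)   # hard assignment for visualization
```

Unlike k-means, each point keeps a soft membership in every cluster, which is what makes the cluster map usable as a graded lesion visualization rather than a hard mask.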
Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors because scribble annotations can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm was first performed on both the colours and coordinates of the original annotations; the same labels were then assigned to points whose colours are similar to a colour cluster centre and whose positions are near a coordinate cluster centre. Next, the same annotations were further assigned to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method can significantly improve performance and achieve state-of-the-art results.
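The colour-and-position consistency assumption can be sketched with a simplified stand-in for the paper's k-means-based label inference: here each scribble label contributes one centre in joint (colour, position) space, and every pixel takes the label of its nearest centre. The joint feature space, the position weight, and the single-centre-per-label simplification are all assumptions for illustration:

```python
import numpy as np

def propagate_scribbles(colours, coords, scrib_idx, scrib_labels, pos_weight=1.0):
    """Assign each pixel the label of the nearest scribble cluster centre in
    joint (colour, position) space -- simplified scribble label inference."""
    feats = np.hstack([colours, pos_weight * coords])   # (n, 5) joint features
    centres, centre_labels = [], []
    for lab in np.unique(scrib_labels):
        sel = scrib_idx[scrib_labels == lab]
        centres.append(feats[sel].mean(axis=0))         # one centre per label
        centre_labels.append(lab)
    centres = np.stack(centres)
    d = np.linalg.norm(feats[:, None, :] - centres[None], axis=2)
    return np.array(centre_labels)[d.argmin(axis=1)]    # (n,) inferred labels

# Four toy pixels: two dark ones near the origin, two bright ones far away.
colours = np.array([[0.0, 0, 0], [0.1, 0, 0], [1, 1, 1], [0.9, 1, 1]])
coords = np.array([[0.0, 0], [0, 1], [5, 5], [5, 6]])
labels = propagate_scribbles(colours, coords,
                             scrib_idx=np.array([0, 2]),     # pixels scribbled
                             scrib_labels=np.array([0, 1]))  # their labels
print(labels)  # [0 0 1 1]
```

The two unscribbled pixels inherit the labels of their nearby, similarly coloured scribbled neighbours, which is exactly the consistency assumption stated in the abstract.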
A large variety of complaint reports reflect subjective information expressed by citizens. A key challenge of text summarization for complaint reports is to ensure the factual consistency of the generated summary. Therefore, in this paper, a simple and weakly supervised framework considering factual consistency is proposed to generate summaries of city-based complaint reports without pre-labeled sentences/words. Furthermore, it considers the importance of entities in complaint reports to ensure the factual consistency of summaries. Experimental results on customer review datasets (Yelp and Amazon) and a complaint report dataset (complaint reports of Shenyang, China) show that the proposed framework outperforms state-of-the-art approaches in ROUGE scores and human evaluation, demonstrating the effectiveness of our approach in dealing with complaint reports.
In weakly supervised video anomaly detection (WSVAD) tasks, the temporal relationships within a video are crucial for modeling event patterns. The Transformer is a commonly used method for modeling temporal relationships. However, due to the large amount of redundancy in videos and the quadratic complexity of the Transformer, this method cannot effectively model long-range information. In addition, most WSVAD methods select key snippets based on predicted scores to represent event patterns, but this paradigm is susceptible to noise interference. To address these issues, a novel temporal context and representative feature learning (TCRFL) method for WSVAD is proposed. Specifically, a temporal context learning (TCL) module is proposed to utilize both Mamba, with linear complexity, and the Transformer to capture short-range and long-range dependencies of events. In addition, a representative feature learning (RFL) module is proposed to mine representative snippets that capture important information about events, further spreading it to video features to enhance the influence of representative features. The RFL module not only suppresses noise interference but also guides the model to select key snippets more accurately. Experimental results on the UCF-Crime, XD-Violence, and ShanghaiTech datasets demonstrate the effectiveness and superiority of our method.
Recently, video object segmentation has received great attention in the computer vision community. Most existing methods rely heavily on pixel-wise human annotations, which are expensive and time-consuming to obtain. To tackle this problem, we make an early attempt to achieve video object segmentation with scribble-level supervision, which can save large amounts of human labor in collecting manual annotations. However, conventional network architectures and learning objective functions do not work well under this scenario, as the supervision information is highly sparse and incomplete. To address this issue, this paper introduces two novel elements for learning the video object segmentation model. The first is the scribble attention module, which captures more accurate context information and learns an effective attention map to enhance the contrast between foreground and background. The other is the scribble-supervised loss, which can optimize the unlabeled pixels and dynamically correct inaccurately segmented areas during the training stage. To evaluate the proposed method, we implement experiments on two video object segmentation benchmark datasets: YouTube video object segmentation (VOS) and densely annotated video segmentation (DAVIS) 2017. We first generate scribble annotations from the original per-pixel annotations, then train our model and compare its test performance with baseline models and other existing works. Extensive experiments demonstrate that the proposed method works effectively and approaches the performance of methods requiring dense per-pixel annotations.
Background: In computer vision, simultaneously estimating human pose, shape, and clothing is a practical issue in real life, but remains a challenging task owing to the variety of clothing, the complexity of deformation, the shortage of large-scale datasets, and the difficulty of estimating clothing style. Methods: We propose a multistage weakly supervised method that makes full use of data with less labeled information for learning to estimate human body shape, pose, and clothing deformation. In the first stage, the SMPL human-body model parameters are regressed using multi-view 2D key points of the human body. Using multi-view information as weak supervision avoids the depth ambiguity of a single view, yields a more accurate human posture, and makes supervisory information easy to obtain. In the second stage, clothing is represented by a PCA-based model that uses two-dimensional key points of clothing as supervision to regress the parameters. In the third stage, we predefine an embedding graph for each type of clothing to describe the deformation; the mask information of the clothing is then used to further adjust the clothing deformation. To facilitate training, we constructed a multi-view synthetic dataset that includes BCNet and SURREAL. Results: Experiments show that the accuracy of our method reaches the same level as that of SOTA methods using strong supervision, while only using weakly supervised information. Because this study uses only weakly supervised information, which is much easier to obtain, it has the advantage of utilizing existing data as training data. Experiments on the DeepFashion2 dataset show that our method can make full use of existing weak supervision for fine-tuning on a dataset with little supervision information, whereas strongly supervised methods cannot be trained or adjusted owing to the lack of exact annotation information. Conclusions: Our weakly supervised method can accurately estimate human body size, pose, and several common types of clothing, and overcomes the current shortage of clothing data.
We study the novel problem of weakly supervised instance action recognition (WSiAR) in multi-person (crowd) scenes. We specifically aim to recognize the action of each subject in the crowd, for which we propose a weakly supervised method, considering the expense of large-scale annotations for training. This problem is of great practical value for video surveillance and sports scene analysis. To this end, we investigated and designed a series of weak annotations for the supervision of WSiAR. We propose two categories of weak label settings, bag labels and sparse labels, to significantly reduce the number of labels. Based on the former, we propose a novel sub-block-aware multi-instance learning (MIL) loss to obtain more effective information from weak labels during training. For the latter, we propose a pseudo-label generation strategy for extending sparse labels. This enables our method to achieve results comparable to those of fully supervised methods but with significantly fewer annotations. Experimental results on two benchmarks verify the soundness of the problem definition and the effectiveness of the proposed weakly supervised training method.
Accurate and timely surveying of airfield pavement distress is crucial for cost-effective airport maintenance. Deep learning (DL) approaches, leveraging advancements in computer science and image acquisition techniques, have become the mainstream for automated airfield pavement distress detection. However, fully supervised DL methods require a large number of manually annotated ground truth labels to achieve high accuracy. To address the challenge of limited high-quality manual annotations, we propose a novel end-to-end distress detection model called class activation map informed weakly supervised distress detection (WSDD-CAM). Based on YOLOv5, WSDD-CAM consists of an efficient backbone, a classification branch, and a localization network. By utilizing class activation map (CAM) information, our model significantly reduces the need for manual annotations, automatically generating pseudo bounding boxes with a 71% overlap with the ground truth. To evaluate WSDD-CAM, we tested it on a self-made dataset and compared it with other weakly supervised and fully supervised models. The results show that our model achieves 49.2% mean average precision (mAP), outperforming other weakly supervised methods and even approaching state-of-the-art fully supervised methods. Additionally, ablation experiments confirm the effectiveness of our architecture design. In conclusion, our WSDD-CAM model offers a promising solution for airfield pavement distress detection, reducing manual annotation time while maintaining high accuracy. This efficient and effective approach can contribute significantly to cost-effective airport maintenance management.
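Turning a CAM into a pseudo bounding box is typically done by thresholding the map and taking the tight box around the activated region; the sketch below illustrates that generic step on toy data (the threshold rule and the single-box assumption are ours, not details from the paper):

```python
import numpy as np

def cam_to_bbox(cam, thresh=0.5):
    """Threshold a class activation map and return the tight bounding box
    (x_min, y_min, x_max, y_max) around the activated region, usable as a
    pseudo box label in place of a manual annotation."""
    if cam.max() <= 0:
        return None                      # no activation at all
    mask = cam >= thresh * cam.max()     # keep pixels above half the peak
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

cam = np.zeros((8, 8))
cam[2:5, 3:7] = 1.0        # a toy "distress" activation blob
print(cam_to_bbox(cam))    # (3, 2, 6, 4)
```

The pseudo boxes produced this way are what a weakly supervised detector can be trained on instead of hand-drawn ground-truth boxes.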
Accurate prognosis prediction is essential for guiding cancer treatment and improving patient outcomes. While recent studies have demonstrated the potential of histopathological images in survival analysis, existing models are typically developed in a cancer-specific manner, lack extensive external validation, and often rely on molecular data that are not routinely available in clinical practice. To address these limitations, we present PROGPATH, a unified model capable of integrating histopathological image features with routinely collected clinical variables to achieve pan-cancer prognosis prediction. PROGPATH employs a weakly supervised deep learning architecture built upon a foundation model for image encoding. Morphological features are aggregated through an attention-guided multiple instance learning module and fused with clinical information via a cross-attention transformer. A router-based classification strategy further refines the prediction performance. PROGPATH was trained on 7999 whole-slide images (WSIs) from 6670 patients across 15 cancer types, and extensively validated on 17 external cohorts with a total of 7374 WSIs from 4441 patients, covering 12 cancer types from 8 consortia and institutions across three continents. PROGPATH achieved consistently superior performance compared with state-of-the-art multimodal prognosis prediction models. It demonstrated strong generalizability across cancer types and robustness in stratified subgroups, including early- and advanced-stage patients, treatment cohorts (radiotherapy and pharmaceutical therapy), and biomarker-defined subsets. We further provide model interpretability by identifying pathological patterns critical to PROGPATH's risk predictions, such as the degree of cell differentiation and the extent of necrosis. Together, these results highlight the potential of PROGPATH to support pan-cancer outcome prediction and inform personalized cancer management strategies.
Action recognition and localization in untrimmed videos are important for many applications and have attracted a lot of attention. Since full supervision with frame-level annotation places an overwhelming burden on manual labeling effort, learning with weak video-level supervision becomes a potential solution. In this paper, we propose a novel weakly supervised framework to recognize actions and locate the corresponding frames in untrimmed videos simultaneously. Considering that abundant trimmed videos are publicly available and well segmented with semantic descriptions, the instructive knowledge learned on trimmed videos can be fully leveraged to analyze untrimmed videos. We present an effective knowledge transfer strategy based on inter-class semantic relevance. We also take advantage of the self-attention mechanism to obtain a compact video representation, such that the influence of background frames can be effectively eliminated. A learning architecture is designed with twin networks for trimmed and untrimmed videos, to facilitate transferable self-attentive representation learning. Extensive experiments are conducted on three untrimmed benchmark datasets (i.e., THUMOS14, ActivityNet1.3, and MEXaction2), and the experimental results clearly corroborate the efficacy of our method. It is especially encouraging to see that the proposed weakly supervised method even achieves comparable results to some fully supervised methods.
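The self-attentive pooling step that produces a compact video representation can be sketched as follows. This is a generic attention-pooling sketch on toy data, not the paper's network: the attention vector `w` stands in for whatever scoring module the model learns:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_video_pool(frame_feats, w):
    """Pool per-frame features (T, C) into a single compact video vector,
    weighting frames by attention so background frames are down-weighted."""
    attn = softmax(frame_feats @ w)      # (T,) one weight per frame
    return attn @ frame_feats, attn      # (C,) attention-weighted average

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))   # 5 frames, 4-dim features (toy data)
w = rng.normal(size=4)            # stand-in for a learned attention vector
video_vec, attn = attentive_video_pool(feats, w)
```

Frames whose features score low against `w` receive near-zero weight, which is how background frames are effectively eliminated from the pooled representation.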
This paper presents a novel algorithm for an extreme form of weak label learning, in which only one of all relevant labels is given for each training sample. Using a genetic algorithm, all of the labels in the training set are optimally divided into several non-overlapping groups to maximize the label distinguishability within every group. Multiple classifiers are trained separately and ensembled for label prediction. Experimental results show significant improvement over previous weak label learning algorithms.
Since the preparation of labeled data for training semantic segmentation networks on point clouds is a time-consuming process, weakly supervised approaches have been introduced to learn from only a small fraction of data. These methods are typically based on learning with contrastive losses while automatically deriving per-point pseudo-labels from a sparse set of user-annotated labels. In this paper, our key observation is that the selection of which samples to annotate is as important as how these samples are used for training. Thus, we introduce a method for weakly supervised segmentation of 3D scenes that combines self-training with active learning. Active learning selects points for annotation that are likely to result in improvements to the trained model, while self-training makes efficient use of the user-provided labels for learning the model. We demonstrate that our approach leads to an effective method that provides improvements in scene segmentation over previous work and baselines, while requiring only a few user annotations.
Temporal action localization (TAL) is the task of detecting the start and end timestamps of action instances and classifying them in an untrimmed video. As the number of action categories per video increases, existing weakly supervised TAL (W-TAL) methods with only video-level labels cannot provide sufficient supervision. Single-frame supervision has therefore attracted the interest of researchers. Existing paradigms model single-frame annotations from the perspective of video snippet sequences, neglect the action discrimination of annotated frames, and do not pay sufficient attention to their correlations within the same category. Within a category, the annotated frames exhibit distinctive appearance characteristics or clear action patterns. Thus, a novel method to enhance action discrimination via category-specific frame clustering for W-TAL is proposed. Specifically, the K-means clustering algorithm is employed to aggregate the annotated discriminative frames of the same category, which are regarded as exemplars that exhibit the characteristics of the action category. Then, class activation scores are obtained by calculating the similarities between a frame and the exemplars of the various categories. Category-specific representation modeling provides complementary guidance to snippet sequence modeling in the mainline. Accordingly, a convex combination fusion mechanism is presented for annotated frames and snippet sequences to enhance the consistency of action discrimination, which can generate a robust class activation sequence for precise action classification and localization. Owing to the supplementary guidance of action discrimination enhancement for video snippet sequences, our method outperforms existing single-frame-annotation-based methods.
Experiments conducted on three datasets (THUMOS14, GTEA, and BEOID) show that our method achieves high localization performance compared with state-of-the-art methods.
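The exemplar-building and scoring steps described above can be sketched with toy data. This is a simplified stand-in, not the paper's implementation: the features, the number of exemplars per category, and the cosine-similarity scoring rule are illustrative assumptions:

```python
import numpy as np

def build_exemplars(frame_feats, frame_cats, k=2, iters=20, seed=0):
    """Cluster the annotated frames of each category into k exemplar centres
    with plain Lloyd-style k-means."""
    rng = np.random.default_rng(seed)
    exemplars = {}
    for cat in np.unique(frame_cats):
        X = frame_feats[frame_cats == cat]
        C = X[rng.choice(len(X), size=min(k, len(X)), replace=False)]
        for _ in range(iters):
            assign = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
            C = np.stack([X[assign == j].mean(axis=0) if np.any(assign == j)
                          else C[j] for j in range(len(C))])
        exemplars[int(cat)] = C
    return exemplars

def class_scores(feat, exemplars):
    """Class activation score: best cosine similarity between a frame and
    each category's exemplars."""
    f = feat / (np.linalg.norm(feat) + 1e-8)
    return {cat: max(float(f @ c) / (np.linalg.norm(c) + 1e-8) for c in C)
            for cat, C in exemplars.items()}

# Toy annotated frames: category 0 along one axis, category 1 along the other.
feats = np.array([[1.0, 0], [0.9, 0.1], [1.1, -0.1],
                  [0, 1.0], [0.1, 0.9], [-0.1, 1.1]])
cats = np.array([0, 0, 0, 1, 1, 1])
ex = build_exemplars(feats, cats)
scores = class_scores(np.array([1.0, 0.05]), ex)  # a frame resembling category 0
```

A query frame close in appearance to category 0's exemplars scores highest for that category, which is how the class activation sequence is formed frame by frame.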
Anticipating future actions without observing any partial videos of those actions plays an important role in action prediction and is a challenging task. To obtain abundant information for action anticipation, some methods integrate multimodal contexts, including scene object labels. However, exhaustively labelling each frame in video datasets requires considerable effort. In this paper, we develop a weakly supervised method that integrates global motion and local fine-grained features from current action videos to predict the next action label without the need for specific scene context labels. Specifically, we extract diverse types of local features with weakly supervised learning, including object appearance and human pose representations, without ground truth. Moreover, we construct a graph convolutional network to exploit the inherent relationships of humans and objects in the present incident. We evaluate the proposed model on two datasets, the MPII-Cooking dataset and the EPIC-Kitchens dataset, and demonstrate the generalizability and effectiveness of our approach for action anticipation.
Temporal localization is crucial for action video recognition. Since manual annotations are expensive and time-consuming for videos, temporal localization with weak video-level labels is challenging but indispensable. In this paper, we propose a weakly supervised temporal action localization approach for untrimmed videos. To address this issue, we train the model based on proxies of each action class; the proxies are used to measure the distances between action segments and the different original action features. We use a proxy-based metric to cluster the same actions together and separate actions from backgrounds. Compared with state-of-the-art methods, our method achieves competitive results on the THUMOS14 and ActivityNet1.2 datasets.
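A proxy-based scoring rule of the general kind described above can be sketched in numpy. The distance-to-proxy softmax, the temperature, and the toy proxies are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def proxy_scores(segment_feats, proxies, temp=1.0):
    """Softmax over negative distances to class proxies: segments near a
    proxy score high for that class, pulling same-class segments together
    and pushing background segments away from all action proxies."""
    d = np.linalg.norm(segment_feats[:, None, :] - proxies[None], axis=2)  # (n, K)
    logits = -d / temp
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # (n_segments, n_classes)

proxies = np.array([[0.0, 0.0], [4.0, 4.0]])   # toy proxies for two action classes
segs = np.array([[0.2, 0.1], [3.9, 4.2]])      # two video segment features
P = proxy_scores(segs, proxies)
```

During training, maximising a segment's score for its own class proxy is what clusters same-class segments and separates actions from the background.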
Background: Image-based automatic diagnosis of field diseases can help increase crop yields and is of great importance. However, crop lesion regions tend to be scattered and of varying sizes; this, along with substantial intra-class variation and small inter-class variation, makes segmentation difficult. Methods: We propose a novel end-to-end system that requires only weak supervision from image-level labels for lesion region segmentation. First, a two-branch network is designed for joint disease classification and seed region generation. The generated seed regions are then used as input to the next segmentation stage, where we design an encoder-decoder network. Different from previous works that use only an encoder in the segmentation network, the encoder-decoder network is critical for our system to successfully segment images with small and scattered regions, which is the major challenge in image-based diagnosis of field diseases. We further propose a novel weakly supervised training strategy for the encoder-decoder semantic segmentation network, making use of the extracted seed regions. Results: Experimental results show that our system achieves better lesion region segmentation results than the state of the art. In addition to crop images, our method is also applicable to general scattered object segmentation. We demonstrate this by extending our framework to the PASCAL VOC dataset, on which it achieves comparable performance with the state-of-the-art DSRG (deep seeded region growing) method. Conclusion: Our method not only outperforms state-of-the-art semantic segmentation methods by a large margin on the lesion segmentation task, but also shows its capability to perform well on more general tasks.
Funding: Supported by the National Natural Science Foundation of China (Grants 62176097 and 61433007), the Fundamental Research Funds for the Central Universities (Grant 2019kfyXKJC024), and the 111 Project on Computational Intelligence and Intelligent Control (Grant B18024).
Abstract: Existing weakly supervised semantic segmentation (WSSS) methods based on image-level labels typically rely on class activation maps (CAMs), which measure the relationships between features and classifiers. However, CAMs focus only on the most discriminative regions of images, resulting in poor coverage. We attribute this to the limited recognition ability of a single classifier and to the negative impact of feature magnitudes during CAM normalisation. To address these issues, we propose to construct selective multiple classifiers (SMC). During training, we extract multiple prototypes for each class and store them in the corresponding memory bank. These prototypes are divided into foreground and background prototypes, with the former used to identify foreground objects and the latter aimed at preventing the false activation of background pixels. At inference, multiple prototypes are adaptively selected from the memory bank for each image as the SMC. CAMs are then generated by measuring the angle between the SMC and the features. Adaptively constructing multiple classifiers for each image enhances recognition ability, while relying only on angle measurement to generate CAMs alleviates the suppression caused by magnitudes. Furthermore, SMC can be integrated into other WSSS approaches to help generate better CAMs. Extensive experiments on standard WSSS benchmarks such as PASCAL VOC 2012 and MS COCO 2014 demonstrate the superiority of the proposed method.
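The key step in the abstract above, generating CAMs from the angle between classifiers and features rather than from magnitude-sensitive inner products, can be sketched with cosine similarity. This is an illustrative reconstruction, not the authors' code; the toy feature vectors, the prototype values, and the choice of max-pooling over prototypes are all assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity depends only on the angle between the vectors,
    # not on their magnitudes.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cam_from_prototypes(features, prototypes):
    # Per-pixel activation = max cosine similarity over the prototypes
    # selected for this image (the "selective multiple classifiers").
    return [max(cosine(f, p) for p in prototypes) for f in features]

# Toy "image" of three pixel features and two hypothetical prototypes
# whose magnitudes deliberately differ (2.0 vs. 3.0).
features = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
prototypes = [[2.0, 0.0], [0.0, 3.0]]
cam = cam_from_prototypes(features, prototypes)
```

Note that the differing prototype magnitudes have no effect on the activations, which is the point of angle-only measurement.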
Funding: Funded by the National Natural Science Foundation of China (U1904119), the Research Programs of the Henan Science and Technology Department (232102210054), the Chongqing Natural Science Foundation (CSTB2023NSCQ-MSX0070), the Henan Province Key Research and Development Project (231111212000), and the Aviation Science Foundation (20230001055002); supported by the Henan Center for Outstanding Overseas Scientists (GZS2022011).
Abstract: The primary challenge in weakly supervised semantic segmentation is effectively leveraging weak annotations while minimizing the performance gap relative to fully supervised methods. End-to-end model designs have gained significant attention for improving training efficiency. Most current algorithms rely on Convolutional Neural Networks (CNNs) for feature extraction. Although CNNs are proficient at capturing local features, they often struggle with global context, leading to incomplete and false Class Activation Mapping (CAM). To address these limitations, this work proposes a Contextual Prototype-Based End-to-End Weakly Supervised Semantic Segmentation (CPEWS) model, which improves feature extraction by utilizing the Vision Transformer (ViT). By incorporating its intermediate feature layers to preserve semantic information, this work introduces the Intermediate Supervised Module (ISM) to supervise the final layer's output, reducing boundary ambiguity and mitigating incomplete activation. Additionally, the Contextual Prototype Module (CPM) generates class-specific prototypes, while the proposed Prototype Discrimination Loss (LPDL) and Superclass Suppression Loss (LSSL) guide the network's training, effectively addressing false activation without the need for extra supervision. The proposed CPEWS model achieves state-of-the-art performance in end-to-end weakly supervised semantic segmentation without additional supervision, reaching a Mean Intersection over Union (MIoU) of 69.8% on the validation set and 72.6% on the test set of the PASCAL VOC 2012 dataset. Compared with ToCo (pretrained on ImageNet-1k), MIoU on the test set is 2.1% higher. In addition, MIoU reaches 41.4% on the validation set of the MS COCO 2014 dataset.
Funding: Supported by the National Natural Science Foundation of China (42001408, 61806097).
Abstract: Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research endeavors utilize weakly supervised methods. These approaches aim to reduce annotation expenses by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network is a three-branch architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One branch is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the historical flaw of pseudo-labels that are not updated as the network trains, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution operates efficiently by simultaneously considering both edge-mask aid and dynamic pseudo-label support. Studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
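The dynamic pseudo-label update described above, blending current predictions with previous pseudo-labels instead of freezing them, can be sketched as a simple convex combination. The blending weight `lam` and the toy probability values are illustrative assumptions, not values from the paper.

```python
def mixup_update(pseudo, prediction, lam=0.7):
    # Convex blend: new pseudo-label = lam * prediction + (1 - lam) * old.
    # Repeating this every training round lets the pseudo-labels track the
    # improving network instead of staying frozen at their initial values.
    return [lam * p + (1.0 - lam) * q for p, q in zip(prediction, pseudo)]

# Per-pixel road probabilities: stale pseudo-labels vs. fresh predictions.
stale = [0.0, 1.0, 0.5]
fresh = [1.0, 1.0, 0.0]
updated = mixup_update(stale, fresh)
```

Each updated value moves toward the current prediction while retaining a fraction of the previous pseudo-label, which damps oscillation between rounds.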
Funding: Funded by the Open Foundation of the Anhui Engineering Research Center of Intelligent Perception and Elderly Care, Chuzhou University (No. 2022OPA03), the Higher Education Natural Science Foundation of Anhui Province (No. KJ2021B01), and the Innovation Team Projects of Universities in Guangdong (No. 2022KCXTD057).
Abstract: The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making it challenging to diagnose the disease accurately. This study proposes a deep-learning disease diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fuses classification with segmentation. First, the data are preprocessed: an optimizable weakly supervised segmentation preprocessing method (O-WSSPM) is used to remove redundant data and address the class imbalance problem. Second, a deep-learning fusion method is used for feature extraction and classification: a dual asymmetric complementary bilinear feature extraction method (D-CBM) fully extracts complementary features, solving the problem of insufficient feature extraction by a single deep learning network. Third, an unsupervised method based on Fuzzy C-Means (FCM) clustering segments and visualizes COVID-19 lesions, enabling physicians to assess lesion distribution and disease severity accurately. Under 5-fold cross-validation, the network achieved an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively provide physicians with automated diagnostic aid to determine whether the disease is present and, for COVID-19 patients, to further predict the lesion area.
Abstract: Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors because scribble annotations provide only very limited foreground/background information. An intuitive idea, therefore, is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first performed on both the colours and the coordinates of the original annotations; the same labels are then assigned to points whose colours are similar to a colour cluster centre and which lie near a coordinate cluster centre. Next, pixels with similar colours within each kernel neighbourhood are further given the same annotations. Extensive experiments on six benchmarks demonstrate that our method significantly improves performance and achieves state-of-the-art results.
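The inference strategy above, clustering annotated pixels in colour/coordinate space and propagating labels to nearby similar pixels, can be sketched with a minimal k-means. In the actual method clustering is presumably performed per annotation class on real scribbles; the 1-D colour values, the positions, and the distance threshold `tau` below are made-up illustrations.

```python
def dist2(p, q):
    # Squared Euclidean distance in (colour, position) space.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    n = len(pts)
    return [sum(c) / n for c in zip(*pts)]

def kmeans(points, k, iters=20):
    # Minimal k-means; naive init from the first k points for determinism.
    centres = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centres[c]))
            groups[j].append(p)
        centres = [mean(g) if g else centres[j] for j, g in enumerate(groups)]
    return centres

def propagate(pixel, centres, labels, tau):
    # Give the pixel the label of its nearest cluster centre if it is
    # close enough in (colour, position) space; otherwise leave it unlabelled.
    j = min(range(len(centres)), key=lambda c: dist2(pixel, centres[c]))
    return labels[j] if dist2(pixel, centres[j]) < tau else None

# Scribble pixels as (colour, position) pairs: bright pixels near x=0,
# dark pixels near x=10.
scribbles = [(1.0, 0.0), (0.9, 1.0), (0.0, 10.0), (0.1, 9.0)]
labels = ["foreground", "background"]  # one label per cluster, for illustration
centres = kmeans(scribbles, k=2)
inferred = propagate((0.8, 0.5), centres, labels, tau=4.0)
```

An unlabelled pixel that is both similar in colour and close in position to a cluster centre inherits that cluster's label, expanding the sparse scribbles into denser training annotations.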
Funding: Supported by the National Natural Science Foundation of China (62276058, 61902057, 41774063), the Fundamental Research Funds for the Central Universities (N2217003), and the Joint Fund of the Science & Technology Department of Liaoning Province and the State Key Laboratory of Robotics, China (2020-KF-12-11).
Abstract: A large variety of complaint reports reflect subjective information expressed by citizens. A key challenge of text summarization for complaint reports is ensuring the factual consistency of the generated summary. Therefore, this paper proposes a simple, weakly supervised framework that accounts for factual consistency when generating summaries of city-based complaint reports, without pre-labeled sentences or words. The framework also considers the importance of entities in complaint reports to ensure the factual consistency of the summary. Experimental results on customer review datasets (Yelp and Amazon) and a complaint report dataset (complaint reports of Shenyang, China) show that the proposed framework outperforms state-of-the-art approaches in ROUGE scores and human evaluation, demonstrating its effectiveness in handling complaint reports.
Funding: Supported in part by the National Natural Science Foundation of China (62171347, 62101405, 62371373, 6227137).
Abstract: In weakly supervised video anomaly detection (WSVAD) tasks, the temporal relationships within a video are crucial for modeling event patterns. The Transformer is a commonly used architecture for modeling temporal relationships; however, due to the large amount of redundancy in videos and the quadratic complexity of the Transformer, it cannot effectively model long-range information. In addition, most WSVAD methods select key snippets based on predicted scores to represent event patterns, a paradigm that is susceptible to noise interference. To address these issues, a novel temporal context and representative feature learning (TCRFL) method for WSVAD is proposed. Specifically, a temporal context learning (TCL) module utilizes both Mamba, with its linear complexity, and the Transformer to capture short-range and long-range dependencies of events. In addition, a representative feature learning (RFL) module mines representative snippets to capture important information about events and spreads it to the video features to enhance the influence of representative features. The RFL module not only suppresses noise interference but also guides the model to select key snippets more accurately. Experimental results on the UCF-Crime, XD-Violence, and ShanghaiTech datasets demonstrate the effectiveness and superiority of our method.
Funding: Supported in part by the National Key R&D Program of China (2017YFB0502904) and the National Science Foundation of China (61876140).
Abstract: Recently, video object segmentation has received great attention in the computer vision community. Most existing methods rely heavily on pixel-wise human annotations, which are expensive and time-consuming to obtain. To tackle this problem, we make an early attempt to achieve video object segmentation with scribble-level supervision, which can spare large amounts of human labor for collecting manual annotations. However, conventional network architectures and learning objective functions do not work well in this scenario because the supervision information is highly sparse and incomplete. To address this issue, this paper introduces two novel elements for learning the video object segmentation model. The first is a scribble attention module, which captures more accurate context information and learns an effective attention map to enhance the contrast between foreground and background. The second is a scribble-supervised loss, which optimizes the unlabeled pixels and dynamically corrects inaccurately segmented areas during training. To evaluate the proposed method, we conduct experiments on two video object segmentation benchmark datasets, YouTube-VOS and densely annotated video segmentation (DAVIS)-2017. We first generate scribble annotations from the original per-pixel annotations; we then train our model and compare its test performance with baseline models and other existing works. Extensive experiments demonstrate that the proposed method works effectively and approaches the performance of methods requiring dense per-pixel annotations.
Funding: Supported by the National Key Research and Development Programme of China (2018YFC0831201).
Abstract: Background: In computer vision, simultaneously estimating human pose, shape, and clothing is a practical problem in real life, but it remains a challenging task owing to the variety of clothing, the complexity of deformation, the shortage of large-scale datasets, and the difficulty of estimating clothing style. Methods: We propose a multistage weakly supervised method that makes full use of data with less labeled information to learn to estimate human body shape, pose, and clothing deformation. In the first stage, the SMPL human-body model parameters are regressed using multi-view 2D key points of the human body. Using multi-view information as weak supervision avoids the depth ambiguity of a single view, yields a more accurate human posture, and makes supervisory information easy to obtain. In the second stage, clothing is represented by a PCA-based model that uses two-dimensional key points of clothing as supervision to regress the parameters. In the third stage, we predefine an embedding graph for each type of clothing to describe its deformation; the mask information of the clothing is then used to further adjust the deformation. To facilitate training, we constructed a multi-view synthetic dataset that includes BCNet and SURREAL. Results: Experiments show that the accuracy of our method reaches the same level as that of SOTA methods using strong supervision, while using only weak supervision. Because this study uses only weakly supervised information, which is much easier to obtain, it has the advantage of utilizing existing data as training data. Experiments on the DeepFashion2 dataset show that our method can make full use of existing weak supervision for fine-tuning on a dataset with little supervision information, whereas strongly supervised methods cannot be trained or adjusted owing to the lack of exact annotation information. Conclusions: Our weakly supervised method can accurately estimate human body shape, pose, and several common types of clothing, and it helps overcome the current shortage of clothing data.
Funding: Supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62402490 and 62072334.
Abstract: We study the novel problem of weakly supervised instance action recognition (WSiAR) in multi-person (crowd) scenes. We specifically aim to recognize the action of each subject in the crowd, for which we propose a weakly supervised method, considering the expense of large-scale annotations for training. This problem is of great practical value for video surveillance and sports scene analysis. To this end, we investigated and designed a series of weak annotation schemes for WSiAR supervision. We propose two categories of weak label settings, bag labels and sparse labels, to significantly reduce the number of labels. For the former, we propose a novel sub-block-aware multi-instance learning (MIL) loss to obtain more effective information from weak labels during training. For the latter, we propose a pseudo-label generation strategy for extending sparse labels. This enables our method to achieve results comparable to those of fully supervised methods but with significantly fewer annotations. Experimental results on two benchmarks verify the rationality of the problem definition and the effectiveness of the proposed weakly supervised training method.
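The sub-block-aware MIL loss above is not specified in the abstract; the standard max-pooling MIL bag loss it presumably builds on can be sketched as follows. The instance scores and the interpretation of a bag as a crowd sub-block are illustrative assumptions.

```python
import math

def mil_bag_loss(instance_scores, bag_label):
    # Max-pooling MIL: a bag (e.g. a sub-block of the crowd) is positive if
    # at least one instance performs the action, so the bag probability is
    # the max over instance probabilities; the loss is binary cross-entropy
    # against the weak bag label.
    p = min(max(max(instance_scores), 1e-7), 1.0 - 1e-7)  # numerical safety
    return -math.log(p) if bag_label == 1 else -math.log(1.0 - p)

# One confident instance satisfies a positive bag label...
loss_pos = mil_bag_loss([0.05, 0.9, 0.1], 1)
# ...but the same scores are heavily penalised under a negative bag label.
loss_neg = mil_bag_loss([0.05, 0.9, 0.1], 0)
```

This shows why bag labels are so cheap: only one binary label per sub-block is needed, yet the gradient still reaches every instance through the max.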
Funding: Supported by the National Natural Science Foundation of China (Nos. 52008311, 51878499, and 52178433), the Science and Technology Commission of Shanghai Municipality (No. 21ZR1465700), and the Fundamental Research Funds for the Central Universities (No. 22120230196).
Abstract: Accurate and timely surveying of airfield pavement distress is crucial for cost-effective airport maintenance. Deep learning (DL) approaches, leveraging advancements in computer science and image acquisition techniques, have become the mainstream for automated airfield pavement distress detection. However, fully-supervised DL methods require a large number of manually annotated ground truth labels to achieve high accuracy. To address the challenge of limited high-quality manual annotations, we propose a novel end-to-end distress detection model called class activation map informed weakly-supervised distress detection (WSDD-CAM). Based on YOLOv5, WSDD-CAM consists of an efficient backbone, a classification branch, and a localization network. By utilizing class activation map (CAM) information, our model significantly reduces the need for manual annotations, automatically generating pseudo bounding boxes with a 71% overlap with the ground truth. To evaluate WSDD-CAM, we tested it on a self-made dataset and compared it with other weakly-supervised and fully-supervised models. The results show that our model achieves 49.2% mean average precision (mAP), outperforming other weakly-supervised methods and even approaching state-of-the-art fully-supervised methods. Additionally, ablation experiments confirm the effectiveness of our architecture design. In conclusion, our WSDD-CAM model offers a promising solution for airfield pavement distress detection, reducing manual annotation time while maintaining high accuracy. This efficient and effective approach can contribute significantly to cost-effective airport maintenance management.
Funding: Supported in part by the National Cancer Institute under award numbers R01CA268287A1, U01CA269181, R01CA26820701A1, R01CA249992-01A1, R01CA202752-01A1, R01CA208236-01A1, R01CA216579-01A1, R01CA220581-01A1, R01CA257612-01A1, 1U01CA239055-01, 1U01CA248226-01, and 1U54CA254566-01; the National Heart, Lung and Blood Institute (1R01HL15127701A1, R01HL15807101A1); the National Institute of Biomedical Imaging and Bioengineering (1R43EB028736-01); VA Merit Review Award IBX004121A from the United States Department of Veterans Affairs Biomedical Laboratory Research and Development Service; the Office of the Assistant Secretary of Defense for Health Affairs, through the Breast Cancer Research Program (W81XWH-19-1-0668), the Prostate Cancer Research Program (W81XWH-20-1-0851), the Lung Cancer Research Program (W81XWH-18-1-0440, W81XWH-20-1-0595), and the Peer Reviewed Cancer Research Program (W81XWH-18-1-0404, W81XWH-21-1-0345, W81XWH-211-0160); the Kidney Precision Medicine Project (KPMP) Glue Grant; and sponsored research agreements from Bristol Myers-Squibb, Boehringer-Ingelheim, Eli-Lilly, and AstraZeneca. Also supported in part by the National Natural Science Foundation of China general program (No. 61571314), the Sichuan University-Yibin City Strategic Cooperation Special Fund (No. 2020CDYB-27), and the Support Program of the Sichuan Science and Technology Department (No. 2023YFS0327-LH).
Abstract: Accurate prognosis prediction is essential for guiding cancer treatment and improving patient outcomes. While recent studies have demonstrated the potential of histopathological images in survival analysis, existing models are typically developed in a cancer-specific manner, lack extensive external validation, and often rely on molecular data that are not routinely available in clinical practice. To address these limitations, we present PROGPATH, a unified model capable of integrating histopathological image features with routinely collected clinical variables to achieve pan-cancer prognosis prediction. PROGPATH employs a weakly supervised deep learning architecture built upon a foundation model for image encoding. Morphological features are aggregated through an attention-guided multiple instance learning module and fused with clinical information via a cross-attention transformer. A router-based classification strategy further refines prediction performance. PROGPATH was trained on 7999 whole-slide images (WSIs) from 6670 patients across 15 cancer types and extensively validated on 17 external cohorts with a total of 7374 WSIs from 4441 patients, covering 12 cancer types from 8 consortia and institutions across three continents. PROGPATH achieved consistently superior performance compared with state-of-the-art multimodal prognosis prediction models. It demonstrated strong generalizability across cancer types and robustness in stratified subgroups, including early- and advanced-stage patients, treatment cohorts (radiotherapy and pharmaceutical therapy), and biomarker-defined subsets. We further provide model interpretability by identifying pathological patterns critical to PROGPATH's risk predictions, such as the degree of cell differentiation and the extent of necrosis. Together, these results highlight the potential of PROGPATH to support pan-cancer outcome prediction and inform personalized cancer management strategies.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61871378, U2003111, 62122013, and U2001211).
Abstract: Action recognition and localization in untrimmed videos are important for many applications and have attracted a lot of attention. Since full supervision with frame-level annotation places an overwhelming burden on manual labeling effort, learning with weak video-level supervision becomes a potential solution. In this paper, we propose a novel weakly supervised framework to recognize actions and locate the corresponding frames in untrimmed videos simultaneously. Considering that abundant trimmed videos are publicly available and well segmented with semantic descriptions, the instructive knowledge learned on trimmed videos can be fully leveraged to analyze untrimmed videos. We present an effective knowledge transfer strategy based on inter-class semantic relevance. We also take advantage of the self-attention mechanism to obtain a compact video representation, such that the influence of background frames can be effectively eliminated. A learning architecture is designed with twin networks for trimmed and untrimmed videos, to facilitate transferable self-attentive representation learning. Extensive experiments are conducted on three untrimmed benchmark datasets (i.e., THUMOS14, ActivityNet1.3, and MEXaction2), and the experimental results clearly corroborate the efficacy of our method. It is especially encouraging to see that the proposed weakly supervised method even achieves comparable results to some fully supervised methods.
Funding: Supported by the National Natural Science Foundation of China (61672433), the Fundamental Research Fund for the Shenzhen Science and Technology Innovation Committee (201703063000511, 201703063000517), the National Cryptography Development Fund (MMJJ20170210), and the Science and Technology Project of the State Grid Corporation of China (522722180007).
Abstract: This paper presents a novel algorithm for an extreme form of weak label learning, in which only one of all relevant labels is given for each training sample. Using a genetic algorithm, all of the labels in the training set are optimally divided into several non-overlapping groups so as to maximize the label distinguishability within every group. Multiple classifiers are trained separately and ensembled for label prediction. Experimental results show significant improvement over previous weak label learning algorithms.
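The ensemble step described above, one classifier per non-overlapping label group, with predictions combined across groups, can be sketched as follows. The genetic-algorithm grouping itself is omitted (groups are assumed given), and the idea that each group classifier returns its best (label, confidence) pair, with the ensemble taking the overall most confident label, is an illustrative assumption.

```python
def ensemble_predict(sample, group_classifiers):
    # Each group-specific classifier scores only the labels in its own
    # (non-overlapping) group; the ensemble returns the label with the
    # highest confidence across all groups.
    best_label, best_score = None, float("-inf")
    for clf in group_classifiers:
        label, score = clf(sample)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Two hypothetical group classifiers over disjoint label groups, each
# returning its group's best (label, confidence) pair for the sample.
clf_a = lambda x: ("cat", x[0])
clf_b = lambda x: ("car", x[1])
```

With disjoint groups, each classifier faces an easier, more distinguishable sub-problem, which is what the genetic-algorithm grouping optimizes for.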
Funding: Supported by the Guangdong Natural Science Foundation (2021B1515020085), the Shenzhen Science and Technology Program (RCYX20210609103121030), the National Natural Science Foundation of China (62322207, 61872250, U2001206, U21B2023), the Department of Education of Guangdong Province Innovation Team (2022KCXTD025), the Shenzhen Science and Technology Innovation Program (JCYJ20210324120213036), the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen).
Abstract: Since the preparation of labeled data for training semantic segmentation networks of point clouds is a time-consuming process, weakly supervised approaches have been introduced to learn from only a small fraction of data. These methods are typically based on learning with contrastive losses while automatically deriving per-point pseudo-labels from a sparse set of user-annotated labels. In this paper, our key observation is that the selection of which samples to annotate is as important as how these samples are used for training. Thus, we introduce a method for weakly supervised segmentation of 3D scenes that combines self-training with active learning. Active learning selects points for annotation that are likely to result in improvements to the trained model, while self-training makes efficient use of the user-provided labels for learning the model. We demonstrate that our approach leads to an effective method that provides improvements in scene segmentation over previous work and baselines, while requiring only a few user annotations.
Funding: Supported by the National Natural Science Foundation of China (No. 61672268).
Abstract: Temporal action localization (TAL) is the task of detecting the start and end timestamps of action instances and classifying them in an untrimmed video. As the number of action categories per video increases, existing weakly-supervised TAL (W-TAL) methods with only video-level labels cannot provide sufficient supervision, so single-frame supervision has attracted the interest of researchers. Existing paradigms model single-frame annotations from the perspective of video snippet sequences, neglect the action discrimination of annotated frames, and do not pay sufficient attention to their correlations within the same category. Within a given category, the annotated frames exhibit distinctive appearance characteristics or clear action patterns. Thus, a novel method to enhance action discrimination via category-specific frame clustering for W-TAL is proposed. Specifically, the K-means clustering algorithm is employed to aggregate the annotated discriminative frames of the same category, which are regarded as exemplars exhibiting the characteristics of the action category. Class activation scores are then obtained by calculating the similarities between a frame and the exemplars of the various categories. Category-specific representation modeling provides complementary guidance to the snippet sequence modeling in the mainline. Finally, a convex combination fusion mechanism for annotated frames and snippet sequences enhances the consistency of action discrimination, generating a robust class activation sequence for precise action classification and localization. Owing to this supplementary guidance for video snippet sequences, our method outperforms existing single-frame annotation based methods. Experiments conducted on three datasets (THUMOS14, GTEA, and BEOID) show that our method achieves high localization performance compared with state-of-the-art methods.
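The exemplar-similarity scoring described above, where the class activation score of a frame is its similarity to the clustered exemplars of each category, can be sketched as follows. The 2-D features, the exemplar values, and the use of cosine similarity with max-pooling over exemplars are illustrative assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def class_scores(frame_feat, exemplars_by_class):
    # Activation score of a frame for each category = max similarity to
    # that category's clustered exemplar frames.
    return {c: max(cosine(frame_feat, e) for e in ex)
            for c, ex in exemplars_by_class.items()}

# Hypothetical exemplars, e.g. K-means centres of the annotated
# discriminative frames of each category.
exemplars = {"pour": [[1.0, 0.0], [0.9, 0.2]], "stir": [[0.0, 1.0]]}
scores = class_scores([1.0, 0.1], exemplars)
```

Evaluating these scores for every frame yields a category-specific activation sequence that can be fused, for example by convex combination, with the snippet-sequence branch.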
Funding: Supported partially by the National Natural Science Foundation of China (NSFC) (Grant Nos. U1911401 and U1811461), the Guangdong NSF Project (2020B1515120085, 2018B030312002), the Guangzhou Research Project (201902010037), Research Projects of Zhejiang Lab (2019KD0AB03), and the Key-Area Research and Development Program of Guangzhou (202007030004).
Abstract: Anticipating future actions without observing any partial videos of those actions plays an important role in action prediction and is also a challenging task. To obtain abundant information for action anticipation, some methods integrate multimodal contexts, including scene object labels; however, extensively labelling each frame in video datasets requires considerable effort. In this paper, we develop a weakly supervised method that integrates global motion and local fine-grained features from current action videos to predict the next action label without the need for specific scene context labels. Specifically, we extract diverse types of local features with weakly supervised learning, including object appearance and human pose representations, without ground truth. Moreover, we construct a graph convolutional network to exploit the inherent relationships of humans and objects in the current scene. We evaluate the proposed model on two datasets, the MPII-Cooking dataset and the EPIC-Kitchens dataset, and demonstrate the generalizability and effectiveness of our approach for action anticipation.
Funding: Supported by the National Key Research and Development Program of China (2018AAA0100104 and 2018AAA0100100), the National Natural Science Foundation of China (Grant No. 61702095), the Natural Science Foundation of Jiangsu Province (BK20211164, BK20190341, and BK20210002), and the Big Data Computing Center of Southeast University.
Abstract: Temporal localization is crucial for action video recognition. Since manual annotation is expensive and time-consuming for videos, temporal localization with weak video-level labels is challenging but indispensable. In this paper, we propose a weakly-supervised temporal action localization approach for untrimmed videos. To address this issue, we train the model based on proxies of each action class. The proxies are used to measure the distances between action segments and the different original action features. We use a proxy-based metric to cluster the same actions together and to separate actions from backgrounds. Compared with state-of-the-art methods, our method achieves competitive results on the THUMOS14 and ActivityNet1.2 datasets.
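The proxy-based assignment described above, measuring the distance of each action segment to per-class proxies so that same-class segments cluster together and background snippets are pulled away, can be sketched as follows. The 2-D proxy vectors, the class names, and the explicit background proxy are illustrative assumptions.

```python
import math

def dist(u, v):
    # Euclidean distance between a segment feature and a proxy.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def assign(segment, proxies):
    # A segment is pulled toward its nearest proxy; keeping a dedicated
    # background proxy separates background snippets from action clusters.
    return min(proxies, key=lambda name: dist(segment, proxies[name]))

# Hypothetical 2-D proxies for two action classes plus background.
proxies = {"jump": [1.0, 0.0], "run": [0.0, 1.0], "background": [0.0, 0.0]}
```

During training, the same distances would drive a metric-learning loss that pulls segments toward their class proxy and pushes them from the others; here only the assignment step is shown.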
Funding: This work was partially supported by the National Natural Science Foundation of China (Nos. 61725204 and 62002258) and a grant from the Science and Technology Department of Jiangsu Province, China.
Abstract: Background: Image-based automatic diagnosis of field diseases can help increase crop yields and is of great importance. However, crop lesion regions tend to be scattered and of varying sizes; this, along with substantial intra-class variation and small inter-class variation, makes segmentation difficult. Methods: We propose a novel end-to-end system that requires only the weak supervision of image-level labels for lesion region segmentation. First, a two-branch network is designed for joint disease classification and seed region generation. The generated seed regions are then used as input to the next segmentation stage, for which we design an encoder-decoder network. Unlike previous works that use only an encoder in the segmentation network, the encoder-decoder network is critical for our system to successfully segment images with small and scattered regions, which is the major challenge in image-based diagnosis of field diseases. We further propose a novel weakly supervised training strategy for the encoder-decoder semantic segmentation network, making use of the extracted seed regions. Results: Experimental results show that our system achieves better lesion region segmentation results than the state of the art. In addition to crop images, our method is also applicable to general scattered object segmentation. We demonstrate this by extending our framework to the PASCAL VOC dataset, on which it achieves performance comparable to the state-of-the-art DSRG (deep seeded region growing) method. Conclusion: Our method not only outperforms state-of-the-art semantic segmentation methods by a large margin on the lesion segmentation task, but also shows its capability to perform well on more general tasks.