Medical image segmentation is a crucial preliminary step for a number of downstream diagnosis tasks. As deep convolutional neural networks successfully promote the development of computer vision, it is possible to make medical image segmentation a semi-automatic procedure by applying deep convolutional neural networks to finding the contours of regions of interest, which are then revised by radiologists. However, supervised learning necessitates large amounts of annotated data, which are difficult to acquire, especially for medical images. Self-supervised learning is able to take advantage of unlabeled data and provide good initialization to be fine-tuned for downstream tasks with limited annotations. Considering that most self-supervised learning methods, especially contrastive learning methods, are tailored to natural image classification and entail expensive GPU resources, we propose a novel and simple pretext-based self-supervised learning method that exploits the value of positional information in volumetric medical images. Specifically, we regard spatial coordinates as pseudo labels and pretrain the model by predicting the positions of randomly sampled 2D slices in volumetric medical images. Experiments on four semantic segmentation datasets demonstrate the superiority of our method over other self-supervised learning methods in both semi-supervised learning and transfer learning settings. Code is available at https://github.com/alienzyj/PPos.
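The described pretext task, which treats spatial coordinates as pseudo labels, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the choice of sampling along the depth axis, and the normalization of the coordinate to [0, 1] are our assumptions for exposition.

```python
import numpy as np

def sample_slice_with_position(volume, rng):
    """Sample a random 2D slice from a 3D volume (D, H, W) and return it
    together with its normalized depth coordinate as a pseudo label."""
    depth = volume.shape[0]
    idx = int(rng.integers(depth))
    slice_2d = volume[idx]
    # The normalized position in [0, 1] serves as the regression target
    # for pretraining; no manual annotation is required.
    position = idx / (depth - 1)
    return slice_2d, position

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 64, 64))  # toy stand-in for a CT/MRI volume
slice_2d, pos = sample_slice_with_position(volume, rng)
```

A model pretrained on such (slice, position) pairs with a regression loss could then be fine-tuned on a downstream segmentation dataset with limited annotations, which is the setting the abstract describes.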
Funding: the Major Research Plan of the National Natural Science Foundation of China (No. 92059206).