Funding: Supported by the Natural Science Foundation of Fujian Province, China, No. 2021J01545.
Abstract: BACKGROUND: Secure transluminal closure remains a fundamental barrier to endoscopic surgery. Through-the-scope clips have reportedly been used to secure the gallbladder incision during natural orifice transluminal endoscopic cholecystolithotomy and were left in the body after the operation. The over-the-scope clip (OTSC) is favored for its rapid deployment and strong anchoring, but OTSCs are difficult to remove once implanted. Senscure Biotechnology (China) has developed a detachable over-the-scope clip (D-OTSC) for this purpose. Here, we used a D-OTSC to successfully close a full-thickness sigmoid defect exceeding 1 cm in diameter; the clip was then completely removed postoperatively, with favorable clinical outcomes. CASE SUMMARY: We present the case of a 51-year-old female patient who underwent natural orifice transluminal endoscopic cholecystolithotomy. The sigmoid incision was closed with a D-OTSC. Postoperative recovery was uneventful, with no abdominal infection or bleeding. The D-OTSC was removed via enteroscopy in the outpatient department one month later. CONCLUSION: The D-OTSC presents a viable option for closing colonic mucosal incisions of 1 cm to 2 cm.
Abstract: Global scientific and technological innovation is currently developing at high speed and with a high degree of integration. Accurately identifying disruptive technology topics to drive comprehensive innovation has become a key engine of scientific and technological development and economic growth. However, traditional methods for identifying disruptive technology topics rely mainly on single-modality data and therefore have inherent limitations. This paper builds fused feature vectors from news text and images based on the CLIP (contrastive language-image pre-training) and LDAGV (linear discriminant analysis & global vectors for word representation) models, then screens candidate topics through iterative k-means clustering combined with three disruptive-technology topic indicators, achieving multimodal information fusion and precise topic identification. Taking the new energy field as an example, we verify the feasibility and effectiveness of the model for identifying disruptive technology topics. Compared with single-modality models, the multimodal information fusion model shows a clear advantage in identifying disruptive technology topics.
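A minimal Python sketch of the pipeline described above, under loudly labeled assumptions: CLIP embeddings come from the Hugging Face transformers CLIP implementation, the LDAGV text component is approximated here with a latent Dirichlet allocation topic model plus averaged GloVe vectors (one reading of "LDA"; the paper's exact text model may differ), and clustering uses scikit-learn k-means. Model names, dimensions, and the fusion-by-concatenation rule are illustrative, not the authors' implementation; the three topic indicators are applied as a downstream screening step not shown here.

# Sketch of multimodal fusion + k-means topic clustering (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from transformers import CLIPModel, CLIPProcessor

def clip_features(texts, images):
    """Encode paired news text and images with a public CLIP checkpoint."""
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = proc(text=texts, images=images, return_tensors="pt",
                  padding=True, truncation=True)
    out = model(**inputs)
    # Concatenate per-document text and image embeddings.
    return np.hstack([out.text_embeds.detach().numpy(),
                      out.image_embeds.detach().numpy()])

def lda_glove_features(texts, glove, n_topics=20):
    """LDA topic proportions plus averaged GloVe word vectors per document."""
    counts = CountVectorizer(max_features=5000).fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    theta = lda.fit_transform(counts)  # document-topic matrix
    glove_doc = np.stack([
        np.mean([glove.get(w, np.zeros(300)) for w in doc.split()], axis=0)
        for doc in texts
    ])  # assumes `glove` maps word -> 300-d vector
    return np.hstack([theta, glove_doc])

def fuse_and_cluster(texts, images, glove, k=10):
    """Fuse multimodal vectors, then cluster documents into candidate topics."""
    fused = np.hstack([clip_features(texts, images),
                       lda_glove_features(texts, glove)])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(fused)
    # Clusters would then be screened with the three disruptive-topic indicators.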
Funding: Funded by the National Natural Science Foundation of China (Grant No. 6240072655), the Hubei Provincial Key Research and Development Program (Grant No. 2023BCB151), the Wuhan Natural Science Foundation Exploration Program (Chenguang Program, Grant No. 2024040801020202), and the Natural Science Foundation of Hubei Province of China (Grant No. 2025AFB148).
Abstract: Image segmentation is attracting increasing attention in the field of medical image analysis. Given its widespread utilization across medical applications, ensuring and improving segmentation accuracy has become a crucial research topic. With advances in deep learning, researchers have developed numerous methods that combine Transformers and convolutional neural networks (CNNs) to build highly accurate models for medical image segmentation. However, efforts to further enhance accuracy by developing larger, more complex models or training on more extensive datasets significantly increase computational resource consumption. To address this problem, we propose BiCLIP-nnFormer (the prefix "Bi" refers to the use of two distinct CLIP models), a virtual multimodal instrument that leverages CLIP models to enhance the segmentation performance of the medical segmentation model nnFormer. Because the two CLIP models (PMC-CLIP and CoCa-CLIP) are pre-trained on large datasets, they require no additional training, thus conserving computational resources. These models are used offline to extract image and text embeddings from medical images. The embeddings are then processed by the proposed 3D CLIP adapter, which adapts CLIP knowledge to segmentation tasks through fine-tuning. Finally, the adapted embeddings are fused with feature maps extracted from the nnFormer encoder to generate predicted masks. This process enriches the representational capabilities of the feature maps by integrating global multimodal information, leading to more precise segmentation predictions. We demonstrate the superiority of BiCLIP-nnFormer, and the effectiveness of using CLIP models to enhance nnFormer, through experiments on two public datasets, the Synapse multi-organ segmentation dataset (Synapse) and the Automatic Cardiac Diagnosis Challenge dataset (ACDC), as well as a self-annotated lung multi-category segmentation dataset (LMCS).
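To make the adapter-and-fusion step concrete, here is a minimal PyTorch sketch: frozen CLIP embeddings (extracted offline) are projected by a small trainable adapter and broadcast-fused with the segmentation encoder's 3D feature maps before prediction. Layer sizes, tensor shapes, and the concatenation-based fusion rule are illustrative assumptions, not the actual BiCLIP-nnFormer architecture.

# Sketch of a "3D CLIP adapter" + feature-map fusion (illustrative only).
import torch
import torch.nn as nn

class CLIP3DAdapter(nn.Module):
    """Project concatenated CLIP image/text embeddings into feature space."""
    def __init__(self, clip_dim: int, feat_channels: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(clip_dim, feat_channels),
            nn.GELU(),
            nn.Linear(feat_channels, feat_channels),
        )

    def forward(self, clip_embed: torch.Tensor) -> torch.Tensor:
        # clip_embed: (B, clip_dim) -> (B, C, 1, 1, 1) for 3D broadcasting.
        return self.proj(clip_embed)[:, :, None, None, None]

class FusedSegHead(nn.Module):
    """Fuse adapted CLIP context into 3D encoder features, then predict masks."""
    def __init__(self, clip_dim: int, feat_channels: int, n_classes: int):
        super().__init__()
        self.adapter = CLIP3DAdapter(clip_dim, feat_channels)
        self.fuse = nn.Conv3d(feat_channels * 2, feat_channels, kernel_size=1)
        self.head = nn.Conv3d(feat_channels, n_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor, clip_embed: torch.Tensor):
        # feats: (B, C, D, H, W) from the encoder; clip_embed: (B, clip_dim).
        ctx = self.adapter(clip_embed).expand_as(feats)
        fused = self.fuse(torch.cat([feats, ctx], dim=1))
        return self.head(fused)

# Usage with dummy tensors: embeddings from the two CLIP models (e.g.
# PMC-CLIP and CoCa-CLIP) concatenated into one 1024-d vector per volume.
feats = torch.randn(2, 64, 8, 16, 16)
clip_embed = torch.randn(2, 1024)
logits = FusedSegHead(clip_dim=1024, feat_channels=64, n_classes=9)(feats, clip_embed)
print(logits.shape)  # torch.Size([2, 9, 8, 16, 16])

Only the adapter and fusion layers are trained here; keeping the CLIP encoders frozen is what conserves computational resources in the scheme described above.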