Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination, hindering accurate three-dimensional lesion reconstruction by surgical robots. This study proposes a novel end-to-end disparity estimation model to address these challenges. Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions, integrating multi-scale image information to enhance robustness against lighting interference. The proposed Pseudo-Siamese structure-based disparity regression model simplifies left-right image comparison, improving accuracy and efficiency. The model was evaluated on a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot, comprising simulated silicone heart sequences and real heart video data. Experimental results demonstrate a significant improvement in the network's resistance to lighting interference without a substantial increase in parameters. Moreover, the model exhibits faster convergence during training, contributing to overall performance. This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
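As a rough illustration of the pyramid dilated convolution component, the sketch below builds a PyTorch block that runs parallel 3×3 convolutions at several dilation rates and fuses them; the channel sizes and dilation rates are hypothetical, since the abstract does not specify the exact configuration.

```python
# Minimal sketch of a pyramid dilated convolution block (hypothetical sizes;
# the paper's exact configuration is not given in the abstract).
import torch
import torch.nn as nn

class PyramidDilatedConv(nn.Module):
    """Applies parallel convolutions with increasing dilation rates and fuses them,
    so the block sees the same feature map at several receptive-field scales."""
    def __init__(self, in_ch=64, out_ch=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch keeps the spatial size (padding == dilation for 3x3 kernels).
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

left_feat = torch.randn(1, 64, 48, 64)        # features from one branch of a Pseudo-Siamese encoder
print(PyramidDilatedConv()(left_feat).shape)  # torch.Size([1, 64, 48, 64])
```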
Seismic data denoising is a critical process usually applied at various stages of the seismic processing workflow, as our ability to mitigate noise in seismic data affects the quality of our subsequent analyses. However, finding an optimal balance between preserving seismic signals and effectively reducing seismic noise presents a substantial challenge. In this study, we introduce a multi-stage deep learning model, trained in a self-supervised manner, designed specifically to suppress seismic noise while minimizing signal leakage. This model operates as a patch-based approach, extracting overlapping patches from the noisy data and converting them into 1D vectors for input. It consists of two sub-networks with the same architecture but different configurations. Inspired by the transformer architecture, each sub-network features an embedded block that comprises two fully connected layers, which are utilized for feature extraction from the input patches. After reshaping, a multi-head attention module enhances the model's focus on significant features by assigning higher attention weights to them. The key difference between the two sub-networks lies in the number of neurons within their fully connected layers. The first sub-network serves as a strong denoiser with a small number of neurons, effectively attenuating seismic noise; in contrast, the second sub-network functions as a signal-add-back model, using a larger number of neurons to retrieve some of the signal that was not preserved in the output of the first sub-network. The proposed model produces two outputs, each corresponding to one of the sub-networks, and both sub-networks are optimized simultaneously using the noisy data as the label for both outputs. Evaluations conducted on both synthetic and field data demonstrate the model's effectiveness in suppressing seismic noise with minimal signal leakage, outperforming some benchmark methods.
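A minimal sketch of the two-branch idea under stated assumptions: each branch embeds flattened patches with two fully connected layers, applies multi-head self-attention, and is trained against the noisy patches themselves. The layer widths and head counts are hypothetical placeholders, not the paper's settings.

```python
# Sketch of the two-branch design: a strong denoiser (few neurons) and a
# signal-add-back branch (more neurons), both optimized against the noisy patches.
# All layer sizes are hypothetical; the abstract gives no exact dimensions.
import torch
import torch.nn as nn

def make_branch(patch_len, hidden, heads=4):
    """Two fully connected layers for feature extraction, multi-head self-attention
    over the patch sequence, then a projection back to the patch length."""
    class Branch(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Sequential(nn.Linear(patch_len, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden))
            self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
            self.out = nn.Linear(hidden, patch_len)
        def forward(self, x):                 # x: (batch, n_patches, patch_len)
            h = self.embed(x)
            h, _ = self.attn(h, h, h)
            return self.out(h)
    return Branch()

patches = torch.randn(8, 32, 121)             # noisy 11x11 patches flattened to 1D vectors
denoiser = make_branch(121, hidden=16)         # small width -> strong noise attenuation
addback  = make_branch(121, hidden=128)        # larger width -> retrieves leaked signal
out1, out2 = denoiser(patches), addback(patches)
loss = nn.functional.mse_loss(out1, patches) + nn.functional.mse_loss(out2, patches)
loss.backward()                                # both branches trained with the noisy data as label
```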
Computed Tomography (CT) reconstruction is essential in medical imaging and other engineering fields. However, blurring of the projection during CT imaging can lead to artifacts in the reconstructed images. Projection blur arises from a combination of factors such as large ray sources, scattering, and imaging system vibration. To address this problem, we propose DeblurTomo, a novel self-supervised learning-based deblurring and reconstruction algorithm that efficiently reconstructs sharp CT images from blurry input without needing external data or blur measurement. Specifically, we construct a coordinate-based implicit neural representation reconstruction network, which maps coordinates to the attenuation coefficient in the reconstructed space for more convenient ray representation. We then model the blur as a weighted sum of offset rays and design the Ray Correction Network (RCN) and Weight Proposal Network (WPN) to fit these rays and their weights using multi-view consistency and geometric information, thereby extending 2D deblurring to 3D space. In the training phase, we use the blurry input as the supervision signal to optimize the reconstruction network, the RCN, and the WPN simultaneously. Extensive experiments on a widely used synthetic dataset show that DeblurTomo performs superiorly on limited-angle and sparse-view reconstruction in simulated blurred scenarios. Further experiments on real datasets demonstrate the superiority of our method in practical scenarios.
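The sketch below is a rough, simplified illustration of the two ingredients named above: a coordinate MLP that maps 3D points to attenuation, and a blurred measurement modeled as a weighted sum of line integrals along slightly offset rays. The offsets and weights here are random stand-ins for what the RCN and WPN would predict.

```python
# Minimal sketch, under assumptions: a coordinate MLP maps 3D points to an
# attenuation value, and a blurred ray measurement is a weighted sum of line
# integrals along a few offset rays. Offsets and weights are hypothetical
# stand-ins for RCN/WPN outputs; the real networks and sampling are more involved.
import torch
import torch.nn as nn

field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))           # (x, y, z) -> attenuation coefficient

def ray_integral(origin, direction, n_samples=64):
    t = torch.linspace(0.0, 1.0, n_samples).unsqueeze(-1)
    pts = origin + t * direction                   # sample points along the ray
    return field(pts).sum() / n_samples            # crude quadrature of attenuation

origin = torch.tensor([0.0, 0.0, -1.0])
direction = torch.tensor([0.0, 0.0, 2.0])
offsets = 0.01 * torch.randn(5, 3)                 # small per-ray offsets (stand-in for RCN)
weights = torch.softmax(torch.randn(5), dim=0)     # ray weights (stand-in for WPN)

blurred = sum(w * ray_integral(origin + o, direction) for w, o in zip(weights, offsets))
# `blurred` would be compared against the blurry projection pixel during self-supervised training.
```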
Blended acquisition offers efficiency improvements over conventional seismic data acquisition, at the cost of introducing blending noise effects. In addition, seismic data often suffers from irregularly missing shots caused by artificial or natural effects during blended acquisition. Therefore, blending noise attenuation and missing shot reconstruction are essential for providing high-quality seismic data for further seismic processing and interpretation. The iterative shrinkage thresholding algorithm can help obtain deblended data based on sparsity assumptions of complete unblended data, and it characterizes seismic data linearly. Supervised learning algorithms can effectively capture the nonlinear relationship between incomplete pseudo-deblended data and complete unblended data. However, the dependence on complete unblended labels limits their practicality in field applications. Consequently, a self-supervised algorithm is presented for simultaneous deblending and interpolation of incomplete blended data, which minimizes the difference between simulated and observed incomplete pseudo-deblended data. A blind-trace U-Net (BTU-Net) is used to prevent identity mapping during complete unblended data estimation. Furthermore, a multistep process with blending noise simulation-subtraction and missing trace reconstruction-insertion is used in each step to improve the deblending and interpolation performance. Experiments with synthetic and field incomplete blended data demonstrate the effectiveness of the multistep self-supervised BTU-Net algorithm.
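Since the abstract leans on the iterative shrinkage thresholding algorithm and a sparsity assumption, here is a generic ISTA sketch on a toy least-squares problem; the operator A and step size are illustrative and are not the blending or sampling operators used in the paper.

```python
# Generic iterative shrinkage-thresholding (ISTA) sketch for a sparsity-promoting
# inverse problem: min_x 0.5*||A x - b||^2 + lam*||x||_1. A and b are toy data.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam=0.1, n_iter=200):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of A^T A
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true
print(np.round(ista(A, b)[np.abs(x_true) > 0], 2))  # recovered sparse coefficients
```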
Feature fusion is an important technique in medical image classification that can improve diagnostic accuracy by integrating complementary information from multiple sources. Recently, Deep Learning (DL) has been widely used in pulmonary disease diagnosis, such as pneumonia and tuberculosis. However, traditional feature fusion methods often suffer from feature disparity, information loss, redundancy, and increased complexity, hindering the further extension of DL algorithms. To address these limitations, we propose a Graph-Convolution Fusion Network with Self-Supervised Feature Alignment (Self-FAGCFN) for deep learning-based medical image classification of respiratory diseases such as pneumonia and tuberculosis. The network integrates Convolutional Neural Networks (CNNs) for robust feature extraction from two-dimensional grid structures and Graph Convolutional Networks (GCNs) within a Graph Neural Network branch to capture features based on graph structure, focusing on significant node representations. Additionally, an Attention-Embedding Ensemble Block is included to capture critical features from GCN outputs. To ensure effective feature alignment between the pre- and post-fusion stages, we introduce a feature alignment loss that minimizes disparities. Moreover, to address inappropriate centroid discrepancies during feature alignment and class imbalance in the dataset, we develop a Feature-Centroid Fusion (FCF) strategy and a Multi-Level Feature-Centroid Update (MLFCU) algorithm, respectively. Extensive experiments on the public LungVision and Chest-Xray datasets demonstrate that the Self-FAGCFN model significantly outperforms existing methods in diagnosing pneumonia and tuberculosis, highlighting its potential for practical medical applications.
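A minimal sketch of a centroid-based feature alignment loss between pre-fusion and post-fusion features is shown below; it only illustrates the general idea of penalizing per-class centroid discrepancies, with hypothetical feature sizes and class counts, and is not the paper's FCF/MLFCU procedure.

```python
# Sketch of a feature-alignment loss using per-class centroids; sizes are hypothetical.
import torch
import torch.nn.functional as F

def centroid_alignment_loss(pre_feats, post_feats, labels, num_classes):
    """Penalize the gap between per-class centroids of pre- and post-fusion features."""
    loss = 0.0
    for c in range(num_classes):
        idx = labels == c
        if idx.any():
            loss = loss + F.mse_loss(pre_feats[idx].mean(0), post_feats[idx].mean(0))
    return loss / num_classes

pre = torch.randn(32, 256)        # CNN-branch features before fusion
post = torch.randn(32, 256)       # features after graph-convolution fusion
labels = torch.randint(0, 3, (32,))
print(centroid_alignment_loss(pre, post, labels, num_classes=3).item())
```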
Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily due to the elimination of ground-truth depth dependence. However, the prevailing architectures in this domain suffer from inherent limitations: existing pose network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios due to dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
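As background for the SE(3) parameterization mentioned in contribution (3), the sketch below converts a 6-vector of axis-angle rotation and translation into a 4×4 rigid transform via the Rodrigues formula, a parameterization commonly used in pose networks; it is a simplified stand-in, not the paper's specific formulation.

```python
# Simplified SE(3)-style parameterization: axis-angle rotation + translation -> 4x4 transform.
# This is a common pose-network convention, not necessarily the paper's exact mapping.
import numpy as np

def se3_to_matrix(xi):
    """xi: (6,) = (rx, ry, rz, tx, ty, tz); returns a 4x4 homogeneous transform."""
    r, t = xi[:3], xi[3:]
    theta = np.linalg.norm(r)
    K = np.array([[0, -r[2], r[1]],
                  [r[2], 0, -r[0]],
                  [-r[1], r[0], 0]])               # skew-symmetric matrix of r
    if theta < 1e-8:
        R = np.eye(3) + K                          # first-order approximation near zero
    else:
        K = K / theta
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # Rodrigues formula
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

print(se3_to_matrix(np.array([0.0, 0.0, np.pi / 2, 0.1, 0.0, 0.0])).round(3))
```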
Few-shot learning has emerged as a crucial technique for coral species classification, addressing the challenge of limited labeled data in underwater environments. This study introduces an optimized few-shot learning model that enhances classification accuracy while minimizing reliance on extensive data collection. The proposed model integrates a hybrid similarity measure combining Euclidean distance and cosine similarity, effectively capturing both feature magnitude and directional relationships. This approach achieves a notable accuracy of 71.8% under a 5-way 5-shot evaluation, outperforming state-of-the-art models such as Prototypical Networks, FEAT, and ESPT by up to 10%. Notably, the model demonstrates high precision in classifying Siderastreidae (87.52%) and Fungiidae (88.95%), underscoring its effectiveness in distinguishing subtle morphological differences. To further enhance performance, we incorporate a self-supervised learning mechanism based on contrastive learning, enabling the model to extract robust representations by leveraging local structural patterns in corals. This enhancement significantly improves classification accuracy, particularly for species with high intra-class variation, leading to an overall accuracy of 76.52% under a 5-way 10-shot evaluation. Additionally, the model exploits the repetitive structures inherent in corals, introducing a local feature aggregation strategy that refines classification through spatial information integration. Beyond its technical contributions, this study presents a scalable and efficient approach for automated coral reef monitoring, reducing annotation costs while maintaining high classification accuracy. By improving few-shot learning performance in underwater environments, our model enhances monitoring accuracy by up to 15% compared to traditional methods, offering a practical solution for large-scale coral conservation efforts.
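The hybrid similarity measure can be illustrated with a small sketch: a weighted combination of (negative) Euclidean distance and cosine similarity scores a query embedding against class prototypes. The mixing weight alpha is a hypothetical hyperparameter, not a value given in the abstract.

```python
# Sketch of a hybrid similarity combining Euclidean distance and cosine similarity.
# alpha is a hypothetical mixing weight.
import numpy as np

def hybrid_similarity(query, prototype, alpha=0.5):
    """Higher is more similar: negated Euclidean distance (magnitude term)
    plus cosine similarity (direction term)."""
    euclid = np.linalg.norm(query - prototype)
    cosine = query @ prototype / (np.linalg.norm(query) * np.linalg.norm(prototype) + 1e-8)
    return -alpha * euclid + (1.0 - alpha) * cosine

rng = np.random.default_rng(0)
query = rng.standard_normal(128)                       # embedded coral image
prototypes = rng.standard_normal((5, 128))             # 5-way class prototypes
scores = [hybrid_similarity(query, p) for p in prototypes]
print(int(np.argmax(scores)))                          # predicted class index
```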
Accurate aging diagnosis is crucial for the health and safety management of lithium-ion batteries in electric vehicles. Despite significant advancements achieved by data-driven methods, diagnosis accuracy remains constrained by the high costs of check-up tests and the scarcity of labeled data. This paper presents a framework utilizing self-supervised machine learning to harness the potential of unlabeled data for diagnosing battery aging in electric vehicles during field operations. We validate our method using battery degradation datasets collected over more than two years from twenty real-world electric vehicles. Our analysis comprehensively addresses cell inconsistencies, physical interpretations, and charging uncertainties in real-world applications. This is achieved through self-supervised feature extraction using random short charging sequences in the main peak of incremental capacity curves. By leveraging inexpensive unlabeled data in a self-supervised approach, our method demonstrates improvements in average root mean square errors of 74.54% and 60.50% in the best and worst cases, respectively, compared to the supervised benchmark. This work underscores the potential of employing low-cost unlabeled data with self-supervised machine learning for effective battery health and safety management in real-world scenarios.
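For context on the incremental capacity curves mentioned above, the sketch below computes dQ/dV from a synthetic charging sequence and locates its main peak, from which short random segments could be drawn as self-supervised inputs; the synthetic curve, voltage grid, and binning are illustrative assumptions, not the paper's data or settings.

```python
# Sketch of computing an incremental capacity (IC) curve, dQ/dV, from a charging sequence.
# The charging curve here is synthetic and only illustrates the concept.
import numpy as np

def incremental_capacity(voltage, capacity, bins=200):
    v_grid = np.linspace(voltage.min(), voltage.max(), bins)
    q_interp = np.interp(v_grid, voltage, capacity)      # resample capacity on a voltage grid
    dq_dv = np.gradient(q_interp, v_grid)                # IC curve: dQ/dV
    return v_grid, dq_dv

v = np.linspace(3.0, 4.2, 500)                           # synthetic charging voltage (V)
q = 2.0 / (1.0 + np.exp(-12 * (v - 3.7)))                # synthetic charged capacity (Ah)
v_grid, ic = incremental_capacity(v, q)
main_peak_voltage = v_grid[np.argmax(ic)]                # centre of the main IC peak
print(round(main_peak_voltage, 2))
```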
Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but the collection of suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, resulting in more convenience and better generalization. Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment methods, leaving remnants of other degradations, uneven brightness, and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE. It can handle multiple degradations, including illumination attenuation, noise pollution, and color shift, all in a self-supervised manner. Illumination attenuation is estimated based on physical principles and local neighborhood information. The removal of noise and the correction of color shift are realized solely with noisy images and images exhibiting color shifts. The comprehensive and fully self-supervised approach thus achieves better adaptability and generalization. It is applicable to various low-light conditions and can reproduce the original color of scenes in natural light. Extensive experiments conducted on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.
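To make the illumination-estimation step concrete, here is a rough Retinex-style sketch, assuming illumination can be approximated by the locally averaged maximum color channel and divided out of the image; SLIE's actual estimator is more elaborate, so this only illustrates the principle.

```python
# Illustrative Retinex-style step only: illumination is the per-pixel maximum over
# colour channels, smoothed over a local neighbourhood; reflectance follows from I = R * L.
import numpy as np

def estimate_illumination(img, k=15):
    """img: HxWx3 float array in [0, 1]; returns an HxW illumination map."""
    lum = img.max(axis=2)                               # max channel as initial illumination
    pad = k // 2
    padded = np.pad(lum, pad, mode="edge")
    out = np.empty_like(lum)
    for i in range(lum.shape[0]):                       # local neighbourhood mean (box filter)
        for j in range(lum.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return np.clip(out, 1e-3, 1.0)

rng = np.random.default_rng(0)
low_light = rng.random((64, 64, 3)) * 0.2               # synthetic dark image
L = estimate_illumination(low_light)
enhanced = np.clip(low_light / L[..., None], 0.0, 1.0)  # divide out the estimated attenuation
```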
The encoding aperture snapshot spectral imaging system, based on compressive sensing theory, can be regarded as an encoder, which can efficiently obtain compressed two-dimensional spectral data and then decode it into three-dimensional spectral data through deep neural networks. However, training the deep neural networks requires a large amount of clean data that is difficult to obtain. To address the problem of insufficient training data for deep neural networks, a self-supervised hyperspectral denoising neural network based on neighborhood sampling is proposed. This network is integrated into a deep plug-and-play framework to achieve self-supervised spectral reconstruction. The study also examines the impact of different noise degradation models on the final reconstruction quality. Experimental results demonstrate that the self-supervised learning method enhances the average peak signal-to-noise ratio by 1.18 dB and improves the structural similarity by 0.009 compared with the supervised learning method. Additionally, it achieves better visual reconstruction results.
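A minimal sketch of the neighborhood-sampling idea, under the assumption that it follows the usual Neighbor2Neighbor-style scheme: two half-resolution sub-images are drawn from neighboring pixels of the noisy cube and used as an input/target pair for self-supervised training. The cell size and band count are illustrative.

```python
# Sketch of neighbourhood sampling for self-supervised denoising: each 2x2 cell of the
# noisy cube contributes one pixel to each of two sub-images (input/target pair).
# This illustrates the sampling idea only, not the paper's full network.
import numpy as np

def neighbor_subsample(noisy, rng):
    """noisy: HxWxC with H, W even; returns two half-resolution sub-images."""
    H, W, C = noisy.shape
    sub1 = np.empty((H // 2, W // 2, C), dtype=noisy.dtype)
    sub2 = np.empty_like(sub1)
    for i in range(0, H, 2):
        for j in range(0, W, 2):
            idx = rng.permutation(4)[:2]                 # pick two distinct neighbours per cell
            cell = noisy[i:i + 2, j:j + 2].reshape(4, C)
            sub1[i // 2, j // 2] = cell[idx[0]]
            sub2[i // 2, j // 2] = cell[idx[1]]
    return sub1, sub2

rng = np.random.default_rng(0)
noisy_cube = rng.random((32, 32, 31))                    # toy hyperspectral cube with 31 bands
inp, target = neighbor_subsample(noisy_cube, rng)        # train the denoiser so f(inp) ≈ target
```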
Handheld ultrasound devices are known for their portability and affordability, making them widely utilized in underdeveloped areas and community healthcare for rapid diagnosis and early screening. However, the image quality of handheld ultrasound devices is not always satisfactory due to the limited equipment size, which hinders accurate diagnoses by doctors. At the same time, paired ultrasound images are difficult to obtain from the clinic because the imaging process is complicated. Therefore, we propose a modified cycle generative adversarial network (cycleGAN) for ultrasound image enhancement across multiple organs via unpaired pre-training. We introduce an ultrasound image pre-training method that does not require paired images, alleviating the requirement for large-scale paired datasets. We also propose an enhanced block with different structures in the pre-training and fine-tuning phases, which helps achieve the goals of the different training phases. To improve the robustness of the model, we add Gaussian noise to the training images as data augmentation. Our approach obtains the best quantitative evaluation results with a small number of parameters and low training cost, improving the image quality of handheld ultrasound devices.
As important geological data, a geological report contains rich expert and geological knowledge, but the challenge facing current research into geological knowledge extraction and mining is how to render accurate understanding of geological reports guided by domain knowledge. While generic named entity recognition models/tools can be utilized for the processing of geoscience reports/documents, their effectiveness is hampered by a dearth of domain-specific knowledge, which in turn leads to a pronounced decline in recognition accuracy. This study summarizes six types of typical geological entities, with reference to the ontological system of geological domains, and builds a high-quality corpus for the task of geological named entity recognition (GNER). In addition, GeoWoBERT-advBGP (Geological Word-base BERT-adversarial training Bi-directional Long Short-Term Memory Global Pointer) is proposed to address the issues of ambiguity, diversity, and nested entities for geological entities. The model first uses the fine-tuned word-granularity-based pre-training model GeoWoBERT (Geological Word-base BERT) and combines the text features extracted using BiLSTM (Bi-directional Long Short-Term Memory), followed by an adversarial training algorithm to improve the robustness of the model and enhance its resistance to interference, with decoding finally performed using a global association pointer algorithm. The experimental results show that the proposed model achieves high performance on the constructed dataset and is capable of mining rich geological information.
The federated self-supervised framework is a distributed machine learning method that combines federated learning and self-supervised learning, which can effectively solve the problem that traditional federated learning has difficulty processing large-scale unlabeled data. Existing federated self-supervised frameworks suffer from low communication efficiency and high communication delay between clients and the central server. Therefore, we add edge servers to the federated self-supervised framework to reduce the pressure on the central server caused by frequent communication between the two ends. A communication compression scheme using gradient quantization and sparsification is proposed to optimize the communication of the entire framework, and the algorithm of the sparse communication compression module is improved. Experiments show that the learning rate changes of the improved sparse communication compression module are smoother and more stable. Our communication compression scheme effectively reduces the overall communication overhead.
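As a rough sketch of the gradient quantization and sparsification mentioned above, the snippet below keeps only the top-k gradient entries and quantizes them to 8 bits before transmission, then reconstructs them on the receiving side; the sparsity ratio and bit width are illustrative choices, not the paper's settings.

```python
# Sketch of communication compression: top-k sparsification plus 8-bit uniform quantization.
# k_ratio and bits are illustrative hyperparameters.
import numpy as np

def compress_gradient(grad, k_ratio=0.01, bits=8):
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]        # indices of the k largest magnitudes
    values = flat[idx]
    scale = np.abs(values).max() / (2 ** (bits - 1) - 1) or 1.0
    q = np.round(values / scale).astype(np.int8)        # uniform symmetric quantization
    return idx, q, scale, grad.shape

def decompress_gradient(idx, q, scale, shape):
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = q.astype(np.float64) * scale
    return flat.reshape(shape)

rng = np.random.default_rng(0)
g = rng.standard_normal((256, 128))
payload = compress_gradient(g)                          # what a client would send upstream
g_hat = decompress_gradient(*payload)                   # what the edge server reconstructs
print(np.count_nonzero(g_hat), "of", g.size, "entries transmitted")
```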
Spectroscopy, especially plasma spectroscopy, provides a powerful platform for biological and material analysis with its elemental and molecular fingerprinting capability. Artificial intelligence (AI) has tremendous potential to build a universal quantitative framework covering all branches of plasma spectroscopy, based on its unmatched representation and generalization ability. Herein, we introduce an AI-based unified method called self-supervised image-spectrum twin information fusion detection (SISTIFD) to collect twin co-occurrence signals of the plasma and to intelligently predict the physical parameters for improving the performance of all plasma spectroscopic techniques. It can fuse the spectra and plasma images in synchronization, derive the plasma parameters (total number density, plasma temperature, electron density, and other implicit factors), and provide accurate results. The experimental data demonstrate its excellent utility and capacity, with a reduction of 98% in evaluation indices (root mean square error, relative standard deviation, etc.) and an analysis frequency of 143 Hz (much faster than the mainstream detection frame rate of 1 Hz). In addition, as a completely end-to-end and self-supervised framework, SISTIFD enables automatic detection without manual preprocessing or intervention. With these advantages, it has remarkably enhanced various plasma spectroscopic techniques with state-of-the-art performance and opened up their use in industry, especially in settings that require both capability and efficiency. This scheme brings new inspiration to the whole field of plasma spectroscopy and enables in situ analysis in real-world scenarios with high throughput, cross-interference, varied analyte complexity, and diverse applications.
Intelligent sorting is an important prerequisite for the full quantitative consumption and harmless disposal of kitchen waste. Existing object detection methods based on ImageNet pre-trained models are an effective way of sorting. However, owing to significant domain gaps between natural images and kitchen waste images, it is difficult to reflect the characteristics of diverse scales and dense distribution in kitchen waste based on an ImageNet pre-trained model, leading to poor generalisation. In this article, the authors propose the first pre-trained model for kitchen waste sorting, called KitWaSor, which combines both contrastive learning (CL) and masked image modelling (MIM) through self-supervised learning (SSL). First, to address the issue of diverse scales, the authors propose a mixed masking strategy by introducing an incomplete masking branch alongside the original random masking branch. It prevents the complete loss of small-scale objects while avoiding excessive leakage of large-scale object pixels. Second, to address the issue of dense distribution, the authors introduce semantic consistency constraints on the basis of the mixed masking strategy. That is, object semantic reasoning is performed through semantic consistency constraints to compensate for the lack of contextual information. To train KitWaSor, the authors construct the first million-level kitchen waste dataset spanning seasonal and regional distributions, named KWD-Million. Extensive experiments show that KitWaSor achieves state-of-the-art (SOTA) performance on the two downstream tasks most relevant to kitchen waste sorting (i.e., image classification and object detection), demonstrating the effectiveness of the proposed KitWaSor.
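The mixed masking strategy can be sketched roughly as two branches over a grid of image patches: one masks whole patches at random, the other masks only part of each selected patch so small objects are never completely erased. The mask ratios, patch sizes, and kept fraction below are hypothetical.

```python
# Sketch of a mixed masking strategy for masked image modelling: a random-masking
# branch (whole patches hidden) and an incomplete-masking branch (patches only
# partially hidden). Ratios and patch sizes are hypothetical.
import numpy as np

def random_mask(n_patches, ratio, rng):
    mask = np.zeros(n_patches, dtype=bool)
    mask[rng.choice(n_patches, int(ratio * n_patches), replace=False)] = True
    return mask                                          # True = patch fully hidden

def incomplete_mask(patches, ratio, keep_frac, rng):
    """Hide all but a kept fraction of pixels inside each selected patch."""
    out = patches.copy()
    chosen = rng.choice(len(patches), int(ratio * len(patches)), replace=False)
    for p in chosen:
        hide = rng.random(patches.shape[1]) >= keep_frac
        out[p, hide] = 0.0
    return out

rng = np.random.default_rng(0)
patches = rng.random((196, 16 * 16 * 3))                 # 14x14 grid of 16x16 RGB patches
branch_a = patches * ~random_mask(196, 0.75, rng)[:, None]           # random-masking branch
branch_b = incomplete_mask(patches, 0.75, keep_frac=0.25, rng=rng)   # incomplete-masking branch
```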
Predicting cross-immunity between viral strains is vital for public health surveillance and vaccine development. Traditional neural network methods, such as BiLSTM, can be ineffective due to the lack of lab data for model training and the overshadowing of crucial features within sequence concatenation. The current work proposes a less data-consuming model incorporating a pre-trained gene sequence model and a mutual information inference operator. Our methodology utilizes gene alignment and deduplication algorithms to preprocess gene sequences, enhancing the model's capacity to discern and focus on distinctions among input gene pairs. The model, the DNA Pretrained Cross-Immunity Protection Inference model (DPCIPI), outperforms state-of-the-art (SOTA) models in predicting hemagglutination inhibition titer from influenza viral gene sequences alone. For binary cross-immunity prediction, the improvement is 1.58% in F1, 2.34% in precision, 1.57% in recall, and 1.57% in accuracy. For multilevel cross-immunity prediction, the improvement is 2.12% in F1, 3.50% in precision, 2.19% in recall, and 2.19% in accuracy. Our study showcases the potential of pre-trained gene models to improve predictions of antigenic variation and cross-immunity. With expanding gene data and advancements in pre-trained models, this approach promises significant impacts on vaccine development and public health.
Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly. Providing a big dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images. Therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates a new class of texture. It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures increases through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and similar textures that are created. The proposed method is faster than some well-known generative networks by around 4 to 10 times. In addition, the quality of the generated textures surpasses that of these networks. The proposed method can generate textures that surpass those of some GANs and parametric models in certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset is created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels in size, organized into 600 classes. It is uploaded to the Kaggle site and Google Drive. This dataset is called BigTex. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
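A rough sketch of gradient-based texture generation follows, under assumptions: starting from noise, the image is updated by gradient ascent so that a chosen convolution kernel responds strongly. The paper uses kernels from pre-trained deep networks; the tiny untrained network below is only a stand-in to show the optimization loop.

```python
# Sketch of gradient-based texture synthesis via activation maximization.
# The two-layer untrained network is a stand-in for a pre-trained feature extractor.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

def generate_texture(kernel_idx=5, steps=100, lr=0.05):
    img = torch.rand(1, 3, 150, 150, requires_grad=True)   # 150x150 like the BigTex textures
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        activation = net(img)[0, kernel_idx]                # response of one chosen kernel
        loss = -activation.mean()                           # ascend on the kernel's response
        loss.backward()
        opt.step()
    return img.detach().clamp(0.0, 1.0)

texture = generate_texture()
print(texture.shape)                                        # torch.Size([1, 3, 150, 150])
```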
Research on reconstructing imperfect faces is a challenging task. In this study, we explore a data-driven approach using a pre-trained MICA (MetrIC fAce) model combined with 3D printing to address this challenge. We propose a training strategy that utilizes the pre-trained MICA model and self-supervised learning techniques to improve accuracy and reduce the time needed for 3D facial structure reconstruction. Our results demonstrate high accuracy, evaluated by the geometric loss function and various statistical measures. To showcase the effectiveness of the approach, we used 3D printing to create a model that covers facial wounds. The findings indicate that our method produces a model that fits well and achieves comprehensive 3D facial reconstruction. This technique has the potential to aid doctors in treating patients with facial injuries.
We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization of German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform abstractive text summarization in the healthcare field. The research hypothesis was that large language models could perform high-quality abstractive text summarization on German technical healthcare texts, even if the model is not specifically trained in that language. Through experiments, the research questions explore the performance of transformer language models in dealing with complex syntax constructs, the difference in performance between models trained in English and German, and the impact of translating the source text to English before conducting the summarization. We conducted an evaluation of four PLMs (GPT-3, a translation-based approach also utilizing GPT-3, a German language model, and a domain-specific biomedical model). The evaluation considered informativeness, using three types of metrics based on Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and the quality of the results, which was manually evaluated across five aspects. The results show that text summarization models can be used in the German healthcare domain and that domain-independent language models achieved the best results. The study shows that text summarization models can simplify the search for pre-existing German knowledge in various domains.
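For readers unfamiliar with the metric, a minimal ROUGE-1-style overlap computation is sketched below (unigram recall, precision, and F1 of a candidate against a single reference); real evaluations typically use a ROUGE library with stemming, stopword handling, and multiple references.

```python
# Minimal ROUGE-1-style score: clipped unigram overlap between candidate and reference.
from collections import Counter

def rouge_1(candidate: str, reference: str):
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())                   # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return recall, precision, f1

print(rouge_1("the patient shows improved lung function",
              "lung function of the patient has improved"))
```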
Learning discriminative representations with deep neural networks often relies on massive labeled data, which is expensive and difficult to obtain in many real scenarios. As an alternative, self-supervised learning that leverages the input itself as supervision is strongly preferred for its soaring performance on visual representation learning. This paper introduces a contrastive self-supervised framework for learning generalizable representations on synthetic data that can be obtained easily with complete controllability. Specifically, we propose to optimize a contrastive learning task and a physical property prediction task simultaneously. Given the synthetic scene, the first task aims to maximize agreement between a pair of synthetic images generated by our proposed view sampling module, while the second task aims to predict three physical property maps, i.e., depth, instance contour maps, and surface normal maps. In addition, a feature-level domain adaptation technique with adversarial training is applied to reduce the domain difference between the realistic and the synthetic data. Experiments demonstrate that our proposed method achieves state-of-the-art performance on several visual recognition datasets.
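A minimal sketch of the contrastive objective for the pair of views is given below, using the common NT-Xent formulation as an assumed stand-in for the paper's exact loss; the physical-property heads would add standard regression losses on the depth, contour, and normal maps (not shown).

```python
# Sketch of an NT-Xent contrastive loss pulling together two views of the same scene.
# Temperature and embedding sizes are illustrative.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two views of the same N scenes."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # 2N x D, unit-norm embeddings
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))                       # exclude self-similarity
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])  # index of each positive
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)
print(nt_xent(z1, z2).item())
```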
基金Supported by Sichuan Science and Technology Program(2023YFSY0026,2023YFH0004)Supported by the Institute of Information&Communications Technology Planning&Evaluation(IITP)grant funded by the Korean government(MSIT)(No.RS-2022-00155885,Artificial Intelligence Convergence Innovation Human Resources Development(Hanyang University ERICA)).
文摘Two-dimensional endoscopic images are susceptible to interferences such as specular reflections and monotonous texture illumination,hindering accurate three-dimensional lesion reconstruction by surgical robots.This study proposes a novel end-to-end disparity estimation model to address these challenges.Our approach combines a Pseudo-Siamese neural network architecture with pyramid dilated convolutions,integrating multi-scale image information to enhance robustness against lighting interferences.This study introduces a Pseudo-Siamese structure-based disparity regression model that simplifies left-right image comparison,improving accuracy and efficiency.The model was evaluated using a dataset of stereo endoscopic videos captured by the Da Vinci surgical robot,comprising simulated silicone heart sequences and real heart video data.Experimental results demonstrate significant improvement in the network’s resistance to lighting interference without substantially increasing parameters.Moreover,the model exhibited faster convergence during training,contributing to overall performance enhancement.This study advances endoscopic image processing accuracy and has potential implications for surgical robot applications in complex environments.
基金supported by the King Abdullah University of Science and Technology(KAUST)。
文摘Seismic data denoising is a critical process usually applied at various stages of the seismic processing workflow,as our ability to mitigate noise in seismic data affects the quality of our subsequent analyses.However,finding an optimal balance between preserving seismic signals and effectively reducing seismic noise presents a substantial challenge.In this study,we introduce a multi-stage deep learning model,trained in a self-supervised manner,designed specifically to suppress seismic noise while minimizing signal leakage.This model operates as a patch-based approach,extracting overlapping patches from the noisy data and converting them into 1D vectors for input.It consists of two identical sub-networks,each configured differently.Inspired by the transformer architecture,each sub-network features an embedded block that comprises two fully connected layers,which are utilized for feature extraction from the input patches.After reshaping,a multi-head attention module enhances the model’s focus on significant features by assigning higher attention weights to them.The key difference between the two sub-networks lies in the number of neurons within their fully connected layers.The first sub-network serves as a strong denoiser with a small number of neurons,effectively attenuating seismic noise;in contrast,the second sub-network functions as a signal-add-back model,using a larger number of neurons to retrieve some of the signal that was not preserved in the output of the first sub-network.The proposed model produces two outputs,each corresponding to one of the sub-networks,and both sub-networks are optimized simultaneously using the noisy data as the label for both outputs.Evaluations conducted on both synthetic and field data demonstrate the model’s effectiveness in suppressing seismic noise with minimal signal leakage,outperforming some benchmark methods.
基金supported in part by the National Natural Science Foundation of China under Grants 62472434 and 62402171in part by the National Key Research and Development Program of China under Grant 2022YFF1203001+1 种基金in part by the Science and Technology Innovation Program of Hunan Province under Grant 2022RC3061in part by the Sci-Tech Innovation 2030 Agenda under Grant 2023ZD0508600.
文摘Computed Tomography(CT)reconstruction is essential inmedical imaging and other engineering fields.However,blurring of the projection during CT imaging can lead to artifacts in the reconstructed images.Projection blur combines factors such as larger ray sources,scattering and imaging system vibration.To address the problem,we propose DeblurTomo,a novel self-supervised learning-based deblurring and reconstruction algorithm that efficiently reconstructs sharp CT images from blurry input without needing external data and blur measurement.Specifically,we constructed a coordinate-based implicit neural representation reconstruction network,which can map the coordinates to the attenuation coefficient in the reconstructed space formore convenient ray representation.Then,wemodel the blur as aweighted sumof offset rays and design the RayCorrectionNetwork(RCN)andWeight ProposalNetwork(WPN)to fit these rays and their weights bymulti-view consistency and geometric information,thereby extending 2D deblurring to 3D space.In the training phase,we use the blurry input as the supervision signal to optimize the reconstruction network,the RCN,and the WPN simultaneously.Extensive experiments on the widely used synthetic dataset show that DeblurTomo performs superiorly on the limited-angle and sparse-view in the simulated blurred scenarios.Further experiments on real datasets demonstrate the superiority of our method in practical scenarios.
基金supported by the National Natural Science Foundation of China(42374134,42304125,U20B6005)the Science and Technology Commission of Shanghai Municipality(23JC1400502)the Fundamental Research Funds for the Central Universities.
文摘Blended acquisition offers efficiency improvements over conventional seismic data acquisition, at the cost of introducing blending noise effects. Besides, seismic data often suffers from irregularly missing shots caused by artificial or natural effects during blended acquisition. Therefore, blending noise attenuation and missing shots reconstruction are essential for providing high-quality seismic data for further seismic processing and interpretation. The iterative shrinkage thresholding algorithm can help obtain deblended data based on sparsity assumptions of complete unblended data, and it characterizes seismic data linearly. Supervised learning algorithms can effectively capture the nonlinear relationship between incomplete pseudo-deblended data and complete unblended data. However, the dependence on complete unblended labels limits their practicality in field applications. Consequently, a self-supervised algorithm is presented for simultaneous deblending and interpolation of incomplete blended data, which minimizes the difference between simulated and observed incomplete pseudo-deblended data. The used blind-trace U-Net (BTU-Net) prevents identity mapping during complete unblended data estimation. Furthermore, a multistep process with blending noise simulation-subtraction and missing traces reconstruction-insertion is used in each step to improve the deblending and interpolation performance. Experiments with synthetic and field incomplete blended data demonstrate the effectiveness of the multistep self-supervised BTU-Net algorithm.
基金supported by the National Natural Science Foundation of China(62276092,62303167)the Postdoctoral Fellowship Program(Grade C)of China Postdoctoral Science Foundation(GZC20230707)+3 种基金the Key Science and Technology Program of Henan Province,China(242102211051,242102211042,212102310084)Key Scientiffc Research Projects of Colleges and Universities in Henan Province,China(25A520009)the China Postdoctoral Science Foundation(2024M760808)the Henan Province medical science and technology research plan joint construction project(LHGJ2024069).
文摘Feature fusion is an important technique in medical image classification that can improve diagnostic accuracy by integrating complementary information from multiple sources.Recently,Deep Learning(DL)has been widely used in pulmonary disease diagnosis,such as pneumonia and tuberculosis.However,traditional feature fusion methods often suffer from feature disparity,information loss,redundancy,and increased complexity,hindering the further extension of DL algorithms.To solve this problem,we propose a Graph-Convolution Fusion Network with Self-Supervised Feature Alignment(Self-FAGCFN)to address the limitations of traditional feature fusion methods in deep learning-based medical image classification for respiratory diseases such as pneumonia and tuberculosis.The network integrates Convolutional Neural Networks(CNNs)for robust feature extraction from two-dimensional grid structures and Graph Convolutional Networks(GCNs)within a Graph Neural Network branch to capture features based on graph structure,focusing on significant node representations.Additionally,an Attention-Embedding Ensemble Block is included to capture critical features from GCN outputs.To ensure effective feature alignment between pre-and post-fusion stages,we introduce a feature alignment loss that minimizes disparities.Moreover,to address the limitations of proposed methods,such as inappropriate centroid discrepancies during feature alignment and class imbalance in the dataset,we develop a Feature-Centroid Fusion(FCF)strategy and a Multi-Level Feature-Centroid Update(MLFCU)algorithm,respectively.Extensive experiments on public datasets LungVision and Chest-Xray demonstrate that the Self-FAGCFN model significantly outperforms existing methods in diagnosing pneumonia and tuberculosis,highlighting its potential for practical medical applications.
基金supported in part by the National Natural Science Foundation of China under Grants 62071345。
文摘Self-supervised monocular depth estimation has emerged as a major research focus in recent years,primarily due to the elimination of ground-truth depth dependence.However,the prevailing architectures in this domain suffer from inherent limitations:existing pose network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions.These assumptions are often violated in real-world scenarios due to dynamic objects,non-Lambertian reflectance,and unstructured background elements,leading to pervasive artifacts such as depth discontinuities(“holes”),structural collapse,and ambiguous reconstruction.To address these challenges,we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network,enhancing its ability to model complex scene dynamics.Our contributions are threefold:(1)a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations;(2)a physically-informed loss function that couples dynamic pose and depth predictions,designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles;(3)an efficient SE(3)transformation parameterization that streamlines network complexity and temporal pre-processing.Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity,significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
基金funded by theNational Science and TechnologyCouncil(NSTC),Taiwan,under grant numbers NSTC 112-2634-F-019-001 and NSTC 113-2634-F-A49-007.
文摘Few-shot learning has emerged as a crucial technique for coral species classification,addressing the challenge of limited labeled data in underwater environments.This study introduces an optimized few-shot learning model that enhances classification accuracy while minimizing reliance on extensive data collection.The proposed model integrates a hybrid similarity measure combining Euclidean distance and cosine similarity,effectively capturing both feature magnitude and directional relationships.This approach achieves a notable accuracy of 71.8%under a 5-way 5-shot evaluation,outperforming state-of-the-art models such as Prototypical Networks,FEAT,and ESPT by up to 10%.Notably,the model demonstrates high precision in classifying Siderastreidae(87.52%)and Fungiidae(88.95%),underscoring its effectiveness in distinguishing subtle morphological differences.To further enhance performance,we incorporate a self-supervised learning mechanism based on contrastive learning,enabling the model to extract robust representations by leveraging local structural patterns in corals.This enhancement significantly improves classification accuracy,particularly for species with high intra-class variation,leading to an overall accuracy of 76.52%under a 5-way 10-shot evaluation.Additionally,the model exploits the repetitive structures inherent in corals,introducing a local feature aggregation strategy that refines classification through spatial information integration.Beyond its technical contributions,this study presents a scalable and efficient approach for automated coral reef monitoring,reducing annotation costs while maintaining high classification accuracy.By improving few-shot learning performance in underwater environments,our model enhances monitoring accuracy by up to 15%compared to traditional methods,offering a practical solution for large-scale coral conservation efforts.
基金supported by the research project‘‘SafeDaBatt”(03EMF0409A)funded by the German Federal Ministry for Digital and Transport(BMDV)+2 种基金the National Key Research and Development Program of China(2022YFE0102700)the Key Research and Development Program of Shaanxi Province(2023-GHYB-05,2023-YBSF-104)the financial support from the China Scholarship Council(CSC)(202206567008)。
文摘Accurate aging diagnosis is crucial for the health and safety management of lithium-ion batteries in electric vehicles.Despite significant advancements achieved by data-driven methods,diagnosis accuracy remains constrained by the high costs of check-up tests and the scarcity of labeled data.This paper presents a framework utilizing self-supervised machine learning to harness the potential of unlabeled data for diagnosing battery aging in electric vehicles during field operations.We validate our method using battery degradation datasets collected over more than two years from twenty real-world electric vehicles.Our analysis comprehensively addresses cell inconsistencies,physical interpretations,and charging uncertainties in real-world applications.This is achieved through self-supervised feature extraction using random short charging sequences in the main peak of incremental capacity curves.By leveraging inexpensive unlabeled data in a self-supervised approach,our method demonstrates improvements in average root mean square errors of 74.54%and 60.50%in the best and worst cases,respectively,compared to the supervised benchmark.This work underscores the potential of employing low-cost unlabeled data with self-supervised machine learning for effective battery health and safety management in realworld scenarios.
基金supported by the National Natural Science Foundation of China(62276192)。
文摘Low-light images suffer from low quality due to poor lighting conditions,noise pollution,and improper settings of cameras.To enhance low-light images,most existing methods rely on normal-light images for guidance but the collection of suitable normal-light images is difficult.In contrast,a self-supervised method breaks free from the reliance on normal-light data,resulting in more convenience and better generalization.Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment methods,resulting in remnants of other degradations,uneven brightness and artifacts.In response,this paper proposes a self-supervised enhancement method,termed as SLIE.It can handle multiple degradations including illumination attenuation,noise pollution,and color shift,all in a self-supervised manner.Illumination attenuation is estimated based on physical principles and local neighborhood information.The removal and correction of noise and color shift removal are solely realized with noisy images and images with color shifts.Finally,the comprehensive and fully self-supervised approach can achieve better adaptability and generalization.It is applicable to various low light conditions,and can reproduce the original color of scenes in natural light.Extensive experiments conducted on four public datasets demonstrate the superiority of SLIE to thirteen state-of-the-art methods.Our code is available at https://github.com/hanna-xu/SLIE.
基金Supported by the Zhejiang Provincial"Jianbing"and"Lingyan"R&D Programs(2023C03012,2024C01126)。
文摘The encoding aperture snapshot spectral imaging system,based on the compressive sensing theory,can be regarded as an encoder,which can efficiently obtain compressed two-dimensional spectral data and then decode it into three-dimensional spectral data through deep neural networks.However,training the deep neural net⁃works requires a large amount of clean data that is difficult to obtain.To address the problem of insufficient training data for deep neural networks,a self-supervised hyperspectral denoising neural network based on neighbor⁃hood sampling is proposed.This network is integrated into a deep plug-and-play framework to achieve self-supervised spectral reconstruction.The study also examines the impact of different noise degradation models on the fi⁃nal reconstruction quality.Experimental results demonstrate that the self-supervised learning method enhances the average peak signal-to-noise ratio by 1.18 dB and improves the structural similarity by 0.009 compared with the supervised learning method.Additionally,it achieves better visual reconstruction results.
文摘Handheld ultrasound devices are known for their portability and affordability,making them widely utilized in underdeveloped areas and community healthcare for rapid diagnosis and early screening.However,the image quality of handheld ultrasound devices is not always satisfactory due to the limited equipment size,which hinders accurate diagnoses by doctors.At the same time,paired ultrasound images are difficult to obtain from the clinic because imaging process is complicated.Therefore,we propose a modified cycle generative adversarial network(cycleGAN) for ultrasound image enhancement from multiple organs via unpaired pre-training.We introduce an ultrasound image pre-training method that does not require paired images,alleviating the requirement for large-scale paired datasets.We also propose an enhanced block with different structures in the pre-training and fine-tuning phases,which can help achieve the goals of different training phases.To improve the robustness of the model,we add Gaussian noise to the training images as data augmentation.Our approach is effective in obtaining the best quantitative evaluation results using a small number of parameters and less training costs to improve the quality of handheld ultrasound devices.
基金financially supported by the Natural Science Foundation of China(Grant No.42301492)the National Key R&D Program of China(Grant Nos.2022YFF0711600,2022YFF0801201,2022YFF0801200)+3 种基金the Major Special Project of Xinjiang(Grant No.2022A03009-3)the Open Fund of Key Laboratory of Urban Land Resources Monitoring and Simulation,Ministry of Natural Resources(Grant No.KF-2022-07014)the Opening Fund of the Key Laboratory of the Geological Survey and Evaluation of the Ministry of Education(Grant No.GLAB 2023ZR01)the Fundamental Research Funds for the Central Universities。
文摘As important geological data,a geological report contains rich expert and geological knowledge,but the challenge facing current research into geological knowledge extraction and mining is how to render accurate understanding of geological reports guided by domain knowledge.While generic named entity recognition models/tools can be utilized for the processing of geoscience reports/documents,their effectiveness is hampered by a dearth of domain-specific knowledge,which in turn leads to a pronounced decline in recognition accuracy.This study summarizes six types of typical geological entities,with reference to the ontological system of geological domains and builds a high quality corpus for the task of geological named entity recognition(GNER).In addition,Geo Wo BERT-adv BGP(Geological Word-base BERTadversarial training Bi-directional Long Short-Term Memory Global Pointer)is proposed to address the issues of ambiguity,diversity and nested entities for the geological entities.The model first uses the fine-tuned word granularitybased pre-training model Geo Wo BERT(Geological Word-base BERT)and combines the text features that are extracted using the Bi LSTM(Bi-directional Long Short-Term Memory),followed by an adversarial training algorithm to improve the robustness of the model and enhance its resistance to interference,the decoding finally being performed using a global association pointer algorithm.The experimental results show that the proposed model for the constructed dataset achieves high performance and is capable of mining the rich geological information.
文摘The federated self-supervised framework is a distributed machine learning method that combines federated learning and self-supervised learning, which can effectively solve the problem of traditional federated learning being difficult to process large-scale unlabeled data. The existing federated self-supervision framework has problems with low communication efficiency and high communication delay between clients and central servers. Therefore, we added edge servers to the federated self-supervision framework to reduce the pressure on the central server caused by frequent communication between both ends. A communication compression scheme using gradient quantization and sparsification was proposed to optimize the communication of the entire framework, and the algorithm of the sparse communication compression module was improved. Experiments have proved that the learning rate changes of the improved sparse communication compression module are smoother and more stable. Our communication compression scheme effectively reduced the overall communication overhead.
基金supported by the National Key Research and Development Program of China(Grant No.2022YFE0118700)the National Natural Science Foundation of China(Grant No.62375101)the Fundamental Research Funds for the Central Universities(Grant No.YCJJ20230216).
文摘Spectroscopy,especially for plasma spectroscopy,provides a powerful platform for biological and material analysis with its elemental and molecular fingerprinting capability.Artificial intelligence(AI)has the tremendous potential to build a universal quantitative framework covering all branches of plasma spectroscopy based on its unmatched representation and generalization ability.Herein,we introduce an AI-based unified method called self-supervised image-spectrum twin information fusion detection(SISTIFD)to collect twin co-occurrence signals of the plasma and to intelligently predict the physical parameters for improving the performances of all plasma spectroscopic techniques.It can fuse the spectra and plasma images in synchronization,derive the plasma parameters(total number density,plasma temperature,electron density,and other implicit factors),and provide accurate results.The experimental data demonstrate their excellent utility and capacity,with a reduction of 98%in evaluation indices(root mean square error,relative standard deviation,etc.)and an analysis frequency of 143 Hz(much faster than the mainstream detection frame rate of 1 Hz).In addition,as a completely end-to-end and self-supervised framework,the SISTIFD enables automatic detection without manual preprocessing or intervention.With these advantages,it has remarkably enhanced various plasma spectroscopic techniques with state-of-the-art performance and unsealed their possibility in industry,especially in the regions that require both capability and efficiency.This scheme brings new inspiration to the whole field of plasma spectroscopy and enables in situ analysis with a real-world scenario of high throughput,cross-interference,various analyte complexity,and diverse applications.
Funding: National Key Research and Development Program of China, Grant/Award Number: 2021YFC1910402.
Abstract: Intelligent sorting is an important prerequisite for the fully quantified consumption and harmless disposal of kitchen waste. Existing object detection methods based on ImageNet pre-trained models are an effective way to perform sorting. However, owing to the significant domain gap between natural images and kitchen waste images, an ImageNet pre-trained model struggles to capture the diverse scales and dense distribution characteristic of kitchen waste, leading to poor generalisation. In this article, the authors propose the first pre-trained model for kitchen waste sorting, KitWaSor, which combines contrastive learning (CL) and masked image modelling (MIM) through self-supervised learning (SSL). First, to address the issue of diverse scales, the authors propose a mixed masking strategy that adds an incomplete masking branch to the original random masking branch; it prevents the complete loss of small-scale objects while avoiding excessive leakage of large-scale object pixels. Second, to address the issue of dense distribution, the authors introduce semantic consistency constraints on top of the mixed masking strategy: object semantic reasoning is performed through these constraints to compensate for the lack of contextual information. To train KitWaSor, the authors construct the first million-scale kitchen waste dataset spanning seasonal and regional distributions, named KWD-Million. Extensive experiments show that KitWaSor achieves state-of-the-art (SOTA) performance on the two downstream tasks most relevant to kitchen waste sorting (image classification and object detection), demonstrating the effectiveness of the proposed KitWaSor.
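The mixed masking strategy can be pictured as two patch masks drawn with different ratios: an aggressive random mask and a lighter "incomplete" mask that leaves more patches visible so small objects are less likely to vanish entirely. The sketch below illustrates that idea with an assumed 14×14 patch grid and assumed ratios; it is not the paper's implementation.

```python
# Hedged sketch: generate the two patch masks of a mixed masking strategy.
# Grid size and masking ratios are illustrative assumptions.
import torch

def random_patch_mask(n_patches: int, ratio: float) -> torch.Tensor:
    """Return a boolean mask over patches; True means the patch is masked out."""
    n_masked = int(n_patches * ratio)
    perm = torch.randperm(n_patches)
    mask = torch.zeros(n_patches, dtype=torch.bool)
    mask[perm[:n_masked]] = True
    return mask

def mixed_masks(n_patches: int = 196, full_ratio: float = 0.75, partial_ratio: float = 0.4):
    """Masks for the random branch and the lighter 'incomplete' branch."""
    return random_patch_mask(n_patches, full_ratio), random_patch_mask(n_patches, partial_ratio)

if __name__ == "__main__":
    full, partial = mixed_masks()
    print(full.sum().item(), "patches masked in the random branch")
    print(partial.sum().item(), "patches masked in the incomplete branch")
```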
Funding: Supported by the Bill & Melinda Gates Foundation and the Minderoo Foundation.
Abstract: Predicting cross-immunity between viral strains is vital for public health surveillance and vaccine development. Traditional neural network methods such as BiLSTM can be ineffective because lab data for model training are scarce and crucial features are overshadowed when sequences are simply concatenated. This work proposes a less data-hungry model that incorporates a pre-trained gene sequence model and a mutual information inference operator. Our methodology uses gene alignment and deduplication algorithms to preprocess gene sequences, enhancing the model's capacity to discern and focus on the distinctions between input gene pairs. The model, the DNA Pretrained Cross-Immunity Protection Inference model (DPCIPI), outperforms state-of-the-art (SOTA) models in predicting hemagglutination inhibition titer from influenza viral gene sequences alone. For binary cross-immunity prediction, the improvement is 1.58% in F1, 2.34% in precision, 1.57% in recall, and 1.57% in accuracy; for multilevel cross-immunity prediction, it is 2.12% in F1, 3.50% in precision, 2.19% in recall, and 2.19% in accuracy. Our study showcases the potential of pre-trained gene models to improve predictions of antigenic variation and cross-immunity. With expanding gene data and advances in pre-trained models, this approach promises a significant impact on vaccine development and public health.
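One way to make the pairwise distinctions explicit, in the spirit of the inference operator described above, is to combine the two strain embeddings through difference and product features before classification. The stubbed embeddings, feature choices, and class name below are assumptions for illustration, not DPCIPI's actual design.

```python
# Hedged sketch: classify a pair of strain embeddings (from some pre-trained gene
# sequence encoder, stubbed here with random tensors) via pairwise features.
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    def __init__(self, emb_dim=256, n_classes=2):
        super().__init__()
        # features: [e1, e2, |e1 - e2|, e1 * e2] -> class logits
        self.mlp = nn.Sequential(
            nn.Linear(4 * emb_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, e1, e2):
        feats = torch.cat([e1, e2, (e1 - e2).abs(), e1 * e2], dim=-1)
        return self.mlp(feats)

if __name__ == "__main__":
    # Stand-ins for embeddings produced by a pre-trained gene sequence model.
    e1, e2 = torch.randn(8, 256), torch.randn(8, 256)
    logits = PairClassifier()(e1, e2)
    print(logits.shape)  # torch.Size([8, 2])
```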
Funding: Supported via funding from Prince Sattam bin Abdulaziz University (PSAU/2025/R/1446), Princess Nourah bint Abdulrahman University (PNURSP2025R300), and Prince Sultan University.
Abstract: Deep neural networks provide accurate results for most applications, but they need a large dataset to train properly, and providing such a dataset is a significant challenge in many applications. Image augmentation refers to techniques that increase the amount of image data; common operations include changes in illumination, rotation, contrast, size, and viewing angle, among others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images similar to the original images and therefore cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates new classes of textures. New texture classes can be generated rapidly using different kernels from pre-trained deep networks. After new textures are generated for each class, the number of textures is increased through image augmentation; during this process, several techniques are proposed to automatically remove incomplete and near-duplicate textures. The proposed method is around 4 to 10 times faster than some well-known generative networks, and the quality of the generated textures surpasses that of some GANs and parametric models on certain image quality metrics. It can provide a large texture dataset for training deep networks. A new large texture dataset, called BigTex, is created artificially with the proposed method; it is approximately 2 GB in size, comprises 30,000 textures of 150×150 pixels organized into 600 classes, and has been uploaded to Kaggle and Google Drive. Compared with other texture datasets, BigTex is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
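Generating a texture from the kernels of a pre-trained network can be sketched as gradient ascent on a noise image so that a chosen channel of a chosen layer responds strongly. The layer index, channel, optimizer settings, and the use of VGG16 below are illustrative assumptions; the paper's exact procedure and its automatic filtering of incomplete or duplicate textures are not reproduced here.

```python
# Hedged sketch: gradient ascent on an input image to excite one kernel of a
# pre-trained CNN, producing a texture-like 150x150 image. Settings are assumed.
import torch
from torchvision.models import vgg16, VGG16_Weights

def synthesize_texture(layer_idx=10, channel=32, steps=200, lr=0.05, size=150):
    features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:layer_idx].eval()
    for p in features.parameters():
        p.requires_grad_(False)

    img = torch.rand(1, 3, size, size, requires_grad=True)   # random noise seed
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        act = features(img)[:, channel]
        loss = -act.mean()                                    # ascend the channel response
        loss.backward()
        opt.step()
        with torch.no_grad():
            img.clamp_(0.0, 1.0)                              # keep a displayable range
    return img.detach()

if __name__ == "__main__":
    texture = synthesize_texture()
    print(texture.shape)  # torch.Size([1, 3, 150, 150])
```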
Abstract: Reconstructing imperfect faces is a challenging task. In this study, we explore a data-driven approach that combines a pre-trained MICA (MetrIC fAce) model with 3D printing to address this challenge. We propose a training strategy that uses the pre-trained MICA model and self-supervised learning techniques to improve accuracy and reduce the time needed for 3D facial structure reconstruction. Our results demonstrate high accuracy, as evaluated by the geometric loss function and various statistical measures. To showcase the effectiveness of the approach, we used 3D printing to create a model that covers facial wounds. The findings indicate that our method produces a well-fitting model and achieves comprehensive 3D facial reconstruction. This technique has the potential to aid doctors in treating patients with facial injuries.
Abstract: We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization of German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform abstractive summarization in the healthcare field. The research hypothesis was that large language models could produce high-quality abstractive summaries of German technical healthcare texts even if not specifically trained in that language. Through experiments, the research questions explore how transformer language models handle complex syntactic constructs, how models trained in English and in German differ in performance, and what impact translating the source text into English before summarization has. We evaluated four PLM approaches (GPT-3, a translation-based approach also using GPT-3, a German-language model, and a domain-specific biomedical model). The evaluation considered informativeness, using three metrics based on Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and the quality of the results, which was manually assessed on five aspects. The results show that text summarization models can be used in the German healthcare domain and that domain-independent language models achieved the best results. The study demonstrates that text summarization models can simplify the search for pre-existing German knowledge in various domains.
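For reference, a ROUGE-1-style informativeness score reduces to clipped unigram overlap between a reference and a generated summary. The toy implementation below shows the computation; full evaluations would typically use a dedicated ROUGE package with stemming and the ROUGE-2/ROUGE-L variants.

```python
# Hedged sketch: a minimal ROUGE-1 F1 score from clipped unigram overlap.
from collections import Counter

def rouge1_f1(reference: str, summary: str) -> float:
    ref_counts = Counter(reference.lower().split())
    sum_counts = Counter(summary.lower().split())
    overlap = sum((ref_counts & sum_counts).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref_counts.values())
    precision = overlap / sum(sum_counts.values())
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    ref = "the patient was treated with antibiotics for ten days"
    hyp = "the patient received antibiotics for ten days"
    print(round(rouge1_f1(ref, hyp), 3))
```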
Funding: Supported by the National Natural Science Foundation of China (Nos. 61822204 and 61521002).
Abstract: Learning discriminative representations with deep neural networks often relies on massive labeled data, which are expensive and difficult to obtain in many real scenarios. As an alternative, self-supervised learning, which leverages the input itself as supervision, is strongly preferred for its strong performance in visual representation learning. This paper introduces a contrastive self-supervised framework for learning generalizable representations from synthetic data, which can be obtained easily and with complete controllability. Specifically, we propose to optimize a contrastive learning task and a physical property prediction task simultaneously. Given a synthetic scene, the first task aims to maximize agreement between a pair of synthetic images generated by our proposed view sampling module, while the second task aims to predict three physical property maps: depth, instance contour, and surface normal maps. In addition, a feature-level domain adaptation technique with adversarial training is applied to reduce the domain difference between realistic and synthetic data. Experiments demonstrate that the proposed method achieves state-of-the-art performance on several visual recognition datasets.
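The "maximize agreement between a pair of views" objective is typically an NT-Xent contrastive loss; the sketch below computes that loss for a batch of paired projections, with the temperature and dimensions as illustrative assumptions (the physical-property prediction heads would add ordinary regression losses on top).

```python
# Hedged sketch: NT-Xent contrastive loss over two batches of paired projections.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two views of the same N synthetic scenes."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D)
    sim = z @ z.t() / temperature                              # cosine similarities
    sim.fill_diagonal_(float("-inf"))                          # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)                       # positives are the paired views

if __name__ == "__main__":
    z1, z2 = torch.randn(16, 128), torch.randn(16, 128)
    print(nt_xent(z1, z2).item())
```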