Traditional steganography conceals information by modifying cover data, but steganalysis tools easily detect such alterations, while deep learning-based steganography often involves high training costs and complex deployment. Diffusion model-based methods face security vulnerabilities, particularly due to potential information leakage during generation. We propose a fixed neural network image steganography framework based on secure diffusion models to address these challenges. Unlike conventional approaches, our method minimizes cover modifications through neural network optimization, achieving superior steganographic performance in both human visual perception and computer vision analyses. The cover images are generated in an anime style using state-of-the-art diffusion models, ensuring that the transmitted images appear natural. This study introduces fixed neural network technology that allows senders to transmit only minimal critical information alongside stego-images. Recipients can accurately reconstruct secret images from this compact data, significantly reducing transmission overhead compared to conventional deep steganography. Furthermore, our framework integrates the ElGamal cryptographic algorithm to protect critical information during transmission, enhancing overall system security and ensuring end-to-end information protection. This dual optimization of payload reduction and cryptographic reinforcement establishes a new paradigm for secure and efficient image steganography.
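The abstract names ElGamal as the cipher protecting the transmitted critical information but leaves its mechanics abstract; as a point of reference, a minimal textbook ElGamal sketch follows. The prime, group element, and integer message encoding are illustrative assumptions for the sketch, not the paper's actual parameters; real deployments use standardized groups of at least 2048 bits.

```python
import random

# Toy ElGamal over the largest 64-bit prime; illustrative size only.
P = 2**64 - 59   # prime modulus
G = 5            # assumed group element for this sketch

def keygen():
    x = random.randrange(2, P - 1)          # private key
    return x, pow(G, x, P)                  # (private, public)

def encrypt(pub, m):
    k = random.randrange(2, P - 1)          # fresh ephemeral secret per message
    return pow(G, k, P), (m * pow(pub, k, P)) % P   # ciphertext (c1, c2)

def decrypt(priv, c1, c2):
    s = pow(c1, priv, P)                    # shared secret c1^x
    return (c2 * pow(s, P - 2, P)) % P      # divide out s via Fermat inverse

priv, pub = keygen()
c1, c2 = encrypt(pub, 123456789)            # the "critical information", as an integer
assert decrypt(priv, c1, c2) == 123456789
```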
Air target intent recognition holds significant importance in aiding commanders to assess battlefield situations and secure a competitive edge in decision-making. Progress in this domain has been hindered by imbalanced battlefield data and the limited robustness of traditional recognition models. Inspired by the success of diffusion models in addressing sample imbalance in the visual domain, this paper introduces a new approach that uses the Markov Transition Field (MTF) method to visualize time series data as images. This visualization, combined with the Denoising Diffusion Probabilistic Model (DDPM), effectively augments the sample data and mitigates noise in the original dataset. Additionally, a transformer-based model tailored for time series visualization and air target intent recognition is developed. Comprehensive experimental results, encompassing comparative, ablation, and denoising validations, show that the proposed method achieves a notable 98.86% accuracy in air target intent recognition while demonstrating strong robustness and generalization. This approach represents a promising avenue for advancing air target intent recognition.
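For readers unfamiliar with the MTF encoding mentioned above, the following is a minimal sketch of the standard Markov Transition Field construction: quantile binning followed by a first-order transition matrix whose entries are broadcast over all time-step pairs. The bin count and the example series are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def markov_transition_field(x, n_bins=8):
    # Assign each sample to a quantile bin.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                     # bin index per time step
    # Estimate the first-order Markov transition matrix W[i, j].
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)
    # MTF[i, j] = probability of transitioning from bin(x_i) to bin(x_j).
    return W[q[:, None], q[None, :]]

series = np.sin(np.linspace(0, 6 * np.pi, 128)) + 0.1 * np.random.randn(128)
img = markov_transition_field(series)             # 128 x 128 "image" for the DDPM
```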
Transformer models have emerged as dominant networks for various tasks in computer vision, surpassing Convolutional Neural Networks (CNNs). Transformers model long-range dependencies by utilizing a self-attention mechanism. This study aims to provide a comprehensive survey of recent transformer-based approaches in image and video applications, as well as diffusion models. We begin by discussing existing surveys of vision transformers and comparing them to this work. Then, we review the main components of a vanilla transformer network, including the self-attention mechanism, feed-forward network, and position encoding. In the main part of this survey, we review recent transformer-based models in three categories: transformers for downstream tasks, vision transformers for generation, and vision transformers for segmentation. We also provide a comprehensive overview of recent transformer models for video tasks and diffusion models. We compare the performance of various hierarchical transformer networks on multiple tasks using popular benchmark datasets. Finally, we explore future research directions to further advance the field.
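As a concrete reference for the self-attention mechanism the survey reviews, a minimal single-head scaled dot-product attention in NumPy might look like the following; the token count and embedding dimensions are arbitrary illustrations.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project the token sequence into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)              # row-wise softmax
    return w @ V                                    # attention-weighted values

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 64))                   # 10 tokens, 64-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((64, 64)) * 0.1 for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                 # shape (10, 64)
```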
Accurately identifying building distribution from remote sensing images with complex background information is challenging. The emergence of diffusion models has prompted the innovative idea of employing the reverse denoising process to distill building distribution from these complex backgrounds. Building on this concept, we propose a novel framework, the building extraction diffusion model (BEDiff), which refines the extraction of building footprints from remote sensing images in a stepwise fashion. Our approach begins with the design of booster guidance, a mechanism that extracts structural and semantic features from remote sensing images to serve as priors, thereby providing targeted guidance for the diffusion process. Additionally, we introduce a cross-feature fusion module (CFM) that bridges the semantic gap between different types of features, allowing the attributes extracted by booster guidance to be integrated into the diffusion process more effectively. BEDiff marks the first application of diffusion models to the task of building extraction. Extensive experiments on the Beijing building dataset demonstrate the superior performance of BEDiff, affirming its effectiveness and its potential for improving the accuracy of building extraction in complex urban landscapes.
The application of generative artificial intelligence (AI) is bringing about notable changes in anime creation. This paper surveys recent advancements and applications of diffusion and language models in anime generation, focusing on their potential to enhance production efficiency through automation and personalization. We conduct an in-depth survey of cutting-edge generative AI technologies, encompassing models such as Stable Diffusion and GPT, and appraise pivotal large-scale datasets alongside quantifiable evaluation metrics. The surveyed literature indicates considerable maturity in synthesizing high-quality, aesthetically compelling anime images from textual prompts, alongside discernible progress in generating coherent narratives. Building on these advancements, research efforts have increasingly pivoted towards higher-dimensional content, such as video and three-dimensional assets, with recent studies demonstrating significant progress in this burgeoning field. Nevertheless, substantial challenges remain. Foremost are the computational costs of training and deploying these models, which are particularly pronounced in high-dimensional generation such as video synthesis. Additional persistent hurdles include achieving long-form consistency, mitigating artifacts such as flickering in video sequences, enabling fine-grained artistic control, maintaining spatial-temporal consistency across complex scenes, and addressing ethical concerns surrounding bias and the preservation of human creative autonomy. This research underscores both the transformative potential and the inherent complexities of AI-driven synergy within the creative industries. We posit that future research should pursue the synergistic fusion of diffusion and autoregressive models, the integration of multimodal inputs, and the balanced consideration of ethical implications, thereby establishing a robust foundation for the advancement of anime creation and the broader landscape of AI-driven content generation.
Recent advancements in diffusion models have significantly impacted content creation, leading to the emergence of personalized content synthesis (PCS). Using a small set of user-provided examples featuring the same subject, PCS aims to tailor that subject to specific user-defined prompts. Over the past two years, more than 150 methods have been introduced in this area. However, existing surveys primarily focus on text-to-image generation, with few providing up-to-date summaries of PCS. This paper provides a comprehensive survey of PCS, introducing the general frameworks of PCS research, which can be categorized into test-time fine-tuning (TTF) and pre-trained adaptation (PTA) approaches. We analyze the strengths, limitations, and key techniques of these methodologies. Additionally, we explore specialized tasks within the field, such as object, face, and style personalization, while highlighting their unique challenges and innovations. Despite the promising progress, we also discuss ongoing challenges, including overfitting and the trade-off between subject fidelity and text alignment. Through this detailed overview and analysis, we propose future directions to further the development of PCS.
Diffusion models are a type of generative deep learning model that can process medical images more efficiently than traditional generative models. They have been applied to several medical image computing tasks. This paper aims to help researchers understand the advancements of diffusion models in medical image computing. It begins by describing the fundamental principles, sampling methods, and architecture of diffusion models. Subsequently, it discusses the application of diffusion models in five medical image computing tasks: image generation, modality conversion, image segmentation, image denoising, and anomaly detection. Additionally, this paper conducts fine-tuning of a large model for image generation tasks and comparative experiments between diffusion models and traditional generative models across these five tasks. The evaluation of the fine-tuned large model shows its potential for clinical applications. Comparative experiments demonstrate that diffusion models have a distinct advantage in tasks related to image generation, modality conversion, and image denoising. However, they require further optimization in image segmentation and anomaly detection tasks to match the efficacy of traditional models. Our codes are publicly available at: https://github.com/hiahub/CodeForDiffusion.
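The sampling methods such a survey covers build on the standard DDPM ancestral sampler (Ho et al., 2020); a minimal sketch of that textbook procedure follows, with a placeholder noise predictor standing in for any trained network, and the linear schedule and step count as illustrative assumptions.

```python
import numpy as np

def ddpm_sample(eps_model, T=1000, shape=(64, 64)):
    # Standard DDPM ancestral sampling with a linear beta schedule.
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = np.random.randn(*shape)                   # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_model(x, t)                     # predicted noise at step t
        mean = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = np.random.randn(*shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise      # x_{t-1}
    return x

# Dummy usage: the lambda stands in for a trained U-Net noise predictor.
sample = ddpm_sample(lambda x, t: np.zeros_like(x), T=50)
```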
Denoising diffusion models have demonstrated tremendous success in modeling data distributions and synthesizing high-quality samples. In the 2D image domain, they have become the state-of-the-art and are capable of generating photo-realistic images with high controllability. More recently, researchers have begun to explore how to utilize diffusion models to generate 3D data, as doing so has more potential in real-world applications. This requires careful design choices in two key ways: identifying a suitable 3D representation and determining how to apply the diffusion process. In this survey, we provide the first comprehensive review of diffusion models for manipulating 3D content, including 3D generation, reconstruction, and 3D-aware image synthesis. We classify existing methods into three major categories: 2D space diffusion with pretrained models, 2D space diffusion without pretrained models, and 3D space diffusion. We also summarize popular datasets used for 3D generation with diffusion models. Along with this survey, we maintain a repository https://github.com/cwchenwang/awesome-3d-diffusion to track the latest relevant papers and codebases. Finally, we pose current challenges for diffusion models for 3D generation, and suggest future research directions.
AlphaPanda (AlphaFold2 [1]-inspired protein-specific antibody design in a diffusional manner) is an advanced algorithm for designing the complementarity-determining regions (CDRs) of antibodies targeting a specific epitope, combining transformer [2] models, 3DCNN [3], and diffusion [4] generative models.
Obtaining unsteady hydrodynamic performance data is of great significance for seaplane design. Common methods for obtaining such data include tank tests and Computational Fluid Dynamics (CFD) numerical simulation, which are costly and time-consuming. It is therefore desirable to obtain unsteady hydrodynamic performance in a low-cost, high-precision manner. Because unsteady hydrodynamic performance exhibits strong nonlinearity, complex data distributions, and temporal characteristics, predicting it is challenging. This paper proposes a Temporal Convolutional Diffusion Model (TCDM) for predicting the unsteady hydrodynamic performance of seaplanes from design parameters. Under the framework of a classifier-free guided diffusion model, TCDM learns the distribution of unsteady hydrodynamic performance data with a denoising module based on a temporal convolutional network, capturing the temporal features of the data. Using CFD simulation data, the proposed method is compared with alternative methods to demonstrate its accuracy and generalization. This paper provides a method for rapid and accurate prediction of unsteady hydrodynamic performance data, which is expected to shorten the design cycle of seaplanes.
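TCDM is stated to operate under the classifier-free guidance framework; the core of that framework is the blended noise prediction sketched below, where `cond` stands in for the seaplane design parameters and the guidance weight `w` is an assumed value, not the paper's setting.

```python
import numpy as np

def cfg_eps(eps_model, x_t, t, cond, w=2.0):
    # Classifier-free guidance: blend conditional and unconditional noise
    # predictions. The model is trained with random condition dropout, so
    # passing cond=None yields the unconditional estimate.
    eps_c = eps_model(x_t, t, cond)
    eps_u = eps_model(x_t, t, None)
    return (1 + w) * eps_c - w * eps_u

# Dummy usage with a placeholder predictor standing in for the trained denoiser.
eps = cfg_eps(lambda x, t, c: np.zeros_like(x),
              np.random.randn(16, 8), t=10, cond=[1.2, 0.3])
```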
The unprecedented scale of large models, such as large language models (LLMs) and text-to-image diffusion models, has raised critical concerns about the unauthorized use of copyrighted data during model training. These concerns have spurred a growing demand for dataset copyright auditing techniques, which aim to detect and verify potential infringements in the training data of commercial AI systems. This paper presents a survey of existing auditing solutions, categorizing them across key dimensions: data modality, model training stage, data overlap scenarios, and model access levels. We highlight major trends, including the prevalence of black-box auditing methods and the emphasis on fine-tuning rather than pre-training. Through an in-depth analysis of 12 representative works, we extract four key observations that reveal the limitations of current methods. Furthermore, we identify three open challenges and propose future directions for robust, multimodal, and scalable auditing solutions. Our findings underscore the urgent need to establish standardized benchmarks and develop auditing frameworks that are resilient to low watermark densities and applicable in diverse deployment settings.
Multi-instance image generation remains a challenging task in computer vision. While existing diffusion models demonstrate impressive fidelity in image generation, they often struggle to precisely control each object's shape, pose, and size. Methods such as layout-to-image and mask-to-image provide spatial guidance but frequently suffer from object shape distortion, overlaps, and poor consistency, particularly in complex scenes with multiple objects. To address these issues, we introduce PolyDiffusion, a contour-based diffusion framework that encodes each object's contour as a boundary-coordinate sequence, decoupling object shapes from positions. This approach allows better control over object geometry and spatial positioning, which is critical for high-quality multi-instance generation. We formulate training as a multi-objective optimization problem balancing three key objectives: a denoising diffusion loss to maintain overall image fidelity, a cross-attention contour alignment loss to ensure precise shape adherence, and a reward-guided denoising objective that minimizes the Fréchet distance to real images. In addition, the Object Space-Aware Attention module fuses contour tokens with visual features, while a prior-guided fusion mechanism exploits inter-object spatial relationships and class semantics to enhance consistency across multiple objects. Experimental results on benchmark datasets such as COCO-Stuff and VOC-2012 demonstrate that PolyDiffusion significantly outperforms existing layout-to-image and mask-to-image methods, achieving notable improvements in both image quality and instance-level segmentation accuracy. The implementation of PolyDiffusion is available at https://github.com/YYYYYJS/PolyDiffusion (accessed on 06 August 2025).
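Multi-objective training losses of this kind are typically combined as a weighted sum; the following schematic shows one plausible combination, with the weights, the alignment formulation, and the scalar reward all illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def multi_objective_loss(eps_pred, eps_true, attn_map, contour_mask, reward,
                         weights=(1.0, 0.5, 0.1)):
    # Schematic weighted sum of the three stated objectives.
    l_ddpm = np.mean((eps_pred - eps_true) ** 2)      # denoising fidelity
    l_align = np.mean((attn_map - contour_mask) ** 2) # attention vs. contour agreement
    l_reward = -reward                                # reward-guided term (higher is better)
    w1, w2, w3 = weights
    return w1 * l_ddpm + w2 * l_align + w3 * l_reward

# Dummy usage with random tensors standing in for model outputs.
rng = np.random.default_rng(0)
loss = multi_objective_loss(rng.standard_normal((4, 32, 32)),
                            rng.standard_normal((4, 32, 32)),
                            rng.random((4, 32, 32)),
                            rng.random((4, 32, 32)) > 0.5,
                            reward=0.8)
```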
Diffusion models have recently emerged as powerful generative models, producing high-fidelity samples across domains. Despite this, they face two key challenges: the time-consuming iterative generation process, and controlling and steering that process. Existing surveys provide broad overviews of diffusion model advancements but lack comprehensive coverage centered specifically on techniques for controllable generation. This survey addresses that gap by providing a comprehensive and coherent review of controllable generation in diffusion models. We provide a detailed taxonomy defining controlled generation for diffusion models, categorized by formulation, methodologies, and evaluation metrics. By enumerating the range of methods researchers have developed for enhanced control, we aim to establish controllable diffusion generation as a distinct subfield warranting dedicated focus. With this survey, we contextualize recent results, provide a dedicated treatment of controllable diffusion model generation, and outline limitations and future directions. To demonstrate applicability, we highlight controllable diffusion techniques for major computer vision applications. By consolidating methods and applications for controllable diffusion models, we hope to catalyze further innovation in reliable and scalable controllable generation.
Creating realistic materials is essential to the construction of immersive virtual environments. While existing techniques for material capture and conditional generation rely on flash-lit photos, they often produce artifacts when the illumination mismatches the training data. In this study, we introduce DiffMat, a novel diffusion model that integrates the CLIP image encoder and a multi-layer, cross-attention denoising backbone to generate latent materials from images under various illuminations. Using a pre-trained StyleGAN-based material generator, our method converts these latent materials into high-resolution SVBRDF textures that fit seamlessly into the standard physically based rendering pipeline, reducing the requirements for vast computational resources and expansive datasets. DiffMat surpasses existing generative methods in material quality and variety, and adapts to a broader spectrum of lighting conditions in reference images.
Recently, diffusion models have emerged as a promising paradigm for molecular design and optimization. However, most diffusion-based molecular generative models focus on modeling 2D graphs or 3D geometries, with limited research on molecular sequence diffusion models. International Union of Pure and Applied Chemistry (IUPAC) names are more akin to chemical natural language than the simplified molecular input line entry system (SMILES) for organic compounds. In this work, we apply an IUPAC-guided conditional diffusion model to facilitate molecular editing from chemical natural language to chemical language (SMILES) and explore whether the pre-trained generative performance of diffusion models can be transferred to chemical natural language. We propose DiffIUPAC, a controllable molecular editing diffusion model that converts IUPAC names to SMILES strings. Evaluation results demonstrate that our model outperforms existing methods and successfully captures the semantic rules of both chemical languages. Chemical space and scaffold analyses show that the model can generate similar compounds with diverse scaffolds within the specified constraints. Additionally, to illustrate the model's applicability in drug design, we conducted case studies in functional group editing, analogue design, and linker design.
The internal structures of cells, the basic units of life, are a major wonder of the microscopic world. Cellular images provide an intriguing window to help explore and understand the composition and function of these structures. Scientific imagery combined with artistic expression can further expand the potential of imaging in educational dissemination and interdisciplinary applications.
Deep learning has achieved great progress in image recognition, segmentation, semantic recognition, and game theory. In this study, a recent deep learning model, the conditional diffusion model, was adopted as a surrogate for numerical simulation to predict heat transfer during the casting process. The conditional diffusion model was established and trained with the geometry shape, the initial temperature field, and the temperature field at time step t_i as the condition, and random noise sampled from a standard normal distribution as the input; the output was the temperature field at t_(i+1). The temperature field at t_(i+1) can therefore be predicted once the field at t_i is known, and the continuous temperature fields of all time steps can be predicted from the initial temperature field of an arbitrary 2D geometry. A training set of 302 2D shapes and their simulated temperature fields at different time steps was established. The accuracy of the predicted temperature field reaches 97.7% for a single time step and 69.1% for continuous time steps, with the main error located in the sand mold. The effects of geometry shape and initial temperature field on prediction accuracy were investigated; the former achieves better results than the latter because casting, mold, and chill can be identified by different colors in the input images. The diffusion model has thus proved its potential as a surrogate model for numerical simulation of the casting process.
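The step-by-step prediction described above amounts to an autoregressive rollout of the conditional sampler; a schematic of that loop is sketched below, with `sample_step` a placeholder for the trained conditional diffusion sampler and the field shapes assumed for illustration.

```python
import numpy as np

def rollout(sample_step, geometry, T0, n_steps):
    # Autoregressive surrogate rollout: each call conditions the sampler
    # on (geometry, initial field, current field) and returns the field
    # at the next time step, mirroring the conditioning described above.
    fields = [T0]
    for _ in range(n_steps):
        noise = np.random.randn(*T0.shape)        # fresh x_T ~ N(0, I)
        fields.append(sample_step(noise, geometry, T0, fields[-1]))
    return fields

geometry = np.zeros((64, 64))                     # e.g. color-coded casting/mold/chill map
T0 = np.full((64, 64), 25.0)                      # assumed initial temperature field
history = rollout(lambda z, g, t0, t: t, geometry, T0, n_steps=10)  # dummy sampler
```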
Multi-scale problems in Computational Fluid Dynamics (CFD) often require numerous simulations across various design parameters. Using a fixed mesh for all cases may fail to capture critical physical features. Moving mesh adaptation allocates resources optimally to obtain high-resolution flow-fields on low-resolution meshes. However, most existing methods require manual experience, and their reliance on a posteriori flow information poses great challenges for practical applications. In addition, generating adaptive meshes directly from design parameters is difficult due to the highly nonlinear relationships involved. The diffusion model, currently the most popular model for generative tasks, integrates the diffusion principle into deep learning to capture such complex nonlinear correlations. A dual diffusion framework, Para2Mesh, is proposed to predict adaptive meshes from design parameters by exploiting the robust data-distribution learning ability of the diffusion model. Through iterative denoising, the proposed dual networks accurately reconstruct the flow-field to provide flow features as supervisory information, and then achieve rapid and reliable mesh movement. Experiments in CFD scenarios demonstrate that Para2Mesh predicts similar meshes directly from design parameters with much higher efficiency than the traditional method. It could become a real-time adaptation tool to assist engineering design and optimization, providing a promising solution for high-resolution flow-field analysis.
Imputation of missing data has long been an important topic and an essential application for intelligent transportation systems (ITS) in the real world. As a state-of-the-art generative model, the diffusion model has proven highly successful in image generation, speech generation, time series modelling, etc., and now opens a new avenue for traffic data imputation. In this paper, we propose a conditional diffusion model, called the implicit-explicit diffusion model, for traffic data imputation. This model exploits both the implicit and the explicit features of the data simultaneously. More specifically, we design two types of feature extraction modules: one captures the implicit dependencies hidden in the raw data at multiple time scales, and the other obtains the long-term temporal dependencies of the time series. This approach not only inherits the advantages of the diffusion model for estimating missing data, but also accounts for the multi-scale correlation inherent in traffic data. To illustrate the performance of the model, extensive experiments are conducted on three real-world time series datasets using different missing rates. The experimental results demonstrate that the model improves imputation accuracy and generalization capability.
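Conditional diffusion imputation of this kind usually conditions the sampler on the observed entries and keeps them fixed in the output; a minimal sketch of that masking convention follows, where the mask encoding and the sampler interface are assumptions, not the paper's definitions.

```python
import numpy as np

def impute(sample_fn, x_obs, mask):
    # mask: 1 where observed, 0 where missing (assumed convention). The
    # conditional sampler generates a full series given the observed entries;
    # observed positions are then copied back so only missing values come
    # from the model. `sample_fn` is a placeholder for the trained sampler.
    x_gen = sample_fn(x_obs * mask, mask)
    return mask * x_obs + (1 - mask) * x_gen

x = np.random.randn(288)                          # e.g. one day of 5-min traffic readings
m = (np.random.rand(288) > 0.2).astype(float)     # roughly 20% missing at random
filled = impute(lambda xo, mk: np.zeros_like(xo), x, m)  # dummy sampler
```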
Objective This study aimed to explore a novel method that integrates segmentation-guided classification and diffusion model augmentation to realize automatic classification of tibial plateau fractures (TPFs). Methods YOLOv8n-cls was used to construct a baseline model on data from 3781 patients treated at the Orthopedic Trauma Center of Wuhan Union Hospital. A segmentation-guided classification approach was then proposed, and a diffusion model was employed for data augmentation to enhance the dataset. Results The method integrating segmentation-guided classification and diffusion model augmentation significantly improved the accuracy and robustness of fracture classification. The average classification accuracy for TPFs rose from 0.844 to 0.896. The overall performance of the dual-stream model was also significantly enhanced after many rounds of training, with both the macro-area under the curve (AUC) and the micro-AUC increasing from 0.94 to 0.97. With diffusion model augmentation and segmentation map integration, the model achieved an accuracy of 0.880 for Schatzker Ⅰ, 0.898 for Schatzker Ⅱ and Ⅲ, 0.913 for Schatzker Ⅳ, 0.887 for Schatzker Ⅴ and Ⅵ, and 0.923 for intercondylar ridge fractures. Conclusion The dual-stream attention-based classification network, validated in extensive experiments, shows great potential for classifying TPFs. The method facilitates automatic TPF assessment and may assist surgeons in rapidly formulating surgical plans.