Journal Articles
163 articles found
1. Fixed Neural Network Image Steganography Based on Secure Diffusion Models
Authors: Yixin Tang, Minqing Zhang, Peizheng Lai, Ya Yue, Fuqiang Di. Computers, Materials & Continua, 2025, No. 9, pp. 5733-5750.
Traditional steganography conceals information by modifying cover data, but steganalysis tools easily detect such alterations, while deep learning-based steganography often involves high training costs and complex deployment. Diffusion model-based methods face security vulnerabilities, particularly potential information leakage during generation. We propose a fixed neural network image steganography framework based on secure diffusion models to address these challenges. Unlike conventional approaches, our method minimizes cover modifications through neural network optimization, achieving superior steganographic performance in both human visual perception and computer vision analyses. The cover images are generated in an anime style using state-of-the-art diffusion models, ensuring the transmitted images appear more natural. This study introduces fixed neural network technology that allows senders to transmit only minimal critical information alongside stego-images. Recipients can accurately reconstruct secret images from this compact data, significantly reducing transmission overhead compared to conventional deep steganography. Furthermore, our framework integrates the ElGamal cryptographic algorithm to protect critical information during transmission, enhancing overall system security and ensuring end-to-end protection. This dual optimization of payload reduction and cryptographic reinforcement establishes a new paradigm for secure and efficient image steganography.
Keywords: image steganography; fixed neural network; secure diffusion models; ElGamal
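The ElGamal scheme this framework integrates to protect the critical information is a standard public-key construction. A minimal textbook sketch is shown below; the toy prime and generator are illustrative assumptions, not parameters from the paper (real deployments use large safe primes or elliptic-curve groups):

```python
import random

# Textbook ElGamal over a small prime-order group (demonstration only).
P = 2087  # small prime modulus -- illustrative, far too small for real use
G = 5     # group generator

def keygen():
    x = random.randrange(2, P - 1)       # private key
    return x, pow(G, x, P)               # (private, public = g^x mod p)

def encrypt(pub, m):
    k = random.randrange(2, P - 1)       # fresh ephemeral randomness per message
    return pow(G, k, P), (m * pow(pub, k, P)) % P   # ciphertext (c1, c2)

def decrypt(priv, c1, c2):
    s = pow(c1, priv, P)                 # shared secret c1^x = g^{kx}
    return (c2 * pow(s, P - 2, P)) % P   # multiply by s^{-1} (Fermat inverse)

priv, pub = keygen()
c1, c2 = encrypt(pub, 42)
assert decrypt(priv, c1, c2) == 42       # round-trip recovers the message
```

Decryption works because c2 * (c1^x)^(-1) = m * g^(xk) * g^(-xk) = m (mod p).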
2. Air target intent recognition method combining graphing time series and diffusion models
Authors: Chenghai Li, Ke Wang, Yafei Song, Peng Wang, Lemin Li. Chinese Journal of Aeronautics, 2025, No. 1, pp. 507-519.
Air target intent recognition holds significant importance in aiding commanders to assess battlefield situations and secure a competitive edge in decision-making. Progress in this domain has been hindered by imbalanced battlefield data and the limited robustness of traditional recognition models. Inspired by the success of diffusion models in addressing sample imbalance in the visual domain, this paper introduces a new approach that uses the Markov Transfer Field (MTF) method to visualize time series data. This visualization, combined with the Denoising Diffusion Probabilistic Model (DDPM), effectively augments the sample data and mitigates noise in the original dataset. Additionally, a transformer-based model tailored for time series visualization and air target intent recognition is developed. Comprehensive experimental results, encompassing comparative, ablation, and denoising validations, show that the proposed method achieves 98.86% accuracy in air target intent recognition while demonstrating strong robustness and generalization. This approach represents a promising avenue for advancing air target intent recognition.
Keywords: intent recognition; Markov Transfer Field; denoising diffusion probabilistic model; transformer neural network
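The DDPM used here for sample augmentation rests on the standard closed-form forward noising process, x_t = sqrt(ᾱ_t) x_0 + sqrt(1 − ᾱ_t) ε. A generic sketch follows; the linear schedule and array shapes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Standard DDPM forward (noising) process in closed form.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # illustrative linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)    # cumulative products abar_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in a single step."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps                       # eps is the denoiser's regression target

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))        # stand-in for an MTF image
xt, eps = q_sample(x0, T - 1, rng)
# By the final step almost all signal is destroyed: abar_{T-1} is near zero.
assert alphas_bar[-1] < 1e-4
```

The augmentation model is trained to predict `eps` from `xt`; new samples are then drawn by reversing this process step by step.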
3. A Comprehensive Survey of Recent Transformers in Image, Video and Diffusion Models (cited: 1)
Authors: Dinh Phu Cuong Le, Dong Wang, Viet-Tuan Le. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 37-60.
Transformer models have emerged as dominant networks for various computer vision tasks, displacing Convolutional Neural Networks (CNNs). Transformers can model long-range dependencies by utilizing a self-attention mechanism. This study provides a comprehensive survey of recent transformer-based approaches in image and video applications, as well as diffusion models. We begin by discussing existing surveys of vision transformers and comparing them to this work. We then review the main components of a vanilla transformer network, including the self-attention mechanism, feed-forward network, and position encoding. In the main part of this survey, we review recent transformer-based models in three categories: transformers for downstream tasks, vision transformers for generation, and vision transformers for segmentation. We also provide a comprehensive overview of recent transformer models for video tasks and diffusion models. We compare the performance of various hierarchical transformer networks on multiple tasks using popular benchmark datasets. Finally, we explore future research directions for the field.
Keywords: transformer; vision transformer; self-attention; hierarchical transformer; diffusion models
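The self-attention mechanism at the core of the networks this survey reviews is Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A generic single-head sketch (shapes and weight matrices are illustrative, not tied to any model in the survey):

```python
import numpy as np

# Scaled dot-product self-attention for one head, no masking.
def self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # numerically stable softmax
    return weights @ v                               # attention-weighted values

rng = np.random.default_rng(0)
n_tokens, d = 4, 8
x = rng.standard_normal((n_tokens, d))               # token embeddings
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
assert out.shape == (n_tokens, d)
```

Because every token attends to every other token, the dependency range is global, unlike the local receptive fields of CNNs.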
4. BEDiff: denoising diffusion probabilistic models for building extraction
Authors: LEI Yanjing, WANG Yuan, CHAN Sixian, HU Jie, ZHOU Xiaolong, ZHANG Hongkai. Optoelectronics Letters, 2025, No. 5, pp. 298-305.
Accurately identifying building distribution from remote sensing images with complex background information is challenging. The emergence of diffusion models has prompted the innovative idea of employing the reverse denoising process to distill building distribution from these complex backgrounds. Building on this concept, we propose a novel framework, the building extraction diffusion model (BEDiff), which refines the extraction of building footprints from remote sensing images in a stepwise fashion. Our approach begins with the design of booster guidance, a mechanism that extracts structural and semantic features from remote sensing images to serve as priors, thereby providing targeted guidance for the diffusion process. Additionally, we introduce a cross-feature fusion module (CFM) that bridges the semantic gap between different types of features, integrating the attributes extracted by booster guidance into the diffusion process more effectively. BEDiff marks the first application of diffusion models to the task of building extraction. Extensive experiments on the Beijing building dataset demonstrate the superior performance of BEDiff, affirming its effectiveness and potential for enhancing the accuracy of building extraction in complex urban landscapes.
Keywords: booster guidance; building extraction; reverse denoising process; diffusion model; BEDiff; remote sensing images
5. Anime Generation through Diffusion and Language Models: A Comprehensive Survey of Techniques and Trends
Authors: Yujie Wu, Xing Deng, Haijian Shao, Ke Cheng, Ming Zhang, Yingtao Jiang, Fei Wang. Computer Modeling in Engineering & Sciences, 2025, No. 9, pp. 2709-2778.
The application of generative artificial intelligence (AI) is bringing about notable changes in anime creation. This paper surveys recent advancements and applications of diffusion and language models in anime generation, focusing on their demonstrated potential to enhance production efficiency through automation and personalization. Despite these benefits, it is crucial to acknowledge the substantial initial computational investments required for training and deploying these models. We conduct an in-depth survey of cutting-edge generative AI technologies, encompassing models such as Stable Diffusion and GPT, and appraise pivotal large-scale datasets alongside quantifiable evaluation metrics. The surveyed literature indicates considerable maturity in the capacity of AI models to synthesize high-quality, aesthetically compelling anime images from textual prompts, alongside discernible progress in the generation of coherent narratives. However, achieving long-form consistency, mitigating artifacts such as flickering in video sequences, and enabling fine-grained artistic control remain critical ongoing challenges. Building on these advancements, research has increasingly pivoted towards the synthesis of higher-dimensional content, such as video and three-dimensional assets, with recent studies demonstrating significant progress in this burgeoning field. The foremost remaining challenges are the substantial computational demands of training and deploying these models, which are particularly pronounced in high-dimensional generation such as video synthesis, along with maintaining spatio-temporal consistency across complex scenes and addressing ethical concerns surrounding bias and the preservation of human creative autonomy. This research underscores the transformative potential and inherent complexities of AI-driven synergy within the creative industries. We posit that future research should be dedicated to the synergistic fusion of diffusion and autoregressive models, the integration of multimodal inputs, and the balanced consideration of ethical implications, thereby establishing a robust foundation for the advancement of anime creation and the broader landscape of AI-driven content generation.
Keywords: diffusion models; language models; anime generation; image synthesis; video generation; Stable Diffusion; AIGC
6. A Survey on Personalized Content Synthesis with Diffusion Models
Authors: Xulu Zhang, Xiaoyong Wei, Wentao Hu, Jinlin Wu, Jiaxin Wu, Wengyu Zhang, Zhaoxiang Zhang, Zhen Lei, Qing Li. Machine Intelligence Research, 2025, No. 5, pp. 817-848.
Recent advancements in diffusion models have significantly impacted content creation, leading to the emergence of personalized content synthesis (PCS). Using a small set of user-provided examples featuring the same subject, PCS aims to tailor this subject to user-defined prompts. Over the past two years, more than 150 methods have been introduced in this area. However, existing surveys primarily focus on text-to-image generation, and few provide up-to-date summaries of PCS. This paper provides a comprehensive survey of PCS, introducing the general frameworks of PCS research, which can be categorized into test-time fine-tuning (TTF) and pre-trained adaptation (PTA) approaches. We analyze the strengths, limitations, and key techniques of these methodologies. Additionally, we explore specialized tasks within the field, such as object, face, and style personalization, while highlighting their unique challenges and innovations. Despite this promising progress, we also discuss ongoing challenges, including overfitting and the trade-off between subject fidelity and text alignment. Through this detailed overview and analysis, we propose future directions to further the development of PCS.
Keywords: generative models; image synthesis; diffusion models; personalized content synthesis; subject customization
7. Diffusion Models for Medical Image Computing: A Survey
Authors: Yaqing Shi, Abudukelimu Abulizi, Hao Wang, Ke Feng, Nihemaiti Abudukelimu, Youli Su, Halidanmu Abudukelimu. Tsinghua Science and Technology, 2025, No. 1, pp. 357-383.
Diffusion models are a type of generative deep learning model that can process medical images more efficiently than traditional generative models, and they have been applied to several medical image computing tasks. This paper aims to help researchers understand the advancements of diffusion models in medical image computing. It begins by describing the fundamental principles, sampling methods, and architecture of diffusion models. It then discusses the application of diffusion models in five medical image computing tasks: image generation, modality conversion, image segmentation, image denoising, and anomaly detection. Additionally, this paper fine-tunes a large model for image generation and conducts comparative experiments between diffusion models and traditional generative models across these five tasks. The evaluation of the fine-tuned large model shows its potential for clinical applications. The comparative experiments demonstrate that diffusion models have a distinct advantage in image generation, modality conversion, and image denoising, but require further optimization in image segmentation and anomaly detection to match the efficacy of traditional models. Our code is publicly available at: https://github.com/hiahub/CodeForDiffusion.
Keywords: diffusion models; generative models; medical image; large model
8. Diffusion models for 3D generation: A survey
Authors: Chen Wang, Hao-Yang Peng, Ying-Tian Liu, Jiatao Gu, Shi-Min Hu. Computational Visual Media, 2025, No. 1, pp. 1-28.
Denoising diffusion models have demonstrated tremendous success in modeling data distributions and synthesizing high-quality samples. In the 2D image domain, they have become the state of the art, capable of generating photo-realistic images with high controllability. More recently, researchers have begun to explore how diffusion models can generate 3D data, which has greater potential in real-world applications. This requires careful design choices in two key respects: identifying a suitable 3D representation and determining how to apply the diffusion process. In this survey, we provide the first comprehensive review of diffusion models for manipulating 3D content, including 3D generation, reconstruction, and 3D-aware image synthesis. We classify existing methods into three major categories: 2D space diffusion with pretrained models, 2D space diffusion without pretrained models, and 3D space diffusion. We also summarize popular datasets used for 3D generation with diffusion models. Along with this survey, we maintain a repository, https://github.com/cwchenwang/awesome-3d-diffusion, to track the latest relevant papers and codebases. Finally, we pose current challenges for diffusion models for 3D generation and suggest future research directions.
Keywords: diffusion models; 3D generation; generative models; AIG
9. Combining transformer and 3DCNN models to achieve co-design of structures and sequences of antibodies in a diffusional manner
Authors: Yue Hu, Feng Tao, Jiajie Xu, Wen-Jun Lan, Jing Zhang, Wei Lan. Journal of Pharmaceutical Analysis, 2025, No. 6, pp. 1406-1408.
AlphaPanda (AlphaFold2 [1]-inspired protein-specific antibody design in a diffusional manner) is an advanced algorithm for designing the complementarity-determining regions (CDRs) of an antibody targeting a specific epitope, combining transformer [2] models, a 3DCNN [3], and diffusion [4] generative models.
Keywords: advanced algorithm; diffusion generative models; 3DCNN; epitope targeting; antibody design; complementarity-determining regions (CDRs); transformer models
10. Predicting unsteady hydrodynamic performance of seaplanes based on diffusion models
Authors: Xinlong YU, Miao PENG, Mingzhen WANG, Junlong ZHANG, Jian YU, Hongqiang LYU, Xuejun LIU. Chinese Journal of Aeronautics, 2025, No. 10, pp. 327-346.
Obtaining unsteady hydrodynamic performance is of great significance for seaplane design. Common methods for obtaining unsteady hydrodynamic performance data include tank tests and Computational Fluid Dynamics (CFD) numerical simulation, both of which are costly and time-consuming. It is therefore necessary to obtain unsteady hydrodynamic performance in a low-cost, high-precision manner. Due to the strong nonlinearity, complex data distribution, and temporal characteristics of unsteady hydrodynamic performance, its prediction is challenging. This paper proposes a Temporal Convolutional Diffusion Model (TCDM) for predicting the unsteady hydrodynamic performance of seaplanes from design parameters. Under the framework of a classifier-free guided diffusion model, TCDM learns the distribution patterns of unsteady hydrodynamic performance data with a denoising module based on a temporal convolutional network, capturing the temporal features of the data. Using CFD simulation data, the proposed method is compared with alternative methods to demonstrate its accuracy and generalization. This paper provides a method for the rapid and accurate prediction of unsteady hydrodynamic performance data, which is expected to shorten the design cycle of seaplanes.
Keywords: seaplanes; unsteady hydrodynamic performance; classifier-free guided diffusion model; temporal convolutional network; temporal data
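Classifier-free guidance, the framework TCDM builds on, combines conditional and unconditional noise predictions at sampling time: ε_guided = ε_uncond + w·(ε_cond − ε_uncond). A generic sketch of this combination step (the guidance weight and array shapes are illustrative assumptions, not the paper's settings):

```python
import numpy as np

# Classifier-free guidance: blend the conditional and unconditional
# noise predictions produced by the same denoising network.
def cfg_combine(eps_uncond, eps_cond, w):
    return eps_uncond + w * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
eps_u = rng.standard_normal(16)   # stand-in: prediction with condition dropped
eps_c = rng.standard_normal(16)   # stand-in: prediction given design parameters
guided = cfg_combine(eps_u, eps_c, w=3.0)   # w > 1 strengthens conditioning

# Sanity check: w = 1 recovers the purely conditional prediction.
assert np.allclose(cfg_combine(eps_u, eps_c, 1.0), eps_c)
```

During training the condition is randomly dropped so one network learns both predictions; at sampling time the weight w trades sample diversity for fidelity to the condition.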
11. Dataset Copyright Auditing for Large Models: Fundamentals, Open Problems, and Future Directions
Authors: DU Linkang, SU Zhou, YU Xinyi. ZTE Communications, 2025, No. 3, pp. 38-47.
The unprecedented scale of large models, such as large language models (LLMs) and text-to-image diffusion models, has raised critical concerns about the unauthorized use of copyrighted data during model training. These concerns have spurred a growing demand for dataset copyright auditing techniques, which aim to detect and verify potential infringements in the training data of commercial AI systems. This paper presents a survey of existing auditing solutions, categorizing them across key dimensions: data modality, model training stage, data overlap scenarios, and model access levels. We highlight major trends, including the prevalence of black-box auditing methods and the emphasis on fine-tuning rather than pre-training. Through an in-depth analysis of 12 representative works, we extract four key observations that reveal the limitations of current methods. Furthermore, we identify three open challenges and propose future directions for robust, multimodal, and scalable auditing solutions. Our findings underscore the urgent need to establish standardized benchmarks and to develop auditing frameworks that are resilient to low watermark densities and applicable in diverse deployment settings.
Keywords: dataset copyright auditing; large language models; diffusion models; multimodal auditing; membership inference
12. PolyDiffusion: A Multi-Objective Optimized Contour-to-Image Diffusion Framework
Authors: Yuzhen Liu, Jiasheng Yin, Yixuan Chen, Jin Wang, Xiaolan Zhou, Xiaoliang Wang. Computers, Materials & Continua, 2025, No. 11, pp. 3965-3980.
Multi-instance image generation remains a challenging task in computer vision. While existing diffusion models demonstrate impressive fidelity in image generation, they often struggle to precisely control each object's shape, pose, and size. Layout-to-image and mask-to-image methods provide spatial guidance but frequently suffer from object shape distortion, overlaps, and poor consistency, particularly in complex scenes with multiple objects. To address these issues, we introduce PolyDiffusion, a contour-based diffusion framework that encodes each object's contour as a boundary-coordinate sequence, decoupling object shapes and positions. This approach allows better control over object geometry and spatial positioning, which is critical for high-quality multi-instance generation. We formulate training as a multi-objective optimization problem balancing three key objectives: a denoising diffusion loss to maintain overall image fidelity, a cross-attention contour alignment loss to ensure precise shape adherence, and a reward-guided denoising objective that minimizes the Fréchet distance to real images. In addition, the Object Space-Aware Attention module fuses contour tokens with visual features, while a prior-guided fusion mechanism exploits inter-object spatial relationships and class semantics to enhance consistency across multiple objects. Experimental results on benchmark datasets such as COCO-Stuff and VOC-2012 demonstrate that PolyDiffusion significantly outperforms existing layout-to-image and mask-to-image methods, achieving notable improvements in both image quality and instance-level segmentation accuracy. The implementation of PolyDiffusion is available at https://github.com/YYYYYJS/PolyDiffusion (accessed on 06 August 2025).
Keywords: diffusion models; multi-object generation; multi-objective optimization; contour-to-image
13. A Survey of Multimodal Controllable Diffusion Models
Authors: 江锐, 郑光聪, 李藤, 杨天瑞, 王井东, 李玺. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2024, No. 3, pp. 509-541.
Diffusion models have recently emerged as powerful generative models, producing high-fidelity samples across domains. Despite this, they face two key challenges: improving the time-consuming iterative generation process, and controlling and steering that process. Existing surveys provide broad overviews of diffusion model advancements but lack comprehensive coverage specifically centered on techniques for controllable generation. This survey addresses this gap with a comprehensive and coherent review of controllable generation in diffusion models. We provide a detailed taxonomy defining controlled generation for diffusion models, categorized by formulation, methodology, and evaluation metrics. By enumerating the range of methods researchers have developed for enhanced control, we aim to establish controllable diffusion generation as a distinct subfield warranting dedicated focus. With this survey, we contextualize recent results, provide a dedicated treatment of controllable diffusion model generation, and outline limitations and future directions. To demonstrate applicability, we highlight controllable diffusion techniques for major computer vision tasks. By consolidating methods and applications for controllable diffusion models, we hope to catalyze further innovation in reliable and scalable controllable generation.
Keywords: diffusion model; controllable generation; application; personalization
14. DiffMat: Latent diffusion models for image-guided material generation
Authors: Liang Yuan, Dingkun Yan, Suguru Saito, Issei Fujishiro. Visual Informatics (EI), 2024, No. 1, pp. 6-14.
Creating realistic materials is essential in the construction of immersive virtual environments. While existing techniques for material capture and conditional generation rely on flash-lit photos, they often produce artifacts when the illumination mismatches the training data. In this study, we introduce DiffMat, a novel diffusion model that integrates the CLIP image encoder and a multi-layer cross-attention denoising backbone to generate latent materials from images under various illuminations. Using a pre-trained StyleGAN-based material generator, our method converts these latent materials into high-resolution SVBRDF textures, enabling a seamless fit into the standard physically based rendering pipeline and reducing the requirements for vast computational resources and expansive datasets. DiffMat surpasses existing generative methods in material quality and variety, and adapts to a broader spectrum of lighting conditions in reference images.
Keywords: SVBRDF; diffusion model; generative model; appearance modeling
15. Diffusion-based generative drug-like molecular editing with chemical natural language (cited: 1)
Authors: Jianmin Wang, Peng Zhou, Zixu Wang, Wei Long, Yangyang Chen, Kyoung Tai No, Dongsheng Ouyang, Jiashun Mao, Xiangxiang Zeng. Journal of Pharmaceutical Analysis, 2025, No. 6, pp. 1215-1225.
Recently, diffusion models have emerged as a promising paradigm for molecular design and optimization. However, most diffusion-based molecular generative models focus on modeling 2D graphs or 3D geometries, with limited research on molecular sequence diffusion models. International Union of Pure and Applied Chemistry (IUPAC) names are more akin to chemical natural language than the simplified molecular-input line-entry system (SMILES) for organic compounds. In this work, we apply an IUPAC-guided conditional diffusion model to facilitate molecular editing from chemical natural language to chemical language (SMILES) and explore whether the pre-trained generative performance of diffusion models can be transferred to chemical natural language. We propose DiffIUPAC, a controllable molecular editing diffusion model that converts IUPAC names to SMILES strings. Evaluation results demonstrate that our model outperforms existing methods and successfully captures the semantic rules of both chemical languages. Chemical space and scaffold analyses show that the model can generate similar compounds with diverse scaffolds within the specified constraints. Additionally, to illustrate the model's applicability in drug design, we conducted case studies in functional group editing, analogue design, and linker design.
Keywords: diffusion model; IUPAC; molecular generative model; chemical natural language; transformer
16. Seeing the macro in the micro: a diffusion model-based approach for style transfer in cellular images
Authors: Jiayi CAI, Yong HE, Feng LIU, Byung-Ho KANG, Xuping FENG. Journal of Zhejiang University-Science B (Biomedicine & Biotechnology), 2025, No. 6, pp. 609-612.
The internal structures of cells, the basic units of life, are a major wonder of the microscopic world. Cellular images provide an intriguing window to help explore and understand the composition and function of these structures. Scientific imagery combined with artistic expression can further expand the potential of imaging in educational dissemination and interdisciplinary applications.
Keywords: interdisciplinary applications; artistic expression; diffusion model; cellular images; educational dissemination; style transfer; internal structures
17. Temperature fields prediction for the casting process by a conditional diffusion model
Authors: Jin-wu Kang, Jing-xi Zhu, Qi-chao Zhao. China Foundry, 2025, No. 2, pp. 139-150.
Deep learning has achieved great progress in image recognition, segmentation, semantic recognition, and game theory. In this study, a recent deep learning model, the conditional diffusion model, was adopted as a surrogate for numerical simulation to predict heat transfer during the casting process. The conditional diffusion model was trained with the geometry shape, initial temperature field, and temperature field at time t_i as the condition, and random noise sampled from a standard normal distribution as the input; the output is the temperature field at t_{i+1}. The temperature field at t_{i+1} can therefore be predicted once the field at t_i is known, and the continuous temperature fields of all time steps can be predicted from the initial temperature field of an arbitrary 2D geometry. A training set with 302 2D shapes and their simulated temperature fields at different time steps was established. The accuracy of the predicted temperature field for a single time step reaches 97.7%, and that for continuous time steps reaches 69.1%, with the main error located in the sand mold. The effects of geometry shape and initial temperature field on prediction accuracy were investigated; the former achieves better results than the latter because castings, molds, and chills can be identified by different colors in the input images. The diffusion model has demonstrated its potential as a surrogate for numerical simulation of the casting process.
Keywords: diffusion model; U-Net; casting; simulation; heat transfer
18. Para2Mesh: A dual diffusion framework for moving mesh adaptation
Authors: Jian YU, Hongqiang LYU, Ran XU, Wenxuan OUYANG, Xuejun LIU. Chinese Journal of Aeronautics, 2025, No. 7, pp. 147-163.
Multi-scale problems in Computational Fluid Dynamics (CFD) often require numerous simulations across various design parameters. Using a fixed mesh for all cases may fail to capture critical physical features. Moving mesh adaptation provides optimal resource allocation, obtaining high-resolution flow fields on low-resolution meshes. However, most existing methods require manual experience, and their reliance on a posteriori flow information poses great challenges for practical applications. In addition, generating adaptive meshes directly from design parameters is difficult due to highly nonlinear relationships. The diffusion model, currently the most popular model for generative tasks, integrates the diffusion principle into deep learning to capture complex nonlinear correlations. A dual diffusion framework, Para2Mesh, is proposed to predict adaptive meshes from design parameters by exploiting the robust data-distribution learning ability of the diffusion model. Through iterative denoising, the proposed dual networks accurately reconstruct the flow field to provide flow features as supervision, and then achieve rapid and reliable mesh movement. Experiments in CFD scenarios demonstrate that Para2Mesh predicts similar meshes directly from design parameters with much higher efficiency than the traditional method. It could become a real-time adaptation tool to assist engineering design and optimization, providing a promising solution for high-resolution flow-field analysis.
Keywords: mesh adaptation, flow-field reconstruction, computational fluid dynamics, deep learning, diffusion model, graph neural network
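The "traditional method" that Para2Mesh is benchmarked against is classical moving mesh adaptation, which relocates nodes so that each cell carries an equal share of a monitor function (typically a solution-gradient measure). A minimal 1D equidistribution sketch, for intuition only (Para2Mesh itself replaces this iterative procedure with a learned diffusion predictor):

```python
import numpy as np

def equidistribute(x, monitor):
    """Classical 1D moving-mesh equidistribution: relocate the nodes in x so
    that each cell holds an equal integral of the monitor function.
    This is the traditional baseline, not Para2Mesh's learned predictor."""
    m = monitor(x)
    # cumulative "mass" of the monitor along the domain (trapezoidal rule)
    w = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))])
    # split the total mass into equal targets and invert the cumulative map
    targets = np.linspace(0.0, w[-1], len(x))
    return np.interp(targets, w, x)
```

Running this with a monitor peaked at the domain center clusters nodes around the peak, which is exactly the resource reallocation the abstract describes: high resolution where the physics demands it, coarse mesh elsewhere.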
A Diffusion Model for Traffic Data Imputation
19
Authors: Bo Lu, Qinghai Miao, Yahui Liu, Tariku Sinshaw Tamir, Hongxia Zhao, Xiqiao Zhang, Yisheng Lv, Fei-Yue Wang. 《IEEE/CAA Journal of Automatica Sinica》, 2025, Issue 3, pp. 606-617 (12 pages)
Imputation of missing data has long been an important topic and an essential application for intelligent transportation systems (ITS) in the real world. As a state-of-the-art generative model, the diffusion model has proven highly successful in image generation, speech generation, time-series modelling, etc., and now opens a new avenue for traffic data imputation. In this paper, we propose a conditional diffusion model, called the implicit-explicit diffusion model, for traffic data imputation. This model exploits both the implicit and explicit features of the data simultaneously. More specifically, we design two types of feature extraction modules: one captures the implicit dependencies hidden in the raw data at multiple time scales, and the other obtains the long-term temporal dependencies of the time series. This approach not only inherits the advantages of the diffusion model for estimating missing data but also accounts for the multiscale correlations inherent in traffic data. To illustrate the performance of the model, extensive experiments are conducted on three real-world time-series datasets with different missing rates. The experimental results demonstrate that the model improves imputation accuracy and generalization capability.
Keywords: data imputation, diffusion model, implicit feature, time series, traffic data
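The core mechanic of conditional-diffusion imputation, as in the abstract above, is to sample only the missing entries while the observed entries anchor the generation. A minimal sketch of that conditioning loop follows; `noise_estimator` is a hypothetical stand-in for the paper's implicit-explicit network, and overwriting observed positions with their known values each step is a simplified conditioning strategy, not the paper's exact scheme.

```python
import numpy as np

def noise_estimator(x_t, t):
    """Hypothetical stand-in for the trained implicit-explicit network."""
    return 0.05 * x_t  # dummy noise estimate, for the sketch only

def impute(series, mask, steps=50, seed=0):
    """Diffusion-based imputation sketch: entries with mask == 0 (missing)
    are sampled by reverse diffusion, while entries with mask == 1 (observed)
    are overwritten with their known values at every step so the generated
    values stay consistent with the observations."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)      # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(series.shape)
    for t in reversed(range(steps)):
        eps = noise_estimator(x, t)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(series.shape)
        # condition on the observations: keep known values, sample the rest
        x = mask * series + (1.0 - mask) * x
    return x
```

By construction the output matches the ground truth exactly at observed positions; only the masked positions are generated.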
Dual-Stream Attention-Based Classification Network for Tibial Plateau Fractures via Diffusion Model Augmentation and Segmentation Map Integration
20
Authors: Yi Xie, Zhi-wei Hao, Xin-meng Wang, Hong-lin Wang, Jia-ming Yang, Hong Zhou, Xu-dong Wang, Jia-yao Zhang, Hui-wen Yang, Peng-ran Liu, Zhe-wei Ye. 《Current Medical Science》, 2025, Issue 1, pp. 57-69 (13 pages)
Objective: This study aimed to explore a novel method that integrates segmentation-guided classification and diffusion-model augmentation to realize automatic classification of tibial plateau fractures (TPFs). Methods: YOLOv8n-cls was used to construct a baseline model on data from 3781 patients from the Orthopedic Trauma Center of Wuhan Union Hospital. A segmentation-guided classification approach was then proposed, and a diffusion model was further employed for data augmentation. Results: The novel method integrating segmentation-guided classification and diffusion-model augmentation significantly improved the accuracy and robustness of fracture classification. The average classification accuracy for TPFs rose from 0.844 to 0.896. The comprehensive performance of the dual-stream model was also significantly enhanced after many rounds of training, with both the macro-area under the curve (AUC) and the micro-AUC increasing from 0.94 to 0.97. By utilizing diffusion-model augmentation and segmentation map integration, the model achieved an accuracy of 0.880 for Schatzker Ⅰ, 0.898 for Schatzker Ⅱ and Ⅲ, 0.913 for Schatzker Ⅳ, 0.887 for Schatzker Ⅴ and Ⅵ, and 0.923 for intercondylar ridge fractures. Conclusion: The dual-stream attention-based classification network, verified by extensive experiments, exhibited great potential in predicting the classification of TPFs. This method facilitates automatic TPF assessment and may assist surgeons in the rapid formulation of surgical plans.
Keywords: artificial intelligence, YOLOv8, tibial plateau fracture, diffusion model augmentation, segmentation map
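The augmentation step in the study above uses a trained diffusion model as a synthetic-image source to enlarge the fracture dataset. A minimal sketch of that dataset-expansion pattern, with `generate` as a hypothetical stand-in for sampling from the trained diffusion model (the paper's actual sampler and class balancing are not reproduced):

```python
import numpy as np

def augment_dataset(images, labels, generate, n_per_class=10, seed=0):
    """Diffusion-model augmentation sketch: synthetic images produced by
    generate(label, rng) -- a hypothetical stand-in for a class-conditional
    diffusion sampler -- are appended to the real training set, adding the
    same number of synthetic samples to every class."""
    rng = np.random.default_rng(seed)
    new_imgs, new_lbls = list(images), list(labels)
    for lbl in sorted(set(labels)):
        for _ in range(n_per_class):
            new_imgs.append(generate(lbl, rng))
            new_lbls.append(lbl)
    return np.array(new_imgs), np.array(new_lbls)
```

The classifier is then trained on the combined real-plus-synthetic set, which is what drives the accuracy gains (0.844 to 0.896) reported in the abstract.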