Funding: supported by the Henan Province Key R&D Project under Grant 241111210400, the Henan Provincial Science and Technology Research Project under Grants 252102211047, 252102211062, 252102211055 and 232102210069, the Jiangsu Provincial Scheme Double Initiative Plan JSS-CBS20230474, the XJTLU RDF-21-02-008, the Science and Technology Innovation Project of Zhengzhou University of Light Industry under Grant 23XNKJTD0205, and the Higher Education Teaching Reform Research and Practice Project of Henan Province under Grant 2024SJGLX0126.
Abstract: Accurate and efficient detection of building changes in remote sensing imagery is crucial for urban planning, disaster emergency response, and resource management. However, existing methods face challenges such as spectral similarity between buildings and backgrounds, sensor variations, and insufficient computational efficiency. To address these challenges, this paper proposes a novel Multi-scale Efficient Wavelet-based Change Detection Network (MewCDNet), which integrates the advantages of Convolutional Neural Networks and Transformers, balances computational costs, and achieves high-performance building change detection. The network employs EfficientNet-B4 as the backbone for hierarchical feature extraction, integrates multi-level feature maps through a multi-scale fusion strategy, and incorporates two key modules: Cross-temporal Difference Detection (CTDD) and Cross-scale Wavelet Refinement (CSWR). CTDD adopts a dual-branch architecture that combines pixel-wise differencing with semantic-aware Euclidean distance weighting to enhance the distinction between true changes and background noise. CSWR integrates a Haar-based Discrete Wavelet Transform with multi-head cross-attention mechanisms, enabling cross-scale feature fusion while significantly improving edge localization and suppressing spurious changes. Extensive experiments on four benchmark datasets demonstrate MewCDNet's superiority over competing methods, achieving F1 scores of 91.54% on LEVIR, 93.70% on WHUCD, and 64.96% on S2Looking for building change detection. Furthermore, MewCDNet achieves the best performance on the multi-class SYSU dataset (F1: 82.71%), highlighting its exceptional generalization capability.
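As a rough illustration of the wavelet decomposition that CSWR is described as building on, the following is a minimal sketch of a single-level 2D Haar DWT applied to a feature map. The function name, kernel layout, and sub-band ordering are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def haar_dwt2d(x: torch.Tensor):
    """Single-level 2D Haar DWT of a feature map x with shape (B, C, H, W).

    Returns four sub-bands (approximation plus three detail bands),
    each of shape (B, C, H/2, W/2). Illustrative sketch only.
    """
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])     # low-pass / low-pass
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])   # detail band
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])   # detail band
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])   # diagonal detail
    kernels = torch.stack([ll, lh, hl, hh]).unsqueeze(1)  # (4, 1, 2, 2)

    b, c, h, w = x.shape
    x = x.reshape(b * c, 1, h, w)
    out = F.conv2d(x, kernels.to(dtype=x.dtype, device=x.device), stride=2)
    out = out.reshape(b, c, 4, h // 2, w // 2)
    return out[:, :, 0], out[:, :, 1], out[:, :, 2], out[:, :, 3]

# Example: decompose an 8-channel feature map.
ll, lh, hl, hh = haar_dwt2d(torch.randn(1, 8, 64, 64))  # each (1, 8, 32, 32)
```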
Funding: supported by project ZR2022MF330 of the Shandong Provincial Natural Science Foundation and by the National Natural Science Foundation of China under Grant No. 61701286.
Abstract: Synthetic speech detection is an essential task in the field of voice security, aimed at identifying deceptive voice attacks generated by text-to-speech (TTS) systems or voice conversion (VC) systems. In this paper, we propose a synthetic speech detection model called TFTransformer, which integrates both local and global features to enhance detection capability by effectively modeling local and global dependencies. Structurally, the model is divided into two main components: a front-end and a back-end. The front-end uses a combination of a SincLayer and two-dimensional (2D) convolution to extract high-level feature maps (HFM) that capture local dependencies in the input speech signals. The back-end uses a time-frequency Transformer module to process these feature maps and further capture global dependencies. Furthermore, we propose TFTransformer-SE, which incorporates a channel attention mechanism within the 2D convolutional blocks. This enhancement aims to capture local dependencies more effectively, thereby improving the model's performance. Experiments were conducted on the ASVspoof 2021 LA dataset, and the results show that the model achieves an equal error rate (EER) of 3.37% without data augmentation. Additionally, we evaluated the model on the ASVspoof 2019 LA dataset, achieving an EER of 0.84%, also without data augmentation. This demonstrates that combining local and global dependencies in the time-frequency domain can significantly improve detection accuracy.
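For reference, the EER reported above is the operating point at which the false acceptance rate equals the false rejection rate. Below is a minimal, generic sketch of how it is commonly estimated from detection scores; it is not the authors' evaluation code, and the label convention is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """Estimate the equal error rate (EER) from detection scores.

    labels: 1 for bona fide speech, 0 for spoofed speech (assumed convention).
    scores: higher score means 'more likely bona fide'.
    """
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    # EER is where the false positive and false negative rates cross;
    # take the threshold index minimising their gap.
    idx = np.nanargmin(np.abs(fnr - fpr))
    return float((fpr[idx] + fnr[idx]) / 2.0)
```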
Funding: supported by NIH grants Nos. R01NS125074, R01AG083164, R01NS107365, and R21NS127177 (to YL), 1F31NS129204-01A1 (to KW), and an Albert Ryan Fellowship (to KW).
Abstract: Adult neurogenesis continuously produces new neurons critical for cognitive plasticity in adult rodents. While transforming growth factor-β signaling is known to be important in embryonic neurogenesis, its role in postnatal neurogenesis remains unclear. In this study, to define the precise role of transforming growth factor-β signaling in postnatal neurogenesis at distinct stages of the neurogenic cascade both in vitro and in vivo, we developed two novel inducible, cell type-specific mouse models to silence transforming growth factor-β signaling specifically in neural stem cells (mGFAPcre-ALK5fl/fl-Ai9) or in immature neuroblasts (DCXcreERT2-ALK5fl/fl-Ai9). Our data showed that exogenous transforming growth factor-β treatment inhibited the proliferation of primary neural stem cells while stimulating their migration. These effects were abolished in activin-like kinase 5 (ALK5) knockout primary neural stem cells. Consistent with this, inhibition of transforming growth factor-β signaling with SB-431542 in wild-type neural stem cells stimulated proliferation while inhibiting their migration. Interestingly, deletion of the transforming growth factor-β receptor in neural stem cells in vivo inhibited the migration of postnatally born neurons in mGFAPcre-ALK5fl/fl-Ai9 mice, whereas abolishing transforming growth factor-β signaling in immature neuroblasts in DCXcreERT2-ALK5fl/fl-Ai9 mice did not affect the migration of these cells in the hippocampus. In summary, our data support a dual role of transforming growth factor-β signaling in the proliferation and migration of neural stem cells in vitro. Moreover, our data provide novel insights into the cell type-specific requirements of transforming growth factor-β signaling for neural stem cell proliferation and migration in vivo.
Funding: supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Metaverse Support Program to Nurture the Best Talents (IITP-2024-RS-2023-00254529) grant funded by the Korea government (MSIT).
Abstract: Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize segmentation performance, this research introduces a novel SwinUNETR-based model by integrating a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD decoder block uses hierarchical features and channel-specific attention mechanisms to further fuse information at different scales transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieves superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared to baseline models. In particular, ablation studies clarify the rationale and contribution of the model design and verify the effectiveness of the proposed HCAD decoder block. The results of this study are expected to contribute substantially to the efficiency of clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
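The HCAD block itself is not specified in this abstract; as a loose, hypothetical illustration of channel-wise attention in a volumetric decoder path, a squeeze-and-excitation-style reweighting might look like the sketch below. The class name, reduction ratio, and tensor layout are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    """Generic squeeze-and-excitation style channel attention for 3D feature
    maps (illustrative only; not the HCAD block from the paper)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) volumetric feature map from the decoder path
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                  # reweight channels

# Example: reweight a 32-channel decoder feature volume.
out = ChannelAttentionBlock(32)(torch.randn(1, 32, 16, 16, 16))
```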
Funding: funded by the National Natural Science Foundation of China, grant numbers 52374156 and 62476005.
Abstract: Images taken in dim environments frequently exhibit issues such as insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges for dark image enhancement. Current approaches, while effective at global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between luminance and color channels introduces additional challenges to accurate enhancement. In response to these difficulties, we introduce M2ATNet, a single-stage framework built on a multi-scale multi-attention and Transformer architecture. First, to address texture blurring and residual noise, we design a multi-scale multi-attention denoising module (MMAD), which is applied separately to the luminance and color channels to enhance structural and texture modeling. Second, to solve the misalignment between the luminance and color channels, we introduce the multi-channel feature fusion Transformer (CFFT) module, which effectively recovers dark details and corrects color shifts through cross-channel alignment and deep feature interaction. To guide the model toward more stable and efficient learning, we also fuse multiple loss functions into a hybrid loss term. We extensively evaluate the proposed method on standard datasets, including LOL-v1, LOL-v2, DICM, LIME, and NPE. Evaluations of numerical metrics and visual quality demonstrate that M2ATNet consistently outperforms existing advanced approaches. Ablation studies further confirm the critical roles of the MMAD and CFFT modules in detail preservation and visual fidelity under challenging illumination-deficient conditions.
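As a rough sketch of the kind of cross-channel alignment the CFFT module is described as performing, the snippet below lets luminance-branch tokens attend to color-branch tokens with multi-head cross-attention. The dimensions, token layout, and residual fusion are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class CrossChannelFusion(nn.Module):
    """Illustrative cross-attention between luminance-branch and color-branch
    feature tokens (a generic sketch, not the CFFT module itself)."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, lum_tokens: torch.Tensor, color_tokens: torch.Tensor):
        # lum_tokens, color_tokens: (B, N, dim) flattened spatial features
        fused, _ = self.attn(query=lum_tokens, key=color_tokens, value=color_tokens)
        return self.norm(lum_tokens + fused)   # residual fusion of the two branches

# Example: fuse 32x32 feature maps from the two branches.
lum = torch.randn(2, 32 * 32, 64)
col = torch.randn(2, 32 * 32, 64)
out = CrossChannelFusion()(lum, col)          # (2, 1024, 64)
```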
Abstract: This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Non-open-access publications, books, and non-English articles were excluded. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned more than 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption after 2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to a lack of validation, interpretability concerns, and real-world deployment barriers.
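For context on the benchmarked metrics, the Dice score used throughout these comparisons is a standard overlap measure between a predicted and a reference segmentation mask. A minimal, generic implementation is sketched below; it is not taken from any of the reviewed studies.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Example: identical masks give a Dice score of (approximately) 1.0.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(dice_score(mask, mask))
```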
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R195), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: With the increasing growth of online news, fake electronic news detection has become one of the most important paradigms of modern research. Traditional electronic news detection techniques are generally limited in their handling of contextual understanding, sequential dependencies, and/or data imbalance, which makes distinguishing genuine from fabricated news challenging. To address this problem, we propose a novel hybrid architecture, T5-SA-LSTM, which synergistically integrates the T5 Transformer for semantically rich contextual embeddings with a Self-Attention-enhanced (SA) Long Short-Term Memory (LSTM) network. The LSTM is trained with the Adam optimizer, which provides faster and more stable convergence than Stochastic Gradient Descent (SGD) and Root Mean Square Propagation (RMSProp). The WELFake and FakeNewsPrediction datasets are used, which consist of labeled news articles containing fake and real samples. Tokenization and the Synthetic Minority Over-sampling Technique (SMOTE) are used for data preprocessing to ensure linguistic normalization and to address class imbalance. The incorporation of the Self-Attention (SA) mechanism enables the model to highlight critical words and phrases, thereby enhancing predictive accuracy. The proposed model is evaluated using accuracy, precision, recall (sensitivity), and F1-score as performance metrics. It achieved 99% accuracy on the WELFake dataset and 96.5% accuracy on the FakeNewsPrediction dataset, outperforming competing schemes such as T5-SA-LSTM (RMSProp), T5-SA-LSTM (SGD), and other models.
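Where SMOTE is mentioned for handling class imbalance, a minimal, generic usage with the imbalanced-learn library might look like the following; the embedding dimensionality and class counts are hypothetical placeholders, not the paper's setup.

```python
# Illustrative use of SMOTE to balance fake/real article classes before
# training; feature extraction and dataset loading are placeholders.
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))                     # e.g., contextual embeddings (hypothetical shape)
y = np.concatenate([np.zeros(900), np.ones(100)])    # imbalanced real/fake labels

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(X_res.shape, np.bincount(y_res.astype(int)))   # minority class oversampled to balance
```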
Funding: supported by the National Natural Science Foundation of China, No. 81571211 (to FL), and the Natural Science Foundation of Shanghai, No. 22ZR1476800 (to CH).
Abstract: Peripheral nerve defect repair is a complex process that involves multiple cell types, among which perineurial cells play a pivotal role. Hair follicle neural crest stem cells promote perineurial cell proliferation and migration via paracrine signaling; however, their clinical application is limited by potential risks such as tumorigenesis and xenogeneic immune rejection, similar to the risks associated with other stem cell transplantations. The present study therefore focuses on small extracellular vesicles derived from hair follicle neural crest stem cells, which preserve the bioactive properties of the parent cells while avoiding the transplantation-associated risks. In vitro, small extracellular vesicles derived from hair follicle neural crest stem cells significantly enhanced the proliferation, migration, tube formation, and barrier function of perineurial cells, and subsequently upregulated the expression of tight junction proteins. Furthermore, in a rat model of sciatic nerve defects bridged with silicone tubes, treatment with small extracellular vesicles derived from hair follicle neural crest stem cells resulted in higher tight junction protein expression in perineurial cells, thus facilitating neural tissue regeneration. At 10 weeks post-surgery, rats treated with small extracellular vesicles derived from hair follicle neural crest stem cells exhibited improved nerve function recovery and reduced muscle atrophy. Transcriptomic and microRNA analyses revealed that small extracellular vesicles derived from hair follicle neural crest stem cells deliver miR-21-5p, which inhibits mothers against decapentaplegic homolog 7 expression, thereby activating the transforming growth factor-β/mothers against decapentaplegic homolog signaling pathway, upregulating hyaluronan synthase 2 expression, and further enhancing tight junction protein expression. Together, our findings indicate that small extracellular vesicles derived from hair follicle neural crest stem cells promote the proliferation, migration, and tight junction protein formation of perineurial cells. These results provide new insights into peripheral nerve regeneration from the perspective of perineurial cells and present a novel approach for the clinical treatment of peripheral nerve defects.
Abstract: To address the problem of low model accuracy caused by insufficient fault samples in wind turbine fault diagnosis, the denoising diffusion probabilistic model (DDPM), a data augmentation method that has attracted considerable attention, is introduced into the fault diagnosis field to generate large, high-quality fault sample datasets. Combining it with a Transformer network, a DDPM-Transformer method for generating wind turbine fault samples is proposed. First, the DDPM, originally used for image generation in computer vision, is applied to wind turbine fault diagnosis: the forward noising process gradually converts data into noise, and the reverse denoising process progressively restores the noise to the original data, so that fault data can be generated from noise and the data imbalance problem is alleviated. Second, the U-Net module used in the original DDPM is replaced with a Transformer model; the diffused data and the added noise are used to train the Transformer for noise prediction, improving the quality of the generated fault data. Finally, the generated fault data are assessed with multiple generative-model evaluation metrics, and the performance of the improved DDPM-Transformer model is demonstrated on supervisory control and data acquisition (SCADA) fault data generation. Experiments show that, compared with existing generative models, the proposed DDPM-Transformer improves the maximum mean discrepancy (MMD) by up to 0.13 and the peak signal-to-noise ratio (PSNR) by up to 7.8. The proposed model can effectively generate higher-quality wind turbine fault samples, which can then be used to help train deep-learning-based fault diagnosis models with higher accuracy and good stability.
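The forward noising step described above follows the standard DDPM formulation x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * epsilon. A minimal, generic sketch is shown below; the beta schedule and the SCADA-like data shapes are illustrative assumptions, not the paper's configuration.

```python
import torch

def ddpm_forward_noising(x0: torch.Tensor, t: torch.Tensor, alpha_bar: torch.Tensor):
    """Standard DDPM forward (noising) step:
        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
    Returns the noised sample and the noise the denoiser is trained to predict.
    """
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))   # broadcast over sample dims
    xt = torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * noise
    return xt, noise

# Example: a linear beta schedule over 1000 steps applied to a batch of
# hypothetical SCADA feature windows of shape (B, 10, 64).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
x0 = torch.randn(8, 10, 64)
t = torch.randint(0, T, (8,))
xt, eps = ddpm_forward_noising(x0, t, alpha_bar)
```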
Abstract: Deep learning is one of the most active research directions in artificial intelligence; it mimics the brain's data-processing mechanisms by building multi-layer artificial neural networks. Large language models (LLMs), built on deep learning architectures, can learn to understand and generate human language by analyzing large amounts of data without task-specific programming instructions, and have been widely applied in natural language processing, computer vision, smart healthcare, intelligent transportation, and many other fields. This article summarizes the applications of LLMs in the medical field, covering the basic training pipeline of LLMs for medical tasks, specialized strategies, and applications in concrete clinical scenarios. It further discusses the challenges LLMs face in practice, including the lack of transparency in their decision processes, output accuracy, and privacy and ethics concerns, and then outlines corresponding improvement strategies. Finally, the article looks ahead to the future development of LLMs in healthcare and their potential impact on human health.