Organoids possess immense potential for unraveling the intricate functions of human tissues and facilitating preclinical disease treatment. Their applications span from high-throughput drug screening to the modeling of complex diseases, with some even achieving clinical translation. Changes in the overall size, shape, boundary, and other morphological features of organoids provide a noninvasive method for assessing organoid drug sensitivity. However, the precise segmentation of organoids in bright-field microscopy images is made difficult by the complexity of the organoid morphology and by interference, including overlapping organoids, bubbles, dust particles, and cell fragments. This paper introduces the precision organoid segmentation technique (POST), a deep-learning algorithm for segmenting challenging organoids under simple bright-field imaging conditions. Unlike existing methods, POST accurately segments each organoid and eliminates various artifacts encountered during organoid culturing and imaging. Furthermore, it is sensitive to and aligns with measurements of organoid activity in drug sensitivity experiments. POST is expected to be a valuable tool for drug screening using organoids owing to its capability of automatically and rapidly eliminating interfering substances, thereby streamlining the organoid analysis and drug screening process.
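The artifact rejection described in this abstract can be illustrated with a simple morphology-based post-processing filter: candidate regions that are too small or too irregular (bubbles, dust, fragments) are discarded. This is only an illustrative sketch under assumed thresholds, not POST's actual deep-learning pipeline; the `min_area` and `min_circ` values are hypothetical.

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: equals 1.0 for a perfect disk, smaller for ragged shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def filter_candidates(regions, min_area=500.0, min_circ=0.6):
    """Keep only candidate regions that plausibly look like organoids.

    Small or highly irregular regions (bubbles, dust, cell fragments)
    are discarded. `regions` is a list of dicts with 'area' and
    'perimeter' measured in pixels."""
    kept = []
    for r in regions:
        if r["area"] >= min_area and circularity(r["area"], r["perimeter"]) >= min_circ:
            kept.append(r)
    return kept

# A disk of radius 30 px vs. a tiny speck and a ragged fragment.
disk = {"area": math.pi * 30 ** 2, "perimeter": 2 * math.pi * 30}
speck = {"area": 40.0, "perimeter": 30.0}
fragment = {"area": 900.0, "perimeter": 400.0}  # circularity ~0.07
print(len(filter_candidates([disk, speck, fragment])))  # -> 1
```

In a real pipeline such shape criteria would typically be applied on top of a learned segmentation mask rather than replace it.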
In the image fusion field, fusing infrared images (IRIs) and visible images (VIs) effectively is a key area. The differences between IRIs and VIs make it challenging to fuse both types into a high-quality image. Accordingly, efficiently combining the advantages of both images while overcoming their shortcomings is necessary. To handle this challenge, we developed an end-to-end IRI and VI fusion method based on frequency decomposition and enhancement. By applying concepts from frequency domain analysis, we used a layering mechanism to better capture the salient thermal targets from the IRIs and the rich textural information from the VIs, significantly boosting the image fusion quality and effectiveness. In addition, the backbone network combines Restormer Blocks and Dense Blocks: Restormer Blocks utilize global attention to extract shallow features, while Dense Blocks ensure the integration of shallow and deep features, thereby avoiding the loss of shallow attributes. Extensive experiments on the TNO and MSRS datasets demonstrated that the suggested method achieved state-of-the-art (SOTA) performance on various metrics: Entropy (EN), Mutual Information (MI), Standard Deviation (SD), the Structural Similarity Index Measure (SSIM), fusion quality (Qabf), feature mutual information of the pixel (FMI_pixel), and modified Visual Information Fidelity (VIF_m).
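The frequency-decomposition idea can be sketched in a few lines: split each input into a low-frequency base and a high-frequency detail layer, average the bases, and keep the stronger detail at each pixel. This is a minimal stand-in for the paper's learned decomposition; the 3x3 box blur and the max-abs fusion rule are assumptions for illustration only.

```python
def box_blur(img):
    """3x3 mean filter with clamped borders: a crude low-pass layer."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = n = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    y, x = min(max(i + di, 0), h - 1), min(max(j + dj, 0), w - 1)
                    s += img[y][x]
                    n += 1
            out[i][j] = s / n
    return out

def fuse(ir, vis):
    """Fuse by layers: average the low-frequency bases (overall radiance),
    keep the stronger high-frequency detail at each pixel (edges, texture)."""
    low_ir, low_vis = box_blur(ir), box_blur(vis)
    h, w = len(ir), len(ir[0])
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            hi_ir = ir[i][j] - low_ir[i][j]
            hi_vis = vis[i][j] - low_vis[i][j]
            detail = hi_ir if abs(hi_ir) >= abs(hi_vis) else hi_vis
            fused[i][j] = (low_ir[i][j] + low_vis[i][j]) / 2 + detail
    return fused
```

With two constant inputs the detail layers vanish and the fused result is simply the average base, which is a useful sanity check for any such rule.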
Microscopy imaging is fundamental in analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes. These methods combine automated feature extraction with physics-informed postprocessing. Despite progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
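As a concrete example of the "traditional thresholding" baseline this review contrasts with deep learning, here is Otsu's method, which picks the global threshold maximizing the between-class variance of the grayscale histogram. This is the textbook algorithm, shown in pure Python for clarity.

```python
def otsu_threshold(hist):
    """Otsu's method: choose the threshold maximizing between-class
    variance of an (assumed 8-bit) grayscale histogram."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0.0
    for t in range(len(hist)):
        w_bg += hist[t]             # background weight up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg         # foreground weight above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a cleanly bimodal histogram (e.g. dark bacteria on a bright background), the returned threshold separates the two modes; densely packed biofilms are exactly where this global assumption breaks down and learned methods take over.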
Schlieren imaging is a widely used technique for visualizing the structure of supersonic flow fields, which are usually dominated by shock waves. Precise identification of shock waves in schlieren images provides critical insights for flow diagnostics, especially for supersonic inlets, whose performance is closely tied to that of the whole flight. However, conventional shock wave identification methods have limited accuracy in segmenting the shock wave. To overcome this limitation, we proposed an automated shock wave identification method (SW-Segment) that attains high-resolution, automatic shock wave segmentation by integrating correlation-based feature extraction with graph search. We demonstrated the efficacy of SW-Segment by identifying shock waves in both simulated and experimentally obtained schlieren images. The results show that SW-Segment achieved a shock wave identification accuracy of 95.24% on the numerical schlieren image and 88.33% on the experimental image, clearly demonstrating its reliability. SW-Segment holds broad applicability for shock wave detection in diverse schlieren imaging scenarios, offering robust data support for flow field analysis and supersonic flight design.
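One common way to realize "graph search" for a thin curvilinear feature such as a shock is dynamic programming over a per-pixel score map: find the connected top-to-bottom path with the highest accumulated response. The sketch below is a generic ridge tracer of this kind, not SW-Segment itself; the score map would come from the paper's correlation-based feature extraction.

```python
def trace_ridge(score):
    """Dynamic-programming graph search: find the 8-connected top-to-bottom
    path (one column index per row) with maximal accumulated score, a
    stand-in for tracing a shock wave as a bright ridge in a feature map."""
    h, w = len(score), len(score[0])
    acc = [row[:] for row in score]       # accumulated best score per pixel
    back = [[0] * w for _ in range(h)]    # backpointers for path recovery
    for i in range(1, h):
        for j in range(w):
            best_k = max(range(max(j - 1, 0), min(j + 2, w)),
                         key=lambda k: acc[i - 1][k])
            acc[i][j] = score[i][j] + acc[i - 1][best_k]
            back[i][j] = best_k
    j = max(range(w), key=lambda k: acc[h - 1][k])
    path = [j]
    for i in range(h - 1, 0, -1):         # walk the backpointers upward
        j = back[i][j]
        path.append(j)
    return path[::-1]
```

On a score map with a bright diagonal ridge, the recovered path follows the ridge column by column, which is the behavior a shock tracer needs.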
Background: Brain volume measurement serves as a critical approach for assessing brain health status. Considering the close biological connection between the eyes and brain, this study aims to investigate the feasibility of estimating brain volume from retinal fundus imaging integrated with clinical metadata, offering a cost-effective approach for assessing brain health. Methods: Based on clinical information, retinal fundus images, and neuroimaging data derived from a multicenter, population-based cohort study, the Kailuan Study, we proposed a cross-modal correlation representation (CMCR) network to elucidate the intricate co-degenerative relationships between the eyes and brain for 755 subjects. Specifically, individual clinical information, which has been followed up for as long as 12 years, was encoded as a prompt to enhance the accuracy of brain volume estimation. Independent internal validation and external validation were performed to assess the robustness of the proposed model. Root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics were employed to quantitatively evaluate the quality of synthetic brain images derived from retinal imaging data. Results: The proposed framework yielded average RMSE, PSNR, and SSIM values of 98.23, 35.78 dB, and 0.64, respectively, significantly outperforming 5 other methods: multi-channel Variational Autoencoder (mcVAE), Pixel-to-Pixel (Pixel2pixel), transformer-based U-Net (TransUNet), multi-scale transformer network (MT-Net), and residual vision transformer (ResViT). The two- (2D) and three-dimensional (3D) visualization results showed that the shape and texture of the synthetic brain images generated by the proposed method most closely resembled those of actual brain images. Thus, the CMCR framework accurately captured the latent structural correlations between the fundus and the brain. The average difference between predicted and actual brain volumes was 61.36 cm³, with a relative error of 4.54%. When all of the clinical information (including age and sex, daily habits, cardiovascular factors, metabolic factors, and inflammatory factors) was encoded, the difference decreased to 53.89 cm³, with a relative error of 3.98%. Based on brain magnetic resonance images synthesized from retinal fundus images, the volumes of brain tissues could be estimated with high accuracy. Conclusion: This study provides an innovative, accurate, and cost-effective approach to characterizing brain health status through readily accessible retinal fundus images.
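The evaluation metrics used above (RMSE, PSNR in dB, and relative volume error) have simple closed forms; a minimal reference implementation over flattened pixel lists follows. These are the standard definitions, with the image peak value as a parameter.

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * math.log10(peak / e)

def relative_error(pred, actual):
    """Relative error used for the volume comparison, e.g. |V_pred - V|/V."""
    return abs(pred - actual) / actual
```

For example, a predicted volume of 96 against a true volume of 100 gives a relative error of 0.04, i.e. 4%, matching how the 4.54% and 3.98% figures above are computed.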
Medical image segmentation is of critical importance in contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as gradient vanishing during deep network training and spatial detail loss during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which improves upon U-KAN in two ways. First, a residual network is introduced into the encoder to effectively mitigate gradient vanishing through cross-layer identity mappings, thus enhancing modelling capabilities for complex pathological structures. Second, Efficient Local Attention (ELA) is integrated to suppress spatial detail loss during downsampling, thereby improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, with particularly strong performance on the TN-SCUI 2020 dataset, achieving an IoU of 88.18% and a Dice score of 93.57%. Compared to the baseline model, these represent improvements of 3.05% and 1.72%, respectively. These results demonstrate RE-UKAN's superior detail retention and boundary recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for clinical precision segmentation.
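The IoU and Dice figures reported above are the two standard overlap metrics for binary segmentation masks; their definitions are shown below. For a single mask pair they are linked by Dice = 2*IoU/(1 + IoU), though dataset-level averages (as in the abstract) need not satisfy the identity exactly.

```python
def iou_dice(pred, gt):
    """Binary-mask IoU and Dice from flattened 0/1 masks.

    IoU  = |P ∩ G| / |P ∪ G|
    Dice = 2|P ∩ G| / (|P| + |G|)"""
    inter = sum(p & g for p, g in zip(pred, gt))
    p_sum, g_sum = sum(pred), sum(gt)
    union = p_sum + g_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + g_sum) if (p_sum + g_sum) else 1.0
    return iou, dice
```

With pred = [1,1,0,0] and gt = [1,0,1,0], the intersection is 1 and the union is 3, so IoU = 1/3 and Dice = 1/2.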
The historical image of Ouyang Xiu constructed during the Song Dynasty evolved from a multifaceted portrayal that balanced his political and literary achievements into a singular cultural symbol. In the Northern Song Dynasty, writings by Ouyang Xiu's family and epitaphs by his colleagues crafted a balanced narrative emphasizing both his official duties and literary merits, thus constructing a dual image of him as a principled remonstrator and a literary master. In the Southern Song Dynasty, official historiography gradually eroded his complex persona as a political reformer by selectively trimming political disputes and emphasizing his literary lineage, ultimately establishing him as a cultural exemplar beyond factional strife. Throughout this evolution of historical writing, Ouyang Xiu's sharpness as a remonstrator was gradually obscured in historical texts, while his image as a literary master, revered by all, became firmly established. The reshaping of Ouyang Xiu's image in historical writings across the Northern and Southern Song dynasties not only reflects the logic of selecting scholar-official role models under the influence of official ideology but also reveals the inherent pattern whereby individual distinctiveness fades into symbolic construction in historical writing.
Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright. However, in practical applications, this technology faces various problems such as severe image distortion, inaccurate localization of tampered regions, and difficulty in recovering content. Given these shortcomings, a fragile image watermarking algorithm for blind tamper detection and content self-recovery is proposed. A multi-feature watermarking authentication code (AC) is constructed using the texture feature of local binary patterns (LBP), the direct coefficient of the discrete cosine transform (DCT), and the contrast feature of the gray level co-occurrence matrix (GLCM) for detecting the tampered region, and a recovery code (RC) is designed from the average grayscale value of pixels in image blocks for recovering the tampered content. The optimal pixel adjustment process (OPAP) and least significant bit (LSB) algorithms are used to embed the recovery code and authentication code into the image in a staggered manner. When checking the integrity of the image, the authentication code comparison method and a threshold judgment method are used to perform two rounds of tampering detection and to blindly recover the tampered content. Experimental results show that this algorithm has good transparency, strong blind-detection capability, and self-recovery performance against four types of malicious attacks and some conventional signal processing operations. When resisting copy-paste, text addition, cropping, and vector quantization attacks at a tampering rate (TR) of 10%, the average tampering detection rate reaches 94.09%, and the peak signal-to-noise ratios (PSNR) of the watermarked image and the recovered image are greater than 41.47 dB and 40.31 dB, respectively, demonstrating clear advantages over other related algorithms from recent years.
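The LSB embedding step mentioned above is the simplest of the components and can be shown directly: each watermark bit overwrites the least significant bit of one pixel, so the distortion is at most one gray level per pixel. This sketch covers plain LSB only, not the paper's OPAP refinement or the staggered AC/RC layout.

```python
def lsb_embed(pixels, bits):
    """Write one watermark bit into the least significant bit of each
    pixel; untouched pixels are passed through unchanged."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def lsb_extract(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]
```

A round trip through embed and extract recovers the watermark exactly, which is what makes LSB watermarks fragile: any pixel modification flips the recovered bits and exposes tampering.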
Compact size, high brightness, and a wide field of view (FOV) are key requirements for long-wave infrared imagers used in military surveillance or night navigation. However, to meet the imaging requirements of high resolution and wide FOV, infrared optical systems often adopt complex optical lens groups, which increase the size and weight of the optical system. In this paper, a strategy based on wavefront coding (WFC) is proposed to design a compact wide-FOV infrared imager. A cubic phase mask is inserted into the pupil plane of the infrared imager to correct the aberration. The simulated results show that the WFC infrared imager has good imaging quality over a wide FOV of ±16°. In addition, the WFC infrared imager achieves compactness with its 40 mm × 40 mm × 40 mm size. A fast focal ratio of 1 combined with an entrance pupil diameter of 25 mm ensures brightness. This work is of significance for designing compact wide-FOV infrared imagers.
A large-scale view of the magnetospheric cusp is expected to be obtained by the Soft X-ray Imager (SXI) onboard the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE). However, it is challenging to trace the three-dimensional cusp boundary from a two-dimensional X-ray image because the detected X-ray signals will be integrated along the line of sight. In this work, a global magnetohydrodynamic code was used to simulate the X-ray images and photon count images, assuming an interplanetary magnetic field with a pure Bz component. The assumption of an elliptic cusp boundary at a given altitude was used to trace the equatorward and poleward boundaries of the cusp from a simulated X-ray image. The average discrepancy was less than 0.1 R_E. To reduce the influence of instrument effects and cosmic X-ray backgrounds, image denoising was considered before applying the method above to SXI photon count images. The cusp boundaries were reasonably reconstructed from the noisy X-ray image.
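The central difficulty named above, line-of-sight integration, is easy to demonstrate: a detector pixel records the sum of emission along its entire ray, so depth structure is collapsed away. The toy below integrates a 3-D emissivity grid along one axis under the simplifying assumption of parallel rays aligned with the grid (the real SXI geometry involves diverging lines of sight).

```python
def line_of_sight_image(volume):
    """Collapse a 3-D emissivity grid volume[x][y][z] along the viewing
    axis x: each image pixel (y, z) is the sum of emission along its ray.
    This is why a single X-ray image cannot localize the 3-D cusp boundary
    without an extra geometric assumption (e.g. an elliptic boundary)."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    return [[sum(volume[x][y][z] for x in range(nx))
             for z in range(nz)]
            for y in range(ny)]
```

Two very different depth distributions with the same column sums produce identical images, which is exactly the ambiguity the elliptic-boundary assumption resolves.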
Images taken in dim environments frequently exhibit issues such as insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges for dark image enhancement tasks. Current approaches, while effective in global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between luminance and color channels introduces additional challenges for accurate enhancement. In response to these difficulties, we introduce a single-stage framework, M2ATNet, built on multi-scale multi-attention and Transformer architectures. First, to address texture blurring and residual noise, we design a multi-scale multi-attention denoising module (MMAD), which is applied separately to the luminance and color channels to enhance structural and texture modeling capabilities. Second, to solve the non-alignment of the luminance and color channels, we introduce the multi-channel feature fusion Transformer (CFFT) module, which effectively recovers dark details and corrects color shifts through cross-channel alignment and deep feature interaction. To guide the model to learn more stably and efficiently, we also fuse multiple types of loss functions into a hybrid loss term. We extensively evaluate the proposed method on standard datasets, including LOL-v1, LOL-v2, DICM, LIME, and NPE. Evaluations of numerical metrics and visual quality demonstrate that M2ATNet consistently outperforms existing advanced approaches. Ablation studies further confirm the critical roles played by the MMAD and CFFT modules in detail preservation and visual fidelity under challenging illumination-deficient environments.
Over the years, Generative Adversarial Networks (GANs) have revolutionized the medical imaging industry for applications such as image synthesis, denoising, super-resolution, data augmentation, and cross-modality translation. The objective of this review is to evaluate the advances, relevance, and limitations of GANs in medical imaging. An organised literature review was conducted following the guidelines of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). The literature considered included peer-reviewed papers published between 2020 and 2025 across databases including PubMed, IEEE Xplore, and Scopus. Studies on applications of GAN architectures in medical imaging with reported experimental outcomes, published in English in reputable journals and conferences, were considered for the review. Theses, white papers, communication letters, and non-English articles were excluded. CLAIM-based quality assessment criteria were applied to the included studies. The review classifies diverse GAN architectures, summarizing their clinical applications, technical performance, and implementation hardships. Key findings reveal the increasing use of GANs for enhancing diagnostic accuracy, reducing data scarcity through synthetic data generation, and supporting modality translation. However, concerns such as limited generalizability, lack of clinical validation, and regulatory constraints persist. This review provides a comprehensive account of the prevailing scenario of GANs in medical imaging and highlights crucial research gaps and future directions. Though GANs hold transformative capability for medical imaging, their integration into clinical use demands further validation, interpretability, and regulatory alignment.
Unmanned aerial vehicle (UAV)-borne gamma-ray spectrum surveys play a crucial role in geological mapping, radioactive mineral exploration, and environmental monitoring. However, raw data are often compromised by flight and instrument background noise, as well as detector resolution limitations, which affect the accuracy of geological interpretations. This study explores the application of the Real-ESRGAN algorithm to super-resolution reconstruction of UAV-borne gamma-ray spectrum images to enhance spatial resolution and the quality of geological feature visualization. We conducted super-resolution reconstruction experiments at 2×, 4×, and 6× magnification using the Real-ESRGAN algorithm, comparing the results with three other mainstream algorithms (SRCNN, SRGAN, FSRCNN) to verify its superiority in image quality. The experimental results indicate that Real-ESRGAN achieved a structural similarity index (SSIM) value of 0.950 at 2× magnification, significantly higher than the other algorithms, demonstrating its advantage in detail preservation. Furthermore, Real-ESRGAN effectively reduced ringing and overshoot artifacts, enhancing the clarity of geological structures and mineral deposit sites and thus providing high-quality visual information for geological exploration.
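The SSIM metric used to compare the reconstructions has a closed form; below is a single-window (global) variant over flattened images with the standard stabilizing constants. Practical SSIM implementations average this statistic over local Gaussian windows; the global version here is a simplification for illustration.

```python
def ssim_global(a, b, peak=255.0):
    """Single-window SSIM over two whole (flattened) images, with the
    standard stabilizing constants C1 = (0.01*peak)^2, C2 = (0.03*peak)^2."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical images score exactly 1.0, and any luminance or structure mismatch pulls the score below 1, which is the sense in which 0.950 at 2x magnification indicates strong detail preservation.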
The Chinese Giant Solar Telescope (CGST) low-dispersion spectrograph requires a large field of view (FOV) and high spatial resolution, which can be addressed by a carefully designed image slicer system. Our proposed design divides the rectangular 50″ × 20″ FOV at the telescope focal plane into four 50″ × 5″ subfields. Each subfield undergoes optical reconstruction using its own collimator-camera system (F/36 to F/25.79), achieving vertical alignment and focal reduction of the subfields to form a pseudo-slit. Using tilt mirrors for scanning allows simultaneous acquisition of spectral data with both a large FOV and a high angular resolution of 0.05″. This resolves the manufacturing challenges of an image slicer, avoiding the requirement for hundreds of elements, multi-angle configurations, and compact dimensions, and also provides effective technical support for engineering work on the CGST.
Background: Computed tomography (CT) and cone-beam computed tomography (CBCT) image registration plays a pivotal role in computer-assisted navigation for orthopedic surgery. Traditional methods often apply uniform deformation models, neglecting the biomechanical differences between rigid structures and soft tissues, which compromises registration accuracy, especially during significant bone displacements. Method: To address this issue, we introduce RE-Reg, a rigid-elastic CT-CBCT image registration framework that jointly learns rigid bone motion and soft tissue deformation. RE-Reg incorporates a rigid alignment (RA) module to estimate global bone motion and an elastic deformation (ED) module to model soft tissue deformation, preserving bony structures through a bone shape preservation (BSP) loss. Result: Our comprehensive evaluation on publicly available datasets demonstrates that RE-Reg significantly outperforms existing methods in terms of registration accuracy and rigid bone structure preservation, achieving a 1.3% improvement in Dice similarity coefficient (DSC) and a 23% reduction in rigid bone deformation (%Δvol) compared with the best baseline. Conclusion: This framework not only enhances anatomical fidelity but also ensures biomechanical plausibility, providing a valuable tool for image-guided orthopedic surgery. The code is available at https://github.com/Zq-Huang/RE-Reg.
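The distinction RE-Reg draws between rigid and elastic motion can be made concrete: a rigid transform has only a rotation and translation (no shape change), which is the kind of global bone motion a rigid-alignment module estimates, whereas soft tissue needs a free-form deformation field. A minimal 2-D rigid transform, for illustration only:

```python
import math

def rigid_transform(points, theta, tx, ty):
    """Apply a 2-D rigid motion: rotate each point by theta (radians)
    about the origin, then translate by (tx, ty). Distances between
    points are preserved, so bone shape is preserved by construction."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]
```

Because rigid motions preserve all pairwise distances, constraining bone voxels to such a transform (plus a shape-preservation loss) keeps %Δvol near zero even when the surrounding soft tissue deforms freely.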
With the rapid development of image-generative AI (artificial intelligence) technology, its application in undergraduate Landscape Architecture education has demonstrated significant potential. On this basis, the present study explores the implications of integrating image-generative AI into Landscape Architecture courses from three perspectives: stimulating students' creative design potential, expanding approaches to form and concept generation, and enhancing the visualization of spatial scenes. Furthermore, it discusses application strategies along three dimensions: AI-assisted conceptual generation, human-machine collaboration for design refinement, and optimization of scheme presentation and evaluation. This paper aims to provide relevant educators with insights and references.
High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information, owing to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial details pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, transformers utilize multi-head self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is proposed at the end of the encoder to further refine the encoder outputs, enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to enable modeling of rich multiscale global and local information. We also design a stair fusion mechanism to progressively transmit deep semantic information from deep to shallow layers. Finally, we propose a semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
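The mIoU figures reported above extend the binary IoU to multiple classes: per-class IoU is computed from a confusion matrix over all pixels, then averaged. A compact reference implementation over flattened label maps:

```python
def mean_iou(pred, gt, num_classes):
    """mIoU from flattened integer label maps: per-class IoU from the
    confusion matrix, averaged over classes present in either map."""
    conf = [[0] * num_classes for _ in range(num_classes)]
    for p, g in zip(pred, gt):
        conf[g][p] += 1                       # rows: ground truth, cols: prediction
    ious = []
    for c in range(num_classes):
        inter = conf[c][c]
        union = sum(conf[c]) + sum(row[c] for row in conf) - inter
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

With pred = [0,0,1,1] and gt = [0,1,1,1], class 0 has IoU 1/2 and class 1 has IoU 2/3, giving an mIoU of 7/12.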
With the rapid development of transportation infrastructure, ensuring road safety through timely and accurate highway inspection has become increasingly critical. Traditional manual inspection methods are not only time-consuming and labor-intensive but also struggle to provide consistent, high-precision detection and real-time monitoring of pavement surface defects. To overcome these limitations, we propose an Automatic Recognition of Pavement Defect (ARPD) algorithm, which leverages unmanned aerial vehicle (UAV)-based aerial imagery to automate the inspection process. The ARPD framework incorporates a backbone network based on the Selective State Space Model (S3M), which is designed to capture long-range temporal dependencies. This enables effective modeling of dynamic correlations among the redundant and often repetitive structures commonly found in road imagery. Furthermore, a neck structure based on Semantics and Detail Infusion (SDI) is introduced to guide cross-scale feature fusion. The SDI module enhances the integration of low-level spatial details with high-level semantic cues, thereby improving feature expressiveness and defect localization accuracy. Experimental evaluations demonstrate that the ARPD algorithm achieves a mean average precision (mAP) of 86.1% on a custom-labeled pavement defect dataset, outperforming the state-of-the-art YOLOv11 segmentation model. The algorithm also maintains strong generalization ability on public datasets. These results confirm that ARPD is well suited to diverse real-world applications in intelligent, large-scale highway defect monitoring and maintenance planning.
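The mAP metric quoted above averages per-class average precision (AP) over defect classes. One common AP form, the mean of precision at each correctly retrieved detection, is shown below; detection benchmarks differ in interpolation details, so treat this as one standard variant rather than the exact protocol used in the paper.

```python
def average_precision(ranked_hits, num_gt):
    """AP from detections sorted by confidence: ranked_hits[i] is 1 for a
    true positive and 0 for a false positive. AP is the mean, over all
    ground-truth objects, of the precision at each true positive's rank
    (missed objects contribute zero)."""
    tp = 0
    precisions = []
    for i, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / i)   # precision at this recall point
    return sum(precisions) / num_gt if num_gt else 0.0
```

For example, three ranked detections [TP, FP, TP] against two ground-truth defects give precisions 1/1 and 2/3 at the two hits, so AP = 5/6; mAP then averages such APs across classes.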
Roadbed disease detection is essential for maintaining road functionality. Ground penetrating radar (GPR) enables non-destructive detection without drilling. However, current identification often relies on manual inspection, which requires extensive experience, suffers from low efficiency, and is highly subjective. Because the results are presented as radar images, image processing methods can be applied for fast and objective identification, and deep learning-based approaches now offer a robust solution for automated roadbed disease detection. This study proposes an enhanced Faster Region-based Convolutional Neural Network (R-CNN) framework integrating ResNet-50 as the backbone and a two-dimensional discrete Fourier spectrum transformation (2D-DFT) for frequency-domain feature fusion. A dedicated GPR image dataset comprising 1650 annotated images was constructed and augmented to 6600 images via median filtering, histogram equalization, and binarization. The proposed model segments defect regions, applies binary masking, and fuses frequency-domain features to improve small-target detection against noisy backgrounds. Experimental results show that the improved Faster R-CNN achieves a mean Average Precision (mAP) of 0.92, a 0.22 increase over the baseline. Precision improved by 26% while recall remained stable at 87%. The model was further validated on real urban road data, demonstrating robust detection capability even under interference. These findings highlight the potential of combining GPR with deep learning for efficient, non-destructive roadbed health monitoring.
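The 2D-DFT named above maps an image patch into the frequency domain, where periodic GPR reflection patterns concentrate into a few coefficients. A brute-force reference implementation (fine for tiny patches; real systems would use an FFT) is shown; how the resulting spectrum is fused with spatial features is specific to the paper and not reproduced here.

```python
import cmath

def dft2(img):
    """Brute-force 2-D discrete Fourier transform of a small real image:
    F[u][v] = sum_{x,y} img[x][y] * exp(-2*pi*i*(u*x/H + v*y/W)).
    The [0][0] coefficient is the DC term (sum of all pixels)."""
    h, w = len(img), len(img[0])
    out = [[0j] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            s = 0j
            for x in range(h):
                for y in range(w):
                    s += img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / h + v * y / w))
            out[u][v] = s
    return out
```

A constant patch has all of its energy in the DC coefficient, while a defect's hyperbolic reflection spreads energy into higher frequencies; that contrast is what makes frequency-domain features useful against noisy backgrounds.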
Funding: supported by the National Key R&D Program of China (No. 2022YFC2504403), the National Natural Science Foundation of China (No. 62172202), the Experiment Project of China Manned Space Program (No. HYZHXM01019), and the Fundamental Research Funds for the Central Universities from Southeast University (No. 3207032101C3).
Funding: Funded by the Anhui Province University Key Science and Technology Project (2024AH053415), the Anhui Province University Major Science and Technology Project (2024AH040229), the Talent Research Initiation Fund Project of Tongling University (2024tlxyrc019), the Tongling University School-Level Scientific Research Project (2024tlxyptZD07), the University Synergy Innovation Program of Anhui Province (GXXT-2023-050), and the Tongling City Science and Technology Major Special Project (Unveiling and Commanding Model) (200401JB004).
Abstract: In the image fusion field, fusing infrared images (IRIs) and visible images (VIs) is a key area. The differences between IRIs and VIs make it challenging to fuse both types into a high-quality image. Accordingly, it is necessary to efficiently combine the advantages of both image types while overcoming their shortcomings. To handle this challenge, we developed an end-to-end IRI and VI fusion method based on frequency decomposition and enhancement. By applying concepts from frequency-domain analysis, we used a layering mechanism to better capture the salient thermal targets of the IRIs and the rich textural information of the VIs, significantly boosting image fusion quality and effectiveness. In addition, the backbone network combines Restormer Blocks and Dense Blocks: Restormer Blocks utilize global attention to extract shallow features, while Dense Blocks ensure the integration of shallow and deep features, thereby avoiding the loss of shallow attributes. Extensive experiments on the TNO and MSRS datasets demonstrated that the proposed method achieved state-of-the-art (SOTA) performance on various metrics: Entropy (EN), Mutual Information (MI), Standard Deviation (SD), the Structural Similarity Index Measure (SSIM), Fusion quality (Qabf), feature mutual information of the pixel (FMI_pixel), and modified Visual Information Fidelity (VIF_m).
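The frequency-decomposition idea described above can be illustrated with a minimal sketch. The paper's decomposition and fusion rules are learned networks; the box-filter low-pass and max-magnitude detail rule below are stand-in assumptions for illustration only.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box filter used as a stand-in low-pass
    (the paper's decomposition is learned; this is illustrative)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def frequency_decompose(img):
    """Split an image into a low-frequency base layer and a
    high-frequency detail layer; base + detail reconstructs the input."""
    base = box_blur(img)
    detail = img.astype(float) - base
    return base, detail

def naive_fuse(ir, vis):
    """Toy fusion rule: keep the IR base layer (salient thermal targets)
    and the max-magnitude detail layer (textures) from either modality."""
    ir_base, ir_detail = frequency_decompose(ir)
    _, vis_detail = frequency_decompose(vis)
    detail = np.where(np.abs(ir_detail) >= np.abs(vis_detail),
                      ir_detail, vis_detail)
    return ir_base + detail
```

The base/detail split is exact by construction (base + detail equals the input), which is the property that lets the two layers be processed independently and recombined.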
Funding: Financially supported by the Open Project Program of Wuhan National Laboratory for Optoelectronics (No. 2022WNLOKF009), the National Natural Science Foundation of China (No. 62475216), the Key Research and Development Program of Shaanxi (No. 2024GH-ZDXM-37), the Fujian Provincial Natural Science Foundation of China (No. 2024J01060), the Startup Program of XMU, and the Fundamental Research Funds for the Central Universities.
Abstract: Microscopy imaging is fundamental to analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes. These methods combine automated feature extraction with physics-informed postprocessing. Despite this progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12402336, U20A2070, and 12025202), the Natural Science Foundation of Jiangsu Province (Grant No. BK20230876), the National High-Level Talent Project (Grant No. YQR23069), the Key Laboratory of Intake and Exhaust Technology, Ministry of Education (Grant No. CEPE2024015), and the Key Laboratory of Mechanics and Control for Aerospace Structures (Nanjing University of Aeronautics and Astronautics) (Grant No. MCAS-I-0325K01).
Abstract: Schlieren imaging is a widely used technique for visualizing the structure of supersonic flow fields, which are usually dominated by shock waves. Precise identification of shock waves in schlieren images provides critical insights for flow diagnostics, especially for supersonic inlets, whose performance is highly associated with that of the whole flight. However, conventional shock wave identification methods have limited accuracy in segmenting the shock wave. To overcome this limitation, we proposed an automated shock wave identification method (SW-Segment) that attains high-resolution, automatic shock wave segmentation by integrating correlation-based feature extraction with graph search. We demonstrated the efficacy of SW-Segment by identifying shock waves in numerically and experimentally obtained schlieren images. The results showed that SW-Segment achieved a shock wave identification accuracy of 95.24% in the numerical schlieren image and 88.33% in the experimental image, clearly demonstrating its reliability. SW-Segment holds broad applicability for shock wave detection in diverse schlieren imaging scenarios, offering robust data support for flow field analysis and supersonic flight design.
Funding: Supported by the National Natural Science Foundation of China (62522119 and 62372358), the Beijing Natural Science Foundation (7242267), the Beijing Scholars Program ([2015]160), the Natural Science Basic Research Program of Shaanxi (2023-JC-QN-0719), and the Guangdong Basic and Applied Basic Research Foundation (2022A1515110453).
Abstract: Background: Brain volume measurement serves as a critical approach for assessing brain health status. Considering the close biological connection between the eyes and brain, this study aims to investigate the feasibility of estimating brain volume from retinal fundus imaging integrated with clinical metadata, offering a cost-effective approach for assessing brain health. Methods: Based on clinical information, retinal fundus images, and neuroimaging data derived from a multicenter, population-based cohort study, the Kai Luan Study, we proposed a cross-modal correlation representation (CMCR) network to elucidate the intricate co-degenerative relationships between the eyes and brain for 755 subjects. Specifically, individual clinical information, followed up for as long as 12 years, was encoded as a prompt to enhance the accuracy of brain volume estimation. Independent internal validation and external validation were performed to assess the robustness of the proposed model. Root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics were employed to quantitatively evaluate the quality of synthetic brain images derived from retinal imaging data. Results: The proposed framework yielded average RMSE, PSNR, and SSIM values of 98.23, 35.78 dB, and 0.64, respectively, significantly outperforming 5 other methods: multi-channel Variational Autoencoder (mcVAE), Pixel-to-Pixel (Pixel2pixel), transformer-based U-Net (TransUNet), multi-scale transformer network (MT-Net), and residual vision transformer (ResViT). The two- (2D) and three-dimensional (3D) visualization results showed that the shape and texture of the synthetic brain images generated by the proposed method most closely resembled those of actual brain images. Thus, the CMCR framework accurately captured the latent structural correlations between the fundus and the brain. The average difference between predicted and actual brain volumes was 61.36 cm³, with a relative error of 4.54%. When all of the clinical information (including age and sex, daily habits, cardiovascular factors, metabolic factors, and inflammatory factors) was encoded, the difference decreased to 53.89 cm³, with a relative error of 3.98%. Based on the brain magnetic resonance images synthesized from retinal fundus images, the volumes of brain tissues could be estimated with high accuracy. Conclusion: This study provides an innovative, accurate, and cost-effective approach to characterizing brain health status through readily accessible retinal fundus images.
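Two of the image-quality metrics used above, RMSE and PSNR, follow directly from their standard definitions. A minimal sketch (the paper's evaluation pipeline is not specified; this only shows how the reported numbers relate):

```python
import numpy as np

def rmse(pred, target):
    """Root mean square error between two images; lower is better."""
    diff = pred.astype(float) - target.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images
    are closer. max_val is the dynamic range of the pixel values."""
    err = rmse(pred, target)
    if err == 0:
        return float("inf")  # identical images
    return float(20.0 * np.log10(max_val / err))
```

For 8-bit images, an RMSE of 98.23 on some axis of comparison and a PSNR around 35.78 dB are reported jointly in the abstract; note that PSNR is a monotone function of RMSE only when both are computed over the same pair of images.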
Abstract: Medical image segmentation is of critical importance in contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as gradient vanishing during deep network training and spatial detail loss during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which improves upon U-KAN in two ways. First, a residual network is introduced into the encoder to effectively mitigate gradient vanishing through cross-layer identity mappings, thus enhancing modelling capabilities for complex pathological structures. Second, Efficient Local Attention (ELA) is integrated to suppress spatial detail loss during downsampling, thereby improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, with particularly strong performance on the TN-SCUI 2020 dataset, achieving an IoU of 88.18% and a Dice of 93.57%, improvements of 3.05% and 1.72% over the baseline model, respectively. These results demonstrate RE-UKAN's superior detail retention and boundary recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for clinically precise segmentation.
Funding: An initial outcome of the Research on the Interactive Relationship Between Biographies and Epitaphs in Ancient China, a project (ID: 24BZW023) supported by the National Social Science Fund of China.
Abstract: The historical image of Ouyang Xiu constructed during the Song Dynasty evolved from a multifaceted portrayal that balanced his political and literary achievements into a singular cultural symbol. In the Northern Song Dynasty, writings by Ouyang Xiu's family and epitaphs by his colleagues crafted a balanced narrative emphasizing both his official duties and literary merits, constructing a dual image of him as a principled remonstrator and a literary master. In the Southern Song Dynasty, official historiography gradually eroded his complex persona as a political reformer by selectively trimming political disputes and emphasizing his literary lineage, ultimately establishing him as a cultural exemplar beyond factional strife. Throughout this evolution of historical writing, Ouyang Xiu's sharpness as a remonstrator was gradually obscured in historical texts, while his image as a literary master, revered by all, became firmly established. The reshaping of Ouyang Xiu's image in historical writings across the Northern and Southern Song dynasties not only reflects the logic of selecting scholar-official role models under the influence of official ideology but also reveals the inherent pattern whereby individual distinctiveness fades into symbolic construction in historical writing.
Funding: Supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province, China (Grant No. SJCX24_1332), the Jiangsu Province Education Science Planning Project in 2024 (Grant No. B-b/2024/01/122), and the High-Level Talent Scientific Research Foundation of Jinling Institute of Technology, China (Grant No. jit-b-201918).
Abstract: Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright. In practical applications, however, this technology faces problems such as severe image distortion, inaccurate localization of tampered regions, and difficulty in recovering content. Given these shortcomings, a fragile image watermarking algorithm for blind tampering detection and content self-recovery is proposed. A multi-feature watermarking authentication code (AC) is constructed using the texture feature of local binary patterns (LBP), the direct coefficient of the discrete cosine transform (DCT), and the contrast feature of the gray level co-occurrence matrix (GLCM) for detecting the tampered region, and a recovery code (RC) is designed from the average grayscale value of pixels in image blocks for recovering the tampered content. The optimal pixel adjustment process (OPAP) and least significant bit (LSB) algorithms are used to embed the recovery code and authentication code into the image in a staggered manner. When checking the integrity of an image, an authentication code comparison method and a threshold judgment method perform two rounds of tampering detection, and the tampered content is blindly recovered. Experimental results show that this algorithm offers good transparency and strong blind-detection and self-recovery performance against four types of malicious attacks and several conventional signal processing operations. When resisting copy-paste, text addition, cropping, and vector quantization at a tampering rate (TR) of 10%, the average tampering detection rate reaches 94.09%, and the peak signal-to-noise ratios (PSNR) of the watermarked image and the recovered image are greater than 41.47 dB and 40.31 dB, respectively, demonstrating clear advantages over other related algorithms from recent years.
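The LSB embedding step named above can be sketched in a few lines. This is a bare LSB round-trip, not the paper's full AC/RC construction or its OPAP variant; the bit layout and function names are assumptions for illustration.

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Write one watermark bit into the least significant bit of each
    of the first len(bits) pixels. Changes each pixel by at most 1."""
    pixels = pixels.astype(np.uint8).copy()
    flat = pixels.ravel()            # view into the copy
    n = min(flat.size, len(bits))
    flat[:n] = (flat[:n] & 0xFE) | np.asarray(bits[:n], dtype=np.uint8)
    return pixels

def extract_lsb(pixels, n):
    """Blindly read back the first n embedded bits (no original needed)."""
    return (pixels.ravel()[:n] & 1).tolist()
```

The at-most-1 change per pixel is what keeps LSB watermarking "transparent" (high PSNR); OPAP refines this by adjusting higher bits to further reduce the embedding error.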
Abstract: Compact size, high brightness, and a wide field of view (FOV) are key requirements for long-wave infrared imagers used in military surveillance and night navigation. However, to meet the imaging requirements of high resolution and wide FOV, infrared optical systems often adopt complex optical lens groups, which increase the size and weight of the optical system. In this paper, a strategy based on wavefront coding (WFC) is proposed to design a compact wide-FOV infrared imager. A cubic phase mask is inserted into the pupil plane of the infrared imager to correct the aberration. Simulated results show that the WFC infrared imager has good imaging quality over a wide FOV of ±16°. In addition, the WFC infrared imager achieves compactness with its 40 mm × 40 mm × 40 mm size. A fast focal ratio of 1 combined with an entrance pupil diameter of 25 mm ensures brightness. This work is significant for the design of compact wide-FOV infrared imagers.
Funding: Funded by the National Natural Science Foundation of China (NNSFC) under Grant Numbers 42322408, 42188101, and 42441809. Additional support was provided by the Climbing Program of the National Space Science Center (NSSC, Grant No. E4PD3005), as well as the Specialized Research Fund for State Key Laboratories of China.
Abstract: A large-scale view of the magnetospheric cusp is expected to be obtained by the Soft X-ray Imager (SXI) onboard the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE). However, it is challenging to trace the three-dimensional cusp boundary from a two-dimensional X-ray image because the detected X-ray signals are integrated along the line of sight. In this work, a global magnetohydrodynamic code was used to simulate the X-ray images and photon count images, assuming an interplanetary magnetic field with a pure Bz component. The assumption of an elliptic cusp boundary at a given altitude was used to trace the equatorward and poleward boundaries of the cusp from a simulated X-ray image. The average discrepancy was less than 0.1 RE. To reduce the influence of instrument effects and cosmic X-ray backgrounds, image denoising was applied before using the method above on SXI photon count images. The cusp boundaries were reasonably reconstructed from the noisy X-ray image.
Funding: Funded by the National Natural Science Foundation of China, grant numbers 52374156 and 62476005.
Abstract: Images taken in dim environments frequently exhibit issues such as insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges for dark image enhancement. Current approaches, while effective at global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between luminance and color channels introduces additional challenges to accurate enhancement. In response to these difficulties, we introduce a single-stage framework, M2ATNet, built on multi-scale multi-attention and a Transformer architecture. First, to address texture blurring and residual noise, we design a multi-scale multi-attention denoising module (MMAD), applied separately to the luminance and color channels to strengthen structural and texture modeling. Second, to solve the non-alignment of the luminance and color channels, we introduce a multi-channel feature fusion Transformer (CFFT) module, which effectively recovers dark details and corrects color shifts through cross-channel alignment and deep feature interaction. To guide the model toward more stable and efficient learning, we also fuse multiple types of loss functions into a hybrid loss term. We extensively evaluate the proposed method on standard datasets, including LOL-v1, LOL-v2, DICM, LIME, and NPE. Evaluation in terms of numerical metrics and visual quality demonstrates that M2ATNet consistently outperforms existing advanced approaches. Ablation studies further confirm the critical roles of the MMAD and CFFT modules in detail preservation and visual fidelity under challenging illumination-deficient environments.
Funding: Supported by the Deanship of Research and Graduate Studies at King Khalid University through the Large Research Project under grant number RGP2/540/46.
Abstract: Over the years, Generative Adversarial Networks (GANs) have revolutionized medical imaging for applications such as image synthesis, denoising, super-resolution, data augmentation, and cross-modality translation. The objective of this review is to evaluate the advances, relevance, and limitations of GANs in medical imaging. An organized literature review was conducted following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. The literature considered included peer-reviewed papers published between 2020 and 2025 across databases including PubMed, IEEE Xplore, and Scopus. Studies applying GAN architectures to medical imaging with reported experimental outcomes, published in English in reputable journals and conferences, were considered for the review; theses, white papers, communication letters, and non-English articles were excluded. CLAIM-based quality assessment criteria were applied to the included studies. The study classifies diverse GAN architectures, summarizing their clinical applications, technical performance, and implementation challenges. Key findings reveal the increasing use of GANs for enhancing diagnostic accuracy, reducing data scarcity through synthetic data generation, and supporting modality translation. However, concerns such as limited generalizability, lack of clinical validation, and regulatory constraints persist. This review provides a comprehensive account of the prevailing state of GANs in medical imaging and highlights crucial research gaps and future directions. Though GANs hold transformative capability for medical imaging, their integration into clinical use demands further validation, interpretability, and regulatory alignment.
Funding: Supported by the National Natural Science Foundation of China (Nos. 12205044 and 12265003) and the 2024 Jiangxi Province Civil-Military Integration Research Institute 'BeiDou+' Project Subtopic (No. 2024JXRH0Y06).
Abstract: Unmanned aerial vehicle (UAV)-borne gamma-ray spectrum surveys play a crucial role in geological mapping, radioactive mineral exploration, and environmental monitoring. However, raw data are often compromised by flight and instrument background noise, as well as detector resolution limitations, which affect the accuracy of geological interpretations. This study explores the application of the Real-ESRGAN algorithm to super-resolution reconstruction of UAV-borne gamma-ray spectrum images, enhancing spatial resolution and the quality of geological feature visualization. We conducted super-resolution reconstruction experiments at 2×, 4×, and 6× magnification using the Real-ESRGAN algorithm, comparing the results with three other mainstream algorithms (SRCNN, SRGAN, FSRCNN) to verify its superiority in image quality. The experimental results indicate that Real-ESRGAN achieved a structural similarity index (SSIM) value of 0.950 at 2× magnification, significantly higher than the other algorithms, demonstrating its advantage in detail preservation. Furthermore, Real-ESRGAN effectively reduced ringing and overshoot artifacts, enhancing the clarity of geological structures and mineral deposit sites and thus providing high-quality visual information for geological exploration.
Funding: Supported by the National Key Research and Development Programme 'Frontier Research on Large Scientific Devices' Key Special Project (2024YFA1612000), the Sino-German Science Foundation Program (M-0086), and the Yunnan Science and Technology Leading Talent Program (202105AB160001).
Abstract: The Chinese Giant Solar Telescope (CGST) low-dispersion spectrograph requires a large field of view (FOV) and high spatial resolution, which can be addressed by a carefully designed image slicer system. Our proposed design divides the rectangular 50″×20″ FOV at the telescope focal plane into four 50″×5″ subfields. Each subfield undergoes optical reconstruction using its own collimator-camera system (F/36-F/25.79), achieving vertical alignment and focal reduction of the subfields to form a pseudo-slit. Using tilt mirrors for scanning allows simultaneous acquisition of spectral data with both a large FOV and a high angular resolution of 0.05″. This resolves the manufacturing challenges of an image slicer, avoiding the requirement for hundreds of elements, multi-angle configurations, and compact dimensions, and provides effective technical support for engineering work on the CGST.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62025104, 62331005, and U22A2052) and the Beijing Natural Science Foundation (Grant No. L242100).
Abstract: Background: Computed tomography (CT) and cone-beam computed tomography (CBCT) image registration play pivotal roles in computer-assisted navigation for orthopedic surgery. Traditional methods often apply uniform deformation models, neglecting the biomechanical differences between rigid structures and soft tissues, which compromises registration accuracy, especially during significant bone displacements. Method: To address this issue, we introduce RE-Reg, a rigid-elastic CT-CBCT image registration framework that jointly learns rigid bone motion and soft tissue deformation. RE-Reg incorporates a rigid alignment (RA) module to estimate global bone motion and an elastic deformation (ED) module to model soft tissue deformation, preserving bony structures through a bone shape preservation (BSP) loss. Result: Our comprehensive evaluation on publicly available datasets demonstrates that RE-Reg significantly outperforms existing methods in registration accuracy and rigid bone structure preservation, achieving a 1.3% improvement in Dice similarity coefficient (DSC) and a 23% reduction in rigid bone deformation (%Δvol) compared with the best baseline. Conclusion: This framework not only enhances anatomical fidelity but also ensures biomechanical plausibility, providing a valuable tool for image-guided orthopedic surgery. The code is available at https://github.com/Zq-Huang/RE-Reg.
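The Dice similarity coefficient (DSC) reported above has a standard definition that can be sketched directly; the evaluation details in the paper (mask extraction, class handling) are not specified here, so this is a generic binary-mask version only.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap, 0.0 none."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + eps))
```

A "1.3% improvement in DSC" therefore corresponds to registered anatomy whose warped mask overlaps the target mask by 1.3 percentage points more of this normalized measure.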
Funding: Supported by the Applied Brand Course of Mianyang Teacher's College (Investigation and Monitoring of Natural Resources).
Abstract: With the rapid development of image-generative AI (artificial intelligence) technology, its application in undergraduate Landscape Architecture education has demonstrated significant potential. On this basis, the present study explores the implications of integrating image-generative AI into Landscape Architecture courses from three perspectives: stimulating students' creative design potential, expanding approaches to form and concept generation, and enhancing the visualization of spatial scenes. Furthermore, it discusses application strategies along three dimensions: AI-assisted conceptual generation, human-machine collaboration for design refinement, and optimization of scheme presentation and evaluation. This paper aims to provide relevant educators with insights and references.
Funding: Provided by the Science Research Project of Hebei Education Department under grant No. BJK2024115.
Abstract: High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information owing to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial detail pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, transformers utilize multi-head self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is placed at the end of the encoder to refine the encoder outputs, enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to model rich multiscale global and local information. We also design a stair fusion mechanism that progressively transmits deep semantic information from deep to shallow layers. Finally, we propose a semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
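The mIoU metric quoted above follows a standard per-class definition. A minimal sketch (the benchmark's exact class list and ignore-label handling are assumptions not stated in the abstract):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in either
    mask: IoU_c = |P_c ∩ T_c| / |P_c ∪ T_c|, averaged over c."""
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # skip classes absent from both masks
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

An mIoU of 86.89% on Potsdam thus means that, averaged over the semantic classes, predicted regions overlap ground-truth regions by roughly 87% of their union.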
Funding: Supported in part by the Technical Service for the Development and Application of an Intelligent Visual Management Platform for Expressway Construction Progress Based on BIM Technology (Grant No. JKYZLX-2023-09), and in part by the Technical Service for the Development of an Early Warning Model in the Research and Application of Key Technologies for Tunnel Operation Safety Monitoring and Early Warning Based on Digital Twin (Grant No. JK-S02-ZNGS-202412-JISHU-FA-0035), sponsored by Yunnan Transportation Science Research Institute Co., Ltd.
Abstract: With the rapid development of transportation infrastructure, ensuring road safety through timely and accurate highway inspection has become increasingly critical. Traditional manual inspection methods are not only time-consuming and labor-intensive but also struggle to provide consistent, high-precision detection and real-time monitoring of pavement surface defects. To overcome these limitations, we propose an Automatic Recognition of Pavement Defect (ARPD) algorithm, which leverages unmanned aerial vehicle (UAV)-based aerial imagery to automate the inspection process. The ARPD framework incorporates a backbone network based on the Selective State Space Model (S3M), designed to capture long-range temporal dependencies. This enables effective modeling of dynamic correlations among the redundant and often repetitive structures commonly found in road imagery. Furthermore, a neck structure based on Semantics and Detail Infusion (SDI) is introduced to guide cross-scale feature fusion. The SDI module enhances the integration of low-level spatial details with high-level semantic cues, thereby improving feature expressiveness and defect localization accuracy. Experimental evaluations demonstrate that the ARPD algorithm achieves a mean average precision (mAP) of 86.1% on a custom-labeled pavement defect dataset, outperforming the state-of-the-art YOLOv11 segmentation model. The algorithm also maintains strong generalization on public datasets. These results confirm that ARPD is well suited to diverse real-world applications in intelligent, large-scale highway defect monitoring and maintenance planning.
Funding: Supported by the Second Batch of Key Textbook Construction Projects of the '14th Five-Year Plan' of Zhejiang Vocational Colleges (SZDJC-2412).
Abstract: Roadbed disease detection is essential for maintaining road functionality. Ground penetrating radar (GPR) enables non-destructive detection without drilling. However, current identification often relies on manual inspection, which requires extensive experience, suffers from low efficiency, and is highly subjective. As the results are presented as radar images, image processing methods can be applied for fast and objective identification, and deep learning-based approaches now offer a robust solution for automated roadbed disease detection. This study proposes an enhanced Faster Region-based Convolutional Neural Network (R-CNN) framework that integrates ResNet-50 as the backbone and two-dimensional discrete Fourier spectrum transformation (2D-DFT) for frequency-domain feature fusion. A dedicated GPR image dataset comprising 1650 annotated images was constructed and augmented to 6600 images via median filtering, histogram equalization, and binarization. The proposed model segments defect regions, applies binary masking, and fuses frequency-domain features to improve small-target detection under noisy backgrounds. Experimental results show that the improved Faster R-CNN achieves a mean Average Precision (mAP) of 0.92, a 0.22 increase over the baseline. Precision improved by 26% while recall remained stable at 87%. The model was further validated on real urban road data, demonstrating robust detection capability even under interference. These findings highlight the potential of combining GPR with deep learning for efficient, non-destructive roadbed health monitoring.
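The 2D-DFT frequency-domain feature mentioned above is, at its core, a centered log-magnitude spectrum of the image. A minimal sketch of that transform alone (how the paper fuses the spectrum into the detector's feature maps is not specified here):

```python
import numpy as np

def log_magnitude_spectrum(img):
    """Centered log-magnitude 2D-DFT spectrum of a grayscale image.
    fftshift moves the DC (zero-frequency) term to the center, and
    log1p compresses the large dynamic range of the magnitudes."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    return np.log1p(np.abs(f))
```

Low-frequency content (smooth background) concentrates near the center of this spectrum, while the discontinuities produced by subsurface defects contribute energy at higher frequencies, which is what makes the spectrum a useful complementary feature under noisy backgrounds.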