The reverse design of solid rocket motor(SRM)propellant grain involves determining the grain geometry to closely match a predefined internal ballistic curve.While existing reverse design methods are feasible,they ofte...The reverse design of solid rocket motor(SRM)propellant grain involves determining the grain geometry to closely match a predefined internal ballistic curve.While existing reverse design methods are feasible,they often face challenges such as lengthy computation times and limited accuracy.To achieve rapid and accurate matching between the targeted ballistic curve and complex grain shape,this paper proposes a novel reverse design method for SRM propellant grain based on time-series data imaging and convolutional neural network(CNN).First,a finocyl grain shape-internal ballistic curve dataset is created using parametric modeling techniques to comprehensively cover the design space.Next,the internal ballistic time-series data is encoded into three-channel images,establishing a potential relationship between the ballistic curves and their image representations.A CNN is then constructed and trained using these encoded images.Once trained,the model enables efficient inference of propellant grain dimensions from a target internal ballistic curve.This paper conducts comparative experiments across various neural network models,validating the effectiveness of the feature extraction method that transforms internal ballistic time-series data into images,as well as its generalization capability across different CNN architectures.Ignition tests were performed based on the predicted propellant grain.The results demonstrate that the relative error between the experimental internal ballistic curves and the target curves is less than 5%,confirming the validity and feasibility of the proposed reverse design methodology.展开更多
Reversible data hiding(RDH)enables secret data embedding while preserving complete cover image recovery,making it crucial for applications requiring image integrity.The pixel value ordering(PVO)technique used in multi...Reversible data hiding(RDH)enables secret data embedding while preserving complete cover image recovery,making it crucial for applications requiring image integrity.The pixel value ordering(PVO)technique used in multi-stego images provides good image quality but often results in low embedding capability.To address these challenges,this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image.The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order.Four secret bits are embedded into each block’s maximum pixel value,while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold.A similar embedding strategy is also applied to the minimum side of the block,including the second-smallest pixel value.This design enables each block to embed up to 14 bits of secret data.Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches,advancing the field of reversible steganography.展开更多
Recently,water extraction based on the indices method has been documented in many studies using various remote sensing data sources.Among them,Landsat satellites data have certain advantages in spatial resolution and ...Recently,water extraction based on the indices method has been documented in many studies using various remote sensing data sources.Among them,Landsat satellites data have certain advantages in spatial resolution and cost.After the successful launch of Landsat 8,the Operational Land Imager(OLI)data from the satellite are getting more and more attention because of its new improvements.In this study,we used the OLI imagery data source to study the water extraction performance based on the Normalized Difference Vegetation Index,Normalized Difference Water Index,Modified Normalized Water Index(MNDWI),and Automated Water Extraction Index(AWEI)and compared the results with the Thematic Mapper(TM)imagery data.Two test sites in Tianjin City of north China were selected as the study area to verify the applicability of OLI data and demonstrate its advantages over TM data.We found that the results of surface water extraction based on OLI data are slightly better than that based on TM in the two test sites,especially in the city site.The AWEI and MNDWI indices performs better than the other two indices,and the thresholds of water indices show more stability when using the OLI data.So,it is suitable to combine OLI imagery with other Landsat sensor data to study water changes for long periods of time.展开更多
A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken b...A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent ~eological studies.展开更多
The volume FeO and TiO_2 abundances(FTAs) of lunar regolith can be more important for understanding the geological evolution of the Moon compared to the optical and gamma-ray results. In this paper, the volume FTAs ar...The volume FeO and TiO_2 abundances(FTAs) of lunar regolith can be more important for understanding the geological evolution of the Moon compared to the optical and gamma-ray results. In this paper, the volume FTAs are retrieved with microwave sounder(CELMS) data from the Chang'E-2 satellite using the back propagation neural network(BPNN) method. Firstly, a three-layered BPNN network with five-dimensional input is constructed by taking nonlinearity into account. Then, the brightness temperature(TB) and surface slope are set as the inputs and the volume FTAs are set as the outputs of the BPNN network.Thereafter, the BPNN network is trained with the corresponding parameters collected from Apollo, Luna,and Surveyor missions. Finally, the volume FTAs are retrieved with the trained BPNN network using the four-channel TBderived from the CELMS data and the surface slope estimated from Lunar Orbiter Laser Altimeter(LOLA) data. The rationality of the retrieved FTAs is verified by comparing with the Clementine UV-VIS results and Lunar Prospector(LP) GRS results. The retrieved volume FTAs enable us to re-evaluate the geological features of the lunar surface. Several important results are as follows. Firstly, very-low-Ti(<1.5 wt.%) basalts are the most spatially abundant, and the surfaces with TiO_2> 5 wt.% constitute less than 10% of the maria. Also, two linear relationships occur between the FeO abundance(FA) and the TiO_2 abundance before and after the threshold, 16 wt.% for FA. Secondly, a new perspective on mare volcanism is derived with the volume FTAs in several important mare basins, although this conclusion should be verified with more sources of data. Thirdly, FTAs in the lunar regolith change with depth to the uppermost surface,and the change is complex over the lunar surface. Finally, the distribution of volume FTAs hints that the highlands crust is probably homogeneous, at least in terms of the microwave thermophysical parameters.展开更多
Automatic road detection, in dense urban areas, is a challenging application in the remote sensing community. This is mainly because of physical and geometrical variations of road pixels, their spectral similarity to ...Automatic road detection, in dense urban areas, is a challenging application in the remote sensing community. This is mainly because of physical and geometrical variations of road pixels, their spectral similarity to other features such as buildings, parking lots and sidewalks, and the obstruction by vehicles and trees. These problems are real obstacles in precise detection and identification of urban roads from high-resolution satellite imagery. One of the promising strategies to deal with this problem is using multi-sensors data to reduce the uncertainties of detection. In this paper, an integrated object-based analysis framework was developed for detecting and extracting various types of urban roads from high-resolution optical images and Lidar data. The proposed method is designed and implemented using a rule-oriented approach based on a masking strategy. The overall accuracy (OA) of the final road map was 89.2%, and the kappa coefficient of agreement was 0.83, which show the efficiency and performance of the method in different conditions and interclass noises. The results also demonstrate the high capability of this object-based method in simultaneous identification of a wide variety of road elements in complex urban areas using both high-resolution satellite images and Lidar data.展开更多
The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and hist...The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and historical context,DL offers a powerful complement by enhancing the speed,objectivity,and precision of the classification process.This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks(CNNs)for geological image analysis,particularly in the classification of igneous,metamorphic,and sedimentary rock types from rock thin section(RTS)images.This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision.Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities,achieving an F1-Score of 0.9869 for igneous rocks,0.9884 for metamorphic rocks,and 0.9929 for sedimentary rocks,representing improvements compared to the baseline original results.Moreover,the weighted average F1-Score across all classes and techniques is 0.9886,indicating an enhancement.Conversely,methods like Distort lead to decreased accuracy and F1-Score,with an F1-Score of 0.949 for igneous rocks,0.954 for metamorphic rocks,and 0.9416 for sedimentary rocks,exacerbating the performance compared to the baseline.The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results.The findings of this study can benefit various fields,including remote sensing,mineral exploration,and environmental monitoring,by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.展开更多
Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks.Due to the diversity and robustness constraints of the data,data augmenta...Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks.Due to the diversity and robustness constraints of the data,data augmentation(DA)methods are utilised to expand dataset diversity and scale.However,due to the complex and distinct characteristics of LiDAR point cloud data from different platforms(such as missile-borne and vehicular LiDAR data),directly applying traditional 2D visual domain DA methods to 3D data can lead to networks trained using this approach not robustly achieving the corresponding tasks.To address this issue,the present study explores DA for missile-borne LiDAR point cloud using a Monte Carlo(MC)simulation method that closely resembles practical application.Firstly,the model of multi-sensor imaging system is established,taking into account the joint errors arising from the platform itself and the relative motion during the imaging process.A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is proposed,underpinned by an analysis of combined errors between different modal sensors,achieving high-quality augmentation of point cloud data.The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper.Comparative experiments between the proposed point cloud DA algorithm and the current state-of-the-art algorithms in point cloud detection and single object tracking tasks demonstrate that the proposed method can improve the network performance obtained from unaugmented datasets by over 17.3%and 17.9%,surpassing SOTA performance of current point cloud DA algorithms.展开更多
【Objective】Medical imaging data has great value,but it contains a significant amount of sensitive information about patients.At present,laws and regulations regarding to the de-identification of medical imaging data...【Objective】Medical imaging data has great value,but it contains a significant amount of sensitive information about patients.At present,laws and regulations regarding to the de-identification of medical imaging data are not clearly defined around the world.This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs.【Methods】To enhance the security of medical image data,we designed and implemented a DICOM format medical image de-identification system on the Windows operating system.【Results】Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands.The system offers both web-based online and desktop offline de-identification capabilities,enabling customization of de-identification rules and facilitating batch processing to improve efficiency.【Conclusions】This medical image de-identification system robustly strengthens the stewardship of sensitive medical data,aligning with data security protection requirements while facilitating the sharing and utilization of medical image data.This approach unlocks the intrinsic value inherent in such datasets.展开更多
Medical image segmentation,i.e.,labeling structures of interest in medical images,is crucial for disease diagnosis and treatment in radiology.In reversible data hiding in medical images(RDHMI),segmentation consists of...Medical image segmentation,i.e.,labeling structures of interest in medical images,is crucial for disease diagnosis and treatment in radiology.In reversible data hiding in medical images(RDHMI),segmentation consists of only two regions:the focal and nonfocal regions.The focal region mainly contains information for diagnosis,while the nonfocal region serves as the monochrome background.The current traditional segmentation methods utilized in RDHMI are inaccurate for complex medical images,and manual segmentation is time-consuming,poorly reproducible,and operator-dependent.Implementing state-of-the-art deep learning(DL)models will facilitate key benefits,but the lack of domain-specific labels for existing medical datasets makes it impossible.To address this problem,this study provides labels of existing medical datasets based on a hybrid segmentation approach to facilitate the implementation of DL segmentation models in this domain.First,an initial segmentation based on a 33 kernel is performed to analyze×identified contour pixels before classifying pixels into focal and nonfocal regions.Then,several human expert raters evaluate and classify the generated labels into accurate and inaccurate labels.The inaccurate labels undergo manual segmentation by medical practitioners and are scored based on a hierarchical voting scheme before being assigned to the proposed dataset.To ensure reliability and integrity in the proposed dataset,we evaluate the accurate automated labels with manually segmented labels by medical practitioners using five assessment metrics:dice coefficient,Jaccard index,precision,recall,and accuracy.The experimental results show labels in the proposed dataset are consistent with the subjective judgment of human experts,with an average accuracy score of 94%and dice coefficient scores between 90%-99%.The study further proposes a ResNet-UNet with concatenated spatial and channel squeeze and excitation(scSE)architecture for semantic segmentation to validate and illustrate the usefulness of the proposed dataset.The results demonstrate the superior performance of the proposed architecture in accurately separating the focal and nonfocal regions compared to state-of-the-art architectures.Dataset information is released under the following URL:https://www.kaggle.com/lordamoah/datasets(accessed on 31 March 2025).展开更多
The standing waves existing in radio telescope data are primarily due to reflections among the instruments,which significantly impact the spectral quality of the Five-hundred-meter Aperture Spherical radio Telescope(F...The standing waves existing in radio telescope data are primarily due to reflections among the instruments,which significantly impact the spectral quality of the Five-hundred-meter Aperture Spherical radio Telescope(FAST).Eliminating these standing waves for FAST is challenging given the constant changes in their phases and amplitudes.Over a ten-second period,the phases shift by 18°while the amplitudes fluctuate by 6 mK.Thus,we developed the fast Fourier transform(FFT)filter method to eliminate these standing waves for every individual spectrum.The FFT filter can decrease the rms from 3.2 to 1.15 times the theoretical estimate.Compared to other methods such as sine fitting and running median,the FFT filter achieves a median rms of approximately 1.2 times the theoretical expectation and the smallest scatter at 12%.Additionally,the FFT filter method avoids the flux loss issue encountered with some other methods.The FFT is also efficient in detecting harmonic radio frequency interference(RFI).In the FAST data,we identified three distinct types of harmonic RFI,each with amplitudes exceeding 100 mK and intrinsic frequency periods of 8.1,0.5,and 0.37 MHz,respectively.The FFT filter,proven as the most effective method,is integrated into the H I data calibration and imaging pipeline for FAST(HiFAST,https://hifast.readthedocs.io).展开更多
Dear Editor,Growing clinical evidence shows that brain disorders are heterogeneous in phenotype,genetics,and neuropathology[1].Diagnosis and treatment tend to be affected by symptom presentation and the heterogeneity ...Dear Editor,Growing clinical evidence shows that brain disorders are heterogeneous in phenotype,genetics,and neuropathology[1].Diagnosis and treatment tend to be affected by symptom presentation and the heterogeneity of pathology,potentially hindering clinical trials in the development of medical treatment.Brain-based subtyping studies utilize magnetic resonance imaging(MRI)and data-driven methods to discover the subtypes of diseases,providing a new perspective on disease heterogeneity.展开更多
Depression,a pervasive mental health disorder,has substantial impacts on both individuals and society.The conventional approach to predicting depression necessitates substantial collaboration between health care profe...Depression,a pervasive mental health disorder,has substantial impacts on both individuals and society.The conventional approach to predicting depression necessitates substantial collaboration between health care professionals and patients,leaving room for the influence of subjective factors.Consequently,it is imperative to develop a more efficient and accessible prediction methodology for depression.In recent years,numerous investigations have delved into depression prediction techniques,employing diverse data modalities and yielding notable advancements.Given the rapid progression of this domain,the present article comprehensively reviews major breakthroughs in depression prediction,encompassing multiple data modalities such as electrophysiological signals,brain imaging,audiovisual data,and text.By integrating depression prediction methods from various data modalities,it offers a comparative assessment of their advantages and limitations,providing a well-rounded perspective on how different modalities can complement each other for more accurate and holistic depression prediction.The survey begins by examining commonly used datasets,evaluation metrics,and methodological frameworks.For each data modality,it systematically analyzes traditional machine learning methods alongside the increasingly prevalent deep learning approaches,providing a comparative assessment of detection frameworks,feature representations,context modeling,and training strategies.Finally,the survey culminates with the identification of prospective avenues that warrant further exploration.It provides researchers with valuable insights and practical guidance to advance the field of depression prediction.展开更多
Data augmentation plays an important role in training deep neural model by expanding the size and diversity of the dataset.Initially,data augmentation mainly involved some simple transformations of images.Later,in ord...Data augmentation plays an important role in training deep neural model by expanding the size and diversity of the dataset.Initially,data augmentation mainly involved some simple transformations of images.Later,in order to increase the diversity and complexity of data,more advanced methods appeared and evolved to sophisticated generative models.However,these methods required a mass of computation of training or searching.In this paper,a novel training-free method that utilises the Pre-Trained Segment Anything Model(SAM)model as a data augmentation tool(PTSAM-DA)is proposed to generate the augmented annotations for images.Without the need for training,it obtains prompt boxes from the original annotations and then feeds the boxes to the pre-trained SAM to generate diverse and improved annotations.In this way,annotations are augmented more ingenious than simple manipulations without incurring huge computation for training a data augmentation model.Multiple comparative experiments on three datasets are conducted,including an in-house dataset,ADE20K and COCO2017.On this in-house dataset,namely Agricultural Plot Segmentation Dataset,maximum improvements of 3.77%and 8.92%are gained in two mainstream metrics,mIoU and mAcc,respectively.Consequently,large vision models like SAM are proven to be promising not only in image segmentation but also in data augmentation.展开更多
Data hiding methods involve embedding secret messages into cover objects to enable covert communication in a way that is difficult to detect.In data hiding methods based on image interpolation,the image size is reduce...Data hiding methods involve embedding secret messages into cover objects to enable covert communication in a way that is difficult to detect.In data hiding methods based on image interpolation,the image size is reduced and then enlarged through interpolation,followed by the embedding of secret data into the newly generated pixels.A general improving approach for embedding secret messages is proposed.The approach may be regarded a general model for enhancing the data embedding capacity of various existing image interpolation-based data hiding methods.This enhancement is achieved by expanding the range of pixel values available for embedding secret messages,removing the limitations of many existing methods,where the range is restricted to powers of two to facilitate the direct embedding of bit-based messages.This improvement is accomplished through the application of multiple-based number conversion to the secret message data.The method converts the message bits into a multiple-based number and uses an algorithm to embed each digit of this number into an individual pixel,thereby enhancing the message embedding efficiency,as proved by a theorem derived in this study.The proposed improvement method has been tested through experiments on three well-known image interpolation-based data hiding methods.The results show that the proposed method can enhance the three data embedding rates by approximately 14%,13%,and 10%,respectively,create stego-images with good quality,and resist RS steganalysis attacks.These experimental results indicate that the use of the multiple-based number conversion technique to improve the three interpolation-based methods for embedding secret messages increases the number of message bits embedded in the images.For many image interpolation-based data hiding methods,which use power-of-two pixel-value ranges for message embedding,other than the three tested ones,the proposed improvement method is also expected to be effective for enhancing their data embedding capabilities.展开更多
Objective This study aimed to assess the local staging of bladder tumors in patients utilizing preoperative multiparametric MRI(mpMRI)and to demonstrate the clinical efficacy of this method through a comparative analy...Objective This study aimed to assess the local staging of bladder tumors in patients utilizing preoperative multiparametric MRI(mpMRI)and to demonstrate the clinical efficacy of this method through a comparative analysis with corresponding histopathological findings.Methods Between November 2020 and April 2022,63 patients with a planned cystoscopy and a preliminary or previous diagnosis of bladder tumor were included.All participants underwent mpMRI,and Vesical Imaging Reporting and Data System(VI-RADS)criteria were applied to assess the recorded images.Subsequently,obtained biopsies were histopathologically examined and compared with radiological findings.Results Of the 63 participants,60 were male,and three were female.Categorizing tumors with a VI-RADS score of>3 as muscle invasive,84%were radiologically classified as having an invasive bladder tumor.However,histopathological results indicated invasive bladder tumors in 52%of cases.Sensitivity of the VI-RADS score was 100%;specificity was 23%;the negative predictive value was 100%;and the positive predictive value was 62%.Conclusion The scoring system obtained through mpMRI,VI-RADS,proves to be a successful method,particularly in determining the absence of muscle invasion in bladder cancer.Its efficacy in detecting muscle invasion in bladder tumors could be further enhanced with additional studies,suggesting potential for increased diagnostic efficiency through ongoing research.The VI-RADS could enhance the selection of patients eligible for accurate diagnosis and treatment.展开更多
Medical image classification is crucial in disease diagnosis,treatment planning,and clinical decisionmaking.We introduced a novel medical image classification approach that integrates Bayesian Random Semantic Data Aug...Medical image classification is crucial in disease diagnosis,treatment planning,and clinical decisionmaking.We introduced a novel medical image classification approach that integrates Bayesian Random Semantic Data Augmentation(BSDA)with a Vision Mamba-based model for medical image classification(MedMamba),enhanced by residual connection blocks,we named the model BSDA-Mamba.BSDA augments medical image data semantically,enhancing the model’s generalization ability and classification performance.MedMamba,a deep learning-based state space model,excels in capturing long-range dependencies in medical images.By incorporating residual connections,BSDA-Mamba further improves feature extraction capabilities.Through comprehensive experiments on eight medical image datasets,we demonstrate that BSDA-Mamba outperforms existing models in accuracy,area under the curve,and F1-score.Our results highlight BSDA-Mamba’s potential as a reliable tool for medical image analysis,particularly in handling diverse imaging modalities from X-rays to MRI.The open-sourcing of our model’s code and datasets,will facilitate the reproduction and extension of our work.展开更多
BFOSC and YFOSC are the most frequently used instruments in the Xinglong 2.16 m telescope and Lijiang 2.4 m telescope,respectively.We developed a software package named“BYSpec”(BFOSC and YFOSC Spectra Reduction Pack...BFOSC and YFOSC are the most frequently used instruments in the Xinglong 2.16 m telescope and Lijiang 2.4 m telescope,respectively.We developed a software package named“BYSpec”(BFOSC and YFOSC Spectra Reduction Package)dedicated to automatically reducing the long-slit and echelle spectra obtained by these two instruments.The package supports bias and flat-fielding correction,order location,background subtraction,automatic wavelength calibration,and absolute flux calibration.The optimal extraction method maximizes the signal-to-noise ratio and removes most of the cosmic rays imprinted in the spectra.A comparison with the 1D spectra reduced with IRAF verifies the reliability of the results.This open-source software is publicly available to the community.展开更多
The increasing demand for high-resolution solar observations has driven the development of advanced data processing and enhancement techniques for ground-based solar telescopes.This study focuses on developing a pytho...The increasing demand for high-resolution solar observations has driven the development of advanced data processing and enhancement techniques for ground-based solar telescopes.This study focuses on developing a python-based package(GT-scopy)for data processing and enhancing for giant solar telescopes,with application to the 1.6 m Goode Solar Telescope(GST)at Big Bear Solar Observatory.The objective is to develop a modern data processing software for refining existing data acquisition,processing,and enhancement methodologies to achieve atmospheric effect removal and accurate alignment at the sub-pixel level,particularly within the processing levels 1.0-1.5.In this research,we implemented an integrated and comprehensive data processing procedure that includes image de-rotation,zone-of-interest selection,coarse alignment,correction for atmospheric distortions,and fine alignment at the sub-pixel level with an advanced algorithm.The results demonstrate a significant improvement in image quality,with enhanced visibility of fine solar structures both in sunspots and quiet-Sun regions.The enhanced data processing package developed in this study significantly improves the utility of data obtained from the GST,paving the way for more precise solar research and contributing to a better understanding of solar dynamics.This package can be adapted for other ground-based solar telescopes,such as the Daniel K.Inouye Solar Telescope(DKIST),the European Solar Telescope(EST),and the 8 m Chinese Giant Solar Telescope,potentially benefiting the broader solar physics community.展开更多
Lunar wrinkle ridges are an important stress geological structure on the Moon, which reflect the stress state and geological activity on the Moon. They provide important insights into the evolution of the Moon and are...Lunar wrinkle ridges are an important stress geological structure on the Moon, which reflect the stress state and geological activity on the Moon. They provide important insights into the evolution of the Moon and are key factors influencing future lunar activity, such as the choice of landing sites. However, automatic extraction of lunar wrinkle ridges is a challenging task due to their complex morphology and ambiguous features. Traditional manual extraction methods are time-consuming and labor-intensive. To achieve automated and detailed detection of lunar wrinkle ridges, we have constructed a lunar wrinkle ridge data set, incorporating previously unused aspect data to provide edge information, and proposed a Dual-Branch Ridge Detection Network(DBR-Net) based on deep learning technology. This method employs a dual-branch architecture and an Attention Complementary Feature Fusion module to address the issue of insufficient lunar wrinkle ridge features. Through comparisons with the results of various deep learning approaches, it is demonstrated that the proposed method exhibits superior detection performance. Furthermore, the trained model was applied to lunar mare regions, generating a distribution map of lunar mare wrinkle ridges;a significant linear relationship between the length and area of the lunar wrinkle ridges was obtained through statistical analysis, and six previously unrecorded potential lunar wrinkle ridges were detected. The proposed method upgrades the automated extraction of lunar wrinkle ridges to a pixel-level precision and verifies the effectiveness of DBR-Net in lunar wrinkle ridge detection.展开更多
文摘The reverse design of solid rocket motor(SRM)propellant grain involves determining the grain geometry to closely match a predefined internal ballistic curve.While existing reverse design methods are feasible,they often face challenges such as lengthy computation times and limited accuracy.To achieve rapid and accurate matching between the targeted ballistic curve and complex grain shape,this paper proposes a novel reverse design method for SRM propellant grain based on time-series data imaging and convolutional neural network(CNN).First,a finocyl grain shape-internal ballistic curve dataset is created using parametric modeling techniques to comprehensively cover the design space.Next,the internal ballistic time-series data is encoded into three-channel images,establishing a potential relationship between the ballistic curves and their image representations.A CNN is then constructed and trained using these encoded images.Once trained,the model enables efficient inference of propellant grain dimensions from a target internal ballistic curve.This paper conducts comparative experiments across various neural network models,validating the effectiveness of the feature extraction method that transforms internal ballistic time-series data into images,as well as its generalization capability across different CNN architectures.Ignition tests were performed based on the predicted propellant grain.The results demonstrate that the relative error between the experimental internal ballistic curves and the target curves is less than 5%,confirming the validity and feasibility of the proposed reverse design methodology.
基金funded by University of Transport and Communications(UTC)under grant number T2025-CN-004.
文摘Reversible data hiding(RDH)enables secret data embedding while preserving complete cover image recovery,making it crucial for applications requiring image integrity.The pixel value ordering(PVO)technique used in multi-stego images provides good image quality but often results in low embedding capability.To address these challenges,this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image.The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order.Four secret bits are embedded into each block’s maximum pixel value,while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold.A similar embedding strategy is also applied to the minimum side of the block,including the second-smallest pixel value.This design enables each block to embed up to 14 bits of secret data.Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches,advancing the field of reversible steganography.
基金The authors would like to thank the support by the Key Research Program of the Chinese Academy of Science[grant number KZZD–EW–14]the Visiting Scholar Foundation of Chinese Academy of Science.The authors would like to thank USGS for processing and providing Landsat data and the reviewers for their constructive comments and suggestions.The authors especially thank Prof Xiangming Xiao in the Earth Observation and Modeling Facility,University of Oklahoma,for his useful suggestions to this paper.
文摘Recently,water extraction based on the indices method has been documented in many studies using various remote sensing data sources.Among them,Landsat satellites data have certain advantages in spatial resolution and cost.After the successful launch of Landsat 8,the Operational Land Imager(OLI)data from the satellite are getting more and more attention because of its new improvements.In this study,we used the OLI imagery data source to study the water extraction performance based on the Normalized Difference Vegetation Index,Normalized Difference Water Index,Modified Normalized Water Index(MNDWI),and Automated Water Extraction Index(AWEI)and compared the results with the Thematic Mapper(TM)imagery data.Two test sites in Tianjin City of north China were selected as the study area to verify the applicability of OLI data and demonstrate its advantages over TM data.We found that the results of surface water extraction based on OLI data are slightly better than that based on TM in the two test sites,especially in the city site.The AWEI and MNDWI indices performs better than the other two indices,and the thresholds of water indices show more stability when using the OLI data.So,it is suitable to combine OLI imagery with other Landsat sensor data to study water changes for long periods of time.
基金supported by the National Natural Science Foundation of China(Nos.41171355and41002120)
文摘A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent ~eological studies.
基金supported in part by the Key Research Program of the Chinese Academy of Sciences under Grant (XDPB11)in part by opening fund of State Key Laboratory of Lunar and Planetary Sciences (Macao University of Science and Technology) (Macao FDCT Grant No. 119/2017/A3)+1 种基金in part by the National Natural Science Foundation of China (Grant Nos. 41490633, 41371332 and 41802246)in part by the Science and Technology Development Fund of Macao (Grant 0012/2018/A1)
文摘The volume FeO and TiO_2 abundances(FTAs) of lunar regolith can be more important for understanding the geological evolution of the Moon compared to the optical and gamma-ray results. In this paper, the volume FTAs are retrieved with microwave sounder(CELMS) data from the Chang'E-2 satellite using the back propagation neural network(BPNN) method. Firstly, a three-layered BPNN network with five-dimensional input is constructed by taking nonlinearity into account. Then, the brightness temperature(TB) and surface slope are set as the inputs and the volume FTAs are set as the outputs of the BPNN network.Thereafter, the BPNN network is trained with the corresponding parameters collected from Apollo, Luna,and Surveyor missions. Finally, the volume FTAs are retrieved with the trained BPNN network using the four-channel TBderived from the CELMS data and the surface slope estimated from Lunar Orbiter Laser Altimeter(LOLA) data. The rationality of the retrieved FTAs is verified by comparing with the Clementine UV-VIS results and Lunar Prospector(LP) GRS results. The retrieved volume FTAs enable us to re-evaluate the geological features of the lunar surface. Several important results are as follows. Firstly, very-low-Ti(<1.5 wt.%) basalts are the most spatially abundant, and the surfaces with TiO_2> 5 wt.% constitute less than 10% of the maria. Also, two linear relationships occur between the FeO abundance(FA) and the TiO_2 abundance before and after the threshold, 16 wt.% for FA. Secondly, a new perspective on mare volcanism is derived with the volume FTAs in several important mare basins, although this conclusion should be verified with more sources of data. Thirdly, FTAs in the lunar regolith change with depth to the uppermost surface,and the change is complex over the lunar surface. Finally, the distribution of volume FTAs hints that the highlands crust is probably homogeneous, at least in terms of the microwave thermophysical parameters.
文摘Automatic road detection, in dense urban areas, is a challenging application in the remote sensing community. This is mainly because of physical and geometrical variations of road pixels, their spectral similarity to other features such as buildings, parking lots and sidewalks, and the obstruction by vehicles and trees. These problems are real obstacles in precise detection and identification of urban roads from high-resolution satellite imagery. One of the promising strategies to deal with this problem is using multi-sensors data to reduce the uncertainties of detection. In this paper, an integrated object-based analysis framework was developed for detecting and extracting various types of urban roads from high-resolution optical images and Lidar data. The proposed method is designed and implemented using a rule-oriented approach based on a masking strategy. The overall accuracy (OA) of the final road map was 89.2%, and the kappa coefficient of agreement was 0.83, which show the efficiency and performance of the method in different conditions and interclass noises. The results also demonstrate the high capability of this object-based method in simultaneous identification of a wide variety of road elements in complex urban areas using both high-resolution satellite images and Lidar data.
文摘The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and historical context,DL offers a powerful complement by enhancing the speed,objectivity,and precision of the classification process.This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks(CNNs)for geological image analysis,particularly in the classification of igneous,metamorphic,and sedimentary rock types from rock thin section(RTS)images.This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision.Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities,achieving an F1-Score of 0.9869 for igneous rocks,0.9884 for metamorphic rocks,and 0.9929 for sedimentary rocks,representing improvements compared to the baseline original results.Moreover,the weighted average F1-Score across all classes and techniques is 0.9886,indicating an enhancement.Conversely,methods like Distort lead to decreased accuracy and F1-Score,with an F1-Score of 0.949 for igneous rocks,0.954 for metamorphic rocks,and 0.9416 for sedimentary rocks,exacerbating the performance compared to the baseline.The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results.The findings of this study can benefit various fields,including remote sensing,mineral exploration,and environmental monitoring,by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.
基金Postgraduate Innovation Top notch Talent Training Project of Hunan Province,Grant/Award Number:CX20220045Scientific Research Project of National University of Defense Technology,Grant/Award Number:22-ZZCX-07+2 种基金New Era Education Quality Project of Anhui Province,Grant/Award Number:2023cxcysj194National Natural Science Foundation of China,Grant/Award Numbers:62201597,62205372,1210456foundation of Hefei Comprehensive National Science Center,Grant/Award Number:KY23C502。
文摘Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks.Due to the diversity and robustness constraints of the data,data augmentation(DA)methods are utilised to expand dataset diversity and scale.However,due to the complex and distinct characteristics of LiDAR point cloud data from different platforms(such as missile-borne and vehicular LiDAR data),directly applying traditional 2D visual domain DA methods to 3D data can lead to networks trained using this approach not robustly achieving the corresponding tasks.To address this issue,the present study explores DA for missile-borne LiDAR point cloud using a Monte Carlo(MC)simulation method that closely resembles practical application.Firstly,the model of multi-sensor imaging system is established,taking into account the joint errors arising from the platform itself and the relative motion during the imaging process.A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is proposed,underpinned by an analysis of combined errors between different modal sensors,achieving high-quality augmentation of point cloud data.The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper.Comparative experiments between the proposed point cloud DA algorithm and the current state-of-the-art algorithms in point cloud detection and single object tracking tasks demonstrate that the proposed method can improve the network performance obtained from unaugmented datasets by over 17.3%and 17.9%,surpassing SOTA performance of current point cloud DA algorithms.
基金CAMS Innovation Fund for Medical Sciences(CIFMS):“Construction of an Intelligent Management and Efficient Utilization Technology System for Big Data in Population Health Science.”(2021-I2M-1-057)Key Projects of the Innovation Fund of the National Clinical Research Center for Orthopedics and Sports Rehabilitation:“National Orthopedics and Sports Rehabilitation Real-World Research Platform System Construction”(23-NCRC-CXJJ-ZD4)。
文摘【Objective】Medical imaging data has great value,but it contains a significant amount of sensitive information about patients.At present,laws and regulations regarding to the de-identification of medical imaging data are not clearly defined around the world.This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs.【Methods】To enhance the security of medical image data,we designed and implemented a DICOM format medical image de-identification system on the Windows operating system.【Results】Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands.The system offers both web-based online and desktop offline de-identification capabilities,enabling customization of de-identification rules and facilitating batch processing to improve efficiency.【Conclusions】This medical image de-identification system robustly strengthens the stewardship of sensitive medical data,aligning with data security protection requirements while facilitating the sharing and utilization of medical image data.This approach unlocks the intrinsic value inherent in such datasets.
基金supported by the National Natural Science Foundation of China(Grant Nos.62072250,61772281,61702235,U1636117,U1804263,62172435,61872203 and 61802212)the Zhongyuan Science and Technology Innovation Leading Talent Project of China(Grant No.214200510019)+3 种基金the Suqian Municipal Science and Technology Plan Project in 2020(S202015)the Plan for Scientific Talent of Henan Province(Grant No.2018JR0018)the Opening Project of Guangdong Provincial Key Laboratory of Information Security Technology(Grant No.2020B1212060078)the Priority Academic Program Development of Jiangsu Higher Education Institutions(PAPD)Fund.
文摘Medical image segmentation,i.e.,labeling structures of interest in medical images,is crucial for disease diagnosis and treatment in radiology.In reversible data hiding in medical images(RDHMI),segmentation consists of only two regions:the focal and nonfocal regions.The focal region mainly contains information for diagnosis,while the nonfocal region serves as the monochrome background.The current traditional segmentation methods utilized in RDHMI are inaccurate for complex medical images,and manual segmentation is time-consuming,poorly reproducible,and operator-dependent.Implementing state-of-the-art deep learning(DL)models will facilitate key benefits,but the lack of domain-specific labels for existing medical datasets makes it impossible.To address this problem,this study provides labels of existing medical datasets based on a hybrid segmentation approach to facilitate the implementation of DL segmentation models in this domain.First,an initial segmentation based on a 33 kernel is performed to analyze×identified contour pixels before classifying pixels into focal and nonfocal regions.Then,several human expert raters evaluate and classify the generated labels into accurate and inaccurate labels.The inaccurate labels undergo manual segmentation by medical practitioners and are scored based on a hierarchical voting scheme before being assigned to the proposed dataset.To ensure reliability and integrity in the proposed dataset,we evaluate the accurate automated labels with manually segmented labels by medical practitioners using five assessment metrics:dice coefficient,Jaccard index,precision,recall,and accuracy.The experimental results show labels in the proposed dataset are consistent with the subjective judgment of human experts,with an average accuracy score of 94%and dice coefficient scores between 90%-99%.The study further proposes a ResNet-UNet with concatenated spatial and channel squeeze and excitation(scSE)architecture for semantic segmentation to validate and illustrate the usefulness of the proposed dataset.The results demonstrate the superior performance of the proposed architecture in accurately separating the focal and nonfocal regions compared to state-of-the-art architectures.Dataset information is released under the following URL:https://www.kaggle.com/lordamoah/datasets(accessed on 31 March 2025).
基金supported by the China National Key Program for Science and Technology Research and Development of China (2022YFA1602901,2023YFA1608204)the National SKA Program of China (No.2022SKA0110201)+5 种基金the National Natural Science Foundation of China (NSFC,grant Nos.11873051,11988101,12033008,12041305,12125302,12173016,and 12203065)the CAS Project for Young Scientists in Basic Research grant (No.YSBR-062)the K.C.Wong Education Foundationthe science research grants from the China Manned Space Projectsupport from the Cultivation Project for FAST Scientific Payoff and Research Achievement of CAMS-CASsupported by the China Postdoctoral Science Foundation grant No.2024M763213
文摘The standing waves existing in radio telescope data are primarily due to reflections among the instruments,which significantly impact the spectral quality of the Five-hundred-meter Aperture Spherical radio Telescope(FAST).Eliminating these standing waves for FAST is challenging given the constant changes in their phases and amplitudes.Over a ten-second period,the phases shift by 18°while the amplitudes fluctuate by 6 mK.Thus,we developed the fast Fourier transform(FFT)filter method to eliminate these standing waves for every individual spectrum.The FFT filter can decrease the rms from 3.2 to 1.15 times the theoretical estimate.Compared to other methods such as sine fitting and running median,the FFT filter achieves a median rms of approximately 1.2 times the theoretical expectation and the smallest scatter at 12%.Additionally,the FFT filter method avoids the flux loss issue encountered with some other methods.The FFT is also efficient in detecting harmonic radio frequency interference(RFI).In the FAST data,we identified three distinct types of harmonic RFI,each with amplitudes exceeding 100 mK and intrinsic frequency periods of 8.1,0.5,and 0.37 MHz,respectively.The FFT filter,proven as the most effective method,is integrated into the H I data calibration and imaging pipeline for FAST(HiFAST,https://hifast.readthedocs.io).
基金supported by the National Natural Science Foundation of China(82102018,62333002,T2425027,and 82327809)Data collection and sharing for this project were supported by the National Natural Science Foundation of China(61633018,81571062,81471120,and 81901101)+30 种基金Data collection and sharing for this project were funded by the ADNI(National Institutes of Health Grant U01 AG024904)the Department of Defense ADNI(award number W81XWH-12-2-0012).The ADNI is funded by the National Institute on Aging,the National Institute of Biomedical Imaging and Bioengineering,and through generous contributions from the following:AbbVie,Alzheimer’s AssociationAlzheimer’s Drug Discovery FoundationAraclon BiotechBioClinica,Inc.BiogenBristol-Myers Squibb Co.CereSpir,Inc.CogstateEisai Inc.Elan Pharmaceuticals,Inc.Eli Lilly and Co.EuroImmunF.Hoffmann-La Roche Ltd and its affiliated company Genentech,Inc.FujirebioG.E.HealthcareIXICO Ltd.Janssen Alzheimer Immunotherapy Research&Development,LLC.Johnson&Johnson Pharmaceutical Research&Development LLC.LumosityLundbeckMerck&Co.,Inc.Meso Scale Diagnostics,LLC.NeuroRx ResearchNeurotrack TechnologiesNovartis Pharmaceuticals Corp.Pfizer Inc.Piramal ImagingServierTakeda Pharmaceutical Co.and Transition Therapeutics.The Canadian Institutes of Health Research provides funds to support ADNI clinical sites in Canada.Private sector contributions are facilitated by the Foundation for the National Institutes of Health(www.fnih.org).The grantee organization was the Northern California Institute for Research and Education,and the study was coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California.ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
文摘Dear Editor,Growing clinical evidence shows that brain disorders are heterogeneous in phenotype,genetics,and neuropathology[1].Diagnosis and treatment tend to be affected by symptom presentation and the heterogeneity of pathology,potentially hindering clinical trials in the development of medical treatment.Brain-based subtyping studies utilize magnetic resonance imaging(MRI)and data-driven methods to discover the subtypes of diseases,providing a new perspective on disease heterogeneity.
基金supported by the National Natural Science Foundation of China(62276025,62206022)the Shenzhen Technology Plan Program(KQTD20170331093217368)the China Postdoctoral Science Foundation(BX20230044,2023M730290).
文摘Depression,a pervasive mental health disorder,has substantial impacts on both individuals and society.The conventional approach to predicting depression necessitates substantial collaboration between health care professionals and patients,leaving room for the influence of subjective factors.Consequently,it is imperative to develop a more efficient and accessible prediction methodology for depression.In recent years,numerous investigations have delved into depression prediction techniques,employing diverse data modalities and yielding notable advancements.Given the rapid progression of this domain,the present article comprehensively reviews major breakthroughs in depression prediction,encompassing multiple data modalities such as electrophysiological signals,brain imaging,audiovisual data,and text.By integrating depression prediction methods from various data modalities,it offers a comparative assessment of their advantages and limitations,providing a well-rounded perspective on how different modalities can complement each other for more accurate and holistic depression prediction.The survey begins by examining commonly used datasets,evaluation metrics,and methodological frameworks.For each data modality,it systematically analyzes traditional machine learning methods alongside the increasingly prevalent deep learning approaches,providing a comparative assessment of detection frameworks,feature representations,context modeling,and training strategies.Finally,the survey culminates with the identification of prospective avenues that warrant further exploration.It provides researchers with valuable insights and practical guidance to advance the field of depression prediction.
基金Natural Science Foundation of Zhejiang Province,Grant/Award Number:LY23F020025Science and Technology Commissioner Program of Huzhou,Grant/Award Number:2023GZ42Sichuan Provincial Science and Technology Support Program,Grant/Award Numbers:2023ZHCG0005,2023ZHCG0008。
文摘Data augmentation plays an important role in training deep neural model by expanding the size and diversity of the dataset.Initially,data augmentation mainly involved some simple transformations of images.Later,in order to increase the diversity and complexity of data,more advanced methods appeared and evolved to sophisticated generative models.However,these methods required a mass of computation of training or searching.In this paper,a novel training-free method that utilises the Pre-Trained Segment Anything Model(SAM)model as a data augmentation tool(PTSAM-DA)is proposed to generate the augmented annotations for images.Without the need for training,it obtains prompt boxes from the original annotations and then feeds the boxes to the pre-trained SAM to generate diverse and improved annotations.In this way,annotations are augmented more ingenious than simple manipulations without incurring huge computation for training a data augmentation model.Multiple comparative experiments on three datasets are conducted,including an in-house dataset,ADE20K and COCO2017.On this in-house dataset,namely Agricultural Plot Segmentation Dataset,maximum improvements of 3.77%and 8.92%are gained in two mainstream metrics,mIoU and mAcc,respectively.Consequently,large vision models like SAM are proven to be promising not only in image segmentation but also in data augmentation.
文摘Data hiding methods involve embedding secret messages into cover objects to enable covert communication in a way that is difficult to detect.In data hiding methods based on image interpolation,the image size is reduced and then enlarged through interpolation,followed by the embedding of secret data into the newly generated pixels.A general improving approach for embedding secret messages is proposed.The approach may be regarded a general model for enhancing the data embedding capacity of various existing image interpolation-based data hiding methods.This enhancement is achieved by expanding the range of pixel values available for embedding secret messages,removing the limitations of many existing methods,where the range is restricted to powers of two to facilitate the direct embedding of bit-based messages.This improvement is accomplished through the application of multiple-based number conversion to the secret message data.The method converts the message bits into a multiple-based number and uses an algorithm to embed each digit of this number into an individual pixel,thereby enhancing the message embedding efficiency,as proved by a theorem derived in this study.The proposed improvement method has been tested through experiments on three well-known image interpolation-based data hiding methods.The results show that the proposed method can enhance the three data embedding rates by approximately 14%,13%,and 10%,respectively,create stego-images with good quality,and resist RS steganalysis attacks.These experimental results indicate that the use of the multiple-based number conversion technique to improve the three interpolation-based methods for embedding secret messages increases the number of message bits embedded in the images.For many image interpolation-based data hiding methods,which use power-of-two pixel-value ranges for message embedding,other than the three tested ones,the proposed improvement method is also expected to be effective for enhancing their data embedding capabilities.
文摘Objective This study aimed to assess the local staging of bladder tumors in patients utilizing preoperative multiparametric MRI(mpMRI)and to demonstrate the clinical efficacy of this method through a comparative analysis with corresponding histopathological findings.Methods Between November 2020 and April 2022,63 patients with a planned cystoscopy and a preliminary or previous diagnosis of bladder tumor were included.All participants underwent mpMRI,and Vesical Imaging Reporting and Data System(VI-RADS)criteria were applied to assess the recorded images.Subsequently,obtained biopsies were histopathologically examined and compared with radiological findings.Results Of the 63 participants,60 were male,and three were female.Categorizing tumors with a VI-RADS score of>3 as muscle invasive,84%were radiologically classified as having an invasive bladder tumor.However,histopathological results indicated invasive bladder tumors in 52%of cases.Sensitivity of the VI-RADS score was 100%;specificity was 23%;the negative predictive value was 100%;and the positive predictive value was 62%.Conclusion The scoring system obtained through mpMRI,VI-RADS,proves to be a successful method,particularly in determining the absence of muscle invasion in bladder cancer.Its efficacy in detecting muscle invasion in bladder tumors could be further enhanced with additional studies,suggesting potential for increased diagnostic efficiency through ongoing research.The VI-RADS could enhance the selection of patients eligible for accurate diagnosis and treatment.
文摘Medical image classification is crucial in disease diagnosis,treatment planning,and clinical decisionmaking.We introduced a novel medical image classification approach that integrates Bayesian Random Semantic Data Augmentation(BSDA)with a Vision Mamba-based model for medical image classification(MedMamba),enhanced by residual connection blocks,we named the model BSDA-Mamba.BSDA augments medical image data semantically,enhancing the model’s generalization ability and classification performance.MedMamba,a deep learning-based state space model,excels in capturing long-range dependencies in medical images.By incorporating residual connections,BSDA-Mamba further improves feature extraction capabilities.Through comprehensive experiments on eight medical image datasets,we demonstrate that BSDA-Mamba outperforms existing models in accuracy,area under the curve,and F1-score.Our results highlight BSDA-Mamba’s potential as a reliable tool for medical image analysis,particularly in handling diverse imaging modalities from X-rays to MRI.The open-sourcing of our model’s code and datasets,will facilitate the reproduction and extension of our work.
基金supported by the National Natural Science Foundation of China under grant No.U2031144partially supported by the Open Project Program of the Key Laboratory of Optical Astronomy,National Astronomical Observatories,Chinese Academy of Sciences+5 种基金supported by the National Key R&D Program of China with No.2021YFA1600404the National Natural Science Foundation of China(12173082)the Yunnan Fundamental Research Projects(grant 202201AT070069)the Top-notch Young Talents Program of Yunnan Provincethe Light of West China Program provided by the Chinese Academy of Sciencesthe International Centre of Supernovae,Yunnan Key Laboratory(No.202302AN360001)。
文摘BFOSC and YFOSC are the most frequently used instruments in the Xinglong 2.16 m telescope and Lijiang 2.4 m telescope,respectively.We developed a software package named“BYSpec”(BFOSC and YFOSC Spectra Reduction Package)dedicated to automatically reducing the long-slit and echelle spectra obtained by these two instruments.The package supports bias and flat-fielding correction,order location,background subtraction,automatic wavelength calibration,and absolute flux calibration.The optimal extraction method maximizes the signal-to-noise ratio and removes most of the cosmic rays imprinted in the spectra.A comparison with the 1D spectra reduced with IRAF verifies the reliability of the results.This open-source software is publicly available to the community.
基金supported by the National Natural Science Foundation of China(NSFC,12173012 and 12473050)the Guangdong Natural Science Funds for Distinguished Young Scholars(2023B1515020049)+2 种基金the Shenzhen Science and Technology Project(JCYJ20240813104805008)the Shenzhen Key Laboratory Launching Project(No.ZDSYS20210702140800001)the Specialized Research Fund for State Key Laboratory of Solar Activity and Space Weather。
文摘The increasing demand for high-resolution solar observations has driven the development of advanced data processing and enhancement techniques for ground-based solar telescopes.This study focuses on developing a python-based package(GT-scopy)for data processing and enhancing for giant solar telescopes,with application to the 1.6 m Goode Solar Telescope(GST)at Big Bear Solar Observatory.The objective is to develop a modern data processing software for refining existing data acquisition,processing,and enhancement methodologies to achieve atmospheric effect removal and accurate alignment at the sub-pixel level,particularly within the processing levels 1.0-1.5.In this research,we implemented an integrated and comprehensive data processing procedure that includes image de-rotation,zone-of-interest selection,coarse alignment,correction for atmospheric distortions,and fine alignment at the sub-pixel level with an advanced algorithm.The results demonstrate a significant improvement in image quality,with enhanced visibility of fine solar structures both in sunspots and quiet-Sun regions.The enhanced data processing package developed in this study significantly improves the utility of data obtained from the GST,paving the way for more precise solar research and contributing to a better understanding of solar dynamics.This package can be adapted for other ground-based solar telescopes,such as the Daniel K.Inouye Solar Telescope(DKIST),the European Solar Telescope(EST),and the 8 m Chinese Giant Solar Telescope,potentially benefiting the broader solar physics community.
文摘Lunar wrinkle ridges are an important stress geological structure on the Moon, which reflect the stress state and geological activity on the Moon. They provide important insights into the evolution of the Moon and are key factors influencing future lunar activity, such as the choice of landing sites. However, automatic extraction of lunar wrinkle ridges is a challenging task due to their complex morphology and ambiguous features. Traditional manual extraction methods are time-consuming and labor-intensive. To achieve automated and detailed detection of lunar wrinkle ridges, we have constructed a lunar wrinkle ridge data set, incorporating previously unused aspect data to provide edge information, and proposed a Dual-Branch Ridge Detection Network(DBR-Net) based on deep learning technology. This method employs a dual-branch architecture and an Attention Complementary Feature Fusion module to address the issue of insufficient lunar wrinkle ridge features. Through comparisons with the results of various deep learning approaches, it is demonstrated that the proposed method exhibits superior detection performance. Furthermore, the trained model was applied to lunar mare regions, generating a distribution map of lunar mare wrinkle ridges;a significant linear relationship between the length and area of the lunar wrinkle ridges was obtained through statistical analysis, and six previously unrecorded potential lunar wrinkle ridges were detected. The proposed method upgrades the automated extraction of lunar wrinkle ridges to a pixel-level precision and verifies the effectiveness of DBR-Net in lunar wrinkle ridge detection.