Funding: This work was funded by the National Key Research and Development Program of China (Grant No. 2023YFB3907500), the National Natural Science Foundation (Grant No. 42330602), and the "Fengyun Satellite Remote Sensing Product Validation and Verification" Youth Innovation Team of the China Meteorological Administration (Grant No. CMA2023QN12).
Abstract: The Global Precipitation Measurement (GPM) dual-frequency precipitation radar (DPR) products (Version 07A) are employed for a rigorous comparative analysis with ground-based operational weather radar (GR) networks. The reflectivity observed by the GPM Ku PR is compared quantitatively against GR networks from CINRAD of China and NEXRAD of the United States, and the volume matching method is used for spatial matching. Additionally, a novel frequency correction method covering all hydrometeor phases and precipitation types is used to convert the GPM Ku PR reflectivity to the GR frequency. A total of 20 GRs (10 from CINRAD and 10 from NEXRAD) are included in this comparative analysis. The results indicate that, compared with the CINRAD matched data, NEXRAD exhibits larger reflectivity biases relative to the frequency-corrected Ku PR. The root-mean-square difference for CINRAD is 2.38 dB, whereas for NEXRAD it is 3.23 dB. The mean bias of the CINRAD matched data is -0.16 dB, while that of NEXRAD is -2.10 dB. The mean standard deviation of the bias for CINRAD is 2.15 dB, versus 2.29 dB for NEXRAD. This study effectively assesses weather radar data in both the United States and China, which is crucial for improving the overall consistency of global precipitation estimates.
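The bias and root-mean-square-difference figures quoted above are standard sample statistics over the volume-matched reflectivity pairs. A minimal sketch of how they can be computed follows; the sample dBZ values are invented for illustration and are not from the study:

```python
import math

def reflectivity_stats(gr_dbz, ku_dbz):
    """Mean bias, standard deviation of bias, and RMSD (all in dB)
    between volume-matched ground-radar and Ku PR reflectivities."""
    diffs = [g - k for g, k in zip(gr_dbz, ku_dbz)]
    n = len(diffs)
    mean_bias = sum(diffs) / n
    std_bias = math.sqrt(sum((d - mean_bias) ** 2 for d in diffs) / n)
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)
    return mean_bias, std_bias, rmsd

# Toy example with hypothetical matched samples (dBZ)
gr = [30.1, 35.4, 28.2, 41.0]
ku = [30.5, 35.0, 29.1, 41.2]
bias, sd, rmsd = reflectivity_stats(gr, ku)
```

In practice these statistics would be accumulated over all volume-matched gates for each radar site before being averaged across the network.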
Funding: This work was funded by the Innovation and Development Special Project of the China Meteorological Administration (CXFZ2022J038, CXFZ2024J035), the Sichuan Science and Technology Program (No. 2023YFQ0072), the Key Laboratory of Smart Earth (No. KF2023YB03-07), and the Automatic Software Generation and Intelligent Service Key Laboratory of Sichuan Province (CUIT-SAG202210).
Abstract: Accurate cloud classification plays a crucial role in aviation safety, climate monitoring, and localized weather forecasting. Current research has focused on machine learning techniques, particularly deep learning-based models, for cloud type identification. However, traditional approaches such as convolutional neural networks (CNNs) have difficulty capturing global contextual information. In addition, they are computationally expensive, which restricts their usability in resource-limited environments. To tackle these issues, we present the Cloud Vision Transformer (CloudViT), a lightweight model that integrates CNNs with Transformers. The integration enables an effective balance between local and global feature extraction. Specifically, CloudViT comprises two novel modules: a feature extraction module (E_Module) and a downsampling module (D_Module). These modules significantly reduce the number of model parameters and the computational complexity while maintaining translation invariance and enhancing contextual comprehension. Overall, CloudViT has 0.93 × 10^6 parameters, more than ten times fewer than the state-of-the-art (SOTA) model CloudNet. Comprehensive evaluations on the HBMCD and SWIMCAT datasets showcase the outstanding performance of CloudViT, which achieves classification accuracies of 98.45% and 100%, respectively. Moreover, the efficiency and scalability of CloudViT make it an ideal candidate for deployment in mobile cloud observation systems, enabling real-time cloud image classification. The proposed hybrid architecture offers a promising approach for advancing ground-based cloud image classification, with significant potential for both optimizing performance and facilitating practical deployment.
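The local/global split described above (convolutions for local features, self-attention for global context) can be illustrated with a minimal NumPy sketch. The shapes, random weights, and layout below are illustrative assumptions, not CloudViT's actual E_Module/D_Module:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_stem(x, w):
    """3x3 'valid' convolution as a local feature extractor (single channel).
    x: (H, W), w: (3, 3). Returns (H-2, W-2)."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
    return out

def self_attention(tokens):
    """Single-head self-attention over d-dim tokens: every token attends
    to every other token, providing the global context CNNs lack."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)      # (N, N) pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over each row
    return attn @ tokens                         # globally mixed tokens

# Hypothetical 8x8 cloud-image patch -> conv features -> tokens -> attention
img = rng.standard_normal((8, 8))
feat = conv_stem(img, rng.standard_normal((3, 3)))  # (6, 6) local features
tokens = feat.reshape(9, 4)                         # 9 tokens of dim 4
mixed = self_attention(tokens)                      # context-mixed tokens
```

A real hybrid model would interleave learned convolution and attention blocks with downsampling between stages; the sketch only shows why the two operations complement each other.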
Funding: This work was supported by the Open Fund of the National Key Laboratory of Deep Space Exploration (NKDSEL2024014) and by the Civil Aerospace Pre-research Project of the State Administration of Science, Technology and Industry for National Defence, PRC (D040103).
Abstract: Space target imaging simulation technology is an important tool for space target detection and identification, offering high flexibility and low cost. However, existing space target imaging simulation technologies are mostly driven by target magnitudes, making it difficult to meet image simulation requirements across different signal-to-noise ratio (SNR) levels. A simulation method that generates target image sequences with specified SNRs from the optical detection system parameters is therefore important for faint space target detection research. Addressing the SNR calculation issue in optical observation systems, this paper proposes a ground-based detection image SNR calculation method based on the optical system parameters. The method calculates the SNR of an observed image precisely using radiative transfer theory, the optical system parameters, and the observation environment parameters. An SNR-based target sequence image simulation method for ground-based detection scenarios is then proposed. This method calculates the imaging SNR from the optical system parameters and establishes a model for converting between the target's apparent magnitude and image grayscale values, thereby enabling generation of target sequence simulation images with the corresponding SNRs for different system parameters. Experiments show that the SNR obtained with this calculation method has an average error of <1 dB compared with the theoretical SNR of the actual optical system. Additionally, the simulation images generated by the imaging simulation method show high consistency with real images, meeting the requirements of faint space target detection algorithm research and providing reliable data support for the development of related technologies.
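A detection SNR of this kind is commonly computed with the classic CCD signal-to-noise equation once the target's apparent magnitude is converted to collected photoelectrons. The sketch below follows that textbook form; every parameter value (zero-magnitude photon flux, aperture, quantum efficiency, noise terms) is an assumed illustrative number, not the paper's actual model:

```python
import math

def target_electrons(mag, t_exp, aperture_area, qe=0.6, tau=0.8,
                     photon_flux_m0=9.6e10):
    """Photoelectrons collected from a target of apparent magnitude `mag`.
    photon_flux_m0: photons m^-2 s^-1 from a magnitude-0 star in the band
    (assumed value); qe: detector quantum efficiency; tau: optics throughput."""
    return photon_flux_m0 * 10 ** (-0.4 * mag) * aperture_area * t_exp * qe * tau

def ccd_snr(signal_e, sky_e_per_pix, dark_e_per_pix, read_noise_e, n_pix):
    """CCD equation: target signal over the quadrature sum of target shot
    noise, sky background, dark current, and read noise in the n_pix
    pixels the target spans."""
    noise = math.sqrt(signal_e
                      + n_pix * (sky_e_per_pix + dark_e_per_pix
                                 + read_noise_e ** 2))
    return signal_e / noise

# Assumed system: 0.25 m aperture, 0.5 s exposure, magnitude-12 target
area = math.pi * (0.25 / 2) ** 2
s = target_electrons(mag=12.0, t_exp=0.5, aperture_area=area)
snr = ccd_snr(s, sky_e_per_pix=50, dark_e_per_pix=5, read_noise_e=10, n_pix=9)
```

Inverting this chain, from a desired SNR back to a target signal and then to pixel grayscale values, is what lets a simulator place targets with a prescribed SNR into synthetic image sequences.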