Funding: Supported by the National Key Research and Development Program of China (2019YFC1510301), the Key Innovation Team Fund of the China Meteorological Administration (CMA2022ZD10), and the Basic Research Fund of the Chinese Academy of Meteorological Sciences (2021Y010).
Abstract: The airborne two-dimensional stereo (2D-S) optical array probe has been operating for more than 10 years, accumulating a large amount of cloud particle image data. However, due to the lack of reliable and unbiased classification tools, our ability to extract meaningful morphological information related to cloud microphysical processes is limited. To address this issue, we propose a novel classification algorithm for 2D-S cloud particle images based on a convolutional neural network (CNN), named CNN-2DS. A 2D-S cloud particle shape dataset was established from the 2D-S cloud particle images observed during 13 aircraft detection flights in six regions of China (Northeast, Northwest, North, East, Central, and South China). The dataset contains 33,300 cloud particle images covering eight cloud particle shape classes (linear, sphere, dendrite, aggregate, graupel, plate, donut, and irregular). The CNN-2DS model was trained and tested on this dataset. Experimental results show that CNN-2DS can accurately identify cloud particles, with an average classification accuracy of 97%. Compared with other common classification models [e.g., the Vision Transformer (ViT) and Residual Neural Network (ResNet)], CNN-2DS is lightweight (few parameters), computationally fast, and achieves the highest classification accuracy. In summary, the proposed CNN-2DS model is effective and reliable for classifying cloud particles detected by the 2D-S probe.
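For readers who want a concrete starting point, the sketch below shows a minimal PyTorch CNN that maps single-channel particle images to the eight shape classes listed above. The layer widths, the 64 x 64 input size, and the class ordering are assumptions chosen for illustration only; this is not the published CNN-2DS architecture.

```python
# Illustrative sketch only: a small CNN for 8-class cloud particle shape
# classification. Layer widths and the 64x64 input size are assumptions,
# not the published CNN-2DS configuration.
import torch
import torch.nn as nn

CLASSES = ["linear", "sphere", "dendrite", "aggregate",
           "graupel", "plate", "donut", "irregular"]

class SmallParticleCNN(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale particle images
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = SmallParticleCNN()
    dummy = torch.randn(4, 1, 64, 64)   # a batch of 4 synthetic images
    print(model(dummy).shape)           # -> torch.Size([4, 8])
```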
Funding: Supported by the "One-Hundred Talents" program of the Chinese Academy of Sciences (No. N234), the National Natural Science Foundation of China (Nos. 41430638 and 41301199), and the project "Major Special Project - The China High-Resolution Earth Observation System".
Abstract: It is widely accepted that urban plant leaves can capture airborne particles. Previous studies on the particle capture capacity of plant leaves have mostly focused on particle mass and/or size distribution. Fewer studies, however, have examined particle density and the size and shape characteristics of the particles, which may have important implications for evaluating the particle capture efficiency of plants and for identifying particle sources. In addition, the role of different vegetation types remains unclear. Here, we chose three species of different vegetation types and first applied an object-based classification approach to automatically identify particles from scanning electron microscope (SEM) micrographs. We then quantified the particle capture efficiency and identified the major sources of the particles. We found that (1) Rosa xanthina Lindl. (shrub species) had greater retention efficiency than Broussonetia papyrifera (broadleaf species) and Pinus bungeana Zucc. (coniferous species), in terms of both particle number and particle area cover; (2) 97.9% of the identified particles had diameters ≤10 μm and 67.1% had diameters ≤2.5 μm, while 89.8% of the particles had smooth boundaries, with 23.4% being nearly spherical; and (3) 32.4%–74.1% of the particles were generated from bare soil and construction activities, and 15.5%–23.0% came mainly from vehicle exhaust and cooking fumes.
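As a rough illustration of how per-particle size and shape statistics of this kind can be extracted once the SEM micrographs have been segmented, the following scikit-image sketch computes an equivalent diameter and a circularity value for each labelled particle. The pixel scale (0.1 μm/pixel) and the 0.9 circularity cutoff for "nearly spherical" are assumed values for demonstration, not the thresholds or the object-based classification workflow used in the study.

```python
# Illustrative sketch: per-particle diameter and circularity from a binary
# SEM segmentation mask. The 0.1 um/pixel scale and the 0.9 circularity
# threshold for "nearly spherical" are assumed values for demonstration.
import numpy as np
from skimage import measure

def particle_stats(mask: np.ndarray, um_per_px: float = 0.1):
    labels = measure.label(mask > 0)
    stats = []
    for region in measure.regionprops(labels):
        diameter_um = region.equivalent_diameter * um_per_px
        # circularity = 4*pi*A / P^2; equals 1.0 for a perfect circle
        circ = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        stats.append({
            "diameter_um": diameter_um,
            "circularity": circ,
            "pm2_5": diameter_um <= 2.5,
            "pm10": diameter_um <= 10.0,
            "near_spherical": circ >= 0.9,
        })
    return stats

if __name__ == "__main__":
    demo = np.zeros((64, 64), dtype=np.uint8)
    demo[20:40, 20:40] = 1                 # one square "particle"
    print(particle_stats(demo)[0])
```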
Abstract: It is known that size alone, often defined as the volume-equivalent diameter, is not sufficient to characterize many particulate products. The shape of crystalline products can be as important as size in many applications. Traditionally, particle shape is described by several simple descriptors such as the maximum length and the aspect ratio. Although these descriptors are intuitive, they result in a loss of information about the original shape. This paper presents a method that uses principal component analysis to derive simple latent shape descriptors from microscope images of particulate products made in batch processes, and the use of these descriptors to identify batch-to-batch variations. Data from batch runs of both a laboratory crystalliser and an industrial crystallisation reactor are analysed using the described approach. Qualitative and quantitative comparisons are also made with traditional shape descriptors that have physical meanings and with Fourier shape descriptors.
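The sketch below illustrates the general idea of latent shape descriptors: each particle outline is converted to a fixed-length radial signature, and principal component analysis is applied across particles so that the first few component scores act as compact shape descriptors. The 64-point signature, the three retained components, and the synthetic elliptical outlines are assumptions for demonstration rather than the exact procedure of the paper.

```python
# Illustrative sketch: latent shape descriptors via PCA on equal-angle
# radial signatures of particle outlines. The 64-point signature and the
# choice of 3 components are assumptions for demonstration.
import numpy as np
from sklearn.decomposition import PCA

def radial_signature(contour_xy: np.ndarray, n_angles: int = 64) -> np.ndarray:
    """Distance from centroid to outline, resampled onto a fixed angle grid."""
    centred = contour_xy - contour_xy.mean(axis=0)
    angles = np.arctan2(centred[:, 1], centred[:, 0])
    radii = np.hypot(centred[:, 0], centred[:, 1])
    grid = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    sig = np.interp(grid, angles, radii, period=2 * np.pi)
    return sig / sig.mean()                    # scale-invariant signature

# Build one signature per synthetic particle (noisy ellipses), then fit PCA.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
signatures = []
for _ in range(50):
    a = rng.uniform(1.0, 3.0)                  # random aspect ratio
    contour = np.c_[a * np.cos(t), np.sin(t)] + rng.normal(0, 0.02, (200, 2))
    signatures.append(radial_signature(contour))

pca = PCA(n_components=3)
latent = pca.fit_transform(np.stack(signatures))   # (50, 3) latent descriptors
print(latent.shape, pca.explained_variance_ratio_.round(3))
```

Batch-to-batch variation could then be inspected by comparing the distribution of these latent scores across batches, which is the spirit of the comparison described in the abstract.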
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 62273034, 61973029, and 62076026, and the Scientific and Technological Innovation Foundation of Foshan under Grant No. BK21BF004.
Abstract: Point cloud analysis is challenging because of the unordered and irregular data structure of point clouds. To describe geometric information in point clouds, existing methods mainly use convolution, graph, and attention operations to construct sophisticated local aggregation operators. These operators work well for extracting local information but incur unfavorable inference latency due to their high computational complexity. To solve this problem, this paper presents a novel point-voxel based geometry-adaptive network (PVGANet), which combines point and voxel representations to describe the point cloud at different granularities and can effectively obtain features at different scales. To extract fine-grained geometric features, we design a position-adaptive pooling operator, which uses point pairs' relative positions and feature similarity to weight and aggregate point features in local areas of the point cloud. To extract coarse-grained local features, we design a depth-wise convolution operator that performs depth-wise convolution on voxel grids. With a simple addition, the fine-grained geometric and coarse-grained local features can be fused, and the geometry-adaptive fused features can be used for efficient shape analysis of point clouds, such as shape classification and part segmentation. Extensive experiments on the ModelNet40, ScanObjectNN, and ShapeNet Part benchmarks demonstrate that PVGANet achieves competitive performance compared with related methods.
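A simplified PyTorch interpretation of such a position-adaptive pooling step is sketched below: neighbour features are weighted by a small MLP applied to their relative positions and feature differences, then summed. The MLP layout, the softmax normalisation, and the pre-computed neighbour indices are assumptions and do not reproduce the published PVGANet operator.

```python
# Illustrative sketch: pool neighbour features with weights derived from
# relative position and feature similarity, in the spirit of the described
# position-adaptive pooling. MLP sizes and softmax weighting are assumptions.
import torch
import torch.nn as nn

class PositionAdaptivePool(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # maps (relative xyz, feature difference) -> one scalar weight per neighbour
        self.weight_mlp = nn.Sequential(
            nn.Linear(3 + channels, channels), nn.ReLU(), nn.Linear(channels, 1)
        )

    def forward(self, xyz, feats, neighbor_idx):
        # xyz:          (B, N, 3)  point coordinates
        # feats:        (B, N, C)  point features
        # neighbor_idx: (B, N, K)  indices of K neighbours per point
        B, N, K = neighbor_idx.shape
        C = feats.size(-1)
        idx = neighbor_idx.reshape(B, N * K)
        nbr_xyz = torch.gather(xyz, 1, idx.unsqueeze(-1).expand(-1, -1, 3)).view(B, N, K, 3)
        nbr_feat = torch.gather(feats, 1, idx.unsqueeze(-1).expand(-1, -1, C)).view(B, N, K, C)
        rel_pos = nbr_xyz - xyz.unsqueeze(2)       # relative position of each neighbour
        feat_diff = nbr_feat - feats.unsqueeze(2)  # feature-similarity proxy
        w = self.weight_mlp(torch.cat([rel_pos, feat_diff], dim=-1))
        w = torch.softmax(w, dim=2)                # normalise over the K neighbours
        return (w * nbr_feat).sum(dim=2)           # (B, N, C) aggregated features

if __name__ == "__main__":
    B, N, K, C = 2, 128, 16, 32
    pool = PositionAdaptivePool(C)
    xyz, feats = torch.randn(B, N, 3), torch.randn(B, N, C)
    idx = torch.randint(0, N, (B, N, K))
    print(pool(xyz, feats, idx).shape)             # torch.Size([2, 128, 32])
```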
Funding: Supported by the Innovation Program of the Institute of High Energy Physics, CAS (Grant Number 2023000034), the National Natural Science Foundation of China (Grant Numbers 22273013 and 12275300), and the National Key R&D Program of China (Grant Numbers 2022YFA1603802 and 2017YFA0403000).
Abstract: Purpose: The purpose of this study is to explore deep learning methods for processing high-throughput small-angle X-ray scattering (SAXS) experimental data. Methods: The deep learning algorithms were trained and validated using simulated SAXS data, generated in batches from the theoretical SAXS formulas using Python code. Our self-developed SAXSNET, a convolutional neural network based on PyTorch, was employed to classify SAXS data for various nanoparticle shapes. Additionally, we conducted a comparative analysis of classification algorithms including ResNet-18, ResNet-34, and the Vision Transformer. Random Forest and XGBoost regression algorithms were used for nanoparticle size prediction. Finally, we evaluated the shape classification and numerical regression methods on actual experimental data, and a processing pipeline for SAXS data was established that incorporates the deep learning classification and numerical regression algorithms. Results: After being trained on simulated data, the four deep learning algorithms achieved a prediction accuracy of over 96% on the validation set. The fine-tuned deep learning models demonstrated robust generalization when predicting the shapes of experimental data, enabling rapid and accurate identification of morphological changes in nanoparticles during experiments. The Random Forest and XGBoost regression algorithms additionally provide fast and accurate predictions of nanoparticle size. Conclusion: The pipeline constructed in this study, integrating deep learning classification and regression algorithms, enables real-time processing of high-throughput SAXS data. It effectively mitigates the impact of human factors on data processing results and enhances the standardization, automation, and intelligence of synchrotron radiation experiments.
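To illustrate the simulated-data plus regression portion of such a pipeline, the sketch below generates sphere SAXS profiles from the analytical sphere form factor and trains a scikit-learn Random Forest to regress the particle radius. The q range, radius range, noise level, and forest size are assumed values chosen only for demonstration and are unrelated to the settings used for SAXSNET or the beamline data.

```python
# Illustrative sketch: simulate sphere SAXS profiles from the analytical
# sphere form factor and regress the radius with a Random Forest, mirroring
# the simulated-data + regression step described above. All parameter
# values (q range, radii, noise, forest size) are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def sphere_intensity(q: np.ndarray, radius: float) -> np.ndarray:
    """Form factor of a homogeneous sphere, I(q) proportional to P(q, R)."""
    qr = q * radius
    form = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3
    return form**2

rng = np.random.default_rng(0)
q = np.linspace(0.01, 0.5, 200)            # assumed q grid (1/angstrom)
radii = rng.uniform(20.0, 100.0, 2000)     # assumed radius range (angstrom)
X = np.stack([
    np.log10(sphere_intensity(q, r) * (1 + rng.normal(0, 0.02, q.size)) + 1e-12)
    for r in radii
])

X_train, X_test, y_train, y_test = train_test_split(X, radii, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(f"mean absolute radius error: {np.abs(pred - y_test).mean():.2f} angstrom")
```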