Funding: Supported by the National Natural Science Foundation of China (Nos. 12027901 and 12041202) and the Synchrotron Radiation Joint Fund of the University of Science and Technology of China (Nos. KY2090000059 and KY2090000054).
Abstract: In the in situ synchrotron radiation computed tomography (SR-CT) characterization of ultrafast evolution processes, there is a contradiction between the evolution rate of the material and the time resolution of SR-CT characterization. An ultra-sparse-angle sampling strategy is an effective way to improve time resolution, but accurate reconstruction under sparse sampling conditions has long been a bottleneck. In recent years, with the development of deep learning, convolutional neural networks have shown outstanding advantages in sparse-angle CT reconstruction. However, existing approaches do not consider the expression of high-frequency details in neural networks, which limits their application in accurate SR-CT characterization. To address this problem, a novel high-frequency information-constrained deep learning network (HFIC-Net) is proposed, in which additional high-frequency information constraints improve the accuracy of the reconstruction. A series of numerical reconstruction experiments verifies the new method: using only eight projection angles, HFIC-Net matches the reconstruction quality that the filtered backprojection (FBP) method achieves with 360 projections. The HFIC-Net results exhibit clear boundaries and accurate detailed structures, correcting the misinformation introduced by other methods. For quantitative evaluation, the SSIM, which measures image structural similarity, increases from 0.1951, 0.9212, and 0.9308 for FBP, FBP-Conv, and DDC-Net, respectively, to 0.9620 for HFIC-Net. Finally, results on actual SR-CT experimental data show that the new method suppresses artifacts and achieves accurate reconstruction, making it suitable for the accurate in situ SR-CT characterization of ultrafast evolution processes.
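The core idea of constraining high-frequency information can be sketched independently of any particular network: add a penalty on the mismatch of image gradients (a simple high-frequency proxy) to an ordinary pixel-wise loss, so that smeared edges are penalised even when the plain MSE is small. The sketch below is illustrative only and is not the actual HFIC-Net loss; the gradient-based `high_freq` term and the weight `lam` are assumptions for demonstration.

```python
import numpy as np

def high_freq(img):
    # High-frequency content approximated by finite-difference gradients.
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def hf_constrained_loss(recon, target, lam=0.5):
    # Pixel-wise MSE plus an extra penalty on high-frequency mismatch.
    mse = np.mean((recon - target) ** 2)
    hf = np.mean((high_freq(recon) - high_freq(target)) ** 2)
    return mse + lam * hf

def box_blur(img):
    # 3x3 box blur via shifted sums (edge padding), to mimic a
    # reconstruction whose fine details have been smeared.
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0          # sharp square phantom
blurred = box_blur(target)        # "reconstruction" with smeared edges

loss_mse_only = hf_constrained_loss(blurred, target, lam=0.0)
loss_with_hf = hf_constrained_loss(blurred, target, lam=0.5)
assert loss_with_hf > loss_mse_only   # edge mismatch is now penalised
```

With the high-frequency term enabled, the blurred reconstruction incurs a strictly larger loss than under MSE alone, which is the mechanism by which such a constraint steers training toward sharp boundaries.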
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFC3003401) and the National Natural Science Foundation of China (Grant Nos. 42041006 and 42377137).
Abstract: To efficiently predict the mechanical parameters of granular soil from its random micro-structure, this study proposes a novel approach combining numerical simulation and machine learning. Initially, 3500 simulations of one-dimensional compression tests on coarse-grained sand were conducted with the three-dimensional (3D) discrete element method (DEM) to construct a database. In this process, the particle positions were randomly altered, so the particle assemblages changed. Interestingly, besides confirming the influence of the particle size distribution parameters, the stress-strain curves differed even under identical gradation statistics when the particle positions varied. The obtained data were partitioned into training, validation, and testing datasets at a 7:2:1 ratio. To convert each DEM model into a multi-dimensional matrix that a computer can process, the 3D DEM models were first sliced to extract multi-layer two-dimensional (2D) cross-sectional data. Redundant information was then eliminated via gray processing, and the slices were stacked into a new 3D matrix representing the granular soil's fabric. Using the Python language and the PyTorch framework, a 3D convolutional neural network (CNN) model was developed to relate the constrained modulus obtained from the DEM simulations to the soil's fabric. The mean squared error (MSE) function was used to assess the loss value during training. For learning rates (LR) in the range 10^-5 to 10^-1 and batch sizes (BS) of 4, 8, 16, 32, and 64, the loss stabilized after 100 training epochs on the training and validation datasets, reaching a minimum at BS = 32 and LR = 10^-3. On the testing set, comparing the constrained modulus predicted by the 3D CNN with the modulus simulated via DEM yields a minimum mean absolute percentage error (MAPE) of 4.43% under the optimized condition, demonstrating the accuracy of this approach. Thus, by combining DEM and CNNs, the variation of the soil's mechanical characteristics with its random fabric can be efficiently evaluated by directly tracking the particle assemblages.
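The slicing-and-stacking preprocessing described above can be sketched with plain NumPy: rasterize a particle assembly into a voxel grid, extract evenly spaced 2D cross-sections, and restack them into the 3D "fabric" matrix fed to the CNN. The sphere rasterizer, grid size, and slice count below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def voxelize_spheres(centers, radii, grid=32):
    # Rasterise spherical particles into a binary 3D occupancy grid
    # on the unit cube (a stand-in for the DEM particle assembly).
    axes = np.linspace(0.0, 1.0, grid)
    X, Y, Z = np.meshgrid(axes, axes, axes, indexing='ij')
    vol = np.zeros((grid, grid, grid), dtype=float)
    for (cx, cy, cz), r in zip(centers, radii):
        vol[(X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2 <= r ** 2] = 1.0
    return vol

def slice_and_stack(vol, n_slices=8):
    # Extract evenly spaced 2D cross-sections along one axis and
    # restack them into a reduced 3D "fabric" matrix for the CNN.
    idx = np.linspace(0, vol.shape[2] - 1, n_slices).astype(int)
    return np.stack([vol[:, :, k] for k in idx], axis=0)

centers = [(0.3, 0.3, 0.3), (0.7, 0.6, 0.5), (0.5, 0.8, 0.8)]
radii = [0.15, 0.20, 0.10]
fabric = slice_and_stack(voxelize_spheres(centers, radii), n_slices=8)
print(fabric.shape)  # (8, 32, 32)
```

The resulting `(slices, height, width)` tensor is the kind of input a 3D convolutional layer consumes; the gray-processing step in the paper is reduced here to a binary occupancy value per voxel.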
Funding: Supported, in part, by the National Natural Science Foundation of China under grant number 62272236; in part, by the Natural Science Foundation of Jiangsu Province under grant numbers BK20201136 and BK20191401; and, in part, by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
Abstract: Robust watermarking requires finding features that remain invariant under multiple attacks so that the watermark can be extracted correctly. Deep learning is extremely powerful at extracting features, and watermarking algorithms based on deep learning have attracted widespread attention. Most existing methods use small 3×3 convolution kernels to extract image features and embed the watermark. However, the effective receptive field of a small kernel is extremely confined, so the pixels that each watermark bit can affect are restricted, limiting watermarking performance. To address these problems, we propose a watermarking network based on large-kernel convolution and adaptive weight assignment for the loss functions. It uses large-kernel depth-wise convolution to extract features that capture large-scale image information, and then projects the watermark into a high-dimensional space via 1×1 convolution to achieve adaptability in the channel dimension. As a result, the modification that the embedded watermark makes to the cover image is spread over more pixels. Because the magnitudes and convergence rates of the individual loss functions differ, an adaptive loss-weight assignment strategy is proposed that lets the weights participate in network training and adjusts them dynamically. Further, a high-frequency wavelet loss is proposed, by which the watermark is restricted to the low-frequency wavelet sub-bands, thereby enhancing its robustness against image compression. Experimental results show that the peak signal-to-noise ratio (PSNR) of the encoded image reaches 40.12, the structural similarity (SSIM) reaches 0.9721, and the watermark is robust against various types of noise.
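A minimal sketch of one possible adaptive loss-weight assignment: weight each loss term inversely to its current magnitude, then normalise, so that terms with very different scales contribute comparably to the total. This inverse-magnitude scheme is an assumption for illustration; the paper's strategy additionally lets the weights evolve over the course of training.

```python
import numpy as np

def adaptive_weights(losses, eps=1e-8):
    # Weight each loss inversely to its magnitude and normalise to sum
    # to one, so no single large-scale term dominates the total loss.
    inv = 1.0 / (np.asarray(losses, dtype=float) + eps)
    return inv / inv.sum()

# Toy step: image-fidelity, message-decoding and wavelet losses with
# very different magnitudes (values made up for illustration).
losses = np.array([40.0, 0.4, 4.0])
w = adaptive_weights(losses)
weighted = w * losses          # each term now carries a similar share
total = weighted.sum()
```

After reweighting, all three terms contribute almost equally to `total`, which is the balancing behaviour an adaptive assignment is meant to achieve; a training loop would recompute `w` each step as the losses shrink at different rates.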
Funding: Supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20060280003) and the Shanghai Leading Academic Discipline Project (T0102).
Abstract: Most existing algorithms for blind source separation assume that the sources are statistically independent. In many practical applications, however, the source signals are nonnegative and mutually statistically dependent. When the observations are nonnegative linear combinations of nonnegative sources, the correlation coefficients of the observations are larger than those of the source signals. In this letter, a novel Nonnegative Matrix Factorization (NMF) algorithm with least-correlated-component constraints is proposed for the blind separation of convolutively mixed sources. The algorithm relaxes the source-independence assumption and requires only low-complexity algebraic computations. Simulation results on blind source separation, including real face image data, indicate that the sources can be successfully recovered with the algorithm.
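A plain multiplicative-update NMF (Lee-Seung) illustrates the nonnegative factorization at the heart of the method: given nonnegative mixtures V, recover nonnegative factors W (mixing) and H (sources) with V ≈ WH. The least-correlated-component penalty described in the abstract is omitted from this sketch, so the code below is a baseline, not the proposed algorithm.

```python
import numpy as np

def nmf(V, rank, iters=1000, seed=0):
    # Classic multiplicative-update NMF for the Frobenius objective:
    # V ≈ W @ H with W, H >= 0 (Lee & Seung updates).
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two nonnegative sources mixed by a nonnegative matrix, as in the
# observation model of the abstract (toy data, not the paper's).
S = np.abs(np.random.default_rng(1).random((2, 200)))
A = np.array([[0.8, 0.2], [0.3, 0.7]])
V = A @ S

W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the multiplicative updates only ever rescale nonnegative entries, W and H stay nonnegative by construction; the constrained variant in the letter would add a term steering the rows of H toward minimal mutual correlation.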
Abstract: To address the low classification accuracy of crime scene investigation images caused by low-level features that describe image content imprecisely, an improved locality-constrained linear coding (LLC) algorithm using deep learning features is proposed. A sliding-window method extracts dense convolutional neural network (CNN) features from each image; an approximated LLC algorithm then rapidly encodes the dense CNN features with max pooling, and multi-scale spatial pyramid matching produces sparse coding features that contain spatial position information. Finally, a support vector machine classifies the investigation images, yielding effective image features. Comparative experiments show that the proposed algorithm achieves higher classification accuracy.
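The approximated LLC encoding step can be sketched as follows: for each feature, keep only the k nearest codebook atoms and solve a small regularized least-squares problem with a sum-to-one constraint, giving a sparse code over the full codebook. The random codebook and feature below stand in for the dense CNN features; `k` and the regularizer `beta` are illustrative values.

```python
import numpy as np

def llc_code(x, B, k=5, beta=1e-4):
    # Approximated locality-constrained linear coding: restrict the
    # code to the k nearest atoms, then solve the small constrained
    # least-squares system in closed form.
    d = np.linalg.norm(B - x, axis=1)      # distance to every atom
    idx = np.argsort(d)[:k]                # k nearest neighbours
    Bk = B[idx] - x                        # shift selected atoms to x
    C = Bk @ Bk.T + beta * np.eye(k)       # regularised local covariance
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                           # enforce sum-to-one constraint
    code = np.zeros(len(B))
    code[idx] = w                          # sparse code over full codebook
    return code

rng = np.random.default_rng(0)
B = rng.random((64, 128))     # 64-atom codebook of 128-D features
x = rng.random(128)           # one dense feature vector
code = llc_code(x, B, k=5)
```

In the full pipeline, codes like this would be max-pooled over spatial pyramid cells before being fed to the SVM classifier.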