A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion problems, especially multi-focus image fusion. However, obtaining a decision map that yields a satisfactory fusion result is essential yet usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and leads to high-quality fusion results. Experimental results clearly validate that the proposed algorithm achieves state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
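The decision-map fusion rule described here can be sketched as follows. This is a minimal NumPy illustration (the function name and toy arrays are ours, not from the letter), assuming a binary map in which 1 marks pixels judged in focus in the first source image:

```python
import numpy as np

def fuse_with_decision_map(img_a, img_b, decision):
    """Spatial-domain fusion: take each pixel from img_a where the
    decision map is 1 (img_a judged in focus there), else from img_b."""
    decision = decision.astype(img_a.dtype)
    return decision * img_a + (1.0 - decision) * img_b

# Toy 2x2 example: left column focused in A, right column focused in B.
a = np.array([[10.0, 0.0], [10.0, 0.0]])
b = np.array([[0.0, 20.0], [0.0, 20.0]])
d = np.array([[1, 0], [1, 0]])
fused = fuse_with_decision_map(a, b, d)
# fused picks 10s from A and 20s from B.
```

Everything else in such a pipeline (including the ACNN itself) exists to make the map `d` accurate; once the map is reliable, the fusion step is this simple weighted selection.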
In image fusion, measuring the local character and clarity of a region is called activity measurement. Traditional measurements are determined only by the high-frequency detail coefficients, which makes the energy expression insufficient to reflect local clarity. Therefore, this paper proposes a novel construction method for activity measurement. First, the source images are decomposed with the wavelet transform, and the high- and low-frequency wavelet coefficients are then used jointly, with the normalized variance taken as the weight of the high-frequency energy. Second, the measurement is computed from the weighted energy, which characterizes the local clarity. Finally, the fusion coefficients are obtained. To illustrate the superiority of the new method, three kinds of assessment indicators are provided. Experimental results show that, compared with traditional methods, the new method reduces blurring and improves the indicator values, making it more suitable for practical application.
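A hedged sketch of the weighted-energy idea: a one-level Haar decomposition stands in for the paper's wavelet transform, and the exact normalization of the variance weight is our assumption, not the paper's formula.

```python
import numpy as np

def haar_level1(img):
    """One-level 2D Haar decomposition (assumes even dimensions)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def weighted_activity(img, eps=1e-12):
    """Activity combining low- and high-frequency energy, with a
    normalized-variance weight on the high-frequency part (our reading
    of the construction; the normalization itself is an assumption)."""
    ll, lh, hl, hh = haar_level1(img)
    high_energy = np.sum(lh**2 + hl**2 + hh**2)
    low_energy = np.sum(ll**2)
    w = img.var() / (img.var() + img.mean()**2 + eps)  # in [0, 1)
    return w * high_energy + low_energy

sharp = np.tile(np.array([[0.0, 1.0], [1.0, 0.0]]), (4, 4))  # high-contrast block
blurred = np.full((8, 8), 0.5)                               # flat block
# The sharp block should score higher activity than the flat one.
```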
Considering the continuous advancement of imaging sensors, a host of new issues have emerged. A major problem is how to locate focused areas more accurately for multi-focus image fusion. Multi-focus image fusion extracts the focused information from the source images to construct a globally in-focus image that contains more information than any single source image. In this paper, a novel multi-focus image fusion method based on the Laplacian operator and region optimization is proposed. Evaluating image saliency with the Laplacian operator easily distinguishes focused from defocused regions, and the resulting decision map carries less residual information than those of other methods. To obtain a precise decision map, focus-area and edge optimization based on regional connectivity and edge detection are applied. Finally, the source images are fused through the decision map. Experimental results indicate that the proposed algorithm outperforms a series of other algorithms in terms of both subjective and objective evaluations.
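A minimal sketch of Laplacian-based focus detection, assuming a standard 3x3 Laplacian kernel and a pixel-wise comparison rule; the saliency evaluation and region optimization in the paper are more elaborate than this.

```python
import numpy as np

def laplacian_energy(img):
    """Absolute response of the 3x3 Laplacian at interior pixels,
    used here as a simple per-pixel focus (saliency) score."""
    interior = -4.0 * img[1:-1, 1:-1]
    interior += img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
    return np.abs(interior)

def decision_map(img_a, img_b):
    """1 where img_a responds more strongly to the Laplacian (sharper), else 0."""
    return (laplacian_energy(img_a) >= laplacian_energy(img_b)).astype(np.uint8)

# Sharp step edge in A versus a flat (defocused-looking) patch in B.
a = np.zeros((4, 4)); a[:, 2:] = 1.0
b = np.full((4, 4), 0.5)
dm = decision_map(a, b)
# Every interior pixel of A out-responds B, so dm is all ones.
```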
Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to image degradation. This paper presents a fusion framework based on block matching and 3D (BM3D) multi-scale transforms. The algorithm first divides each image into blocks and groups these 2D image blocks into 3D arrays by their similarity. It then applies a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to convert the arrays into transform coefficients, and the resulting low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from the fused 3D block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze several existing algorithms and the use of different transforms in the 3D transform step, e.g., the non-subsampled contourlet transform (NSCT) and the non-subsampled shearlet transform (NSST). Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also achieves better objective evaluation criteria than state-of-the-art methods.
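The block-grouping step can be illustrated as below. This is a simplified sketch with non-overlapping blocks and SSD similarity (BM3D implementations typically use overlapping search windows and thresholded matching), and the names are ours:

```python
import numpy as np

def group_similar_blocks(img, block=4, ref_pos=(0, 0), k=3):
    """Stack the k 2D blocks most similar (by sum of squared differences)
    to the reference block into one 3D array, as in BM3D-style grouping."""
    h, w = img.shape
    ref = img[ref_pos[0]:ref_pos[0]+block, ref_pos[1]:ref_pos[1]+block]
    candidates = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = img[i:i+block, j:j+block]
            candidates.append((np.sum((patch - ref) ** 2), patch))
    candidates.sort(key=lambda t: t[0])              # most similar first
    return np.stack([p for _, p in candidates[:k]])  # shape (k, block, block)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
group = group_similar_blocks(img, block=4, k=3)
# The reference block always matches itself, so it leads the group.
```

The resulting `(k, block, block)` array is exactly the shape the 3D transform (2D multi-scale per slice plus a 1D transform along the grouping axis) operates on.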
Two key points of pixel-level multi-focus image fusion are the clarity measure and the pixel-coefficient fusion rule. Along with different improvements on these two points, various fusion schemes have been proposed in the literature. However, traditional clarity measures are not designed for compressive imaging measurements, which map the source scene with a random or near-random measurement matrix. This paper presents a novel, efficient multi-focus image fusion framework for compressive imaging sensor networks. Here, the clarity measure of the raw compressive measurements is obtained not from the random sampling data itself but from selected Hadamard coefficients, which can also be acquired efficiently from the compressive imaging system. The compressive measurements of the different images are then fused with a selection fusion rule. Finally, block-based compressed sensing (CS) coupled with iterative projection-based reconstruction is used to recover the fused image. Experimental results on commonly used test data demonstrate the effectiveness of the proposed method.
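A hedged sketch of a Hadamard-based clarity measure on a 1D signal, using the fast Walsh-Hadamard transform. Taking the energy of the largest AC coefficients as the clarity proxy is our simplification of the paper's selected-coefficient scheme:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2^n vector."""
    x = x.astype(float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def hadamard_clarity(signal, keep=4):
    """Clarity proxy: energy in the largest AC Hadamard coefficients
    (the DC term is skipped so overall brightness does not dominate)."""
    coeffs = fwht(signal)[1:]                      # drop DC
    return float(np.sum(np.sort(np.abs(coeffs))[-keep:] ** 2))

t = np.arange(8)
sharp = ((t // 2) % 2).astype(float)   # square wave: strong AC content
flat = np.full(8, 0.5)                 # constant signal: no AC content
```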
The three-dimensional (3D) model is of great significance for analyzing the performance of nonwovens. However, existing modelling methods cannot reconstruct the 3D structure of nonwovens at low cost. A new method based on deep learning is proposed to reconstruct 3D models of nonwovens from multi-focus images. A convolutional neural network is trained to extract clear fibers from sequence images, and image processing algorithms obtain the radius, central axis, and depth of each fiber from the extraction results. Based on this information, 3D models are built in 3D space. Furthermore, self-developed algorithms optimize the central axis and depth of the fibers, making them more realistic and continuous. The method reconstructs 3D models of nonwovens conveniently and at lower cost.
The aim of this paper is to solve the over-segmentation produced by the watershed segmentation algorithm and the unstable clarity judgment of small areas in image fusion. A multi-focus image fusion algorithm based on CNN segmentation and the algebraic multi-grid method (CNN-AMG) is proposed. First, the CNN segmentation result is used to guide the merging of the regions generated by the watershed segmentation method. The clear regions are then selected into a temporary fusion image, and the final fusion is performed according to a clarity evaluation index computed with the algebraic multi-grid method (AMG). Experimental results show that the fused image quality obtained by the CNN-AMG algorithm outperforms traditional fusion methods, such as the DSIFT, CNN, ASR, and GFF fusion methods, on several evaluation indexes.
We propose a multi-focus image fusion method in which a fully convolutional network for focus detection (FD-FCN) is constructed. To obtain more precise focus detection maps, we add skip layers to the network so that both detailed and abstract visual information are available when FD-FCN generates the maps. A new training dataset for the proposed network is constructed based on the CIFAR-10 dataset. The image fusion algorithm using FD-FCN contains three steps: focus maps are obtained with FD-FCN, a decision map is generated by applying a morphological process to the focus maps, and the images are fused using the decision map. We carry out several sets of experiments, and both subjective and objective assessments demonstrate the superiority of the proposed fusion method over state-of-the-art algorithms.
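The morphological post-processing step can be approximated by removing small connected components from the focus map. This sketch (4-connectivity, a size threshold of our choosing) is an illustration, not the paper's exact morphological process:

```python
import numpy as np

def remove_small_regions(mask, min_size):
    """Drop 4-connected foreground components smaller than min_size --
    a minimal stand-in for morphological cleanup of a focus map."""
    mask = mask.astype(bool)
    out = mask.copy()
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                stack, comp = [(si, sj)], []
                seen[si, sj] = True
                while stack:                      # flood fill one component
                    i, j = stack.pop()
                    comp.append((i, j))
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                if len(comp) < min_size:          # erase undersized components
                    for i, j in comp:
                        out[i, j] = False
    return out.astype(np.uint8)

focus = np.zeros((5, 5), dtype=np.uint8)
focus[0:3, 0:3] = 1        # large focused region (9 pixels), kept
focus[4, 4] = 1            # isolated speck, likely misclassified, removed
clean = remove_small_regions(focus, min_size=4)
```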
Multi-focus image fusion is an increasingly important component of image fusion and plays a key role in imaging. In this paper, we put forward a novel multi-focus image fusion method that employs fractional-order derivatives and intuitionistic fuzzy sets. The original image is decomposed into a base layer and a detail layer, and a new fractional-order spatial frequency is constructed to reflect the clarity of the image. The fractional-order spatial frequency serves as the rule for detail-layer fusion, while intuitionistic fuzzy sets are introduced to fuse the base layers. Experimental results demonstrate that the proposed fusion method outperforms state-of-the-art methods for multi-focus image fusion.
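For reference, the classic (integer-order) spatial frequency that the fractional-order version generalizes can be computed as follows; the fractional-order derivative itself is not reproduced here.

```python
import numpy as np

def spatial_frequency(img):
    """Classic spatial frequency: root of the mean squared first-order
    row and column differences. The paper replaces the first-order
    difference with a fractional-order derivative; order 1 recovers this."""
    rf2 = np.mean(np.diff(img, axis=1) ** 2)   # row frequency
    cf2 = np.mean(np.diff(img, axis=0) ** 2)   # column frequency
    return float(np.sqrt(rf2 + cf2))

detail_sharp = np.tile([[0.0, 1.0], [1.0, 0.0]], (4, 4))  # busy detail layer
detail_flat = np.full((8, 8), 0.5)                        # smooth detail layer
# The checkerboard's every neighbor differs by 1, so SF = sqrt(2);
# the flat layer has SF = 0.
```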
Organoids possess immense potential for unraveling the intricate functions of human tissues and facilitating preclinical disease treatment. Their applications span from high-throughput drug screening to the modeling of complex diseases, with some even achieving clinical translation. Changes in the overall size, shape, boundary, and other morphological features of organoids provide a noninvasive method for assessing organoid drug sensitivity. However, the precise segmentation of organoids in bright-field microscopy images is made difficult by the complexity of organoid morphology and by interference, including overlapping organoids, bubbles, dust particles, and cell fragments. This paper introduces the precision organoid segmentation technique (POST), a deep-learning algorithm for segmenting challenging organoids under simple bright-field imaging conditions. Unlike existing methods, POST accurately segments each organoid and eliminates various artifacts encountered during organoid culturing and imaging. Furthermore, it is sensitive to and aligns with measurements of organoid activity in drug sensitivity experiments. POST is expected to be a valuable tool for drug screening using organoids owing to its capability of automatically and rapidly eliminating interfering substances, thereby streamlining organoid analysis and drug screening.
Although Transformer-based image restoration methods have demonstrated impressive performance, existing Transformers still insufficiently exploit multiscale information. Previous non-Transformer-based studies have shown that incorporating multiscale features is crucial for improving restoration results. In this paper, we propose a multiscale Transformer (MST) that captures cross-scale attention among tokens, thereby effectively leveraging the multiscale patch recurrence prior of natural images. Furthermore, we introduce a channel-gate feed-forward network (CGFN) to enhance inter-channel information aggregation and reduce channel redundancy. To simultaneously utilise global, local and multiscale features, we design a multitype feature integration block (MFIB). Extensive experiments on both image super-resolution and HEVC compressed video artefact reduction demonstrate that the proposed MST achieves state-of-the-art performance. Ablation studies further verify the effectiveness of each proposed module.
The absorption and reflection of light underwater lead to problems such as color distortion and blue-green bias in underwater images. In this study, a depthwise separable convolution-based generative adversarial network (GAN) algorithm is proposed. Taking the GAN as the basic framework, it combines a depthwise separable convolution module, an attention mechanism, and a reconstructed convolution module to enhance degraded underwater images. Multi-scale features are captured by the depthwise separable convolution module, and the attention mechanism is used to emphasize important features. The reconstructed convolution module further extracts and fuses local and global features. Experimental results show that the algorithm performs well in correcting the color bias and blurring of underwater images, with PSNR reaching 27.835, SSIM reaching 0.883, UIQM reaching 3.205, and UCIQE reaching 0.713. The enhanced images outperform those of the comparison algorithms in both subjective and objective metrics.
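Depthwise separable convolution reduces parameters by splitting a standard convolution into a per-channel spatial filter and a 1x1 channel mixer. A quick parameter-count comparison (the channel counts are illustrative, not taken from the paper's architecture):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) followed by
    a 1 x 1 pointwise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)                  # 64*128*9  = 73728
separable = depthwise_separable_params(64, 128, 3)  # 576 + 8192 = 8768
ratio = standard / separable                        # roughly 8.4x fewer parameters
```

This roughly k^2-fold saving is what makes the generator light enough for this kind of enhancement network.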
Near-infrared image sensors are widely used in fields such as material identification, machine vision, and autonomous driving. Lead sulfide colloidal quantum dot-based infrared photodiodes can be integrated with silicon-based readout circuits in a single step. Based on this, we propose a photodiode with an n-i-p structure, which removes the buffer layer and further simplifies the manufacturing process of quantum dot image sensors, thus reducing manufacturing costs. Additionally, given the noise complexity of quantum dot image sensors when capturing images, traditional denoising and non-uniformity correction methods often do not achieve optimal results. For the noise and stripe-type non-uniformity commonly encountered in infrared quantum dot detector images, a network architecture incorporating multiple key modules has been developed. The network combines channel attention and spatial attention mechanisms, dynamically adjusting the importance of feature maps to enhance its ability to distinguish noise from detail. Meanwhile, a residual dense feature fusion module further improves the network's handling of complex image structures through hierarchical feature extraction and fusion, and a pyramid pooling module effectively captures information at different scales, improving the network's multi-scale feature representation. Through the collaborative effect of these modules, the network can better handle various mixed-noise and non-uniformity issues. Experimental results show that it outperforms the traditional U-Net network in denoising and image correction tasks.
In the image fusion field, fusing infrared images (IRIs) and visible images (VIs) well is a key task. The differences between IRIs and VIs make it challenging to fuse the two into a single high-quality image; accordingly, the advantages of both must be combined efficiently while their shortcomings are overcome. To handle this challenge, we developed an end-to-end IRI and VI fusion method based on frequency decomposition and enhancement. Applying concepts from frequency-domain analysis, we used a layering mechanism to better capture the salient thermal targets of the IRIs and the rich textural information of the VIs, significantly boosting fusion quality and effectiveness. In addition, the backbone network combines Restormer blocks and Dense blocks: Restormer blocks use global attention to extract shallow features, while Dense blocks integrate shallow and deep features, avoiding the loss of shallow attributes. Extensive experiments on the TNO and MSRS datasets demonstrated that the suggested method achieves state-of-the-art (SOTA) performance on various metrics: entropy (EN), mutual information (MI), standard deviation (SD), the structural similarity index measure (SSIM), fusion quality (Qabf), pixel feature mutual information (FMI_pixel), and modified visual information fidelity (VIF_m).
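One of the metrics listed above, entropy (EN), has a standard definition that can be sketched directly; the toy images below are ours:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (EN) of an 8-bit image's gray-level histogram;
    higher EN indicates that the fused image carries more information."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

flat = np.zeros((16, 16), dtype=np.uint8)               # one gray level -> EN = 0
mixed = np.arange(256, dtype=np.uint8).reshape(16, 16)  # all 256 levels -> EN = 8
```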
Microscopy imaging is fundamental in analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes. These methods combine automated feature extraction with physics-informed postprocessing. Despite this progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
Clouds are one of the leading causes of sun shading, which reduces the direct horizontal irradiance and curtails photovoltaic (PV) power. Estimating cloud cover is critical to accurately predicting PV generation over very short horizons (seconds to minutes). To achieve precise forecasting of cloud cover, an image preprocessing method based on total-sky images is proposed to remove interference and address image edge distortion. An optimal threshold estimation method is further designed to achieve higher cloud identification precision. Considering the meteorological properties of clouds, a random hypersurface model (RHM) based on the Gaussian mixture probability hypothesis density (GM-PHD) filter is applied to track clouds. The GM-PHD filter can track the rotation and diffusion of clouds, which helps to estimate sun-cloud collisions. Furthermore, a hybrid model combining an autoregressive integrated moving average (ARIMA) model and a backpropagation (BP) neural network is applied for intra-hour PV power forecasting. The experimental results demonstrate that the proposed cloud-tracking-based PV power forecasting model captures the ramp behavior of PV power, improving forecasting precision.
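The "optimal threshold estimation" step is not specified in detail here; one common way to separate cloud from sky pixels is Otsu's method, sketched below as an assumption rather than the paper's exact algorithm:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the histogram's two-class split."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    total_mean = np.sum(np.arange(levels) * p)
    best_t, best_var = 0, -1.0
    w0 = mu0 = 0.0
    for t in range(levels - 1):
        w0 += p[t]          # class-0 weight (levels <= t)
        mu0 += t * p[t]     # class-0 unnormalized mean
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        var_between = (total_mean * w0 - mu0) ** 2 / (w0 * w1)
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two clearly separated populations: dark sky (~30) and bright cloud (~220).
img = np.concatenate([np.full(100, 30), np.full(100, 220)]).astype(np.uint8)
t = otsu_threshold(img)
# Any threshold between the two populations separates them cleanly.
```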
Background: Brain volume measurement serves as a critical approach for assessing brain health. Considering the close biological connection between the eyes and brain, this study aims to investigate the feasibility of estimating brain volume from retinal fundus imaging integrated with clinical metadata, offering a cost-effective approach to assessing brain health. Methods: Based on clinical information, retinal fundus images, and neuroimaging data from a multicenter, population-based cohort study (the Kailuan Study), we propose a cross-modal correlation representation (CMCR) network to elucidate the intricate co-degenerative relationships between the eyes and brain for 755 subjects. Specifically, individual clinical information, followed up for as long as 12 years, was encoded as a prompt to enhance the accuracy of brain volume estimation. Independent internal and external validation were performed to assess the robustness of the proposed model. Root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics were employed to quantitatively evaluate the quality of synthetic brain images derived from retinal imaging data. Results: The proposed framework yielded average RMSE, PSNR, and SSIM values of 98.23, 35.78 dB, and 0.64, respectively, significantly outperforming five other methods: the multi-channel variational autoencoder (mcVAE), Pixel-to-Pixel (Pixel2pixel), transformer-based U-Net (TransUNet), multi-scale transformer network (MT-Net), and residual vision transformer (ResViT). Two-dimensional (2D) and three-dimensional (3D) visualizations showed that the shape and texture of the synthetic brain images generated by the proposed method most closely resembled those of actual brain images. Thus, the CMCR framework accurately captured the latent structural correlations between the fundus and the brain. The average difference between predicted and actual brain volumes was 61.36 cm³, a relative error of 4.54%. When all of the clinical information (including age and sex, daily habits, cardiovascular factors, metabolic factors, and inflammatory factors) was encoded, the difference decreased to 53.89 cm³, a relative error of 3.98%. Based on brain magnetic resonance images synthesized from retinal fundus images, the volumes of brain tissues could be estimated with high accuracy. Conclusion: This study provides an innovative, accurate, and cost-effective approach to characterizing brain health through readily accessible retinal fundus images.
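The RMSE and PSNR metrics used above have standard definitions, sketched here for a peak value of 255 (the toy arrays are ours):

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((x.astype(float) - y.astype(float)) ** 2)))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    e = rmse(x, y)
    return float("inf") if e == 0 else float(20.0 * np.log10(peak / e))

ref = np.zeros((8, 8))
noisy = ref + 10.0       # constant error of 10 gray levels
# rmse = 10; psnr = 20 * log10(255 / 10) ≈ 28.13 dB
```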
Medical image segmentation is of critical importance in contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as gradient vanishing during deep network training and spatial detail loss during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which improves upon U-KAN in two ways. First, a residual network is introduced into the encoder to mitigate gradient vanishing through cross-layer identity mappings, enhancing the modelling of complex pathological structures. Second, Efficient Local Attention (ELA) is integrated to suppress spatial detail loss during downsampling, improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, performing particularly well on the TN-SCUI 2020 dataset with an IoU of 88.18% and a Dice of 93.57%, improvements of 3.05% and 1.72% over the baseline model, respectively. These results demonstrate RE-UKAN's superior detail retention and boundary recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for clinical precision segmentation.
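The IoU and Dice scores reported above are standard overlap measures between a predicted mask and the ground truth; a minimal sketch (the toy masks are ours):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union)

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum()))

gt = np.zeros((4, 4), dtype=np.uint8); gt[:2, :] = 1      # 8 pixels
pred = np.zeros((4, 4), dtype=np.uint8); pred[:3, :] = 1  # 12 pixels, covers gt
# inter = 8, union = 12 -> IoU = 2/3; Dice = 16/20 = 0.8
```

Note that Dice is always at least as large as IoU, which is why papers often report both.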
Funding: Supported by the National Natural Science Foundation of China (No. 61174193).
Funding: Sponsored by the National Natural Science Foundation of China (Grant Nos. 61275010, 61201237) and the Fundamental Research Funds for the Central Universities (Grant Nos. HEUCFZ1129, HEUCF120805).
Funding: Supported by the National Natural Science Foundation of China (61572063, 61401308), the Fundamental Research Funds for the Central Universities (2016YJS039), the Natural Science Foundation of Hebei Province (F2016201142, F2016201187), the Natural Social Foundation of Hebei Province (HB15TQ015), the Science Research Project of Hebei Province (QN2016085, ZC2016040), and the Natural Science Foundation of Hebei University (2014-303).
Funding: National Natural Science Foundation of China (No. 61771123).
Abstract: The three-dimensional (3D) model is of great significance for analyzing the performance of nonwovens. However, existing modelling methods could not reconstruct the 3D structure of nonwovens at low cost. A new method based on deep learning was proposed to reconstruct 3D models of nonwovens from multi-focus images. A convolutional neural network was trained to extract clear fibers from sequence images, and image processing algorithms were used to obtain the radius, central axis, and depth information of the fibers from the extraction results. Based on this information, 3D models were built in 3D space. Furthermore, self-developed algorithms optimized the central axis and depth of the fibers, making them more realistic and continuous. The method can reconstruct 3D models of nonwovens conveniently and at lower cost.
Abstract: The aim of this paper is to solve the over-segmentation produced by the watershed segmentation algorithm and the unstable clarity judgment in small regions during image fusion. A multi-focus image fusion algorithm based on CNN segmentation and the algebraic multi-grid method (CNN-AMG) is proposed. Firstly, the CNN segmentation result is utilized to guide the merging of the regions generated by the watershed segmentation method. Then clear regions are selected into a temporary fusion image, and the final fusion is performed according to a clarity evaluation index computed with the algebraic multi-grid method (AMG). The experimental results show that the fused image quality obtained by the CNN-AMG algorithm outperforms traditional fusion methods such as the DSIFT, CNN, ASR, and GFF fusion methods on several evaluation indexes.
Funding: Project supported by the National Natural Science Foundation of China (No. 61801190), the Natural Science Foundation of Jilin Province, China (No. 20180101055JC), and the Outstanding Young Talent Foundation of Jilin Province, China (No. 20180520029JH).
Abstract: We propose a multi-focus image fusion method in which a fully convolutional network for focus detection (FD-FCN) is constructed. To obtain more precise focus detection maps, we add skip layers to the network so that both detailed and abstract visual information are available when FD-FCN generates the maps. A new training dataset for the proposed network is constructed from the CIFAR-10 dataset. The image fusion algorithm using FD-FCN contains three steps: focus maps are obtained using FD-FCN, a decision map is generated by applying a morphological process to the focus maps, and image fusion is performed using the decision map. We carry out several sets of experiments, and both subjective and objective assessments demonstrate the superiority of the proposed fusion method over state-of-the-art algorithms.
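The second and third steps, cleaning the decision map morphologically and fusing with it, can be illustrated as follows. The 3×3 majority vote is a stand-in for the unspecified morphological process, and the toy images are assumptions:

```python
import numpy as np

def fuse_with_decision_map(img_a, img_b, focus_a):
    """Pick each pixel from the image judged in-focus. `focus_a` is a
    boolean decision map (True = take img_a). In the paper the focus
    maps themselves come from FD-FCN; here the map is hand-made."""
    return np.where(focus_a, img_a, img_b)

def majority_clean(mask, k=3):
    """Toy morphological step: k x k majority vote removes isolated
    misclassified pixels in the decision map."""
    h, w = mask.shape
    pad = k // 2
    padded = np.pad(mask.astype(int), pad, mode="edge")
    out = np.zeros_like(mask)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].sum() > (k * k) // 2
    return out

a = np.zeros((6, 6)); a[:, :3] = 1.0   # left half sharp in a
b = np.zeros((6, 6)); b[:, 3:] = 1.0   # right half sharp in b
mask = np.zeros((6, 6), dtype=bool); mask[:, :3] = True
mask[0, 5] = True                       # an isolated wrong decision
fused = fuse_with_decision_map(a, b, majority_clean(mask))
print(fused.sum())  # 36.0 — every pixel recovered from its sharp half
```

The isolated wrong decision at (0, 5) is voted away before fusion, so the fused image takes the sharp half of each source everywhere.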
Abstract: Multi-focus image fusion is an increasingly important component of image fusion and plays a key role in imaging. In this paper, we put forward a novel multi-focus image fusion method that employs fractional-order derivatives and intuitionistic fuzzy sets. The original image is decomposed into a base layer and a detail layer. Furthermore, a new fractional-order spatial frequency is constructed to reflect the clarity of the image. The fractional-order spatial frequency serves as the rule for detail-layer fusion, and intuitionistic fuzzy sets are introduced to fuse the base layers. Experimental results demonstrate that the proposed fusion method outperforms state-of-the-art methods for multi-focus image fusion.
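For reference, the classic integer-order spatial frequency that the fractional-order version generalizes can be computed as the RMS of first differences along rows and columns; this is the standard baseline, not the paper's fractional-order formula:

```python
import numpy as np

def spatial_frequency(img):
    """Classic spatial frequency: RMS of row-wise and column-wise
    first differences. The paper replaces the integer-order difference
    with a fractional-order derivative; this is only the baseline."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

sharp = np.tile([0.0, 1.0], (8, 4))   # alternating columns: high detail
smooth = np.full((8, 8), 0.5)         # flat region: no detail
print(spatial_frequency(sharp) > spatial_frequency(smooth))  # True
```

A flat (defocused) region scores zero, while strongly textured (in-focus) content scores high, which is what makes the measure usable as a detail-layer fusion rule.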
Funding: Supported by the National Key R&D Program of China (No. 2022YFC2504403), the National Natural Science Foundation of China (No. 62172202), the Experiment Project of the China Manned Space Program (No. HYZHXM01019), and the Fundamental Research Funds for the Central Universities from Southeast University (No. 3207032101C3).
Abstract: Organoids possess immense potential for unraveling the intricate functions of human tissues and facilitating preclinical disease treatment. Their applications span from high-throughput drug screening to the modeling of complex diseases, with some even achieving clinical translation. Changes in the overall size, shape, boundary, and other morphological features of organoids provide a noninvasive means of assessing organoid drug sensitivity. However, precise segmentation of organoids in bright-field microscopy images is made difficult by the complexity of organoid morphology and by interference, including overlapping organoids, bubbles, dust particles, and cell fragments. This paper introduces the precision organoid segmentation technique (POST), a deep-learning algorithm for segmenting challenging organoids under simple bright-field imaging conditions. Unlike existing methods, POST accurately segments each organoid and eliminates various artifacts encountered during organoid culturing and imaging. Furthermore, it is sensitive to, and aligns with, measurements of organoid activity in drug sensitivity experiments. POST is expected to be a valuable tool for drug screening using organoids owing to its capability of automatically and rapidly eliminating interfering substances, thereby streamlining organoid analysis and the drug screening process.
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62101346 and 62301330, the Guangdong Basic and Applied Basic Research Foundation under Grants 2021A1515011702 and 2022A1515110101, the Shenzhen Science and Technology Programme under Grants JCYJ20240813141358076 and 20231121103807001, and the Guangdong Provincial Key Laboratory under Grant 2023B1212060076.
Abstract: Although Transformer-based image restoration methods have demonstrated impressive performance, existing Transformers still insufficiently exploit multiscale information. Previous non-Transformer-based studies have shown that incorporating multiscale features is crucial for improving restoration results. In this paper, we propose a multiscale Transformer (MST) that captures cross-scale attention among tokens, thereby effectively leveraging the multiscale patch-recurrence prior of natural images. Furthermore, we introduce a channel-gate feed-forward network (CGFN) to enhance inter-channel information aggregation and reduce channel redundancy. To simultaneously utilise global, local, and multiscale features, we design a multitype feature integration block (MFIB). Extensive experiments on both image super-resolution and HEVC compressed-video artefact reduction demonstrate that the proposed MST achieves state-of-the-art performance. Ablation studies further verify the effectiveness of each proposed module.
Abstract: The absorption and reflection of light underwater lead to problems such as color distortion and blue-green bias in underwater images. In this study, a depthwise separable convolution-based generative adversarial network (GAN) algorithm was proposed. Taking the GAN as the basic framework, it combined a depthwise separable convolution module, an attention mechanism, and a reconstructed convolution module to enhance underwater degraded images. Multi-scale features were captured by the depthwise separable convolution module, the attention mechanism was utilized to strengthen attention to important features, and the reconstructed convolution module further extracted and fused local and global features. Experimental results showed that the algorithm performs well in correcting the color bias and blurring of underwater images, with PSNR reaching 27.835, SSIM reaching 0.883, UIQM reaching 3.205, and UCIQE reaching 0.713. The enhanced images outperform those of the comparison algorithms in both subjective and objective metrics.
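The parameter saving that motivates depthwise separable convolution is easy to verify by counting weights; the 64→128-channel, 3×3 example below is hypothetical, not taken from the paper:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution (no bias)."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 64 -> 128 channels with a 3 x 3 kernel.
standard = conv_params(64, 128, 3)                  # 73728
separable = depthwise_separable_params(64, 128, 3)  # 576 + 8192 = 8768
print(standard / separable > 8)  # True — roughly 8.4x fewer parameters
```

The same factor applies to multiply-accumulate cost per output position, which is why the module is attractive for multi-scale feature extraction on a constrained budget.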
Funding: Supported by the National Key Research and Development Program of the 14th Five-Year Plan (2021YFA1200700), the National Natural Science Foundation of China (62535018, 62431025, 62561160113), and the Natural Science Foundation of Shanghai (23ZR1473400).
Abstract: Near-infrared image sensors are widely used in fields such as material identification, machine vision, and autonomous driving. Lead sulfide colloidal quantum-dot infrared photodiodes can be integrated with silicon-based readout circuits in a single step. Based on this, we propose a photodiode with an n-i-p structure, which removes the buffer layer and further simplifies the manufacturing process of quantum-dot image sensors, thus reducing manufacturing costs. Additionally, given the complexity of the noise in quantum-dot image sensors when capturing images, traditional denoising and non-uniformity correction methods often do not achieve optimal results. For the noise and stripe-type non-uniformity commonly encountered in infrared quantum-dot detector images, a network architecture incorporating multiple key modules has been developed. This network combines channel-attention and spatial-attention mechanisms, dynamically adjusting the importance of feature maps to enhance the ability to distinguish noise from detail. Meanwhile, a residual dense feature fusion module further improves the network's ability to process complex image structures through hierarchical feature extraction and fusion, and a pyramid pooling module effectively captures information at different scales, improving the network's multi-scale feature representation. Through the collaborative effect of these modules, the network can better handle mixed noise and image non-uniformity. Experimental results show that it outperforms the traditional U-Net network in denoising and image-correction tasks.
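The channel-attention idea, reweighting feature maps by a per-channel gate, can be sketched as below. Squeeze-and-excitation-style gating and the random gate weights are assumptions for illustration; the abstract does not specify the exact form, and in the network the gate would be learned:

```python
import numpy as np

def channel_attention(feat, reduction=2):
    """Channel attention sketch: global average pool per channel, a
    small two-layer gate, then sigmoid reweighting of each channel.
    Weights are random here; in a real network they are trained."""
    c, h, w = feat.shape
    squeeze = feat.mean(axis=(1, 2))                    # (c,)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # excitation MLP
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    gate = 1 / (1 + np.exp(-(w2 @ np.maximum(w1 @ squeeze, 0))))
    return feat * gate[:, None, None]                    # per-channel scale

feat = np.ones((4, 8, 8))  # toy feature tensor: 4 channels, 8 x 8
out = channel_attention(feat)
print(out.shape)  # (4, 8, 8)
```

Each channel is scaled by a value in (0, 1), so informative channels can be emphasized and noisy ones suppressed without changing the tensor shape.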
Funding: Funded by the Anhui Province University Key Science and Technology Project (2024AH053415), the Anhui Province University Major Science and Technology Project (2024AH040229), the Talent Research Initiation Fund Project of Tongling University (2024tlxyrc019), the Tongling University School-Level Scientific Research Project (2024tlxyptZD07), the University Synergy Innovation Program of Anhui Province (GXXT-2023-050), and the Tongling City Science and Technology Major Special Project (Unveiling and Commanding Model) (200401JB004).
Abstract: In the image fusion field, fusing infrared images (IRIs) and visible images (VIs) well is a key problem. The differences between IRIs and VIs make it challenging to fuse the two into a high-quality image, so efficiently combining the advantages of both while overcoming their shortcomings is necessary. To handle this challenge, we developed an end-to-end IRI and VI fusion method based on frequency decomposition and enhancement. Applying concepts from frequency-domain analysis, we used a layering mechanism to better capture the salient thermal targets of the IRIs and the rich textural information of the VIs, significantly boosting fusion quality and effectiveness. In addition, the backbone network combines Restormer blocks and Dense blocks: the Restormer blocks use global attention to extract shallow features, while the Dense blocks integrate shallow and deep features, thereby avoiding the loss of shallow attributes. Extensive experiments on the TNO and MSRS datasets demonstrated that the proposed method achieves state-of-the-art (SOTA) performance on various metrics: entropy (EN), mutual information (MI), standard deviation (SD), the structural similarity index measure (SSIM), fusion quality (Qabf), pixel feature mutual information (FMI_(pixel)), and modified visual information fidelity (VIF_(m)).
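The layering mechanism can be illustrated with a generic two-scale decomposition. The box-filter low-pass, the averaging rule for base layers, and the max-absolute rule for detail layers below are illustrative stand-ins for the paper's learned frequency decomposition:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box filter used as the low-pass here; the paper's
    decomposition is learned, so this is only a two-scale stand-in."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_scale_fuse(ir, vi):
    base_ir, base_vi = box_blur(ir), box_blur(vi)
    det_ir, det_vi = ir - base_ir, vi - base_vi
    base = 0.5 * (base_ir + base_vi)  # average the low-frequency layers
    # Max-absolute rule keeps the most salient high-frequency detail.
    detail = np.where(np.abs(det_ir) >= np.abs(det_vi), det_ir, det_vi)
    return base + detail

rng = np.random.default_rng(1)
ir, vi = rng.random((16, 16)), rng.random((16, 16))
fused = two_scale_fuse(ir, vi)
print(fused.shape)  # (16, 16)
```

A useful sanity check on any such scheme: fusing an image with itself reconstructs the image exactly, since base + detail sums back to the original.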
Funding: Financially supported by the Open Project Program of the Wuhan National Laboratory for Optoelectronics (No. 2022WNLOKF009), the National Natural Science Foundation of China (No. 62475216), the Key Research and Development Program of Shaanxi (No. 2024GH-ZDXM-37), the Fujian Provincial Natural Science Foundation of China (No. 2024J01060), the Startup Program of XMU, and the Fundamental Research Funds for the Central Universities.
Abstract: Microscopy imaging is fundamental to analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep-learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes; these methods combine automated feature extraction with physics-informed postprocessing. Despite this progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
Funding: Supported by the National Natural Science Foundation of China (U1909201, 62206062).
Abstract: Clouds are one of the leading causes of sun shading, which reduces the direct horizontal irradiance and curtails photovoltaic (PV) power. Estimating cloud cover is critical to accurately predicting PV generation within a very short horizon (seconds to minutes). To achieve precise forecasting of cloud cover, an image preprocessing method based on total-sky images is proposed to remove interference and address image-edge distortion. An optimal threshold estimation method is further designed to achieve higher cloud-identification precision. Considering the meteorological properties of clouds, a random hypersurface model (RHM) based on the Gaussian mixture probability hypothesis density (GM-PHD) filter is applied to track clouds. The GM-PHD filter can track the rotation and diffusion of clouds, which helps estimate sun-cloud collisions. Furthermore, a hybrid model combining an autoregressive integrated moving average (ARIMA) model and a backpropagation (BP) neural network is applied to intra-hour PV power forecasting. The experimental results demonstrate that the proposed cloud-tracking-based PV power forecasting model can capture the ramp behavior of PV power, improving forecasting precision.
Funding: Supported by the National Natural Science Foundation of China (62522119 and 62372358), the Beijing Natural Science Foundation (7242267), the Beijing Scholars Program ([2015]160), the Natural Science Basic Research Program of Shaanxi (2023-JC-QN-0719), and the Guangdong Basic and Applied Basic Research Foundation (2022A1515110453).
Abstract: Background: Brain volume measurement serves as a critical approach for assessing brain health status. Considering the close biological connection between the eyes and brain, this study aims to investigate the feasibility of estimating brain volume from retinal fundus imaging integrated with clinical metadata, and to offer a cost-effective approach for assessing brain health. Methods: Based on clinical information, retinal fundus images, and neuroimaging data derived from a multicenter, population-based cohort study, the Kai Luan Study, we proposed a cross-modal correlation representation (CMCR) network to elucidate the intricate co-degenerative relationships between the eyes and brain for 755 subjects. Specifically, individual clinical information, followed up for as long as 12 years, was encoded as a prompt to enhance the accuracy of brain volume estimation. Independent internal and external validation were performed to assess the robustness of the proposed model. Root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics were employed to quantitatively evaluate the quality of synthetic brain images derived from retinal imaging data. Results: The proposed framework yielded average RMSE, PSNR, and SSIM values of 98.23, 35.78 dB, and 0.64, respectively, significantly outperforming five other methods: the multi-channel variational autoencoder (mcVAE), Pixel-to-Pixel (Pixel2pixel), transformer-based U-Net (TransUNet), multi-scale transformer network (MT-Net), and residual vision transformer (ResViT). Two-dimensional (2D) and three-dimensional (3D) visualization results showed that the shape and texture of the synthetic brain images generated by the proposed method most closely resembled those of actual brain images; thus, the CMCR framework accurately captured the latent structural correlations between the fundus and the brain. The average difference between predicted and actual brain volumes was 61.36 cm³, with a relative error of 4.54%. When all of the clinical information (including age and sex, daily habits, cardiovascular factors, metabolic factors, and inflammatory factors) was encoded, the difference decreased to 53.89 cm³, with a relative error of 3.98%. Based on the brain magnetic resonance images synthesized from retinal fundus images, the volumes of brain tissues could be estimated with high accuracy. Conclusion: This study provides an innovative, accurate, and cost-effective approach to characterizing brain health status through readily accessible retinal fundus images.
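The reported RMSE and PSNR metrics are related in a standard way; a minimal sketch, assuming an 8-bit intensity peak of 255 (the study's actual peak value is not stated):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two arrays."""
    return np.sqrt(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """PSNR in dB: 20 * log10(peak / RMSE); infinite for identical inputs."""
    e = rmse(a, b)
    return np.inf if e == 0 else 20 * np.log10(peak / e)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0  # a constant error of 10 gray levels
print(round(psnr(ref, noisy), 2))  # 28.13
```

Note the inverse relationship: a lower RMSE directly yields a higher PSNR, so the two reported values measure the same error at different scales, while SSIM captures structural agreement separately.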
Abstract: Medical image segmentation is of critical importance in contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as vanishing gradients during deep network training and loss of spatial detail during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which innovatively improves upon U-KAN. Firstly, a residual network is introduced into the encoder to effectively mitigate vanishing gradients through cross-layer identity mappings, enhancing the modelling of complex pathological structures. Secondly, Efficient Local Attention (ELA) is integrated to suppress the loss of spatial detail during downsampling, thereby improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, with particularly outstanding performance on the TN-SCUI 2020 dataset, achieving an IoU of 88.18% and a Dice of 93.57%, improvements of 3.05% and 1.72% over the baseline model, respectively. These results demonstrate RE-UKAN's superior detail retention and boundary-recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for clinical precision segmentation.
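The cross-layer identity mapping that mitigates vanishing gradients is simply output = x + F(x): the identity path carries the signal (and its gradient) around the learned transform. A minimal sketch with a toy transform standing in for the residual block's learned layers:

```python
import numpy as np

def residual_block(x, transform):
    """Cross-layer identity mapping: output = x + F(x). With many such
    blocks stacked, gradients flow through the identity path even when
    F's own gradients are small, which is the vanishing-gradient
    mitigation the text refers to."""
    return x + transform(x)

# A toy F whose output is small, as for a nearly saturated layer.
f = lambda x: 0.01 * np.tanh(x)
x = np.ones(4)
y = residual_block(x, f)
print(np.allclose(y, x, atol=0.02))  # True — the identity path dominates
```

Even when F contributes almost nothing, the block's output (and Jacobian) stays close to the identity, so stacking many blocks does not shrink the signal the way a plain deep stack can.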