A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion issues, especially multi-focus image fusion. However, to achieve a satisfactory fusion result, obtaining a decision map is necessary and usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to produce a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
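The core architectural change above, replacing max-pooling with a convolution layer, can be illustrated with a minimal NumPy sketch: a stride-2 convolution downsamples a feature map exactly where a pooling layer would sit. The 2x2 averaging kernel here is a stand-in; in the actual ACNN the kernel weights are learned.

```python
import numpy as np

def strided_conv(fmap, kernel, stride=2):
    """Downsample a 2-D feature map with a strided convolution,
    the role max-pooling usually plays (valid padding)."""
    kh, kw = kernel.shape
    oh = (fmap.shape[0] - kh) // stride + 1
    ow = (fmap.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = fmap[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

fmap = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.full((2, 2), 0.25)        # learnable in a real ACNN; averaging here
pooled = strided_conv(fmap, kernel)   # 3x3 output, like 2x2 pooling with stride 2
```

Because the downsampling weights are trainable, gradients flow through this layer just as through any other convolution, which is what allows the layer-by-layer updates the abstract describes.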
In image fusion, measuring the local character and clarity of an image is called activity measurement. The traditional measurement is decided only by the high-frequency detail coefficients, which makes the energy expression insufficient to reflect the local clarity. Therefore, in this paper, a novel construction method for activity measurement is proposed. Firstly, it applies wavelet decomposition to the source images and then uses the high- and low-frequency wavelet coefficients jointly, taking the normalized variance as the weight of the high-frequency energy. Secondly, it calculates the measurement from the weighted energy, which can be used to measure the local character. Finally, the fusion coefficients are obtained. To illustrate the superiority of the new method, three kinds of assessment indicators are provided. The experimental results show that, compared with traditional methods, the new method reduces blurring and improves the indicator values, giving it clear advantages in practical applications.
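As an illustration of the weighted-energy idea, here is one plausible NumPy reading of the measure: a single-level Haar decomposition supplies the high- and low-frequency bands, and a normalized variance weights the high-frequency energy. The exact wavelet, weighting, and combination rule in the paper may differ.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def activity(img):
    """Weighted-energy activity measure: high-frequency energy weighted by
    the normalized variance of the low band, plus a low-frequency term
    (an assumed combination, not the paper's exact formula)."""
    ll, lh, hl, hh = haar2d(img)
    w = ll.var() / (ll.var() + 1.0)           # normalized variance weight
    high = np.sum(lh**2 + hl**2 + hh**2)
    low = np.sum(ll**2)
    return w * high + (1.0 - w) * low
```

For a flat patch the high-frequency bands vanish and the measure reduces to the low-band energy, which is the behavior the abstract argues a purely high-frequency measure cannot capture.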
With the continuous advancement of imaging sensors, a host of new issues have emerged. A major problem is how to find focused areas more accurately for multi-focus image fusion. Multi-focus image fusion extracts the focused information from the source images to construct a globally in-focus image that contains more information than any of the source images. In this paper, a novel multi-focus image fusion method based on the Laplacian operator and region optimization is proposed. The evaluation of image saliency based on the Laplacian operator can easily distinguish the focused region from the out-of-focus region, and the decision map obtained by Laplacian processing contains less residual information than other methods. To obtain a precise decision map, focus-area and edge optimization based on regional connectivity and edge detection are performed. Finally, the original images are fused through the decision map. Experimental results indicate that the proposed algorithm outperforms a series of other algorithms in terms of both subjective and objective evaluations.
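The Laplacian-based saliency and decision-map fusion steps can be sketched as follows (NumPy only; the region-connectivity and edge-optimization stages of the paper are omitted):

```python
import numpy as np

LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def lap_energy(img):
    """Per-pixel absolute Laplacian response (edge-replicated borders)."""
    p = np.pad(img, 1, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += LAP[di, dj] * p[di:di+img.shape[0], dj:dj+img.shape[1]]
    return np.abs(out)

def fuse(a, b):
    """Per pixel, take the source with the stronger Laplacian response."""
    decision = lap_energy(a) >= lap_energy(b)   # True -> take a
    return np.where(decision, a, b), decision
```

The decision map is a boolean image; out-of-focus (locally flat) regions give a zero Laplacian response, so the sharper source wins at every textured pixel.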
Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to degradation of the fused image. This paper presents a fusion framework based on block-matching and 3D (BM3D) multi-scale transform. The algorithm first divides the image into different blocks and groups these 2D image blocks into 3D arrays by their similarity. It then applies a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to convert the arrays into transform coefficients, and the resulting low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from a series of fused 3D image block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze some existing algorithms and the use of different transforms, e.g. the non-subsampled contourlet transform (NSCT) and the non-subsampled shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also obtains better objective evaluation criteria than state-of-the-art methods.
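The block-grouping step, collecting similar 2D blocks into a 3D array, can be sketched like this. This simplified version uses non-overlapping blocks and an L2 block distance; real BM3D grouping searches overlapping blocks within a local window.

```python
import numpy as np

def group_blocks(img, bsize=4, ref=(0, 0), k=3):
    """Split the image into non-overlapping bsize x bsize blocks and stack
    the k blocks most similar (L2 distance) to the reference block into a
    3-D array, mimicking the grouping step of BM3D-style fusion."""
    h, w = img.shape
    blocks, coords = [], []
    for i in range(0, h - bsize + 1, bsize):
        for j in range(0, w - bsize + 1, bsize):
            blocks.append(img[i:i+bsize, j:j+bsize])
            coords.append((i, j))
    ref_block = img[ref[0]:ref[0]+bsize, ref[1]:ref[1]+bsize]
    dists = [np.sum((b - ref_block) ** 2) for b in blocks]
    order = np.argsort(dists)[:k]
    group = np.stack([blocks[t] for t in order])   # shape (k, bsize, bsize)
    return group, [coords[t] for t in order]

img = np.zeros((8, 8))
img[4:8, 4:8] = 5.0                                # one dissimilar block
group, picked = group_blocks(img)
```

The stacked group is then what the 2D multi-scale transform (along each block) and the 1D transform (across the stacking dimension) operate on.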
Two key points of pixel-level multi-focus image fusion are the clarity measure and the pixel-coefficients fusion rule. Along with different improvements on these two points, various fusion schemes have been proposed in the literature. However, traditional clarity measures are not designed for compressive imaging measurements, which are mappings of the source scene through a random or quasi-random measurement matrix. This paper presents a novel, efficient multi-focus image fusion framework for compressive imaging sensor networks. Here, the clarity measure of the raw compressive measurements is obtained not from the random sampling data itself but from selected Hadamard coefficients, which can also be acquired efficiently from the compressive imaging system. Then, the compressive measurements of the different images are fused by the selected fusion rule. Finally, block-based CS coupled with iterative projection-based reconstruction is used to recover the fused image. Experimental results on commonly used test data demonstrate the effectiveness of the proposed method.
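The Hadamard-coefficient clarity idea can be illustrated with a small NumPy sketch: build a Sylvester-type Hadamard matrix, transform a patch, and take the energy of the largest non-DC coefficients. The selection rule here is an assumption; the paper's measure operates on compressive measurements rather than image patches.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def clarity(img, keep=8):
    """Clarity from selected 2-D Hadamard coefficients: energy of the
    largest non-DC coefficients (a stand-in for the paper's measure)."""
    n = img.shape[0]
    H = hadamard(n)
    C = H @ img @ H.T / n          # 2-D Hadamard transform
    coeffs = np.abs(C).ravel()
    coeffs[0] = 0.0                # drop the DC term
    return float(np.sort(coeffs)[-keep:].sum())
```

A constant (fully defocused, detail-free) patch scores zero, while any structure spreads energy into the non-DC coefficients, which is what makes the measure usable as a focus criterion.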
The three-dimensional (3D) model is of great significance for analyzing the performance of nonwovens. However, existing modelling methods could not reconstruct the 3D structure of nonwovens at low cost. A new method based on deep learning was proposed to reconstruct 3D models of nonwovens from multi-focus images. A convolutional neural network was trained to extract clear fibers from sequence images. Image processing algorithms were used to obtain the radius, central axis, and depth information of the fibers from the extraction results, and 3D models were built in 3D space from this information. Furthermore, self-developed algorithms optimized the central axis and depth of the fibers, making them more realistic and continuous. The proposed method can reconstruct 3D models of nonwovens conveniently and at lower cost.
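The depth-recovery idea, deciding per pixel in which frame of the multi-focus stack a fiber is sharpest, can be sketched with a simple local-variance focus measure; the paper uses a trained CNN for this sharpness decision.

```python
import numpy as np

def local_variance(img, r=1):
    """Variance in a (2r+1)^2 neighborhood as a per-pixel focus measure."""
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    windows = np.stack([p[i:i+h, j:j+w]
                        for i in range(2*r+1) for j in range(2*r+1)])
    return windows.var(axis=0)

def depth_from_focus(frames):
    """For each pixel, the index of the frame in which it is sharpest --
    a simplified stand-in for recovering fiber depth from the stack."""
    measures = np.stack([local_variance(f) for f in frames])
    return measures.argmax(axis=0)
```

The resulting index map is a coarse depth map: each frame of the stack corresponds to a known focal plane, so the winning index places the fiber at that depth.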
The aim of this paper is to solve the over-segmentation problem generated by the watershed segmentation algorithm and the unstable clarity judgment of small areas in image fusion. A multi-focus image fusion algorithm based on CNN segmentation and an algebraic multi-grid method (CNN-AMG) is proposed. Firstly, the CNN segmentation result was utilized to guide the merging of the regions generated by the watershed segmentation method. Then the clear regions were selected into a temporary fusion image, and the final fusion was performed according to a clarity evaluation index computed with the algebraic multi-grid method (AMG). The experimental results show that the fused image quality obtained by the CNN-AMG algorithm surpasses that of traditional fusion methods such as the DSIFT, CNN, ASR, and GFF fusion methods on several evaluation indexes.
We propose a multi-focus image fusion method in which a fully convolutional network for focus detection (FD-FCN) is constructed. To obtain more precise focus detection maps, we add skip layers in the network so that both detailed and abstract visual information is available when FD-FCN generates the maps. A new training dataset for the proposed network is constructed based on the CIFAR-10 dataset. The image fusion algorithm using FD-FCN contains three steps: focus maps are obtained using FD-FCN, a decision map is generated by applying a morphological process to the focus maps, and the images are fused using the decision map. We carry out several sets of experiments, and both subjective and objective assessments demonstrate the superiority of the proposed fusion method over state-of-the-art algorithms.
Multi-focus image fusion is an increasingly important component of image fusion and plays a key role in imaging. In this paper, we put forward a novel multi-focus image fusion method that employs fractional-order derivatives and intuitionistic fuzzy sets. The original image is decomposed into a base layer and a detail layer. Furthermore, a new fractional-order spatial frequency is built to reflect the clarity of the image. The fractional-order spatial frequency is used as the rule for detail-layer fusion, and intuitionistic fuzzy sets are introduced to fuse the base layers. Experimental results demonstrate that the proposed fusion method outperforms state-of-the-art methods for multi-focus image fusion.
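A fractional-order spatial frequency can be sketched by replacing the first differences of the classical spatial frequency with truncated Grünwald-Letnikov fractional differences. The truncation length and normalization below are assumptions, not the paper's exact definition.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients (-1)^k * C(alpha, k), k = 0..n-1."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k-1] * (k - 1 - alpha) / k
    return w

def frac_spatial_frequency(img, alpha=0.8, terms=3):
    """Classical spatial frequency with first differences replaced by a
    truncated GL fractional difference (illustrative sketch only)."""
    w = gl_weights(alpha, terms)
    # GL difference along rows and columns; crop to avoid wraparound
    dr = sum(w[k] * np.roll(img, k, axis=1) for k in range(terms))[:, terms-1:]
    dc = sum(w[k] * np.roll(img, k, axis=0) for k in range(terms))[terms-1:, :]
    return np.sqrt(np.mean(dr ** 2) + np.mean(dc ** 2))
```

For alpha = 1 the weights reduce to (1, -1, 0, ...), recovering the ordinary first-difference spatial frequency, which is a useful sanity check on the coefficient recurrence.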
Organoids possess immense potential for unraveling the intricate functions of human tissues and facilitating preclinical disease treatment. Their applications span from high-throughput drug screening to the modeling of complex diseases, with some even achieving clinical translation. Changes in the overall size, shape, boundary, and other morphological features of organoids provide a noninvasive method for assessing organoid drug sensitivity. However, the precise segmentation of organoids in bright-field microscopy images is made difficult by the complexity of the organoid morphology and interference, including overlapping organoids, bubbles, dust particles, and cell fragments. This paper introduces the precision organoid segmentation technique (POST), a deep-learning algorithm for segmenting challenging organoids under simple bright-field imaging conditions. Unlike existing methods, POST accurately segments each organoid and eliminates various artifacts encountered during organoid culturing and imaging. Furthermore, it is sensitive to and aligns with measurements of organoid activity in drug sensitivity experiments. POST is expected to be a valuable tool for drug screening using organoids owing to its capability of automatically and rapidly eliminating interfering substances, thereby streamlining the organoid analysis and drug screening process.
In the image fusion field, fusing infrared images (IRIs) and visible images (VIs) effectively is a key area. The differences between IRIs and VIs make it challenging to fuse both types into a high-quality image. Accordingly, it is necessary to efficiently combine the advantages of both images while overcoming their shortcomings. To handle this challenge, we developed an end-to-end IRI and VI fusion method based on frequency decomposition and enhancement. By applying concepts from frequency-domain analysis, we used a layering mechanism to better capture the salient thermal targets from the IRIs and the rich textural information from the VIs, significantly boosting the fusion quality and effectiveness. In addition, the backbone network combines Restormer Blocks and Dense Blocks: Restormer Blocks utilize global attention to extract shallow features, while Dense Blocks ensure the integration of shallow and deep features, thereby avoiding the loss of shallow attributes. Extensive experiments on the TNO and MSRS datasets demonstrated that the proposed method achieved state-of-the-art (SOTA) performance on various metrics: entropy (EN), mutual information (MI), standard deviation (SD), the structural similarity index measure (SSIM), fusion quality (Qabf), pixel-feature mutual information (FMI_pixel), and modified visual information fidelity (VIF_m).
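The base/detail layering mechanism can be sketched with a minimal two-band decomposition; a box-filter low-pass and hand-picked fusion rules stand in for the learned Restormer/Dense-Block network described above.

```python
import numpy as np

def box_blur(img, r=1):
    """Simple (2r+1)^2 mean filter used as the low-pass (base) extractor."""
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    windows = [p[i:i+h, j:j+w] for i in range(2*r+1) for j in range(2*r+1)]
    return np.mean(windows, axis=0)

def fuse_freq(ir, vi):
    """Two-band fusion sketch: average the low-frequency bases, keep the
    per-pixel larger-magnitude detail (assumed rules; the paper's network
    learns these decisions)."""
    ir_base, vi_base = box_blur(ir), box_blur(vi)
    ir_det, vi_det = ir - ir_base, vi - vi_base
    base = (ir_base + vi_base) / 2.0
    det = np.where(np.abs(ir_det) >= np.abs(vi_det), ir_det, vi_det)
    return base + det
```

Choosing the larger-magnitude detail per pixel keeps both the thermal edges from the IRI and the fine textures from the VI, which is the intuition behind the layering described in the abstract.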
Microscopy imaging is fundamental in analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep-learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes. These methods combine automated feature extraction with physics-informed postprocessing. Despite this progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
Schlieren imaging is a widely used technique to visualize the structure of a supersonic flow field, which is usually dominated by shock waves. Precise identification of shock waves in schlieren images provides critical insights for flow diagnostics, especially for supersonic inlets, whose performance is highly associated with that of the whole flight vehicle. However, conventional shock wave identification methods have limited accuracy in segmenting the shock wave. To overcome this limitation, we proposed an automated shock wave identification method (SW-Segment) that attains high-resolution, automatic shock wave segmentation by integrating correlation-based feature extraction with graph search. We demonstrated the efficacy of SW-Segment through the identification of shock waves in both simulated and experimentally obtained schlieren images. The results showed that SW-Segment achieved a shock wave identification accuracy of 95.24% on the numerical schlieren image and 88.33% on the experimental image, clearly demonstrating its reliability. SW-Segment holds broad applicability for shock wave detection in diverse schlieren imaging scenarios, offering robust data support for flow field analysis and supersonic flight design.
Background: Brain volume measurement serves as a critical approach for assessing brain health status. Considering the close biological connection between the eyes and brain, this study aims to investigate the feasibility of estimating brain volume through retinal fundus imaging integrated with clinical metadata, and to offer a cost-effective approach for assessing brain health. Methods: Based on clinical information, retinal fundus images, and neuroimaging data derived from a multicenter, population-based cohort study, the Kailuan Study, we proposed a cross-modal correlation representation (CMCR) network to elucidate the intricate co-degenerative relationships between the eyes and brain for 755 subjects. Specifically, individual clinical information, followed up for as long as 12 years, was encoded as a prompt to enhance the accuracy of brain volume estimation. Independent internal validation and external validation were performed to assess the robustness of the proposed model. Root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics were employed to quantitatively evaluate the quality of synthetic brain images derived from retinal imaging data. Results: The proposed framework yielded average RMSE, PSNR, and SSIM values of 98.23, 35.78 dB, and 0.64, respectively, significantly outperforming five other methods: the multi-channel variational autoencoder (mcVAE), Pixel-to-Pixel (Pixel2pixel), transformer-based U-Net (TransUNet), multi-scale transformer network (MT-Net), and residual vision transformer (ResViT). The two-dimensional (2D) and three-dimensional (3D) visualization results showed that the shape and texture of the synthetic brain images generated by the proposed method most closely resembled those of actual brain images. Thus, the CMCR framework accurately captured the latent structural correlations between the fundus and the brain. The average difference between predicted and actual brain volumes was 61.36 cm³, with a relative error of 4.54%. When all of the clinical information (including age and sex, daily habits, cardiovascular factors, metabolic factors, and inflammatory factors) was encoded, the difference decreased to 53.89 cm³, with a relative error of 3.98%. Based on the brain magnetic resonance images synthesized from retinal fundus images, the volumes of brain tissues could be estimated with high accuracy. Conclusion: This study provides an innovative, accurate, and cost-effective approach to characterizing brain health status through readily accessible retinal fundus images.
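Two of the image-quality metrics reported above, RMSE and PSNR, are straightforward to compute; SSIM is omitted here since it involves windowed local statistics.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(a, b)
    return float('inf') if e == 0 else 20.0 * np.log10(peak / e)
```

Note that PSNR depends on the assumed peak value (255 for 8-bit images), so reported dB figures are only comparable when the dynamic range convention matches.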
Medical image segmentation is of critical importance in the domain of contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as gradient vanishing during deep network training and spatial detail loss during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which innovatively improves upon U-KAN. Firstly, a residual network is introduced into the encoder to effectively mitigate gradient vanishing through cross-layer identity mappings, thus enhancing modelling capabilities for complex pathological structures. Secondly, Efficient Local Attention (ELA) is integrated to suppress spatial detail loss during downsampling, thereby improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, with particularly outstanding performance on the TN-SCUI 2020 dataset, achieving an IoU of 88.18% and a Dice of 93.57%. Compared with the baseline model, it achieves improvements of 3.05% and 1.72%, respectively. These results fully demonstrate RE-UKAN's superior detail retention capability and boundary recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for clinical precision segmentation.
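The IoU and Dice scores reported above are computed from binary masks as follows; note that Dice = 2·IoU/(1+IoU), so the two metrics always move together.

```python
import numpy as np

def iou_dice(pred, target):
    """Intersection-over-union and Dice coefficient for boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0       # empty masks count as perfect
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```

For example, two single-row masks that overlap in one of their two foreground pixels give IoU = 1/3 and Dice = 1/2, consistent with the identity above.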
The historical image of Ouyang Xiu constructed during the Song Dynasty evolved from a multifaceted portrayal that balanced his political and literary achievements into a singular cultural symbol. In the Northern Song Dynasty, writings by Ouyang Xiu's family and epitaphs by his colleagues crafted a balanced narrative emphasizing both his official duties and literary merits, thus constructing a dual image of him as a principled remonstrator and a literary master. In the Southern Song Dynasty, official historiography gradually eroded his complex persona as a political reformer by selectively trimming political disputes and emphasizing his literary lineage, ultimately establishing him as a cultural exemplar beyond factional strife. Throughout this evolution of historical writing, Ouyang Xiu's sharpness as a remonstrator was gradually obscured in historical texts, while his image as a literary master, revered by all, became firmly established. The reshaping of Ouyang Xiu's image in historical writings across the Northern and Southern Song dynasties not only reflects the logic of selecting scholar-official role models under the influence of official ideology but also reveals the inherent pattern whereby individual distinctiveness fades into symbolic construction in historical writing.
Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright. However, in practical applications this technology faces various problems, such as severe image distortion, inaccurate localization of tampered regions, and difficulty in recovering content. Given these shortcomings, a fragile image watermarking algorithm for blind tampering detection and content self-recovery is proposed. A multi-feature watermarking authentication code (AC) is constructed using the texture feature of local binary patterns (LBP), the direct coefficient of the discrete cosine transform (DCT), and the contrast feature of the gray-level co-occurrence matrix (GLCM) for detecting tampered regions, and a recovery code (RC) is designed from the average grayscale value of the pixels in each image block for recovering the tampered content. The optimal pixel adjustment process (OPAP) and least significant bit (LSB) algorithms are used to embed the recovery code and authentication code into the image in a staggered manner. When checking the integrity of the image, an authentication code comparison method and a threshold judgment method are used to perform two rounds of tampering detection and to blindly recover the tampered content. Experimental results show that the algorithm has good transparency, strong blind detection, and self-recovery performance against four types of malicious attacks and some conventional signal processing operations. When resisting copy-paste, text addition, cropping, and vector quantization attacks under a tampering rate (TR) of 10%, the average tampering detection rate reaches 94.09%, and the peak signal-to-noise ratios (PSNR) of the watermarked image and the recovered image are greater than 41.47 dB and 40.31 dB, respectively, demonstrating its excellent advantages compared with other related algorithms of recent years.
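The LSB embedding step can be sketched as follows. This is plain sequential LSB substitution; the paper staggers the AC and RC across blocks and refines the result with OPAP.

```python
import numpy as np

def embed_lsb(img, bits):
    """Write watermark bits into the least significant bit of each pixel
    (row-major order); a minimal stand-in for staggered LSB/OPAP embedding."""
    flat = img.astype(np.uint8).ravel().copy()
    n = min(len(bits), flat.size)
    flat[:n] = (flat[:n] & 0xFE) | np.asarray(bits[:n], dtype=np.uint8)
    return flat.reshape(img.shape)

def extract_lsb(img, n):
    """Read back the first n embedded bits."""
    return (img.ravel()[:n] & 1).astype(np.uint8)
```

Each pixel changes by at most 1 gray level, which is why LSB-based schemes keep the watermarked image's PSNR high (the "transparency" property claimed above).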
Compact size, high brightness, and wide field of view (FOV) are key requirements for long-wave infrared imagers used in military surveillance or night navigation. However, to meet the imaging requirements of high resolution and wide FOV, infrared optical systems often adopt complex optical lens groups, which increase the size and weight of the optical system. In this paper, a strategy based on wavefront coding (WFC) is proposed to design a compact wide-FOV infrared imager. A cubic phase mask is inserted into the pupil plane of the infrared imager to correct the aberration. The simulated results show that the WFC infrared imager has good imaging quality over a wide FOV of ±16°. In addition, the WFC infrared imager achieves compactness with its 40 mm × 40 mm × 40 mm size. A fast focal ratio of 1 combined with an entrance pupil diameter of 25 mm ensures brightness. This work is of significance for designing compact wide-FOV infrared imagers.
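The wavefront-coding element, a cubic phase mask in the pupil plane, can be simulated in a few lines: build the pupil function and compute the incoherent PSF as the squared magnitude of its Fourier transform. The grid size and phase strength below are assumed values for illustration, not the paper's design parameters.

```python
import numpy as np

def cubic_phase_pupil(n=64, alpha=20.0):
    """Circular pupil with a cubic phase mask exp(i*alpha*(x^3 + y^3)),
    the wavefront-coding element described above (alpha assumed)."""
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2 <= 1.0).astype(float)
    return aperture * np.exp(1j * alpha * (X**3 + Y**3))

def psf(pupil):
    """Incoherent PSF: normalized squared magnitude of the pupil's FFT."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    p = np.abs(field) ** 2
    return p / p.sum()
```

The cubic phase spreads the PSF into a form that is nearly invariant to defocus, which is what lets a single digital deconvolution filter restore sharp images across the wide FOV.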
A large-scale view of the magnetospheric cusp is expected to be obtained by the Soft X-ray Imager (SXI) onboard the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE). However, it is challenging to trace the three-dimensional cusp boundary from a two-dimensional X-ray image because the detected X-ray signals will be integrated along the line of sight. In this work, a global magnetohydrodynamic code was used to simulate the X-ray images and photon count images, assuming an interplanetary magnetic field with a pure Bz component. The assumption of an elliptic cusp boundary at a given altitude was used to trace the equatorward and poleward boundaries of the cusp from a simulated X-ray image. The average discrepancy was less than 0.1 RE. To reduce the influence of instrument effects and cosmic X-ray backgrounds, image denoising was considered before applying the method above to SXI photon count images. The cusp boundaries were reasonably reconstructed from the noisy X-ray image.
Funding (ACNN-based multi-focus fusion): supported by the National Natural Science Foundation of China (No. 61174193).
Funding (wavelet activity measurement): sponsored by the National Natural Science Foundation of China (Grant Nos. 61275010, 61201237) and the Fundamental Research Funds for the Central Universities (Grant Nos. HEUCFZ1129, HEUCF120805).
Funding (BM3D fusion framework): supported by the National Natural Science Foundation of China (61572063, 61401308), the Fundamental Research Funds for the Central Universities (2016YJS039), the Natural Science Foundation of Hebei Province (F2016201142, F2016201187), the Natural Social Foundation of Hebei Province (HB15TQ015), the Science Research Project of Hebei Province (QN2016085, ZC2016040), and the Natural Science Foundation of Hebei University (2014-303).
文摘Fusion methods based on multi-scale transforms have become the mainstream of the pixel-level image fusion. However,most of these methods cannot fully exploit spatial domain information of source images, which lead to the degradation of image.This paper presents a fusion framework based on block-matching and 3D(BM3D) multi-scale transform. The algorithm first divides the image into different blocks and groups these 2D image blocks into 3D arrays by their similarity. Then it uses a 3D transform which consists of a 2D multi-scale and a 1D transform to transfer the arrays into transform coefficients, and then the obtained low-and high-coefficients are fused by different fusion rules. The final fused image is obtained from a series of fused 3D image block groups after the inverse transform by using an aggregation process. In the experimental part, we comparatively analyze some existing algorithms and the using of different transforms, e.g. non-subsampled Contourlet transform(NSCT), non-subsampled Shearlet transform(NSST), in the 3D transform step. Experimental results show that the proposed fusion framework can not only improve subjective visual effect, but also obtain better objective evaluation criteria than state-of-the-art methods.
Abstract: Two key points of pixel-level multi-focus image fusion are the clarity measure and the pixel-coefficient fusion rule. With various improvements on these two points, many fusion schemes have been proposed in the literature. However, traditional clarity measures are not designed for compressive imaging measurements, which are projections of the source scene through a random or nearly random measurement matrix. This paper presents a novel, efficient multi-focus image fusion framework for compressive imaging sensor networks. Here the clarity measure of the raw compressive measurements is obtained not from the random sampling data itself but from selected Hadamard coefficients, which can also be acquired efficiently from a compressive imaging system. The compressive measurements of the different images are then fused by a selection-based fusion rule. Finally, block-based CS coupled with iterative projection-based reconstruction is used to recover the fused image. Experimental results on commonly used test data demonstrate the effectiveness of the proposed method.
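A minimal sketch of a Hadamard-coefficient clarity measure in the spirit described above: the energy of a block's non-DC (AC) Hadamard coefficients serves as its clarity score, since a sharper block spreads more energy away from the mean term. The Sylvester construction, the 8×8 block size, and the choice to sum all AC energies are assumptions for illustration.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_clarity(block):
    """Clarity score: energy of the non-DC 2D Hadamard coefficients."""
    n = block.shape[0]
    H = hadamard(n)
    coeffs = H @ block @ H.T / n   # 2D Hadamard transform
    coeffs[0, 0] = 0.0             # discard the DC (mean) term
    return float(np.sum(coeffs ** 2))

rng = np.random.default_rng(0)
sharp = rng.random((8, 8))                 # textured (in-focus) block
flat = np.full((8, 8), sharp.mean())       # featureless (defocused) stand-in
```

A constant block has all of its energy in the DC coefficient, so its AC-energy clarity is zero, while the textured block scores strictly higher.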
Funding: National Natural Science Foundation of China (No. 61771123).
Abstract: The three-dimensional (3D) model is of great significance for analyzing the performance of nonwovens. However, existing modelling methods cannot reconstruct the 3D structure of nonwovens at low cost. A new method based on deep learning is proposed to reconstruct 3D models of nonwovens from multi-focus images. A convolutional neural network was trained to extract clear fibers from sequence images. Image processing algorithms were used to obtain the radius, central axis, and depth information of the fibers from the extraction results, and 3D models were built in 3D space from this information. Furthermore, self-developed algorithms optimized the central axis and depth of the fibers, making them more realistic and continuous. The proposed method can reconstruct 3D models of nonwovens conveniently and at low cost.
Abstract: The aim of this paper is to solve the over-segmentation produced by the watershed segmentation algorithm and the unstable clarity judgments caused by small regions in image fusion. A multi-focus image fusion algorithm based on CNN segmentation and the algebraic multi-grid method (CNN-AMG) is proposed. Firstly, the CNN segmentation result is used to guide the merging of the regions generated by the watershed segmentation method. Then the clear regions are selected into a temporary fusion image, and the final fusion is performed according to a clarity evaluation index computed with the algebraic multi-grid method (AMG). The experimental results show that the fused image quality obtained by the CNN-AMG algorithm surpasses that of traditional fusion methods such as the DSIFT, CNN, ASR, and GFF fusion methods on several evaluation indexes.
Funding: Project supported by the National Natural Science Foundation of China (No. 61801190), the Natural Science Foundation of Jilin Province, China (No. 20180101055JC), and the Outstanding Young Talent Foundation of Jilin Province, China (No. 20180520029JH).
Abstract: We propose a multi-focus image fusion method in which a fully convolutional network for focus detection (FD-FCN) is constructed. To obtain more precise focus detection maps, we add skip layers in the network so that both detailed and abstract visual information is available when FD-FCN generates the maps. A new training dataset for the proposed network is constructed based on the CIFAR-10 dataset. The image fusion algorithm using FD-FCN consists of three steps: focus maps are obtained using FD-FCN, a decision map is generated by applying morphological processing to the focus maps, and the images are fused using the decision map. We carry out several sets of experiments, and both subjective and objective assessments demonstrate the superiority of the proposed fusion method to state-of-the-art algorithms.
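The second step of the pipeline, turning raw focus maps into a decision map with morphological processing, typically removes isolated misclassifications and fills small holes. A hedged sketch follows; the 3×3 structuring element and the opening-then-closing order are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def clean_decision_map(focus_map, size=3):
    """Morphological opening removes isolated misclassified pixels;
    closing then fills small holes in the binary focus map."""
    st = np.ones((size, size), dtype=bool)
    cleaned = binary_opening(focus_map, structure=st)
    return binary_closing(cleaned, structure=st)

raw = np.zeros((20, 20), dtype=bool)
raw[:, :10] = True      # left half detected as in-focus
raw[5, 15] = True       # isolated false positive
raw[10, 3] = False      # one-pixel hole in the true focus region
cleaned = clean_decision_map(raw)
```

After cleaning, the stray pixel is gone and the hole is filled, so the decision map matches the true focused region more closely.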
Abstract: Multi-focus image fusion is an increasingly important component of image fusion, and it plays a key role in imaging. In this paper, we put forward a novel multi-focus image fusion method that employs fractional-order derivatives and intuitionistic fuzzy sets. The original image is decomposed into a base layer and a detail layer. Furthermore, a new fractional-order spatial frequency is constructed to reflect the clarity of the image. The fractional-order spatial frequency is used as the rule for fusing detail layers, and intuitionistic fuzzy sets are introduced to fuse the base layers. Experimental results demonstrate that the proposed fusion method outperforms the state-of-the-art methods for multi-focus image fusion.
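The classic spatial frequency combines row and column first-difference energies, SF = sqrt(RF² + CF²); a fractional-order version replaces the first difference with a Grünwald–Letnikov fractional difference. A minimal sketch under an assumed order `v` and truncation length `n` (both illustrative; the paper's exact definition may differ):

```python
import numpy as np

def gl_coeffs(v, n):
    """First n Grunwald-Letnikov coefficients (-1)^k * C(v, k) of order v."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (1 - (v + 1) / k))  # recurrence for (-1)^k C(v, k)
    return np.array(c)

def fractional_spatial_frequency(img, v=0.5, n=3):
    """Spatial frequency built from v-th order fractional differences
    instead of the usual first-order differences."""
    c = gl_coeffs(v, n)
    img = np.asarray(img, dtype=np.float64)
    # Backward differences along rows and columns; np.roll wraps around,
    # so the first n-1 samples on each axis are discarded.
    row_d = sum(c[k] * np.roll(img, k, axis=1) for k in range(n))[:, n - 1:]
    col_d = sum(c[k] * np.roll(img, k, axis=0) for k in range(n))[n - 1:, :]
    rf = np.sqrt(np.mean(row_d ** 2))   # row frequency
    cf = np.sqrt(np.mean(col_d ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

rng = np.random.default_rng(0)
sharp = rng.random((16, 16))
blurred = 0.25 * (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
                  + np.roll(np.roll(sharp, 1, 0), 1, 1))
```

With v = 1 the coefficients reduce to the ordinary first difference [1, -1, 0], recovering the classic spatial frequency; a blurred block scores lower than a sharp one, which is what a detail-layer fusion rule needs.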
Funding: supported by the National Key R&D Program of China (No. 2022YFC2504403), the National Natural Science Foundation of China (No. 62172202), the Experiment Project of China Manned Space Program (No. HYZHXM01019), and the Fundamental Research Funds for the Central Universities from Southeast University (No. 3207032101C3).
Abstract: Organoids possess immense potential for unraveling the intricate functions of human tissues and facilitating preclinical disease treatment. Their applications span from high-throughput drug screening to the modeling of complex diseases, with some even achieving clinical translation. Changes in the overall size, shape, boundary, and other morphological features of organoids provide a noninvasive method for assessing organoid drug sensitivity. However, the precise segmentation of organoids in bright-field microscopy images is made difficult by the complexity of organoid morphology and by interference, including overlapping organoids, bubbles, dust particles, and cell fragments. This paper introduces the precision organoid segmentation technique (POST), which is a deep-learning algorithm for segmenting challenging organoids under simple bright-field imaging conditions. Unlike existing methods, POST accurately segments each organoid and eliminates various artifacts encountered during organoid culturing and imaging. Furthermore, it is sensitive to and aligns with measurements of organoid activity in drug-sensitivity experiments. POST is expected to be a valuable tool for drug screening using organoids owing to its capability of automatically and rapidly eliminating interfering substances, thereby streamlining organoid analysis and the drug-screening process.
Funding: funded by the Anhui Province University Key Science and Technology Project (2024AH053415), the Anhui Province University Major Science and Technology Project (2024AH040229), the Talent Research Initiation Fund Project of Tongling University (2024tlxyrc019), the Tongling University School-Level Scientific Research Project (2024tlxyptZD07), the University Synergy Innovation Program of Anhui Province (GXXT-2023-050), and the Tongling City Science and Technology Major Special Project (Unveiling and Commanding Model) (200401JB004).
Abstract: In the image fusion field, fusing infrared images (IRIs) and visible images (VIs) well is a key problem. The differences between IRIs and VIs make it challenging to fuse the two into a single high-quality image, so the advantages of both must be combined efficiently while their shortcomings are overcome. To handle this challenge, we developed an end-to-end IRI and VI fusion method based on frequency decomposition and enhancement. Applying concepts from frequency-domain analysis, a layering mechanism is used to better capture the salient thermal targets from the IRIs and the rich textural information from the VIs, significantly boosting image fusion quality and effectiveness. In addition, the backbone network combines Restormer blocks and Dense blocks: Restormer blocks utilize global attention to extract shallow features, while Dense blocks ensure the integration of shallow and deep features, thereby avoiding the loss of shallow attributes. Extensive experiments on the TNO and MSRS datasets demonstrated that the suggested method achieves state-of-the-art (SOTA) performance in various metrics: entropy (EN), mutual information (MI), standard deviation (SD), the structural similarity index measure (SSIM), fusion quality (Qabf), pixel feature mutual information (FMI_(pixel)), and modified visual information fidelity (VIF_(m)).
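The frequency-decomposition idea, splitting each source into a low-frequency base layer and a high-frequency detail layer and fusing the layers with different rules, can be sketched with a simple Gaussian two-scale decomposition. The Gaussian low-pass, the averaging rule for base layers, and the max-absolute rule for detail layers are stand-ins for the paper's learned network, used only to illustrate the layering mechanism.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=2.0):
    """Two-scale decomposition: a low-frequency base layer (Gaussian
    low-pass) plus the high-frequency detail residual."""
    base = gaussian_filter(np.asarray(img, dtype=np.float64), sigma)
    return base, img - base

def fuse_layers(img_ir, img_vi, sigma=2.0):
    """Fuse base layers by averaging and detail layers by the max-abs rule."""
    b_ir, d_ir = decompose(img_ir, sigma)
    b_vi, d_vi = decompose(img_vi, sigma)
    base = 0.5 * (b_ir + b_vi)
    detail = np.where(np.abs(d_ir) >= np.abs(d_vi), d_ir, d_vi)
    return base + detail

rng = np.random.default_rng(0)
img_ir = rng.random((24, 24))
img_vi = rng.random((24, 24))
fused = fuse_layers(img_ir, img_vi)
```

The decomposition is exact by construction (base + detail reconstructs the input), which is what makes per-layer fusion rules well defined.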
Funding: financially supported by the Open Project Program of Wuhan National Laboratory for Optoelectronics (No. 2022WNLOKF009), the National Natural Science Foundation of China (No. 62475216), the Key Research and Development Program of Shaanxi (No. 2024GH-ZDXM-37), the Fujian Provincial Natural Science Foundation of China (No. 2024J01060), the Startup Program of XMU, and the Fundamental Research Funds for the Central Universities.
Abstract: Microscopy imaging is fundamental in analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep-learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes. These methods combine automated feature extraction with physics-informed postprocessing. Despite this progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 12402336, U20A2070, and 12025202), the Natural Science Foundation of Jiangsu Province (Grant No. BK20230876), the National High-Level Talent Project (Grant No. YQR23069), the Key Laboratory of Intake and Exhaust Technology, Ministry of Education (Grant No. CEPE2024015), and the Key Laboratory of Mechanics and Control for Aerospace Structures (Nanjing University of Aeronautics and Astronautics) (Grant No. MCAS-I-0325K01).
Abstract: Schlieren imaging is a widely used technique for visualizing the structure of a supersonic flow field, which is usually dominated by shock waves. Precise identification of shock waves in schlieren images provides critical insights for flow diagnostics, especially for the supersonic inlet, whose performance strongly affects that of the whole flight vehicle. However, conventional shock wave identification methods have limited accuracy in segmenting shock waves. To overcome this limitation, we propose an automated shock wave identification method (SW-Segment) that attains high-resolution, automatic shock wave segmentation by integrating correlation-based feature extraction with graph search. We demonstrated the efficacy of SW-Segment on both simulated and experimentally obtained schlieren images. The results show that SW-Segment achieved a shock wave identification accuracy of 95.24% on the numerical schlieren image and 88.33% on the experimental image, clearly demonstrating its reliability. SW-Segment holds broad applicability for shock wave detection in diverse schlieren imaging scenarios, offering robust data support for flow field analysis and supersonic flight design.
Funding: supported by the National Natural Science Foundation of China (62522119 and 62372358), the Beijing Natural Science Foundation (7242267), the Beijing Scholars Program ([2015]160), the Natural Science Basic Research Program of Shaanxi (2023-JC-QN-0719), and the Guangdong Basic and Applied Basic Research Foundation (2022A1515110453).
Abstract: Background: Brain volume measurement serves as a critical approach for assessing brain health status. Considering the close biological connection between the eyes and brain, this study aims to investigate the feasibility of estimating brain volume from retinal fundus imaging integrated with clinical metadata, and to offer a cost-effective approach to assessing brain health. Methods: Based on clinical information, retinal fundus images, and neuroimaging data derived from a multicenter, population-based cohort study, the Kai Luan Study, we proposed a cross-modal correlation representation (CMCR) network to elucidate the intricate co-degenerative relationships between the eyes and brain for 755 subjects. Specifically, individual clinical information, followed up for as long as 12 years, was encoded as a prompt to enhance the accuracy of brain volume estimation. Independent internal validation and external validation were performed to assess the robustness of the proposed model. Root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics were employed to quantitatively evaluate the quality of synthetic brain images derived from retinal imaging data. Results: The proposed framework yielded average RMSE, PSNR, and SSIM values of 98.23, 35.78 dB, and 0.64, respectively, significantly outperforming 5 other methods: the multi-channel variational autoencoder (mcVAE), Pixel-to-Pixel (Pixel2pixel), transformer-based U-Net (TransUNet), multi-scale transformer network (MT-Net), and residual vision transformer (ResViT). The two- (2D) and three-dimensional (3D) visualization results showed that the shape and texture of the synthetic brain images generated by the proposed method most closely resembled those of actual brain images. Thus, the CMCR framework accurately captured the latent structural correlations between the fundus and the brain. The average difference between predicted and actual brain volumes was 61.36 cm^3, with a relative error of 4.54%. When all of the clinical information (including age and sex, daily habits, cardiovascular factors, metabolic factors, and inflammatory factors) was encoded, the difference decreased to 53.89 cm^3, with a relative error of 3.98%. Based on the brain magnetic resonance images synthesized from retinal fundus images, the volumes of brain tissues could be estimated with high accuracy. Conclusion: This study provides an innovative, accurate, and cost-effective approach to characterizing brain health status through readily accessible retinal fundus images.
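The RMSE and PSNR figures quoted above follow the standard definitions; a minimal sketch of both metrics (the peak value of 255 is an assumption for 8-bit images):

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    return float(20 * np.log10(peak / rmse(x, y)))

pred = np.zeros((4, 4))
truth = np.full((4, 4), 255.0)
err = rmse(pred, truth)       # worst case over the 8-bit range
quality = psnr(pred, truth)   # 0 dB at maximum error
```

SSIM is a structural metric with a more involved windowed formula and is omitted from this sketch.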
Abstract: Medical image segmentation is of critical importance in contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as gradient vanishing during deep network training and spatial detail loss during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which improves upon U-KAN in two ways. Firstly, a residual network is introduced into the encoder to mitigate gradient vanishing through cross-layer identity mappings, enhancing the modelling of complex pathological structures. Secondly, Efficient Local Attention (ELA) is integrated to suppress spatial detail loss during downsampling, improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, with particularly strong performance on the TN-SCUI 2020 dataset, achieving an IoU of 88.18% and a Dice of 93.57%, improvements of 3.05% and 1.72% over the baseline model, respectively. These results demonstrate RE-UKAN's superior detail retention and boundary recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for precise clinical segmentation.
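The IoU and Dice figures reported above are the standard overlap metrics for binary segmentation masks; a minimal sketch of both, including the identity Dice = 2·IoU/(1 + IoU) that links them:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union)

def dice(pred, target):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return float(2 * inter / (pred.sum() + target.sum()))

# Toy masks: one overlapping pixel out of three covered in total.
target = np.array([[1, 1, 0, 0]], dtype=bool)
pred = np.array([[0, 1, 1, 0]], dtype=bool)
```

On these toy masks IoU is 1/3 and Dice is 0.5, consistent with the identity above; Dice always weights the overlap more heavily than IoU.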
Funding: an initial outcome of the Research on the Interactive Relationship Between Biographies and Epitaphs in Ancient China, a project (ID: 24BZW023) supported by the National Social Science Fund of China.
Abstract: The historical image of Ouyang Xiu constructed during the Song Dynasty evolved from a multifaceted portrayal that balanced his political and literary achievements into a singular cultural symbol. In the Northern Song Dynasty, writings by Ouyang Xiu's family and epitaphs by his colleagues crafted a balanced narrative emphasizing both his official duties and literary merits, thus constructing a dual image of him as a principled remonstrator and a literary master. In the Southern Song Dynasty, official historiography gradually eroded his complex persona as a political reformer by selectively trimming political disputes and emphasizing his literary lineage, ultimately establishing him as a cultural exemplar beyond factional strife. Throughout this evolution of historical writing, Ouyang Xiu's sharpness as a remonstrator was gradually obscured in historical texts, while his image as a literary master, revered by all, became firmly established. The reshaping of Ouyang Xiu's image in historical writings across the Northern and Southern Song dynasties not only reflects the logic of selecting scholar-official role models under the influence of official ideology but also reveals the inherent pattern whereby individual distinctiveness fades into symbolic construction in historical writing.
Funding: supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province, China (Grant No. SJCX24_1332), the Jiangsu Province Education Science Planning Project in 2024 (Grant No. B-b/2024/01/122), and the High-Level Talent Scientific Research Foundation of Jinling Institute of Technology, China (Grant No. jit-b-201918).
Abstract: Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright. However, in practical applications this technology faces various problems, such as severe image distortion, inaccurate localization of tampered regions, and difficulty in recovering content. Given these shortcomings, a fragile image watermarking algorithm for blind tampering detection and content self-recovery is proposed. A multi-feature watermarking authentication code (AC) is constructed, using the texture feature of local binary patterns (LBP), the direct coefficient of the discrete cosine transform (DCT), and the contrast feature of the gray level co-occurrence matrix (GLCM), to detect tampered regions, and a recovery code (RC) is designed from the average grayscale value of the pixels in each image block to recover the tampered content. The optimal pixel adjustment process (OPAP) and least significant bit (LSB) algorithms are used to embed the recovery code and authentication code into the image in a staggered manner. When checking the integrity of the image, an authentication code comparison method and a threshold judgment method perform two rounds of tampering detection, and the tampered content is recovered blindly. Experimental results show that the algorithm provides good transparency, strong blind tampering detection, and self-recovery performance against four types of malicious attacks and some conventional signal processing operations. When resisting copy-paste, text addition, cropping, and vector quantization at a tampering rate (TR) of 10%, the average tampering detection rate reaches 94.09%, and the peak signal-to-noise ratios (PSNR) of the watermarked image and the recovered image are greater than 41.47 dB and 40.31 dB, respectively, demonstrating clear advantages over other related algorithms from recent years.
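Of the two embedding schemes named above, the LSB step is easy to illustrate: each watermark bit replaces the least significant bit of a pixel, so embedding changes a pixel value by at most 1 (hence the high transparency) and extraction is blind, needing no original image. A minimal sketch (the OPAP refinement and the AC/RC construction are omitted):

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Replace the least significant bit of each pixel with a watermark bit."""
    return (pixels & np.uint8(0xFE)) | bits.astype(np.uint8)

def extract_lsb(pixels):
    """Blind extraction: read the LSBs back out, no original image needed."""
    return pixels & np.uint8(1)

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=16, dtype=np.uint8)  # cover pixels
bits = rng.integers(0, 2, size=16, dtype=np.uint8)      # watermark bits
marked = embed_lsb(pixels, bits)
```

In the full algorithm the AC and RC payloads would be interleaved across blocks in this staggered fashion, so that the recovery data for a block survives tampering of that block itself.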
Abstract: Compact size, high brightness, and a wide field of view (FOV) are key requirements for long-wave infrared imagers used in military surveillance and night navigation. However, to meet the imaging requirements of high resolution and a wide FOV, infrared optical systems often adopt complex optical lens groups, which increase the size and weight of the system. In this paper, a strategy based on wavefront coding (WFC) is proposed to design a compact wide-FOV infrared imager. A cubic phase mask is inserted into the pupil plane of the infrared imager to correct the aberration. The simulated results show that the WFC infrared imager has good imaging quality over a wide FOV of ±16°. In addition, the WFC infrared imager achieves compactness with its 40 mm×40 mm×40 mm size, and a fast focal ratio of 1 combined with an entrance pupil diameter of 25 mm ensures brightness. This work is of significance for designing compact wide-FOV infrared imagers.
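The cubic phase mask mentioned above is the classic wavefront-coding element, with pupil phase φ(x, y) = α(x³ + y³) over the normalized aperture. A hedged sketch of generating such a pupil function (the grid size, the α value, and the circular aperture are illustrative assumptions, not the paper's design values):

```python
import numpy as np

def cubic_phase_mask(n, alpha):
    """Complex pupil function with the cubic wavefront-coding phase
    phi(x, y) = alpha * (x^3 + y^3) over a circular aperture."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    phase = alpha * (X ** 3 + Y ** 3)
    pupil = (X ** 2 + Y ** 2) <= 1.0    # circular aperture support
    return np.where(pupil, np.exp(1j * phase), 0.0)

mask = cubic_phase_mask(64, alpha=20.0)
```

The mask only modulates phase (unit modulus inside the pupil, zero outside), which is what makes the point-spread function nearly invariant to defocus before digital decoding.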
Funding: funded by the National Natural Science Foundation of China (NNSFC) under Grant Numbers 42322408, 42188101, and 42441809. Additional support was provided by the Climbing Program of the National Space Science Center (NSSC, Grant No. E4PD3005), as well as the Specialized Research Fund for State Key Laboratories of China.
Abstract: A large-scale view of the magnetospheric cusp is expected to be obtained by the Soft X-ray Imager (SXI) onboard the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE). However, it is challenging to trace the three-dimensional cusp boundary from a two-dimensional X-ray image because the detected X-ray signals are integrated along the line of sight. In this work, a global magnetohydrodynamic code was used to simulate the X-ray images and photon count images, assuming an interplanetary magnetic field with a pure Bz component. Under the assumption of an elliptic cusp boundary at a given altitude, the equatorward and poleward boundaries of the cusp were traced from a simulated X-ray image, with an average discrepancy of less than 0.1 RE. To reduce the influence of instrument effects and cosmic X-ray backgrounds, image denoising was applied before the method was used on SXI photon count images. The cusp boundaries were reasonably reconstructed from the noisy X-ray image.