The diversity of tree species and the complexity of land use in cities create challenging issues for tree species classification.The combination of deep learning methods and RGB optical images obtained by unmanned aer...The diversity of tree species and the complexity of land use in cities create challenging issues for tree species classification.The combination of deep learning methods and RGB optical images obtained by unmanned aerial vehicles(UAVs) provides a new research direction for urban tree species classification.We proposed an RGB optical image dataset with 10 urban tree species,termed TCC10,which is a benchmark for tree canopy classification(TCC).TCC10 dataset contains two types of data:tree canopy images with simple backgrounds and those with complex backgrounds.The objective was to examine the possibility of using deep learning methods(AlexNet,VGG-16,and ResNet-50) for individual tree species classification.The results of convolutional neural networks(CNNs) were compared with those of K-nearest neighbor(KNN) and BP neural network.Our results demonstrated:(1) ResNet-50 achieved an overall accuracy(OA) of 92.6% and a kappa coefficient of 0.91 for tree species classification on TCC10 and outperformed AlexNet and VGG-16.(2) The classification accuracy of KNN and BP neural network was less than70%,while the accuracy of CNNs was relatively higher.(3)The classification accuracy of tree canopy images with complex backgrounds was lower than that for images with simple backgrounds.For the deciduous tree species in TCC10,the classification accuracy of ResNet-50 was higher in summer than that in autumn.Therefore,the deep learning is effective for urban tree species classification using RGB optical images.展开更多
It is difficult to balance local details and global distribution using a single source image in marine target detection of a large scene.To solve this problem,a technique based on the fusion of optical image and synth...It is difficult to balance local details and global distribution using a single source image in marine target detection of a large scene.To solve this problem,a technique based on the fusion of optical image and synthetic aperture radar(SAR)image for the extraction of sea ice is proposed in this paper.The Band 2(B2 image of Sentinel-2(S2 in the research area is selected as optical image data.Preprocessing on the optical image,such as resampling,projection transformation and format conversion,are conducted to the S2 dataset before fusion.Imaging characteristics of the sea ice have been analyzed,and a new deep learning(DL)model,OceanTDL5,is built to detect sea ices.The fusion of the Sentinel-1(S1 and S2 images is realized by solving the optimal pixel values based on deriving Poisson Equation.The experimental results indicate that the use of a fused image improves the accuracy of sea ice detection compared with the use of a single data source.The fused image has richer spatial details and a clearer texture compared with the original optical image,and its material sense and color are more abundant.展开更多
Some existing image encryption schemes use simple low-dimensional chaotic systems, which makes the algorithms insecure and vulnerable to brute force attacks and cracking. Some algorithms have issues such as weak corre...Some existing image encryption schemes use simple low-dimensional chaotic systems, which makes the algorithms insecure and vulnerable to brute force attacks and cracking. Some algorithms have issues such as weak correlation with plaintext images, poor image reconstruction quality, and low efficiency in transmission and storage. To solve these issues,this paper proposes an optical image encryption algorithm based on a new four-dimensional memristive hyperchaotic system(4D MHS) and compressed sensing(CS). Firstly, this paper proposes a new 4D MHS, which has larger key space, richer dynamic behavior, and more complex hyperchaotic characteristics. The introduction of CS can reduce the image size and the transmission burden of hardware devices. The introduction of double random phase encoding(DRPE) enables this algorithm has the ability of parallel data processing and multi-dimensional coding space, and the hyperchaotic characteristics of 4D MHS make up for the nonlinear deficiency of DRPE. Secondly, a construction method of the deterministic chaotic measurement matrix(DCMM) is proposed. Using DCMM can not only save a lot of transmission bandwidth and storage space, but also ensure good quality of reconstructed images. Thirdly, the confusion method and diffusion method proposed are related to plaintext images, which require both four hyperchaotic sequences of 4D MHS and row and column keys based on plaintext images. The generation process of hyperchaotic sequences is closely related to the hash value of plaintext images. Therefore, this algorithm has high sensitivity to plaintext images. The experimental testing and comparative analysis results show that proposed algorithm has good security and effectiveness.展开更多
Mangroves are indispensable to coastlines,maintaining biodiversity,and mitigating climate change.Therefore,improving the accuracy of mangrove information identification is crucial for their ecological protection.Aimin...Mangroves are indispensable to coastlines,maintaining biodiversity,and mitigating climate change.Therefore,improving the accuracy of mangrove information identification is crucial for their ecological protection.Aiming at the limited morphological information of synthetic aperture radar(SAR)images,which is greatly interfered by noise,and the susceptibility of optical images to weather and lighting conditions,this paper proposes a pixel-level weighted fusion method for SAR and optical images.Image fusion enhanced the target features and made mangrove monitoring more comprehensive and accurate.To address the problem of high similarity between mangrove forests and other forests,this paper is based on the U-Net convolutional neural network,and an attention mechanism is added in the feature extraction stage to make the model pay more attention to the mangrove vegetation area in the image.In order to accelerate the convergence and normalize the input,batch normalization(BN)layer and Dropout layer are added after each convolutional layer.Since mangroves are a minority class in the image,an improved cross-entropy loss function is introduced in this paper to improve the model’s ability to recognize mangroves.The AttU-Net model for mangrove recognition in high similarity environments is thus constructed based on the fused images.Through comparison experiments,the overall accuracy of the improved U-Net model trained from the fused images to recognize the predicted regions is significantly improved.Based on the fused images,the recognition results of the AttU-Net model proposed in this paper are compared with its benchmark model,U-Net,and the Dense-Net,Res-Net,and Seg-Net methods.The AttU-Net model captured mangroves’complex structures and textural features in images more effectively.The average OA,F1-score,and Kappa coefficient in the four tested regions were 94.406%,90.006%,and 84.045%,which were significantly higher than several other methods.This method can provide some technical support for the monitoring and protection of mangrove ecosystems.展开更多
We propose an optical image watermarking scheme based on orbital angular momentum(OAM)holography.Multiple topological charges(TCs,l)of OAM,as multiple cryptographic sub-keys,are embedded into the host image along with...We propose an optical image watermarking scheme based on orbital angular momentum(OAM)holography.Multiple topological charges(TCs,l)of OAM,as multiple cryptographic sub-keys,are embedded into the host image along with the watermark information.Moreover,the Arnold transformation is employed to further enhance the security and the scrambling time(m)is also served as another cryptographic key.The watermark image is embedded into the host image by using the discrete wavelet transformation(DWT)and singular value decomposition(SVD)methods.Importantly,the interference image is utilized to further enhance security.The imperceptibility of our proposed method is analyzed by using the peak signal-to-noise ratio(PSNR)and the histogram of the watermarked host image.To demonstrate robustness,a series of attack tests,including Gaussian noise,Poisson noise,salt-and-pepper noise,JPEG compression,Gaussian lowpass filtering,cropping,and rotation,are conducted.The experimental results show that our proposed method has advanced security,imperceptibility,and robustness,making it a promising option for optical image watermarking applications.展开更多
Mountain glaciers are sensitive to environment. It is important to acquire ice flow velocities over time for glacier research and hazard forecast. For this paper, cross-correlating of optical images is used to monitor...Mountain glaciers are sensitive to environment. It is important to acquire ice flow velocities over time for glacier research and hazard forecast. For this paper, cross-correlating of optical images is used to monitor ice flow velocities, and an improvement, which is called "moving grid," is made to this method. For this research, two remote-sensing images in a certain glacier area, dur-ing different times are selected. The first image is divided into grids, and we calculated the correlation coefficient of each window in the grid with the window on the second image. The window with the highest correlation coefficient is considered the counter-part one on the first image. The displacement of the two corresponding windows is the movement of the glacier, and it is used to calculate glacier surface velocity. Compared to the traditional way of dividing an image with ascertain grid, this method uses small steps to move the grid from one location to another adjacent location until the whole glacier area is covered in the image, thus in-creasing corresponding point density. We selected a glacier in the Tianshan Mountains for this experiment and used two re-mote-sensing images with a 10-year interval to determine this method.展开更多
Vascular segmentation is a crucial task in biomedical image processing,which is significant for analyzing and modeling vascular networks under physiological and pathological states.With advances in fluorescent labelin...Vascular segmentation is a crucial task in biomedical image processing,which is significant for analyzing and modeling vascular networks under physiological and pathological states.With advances in fluorescent labeling and mesoscopic optical techniques,it has become possible to map the whole-mouse-brain vascular networks at capillary resolution.However,segmenting vessels from mesoscopic optical images is a challenging task.The problems,such as vascular signal discontinuities,vessel lumens,and background fluorescence signals in mesoscopic optical images,belong to global semantic information during vascular segmentation.Traditional vascular segmentation methods based on convolutional neural networks(CNNs)have been limited by their insufficient receptive fields,making it challenging to capture global semantic information of vessels and resulting in inaccurate segmentation results.Here,we propose SegVesseler,a vascular segmentation method based on Swin Transformer.SegVesseler adopts 3D Swin Transformer blocks to extract global contextual information in 3D images.This approach is able to maintain the connectivity and topology of blood vessels during segmentation.We evaluated the performance of our method on mouse cerebrovascular datasets generated from three different labeling and imaging modalities.The experimental results demonstrate that the segmentation effect of our method is significantly better than traditional CNNs and achieves state-of-the-art performance.展开更多
Mudflat vegetation plays a crucial role in the ecological function of wetland environment,and obtaining its fine spatial distri-bution is of great significance for wetland protection and management.Remote sensing tech...Mudflat vegetation plays a crucial role in the ecological function of wetland environment,and obtaining its fine spatial distri-bution is of great significance for wetland protection and management.Remote sensing techniques can realize the rapid extraction of wetland vegetation over a large area.However,the imaging of optical sensors is easily restricted by weather conditions,and the backs-cattered information reflected by Synthetic Aperture Radar(SAR)images is easily disturbed by many factors.Although both data sources have been applied in wetland vegetation classification,there is a lack of comparative study on how the selection of data sources affects the classification effect.This study takes the vegetation of the tidal flat wetland in Chongming Island,Shanghai,China,in 2019,as the research subject.A total of 22 optical feature parameters and 11 SAR feature parameters were extracted from the optical data source(Sentinel-2)and SAR data source(Sentinel-1),respectively.The performance of optical and SAR data and their feature paramet-ers in wetland vegetation classification was quantitatively compared and analyzed by different feature combinations.Furthermore,by simulating the scenario of missing optical images,the impact of optical image missing on vegetation classification accuracy and the compensatory effect of integrating SAR data were revealed.Results show that:1)under the same classification algorithm,the Overall Accuracy(OA)of the combined use of optical and SAR images was the highest,reaching 95.50%.The OA of using only optical images was slightly lower,while using only SAR images yields the lowest accuracy,but still achieved 86.48%.2)Compared to using the spec-tral reflectance of optical data and the backscattering coefficient of SAR data directly,the constructed optical and SAR feature paramet-ers contributed to improving classification accuracy.The inclusion of optical(vegetation index,spatial texture,and phenology features)and SAR feature parameters(SAR index and SAR texture features)in the classification algorithm resulted in an OA improvement of 4.56%and 9.47%,respectively.SAR backscatter,SAR index,optical phenological features,and vegetation index were identified as the top-ranking important features.3)When the optical data were missing continuously for six months,the OA dropped to a minimum of 41.56%.However,when combined with SAR data,the OA could be improved to 71.62%.This indicates that the incorporation of SAR features can effectively compensate for the loss of accuracy caused by optical image missing,especially in regions with long-term cloud cover.展开更多
In this work,using the thin disk model,we examine the optical observations of asymmetric thin-shell wormholes(ATWs)within the theoretical framework of higher-order non-commutative geometry.By utilizing ray tracing tec...In this work,using the thin disk model,we examine the optical observations of asymmetric thin-shell wormholes(ATWs)within the theoretical framework of higher-order non-commutative geometry.By utilizing ray tracing technology,the trajectories of photons under various relevant parameters,as well as the optical observational appearance of ATW,can be accurately simulated.Compared to the black hole(BH)spacetime,observational images of ATW will exhibit extra bright ring structures.The results show that an increase in the non-commutative parameter leads to the innermost extra photon ring moving away from the shadow region,while the second extra photon ring moves closer to the shadow region.However,only one extra bright ring structure is observed in the image when the non-commutative parameter increases toθ=0.03,implying that the observed features of ATWs seem to become increasingly visually similar to a BH with increasingθ.Furthermore,an increase in the mass ratio will result in a reduction of the radius of the innermost extra photon ring,whereas an increase in the throat radius will lead to an expansion of its radius.Notably,neither parameter has a significant impact on the size of the second extra photon ring.These findings significantly advance our theoretical understanding of the optical features of ATWs with higher-order non-commutative corrections.展开更多
BACKGROUND Optical coherence tomography(OCT)enables high-resolution,non-invasive visualization of retinal structures.Recent evidence suggests that retinal layer alterations may reflect central nervous system changes a...BACKGROUND Optical coherence tomography(OCT)enables high-resolution,non-invasive visualization of retinal structures.Recent evidence suggests that retinal layer alterations may reflect central nervous system changes associated with psychiatric disorders such as schizophrenia(SZ).AIM To develop an advanced deep learning model to classify OCT images and distinguish patients with SZ from healthy controls using retinal biomarkers.METHODS A novel convolutional neural network,Self-AttentionNeXt,was designed by integrating grouped self-attention mechanisms,residual and inverted bottleneck blocks,and a final 1×1 convolution for feature refinement.The model was trained and tested on both a custom OCT dataset collected from patients with SZ and a publicly available OCT dataset(OCT2017).RESULTS Self-AttentionNeXt achieved 97.0%accuracy on the collected SZ OCT dataset and over 95%accuracy on the public OCT2017 dataset.Gradient-weighted class activation mapping visualizations confirmed the model’s attention to clinically relevant retinal regions,suggesting effective feature localization.CONCLUSION Self-AttentionNeXt effectively combines transformer-inspired attention mechanisms with convolutional neural networks architecture to support the early and accurate detection of SZ using OCT images.This approach offers a promising direction for artificial intelligence-assisted psychiatric diagnostics and clinical decision support.展开更多
Bragg processing using a volume hologram offers an alternative in optical image processing in contrast to Fourier-plane processing. By placing a volume hologram near the object in an optical imaging setup, we achieve ...Bragg processing using a volume hologram offers an alternative in optical image processing in contrast to Fourier-plane processing. By placing a volume hologram near the object in an optical imaging setup, we achieve Bragg processing. In this review, we discuss various image processing methods achievable with acousto-optic modulators as dynamic and programmable volume holograms. In particular, we concentrate on the discussion of various differentiation operations leading to edge extraction capabilities.展开更多
Although Convolutional Neural Networks(CNNs)have significantly improved the development of image Super-Resolution(SR)technology in recent years,the existing SR methods for SAR image with large scale factors have rarel...Although Convolutional Neural Networks(CNNs)have significantly improved the development of image Super-Resolution(SR)technology in recent years,the existing SR methods for SAR image with large scale factors have rarely been studied due to technical difficulty.A more efficient method is to obtain comprehensive information to guide the SAR image reconstruction.Indeed,the co-registered High-Resolution(HR)optical image has been successfully applied to enhance the quality of SAR image due to its discriminative characteristics.Inspired by this,we propose a novel Optical-Guided Super-Resolution Network(OGSRN)for SAR image with large scale factors.Specifically,our proposed OGSRN consists of two sub-nets:a SAR image SuperResolution U-Net(SRUN)and a SAR-to-Optical Residual Translation Network(SORTN).The whole process during training includes two stages.In stage-1,the SR SAR images are reconstructed by the SRUN.And an Enhanced Residual Attention Module(ERAM),which is comprised of the Channel Attention(CA)and Spatial Attention(SA)mechanisms,is constructed to boost the representation ability of the network.In stage-2,the output of the stage-1 and its corresponding HR SAR images are translated to optical images by the SORTN,respectively.And then the differences between SR images and HR images are computed in the optical space to obtain feedback information that can reduce the space of possible SR solution.After that,we can use the optimized SRUN to directly produce HR SAR image from Low-Resolution(LR)SAR image in the testing phase.The experimental results show that under the guidance of optical image,our OGSRN can achieve excellent performance in both quantitative assessment metrics and visual quality.展开更多
Sea ice as a disaster has recently attracted a great deal of attention in China. Its monitoring has become a routine task for the maritime sector. Remote sensing, which depends mainly on SAR and optical sensors, has b...Sea ice as a disaster has recently attracted a great deal of attention in China. Its monitoring has become a routine task for the maritime sector. Remote sensing, which depends mainly on SAR and optical sensors, has become the primary means for sea-ice research. Optical images contain abundant sea-ice multi-spectral in-formation, whereas SAR images contain rich sea-ice texture information. If the characteristic advantages of SAR and optical images could be combined for sea-ice study, the ability of sea-ice monitoring would be im-proved. In this study, in accordance with the characteristics of sea-ice SAR and optical images, the transfor-mation and fusion methods for these images were chosen. Also, a fusion method of optical and SAR images was proposed in order to improve sea-ice identification. Texture information can play an important role in sea-ice classification. Haar wavelet transformation was found to be suitable for the sea-ice SAR images, and the texture information of the sea-ice SAR image from Advanced Synthetic Aperture Radar (ASAR) loaded on ENVISAT was documented. The results of our studies showed that, the optical images in the hue-intensi-ty-saturation (HIS) space could reflect the spectral characteristics of the sea-ice types more efficiently than in the red-green-blue (RGB) space, and the optical image from the China-Brazil Earth Resources Satellite (CBERS-02B) was transferred from the RGB space to the HIS space. The principal component analysis (PCA) method could potentially contain the maximum information of the sea-ice images by fusing the HIS and texture images. The fusion image was obtained by a PCA method, which included the advantages of both the sea-ice SAR image and the optical image. To validate the fusion method, three methods were used to evaluate the fused image, i.e., objective, subjective, and comprehensive evaluations. It was concluded that the fusion method proposed could improve the ability of image interpretation and sea-ice identification.展开更多
This paper presents a bathymetry inversion method using single-frame fine-resolution optical remote sensing imagery based on ocean-wave refraction and shallow-water wave theory. First, the relationship among water dep...This paper presents a bathymetry inversion method using single-frame fine-resolution optical remote sensing imagery based on ocean-wave refraction and shallow-water wave theory. First, the relationship among water depth, wavelength and wave radian frequency in shallow water was deduced based on shallow-water wave theory. Considering the complex wave distribution in the optical remote sensing imagery, Fast Fourier Transform (FFT) and spatial profile measurements were applied for measuring the wavelengths. Then, the wave radian frequency was calculated by analyzing the long-distance fluctuation in the wavelength, which solved a key problem in obtaining the wave radian frequency in a single-frame image. A case study was conducted for Sanya Bay of Hainan Island, China. Single-flame fine-resolution optical remote sensing imagery from QuickBird satellite was used to invert the bathymetry without external input parameters. The result of the digital elevation model (DEM) was evaluated against a sea chart with a scale of 1:25 000. The root-mean-square error of the inverted bathymetry was 1.07 m, and the relative error was 16.2%. Therefore, the proposed method has the advantages including no requirement for true depths and environmental parameters, and is feasible for mapping the bathymetry of shallow coastal water.展开更多
Due to the bird’s eye view of remote sensing sensors,the orientational information of an object is a key factor that has to be considered in object detection.To obtain rotating bounding boxes,existing studies either ...Due to the bird’s eye view of remote sensing sensors,the orientational information of an object is a key factor that has to be considered in object detection.To obtain rotating bounding boxes,existing studies either rely on rotated anchoring schemes or adding complex rotating ROI transfer layers,leading to increased computational demand and reduced detection speeds.In this study,we propose a novel internal-external optimized convolutional neural network for arbitrary orientated object detection in optical remote sensing images.For the internal opti-mization,we designed an anchor-based single-shot head detector that adopts the concept of coarse-to-fine detection for two-stage object detection networks.The refined rotating anchors are generated from the coarse detection head module and fed into the refining detection head module with a link of an embedded deformable convolutional layer.For the external optimiza-tion,we propose an IOU balanced loss that addresses the regression challenges related to arbitrary orientated bounding boxes.Experimental results on the DOTA and HRSC2016 bench-mark datasets show that our proposed method outperforms selected methods.展开更多
To address the issue of imbalanced detection performance and detection speed in current mainstream object detection algorithms for optical remote sensing images,this paper proposes a multi-scale object detection model...To address the issue of imbalanced detection performance and detection speed in current mainstream object detection algorithms for optical remote sensing images,this paper proposes a multi-scale object detection model for remote sensing images on complex backgrounds,called DI-YOLO,based on You Only Look Once v7-tiny(YOLOv7-tiny).Firstly,to enhance the model’s ability to capture irregular-shaped objects and deformation features,as well as to extract high-level semantic information,deformable convolutions are used to replace standard convolutions in the original model.Secondly,a Content Coordination Attention Feature Pyramid Network(CCA-FPN)structure is designed to replace the Neck part of the original model,which can further perceive relationships between different pixels,reduce feature loss in remote sensing images,and improve the overall model’s ability to detect multi-scale objects.Thirdly,an Implicitly Efficient Decoupled Head(IEDH)is proposed to increase the model’s flexibility,making it more adaptable to complex detection tasks in various scenarios.Finally,the Smoothed Intersection over Union(SIoU)loss function replaces the Complete Intersection over Union(CIoU)loss function in the original model,resulting in more accurate prediction of bounding boxes and continuous model optimization.Experimental results on the High-Resolution Remote Sensing Detection(HRRSD)dataset demonstrate that the proposed DI-YOLO model outperforms mainstream target detection algorithms in terms of mean Average Precision(mAP)for optical remote sensing image detection.Furthermore,it achieves Frames Per Second(FPS)of 138.9,meeting fast and accurate detection requirements.展开更多
Automatic Digital Orthophoto Map(DOM)generation plays an important role in many downstream works such as land use and cover detection,urban planning,and disaster assessment.Existing DOM generation methods can generate...Automatic Digital Orthophoto Map(DOM)generation plays an important role in many downstream works such as land use and cover detection,urban planning,and disaster assessment.Existing DOM generation methods can generate promising results but always need ground object filtered DEM generation before otho-rectification;this can consume much time and produce building facade contained results.To address this problem,a pixel-by-pixel digital differential rectification-based automatic DOM generation method is proposed in this paper.Firstly,3D point clouds with texture are generated by dense image matching based on an optical flow field for a stereo pair of images,respectively.Then,the grayscale of the digital differential rectification image is extracted directly from the point clouds element by element according to the nearest neighbor method for matched points.Subsequently,the elevation is repaired grid-by-grid using the multi-layer Locally Refined B-spline(LR-B)interpolation method with triangular mesh constraint for the point clouds void area,and the grayscale is obtained by the indirect scheme of digital differential rectification to generate the pixel-by-pixel digital differentially rectified image of a single image slice.Finally,a seamline network is automatically searched using a disparity map optimization algorithm,and DOM is smartly mosaicked.The qualitative and quantitative experimental results on three datasets were produced and evaluated,which confirmed the feasibility of the proposed method,and the DOM accuracy can reach 1 Ground Sample Distance(GSD)level.The comparison experiment with the state-of-the-art commercial softwares showed that the proposed method generated DOM has a better visual effect on building boundaries and roof completeness with comparable accuracy and computational efficiency.展开更多
To verify the effectiveness of digital optical 3D image analyzer EvaSKIN in the objective and quantitative evaluation of wrinkles.A total of 115 subjects were recruited,the facial images of the subjects were collected...To verify the effectiveness of digital optical 3D image analyzer EvaSKIN in the objective and quantitative evaluation of wrinkles.A total of 115 subjects were recruited,the facial images of the subjects were collected by digital optical 3D image analyzer and manual camera,the changes of crow’s feet with age were analyzed.Pictures obtained by manual photography can be directly used for observation and preliminary grading of wrinkles.However,the requirements for evaluators are high,and the results are prone to errors,which will affect the accuracy of the evaluation.Therefore,skilled raters are needed.Compared with the manual photography method,the digital optical 3D image analyzer EvaSKIN can realize three-dimensional extraction of wrinkles,and obtain the change trend of crow’s feet with age.20~30 years old,wrinkles begin to appear slowly;wrinkles will increase rapidly at the age of 30~50;The length of 50~60 year old wrinkles is basically fixed,the wrinkles develop longitudewise,gradually widen and deepen,and the area,depth and volume increase is obvious,and the skin aging condition is intensified.the digital optical 3D image analyzer EvaSKIN realizes the 3D extraction of wrinkles,quantifies the circumference,area,average depth,maximum depth and volume of wrinkles,realizes the objective and quantitative evaluation of wrinkle state,is more accurate in the measurement of wrinkles,and provides a new instrument and method for the evaluation of wrinkles.it is a perfect and supplement to the traditional evaluation methods,and to a certain extent,it helps the research and development and evaluation institutions of cosmetics to obtain more abundant and three-dimensional data support.展开更多
Pre-operative X ray mammography and int raoperative X-ray specimen radiography are routinely used to identify breast cancer pathology.Recent advances in optical coherence tomography(OCT)have enabled its 1use for the i...Pre-operative X ray mammography and int raoperative X-ray specimen radiography are routinely used to identify breast cancer pathology.Recent advances in optical coherence tomography(OCT)have enabled its 1use for the intraoperative assessment of surgical margins during breast cancer surgery.While each modality offers distinct contrast of normal and pathological features,there is an essential need to correlate image based features between the two modalities to take adv antage of the diagnostic capabilities of each technique.We compare OCT to X-ray images of resected human breast tissue and correlate different tissue features between modalities for future use in real-tine intraoperative OCT imaging.X ray imaging(specimen radiography)is currently used during surgical breast cancer procedures to verify tumor margins,but cannot image tissue in situ.OCT has the potential to solve this problem by providing intrao-perative imaging of the resected specimen as well as the in situ tumor cavity.OCT and micro-CT(X-ray)images are automatically segmented using different computational approaches,and quantitatively compared to determine the ability of these algorithms to automat ically differentiate regions of adipose tissue from tumor.Furthermore,two-dimensional(2D)and three-dimensional(3D)results are compared.These correlations,combined with real-time intraoperative OCT,have the potential to identify possible regions of tumor within breast tissue which correlate to tumor regions identified previously on X-ray imaging(mammography or specimen radiography).展开更多
In this paper,a lifted Haar transform(LHT)image compression optical chip has been researched to achieve rapid image compression.The chip comprises 32 same image compression optical circuits,and each circuit contains a...In this paper,a lifted Haar transform(LHT)image compression optical chip has been researched to achieve rapid image compression.The chip comprises 32 same image compression optical circuits,and each circuit contains a 2×2 multimode interference(MMI)coupler and aπ/2 delay line phase shifter as the key components.The chip uses highly borosilicate glass as the substrate,Su8 negative photoresist as the core layer,and air as the cladding layer.Its horizontal and longitudinal dimensions are 8011μm×10000μm.Simulation results present that the designed optical circuit has a coupling ratio(CR)of 0:100 and an insertion loss(IL)of 0.001548 d B.Then the chip is fabricated by femtosecond laser and testing results illustrate that the chip has a CR of 6:94 and an IL of 0.518 d B.So,the prepared chip possesses good image compression performance.展开更多
基金supported by Joint Fund of Natural Science Foundation of Zhejiang-Qingshanhu Science and Technology City(Grant No.LQY18C160002)National Natural Science Foundation of China(Grant No.U1809208)+1 种基金Zhejiang Science and Technology Key R&D Program Funded Project(Grant No.2018C02013)Natural Science Foundation of Zhejiang Province(Grant No.LQ20F020005).
文摘The diversity of tree species and the complexity of land use in cities create challenging issues for tree species classification.The combination of deep learning methods and RGB optical images obtained by unmanned aerial vehicles(UAVs) provides a new research direction for urban tree species classification.We proposed an RGB optical image dataset with 10 urban tree species,termed TCC10,which is a benchmark for tree canopy classification(TCC).TCC10 dataset contains two types of data:tree canopy images with simple backgrounds and those with complex backgrounds.The objective was to examine the possibility of using deep learning methods(AlexNet,VGG-16,and ResNet-50) for individual tree species classification.The results of convolutional neural networks(CNNs) were compared with those of K-nearest neighbor(KNN) and BP neural network.Our results demonstrated:(1) ResNet-50 achieved an overall accuracy(OA) of 92.6% and a kappa coefficient of 0.91 for tree species classification on TCC10 and outperformed AlexNet and VGG-16.(2) The classification accuracy of KNN and BP neural network was less than70%,while the accuracy of CNNs was relatively higher.(3)The classification accuracy of tree canopy images with complex backgrounds was lower than that for images with simple backgrounds.For the deciduous tree species in TCC10,the classification accuracy of ResNet-50 was higher in summer than that in autumn.Therefore,the deep learning is effective for urban tree species classification using RGB optical images.
基金the Natural Science Foun-dation of Shandong Province(No.ZR2019MD034)。
文摘It is difficult to balance local details and global distribution using a single source image in marine target detection of a large scene.To solve this problem,a technique based on the fusion of optical image and synthetic aperture radar(SAR)image for the extraction of sea ice is proposed in this paper.The Band 2(B2 image of Sentinel-2(S2 in the research area is selected as optical image data.Preprocessing on the optical image,such as resampling,projection transformation and format conversion,are conducted to the S2 dataset before fusion.Imaging characteristics of the sea ice have been analyzed,and a new deep learning(DL)model,OceanTDL5,is built to detect sea ices.The fusion of the Sentinel-1(S1 and S2 images is realized by solving the optimal pixel values based on deriving Poisson Equation.The experimental results indicate that the use of a fused image improves the accuracy of sea ice detection compared with the use of a single data source.The fused image has richer spatial details and a clearer texture compared with the original optical image,and its material sense and color are more abundant.
文摘Some existing image encryption schemes use simple low-dimensional chaotic systems, which makes the algorithms insecure and vulnerable to brute force attacks and cracking. Some algorithms have issues such as weak correlation with plaintext images, poor image reconstruction quality, and low efficiency in transmission and storage. To solve these issues,this paper proposes an optical image encryption algorithm based on a new four-dimensional memristive hyperchaotic system(4D MHS) and compressed sensing(CS). Firstly, this paper proposes a new 4D MHS, which has larger key space, richer dynamic behavior, and more complex hyperchaotic characteristics. The introduction of CS can reduce the image size and the transmission burden of hardware devices. The introduction of double random phase encoding(DRPE) enables this algorithm has the ability of parallel data processing and multi-dimensional coding space, and the hyperchaotic characteristics of 4D MHS make up for the nonlinear deficiency of DRPE. Secondly, a construction method of the deterministic chaotic measurement matrix(DCMM) is proposed. Using DCMM can not only save a lot of transmission bandwidth and storage space, but also ensure good quality of reconstructed images. Thirdly, the confusion method and diffusion method proposed are related to plaintext images, which require both four hyperchaotic sequences of 4D MHS and row and column keys based on plaintext images. The generation process of hyperchaotic sequences is closely related to the hash value of plaintext images. Therefore, this algorithm has high sensitivity to plaintext images. The experimental testing and comparative analysis results show that proposed algorithm has good security and effectiveness.
基金The Key R&D Project of Hainan Province under contract No.ZDYF2023SHFZ097the National Natural Science Foundation of China under contract No.42376180。
文摘Mangroves are indispensable to coastlines,maintaining biodiversity,and mitigating climate change.Therefore,improving the accuracy of mangrove information identification is crucial for their ecological protection.Aiming at the limited morphological information of synthetic aperture radar(SAR)images,which is greatly interfered by noise,and the susceptibility of optical images to weather and lighting conditions,this paper proposes a pixel-level weighted fusion method for SAR and optical images.Image fusion enhanced the target features and made mangrove monitoring more comprehensive and accurate.To address the problem of high similarity between mangrove forests and other forests,this paper is based on the U-Net convolutional neural network,and an attention mechanism is added in the feature extraction stage to make the model pay more attention to the mangrove vegetation area in the image.In order to accelerate the convergence and normalize the input,batch normalization(BN)layer and Dropout layer are added after each convolutional layer.Since mangroves are a minority class in the image,an improved cross-entropy loss function is introduced in this paper to improve the model’s ability to recognize mangroves.The AttU-Net model for mangrove recognition in high similarity environments is thus constructed based on the fused images.Through comparison experiments,the overall accuracy of the improved U-Net model trained from the fused images to recognize the predicted regions is significantly improved.Based on the fused images,the recognition results of the AttU-Net model proposed in this paper are compared with its benchmark model,U-Net,and the Dense-Net,Res-Net,and Seg-Net methods.The AttU-Net model captured mangroves’complex structures and textural features in images more effectively.The average OA,F1-score,and Kappa coefficient in the four tested regions were 94.406%,90.006%,and 84.045%,which were significantly higher than several other methods.This method can provide some technical support for the monitoring and protection of mangrove ecosystems.
基金Project supported by the National Natural Science Foundation of China(Grant No.62375140)the Natural Science Foundation of Suqian,Jiangsu Province,China(Grant No.S202108)+1 种基金the Open Research Fund of the National Laboratory of Solid State Microstructures(Grant No.M36055)the Postgraduate Research&Practice Innovation Program of Jiangsu Province,China(Grant No.KYCX21-0745)。
文摘We propose an optical image watermarking scheme based on orbital angular momentum(OAM)holography.Multiple topological charges(TCs,l)of OAM,as multiple cryptographic sub-keys,are embedded into the host image along with the watermark information.Moreover,the Arnold transformation is employed to further enhance the security and the scrambling time(m)is also served as another cryptographic key.The watermark image is embedded into the host image by using the discrete wavelet transformation(DWT)and singular value decomposition(SVD)methods.Importantly,the interference image is utilized to further enhance security.The imperceptibility of our proposed method is analyzed by using the peak signal-to-noise ratio(PSNR)and the histogram of the watermarked host image.To demonstrate robustness,a series of attack tests,including Gaussian noise,Poisson noise,salt-and-pepper noise,JPEG compression,Gaussian lowpass filtering,cropping,and rotation,are conducted.The experimental results show that our proposed method has advanced security,imperceptibility,and robustness,making it a promising option for optical image watermarking applications.
基金supported by the National Basic Research Program of China (Grant No. 2009CB723901)863 program (2009AA12Z145)the Chinese Academy of Sciences (kzcx2-yw-301)
文摘Mountain glaciers are sensitive to environment. It is important to acquire ice flow velocities over time for glacier research and hazard forecast. For this paper, cross-correlating of optical images is used to monitor ice flow velocities, and an improvement, which is called "moving grid," is made to this method. For this research, two remote-sensing images in a certain glacier area, dur-ing different times are selected. The first image is divided into grids, and we calculated the correlation coefficient of each window in the grid with the window on the second image. The window with the highest correlation coefficient is considered the counter-part one on the first image. The displacement of the two corresponding windows is the movement of the glacier, and it is used to calculate glacier surface velocity. Compared to the traditional way of dividing an image with ascertain grid, this method uses small steps to move the grid from one location to another adjacent location until the whole glacier area is covered in the image, thus in-creasing corresponding point density. We selected a glacier in the Tianshan Mountains for this experiment and used two re-mote-sensing images with a 10-year interval to determine this method.
基金supported by the STI2030-Major Projects (2021ZD0201002)the National Natural Science Foundation of China (82102137,T2122015)+2 种基金Natural Science Foundation of Shaanxi Provincial Department of Education (21JK0796)the Open Project Program of Wuhan National Laboratory for Optoelectronics (2021WNL OKF006)the Natural Science Foundation of Sichuan Province (2022NSFSC0964).
文摘Vascular segmentation is a crucial task in biomedical image processing,which is significant for analyzing and modeling vascular networks under physiological and pathological states.With advances in fluorescent labeling and mesoscopic optical techniques,it has become possible to map the whole-mouse-brain vascular networks at capillary resolution.However,segmenting vessels from mesoscopic optical images is a challenging task.The problems,such as vascular signal discontinuities,vessel lumens,and background fluorescence signals in mesoscopic optical images,belong to global semantic information during vascular segmentation.Traditional vascular segmentation methods based on convolutional neural networks(CNNs)have been limited by their insufficient receptive fields,making it challenging to capture global semantic information of vessels and resulting in inaccurate segmentation results.Here,we propose SegVesseler,a vascular segmentation method based on Swin Transformer.SegVesseler adopts 3D Swin Transformer blocks to extract global contextual information in 3D images.This approach is able to maintain the connectivity and topology of blood vessels during segmentation.We evaluated the performance of our method on mouse cerebrovascular datasets generated from three different labeling and imaging modalities.The experimental results demonstrate that the segmentation effect of our method is significantly better than traditional CNNs and achieves state-of-the-art performance.
基金Under the auspices of the National Key Research and Development Program of China(No.2023YFC3208500)Shanghai Municipal Natural Science Foundation(No.22ZR1421500)+3 种基金National Natural Science Foundation of China(No.U2243207)National Science and Technology Basic Resources Survey Project(No.2023FY01001)Open Research Fund of State Key Laboratory of Estuarine and Coastal Research(No.SKLEC-KF202406)Project from Science and Technology Commission of Shanghai Municipality(No.22DZ1202700)。
文摘Mudflat vegetation plays a crucial role in the ecological function of wetland environment,and obtaining its fine spatial distri-bution is of great significance for wetland protection and management.Remote sensing techniques can realize the rapid extraction of wetland vegetation over a large area.However,the imaging of optical sensors is easily restricted by weather conditions,and the backs-cattered information reflected by Synthetic Aperture Radar(SAR)images is easily disturbed by many factors.Although both data sources have been applied in wetland vegetation classification,there is a lack of comparative study on how the selection of data sources affects the classification effect.This study takes the vegetation of the tidal flat wetland in Chongming Island,Shanghai,China,in 2019,as the research subject.A total of 22 optical feature parameters and 11 SAR feature parameters were extracted from the optical data source(Sentinel-2)and SAR data source(Sentinel-1),respectively.The performance of optical and SAR data and their feature paramet-ers in wetland vegetation classification was quantitatively compared and analyzed by different feature combinations.Furthermore,by simulating the scenario of missing optical images,the impact of optical image missing on vegetation classification accuracy and the compensatory effect of integrating SAR data were revealed.Results show that:1)under the same classification algorithm,the Overall Accuracy(OA)of the combined use of optical and SAR images was the highest,reaching 95.50%.The OA of using only optical images was slightly lower,while using only SAR images yields the lowest accuracy,but still achieved 86.48%.2)Compared to using the spec-tral reflectance of optical data and the backscattering coefficient of SAR data directly,the constructed optical and SAR feature paramet-ers contributed to improving classification accuracy.The inclusion of optical(vegetation index,spatial texture,and phenology features)and SAR feature parameters(SAR index and SAR texture features)in the classification algorithm resulted in an OA improvement of 4.56%and 9.47%,respectively.SAR backscatter,SAR index,optical phenological features,and vegetation index were identified as the top-ranking important features.3)When the optical data were missing continuously for six months,the OA dropped to a minimum of 41.56%.However,when combined with SAR data,the OA could be improved to 71.62%.This indicates that the incorporation of SAR features can effectively compensate for the loss of accuracy caused by optical image missing,especially in regions with long-term cloud cover.
基金supported by the National Natural Science Foundation of China(11903025)by the Sichuan Science and Technology Program(2024NSFSC1999)。
文摘In this work,using the thin disk model,we examine the optical observations of asymmetric thin-shell wormholes(ATWs)within the theoretical framework of higher-order non-commutative geometry.By utilizing ray tracing technology,the trajectories of photons under various relevant parameters,as well as the optical observational appearance of ATW,can be accurately simulated.Compared to the black hole(BH)spacetime,observational images of ATW will exhibit extra bright ring structures.The results show that an increase in the non-commutative parameter leads to the innermost extra photon ring moving away from the shadow region,while the second extra photon ring moves closer to the shadow region.However,only one extra bright ring structure is observed in the image when the non-commutative parameter increases toθ=0.03,implying that the observed features of ATWs seem to become increasingly visually similar to a BH with increasingθ.Furthermore,an increase in the mass ratio will result in a reduction of the radius of the innermost extra photon ring,whereas an increase in the throat radius will lead to an expansion of its radius.Notably,neither parameter has a significant impact on the size of the second extra photon ring.These findings significantly advance our theoretical understanding of the optical features of ATWs with higher-order non-commutative corrections.
文摘BACKGROUND Optical coherence tomography(OCT)enables high-resolution,non-invasive visualization of retinal structures.Recent evidence suggests that retinal layer alterations may reflect central nervous system changes associated with psychiatric disorders such as schizophrenia(SZ).AIM To develop an advanced deep learning model to classify OCT images and distinguish patients with SZ from healthy controls using retinal biomarkers.METHODS A novel convolutional neural network,Self-AttentionNeXt,was designed by integrating grouped self-attention mechanisms,residual and inverted bottleneck blocks,and a final 1×1 convolution for feature refinement.The model was trained and tested on both a custom OCT dataset collected from patients with SZ and a publicly available OCT dataset(OCT2017).RESULTS Self-AttentionNeXt achieved 97.0%accuracy on the collected SZ OCT dataset and over 95%accuracy on the public OCT2017 dataset.Gradient-weighted class activation mapping visualizations confirmed the model’s attention to clinically relevant retinal regions,suggesting effective feature localization.CONCLUSION Self-AttentionNeXt effectively combines transformer-inspired attention mechanisms with convolutional neural networks architecture to support the early and accurate detection of SZ using OCT images.This approach offers a promising direction for artificial intelligence-assisted psychiatric diagnostics and clinical decision support.
基金The work was supported by the National Natural Science Foundation of China(Nos.11762009 and 61865007)the Natural Science Foundation of Yunnan Province,China(No.2018FB101)+1 种基金the Key Program of Science and Technology of Yunnan Province(No.2019FA025)the Yunnan Provincial Program for Foreign Talent(No.104126760027)。
文摘Bragg processing using a volume hologram offers an alternative in optical image processing in contrast to Fourier-plane processing. By placing a volume hologram near the object in an optical imaging setup, we achieve Bragg processing. In this review, we discuss various image processing methods achievable with acousto-optic modulators as dynamic and programmable volume holograms. In particular, we concentrate on the discussion of various differentiation operations leading to edge extraction capabilities.
基金supported by the National Natural Science Foundation of China(Nos.61771319,62076165 and 61871154)the Natural Science Foundation of Guangdong Province,China(No.2019A1515011307)+1 种基金Shenzhen Science and Technology Project,China(Nos.JCYJ20180507182259896 and 20200826154022001)the other project(Nos.2020KCXTD004 and WDZC20195500201)。
文摘Although Convolutional Neural Networks(CNNs)have significantly improved the development of image Super-Resolution(SR)technology in recent years,the existing SR methods for SAR image with large scale factors have rarely been studied due to technical difficulty.A more efficient method is to obtain comprehensive information to guide the SAR image reconstruction.Indeed,the co-registered High-Resolution(HR)optical image has been successfully applied to enhance the quality of SAR image due to its discriminative characteristics.Inspired by this,we propose a novel Optical-Guided Super-Resolution Network(OGSRN)for SAR image with large scale factors.Specifically,our proposed OGSRN consists of two sub-nets:a SAR image SuperResolution U-Net(SRUN)and a SAR-to-Optical Residual Translation Network(SORTN).The whole process during training includes two stages.In stage-1,the SR SAR images are reconstructed by the SRUN.And an Enhanced Residual Attention Module(ERAM),which is comprised of the Channel Attention(CA)and Spatial Attention(SA)mechanisms,is constructed to boost the representation ability of the network.In stage-2,the output of the stage-1 and its corresponding HR SAR images are translated to optical images by the SORTN,respectively.And then the differences between SR images and HR images are computed in the optical space to obtain feedback information that can reduce the space of possible SR solution.After that,we can use the optimized SRUN to directly produce HR SAR image from Low-Resolution(LR)SAR image in the testing phase.The experimental results show that under the guidance of optical image,our OGSRN can achieve excellent performance in both quantitative assessment metrics and visual quality.
基金The National Science Foundation for Young Scientists of China under contract No.41306193the National Special Research Fund for Non-Profit Marine Sector of China under contract No.201105016the ESA-MOST Dragon 3 Cooperation Programme under contract No.10501
文摘Sea ice as a disaster has recently attracted a great deal of attention in China. Its monitoring has become a routine task for the maritime sector. Remote sensing, which depends mainly on SAR and optical sensors, has become the primary means for sea-ice research. Optical images contain abundant sea-ice multi-spectral in-formation, whereas SAR images contain rich sea-ice texture information. If the characteristic advantages of SAR and optical images could be combined for sea-ice study, the ability of sea-ice monitoring would be im-proved. In this study, in accordance with the characteristics of sea-ice SAR and optical images, the transfor-mation and fusion methods for these images were chosen. Also, a fusion method of optical and SAR images was proposed in order to improve sea-ice identification. Texture information can play an important role in sea-ice classification. Haar wavelet transformation was found to be suitable for the sea-ice SAR images, and the texture information of the sea-ice SAR image from Advanced Synthetic Aperture Radar (ASAR) loaded on ENVISAT was documented. The results of our studies showed that, the optical images in the hue-intensi-ty-saturation (HIS) space could reflect the spectral characteristics of the sea-ice types more efficiently than in the red-green-blue (RGB) space, and the optical image from the China-Brazil Earth Resources Satellite (CBERS-02B) was transferred from the RGB space to the HIS space. The principal component analysis (PCA) method could potentially contain the maximum information of the sea-ice images by fusing the HIS and texture images. The fusion image was obtained by a PCA method, which included the advantages of both the sea-ice SAR image and the optical image. To validate the fusion method, three methods were used to evaluate the fused image, i.e., objective, subjective, and comprehensive evaluations. It was concluded that the fusion method proposed could improve the ability of image interpretation and sea-ice identification.
基金The Public Science and Technology Research Fund Project of Ocean under contract No.201105001the National Nature Science Foundation of China under contract No.41576174the Public Science and Technology Research Fund Project of Surveying,Mapping and Geoinformation under contract No.201512030
文摘This paper presents a bathymetry inversion method using single-frame fine-resolution optical remote sensing imagery based on ocean-wave refraction and shallow-water wave theory. First, the relationship among water depth, wavelength and wave radian frequency in shallow water was deduced based on shallow-water wave theory. Considering the complex wave distribution in the optical remote sensing imagery, Fast Fourier Transform (FFT) and spatial profile measurements were applied for measuring the wavelengths. Then, the wave radian frequency was calculated by analyzing the long-distance fluctuation in the wavelength, which solved a key problem in obtaining the wave radian frequency in a single-frame image. A case study was conducted for Sanya Bay of Hainan Island, China. Single-flame fine-resolution optical remote sensing imagery from QuickBird satellite was used to invert the bathymetry without external input parameters. The result of the digital elevation model (DEM) was evaluated against a sea chart with a scale of 1:25 000. The root-mean-square error of the inverted bathymetry was 1.07 m, and the relative error was 16.2%. Therefore, the proposed method has the advantages including no requirement for true depths and environmental parameters, and is feasible for mapping the bathymetry of shallow coastal water.
基金This work is supported by the National Natural Science Foundation of China[grant numbers 41890820,41771452,41771454,and 41901340]。
文摘Due to the bird’s eye view of remote sensing sensors,the orientational information of an object is a key factor that has to be considered in object detection.To obtain rotating bounding boxes,existing studies either rely on rotated anchoring schemes or adding complex rotating ROI transfer layers,leading to increased computational demand and reduced detection speeds.In this study,we propose a novel internal-external optimized convolutional neural network for arbitrary orientated object detection in optical remote sensing images.For the internal opti-mization,we designed an anchor-based single-shot head detector that adopts the concept of coarse-to-fine detection for two-stage object detection networks.The refined rotating anchors are generated from the coarse detection head module and fed into the refining detection head module with a link of an embedded deformable convolutional layer.For the external optimiza-tion,we propose an IOU balanced loss that addresses the regression challenges related to arbitrary orientated bounding boxes.Experimental results on the DOTA and HRSC2016 bench-mark datasets show that our proposed method outperforms selected methods.
基金Funding for this research was provided by 511 Shaanxi Province’s Key Research and Development Plan(No.2022NY-087).
文摘To address the issue of imbalanced detection performance and detection speed in current mainstream object detection algorithms for optical remote sensing images,this paper proposes a multi-scale object detection model for remote sensing images on complex backgrounds,called DI-YOLO,based on You Only Look Once v7-tiny(YOLOv7-tiny).Firstly,to enhance the model’s ability to capture irregular-shaped objects and deformation features,as well as to extract high-level semantic information,deformable convolutions are used to replace standard convolutions in the original model.Secondly,a Content Coordination Attention Feature Pyramid Network(CCA-FPN)structure is designed to replace the Neck part of the original model,which can further perceive relationships between different pixels,reduce feature loss in remote sensing images,and improve the overall model’s ability to detect multi-scale objects.Thirdly,an Implicitly Efficient Decoupled Head(IEDH)is proposed to increase the model’s flexibility,making it more adaptable to complex detection tasks in various scenarios.Finally,the Smoothed Intersection over Union(SIoU)loss function replaces the Complete Intersection over Union(CIoU)loss function in the original model,resulting in more accurate prediction of bounding boxes and continuous model optimization.Experimental results on the High-Resolution Remote Sensing Detection(HRRSD)dataset demonstrate that the proposed DI-YOLO model outperforms mainstream target detection algorithms in terms of mean Average Precision(mAP)for optical remote sensing image detection.Furthermore,it achieves Frames Per Second(FPS)of 138.9,meeting fast and accurate detection requirements.
基金supported by the National Natural Science Foundation of China[Grant No.41771479]the National High-Resolution Earth Observation System(the Civil Part)[Grant No.50-H31D01-0508-13/15]the Japan Society for the Promotion of Science[Grant No.22H03573].
文摘Automatic Digital Orthophoto Map(DOM)generation plays an important role in many downstream works such as land use and cover detection,urban planning,and disaster assessment.Existing DOM generation methods can generate promising results but always need ground object filtered DEM generation before otho-rectification;this can consume much time and produce building facade contained results.To address this problem,a pixel-by-pixel digital differential rectification-based automatic DOM generation method is proposed in this paper.Firstly,3D point clouds with texture are generated by dense image matching based on an optical flow field for a stereo pair of images,respectively.Then,the grayscale of the digital differential rectification image is extracted directly from the point clouds element by element according to the nearest neighbor method for matched points.Subsequently,the elevation is repaired grid-by-grid using the multi-layer Locally Refined B-spline(LR-B)interpolation method with triangular mesh constraint for the point clouds void area,and the grayscale is obtained by the indirect scheme of digital differential rectification to generate the pixel-by-pixel digital differentially rectified image of a single image slice.Finally,a seamline network is automatically searched using a disparity map optimization algorithm,and DOM is smartly mosaicked.The qualitative and quantitative experimental results on three datasets were produced and evaluated,which confirmed the feasibility of the proposed method,and the DOM accuracy can reach 1 Ground Sample Distance(GSD)level.The comparison experiment with the state-of-the-art commercial softwares showed that the proposed method generated DOM has a better visual effect on building boundaries and roof completeness with comparable accuracy and computational efficiency.
文摘To verify the effectiveness of digital optical 3D image analyzer EvaSKIN in the objective and quantitative evaluation of wrinkles.A total of 115 subjects were recruited,the facial images of the subjects were collected by digital optical 3D image analyzer and manual camera,the changes of crow’s feet with age were analyzed.Pictures obtained by manual photography can be directly used for observation and preliminary grading of wrinkles.However,the requirements for evaluators are high,and the results are prone to errors,which will affect the accuracy of the evaluation.Therefore,skilled raters are needed.Compared with the manual photography method,the digital optical 3D image analyzer EvaSKIN can realize three-dimensional extraction of wrinkles,and obtain the change trend of crow’s feet with age.20~30 years old,wrinkles begin to appear slowly;wrinkles will increase rapidly at the age of 30~50;The length of 50~60 year old wrinkles is basically fixed,the wrinkles develop longitudewise,gradually widen and deepen,and the area,depth and volume increase is obvious,and the skin aging condition is intensified.the digital optical 3D image analyzer EvaSKIN realizes the 3D extraction of wrinkles,quantifies the circumference,area,average depth,maximum depth and volume of wrinkles,realizes the objective and quantitative evaluation of wrinkle state,is more accurate in the measurement of wrinkles,and provides a new instrument and method for the evaluation of wrinkles.it is a perfect and supplement to the traditional evaluation methods,and to a certain extent,it helps the research and development and evaluation institutions of cosmetics to obtain more abundant and three-dimensional data support.
基金supported in part by a grant from the U.S.National Institutes of Health,R01 EB012479(S.A.B.).
文摘Pre-operative X ray mammography and int raoperative X-ray specimen radiography are routinely used to identify breast cancer pathology.Recent advances in optical coherence tomography(OCT)have enabled its 1use for the intraoperative assessment of surgical margins during breast cancer surgery.While each modality offers distinct contrast of normal and pathological features,there is an essential need to correlate image based features between the two modalities to take adv antage of the diagnostic capabilities of each technique.We compare OCT to X-ray images of resected human breast tissue and correlate different tissue features between modalities for future use in real-tine intraoperative OCT imaging.X ray imaging(specimen radiography)is currently used during surgical breast cancer procedures to verify tumor margins,but cannot image tissue in situ.OCT has the potential to solve this problem by providing intrao-perative imaging of the resected specimen as well as the in situ tumor cavity.OCT and micro-CT(X-ray)images are automatically segmented using different computational approaches,and quantitatively compared to determine the ability of these algorithms to automat ically differentiate regions of adipose tissue from tumor.Furthermore,two-dimensional(2D)and three-dimensional(3D)results are compared.These correlations,combined with real-time intraoperative OCT,have the potential to identify possible regions of tumor within breast tissue which correlate to tumor regions identified previously on X-ray imaging(mammography or specimen radiography).
基金the Natural Science Foundation of Hubei Province(No.2017CFB685)Hubei University of Technology"Advanced Manufacturing Technology and Equipment"Collaborative Innovation Center Open Research Fund(Nos.038/1201501 and 038/1201803)the College-level Project of Hubei University of Technology(Nos.4201/01758,4201/01802,4201/01889,and 4128/21025)。
文摘In this paper,a lifted Haar transform(LHT)image compression optical chip has been researched to achieve rapid image compression.The chip comprises 32 same image compression optical circuits,and each circuit contains a 2×2 multimode interference(MMI)coupler and aπ/2 delay line phase shifter as the key components.The chip uses highly borosilicate glass as the substrate,Su8 negative photoresist as the core layer,and air as the cladding layer.Its horizontal and longitudinal dimensions are 8011μm×10000μm.Simulation results present that the designed optical circuit has a coupling ratio(CR)of 0:100 and an insertion loss(IL)of 0.001548 d B.Then the chip is fabricated by femtosecond laser and testing results illustrate that the chip has a CR of 6:94 and an IL of 0.518 d B.So,the prepared chip possesses good image compression performance.