Crop leaf area index (LAI) and biomass are two major biophysical parameters used to measure crop growth and health condition. Measuring LAI and biomass directly in field experiments is a destructive method. Therefore, we focused on the application of unmanned aerial vehicles (UAVs) in agriculture, which is a cost- and labor-efficient method. Hence, UAV-captured multispectral images were applied to monitor crop growth, identify plant biophysical conditions, and so on. In this study, we monitored soybean crops using UAV and field experiments. The experiment was conducted at the MAFES (Mississippi Agricultural and Forestry Experiment Station) Pontotoc Ridge-Flatwoods Branch Experiment Station. It followed a randomized block design with five cover crops: Cereal Rye, Vetch, Wheat, MC (mixed Mustard and Cereal Rye), and native vegetation. Cover crops were planted in the fall, and three fertilizer treatments were applied before planting the soybean: Synthetic Fertilizer, Poultry Litter, and none, in a full factorial combination. We monitored soybean reproductive phases at R3 (initial pod development), R5 (initial seed development), R6 (full seed development), and R7 (initial maturity) and used UAV multispectral remote sensing for soybean LAI and biomass estimation. The major goal of this study was to assess LAI and biomass estimation from UAV multispectral images in the reproductive stages, when the development of leaves and biomass had stabilized. We derived about fourteen vegetation indices (VIs) from the UAV multispectral images at these stages to estimate LAI and biomass. We modeled LAI and biomass based on these remotely sensed VIs and ground-truth measurements using machine learning methods, including linear regression, Random Forest (RF), and support vector regression (SVR). Thereafter, the models were applied to estimate LAI and biomass. According to the model results, LAI was better estimated at the R6 stage and biomass at the R3 stage. Compared to the other models, the RF models showed better estimation, i.e., an R² of about 0.58–0.68 with an RMSE (root mean square error) of 0.52–0.60 (m²/m²) for LAI, and an R² of about 0.44–0.64 with an RMSE of 21–26 (g dry weight/5 plants) for biomass estimation. We performed a leave-one-out cross-validation. Based on the cross-validated models with field experiments, we also found that the R6 stage was the best for estimating LAI and the R3 stage for estimating crop biomass. The cross-validated RF model showed estimation ability with an R² of about 0.25–0.44 and an RMSE of 0.65–0.85 (m²/m²) for LAI estimation, and an R² of about 0.1–0.31 and an RMSE of about 28–35 (g dry weight/5 plants) for crop biomass estimation. This result will be helpful to promote the use of non-destructive remote sensing methods to determine crop LAI and biomass status, which may bring more efficient crop production and management.
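As a minimal sketch of the regression setup described in this abstract, the snippet below fits a Random Forest on plot-level vegetation indices and scores it with leave-one-out cross-validation using scikit-learn. The synthetic X and y arrays stand in for the plot-level VI table and the measured LAI values, which are not available here.

    # Sketch: Random Forest regression of LAI on UAV-derived vegetation indices,
    # evaluated with leave-one-out cross-validation. Data are synthetic placeholders:
    # in practice, X holds the ~14 VIs per plot and y the field-measured LAI.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.random((40, 4))                              # 40 plots x 4 VIs (stand-in)
    y = 2.0 + 3.0 * X[:, 0] + rng.normal(0, 0.2, 40)     # dummy LAI response

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    y_pred = cross_val_predict(rf, X, y, cv=LeaveOneOut())

    print("R2  :", r2_score(y, y_pred))
    print("RMSE:", mean_squared_error(y, y_pred) ** 0.5)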
Camouflaged people are extremely expert at actively concealing themselves by effectively utilizing cover and the surrounding environment. Despite advancements in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time methods for accurately and efficiently detecting small camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflage detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting spatial-spectral target information. Besides, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. Through experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
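The band-subset idea (strong feature content, low inter-band correlation) can be illustrated with a simple greedy heuristic. The scoring below (seed with the most variable band, then add the band least correlated with the current subset) is an illustrative stand-in, not the paper's selection procedure, and the 25-band cube is a dummy snapshot.

    # Sketch: greedy selection of a low-correlation band subset, in the spirit of
    # MS-YOLO's 12-band input. Heuristic and data are placeholders.
    import numpy as np

    def select_bands(cube: np.ndarray, k: int = 12) -> list[int]:
        """cube: (bands, height, width) multispectral snapshot."""
        flat = cube.reshape(cube.shape[0], -1).astype(np.float64)
        corr = np.abs(np.corrcoef(flat))                 # |correlation| between bands
        selected = [int(np.argmax(flat.var(axis=1)))]    # seed: most variable band
        while len(selected) < k:
            remaining = [b for b in range(cube.shape[0]) if b not in selected]
            # add the band whose strongest correlation with the chosen set is smallest
            nxt = min(remaining, key=lambda b: corr[b, selected].max())
            selected.append(nxt)
        return selected

    cube = np.random.rand(25, 64, 64)                    # dummy 25-band snapshot
    print(select_bands(cube, k=12))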
An extreme ultraviolet solar corona multispectral imager allows direct observation of high-temperature coronal plasma, which is related to solar flares, coronal mass ejections and other significant coronal activities. This manuscript proposes a novel end-to-end computational design method for an extreme ultraviolet (EUV) solar corona multispectral imager operating at wavelengths near 100 nm, including a stray light suppression design and computational image recovery. To suppress the strong stray light from the solar disk, an outer opto-mechanical structure is designed to protect the imaging component of the system. Considering the low reflectivity (less than 70%) and strong scattering (roughness) of existing extreme ultraviolet optical elements, the imaging component comprises only a primary mirror and a curved grating. A Lyot aperture is used to further suppress any residual stray light. Finally, a deep learning computational imaging method is used to correct the individual multi-wavelength images from the original recorded multi-slit data. According to the results, the design achieves a far-field angular resolution below 7" and a spectral resolution below 0.05 nm. The field of view is ±3 R_☉ along the multi-slit moving direction, where R_☉ represents the radius of the solar disk. The ratio of the corona's stray light intensity to the solar center's irradiation intensity is less than 10⁻⁶ at the circle of 1.3 R_☉.
Feature extraction is the most critical step in the classification of multispectral images. The classification accuracy is mainly influenced by the feature sets that are selected to classify the image. In the past, handcrafted feature sets were used, which are not adaptive for different image domains. To overcome this, an evolutionary learning method is developed to automatically learn the spatial-spectral features for classification. A modified Firefly Algorithm (FA), which achieves maximum classification accuracy with a reduced feature set size, is proposed for feature selection. For extracting the most efficient features from the data set, we have used the 3-D discrete wavelet transform, which decomposes the multispectral image in all three dimensions. For selecting spatial and spectral features, we have studied three window-based approaches, namely the overlapping window (OW-3DFS), the non-overlapping window (NW-3DFS) and the adaptive window cube (AW-3DFS), as well as a pixel-based technique. A fivefold Multiclass Support Vector Machine (MSVM) is used for classification. Experiments conducted on the Madurai LISS IV multispectral image showed that the adaptive window approach increases the classification accuracy.
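The 3-D wavelet decomposition step described here can be sketched with PyWavelets, which provides an n-dimensional DWT. The cube below is a dummy image, and the windowing/selection strategies (OW/NW/AW-3DFS) and the firefly-based selection are not reproduced.

    # Sketch: single-level 3-D discrete wavelet transform of a multispectral cube,
    # the feature-extraction step named above. Requires PyWavelets (pywt).
    import numpy as np
    import pywt

    cube = np.random.rand(64, 64, 8)                 # (rows, cols, bands) dummy image
    coeffs = pywt.dwtn(cube, wavelet="db1")          # dict of 8 sub-bands: 'aaa', 'aad', ...

    for key, sub in coeffs.items():
        print(key, sub.shape)

    # One simple spatial-spectral descriptor: coefficient magnitudes of each sub-band
    features = np.stack([np.abs(sub) for sub in coeffs.values()], axis=-1)
    print("feature cube:", features.shape)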
Distributed source coding (DSC) is applied to interferential multispectral image compression owing to the strong correlation among the image frames. Many DSC systems in the literature use a feedback channel (FC) to control the rate at the decoder, which limits the application of DSC. Based on an analysis of the image data, a rate control approach is proposed to avoid the FC. Low-complexity motion compensation is applied first to estimate side information at the encoder. Using a polynomial fitting method, a new mathematical model is then derived to estimate the rate based on the correlation between the source and the side information. The experimental results show that our estimated rate is a good approximation to the actual rate required by the FC while incurring only a small bit-rate overhead. Our compression scheme performs comparably with the FC-based DSC system and outperforms JPEG2000 significantly.
Automatic reading procedures for colon cell biopsies allow a faster and more precise reading of microscopic biopsies. These procedures implement automatic image segmentation in order to classify cell types as cancerous or noncancerous. The authors have developed a new approach, derived from the "Snake" method, aiming to detect colon cancer cells while using a progressive division of the dimensions of the image to achieve rapid segmentation. The aim of the present paper was to classify different cancerous cell types based on nine morphological parameters and on a probabilistic neural network. Three types of cells were used to assess the efficiency of our classification models, including BH (Benign Hyperplasia), IN (Intraepithelial Neoplasia), which is a precursor state for cancer, and Ca (Carcinoma), which corresponds to abnormal tissue proliferation (cancer). Results showed that among the nine parameters used to classify cells, only three morphological parameters (area, Xor convex and solidity) were found to be effective in distinguishing the three types of cells. In addition, classification of unknown cells was possible using this method.
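The three retained descriptors can be computed from a segmented cell mask with scikit-image, as in the hedged sketch below. The mask is a dummy shape, the "convex excess" measure is only an assumed interpretation of the "Xor convex" parameter, and the probabilistic neural network classifier is not shown.

    # Sketch: extracting area, solidity and a convexity-style measure from a binary
    # cell mask with scikit-image (skimage). Mask and measure names are placeholders.
    import numpy as np
    from skimage import measure

    mask = np.zeros((100, 100), dtype=bool)          # dummy segmented cell
    mask[30:70, 35:65] = True
    mask[40:50, 30:35] = True                        # small protrusion

    for region in measure.regionprops(measure.label(mask)):
        convex_excess = region.convex_area - region.area   # assumed "Xor convex"-like measure
        print("area:", region.area,
              "solidity:", round(region.solidity, 3),
              "convex excess:", convex_excess)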
A novel fusion method for multispectral and panchromatic images based on the nonsubsampled contourlet transform (NSCT) and non-negative matrix factorization (NMF) is presented, the aim of which is to preserve both spectral and spatial information simultaneously in the fused image. NMF is a matrix factorization method which can extract local features by choosing a suitable dimension of the feature subspace. First, the multispectral image was represented in the intensity-hue-saturation (IHS) system. Then the I component and the panchromatic image were decomposed by NSCT. Next, we used NMF to learn the features of the low-frequency subbands of both the multispectral and panchromatic images, while the selection principle for the other coefficients was the absolute-maximum criterion. Finally, the new coefficients were reconstructed to get the fused image. Experiments are carried out and the results are compared with some other methods, which show that the new method performs better in improving the spatial resolution and preserving feature information than the other existing related methods.
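The NMF step on the low-frequency subbands can be illustrated with scikit-learn, under the stated assumption that a plain wavelet low-pass stands in for the NSCT decomposition, which has no standard Python implementation. The intensity component and panchromatic image below are dummy arrays.

    # Sketch: NMF learning a joint low-frequency feature from the I component and the
    # PAN image. The wavelet low-pass is a stand-in for NSCT; data are placeholders.
    import numpy as np
    import pywt
    from sklearn.decomposition import NMF

    def lowpass(img):
        approx, _ = pywt.dwt2(img, "db1")            # approximation (low-frequency) sub-band
        return approx

    i_comp = np.random.rand(128, 128)                # dummy intensity component
    pan = np.random.rand(128, 128)                   # dummy panchromatic image

    low_i, low_pan = lowpass(i_comp), lowpass(pan)
    V = np.stack([low_i.ravel(), low_pan.ravel()])   # 2 x N non-negative data matrix

    model = NMF(n_components=1, init="nndsvda", max_iter=500)
    W = model.fit_transform(V)                       # mixing weights per source
    fused_low = model.components_.reshape(low_i.shape)
    print(fused_low.shape)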
Facing the very high-resolution (VHR) image classification problem, a feature extraction and fusion framework is presented for VHR panchromatic and multispectral image classification based on deep learning techniques. The proposed approach combines spectral and spatial information based on the fusion of features extracted from panchromatic (PAN) and multispectral (MS) images using a sparse autoencoder and its deep version. There are three steps in the proposed method: the first is to extract spatial information from the PAN image, and the second is to describe spectral information of the MS image. Finally, in the third step, the features obtained from the PAN and MS images are concatenated directly as a simple fusion feature. The classification is performed using the support vector machine (SVM), and the experiments, carried out on two datasets of very high spatial resolution MS and PAN images from the WorldView-2 satellite, indicate that the classifier provides an efficient solution and demonstrate that the fusion of the features extracted by deep learning techniques from PAN and MS images performs better than when these techniques are used separately. In addition, this framework shows that deep learning models can extract and fuse spatial and spectral information effectively, and have huge potential to achieve higher accuracy for the classification of multispectral and panchromatic images.
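The "simple fusion" step, direct concatenation of PAN and MS features followed by an SVM, can be sketched as below. The random arrays stand in for the autoencoder features and class labels, which are not available here.

    # Sketch: concatenating per-pixel PAN and MS feature vectors and classifying them
    # with an SVM. Features and labels are random placeholders for the learned ones.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    n_pixels = 2000
    pan_feat = np.random.rand(n_pixels, 64)          # stand-in for PAN encoder features
    ms_feat = np.random.rand(n_pixels, 32)           # stand-in for MS encoder features
    labels = np.random.randint(0, 5, n_pixels)       # 5 hypothetical land-cover classes

    fused = np.concatenate([pan_feat, ms_feat], axis=1)     # direct feature concatenation
    X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)

    clf = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))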
Accurate recognition of maize seedlings at the plot scale under the disturbance of weeds is crucial for early seedling replenishment and weed removal. Currently, UAV-based maize seedling recognition depends primarily on RGB images. The main purpose of this study is to compare the performance of multispectral images and RGB images from an unmanned aerial vehicle (UAV) on maize seedling recognition using deep learning algorithms. Additionally, we aim to assess the disturbance of different weed coverage levels on the recognition of maize seedlings. Firstly, principal component analysis was used in multispectral image transformation. Secondly, by introducing the CARAFE sampling operator and a small target detection layer (SLAY), we extracted the contextual information of each pixel to retain weak features in the maize seedling image. Thirdly, the global attention mechanism (GAM) was employed to capture the features of maize seedlings using the dual attention mechanism of spatial and channel information. The CGS-YOLO algorithm was thus constructed. Finally, we compared the performance of the improved algorithm with a series of deep learning algorithms, including YOLO v3, v5, v6 and v8. The results show that after PCA transformation, the recognition mAP of maize seedlings reaches 82.6%, a 3.1-percentage-point improvement compared to RGB images. Compared with YOLOv8, YOLOv6, YOLOv5, and YOLOv3, the CGS-YOLO algorithm improves mAP by 3.8, 4.2, 4.5 and 6.6 percentage points, respectively. With increasing weed coverage, the recognition of maize seedlings gradually decreased. When weed coverage was more than 70%, the mAP difference became significant, but CGS-YOLO still maintained a recognition mAP of 72%. Therefore, in maize seedling recognition, UAV-based multispectral images perform better than RGB images. The application of the CGS-YOLO deep learning algorithm with UAV multispectral images proves beneficial for the recognition of maize seedlings under weed disturbance.
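The PCA transformation applied to the multispectral bands before detection can be sketched with scikit-learn, as below. The 5-band array is a dummy image, and the detector (CGS-YOLO) itself is not reproduced.

    # Sketch: projecting a UAV multispectral image onto its first principal components
    # to form a 3-channel input for a detector. Input data are placeholders.
    import numpy as np
    from sklearn.decomposition import PCA

    ms = np.random.rand(256, 256, 5)                 # dummy 5-band UAV image
    h, w, b = ms.shape

    pca = PCA(n_components=3)
    pcs = pca.fit_transform(ms.reshape(-1, b))       # pixels x 3 principal components
    pc_image = pcs.reshape(h, w, 3)                  # pseudo-RGB input for the detector

    print("explained variance:", np.round(pca.explained_variance_ratio_, 3))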
Accurate segmentation of camouflaged objects in aerial imagery is vital for improving the efficiency of UAV-based reconnaissance and rescue missions. However, camouflaged object segmentation is increasingly challenging due to advances in both camouflage materials and biological mimicry. Although multispectral-RGB based technology shows promise, conventional dual-aperture multispectral-RGB imaging systems are constrained by imprecise and time-consuming registration and fusion across different modalities, limiting their performance. Here, we propose the Reconstructed Multispectral-RGB Fusion Network (RMRF-Net), which reconstructs RGB images into multispectral ones, enabling efficient multimodal segmentation using only an RGB camera. Specifically, RMRF-Net employs a divergent-similarity feature correction strategy to minimize reconstruction errors and includes an efficient boundary-aware decoder to enhance object contours. Notably, we establish the first real-world aerial multispectral-RGB semantic segmentation dataset of camouflaged objects, including 11 object categories. Experimental results demonstrate that RMRF-Net outperforms existing methods, achieving 17.38 FPS on the NVIDIA Jetson AGX Orin with only a 0.96% drop in mIoU compared to the RTX 3090, showing its practical applicability in multimodal remote sensing.
Multispectral imaging systems combined with deep learning classification models can be cost-effective tools for the early detection of apple scab (Venturia inaequalis) disease in commercial orchards. Near-infrared (NIR) imagery can display apple scab symptoms earlier and at a greater severity than visible-spectrum (RGB) imagery. Early apple scab diagnosis based on NIR imagery may be automated using deep learning convolutional neural networks (CNNs). CNN models have previously been used to classify a range of apple diseases accurately but have primarily focused on late-stage rather than early-stage detection. This study fine-tunes CNN models to classify apple scab symptoms as they progress from the early to late stages of infection, using a novel multispectral (RGB-NIR) time series created especially for this purpose. This novel multispectral dataset was used in conjunction with a large Apple Disease Identification (ADID) dataset created from publicly available, pre-existing disease datasets. The ADID dataset contained 29,000 images of infection symptoms across six disease classes. Two CNN models, the lightweight MobileNetV2 and the heavyweight EfficientNetV2L, were fine-tuned and used to classify each disease class in a testing dataset, with performance assessed through metrics derived from confusion matrices. The models achieved scab-prediction accuracies of 97.13% and 97.57% for MobileNetV2 and EfficientNetV2L, respectively, on the secondary data, but only achieved accuracies of 74.12% and 78.91% when applied to the multispectral dataset in isolation. These lower performance scores were attributed to a higher proportion of false-positive scab predictions in the multispectral dataset. Time series analyses revealed that both models could classify apple scab infections earlier than the manual classification techniques, leading to more false-positive assessments, and could accurately distinguish between healthy and infected samples up to 7 days post-inoculation in NIR imagery.
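A minimal fine-tuning setup of the kind described here, using an ImageNet-pretrained MobileNetV2 in TensorFlow/Keras for six disease classes, might look as follows. The random tensors stand in for the ADID images and labels, and the hyperparameters are illustrative, not those of the study.

    # Sketch: fine-tuning MobileNetV2 for six disease classes. Data, epochs and head
    # design are placeholders; only the general transfer-learning pattern is shown.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False                                  # freeze backbone for initial fine-tuning

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
        base,
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(6, activation="softmax"),     # six disease classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    x = tf.random.uniform((64, 224, 224, 3), maxval=255.0)  # stand-in for ADID images
    y = tf.random.uniform((64,), maxval=6, dtype=tf.int32)  # stand-in for class labels
    model.fit(x, y, batch_size=16, epochs=1)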
Information on Land Use and Land Cover Maps (LULCM) is essential for environmental and socioeconomic applications. Such maps are generally derived from Multispectral Remote Sensing Images (MRSI) via classification. The classification process can be described as information flow from images to maps through a trained classifier. Characterizing the information flow is essential for understanding the classification mechanism, providing solutions that address such theoretical issues as "what is the maximum number of classes that can be classified from a given MRSI?" and "how much information gain can be obtained?" Consequently, two interesting questions naturally arise, i.e. (i) how can we characterize the information flow? and (ii) what is the mathematical form of the information flow? To answer these two questions, this study first hypothesizes that thermodynamic entropy is the appropriate measure of information for both MRSI and LULCM. This hypothesis is then supported by kinetic-theory-based experiments. Thereafter, upon such an entropy, a generalized Jarzynski equation is formulated to mathematically model the information flow, which contains such parameters as the thermodynamic entropy of MRSI, the thermodynamic entropy of LULCM, the weighted F1-score (classification accuracy), and the total number of classes. This generalized Jarzynski equation has been successfully validated by hypothesis-driven experiments in which 694 Sentinel-2 images were classified into 10 classes by four classical classifiers. This study provides a way of linking thermodynamic laws and concepts to the characterization and understanding of information flow in land cover classification, opening a new door for constructing domain knowledge.
Multispectral imaging plays a crucial role in simultaneously capturing detailed spatial and spectral information, which is fundamental for understanding complex phenomena across various domains. Traditional systems face significant challenges, such as large volume, static function, and limited wavelength selectivity. Here, we propose an innovative dynamic reflective multispectral imaging system based on a thermally responsive cholesteric liquid crystal planar lens. By employing advanced photoalignment technology, the phase distribution of a lens is imprinted onto the liquid crystal director. The reflection band is reversibly tuned from 450 nm to 750 nm by thermally controlling the helical pitch of the cholesteric liquid crystal, allowing images to be selectively captured in different colors. This capability increases imaging versatility, showing great potential in precision agriculture for assessing crop health, noninvasive diagnostics in healthcare, and advanced remote sensing for environmental monitoring.
Variograms play a crucial role in remote sensing applications and geostatistics. It is very important to estimate the variogram reliably from sufficient data. In this study, an analysis of variograms computed on various sample sizes of remotely sensed data was conducted. A 100×100-pixel subset was chosen randomly from an aerial multispectral image containing three wavebands: Green, Red and near-infrared (NIR). The Green, Red, NIR and Normalized Difference Vegetation Index (NDVI) datasets were imported into the R software for spatial analysis. Variograms of these four full image datasets and of sub-samples drawn with a simple random sampling method were investigated. In this case, half the size of the subset image data was enough to reliably estimate the variograms for the NIR and Red wavebands. To map the variation in NDVI within the weed field, the ground sampling interval should be smaller than 12 m. This information will be particularly important for Kriging and also gives a good guide for field sampling of the weed field in future studies.
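An empirical (isotropic) variogram of the kind discussed here can be computed directly from a random sample of pixels, as in the plain-NumPy sketch below; the NDVI subset is a dummy array and the R-based workflow of the study is not reproduced.

    # Sketch: empirical semivariogram of NDVI values from a simple random sample of
    # pixels, binned by lag distance in pixel units. Data are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    ndvi = rng.random((100, 100))                    # dummy 100 x 100 NDVI subset
    rows, cols = np.indices(ndvi.shape)
    idx = rng.choice(ndvi.size, size=500, replace=False)    # simple random sample
    xy = np.column_stack([rows.ravel()[idx], cols.ravel()[idx]])
    z = ndvi.ravel()[idx]

    # pairwise lag distances and squared value differences
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = (z[:, None] - z[None, :]) ** 2
    bins = np.arange(1, 31, 2)                       # lag classes in pixel units

    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d >= lo) & (d < hi)
        gamma = 0.5 * sq[mask].mean()                # semivariance for this lag class
        print(f"lag {lo}-{hi}: gamma = {gamma:.4f}")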
The multispectral imaging (MSI) technique is often used to capture images of the fundus by illuminating it with different wavelengths of light. However, these images are taken at different points in time, such that eyeball movements can cause misalignment between consecutive images. The multispectral image sequence reveals important information in the form of retinal and choroidal blood vessel maps, which can help ophthalmologists to analyze the morphology of these blood vessels in detail. This in turn can lead to a high diagnostic accuracy for several diseases. In this paper, we propose a novel semi-supervised end-to-end deep learning framework called "Adversarial Segmentation and Registration Nets" (ASRNet) for the simultaneous estimation of blood vessel segmentation and the registration of multispectral images via an adversarial learning process. ASRNet consists of two subnetworks: (i) a segmentation module S that fulfills the blood vessel segmentation task, and (ii) a registration module R that estimates the spatial correspondence of an image pair. Based on the segmentation-driven registration network, we train the segmentation network using a semi-supervised adversarial learning strategy. Our experimental results show that the proposed ASRNet can achieve state-of-the-art accuracy in segmentation and registration tasks performed on real MSI datasets.
Multispectral microscopy enables information enhancement in the study of specimens because of the large spectral band used in this technique. A low-cost multimode multispectral microscope was constructed using a camera and a set of quasi-monochromatic light-emitting diodes (LEDs), ranging from ultraviolet to near-infrared wavelengths, as illumination sources. However, the use of a large spectral band provided by non-monochromatic sources induces variation of the imager's focal plane due to chromatic aberration, which increases diffraction effects and blurs the images, causing shadows around them. This results in discrepancies between standard spectra and the spectra extracted with the microscope, so the instrument needs to be calibrated against a standard. We proceed with two types of image comparison to choose the reference wavelength for image acquisition at which the diffraction effect is most reduced. At each wavelength chosen as a reference, one image is well contrasted. First, we compare the thirteen well-contrasted images to identify the one presenting the most reduced shadow. Second, we determine the mean shadow size over the images from each set. The correction of the discrepancies required measurements on filters using a standard spectrometer and the microscope in transmission mode and reflection mode. To evaluate the capacity of our device to transmit information in the frequency domain, its modulation transfer function was evaluated. Multivariate analysis was used to test its capacity to recognize the properties of well-known samples. The wavelength 700 nm was chosen as the reference for image acquisition because at this wavelength the images are well contrasted. The measurements made on the filters suggested correction coefficients in transmission mode and reflection mode. The experimental instrument recognized the microspheres' properties and led to the extraction of the standard transmittance and reflectance spectra. Therefore, this microscope can be used as a conventional instrument.
Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually completed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit plane extractor is encoded by a first-order entropy coder. A low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
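The bit-plane extraction step named here is simple to illustrate in isolation, as in the sketch below; the entropy coder and the LDPC-based Slepian-Wolf coder are omitted, and the band data are placeholders.

    # Sketch: extracting bit planes from an image band, most significant plane first.
    # Only the extractor is shown; downstream coders are not reproduced.
    import numpy as np

    def bit_planes(img: np.ndarray, n_bits: int = 8) -> np.ndarray:
        """Return an (n_bits, H, W) array of 0/1 planes, MSB first."""
        planes = [(img >> b) & 1 for b in range(n_bits - 1, -1, -1)]
        return np.stack(planes).astype(np.uint8)

    band = np.random.randint(0, 256, (4, 4), dtype=np.uint16)   # dummy TDICCD band
    planes = bit_planes(band, n_bits=8)
    print(planes.shape)              # (8, 4, 4)
    print(planes[0])                 # most significant bit plane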
Remote sensing is of great importance for analyzing and studying the occurrence and development of various phenomena on Earth. Today it is possible to extract features specific to various fields of application with modern machine learning techniques, such as Convolutional Neural Networks (CNN), applied to MultiSpectral Images (MSI). This systematic review examines the application of 1D-, 2D-, 3D-, and 4D-CNNs to MSI, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. This review addresses three Research Questions (RQ). RQ1: "In which application domains have different CNN models been successfully applied for processing MSI data?"; RQ2: "What are the commonly utilized MSI datasets for training CNN models in the context of processing multispectral satellite imagery?"; and RQ3: "How does the degree of CNN complexity impact the performance of classification, regression or segmentation tasks for multispectral satellite imagery?". Publications were selected from three databases: Web of Science, IEEE Xplore, and Scopus. Based on the obtained results, the main conclusions are: (1) the majority of studies are applied in the field of agriculture and use Sentinel-2 satellite data; (2) publications implementing 1D-, 2D-, and 3D-CNNs mostly address classification, while for 4D-CNNs there is a limited number of studies, all of which use segmentation; (3) 2D-CNNs prevail in all application domains, but 3D-CNNs prove to be better for spatio-temporal pattern recognition, more specifically in agricultural and environmental monitoring applications. 1D-CNNs are less common than 2D-CNNs and 3D-CNNs, but they show good performance in spectral analysis tasks. 4D-CNNs are more complex and still underutilized, but they have potential for complex data analysis. More details about the metrics for each CNN type are provided in the text and supplementary files, offering a comprehensive overview of the evaluation metrics for each type of machine learning technique applied.
The use of unmanned aerial vehicles (UAV) for forest monitoring has grown significantly in recent years, providing information with high spatial resolution and temporal versatility. UAVs with multispectral sensors allow the use of indexes such as the normalized difference vegetation index (NDVI), which indicates the vigor, physiological stress and photosynthetic activity of vegetation. This study aimed to analyze the spectral responses and variations of NDVI in tree crowns, as well as their correlation with climatic factors over the course of one year. The study area encompassed a 1.6-ha site in Durango, Mexico, where Pinus cembroides, Pinus engelmannii, and Quercus grisea coexist. Multispectral images were acquired with a UAV, and information on meteorological variables was obtained from the NASA POWER database. An ANOVA explored possible differences in NDVI among the three species. Pearson correlation was performed to identify the linear relationship between NDVI and meteorological variables. Significant differences in NDVI values were found at the genus level (Pinus and Quercus), possibly related to the physiological features of the species and their phenology. Quercus grisea had the lowest NDVI values throughout the year, which may be attributed to its sensitivity to relative humidity and temperature. Although the use of a UAV with a multispectral sensor for NDVI monitoring allowed genus differentiation, more complex forest analyses should integrate hyperspectral and LiDAR sensors, as well as consider other vegetation indexes.
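The two computational steps described here, per-pixel NDVI and a Pearson correlation between a crown-level NDVI series and a meteorological variable, can be sketched with NumPy and SciPy. All input arrays below are placeholders for the UAV bands and the NASA POWER series.

    # Sketch: NDVI from red/NIR bands, then Pearson correlation of a monthly crown
    # NDVI series against a meteorological variable. Data are placeholders.
    import numpy as np
    from scipy.stats import pearsonr

    red = np.random.rand(200, 200).astype(np.float32)       # dummy red band
    nir = np.random.rand(200, 200).astype(np.float32)       # dummy NIR band
    ndvi = (nir - red) / (nir + red + 1e-9)                  # per-pixel NDVI

    crown_ndvi = np.random.rand(12)                          # hypothetical 12-month crown means
    rel_humidity = np.random.rand(12) * 100                  # hypothetical monthly humidity (%)

    r, p = pearsonr(crown_ndvi, rel_humidity)
    print(f"scene mean NDVI: {ndvi.mean():.3f}, r = {r:.2f}, p = {p:.3f}")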
Earth surveillance through aerial images allows more accurate identification and characterization of objects present on the surface from space and airborne platforms. The progression of deep learning and computer vision methods and the availability of heterogeneous multispectral remote sensing data make the field more fertile for research. With the evolution of optical sensors, aerial images are becoming more precise and larger, which leads to a new kind of problem for object detection algorithms. This paper proposes the "Sliding Region-based Convolutional Neural Network (SRCNN)," an extension of the Faster Region-based Convolutional Neural Network (RCNN) object detection framework that makes it independent of the image's spatial resolution and size. A sliding box strategy is used in the proposed model to segment the image during detection. The proposed framework outperforms the state-of-the-art Faster RCNN model when processing images with significantly different spatial resolution values. The SRCNN is also capable of detecting objects in images of any size.
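The sliding-box idea, running a detector over fixed-size tiles of an arbitrarily large image and mapping detections back to global coordinates, can be sketched as below. The detect_tile function is a placeholder for any detector (not the paper's network), and the tile size and stride are illustrative.

    # Sketch: sliding-window tiling for detection on large aerial images. The
    # detector is a stub; overlapping strides approximate border handling.
    import numpy as np

    def detect_tile(tile: np.ndarray) -> list[tuple[int, int, int, int]]:
        return []                                    # placeholder: boxes as (x, y, w, h)

    def sliding_detection(image: np.ndarray, box: int = 512, stride: int = 448):
        detections = []
        h, w = image.shape[:2]
        for y in range(0, max(h - box, 0) + 1, stride):
            for x in range(0, max(w - box, 0) + 1, stride):
                tile = image[y:y + box, x:x + box]
                for (bx, by, bw, bh) in detect_tile(tile):
                    detections.append((x + bx, y + by, bw, bh))   # back to global coords
        return detections

    image = np.zeros((2000, 3000, 3), dtype=np.uint8)            # dummy large aerial image
    print(len(sliding_detection(image)))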
基金This research was supported in part by a postdoctoral research fellow appointment to the Agricultural Research Service(ARS)Research Participation Program administered by the Oak Ridge Institute for Science and Education(ORISE)through an interagency agreement between the U.S.Department of Energy(DOE)and the U.S.Department of Agriculture(USDA).
文摘Crop leaf area index(LAI)and biomass are two major biophysical parameters to measure crop growth and health condition.Measuring LAI and biomass in field experiments is a destructive method.Therefore,we focused on the application of unmanned aerial vehicles(UAVs)in agriculture,which is a cost and labor-efficientmethod.Hence,UAV-captured multispectral images were applied to monitor crop growth,identify plant bio-physical conditions,and so on.In this study,we monitored soybean crops using UAV and field experiments.This experiment was conducted at theMAFES(Mississippi Agricultural and Forestry Experiment Station)Pontotoc Ridge-Flatwoods Branch Experiment Station.It followed a randomized block design with five cover crops:Cereal Rye,Vetch,Wheat,MC:mixed Mustard and Cereal Rye,and native vegetation.Planting was made in the fall,and three fertilizer treatments were applied:Synthetic Fertilizer,Poultry Litter,and none,applied before planting the soybean,in a full factorial combination.We monitored soybean reproductive phases at R3(initial pod development),R5(initial seed development),R6(full seed development),and R7(initial maturity)and used UAV multispectral remote sensing for soybean LAI and biomass estimations.The major goal of this study was to assess LAI and biomass estimations from UAV multispectral images in the reproductive stages when the development of leaves and biomass was stabilized.Wemade about fourteen vegetation indices(VIs)fromUAVmultispectral images at these stages to estimate LAI and biomass.Wemodeled LAI and biomass based on these remotely sensed VIs and ground-truth measurements usingmachine learning methods,including linear regression,Random Forest(RF),and support vector regression(SVR).Thereafter,the models were applied to estimate LAI and biomass.According to the model results,LAI was better estimated at the R6 stage and biomass at the R3 stage.Compared to the other models,the RF models showed better estimation,i.e.,an R^(2) of about 0.58–0.68 with an RMSE(rootmean square error)of 0.52–0.60(m^(2)/m^(2))for the LAI and about 0.44–0.64 for R^(2) and 21–26(g dry weight/5 plants)for RMSE of biomass estimation.We performed a leave-one-out cross-validation.Based on cross-validatedmodels with field experiments,we also found that the R6 stage was the best for estimating LAI,and the R3 stage for estimating crop biomass.The cross-validated RF model showed the estimation ability with an R^(2) about 0.25–0.44 and RMSE of 0.65–0.85(m^(2)/m^(2))for LAI estimation;and R^(2) about 0.1–0.31 and an RMSE of about 28–35(g dry weight/5 plants)for crop biomass estimation.This result will be helpful to promote the use of non-destructive remote sensing methods to determine the crop LAI and biomass status,which may bring more efficient crop production and management.
基金support by the National Natural Science Foundation of China (Grant No. 62005049)Natural Science Foundation of Fujian Province (Grant Nos. 2020J01451, 2022J05113)Education and Scientific Research Program for Young and Middleaged Teachers in Fujian Province (Grant No. JAT210035)。
文摘Camouflaged people are extremely expert in actively concealing themselves by effectively utilizing cover and the surrounding environment. Despite advancements in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time method for accurately detecting small-size and high-efficient camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflaged detection model, multispectral YOLO(MS-YOLO), which utilizes the SPD-Conv and Sim AM modules to effectively represent targets and suppress background interference by exploiting the spatial-spectral target information. Besides, the study constructs the first real-shot multispectral camouflaged people dataset(MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. Through experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support to improve the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel in battlefield.
基金This study is partially supported by the National Natural Science Foundation of China(NSFC)(62005120,62125504).
文摘An extreme ultraviolet solar corona multispectral imager can allow direct observation of high temperature coronal plasma,which is related to solar flares,coronal mass ejections and other significant coronal activities.This manuscript proposes a novel end-to-end computational design method for an extreme ultraviolet(EUV)solar corona multispectral imager operating at wavelengths near 100 nm,including a stray light suppression design and computational image recovery.To suppress the strong stray light from the solar disk,an outer opto-mechanical structure is designed to protect the imaging component of the system.Considering the low reflectivity(less than 70%)and strong-scattering(roughness)of existing extreme ultraviolet optical elements,the imaging component comprises only a primary mirror and a curved grating.A Lyot aperture is used to further suppress any residual stray light.Finally,a deep learning computational imaging method is used to correct the individual multi-wavelength images from the original recorded multi-slit data.In results and data,this can achieve a far-field angular resolution below 7",and spectral resolution below 0.05 nm.The field of view is±3 R_(☉)along the multi-slit moving direction,where R☉represents the radius of the solar disk.The ratio of the corona's stray light intensity to the solar center's irradiation intensity is less than 10-6 at the circle of 1.3 R_(☉).
文摘Feature extraction is the most critical step in classification of multispectral image.The classification accuracy is mainly influenced by the feature sets that are selected to classify the image.In the past,handcrafted feature sets are used which are not adaptive for different image domains.To overcome this,an evolu-tionary learning method is developed to automatically learn the spatial-spectral features for classification.A modified Firefly Algorithm(FA)which achieves maximum classification accuracy with reduced size of feature set is proposed to gain the interest of feature selection for this purpose.For extracting the most effi-cient features from the data set,we have used 3-D discrete wavelet transform which decompose the multispectral image in all three dimensions.For selecting spatial and spectral features we have studied three different approaches namely overlapping window(OW-3DFS),non-overlapping window(NW-3DFS)adaptive window cube(AW-3DFS)and Pixel based technique.Fivefold Multiclass Support Vector Machine(MSVM)is used for classification purpose.Experiments con-ducted on Madurai LISS IV multispectral image exploited that the adaptive win-dow approach is used to increase the classification accuracy.
基金Supported by the National Natural Science Foundation of China (No. 60532060 60672117), the Program for Changjiang Scholars and Innovative Research Team in University (PCS1TR).
文摘Distributed source coding (DSC) is applied to interferential multispectral image compression owing to strong correlation among the image frames. Many DSC systems in the literature use feedback channel (FC) to control rate at the decoder, which limits the application of DSC. Upon an analysis of the image data, a rate control approach is proposed to avoid FC. Low-complexity motion compensation is applied first to estimate side information at the encoder. Using a polynomial fitting method, a new mathematical model is then derived to estimate rate based on the correlation between the source and side information. The experimental results show that our estimated rate is a good approximation to the actual rate required by FC while incurring a little bit-rate overhead. Our compression scheme performs comparable with the FC based DSC system and outperforms JPEG2000 significantly.
文摘Automatic reading procedures in colon cells biopsies allow a faster and precise reading of microscopic biopsies. These procedures implement automatic image segmentation in order to classify cell types as cancerous or noncancerous. The authors have developed a new approach aiming to detect colon cancer cells derived from the "Snake" method but using a progressive division of the dimensions of the image to achieve rapid segmentation. The aim of the present paper was to classify different cancerous cell types based on nine morphological parameters and on probabilistic neural network. Three types of cells were used to assess the efficiency of our classifications models, including BH (Benign Hyperplasia), IN (Intraepithelial Neoplasia) that is a precursor state for cancer, and Ca (Carcinoma) that corresponds to abnormal tissue proliferation (cancer). Results showed that among the nine parameters used to classify cells, only three morphologic parameters (area, Xor convex and solidity) were found to be effective in distinguishing the three types of cells. In addition, classification of unknown cells was possible using this method.
基金Supported by the National Natural Science Foundation of China(60872065)
文摘A novel fusion method of multispectral image and panchromatic image based on nonsubsampled contourlet transform(NSCT) and non-negative matrix factorization(NMF) is presented,the aim of which is to preserve both spectral and spatial information simultaneously in fused image.NMF is a matrix factorization method,which can extract the local feature by choosing suitable dimension of the feature subspace.Firstly the multispectral image was represented in intensity hue saturation(IHS) system.Then the I component and panchromatic image were decomposed by NSCT.Next we used NMF to learn the feature of both multispectral and panchromatic images' low-frequency subbands,and the selection principle of the other coefficients was absolute maximum criterion.Finally the new coefficients were reconstructed to get the fused image.Experiments are carried out and the results are compared with some other methods,which show that the new method performs better in improving the spatial resolution and preserving the feature information than the other existing relative methods.
基金Supported by the National Natural Science Foundation of China(No.61472103,61772158,U.1711265)
文摘Facing the very high-resolution( VHR) image classification problem,a feature extraction and fusion framework is presented for VHR panchromatic and multispectral image classification based on deep learning techniques. The proposed approach combines spectral and spatial information based on the fusion of features extracted from panchromatic( PAN) and multispectral( MS) images using sparse autoencoder and its deep version. There are three steps in the proposed method,the first one is to extract spatial information of PAN image,and the second one is to describe spectral information of MS image. Finally,in the third step,the features obtained from PAN and MS images are concatenated directly as a simple fusion feature. The classification is performed using the support vector machine( SVM) and the experiments carried out on two datasets with very high spatial resolution. MS and PAN images from WorldView-2 satellite indicate that the classifier provides an efficient solution and demonstrate that the fusion of the features extracted by deep learning techniques from PAN and MS images performs better than that when these techniques are used separately. In addition,this framework shows that deep learning models can extract and fuse spatial and spectral information greatly,and have huge potential to achieve higher accuracy for classification of multispectral and panchromatic images.
基金supported by the Major Science and Technology Project of Guizhou Province([2024]004)Science and Technology Program Project of Guizhou Provincial Tobacco Company of CNTC(2024520000240087).
文摘Accurate recognition of maize seedlings on the plot scale under the disturbance of weeds is crucial for early seedling replenishment and weed removal.Currently,UAV-based maize seedling recognition depends primarily on RGB images.The main purpose of this study is to compare the performances of multispectral images and RGB images of unmanned aerial vehicle(UAV)on maize seeding recognition using deep learning algorithms.Additionally,we aim to assess the disturbance of different weed coverage on the recognition of maize seeding.Firstly,principal component analysis was used in multispectral image transformation.Secondly,by introducing the CARAFE sampling operator and a small target detection layer(SLAY),we extracted the contextual information of each pixel to retain weak features in the maize seedling image.Thirdly,the global attention mechanism(GAM)was employed to capture the features of maize seedlings using the dual attention mechanism of spatial and channel information.The CGS-YOLO algorithm was constructed and formed.Finally,we compared the performance of the improved algorithm with a series of deep learning algorithms,including YOLO v3,v5,v6 and v8.The results show that after PCA transformation,the recognition mAP of maize seedlings reaches 82.6%,representing 3.1 percentage points improvement compared to RGB images.Compared with YOLOv8,YOLOv6,YOLOv5,and YOLOv3,the CGS-YOLO algorithm has improved mAP by 3.8,4.2,4.5 and 6.6 percentage points,respectively.With the increase of weed coverage,the recognition effect of maize seedlings gradually decreased.When weed coverage was more than 70%,the mAP difference becomes significant,but CGS-YOLO still maintains a recognition mAP of 72%.Therefore,in maize seedings recognition,UAV-based multispectral images perform better than RGB images.The application of CGS-YOLO deep learning algorithm with UAV multi-spectral images proves beneficial in the recognition of maize seedlings under weed disturbance.
基金National Natural Science Foundation of China(Grant Nos.62005049 and 62072110)Natural Science Foundation of Fujian Province(Grant No.2020J01451).
文摘Accurate segmentation of camouflage objects in aerial imagery is vital for improving the efficiency of UAV-based reconnaissance and rescue missions.However,camouflage object segmentation is increasingly challenging due to advances in both camouflage materials and biological mimicry.Although multispectral-RGB based technology shows promise,conventional dual-aperture multispectral-RGB imaging systems are constrained by imprecise and time-consuming registration and fusion across different modalities,limiting their performance.Here,we propose the Reconstructed Multispectral-RGB Fusion Network(RMRF-Net),which reconstructs RGB images into multispectral ones,enabling efficient multimodal segmentation using only an RGB camera.Specifically,RMRF-Net employs a divergentsimilarity feature correction strategy to minimize reconstruction errors and includes an efficient boundary-aware decoder to enhance object contours.Notably,we establish the first real-world aerial multispectral-RGB semantic segmentation of camouflage objects dataset,including 11 object categories.Experimental results demonstrate that RMRF-Net outperforms existing methods,achieving 17.38 FPS on the NVIDIA Jetson AGX Orin,with only a 0.96%drop in mIoU compared to the RTX 3090,showing its practical applicability in multimodal remote sensing.
基金funded by the Biotechnology and Biological Sciences Research Council under grant BB/T508950/1 as part of the Waitrose Collaborative Training Partnership and conducted at Lancaster University.
文摘Multispectral imaging systems combined with deep learning classification models can be cost-effective tools for the early detection of apple scab(Venturia inaequalis)disease in commercial orchards.Near-infrared(NIR)imagery can display apple scab symptoms earlier and at a greater severity than visible-spectrum(RGB)imagery.Early apple scab diagnosis based on NIR imagery may be automated using deep learning convolutional neural networks(CNNs).CNN models have previously been used to classify a range of apple diseases accurately but have primarily focused on identifying late-stage rather than early-stage detection.This study fine-tunes CNN models to classify apple scab symptoms as they progress from the early to late stages of infection using a novel multispectral(RGB-NIR)time series created especially for this purpose.This novel multispectral dataset was used in conjunction with a large Apple Disease Identification(ADID)dataset created from publicly available,pre-existing disease datasets.This ADID dataset contained 29,000 images of infection symptoms across six disease classes.Two CNN models,the lightweight MobileNetV2 and heavyweight EfficientNetV2L,were fine-tuned and used to classify each disease class in a testing dataset,with performance assessed through metrics derived from confusion matrices.The models achieved scab-prediction accuracies of 97.13%and 97.57%for MobileNetV2 and EfficientNetV2L,respectively,on the secondary data but only achieved accuracies of 74.12%and 78.91%when applied to the multispectral dataset in isolation.These lower performance scores were attributed to a higher proportion of false-positive scab predictions in the multispectral dataset.Time series analyses revealed that both models could classify apple scab infections earlier than the manual classification techniques,leading to more false-positive assessments,and could accurately distinguish between healthy and infected samples up to 7 days post-inoculation in NIR imagery.
基金supported by the National Natural Science Foundation of China[grant number 41930104]by the Research Grants Council of Hong Kong[grant number PolyU 152219/18E].
文摘Information on Land Use and Land Cover Map(LULCM)is essential for environment and socioeconomic applications.Such maps are generally derived from Multispectral Remote Sensing Images(MRSI)via classification.The classification process can be described as information flow from images to maps through a trained classifier.Characterizing the information flow is essential for understanding the classification mechanism,providing solutions that address such theoretical issues as“what is the maximum number of classes that can be classified from a given MRSI?”and“how much information gain can be obtained?”Consequently,two interesting questions naturally arise,i.e.(i)How can we characterize the information flow?and(ii)What is the mathematical form of the information flow?To answer these two questions,this study first hypothesizes that thermodynamic entropy is the appropriate measure of information for both MRSI and LULCM.This hypothesis is then supported by kinetic-theory-based experiments.Thereafter,upon such an entropy,a generalized Jarzynski equation is formulated to mathematically model the information flow,which contains such parameters as thermodynamic entropy of MRSI,thermodynamic entropy of LULCM,weighted F1-score(classification accuracy),and total number of classes.This generalized Jarzynski equation has been successfully validated by hypothesis-driven experiments where 694 Sentinel-2 images are classified into 10 classes by four classical classifiers.This study provides a way for linking thermodynamic laws and concepts to the characterization and understanding of information flow in land cover classification,opening a new door for constructing domain knowledge.
基金supported by the National Key Research and Development Program of China(No.2022YFA1203700)the National Natural Science Foundation of China(NSFC)(Nos.62405129 and 62035008)+1 种基金the University Research Project of Guangzhou Education Bureau(No.202235053)the Natural Science Foundation of Jiangsu Province(No.BK20241197).
文摘Multispectral imaging plays a crucial role in simultaneously capturing detailed spatial and spectral information,which is fundamental for understanding complex phenomena across various domains.Traditional systems face significant challenges,such as large volume,static function,and limited wavelength selectivity.Here,we propose an innovative dynamic reflective multispectral imaging system via a thermally responsive cholesteric liquid crystal based planar lens.By employing advanced photoalignment technology,the phase distribution of a lens is imprinted to the liquid crystal director.The reflection band is reversibly tuned from 450 nm to 750 nm by thermally controlling the helical pitch of the cholesteric liquid crystal,allowing selectively capturing images in different colors.This capability increases imaging versatility,showing great potential in precision agriculture for assessing crop health,noninvasive diagnostics in healthcare,and advanced remote sensing for environmental monitoring.
文摘Variogram plays a crucial role in remote sensing application and geostatistics.It is very important to estimate variogram reliably from sufficient data.In this study,the analysis of variograms computed on various sample sizes of remotely sensed data was conducted.A 100×100-pixel subset was chosen randomly from an aerial multispectral image which contains three wavebands,Green,Red and near-infrared(NIR).Green,Red,NIR and Normalized Difference Vegetation Index(NDVI)datasets were imported into R software for spatial analysis.Variograms of these four full image datasets and sub-samples with simple random sampling method were investigated.In this case,half size of the subset image data was enough to reliably estimate the variograms for NIR and Red wavebands.To map the variation on NDVI within the weed field,ground sampling interval should be smaller than 12 m.The information will be particularly important for Kriging and also give a good guide of field sampling on the weed field in the future study.
基金supported by the National Natural Science Foundation of China(Grant Nos.81871508 and 61773246)the Major Program of Shandong Province Natural Science Foundation(Grant No.ZR2019ZD04 and ZR2018ZB0419)the Taishan Scholar Program of Shandong Province of China(Grant No.TSHW201502038).
文摘Multispectral imaging (MSI) technique is often used to capture imagesof the fundus by illuminating it with different wavelengths of light. However,these images are taken at different points in time such that eyeball movementscan cause misalignment between consecutive images. The multispectral imagesequence reveals important information in the form of retinal and choroidal bloodvessel maps, which can help ophthalmologists to analyze the morphology of theseblood vessels in detail. This in turn can lead to a high diagnostic accuracy of several diseases. In this paper, we propose a novel semi-supervised end-to-end deeplearning framework called “Adversarial Segmentation and Registration Nets”(ASRNet) for the simultaneous estimation of the blood vessel segmentation andthe registration of multispectral images via an adversarial learning process. ASRNet consists of two subnetworks: (i) A segmentation module S that fulfills theblood vessel segmentation task, and (ii) A registration module R that estimatesthe spatial correspondence of an image pair. Based on the segmention-drivenregistration network, we train the segmentation network using a semi-supervisedadversarial learning strategy. Our experimental results show that the proposedASRNet can achieve state-of-the-art accuracy in segmentation and registrationtasks performed with real MSI datasets.
Abstract: Multispectral microscopy enables information enhancement in the study of specimens because of the large spectral band used in this technique. A low-cost multimode multispectral microscope was constructed using a camera and a set of quasi-monochromatic light-emitting diodes (LEDs), ranging from ultraviolet to near-infrared wavelengths, as illumination sources. However, the large spectral band provided by non-monochromatic sources induces variation of the imager's focal plane due to chromatic aberration, which amplifies diffraction effects and blurs the images, producing shadows around them. This results in discrepancies between standard spectra and the spectra extracted with the microscope, so the instrument must be calibrated against a standard. We performed two types of image comparison to choose the reference wavelength for image acquisition at which the diffraction effect is minimized; at each candidate reference wavelength, one image is well contrasted. First, we compared the thirteen well-contrasted images to identify the one with the least shadow. Second, we determined the mean shadow size over the images in each set. Correcting the discrepancies required measurements on filters using a standard spectrometer and the microscope in transmission and reflection modes. To evaluate the capacity of the device to transmit information in the frequency domain, its modulation transfer function (MTF) was evaluated. Multivariate analysis was used to test its capacity to recognize the properties of a well-known sample. The wavelength of 700 nm was chosen as the reference for image acquisition because the images are well contrasted at this wavelength. The measurements made on the filters yielded correction coefficients for transmission and reflection modes. The experimental instrument recognized the microspheres' properties and allowed extraction of the standard transmittance and reflectance spectra. Therefore, this microscope can be used as a conventional instrument.
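The MTF evaluation mentioned above can be sketched from an edge image: differentiate the edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform. This is the generic edge-based procedure, assumed here purely for illustration; the paper does not specify which method was used.

```python
import numpy as np

def mtf_from_edge(edge_profile, dx):
    """MTF estimate from a 1-D edge spread function (ESF):
    LSF = d(ESF)/dx, MTF(f) = |FFT(LSF)| normalized to its zero-frequency value."""
    lsf = np.gradient(np.asarray(edge_profile, dtype=float), dx)
    lsf = lsf * np.hanning(lsf.size)           # window to limit noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=dx)    # spatial frequency, cycles per unit length
    return freqs, mtf / mtf[0]
```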
Funding: supported by the National High Technology Research and Development Program of China (Grant No. 863-2-5-1-13B).
Abstract: Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually completed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm, which is so far the most widely implemented in hardware; however, it cannot reduce spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit plane extractor is encoded by a first-order entropy coder, and a low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
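As a rough illustration of the front end of the scheme described above, the sketch below decomposes an inter-band residual into bit planes and measures the first-order entropy of each plane. The DWT stage of CCSDS-IDC and the LDPC-based Slepian-Wolf coder are deliberately left out, and the two synthetic bands are placeholders, not TDICCD data.

```python
import numpy as np

def bit_planes(img, n_bits=8):
    """Split an unsigned-integer image into binary bit planes, MSB first."""
    return [((img >> b) & 1) for b in range(n_bits - 1, -1, -1)]

def first_order_entropy(plane):
    """Empirical first-order entropy (bits/pixel) of a binary plane."""
    p1 = float(plane.mean())
    if p1 in (0.0, 1.0):
        return 0.0
    return -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))

# Two synthetic, strongly correlated bands standing in for adjacent TDICCD bands.
rng = np.random.default_rng(0)
band_a = rng.integers(0, 256, (64, 64))
band_b = np.clip(band_a + rng.integers(-8, 8, band_a.shape), 0, 255)

residual = band_b - band_a                                  # inter-band difference
shifted = (residual - residual.min()).astype(np.uint16)     # map to non-negative values
rates = [first_order_entropy(p) for p in bit_planes(shifted)]
```

The low per-plane entropies of the residual are what make the difference signal cheaper to code than either band alone.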
Funding: supported through the project Coastal Auto-purification Assessment Technology (CAAT), funded by the European Union from European Structural and Investment Funds 2014–2020 (No. KK.01.1.1.04.0064).
Abstract: Remote sensing is of great importance for analyzing and studying the occurrence and development of various phenomena on Earth. Today it is possible to extract features specific to various fields of application by applying modern machine learning techniques, such as Convolutional Neural Networks (CNN), to MultiSpectral Images (MSI). This systematic review examines the application of 1D-, 2D-, 3D-, and 4D-CNNs to MSI, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The review addresses three Research Questions (RQ). RQ1: "In which application domains have different CNN models been successfully applied for processing MSI data?"; RQ2: "What are the commonly utilized MSI datasets for training CNN models in the context of processing multispectral satellite imagery?"; and RQ3: "How does the degree of CNN complexity impact the performance of classification, regression, or segmentation tasks for multispectral satellite imagery?". Publications were selected from three databases: Web of Science, IEEE Xplore, and Scopus. Based on the obtained results, the main conclusions are: (1) the majority of studies are applied in the field of agriculture and use Sentinel-2 satellite data; (2) publications implementing 1D-, 2D-, and 3D-CNNs mostly address classification, while the limited number of 4D-CNN studies all address segmentation; (3) 2D-CNNs prevail in all application domains, but 3D-CNNs prove to be better for spatio-temporal pattern recognition, most notably in agricultural and environmental monitoring applications. 1D-CNNs are less common than 2D- and 3D-CNNs but show good performance in spectral analysis tasks, while 4D-CNNs are more complex and still underutilized, although they have potential for complex data analysis. More details about the metrics for each CNN type are provided in the text and supplementary files, offering a comprehensive overview of the evaluation metrics for each type of machine learning technique applied.
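The contrast between 2D- and 3D-CNNs drawn in conclusion (3) comes down to how the spectral/temporal axis enters the convolution. The PyTorch sketch below illustrates only this shape difference on a made-up Sentinel-2-like patch; the band count, dates, and layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical Sentinel-2-like patch: 10 bands, 4 acquisition dates, 64x64 pixels.
patch = torch.rand(1, 10, 4, 64, 64)           # (batch, bands, time, H, W)

# 2D-CNN: bands and dates are flattened into input channels, so the kernel
# only slides over the two spatial dimensions.
conv2d = nn.Conv2d(in_channels=10 * 4, out_channels=16, kernel_size=3, padding=1)
out2d = conv2d(patch.reshape(1, 40, 64, 64))   # (1, 16, 64, 64)

# 3D-CNN: the kernel also slides along the spectral/temporal axis, which is why
# reviews such as this one find it better suited to spatio-temporal patterns.
conv3d = nn.Conv3d(in_channels=10, out_channels=16,
                   kernel_size=(2, 3, 3), padding=(0, 1, 1))
out3d = conv3d(patch)                          # (1, 16, 3, 64, 64)
```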
Funding: supported by the National Council of Science and Technology of Mexico (CONACyT), which provided financial support through scholarships for postgraduate studies to J.L.G.S. (815176) and M.R.C. (507523).
Abstract: The use of unmanned aerial vehicles (UAV) for forest monitoring has grown significantly in recent years, providing information with high spatial resolution and temporal versatility. UAVs with multispectral sensors allow the use of indices such as the normalized difference vegetation index (NDVI), which reflects the vigor, physiological stress, and photosynthetic activity of vegetation. This study aimed to analyze the spectral responses and variations of NDVI in tree crowns, as well as their correlation with climatic factors over the course of one year. The study area encompassed a 1.6-ha site in Durango, Mexico, where Pinus cembroides, Pinus engelmannii, and Quercus grisea coexist. Multispectral images were acquired with a UAV, and information on meteorological variables was obtained from the NASA/POWER database. An ANOVA explored possible differences in NDVI among the three species, and a Pearson correlation was performed to identify the linear relationship between NDVI and the meteorological variables. Significant differences in NDVI values were found at the genus level (Pinus and Quercus), possibly related to the physiological features of the species and their phenology. Quercus grisea had the lowest NDVI values throughout the year, which may be attributed to its sensitivity to relative humidity and temperature. Although the use of a UAV with a multispectral sensor for NDVI monitoring allowed genus differentiation, more complex forest analyses should integrate hyperspectral and LiDAR sensors and consider other vegetation indices.
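As a minimal illustration of the two computations named above, the sketch below evaluates the per-pixel NDVI and a Pearson correlation between a crown-level NDVI series and a meteorological series. All numbers are invented placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

# Tiny hypothetical crown extract from the NIR and Red bands
nir_band = np.array([[0.45, 0.50], [0.48, 0.52]])
red_band = np.array([[0.10, 0.12], [0.11, 0.13]])
crown_mean_ndvi = ndvi(nir_band, red_band).mean()

# Invented monthly crown NDVI and relative-humidity series (12 months each)
monthly_ndvi = np.array([0.62, 0.60, 0.55, 0.50, 0.48, 0.52,
                         0.58, 0.65, 0.70, 0.68, 0.66, 0.63])
rel_humidity = np.array([55, 50, 40, 30, 25, 35, 60, 70, 75, 72, 65, 60])
r, p_value = pearsonr(monthly_ndvi, rel_humidity)
```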
Abstract: Earth surveillance through aerial images allows more accurate identification and characterization of objects present on the surface from space and airborne platforms. The progress of deep learning and computer vision methods and the availability of heterogeneous multispectral remote sensing data make the field ever more fertile for research. With the evolution of optical sensors, aerial images are becoming more precise and larger, which creates a new kind of problem for object detection algorithms. This paper proposes the "Sliding Region-based Convolutional Neural Network (SRCNN)," an extension of the Faster Region-based Convolutional Neural Network (RCNN) object detection framework that makes it independent of the image's spatial resolution and size. The proposed model uses a sliding box strategy to segment the image while detecting. The proposed framework outperforms the state-of-the-art Faster RCNN model while processing images with significantly different spatial resolution values, and the SRCNN is also capable of detecting objects in images of any size.
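The sliding box strategy can be pictured as tiling an arbitrarily large image into overlapping regions, running the detector on each tile, and shifting the resulting boxes back into full-image coordinates. The helper below is a generic illustration of that idea, not the authors' implementation; the tile size and overlap are arbitrary.

```python
def sliding_regions(width, height, tile=1024, overlap=128):
    """Yield (x0, y0, x1, y1) boxes that tile an arbitrarily large image.
    Each tile is passed to the detector; the overlap avoids cutting objects at tile
    borders, and detections are mapped back by adding (x0, y0) to their boxes."""
    step = tile - overlap
    for y0 in range(0, max(height - overlap, 1), step):
        for x0 in range(0, max(width - overlap, 1), step):
            x1, y1 = min(x0 + tile, width), min(y0 + tile, height)
            yield x0, y0, x1, y1

boxes = list(sliding_regions(4000, 3000))   # hypothetical 4000x3000 aerial image
```

Detections from neighboring tiles would then typically be merged with non-maximum suppression to remove duplicates in the overlap zones.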