Journal Articles
2,165 articles found.
1. Efficient Reconstruction of Spatial Features for Remote Sensing Image-Text Retrieval
Authors: ZHANG Weihang, CHEN Jialiang, ZHANG Wenkai, LI Xinming, GAO Xin, SUN Xian. Transactions of Nanjing University of Aeronautics and Astronautics, 2025(1): 101-111 (11 pages).
Remote sensing cross-modal image-text retrieval (RSCIR) can flexibly and subjectively retrieve remote sensing images using query text, and has recently received increasing attention from researchers. However, with the growing number of visual-language pre-training model parameters, direct transfer learning consumes a substantial amount of computational and storage resources. Moreover, recently proposed parameter-efficient transfer learning methods mainly focus on the reconstruction of channel features, ignoring the spatial features that are vital for modeling key entity relationships. To address these issues, we design an efficient transfer learning framework for RSCIR based on spatial feature efficient reconstruction (SPER). A concise and efficient spatial adapter is introduced to enhance the extraction of spatial relationships. The spatial adapter is able to spatially reconstruct the features in the backbone with few parameters while incorporating prior information from the channel dimension. We conduct quantitative and qualitative experiments on two commonly used RSCIR datasets. Compared with traditional methods, our approach achieves an improvement of 3%-11% in the sumR metric. Compared with methods that fine-tune all parameters, our proposed method trains less than 1% of the parameters while maintaining about 96% of the overall performance.
Keywords: remote sensing cross-modal image-text retrieval (RSCIR), spatial features, channel features, contrastive learning, parameter-efficient transfer learning
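The abstract above describes a lightweight spatial adapter that reconstructs backbone features spatially while reusing channel-wise prior information. The paper's exact design is not given here, so the following PyTorch sketch is only a plausible illustration of such an adapter (bottleneck projection, depthwise spatial convolution, residual connection); the module name and hyperparameters are assumptions, not the published SPER configuration.

```python
# Hypothetical sketch of a parameter-efficient spatial adapter (not the paper's exact SPER module).
import torch
import torch.nn as nn

class SpatialAdapter(nn.Module):
    """Bottleneck adapter that mixes spatial context with a depthwise convolution."""
    def __init__(self, channels: int, reduction: int = 8, kernel_size: int = 3):
        super().__init__()
        hidden = max(channels // reduction, 4)
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)            # channel prior -> low dim
        self.spatial = nn.Conv2d(hidden, hidden, kernel_size,
                                 padding=kernel_size // 2, groups=hidden)  # depthwise spatial mixing
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)              # back to backbone width
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual form: frozen backbone features plus a small learned spatial correction.
        return x + self.up(self.act(self.spatial(self.down(x))))

feats = torch.randn(2, 768, 14, 14)      # e.g. a frozen ViT/CNN feature map
adapter = SpatialAdapter(768)
out = adapter(feats)                      # same shape, few trainable parameters
print(sum(p.numel() for p in adapter.parameters()))
```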
2. Multi-scale feature fusion optical remote sensing target detection method (Cited by 1)
Authors: BAI Liang, DING Xuewen, LIU Ying, CHANG Limei. Optoelectronics Letters, 2025(4): 226-233 (8 pages).
An improved model based on you only look once version 8 (YOLOv8) is proposed to solve the problem of low detection accuracy caused by the diversity of object sizes in optical remote sensing images. Firstly, the feature pyramid network (FPN) structure of the original YOLOv8 model is replaced by the generalized-FPN (GFPN) structure from GiraffeDet to realize "cross-layer" and "cross-scale" adaptive feature fusion, enriching the semantic and spatial information of the feature map and improving the target detection ability of the model. Secondly, a pyramid pooling module, multi-atrous spatial pyramid pooling (MASPP), is designed using the ideas of atrous convolution and the feature pyramid structure to extract multi-scale features, so as to improve the model's ability to handle multi-scale objects. The experimental results show that the detection accuracy of the improved YOLOv8 model on the DIOR dataset is 92% and the mean average precision (mAP) is 87.9%, respectively 3.5% and 1.7% higher than those of the original model. This demonstrates that the detection and classification ability of the proposed model for multi-dimensional optical remote sensing targets has been improved.
Keywords: multi-scale feature fusion, optical remote sensing, feature map, target detection, semantic information, spatial information
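As a rough illustration of the multi-atrous spatial pyramid pooling (MASPP) idea mentioned above, parallel atrous convolutions with different dilation rates whose outputs are fused, here is a hedged PyTorch sketch; the dilation rates and the fusion choice are assumptions, not the paper's published configuration.

```python
# Hypothetical MASPP-style block: parallel atrous convolutions, concatenated and fused.
import torch
import torch.nn as nn

class MASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 3, 6, 12)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation keeps multi-scale context.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 40, 40)
print(MASPP(256, 128)(x).shape)  # torch.Size([1, 128, 40, 40])
```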
3. CG-FCLNet: Category-Guided Feature Collaborative Learning Network for Semantic Segmentation of Remote Sensing Images
Authors: Min Yao, Guangjie Hu, Yaozu Zhang. Computers, Materials & Continua, 2025(5): 2751-2771 (21 pages).
Semantic segmentation of remote sensing images is a critical research area in the field of remote sensing. Despite the success of Convolutional Neural Networks (CNNs), they often fail to capture inter-layer feature relationships and fully leverage contextual information, leading to the loss of important details. Additionally, due to significant intra-class variation and small inter-class differences in remote sensing images, CNNs may experience class confusion. To address these issues, we propose a novel Category-Guided Feature Collaborative Learning Network (CG-FCLNet), which enables fine-grained feature extraction and adaptive fusion. Specifically, we design a Feature Collaborative Learning Module (FCLM) to facilitate the tight interaction of multi-scale features. We also introduce a Scale-Aware Fusion Module (SAFM), which iteratively fuses features from different layers using a spatial attention mechanism, enabling deeper feature fusion. Furthermore, we design a Category-Guided Module (CGM) to extract category-aware information that guides feature fusion, ensuring that the fused features more accurately reflect the semantic information of each category, thereby improving detailed segmentation. The experimental results show that CG-FCLNet achieves a Mean Intersection over Union (mIoU) of 83.46%, an mF1 of 90.87%, and an Overall Accuracy (OA) of 91.34% on the Vaihingen dataset. On the Potsdam dataset, it achieves an mIoU of 86.54%, an mF1 of 92.65%, and an OA of 91.29%. These results highlight the superior performance of CG-FCLNet compared to existing state-of-the-art methods.
Keywords: semantic segmentation, remote sensing, feature context interaction, attention module, category-guided module
4. Hyperspectral remote sensing identification of marine oil emulsions based on the fusion of spatial and spectral features
Authors: Xinyue Huang, Yi Ma, Zongchen Jiang, Junfang Yang. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2024(3): 139-154 (16 pages).
Marine oil spill emulsions are difficult to recover, and the damage to the environment is not easy to eliminate. The use of remote sensing to accurately identify oil spill emulsions is highly important for the protection of marine environments. However, the spectrum of oil emulsions changes with water content. Hyperspectral remote sensing and deep learning can use spectral and spatial information to identify different types of oil emulsions. Nonetheless, hyperspectral data can also cause information redundancy, reducing classification accuracy and efficiency, and even overfitting in machine learning models. To address these problems, an oil emulsion deep-learning identification model with spatial-spectral feature fusion is established, and feature bands that can distinguish between crude oil, seawater, water-in-oil emulsion (WO), and oil-in-water emulsion (OW) are filtered based on a standard deviation threshold–mutual information method. Using oil spill airborne hyperspectral data, we conducted identification experiments on oil emulsions in different background waters and under different spatial and temporal conditions, analyzed the transferability of the model, and explored the effects of feature band selection and spectral resolution on the identification of oil emulsions. The results show the following. (1) The standard deviation–mutual information feature selection method is able to effectively extract feature bands that can distinguish between WO, OW, oil slick, and seawater. The number of bands was reduced from 224 to 134 after feature selection on the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data and from 126 to 100 on the S185 data. (2) With feature selection, the overall accuracy and Kappa of the identification results for the training area are 91.80% and 0.86, respectively, improved by 2.62% and 0.04, and the overall accuracy and Kappa of the identification results for the migration area are 86.53% and 0.80, respectively, improved by 3.45% and 0.05. (3) The oil emulsion identification model has a certain degree of transferability and can effectively identify oil spill emulsions in AVIRIS data at different times and locations, with an overall accuracy of more than 80%, a Kappa coefficient of more than 0.7, and an F1 score of 0.75 or more for each category. (4) As the spectral resolution decreases, the model yields different degrees of misclassification for areas with a mixed distribution of oil slick and seawater or a mixed distribution of WO and OW. Based on the above experimental results, we demonstrate that the oil emulsion identification model with spatial-spectral feature fusion achieves a high accuracy in identifying oil emulsions using airborne hyperspectral data and can be applied to images under different spatial and temporal conditions. Furthermore, we also elucidate the impact of factors such as spectral resolution and background water bodies on the identification process. These findings provide a new reference for future endeavors in automated marine oil spill detection.
Keywords: oil emulsions, identification, hyperspectral remote sensing, feature selection, convolutional neural network (CNN), spatial-temporal transferability
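The band-selection step described above (a standard deviation threshold followed by mutual information ranking) can be sketched with NumPy and scikit-learn as below; the threshold value and the number of retained bands are placeholders rather than the values used in the paper, and the random data only stands in for real AVIRIS/S185 samples.

```python
# Hedged sketch of standard-deviation + mutual-information band selection for hyperspectral pixels.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_bands(X, y, std_quantile=0.25, n_keep=100):
    """X: (n_pixels, n_bands) reflectance samples; y: class labels (WO, OW, oil slick, seawater...)."""
    stds = X.std(axis=0)
    keep = np.where(stds >= np.quantile(stds, std_quantile))[0]   # drop low-variance bands
    mi = mutual_info_classif(X[:, keep], y, random_state=0)       # relevance of surviving bands
    ranked = keep[np.argsort(mi)[::-1]]                           # most informative first
    return np.sort(ranked[:n_keep])

# Toy example with random data standing in for airborne hyperspectral samples.
rng = np.random.default_rng(0)
X = rng.random((500, 224))
y = rng.integers(0, 4, 500)
print(select_bands(X, y, n_keep=134))
```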
5. Application of Unmanned Aerial Vehicle Remote Sensing on Dangerous Rock Mass Identification and Deformation Analysis: Case Study of a High-Steep Slope in an Open Pit Mine
Authors: Wenjie Du, Qian Sheng, Xiaodong Fu, Jian Chen, Jingyu Kang, Xin Pang, Daochun Wan, Wei Yuan. Journal of Earth Science, 2025(2): 750-763 (14 pages).
Source identification and deformation analysis of disaster bodies are the main contents of high-steep slope risk assessment, and the establishment of a high-precision model and the quantification of the fine geometric features of the slope are the prerequisites for this work. In this study, based on UAV remote sensing technology for acquiring refined models and quantitative parameters, a semi-automatic dangerous rock identification method based on multi-source data is proposed. For periodic UAV-based deformation monitoring, the monitoring accuracy is defined according to the relative accuracy of multi-temporal point clouds. Taking a high-steep slope as the research object, a UAV equipped with special sensors was used to obtain multi-source and multi-temporal data, including high-precision DOM and multi-temporal 3D point clouds. The geometric features of the outcrop were extracted and superimposed with DOM images to carry out semi-automatic identification of dangerous rock mass, realizing a closed loop of identification and accuracy verification; change detection of multi-temporal 3D point clouds was conducted to capture slope deformation with centimeter accuracy. The results show that the different data sources in the semi-automatic dangerous rock identification method can complement each other to improve the efficiency and accuracy of identification, and that UAV-based multi-temporal monitoring can reveal the near real-time deformation state of slopes.
Keywords: high-steep slope, UAV remote sensing, dangerous rock identification, multi-temporal monitoring, multi-source data fusion, engineering geology
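The multi-temporal point cloud change detection mentioned above is often implemented as a cloud-to-cloud nearest-neighbour distance; the SciPy sketch below is a generic illustration of that idea under an assumed centimetre-level threshold, not the exact workflow used in the study.

```python
# Generic cloud-to-cloud deformation sketch: distance from each epoch-2 point to the epoch-1 cloud.
import numpy as np
from scipy.spatial import cKDTree

def c2c_deformation(cloud_t1, cloud_t2, threshold=0.05):
    """cloud_t1, cloud_t2: (N, 3) arrays of XYZ points; threshold in metres (~5 cm, assumed)."""
    tree = cKDTree(cloud_t1)
    dist, _ = tree.query(cloud_t2, k=1)        # nearest-neighbour distance per point
    moved = dist > threshold                   # flag points that appear displaced
    return dist, moved

t1 = np.random.rand(10000, 3) * 50
t2 = t1 + np.array([0.0, 0.0, 0.02])           # simulated 2 cm settlement
dist, moved = c2c_deformation(t1, t2)
print(dist.mean(), moved.mean())
```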
6. CE-CDNet: A Transformer-Based Channel Optimization Approach for Change Detection in Remote Sensing
Authors: Jia Liu, Hang Gu, Fangmei Liu, Hao Chen, Zuhe Li, Gang Xu, Qidong Liu, Wei Wang. Computers, Materials & Continua, 2025(4): 803-822 (20 pages).
In recent years, convolutional neural networks (CNN) and Transformer architectures have made significant progress in the field of remote sensing (RS) change detection (CD). Most of the existing methods directly stack multiple layers of Transformer blocks, which achieves considerable improvement in capturing variations, but at a rather high computational cost. We propose a channel-Efficient Change Detection Network (CE-CDNet) to address the problems of high computational cost and imbalanced detection accuracy in remote sensing building change detection. An adaptive multi-scale feature fusion module (CAMSF) and a lightweight Transformer decoder (LTD) are introduced to improve the change detection effect. The CAMSF module can adaptively fuse multi-scale features to improve the model's ability to detect building changes in complex scenes. In addition, the LTD module reduces computational costs and maintains high detection accuracy through an optimized self-attention mechanism and dimensionality reduction operations. Experimental results on three commonly used remote sensing building change detection datasets show that CE-CDNet can reduce a certain amount of computational overhead while maintaining detection accuracy comparable to existing mainstream models, showing good performance advantages.
Keywords: remote sensing, change detection, attention mechanism, channel optimization, multi-scale feature fusion
7. Remote sensing image semantic segmentation algorithm based on improved DeepLabv3+
Authors: SONG Xirui, GE Hongwei, LI Ting. Journal of Measurement Science and Instrumentation, 2025(2): 205-215 (11 pages).
The convolutional neural network (CNN) method based on DeepLabv3+ has some problems in the semantic segmentation of high-resolution remote sensing images, such as a fixed receptive field size for feature extraction, lack of semantic information, high decoder magnification, and insufficient detail retention ability. To address these problems, a hierarchical feature fusion network (HFFNet) is proposed. Firstly, a combination of transformer and CNN architectures is employed for feature extraction from images of varying resolutions, and the extracted features are processed independently. Subsequently, the features from the transformer and the CNN are fused under the guidance of features from different sources. This fusion process helps restore information more comprehensively during the decoding stage. Furthermore, a spatial channel attention module is designed in the final stage of decoding to refine features and reduce the semantic gap between shallow CNN features and deep decoder features. The experimental results show that HFFNet has superior performance on the UAVid, LoveDA, Potsdam, and Vaihingen datasets, and its evaluation indices are better than those of DeepLabv3+ and other competing methods, showing strong generalization ability.
Keywords: semantic segmentation, high-resolution remote sensing image, deep learning, transformer model, attention mechanism, feature fusion, encoder, decoder
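A spatial channel attention module of the kind mentioned in the abstract above (refining decoder features along both channel and spatial dimensions) might look like the CBAM-style sketch below; it is an assumption-laden illustration, not HFFNet's actual module.

```python
# Illustrative spatial-channel attention (CBAM-like), not the exact module from HFFNet.
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                                  # channel reweighting
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)     # average and max maps
        return x * self.spatial_conv(pooled)                         # spatial reweighting

print(SpatialChannelAttention(64)(torch.randn(1, 64, 32, 32)).shape)
```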
8. Wetland Vegetation Species Classification Using Optical and SAR Remote Sensing Images: A Case Study of Chongming Island, Shanghai, China
Authors: DENG Yaozi, SHI Runhe, ZHANG Chao, WANG Xiaoyang, LIU Chaoshun, GAO Wei. Chinese Geographical Science, 2025(3): 510-527 (18 pages).
Mudflat vegetation plays a crucial role in the ecological function of wetland environments, and obtaining its fine spatial distribution is of great significance for wetland protection and management. Remote sensing techniques can realize the rapid extraction of wetland vegetation over a large area. However, the imaging of optical sensors is easily restricted by weather conditions, and the backscattered information reflected by Synthetic Aperture Radar (SAR) images is easily disturbed by many factors. Although both data sources have been applied in wetland vegetation classification, there is a lack of comparative study on how the selection of data sources affects the classification effect. This study takes the vegetation of the tidal flat wetland of Chongming Island, Shanghai, China, in 2019 as the research subject. A total of 22 optical feature parameters and 11 SAR feature parameters were extracted from the optical data source (Sentinel-2) and the SAR data source (Sentinel-1), respectively. The performance of optical and SAR data and their feature parameters in wetland vegetation classification was quantitatively compared and analyzed using different feature combinations. Furthermore, by simulating the scenario of missing optical images, the impact of missing optical imagery on vegetation classification accuracy and the compensatory effect of integrating SAR data were revealed. Results show that: 1) under the same classification algorithm, the Overall Accuracy (OA) of the combined use of optical and SAR images was the highest, reaching 95.50%. The OA of using only optical images was slightly lower, while using only SAR images yielded the lowest accuracy, but still achieved 86.48%. 2) Compared to using the spectral reflectance of optical data and the backscattering coefficient of SAR data directly, the constructed optical and SAR feature parameters contributed to improving classification accuracy. The inclusion of optical feature parameters (vegetation index, spatial texture, and phenology features) and SAR feature parameters (SAR index and SAR texture features) in the classification algorithm resulted in OA improvements of 4.56% and 9.47%, respectively. SAR backscatter, SAR index, optical phenological features, and vegetation index were identified as the top-ranking important features. 3) When the optical data were missing continuously for six months, the OA dropped to a minimum of 41.56%. However, when combined with SAR data, the OA could be improved to 71.62%. This indicates that the incorporation of SAR features can effectively compensate for the loss of accuracy caused by missing optical imagery, especially in regions with long-term cloud cover.
Keywords: optical images, Synthetic Aperture Radar (SAR), multi-source remote sensing, vegetation classification, tidal flat wetland, Chongming Island, Shanghai, China
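The comparison workflow described above, stacking optical and SAR feature parameters per pixel and feeding the combined vector to a supervised classifier, can be prototyped as below. The classifier choice (random forest) and the random feature arrays are assumptions for illustration; the study's own feature set and algorithm may differ.

```python
# Hedged sketch: per-pixel fusion of optical (Sentinel-2) and SAR (Sentinel-1) features for classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
optical = rng.random((n, 22))                 # e.g. bands, vegetation indices, texture, phenology
sar = rng.random((n, 11))                     # e.g. backscatter, SAR indices, SAR texture
labels = rng.integers(0, 5, n)                # vegetation classes

X = np.hstack([optical, sar])                 # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("OA:", accuracy_score(y_te, clf.predict(X_te)))
```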
9. Remote Sensing Imagery for Multi-Stage Vehicle Detection and Classification via YOLOv9 and Deep Learner
Authors: Naif Al Mudawi, Muhammad Hanzla, Abdulwahab Alazeb, Mohammed Alshehri, Haifa F. Alhasson, Dina Abdulaziz AlHammadi, Ahmad Jalal. Computers, Materials & Continua, 2025(9): 4491-4509 (19 pages).
Unmanned Aerial Vehicles (UAVs) are increasingly employed in traffic surveillance, urban planning, and infrastructure monitoring due to their cost-effectiveness, flexibility, and high-resolution imaging. However, vehicle detection and classification in aerial imagery remain challenging due to scale variations from fluctuating UAV altitudes, frequent occlusions in dense traffic, and environmental noise, such as shadows and lighting inconsistencies. Traditional methods, including sliding-window searches and shallow learning techniques, struggle with computational inefficiency and robustness under dynamic conditions. To address these limitations, this study proposes a six-stage hierarchical framework integrating radiometric calibration, deep learning, and classical feature engineering. The workflow begins with radiometric calibration to normalize pixel intensities and mitigate sensor noise, followed by Conditional Random Field (CRF) segmentation to isolate vehicles. YOLOv9, equipped with a bi-directional feature pyramid network (BiFPN), ensures precise multi-scale object detection. Hybrid feature extraction employs Maximally Stable Extremal Regions (MSER) for stable contour detection, Binary Robust Independent Elementary Features (BRIEF) for texture encoding, and Affine-SIFT (ASIFT) for viewpoint invariance. Quadratic Discriminant Analysis (QDA) enhances feature discrimination, while a Probabilistic Neural Network (PNN) performs Bayesian probability-based classification. Tested on the Roundabout Aerial Imagery (15,474 images, 985K instances) and AU-AIR (32,823 instances, 7 classes) datasets, the model achieves state-of-the-art accuracy of 95.54% and 94.14%, respectively. Its superior performance in detecting small-scale vehicles and resolving occlusions highlights its potential for intelligent traffic systems. Future work will extend testing to nighttime and adverse weather conditions while optimizing real-time UAV inference.
Keywords: feature extraction, traffic analysis, unmanned aerial vehicles (UAV), you only look once version 9 (YOLOv9), machine learning, remote sensing for traffic monitoring, computer vision
10. Multi-source Remote Sensing Image Registration Based on Contourlet Transform and Multiple Feature Fusion (Cited by 6)
Authors: Huan Liu, Gen-Fu Xiao, Yun-Lan Tan, Chun-Juan Ouyang. International Journal of Automation and Computing (EI, CSCD), 2019(5): 575-588 (14 pages).
Image registration is an indispensable component in multi-source remote sensing image processing. In this paper, we put forward a remote sensing image registration method that includes an improved multi-scale and multi-direction Harris algorithm and a novel compound feature. Multi-scale circle Gaussian combined invariant moments and a multi-direction gray level co-occurrence matrix are extracted as features for image matching. The proposed algorithm is evaluated on numerous multi-source remote sensor images with noise and illumination changes. Extensive experimental studies prove that our proposed method is capable of obtaining a stable and even distribution of key points as well as robust and accurate correspondence matches. It is a promising scheme for multi-source remote sensing image registration.
Keywords: feature fusion, multi-scale circle Gaussian combined invariant moment, multi-direction gray level co-occurrence matrix, multi-source remote sensing image registration, contourlet transform
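One of the matching features named above is a multi-direction gray level co-occurrence matrix (GLCM). The NumPy sketch below shows a bare-bones GLCM computation over a few directions; the gray-level quantisation, offsets, and the single contrast statistic are placeholder choices, not those of the paper.

```python
# Minimal multi-direction GLCM sketch (assumes an 8-bit image quantised to `levels` gray levels).
import numpy as np

def glcm(img, dx, dy, levels=16):
    q = (img.astype(np.float64) / 256 * levels).astype(int)          # quantise gray values
    h, w = q.shape
    mat = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            mat[q[y, x], q[y + dy, x + dx]] += 1
    return mat / mat.sum()

def multi_direction_contrast(img):
    # Four common directions: 0, 45, 90, 135 degrees.
    feats = []
    for dx, dy in [(1, 0), (1, 1), (0, 1), (-1, 1)]:
        p = glcm(img, dx, dy)
        i, j = np.indices(p.shape)
        feats.append(np.sum(p * (i - j) ** 2))                        # contrast per direction
    return np.array(feats)

print(multi_direction_contrast(np.random.randint(0, 256, (64, 64))))
```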
11. Advancements in Remote Sensing Image Dehazing: Introducing URA-Net with Multi-Scale Dense Feature Fusion Clusters and Gated Jump Connection
Authors: Hongchi Liu, Xing Deng, Haijian Shao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024(9): 2397-2424 (28 pages).
The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle, profoundly impeding their effective utilization across various domains. Dehazing methodologies have emerged as pivotal components of image preprocessing, fostering an improvement in the quality of remote sensing imagery. This enhancement renders remote sensing data more indispensable, thereby enhancing the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze within remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed. This paradigmatic approach materializes as an end-to-end convolutional neural network distinguished by its utilization of multi-scale dense feature fusion clusters and gated jump connections. The essence of our methodology lies in local feature fusion within dense residual clusters, enabling the extraction of pertinent features from both preceding and current local data, depending on contextual demands. The intelligently orchestrated gated structures facilitate the propagation of these features to the decoder, resulting in superior outcomes in haze removal. Empirical validation through a plethora of experiments substantiates the efficacy of URA-Net, demonstrating its superior performance compared to existing methods when applied to established datasets for remote sensing image defogging. On the RICE-1 dataset, URA-Net achieves a Peak Signal-to-Noise Ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, the Unsupervised Single Image Dehazing (USID) by 8.0 dB, and the Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Particularly noteworthy, on the SateHaze1k dataset, URA-Net attains preeminence in overall performance, yielding defogged images characterized by consistent visual quality. This underscores the contribution of the research to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
Keywords: remote sensing image, image dehazing, deep learning, feature fusion
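The gated jump connection mentioned in the abstract, a learned gate deciding how much encoder detail flows to the decoder, could be sketched as follows; the gating formulation is a common pattern assumed here, not URA-Net's published design.

```python
# Assumed gated skip-connection sketch: encoder features are modulated before joining the decoder.
import torch
import torch.nn as nn

class GatedSkip(nn.Module):
    def __init__(self, enc_ch: int, dec_ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(enc_ch + dec_ch, enc_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat, dec_feat):
        # Gate values in (0, 1) decide how much detail each location passes through to the decoder.
        g = self.gate(torch.cat([enc_feat, dec_feat], dim=1))
        return torch.cat([enc_feat * g, dec_feat], dim=1)

enc = torch.randn(1, 64, 128, 128)
dec = torch.randn(1, 64, 128, 128)
print(GatedSkip(64, 64)(enc, dec).shape)   # torch.Size([1, 128, 128, 128])
```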
12. A Dense Feature Iterative Fusion Network for Extracting Building Contours from Remote Sensing Imagery
Authors: WU Jiangyan, WANG Tong. Journal of Donghua University (English Edition) (CAS), 2024(6): 654-661 (8 pages).
Extracting building contours from aerial images is a fundamental task in remote sensing. Current building extraction methods cannot accurately extract building contour information and make errors when extracting small-scale buildings. This paper introduces a novel dense feature iterative (DFI) fusion network, denoted as DFINet, for extracting building contours. The network uses a DFI decoder to fuse semantic information at different scales and learns building contour knowledge, producing the final features through iterative fusion. The dense feature fusion (DFF) module combines features at multiple scales, and the contour reconstruction (CR) module is employed to obtain the final predictions. Extensive experiments validate the effectiveness of DFINet on two different remote sensing datasets, the INRIA aerial image dataset and the Wuhan University (WHU) building dataset. On the INRIA aerial image dataset, our method achieves the highest intersection over union (IoU), overall accuracy (OA) and F1 scores compared to other state-of-the-art methods.
Keywords: remote sensing image, building contour extraction, feature iteration
13. Multi-Scale PIIFD for Registration of Multi-Source Remote Sensing Images (Cited by 2)
Authors: Chenzhong Gao, Wei Li. Journal of Beijing Institute of Technology (EI, CAS), 2021(2): 113-124 (12 pages).
This paper aims at providing multi-source remote sensing images registered in geometric space for image fusion. Focusing on the characteristics and differences of multi-source remote sensing images, a feature-based registration algorithm is implemented. The key technologies include image scale-space for implementing multi-scale properties, Harris corner detection for keypoint extraction, and the partial intensity invariant feature descriptor (PIIFD) for keypoint description. Eventually, a multi-scale Harris-PIIFD image registration algorithm framework is proposed. The experimental results of fifteen sets of representative real data show that the algorithm has excellent, stable performance in multi-source remote sensing image registration and can achieve accurate spatial alignment, which has strong practical application value and certain generalization ability.
Keywords: image registration, multi-source remote sensing, scale-space, Harris corner, partial intensity invariant feature descriptor (PIIFD)
14. The Identification and Geological Significance of Fault Buried in the Gasikule Salt Lake in China based on the Multi-source Remote Sensing Data (Cited by 2)
Authors: WANG Junhu, ZHAO Yingjun, WU Ding, LU Donghua. Acta Geologica Sinica (English Edition) (SCIE, CAS, CSCD), 2021(3): 996-1007 (12 pages).
The salinity of a salt lake is an important factor in evaluating whether it contains mineral resources, and faults buried beneath the salt lake can control the abundance of the salinity. Therefore, it is of great geological importance to identify faults buried in salt lakes. Taking the Gasikule Salt Lake in China as an example, this paper establishes a new method to identify the fault buried in the salt lake based on multi-source remote sensing data, including Landsat TM, SPOT-5 and ASTER data. The method includes the acquisition and selection of the multi-source remote sensing data, data preprocessing, lake waterfront extraction, spectrum extraction of brine with different salinity, salinity index construction, salinity separation, analysis of the salinity anomalies and identification of the fault buried in the salt lake, temperature inversion of brine, and fault verification. As a result, the study identified an important fault buried in the east of the Gasikule Salt Lake that controls the highest salinity anomaly. Because the level of salinity is positively correlated with mineral abundance, the result provides an important reference for identifying water bodies rich in mineral resources in the salt lake.
Keywords: multi-source remote sensing data, Gasikule Salt Lake, Mangya depression, China
15. Accuracy Analysis on the Automatic Registration of Multi-Source Remote Sensing Images Based on the Software of ERDAS Imagine (Cited by 1)
Authors: Debao Yuan, Ximin Cui, Yahui Qiu, Xueyun Gu, Li Zhang. Advances in Remote Sensing, 2013(2): 140-148 (9 pages).
The automatic registration of multi-source remote sensing images (RSI) is currently a research hotspot in remote sensing image preprocessing. A special automatic image registration module named Image Autosync has been embedded in the ERDAS IMAGINE software since version 9.0. The registration accuracy of the module is verified for remote sensing images obtained from different platforms and with different spatial resolutions. Four registration experiments are discussed in this article to analyze the accuracy differences for remote sensing data with different spatial resolutions. The factors causing the differences in registration accuracy are also analyzed.
Keywords: multi-source remote sensing images, automatic registration, Image Autosync, registration accuracy
16. Retrieval of urban land surface component temperature using multi-source remote-sensing data
Authors: ZHENG Wenwu (郑文武), ZENG Yongnian (曾永年). Journal of Central South University (SCIE, EI, CAS), 2013(9): 2489-2497 (9 pages).
The components of urban surface cover are diversified, and component temperature has greater physical significance and application value in studies of the urban thermal environment. Although the multi-angle retrieval algorithm for component temperature has gradually matured, its application in studies of the urban thermal environment is restricted by the difficulty of acquiring urban-scale multi-angle thermal infrared data. Therefore, based on existing multi-source, multi-band remote sensing data, access to appropriate urban-scale component temperatures is an urgent issue to be solved in current urban thermal infrared remote sensing studies. A retrieval algorithm for urban component temperature from multi-source, multi-band remote sensing data, on the basis of MODIS and Landsat TM images, was therefore proposed in this work and validated by an experiment on urban images of Changsha, China. The results show that: 1) the mean temperatures of the impervious surface component and the vegetation component are the maximum and minimum, respectively, which is in accordance with the distribution of actual surface temperature; 2) high-accuracy retrieval results are obtained for vegetation component temperature. Moreover, through a comparison between retrieval results and measured data, it is found that the retrieved temperature of the impervious surface component has the maximum deviation from the measured temperature, greater than 1 ℃, while the deviation in vegetation component temperature is relatively low at 0.5 ℃.
Keywords: component temperature, urban thermal environment, multi-source remote sensing, thermal infrared remote sensing
17. Red Tide Information Extraction Based on Multi-source Remote Sensing Data in Haizhou Bay
Authors: LU Xia, JIAO Ming-lian. Meteorological and Environmental Research (CAS), 2011(8): 78-81 (4 pages).
[Objective] The aim was to extract red tide information in Haizhou Bay on the basis of multi-source remote sensing data. [Method] Red tide in Haizhou Bay was studied based on multi-source remote sensing data, including IRS-P6 data from October 8, 2005, Landsat 5-TM data from May 20, 2006, MODIS 1B data from October 6, 2006 and HY-1B second-grade data from April 22, 2009, which were first preprocessed through geometric correction, atmospheric correction, image resizing and so on. At the same time, synchronous environmental monitoring data of red tide waters were acquired. Then, the band ratio method, the chlorophyll-a concentration method and the secondary filtering method were adopted to extract red tide information. [Result] On October 8, 2005, the area of red tide was about 20.0 km2 in Haizhou Bay. There was no red tide in Haizhou Bay on May 20, 2006. On October 6, 2006, large areas of red tide occurred in Haizhou Bay, with an area of 436.5 km2. On April 22, 2009, red tide was scattered in Haizhou Bay, and its area was about 10.8 km2. [Conclusion] The research provides technical ideas for the environmental monitoring department of Lianyungang to implement red tide forecasting and warning effectively.
Keywords: Haizhou Bay, red tide monitoring region, multi-source remote sensing data, secondary filtering method, band ratio method, chlorophyll-a concentration method, China
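The band ratio and secondary filtering steps named in the abstract can be prototyped roughly as below; the specific band pair, ratio threshold, and filter size are placeholders rather than the study's calibrated values.

```python
# Rough band-ratio + secondary (median) filtering sketch for flagging red tide pixels.
import numpy as np
from scipy.ndimage import median_filter

def red_tide_mask(nir, red, ratio_threshold=1.2, filter_size=3):
    """nir, red: 2D reflectance arrays from the preprocessed scene (placeholder band choice)."""
    ratio = nir / np.clip(red, 1e-6, None)          # simple band ratio
    mask = ratio > ratio_threshold                  # first-pass detection
    return median_filter(mask.astype(np.uint8), size=filter_size) > 0   # secondary filtering pass

nir = np.random.rand(200, 200)
red = np.random.rand(200, 200)
mask = red_tide_mask(nir, red)
print("flagged pixels:", int(mask.sum()))
```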
18. Remote Sensing Image Retrieval Based on 3D-Local Ternary Pattern (LTP) Features and Non-subsampled Shearlet Transform (NSST) Domain Statistical Features
Authors: Hilly Gohain Baruah, Vijay Kumar Nath, Deepika Hazarika. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022(4): 137-164 (28 pages).
With the increasing popularity of high-resolution remote sensing images, remote sensing image retrieval (RSIR) has always been a topic of major interest. A combination of global non-subsampled shearlet transform (NSST)-domain statistical features (NSSTds) and local three-dimensional local ternary pattern (3D-LTP) features is proposed for high-resolution remote sensing images. We model the NSST image coefficients of detail subbands using a 2-state Laplacian mixture (LM) distribution and estimate its three parameters using the Expectation-Maximization (EM) algorithm. We also calculate statistical parameters such as subband kurtosis and skewness from the detail subbands, along with the mean and standard deviation calculated from the approximation subband, and concatenate all of them with the 2-state LM parameters to describe the global features of the image. The various properties of NSST, such as multiscale analysis, localization and flexible directional sensitivity, make it a suitable choice to provide an effective approximation of an image. In order to extract dense local features, a new 3D-LTP is proposed where dimension reduction is performed via the selection of 'uniform' patterns. The 3D-LTP is calculated from the spatial RGB planes of the input image. The proposed inter-channel 3D-LTP not only exploits local texture information but also captures color information. Finally, a fused feature representation (NSSTds-3DLTP) is proposed using the new global (NSSTds) and local (3D-LTP) features to enhance the discriminativeness of the features. The retrieval performance of the proposed NSSTds-3DLTP features is tested on three challenging remote sensing image datasets, WHU-RS19, the Aerial Image Dataset (AID) and PatternNet, in terms of mean average precision (MAP), average normalized modified retrieval rank (ANMRR) and precision-recall (P-R) graphs. The experimental results are encouraging, and the NSSTds-3DLTP features lead to superior retrieval performance compared to many well-known existing descriptors such as Gabor RGB, Granulometry, local binary pattern (LBP), Fisher vector (FV), vector of locally aggregated descriptors (VLAD) and median robust extended local binary pattern (MRELBP). For the WHU-RS19 dataset, in terms of {MAP, ANMRR}, the NSSTds-3DLTP improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD and MRELBP descriptors by {41.93%, 20.87%}, {92.30%, 32.68%}, {86.14%, 31.97%}, {18.18%, 15.22%}, {8.96%, 19.60%} and {15.60%, 13.26%}, respectively. For AID, in terms of {MAP, ANMRR}, the NSSTds-3DLTP improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD and MRELBP descriptors by {152.60%, 22.06%}, {226.65%, 25.08%}, {185.03%, 23.33%}, {80.06%, 12.16%}, {50.58%, 10.49%} and {62.34%, 3.24%}, respectively. For PatternNet, the NSSTds-3DLTP respectively improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD and MRELBP descriptors by {32.79%, 10.34%}, {141.30%, 24.72%}, {17.47%, 10.34%}, {83.20%, 19.07%}, {21.56%, 3.60%} and {19.30%, 0.48%} in terms of {MAP, ANMRR}. The moderate dimensionality of the simple NSSTds-3DLTP allows the system to run in real time.
Keywords: remote sensing image retrieval, Laplacian mixture model, local ternary pattern, statistical modeling, KS test, texture, global features
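The global feature vector described above concatenates subband statistics (kurtosis, skewness, mean, standard deviation) with mixture-model parameters. The SciPy sketch below covers only the simple statistics and treats the subbands as given arrays; the NSST decomposition and the 2-state Laplacian mixture fit via EM are omitted and would need dedicated implementations.

```python
# Partial sketch: global statistical descriptors from already-computed subband coefficient arrays.
import numpy as np
from scipy.stats import kurtosis, skew

def subband_statistics(approx_subband, detail_subbands):
    """approx_subband: 2D array; detail_subbands: list of 2D arrays (e.g. from an NSST decomposition)."""
    feats = [approx_subband.mean(), approx_subband.std()]
    for sb in detail_subbands:
        coeffs = sb.ravel()
        feats.extend([kurtosis(coeffs), skew(coeffs)])
        # The paper additionally appends 2-state Laplacian mixture parameters estimated by EM here.
    return np.array(feats)

approx = np.random.randn(64, 64)
details = [np.random.randn(64, 64) for _ in range(6)]
print(subband_statistics(approx, details).shape)   # (2 + 2*6,) = (14,)
```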
19. Using Neural Networks to Combine Multiple Features in Remote Sensing Image Classification
Authors: YU Lu (俞璐), XIE Jun (谢钧), ZHANG Yanyan (张艳艳). Journal of Donghua University (English Edition) (EI, CAS), 2015(2): 225-228 (4 pages).
Remote sensing image classification is the basis of remote sensing image analysis and understanding. It aims to assign each pixel an object class label. To achieve satisfactory classification accuracy, a single feature is not enough, so multiple features are usually integrated in remote sensing image classification. In this paper, a neural-network-based method for combining multiple features is proposed. A single network is used to perform the task instead of an ensemble of neural networks, and a special network architecture is designed to fit the task. The method effectively avoids the problems of directly concatenating multiple features. Experiments on the Indian93 data set show that the method has clear advantages over direct conjunction of features in both recognition rate and training time.
Keywords: pixel, satisfactory, instead, label, Gabor, combine, hidden, ensemble, trained, histogram
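A single network with separate branches per feature type, merged in a shared hidden layer, is one way to realise the idea described above; this PyTorch sketch is an assumed architecture for illustration, not the exact one from the paper.

```python
# Assumed two-branch network that combines two feature sets inside one model.
import torch
import torch.nn as nn

class MultiFeatureNet(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, hidden: int, n_classes: int):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())   # e.g. spectral features
        self.branch_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())   # e.g. Gabor/texture features
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_classes))

    def forward(self, feat_a, feat_b):
        # Each feature type gets its own hidden representation before the two are merged.
        return self.head(torch.cat([self.branch_a(feat_a), self.branch_b(feat_b)], dim=1))

model = MultiFeatureNet(dim_a=200, dim_b=40, hidden=64, n_classes=16)
print(model(torch.randn(8, 200), torch.randn(8, 40)).shape)   # torch.Size([8, 16])
```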
20. CFM-UNet: A Joint CNN and Transformer Network via Cross Feature Modulation for Remote Sensing Images Segmentation (Cited by 8)
Authors: Min WANG, Peidong WANG. Journal of Geodesy and Geoinformation Science (CSCD), 2023(4): 40-47 (8 pages).
Semantic segmentation methods based on CNNs have made great progress, but there are still some shortcomings in their application to remote sensing image segmentation, such as a small receptive field that cannot effectively capture global context. In order to solve this problem, this paper proposes a hybrid model based on ResNet50 and the Swin Transformer to directly capture long-range dependencies, fusing features through a Cross Feature Modulation Module (CFMM). Experimental results on two publicly available datasets, Vaihingen and Potsdam, reach mIoU values of 70.27% and 76.63%, respectively. Thus, CFM-UNet can maintain high segmentation performance compared with other competitive networks.
Keywords: remote sensing images, semantic segmentation, Swin transformer, feature modulation module
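The Cross Feature Modulation Module (CFMM) referenced above fuses CNN and transformer features; a common way to do this is mutual sigmoid gating, sketched below as an assumption rather than the paper's exact formulation.

```python
# Assumed cross-feature modulation sketch: CNN and transformer branches gate each other.
import torch
import torch.nn as nn

class CrossFeatureModulation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate_cnn = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate_trans = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, cnn_feat, trans_feat):
        # Each branch is modulated by a gate computed from the other branch, then both are fused.
        cnn_mod = cnn_feat * self.gate_trans(trans_feat)
        trans_mod = trans_feat * self.gate_cnn(cnn_feat)
        return self.fuse(torch.cat([cnn_mod, trans_mod], dim=1))

cnn_f = torch.randn(1, 256, 32, 32)     # e.g. a ResNet50 stage output
trans_f = torch.randn(1, 256, 32, 32)   # e.g. a projected Swin Transformer stage output
print(CrossFeatureModulation(256)(cnn_f, trans_f).shape)
```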