Funding: funded by the Innovation and Development Special Project of the China Meteorological Administration (CXFZ2022J038, CXFZ2024J035), the Sichuan Science and Technology Program (No. 2023YFQ0072), the Key Laboratory of Smart Earth (No. KF2023YB03-07), and the Automatic Software Generation and Intelligent Service Key Laboratory of Sichuan Province (CUIT-SAG202210).
Abstract: Accurate cloud classification plays a crucial role in aviation safety, climate monitoring, and localized weather forecasting. Current research has focused on machine learning techniques, particularly deep learning-based models, for cloud type identification. However, traditional approaches such as convolutional neural networks (CNNs) have difficulty capturing global contextual information. In addition, they are computationally expensive, which restricts their use in resource-limited environments. To tackle these issues, we present the Cloud Vision Transformer (CloudViT), a lightweight model that integrates CNNs with Transformers. The integration enables an effective balance between local and global feature extraction. Specifically, CloudViT comprises two novel modules: Feature Extraction (E_Module) and Downsampling (D_Module). These modules significantly reduce the number of model parameters and the computational complexity while maintaining translation invariance and enhancing contextual comprehension. Overall, CloudViT has 0.93×10^6 parameters, more than a tenfold reduction compared with the state-of-the-art (SOTA) model CloudNet. Comprehensive evaluations on the HBMCD and SWIMCAT datasets showcase the outstanding performance of CloudViT, which achieves classification accuracies of 98.45% and 100%, respectively. Moreover, its efficiency and scalability make CloudViT an ideal candidate for deployment in mobile cloud observation systems, enabling real-time cloud image classification. The proposed hybrid architecture offers a promising approach for advancing ground-based cloud image classification, with significant potential for both optimizing performance and facilitating practical deployment.
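As a rough intuition for how lightweight modules cut parameter counts, a standard convolution can be compared with the depthwise-separable factorization often used in lightweight hybrid models. This is an illustrative sketch with assumed channel sizes, not the actual E_Module/D_Module design:

```python
# Back-of-the-envelope parameter count: dense 3x3 convolution vs. the
# depthwise-separable factorization common in lightweight models.
# Channel sizes below are assumed for illustration, not CloudViT's.

def standard_conv_params(k, c_in, c_out):
    """Weights of a dense k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

dense = standard_conv_params(3, 128, 128)   # 147456 weights
light = separable_conv_params(3, 128, 128)  # 17536 weights
reduction = dense / light                   # roughly 8.4x fewer parameters
```

The same kind of factorization, applied throughout a network, is how sub-million-parameter models become feasible.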
Funding: supported by the Special Fund for Basic Research on Scientific Instruments of the National Natural Science Foundation of China (Grant No. 4182780021), the Emeishan-Hanyuan Highway Program, and the Taihang Mountain Highway Program.
Abstract: This paper presents an automated method for discontinuity trace mapping using three-dimensional point clouds of rock mass surfaces. Specifically, the method consists of five steps: (1) detection of trace feature points by normal tensor voting theory, (2) contraction of trace feature points, (3) connection of trace feature points, (4) linearization of trace segments, and (5) connection of trace segments. A sensitivity analysis was then conducted to identify the optimal parameters of the proposed method. Three field cases, a natural rock mass outcrop and two excavated rock tunnel surfaces, were analyzed with the proposed method to evaluate its validity and efficiency. The results show that the proposed method is more efficient and accurate than the traditional trace mapping method, and the efficiency gain becomes more pronounced as the number of feature points increases.
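Step (4), linearizing a trace segment, amounts to fitting a straight line through a segment's feature points. A minimal 2D sketch by ordinary least squares follows; the paper's pipeline works on 3D point clouds, so this is illustrative only:

```python
# Least-squares line fit y = a*x + b through a set of (x, y) points,
# a toy stand-in for linearizing a trace segment.

def fit_line(points):
    """Return (slope, intercept) of the least-squares line."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    a = sxy / sxx
    return a, my - a * mx

# Nearly collinear sample points standing in for one trace segment.
slope, intercept = fit_line([(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)])
```

A real implementation would fit a 3D line (e.g. via principal component analysis of the segment points) rather than assuming a y = f(x) form.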
Funding: funded by the National Natural Science Foundation of China (Grant Nos. 42305150 and 42325501) and the China Postdoctoral Science Foundation (Grant No. 2023M741774).
Abstract: Cloud base height (CBH) is a crucial parameter for cloud radiative effect estimates, climate change simulations, and aviation guidance. However, because passive satellite radiometer observations carry limited information on cloud vertical structure, few operational satellite CBH products are currently available. This study presents a new method for retrieving CBH from satellite radiometers. The method first uses combined measurements from satellite radiometers and ground-based cloud radars to develop a lookup table (LUT) of effective cloud water content (ECWC), representing the vertically varying cloud water content. This LUT allows the conversion of cloud water path to cloud geometric thickness (CGT), enabling the estimation of CBH as the difference between cloud top height and CGT. A detailed comparison of CBH estimates from the state-of-the-art ECWC LUT against four ground-based millimeter-wave cloud radar (MMCR) measurements shows a mean bias (correlation coefficient) of 0.18±1.79 km (0.73), which is lower (higher) than the 0.23±2.11 km (0.67) derived from the combined measurements of satellite radiometers and satellite radar-lidar (i.e., CloudSat and CALIPSO). Furthermore, the percentage of CBH biases within 250 m increases by 5% to 10%, varying by location. This indicates that the CBH estimates from our algorithm are more consistent with ground-based MMCR measurements. The algorithm therefore shows great potential for further improving CBH retrievals as ground-based MMCRs are increasingly included in global surface meteorological observing networks, and the improved CBH retrievals will contribute to better cloud radiative effect estimates.
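The retrieval chain described above can be sketched in a few lines: an ECWC value converts cloud water path to geometric thickness, and CBH follows by subtraction from cloud top height. The tiny LUT and all numbers below are hypothetical; the real LUT is built from combined radiometer and radar measurements:

```python
# Sketch of CBH retrieval: CGT = CWP / ECWC, then CBH = CTH - CGT.
# The regime keys and ECWC values are made up for illustration.

ECWC_LUT_G_PER_M3 = {"thin": 0.05, "moderate": 0.15, "thick": 0.30}

def retrieve_cbh(cth_km, cwp_g_per_m2, regime):
    """Cloud base height (km) from cloud top height and cloud water path."""
    ecwc = ECWC_LUT_G_PER_M3[regime]           # g/m^3
    cgt_km = (cwp_g_per_m2 / ecwc) / 1000.0    # thickness in m, then km
    return cth_km - cgt_km

# A cloud topping at 8 km with 300 g/m^2 of water in the "moderate" regime.
cbh = retrieve_cbh(cth_km=8.0, cwp_g_per_m2=300.0, regime="moderate")
```

The paper's actual LUT varies ECWC with the vertical profile rather than using a single per-regime constant.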
Funding: supported by the National Natural Science Foundation of China (60872065), the Open Foundation of the Key Laboratory of Meteorological Disaster of the Ministry of Education at Nanjing University of Information Science & Technology (KLME1108), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Abstract: An objective and accurate cloud image classification model or method is a prerequisite for accurate weather monitoring and forecasting, which in turn safeguards aircraft takeoff, landing, and flight. Thresholding is a simple and effective approach to cloud classification that can realize automated ground-based cloud detection and cloudage observation. Existing segmentation methods based on a fixed threshold or a single threshold cannot achieve good segmentation, making accurate cloud detection and cloudage observation difficult. To address these problems, multi-thresholding methods for ground-based cloud images based on exponential entropy/exponential gray entropy and uniform searching particle swarm optimization (UPSO) are proposed. Exponential entropy and exponential gray entropy make up for the undefined and zero values in Shannon entropy. In addition, exponential gray entropy reflects the relative uniformity of gray levels within the cloud cluster and the background cluster. Cloud regions and background regions of different gray level ranges can be distinguished more precisely using the multi-thresholding strategy. To reduce the computational complexity of the original exhaustive algorithm for multi-threshold selection, the UPSO algorithm is adopted; it finds the optimal thresholds quickly and accurately, enabling real-time segmentation of ground-based cloud images. The experimental results show that, in comparison with existing ground-based cloud image segmentation methods and the multi-thresholding method based on maximum Shannon entropy, the proposed methods extract the boundary shape, texture, and detail features of clouds more clearly. Therefore, the accuracies of both cloudage detection and morphology classification for ground-based clouds are improved.
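The single-threshold core of the idea can be sketched as follows, using one common form of exponential entropy, H = Σ p_i·e^(1−p_i) (the paper's exact definition, its gray-entropy variant, and the multi-threshold extension may differ). The exhaustive search shown here is exactly what UPSO is meant to replace:

```python
# Single-threshold selection by maximizing exponential entropy of the
# two classes induced by a threshold t. Exhaustive search over t; the
# paper uses UPSO instead and extends this to multiple thresholds.
import math

def exp_entropy(probs):
    """Exponential entropy, one common form: sum p * e^(1 - p)."""
    return sum(p * math.exp(1.0 - p) for p in probs if p > 0)

def best_threshold(hist):
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 1, float("-inf")
    for t in range(1, len(hist)):
        p0, p1 = sum(p[:t]), sum(p[t:])
        if p0 == 0 or p1 == 0:
            continue
        # Entropy of the within-class (renormalized) distributions.
        h = (exp_entropy([x / p0 for x in p[:t]])
             + exp_entropy([x / p1 for x in p[t:]]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Toy bimodal histogram over 8 gray levels (cloud vs. background modes).
t = best_threshold([12, 40, 12, 2, 2, 12, 40, 12])
```

On this symmetric toy histogram the selected threshold falls in the valley between the two modes, which is the behavior the thresholding strategy relies on.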
Abstract: Cloud computing has created a paradigm shift in the way business applications are developed. Many business organizations use cloud infrastructures as platforms on which to deploy business applications, and increasing numbers of vendors are supplying the cloud marketplace with a wide range of cloud products. Different vendors offer cloud products in different formats, and the cost structures for consuming cloud products can be complex. Finding a suitable set of cloud products that meets an application's requirements and budget can therefore be a challenging task. In this paper, an ontology-based resource mapping mechanism is proposed. Domain-specific ontologies are used to specify high-level application requirements, which are translated into high-level infrastructure ontologies that can then be mapped onto low-level descriptions of cloud resources. Cost ontologies are proposed for cloud resources. An exemplar media transcoding and delivery service is studied to illustrate how high-level requirements can be modeled and mapped onto cloud resources within a budget constraint. The proposed ontologies provide an application-centric mechanism for specifying cloud requirements, which can then be used to search for suitable resources in a multi-provider cloud environment.
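Stripped of the ontology machinery, the mapping problem reduces to: filter offers that satisfy the application's requirements, then choose within budget. A toy sketch with an entirely hypothetical multi-provider catalogue:

```python
# Budget-constrained resource selection, flattened from the ontology-based
# mapping idea. Offer names, sizes, and prices below are invented.

OFFERS = [
    {"name": "A.small",  "vcpu": 2, "ram_gb": 4,  "usd_hr": 0.10},
    {"name": "B.medium", "vcpu": 4, "ram_gb": 8,  "usd_hr": 0.18},
    {"name": "C.large",  "vcpu": 8, "ram_gb": 32, "usd_hr": 0.55},
]

def select_offer(req, budget_usd_hr):
    """Cheapest offer meeting the requirements within budget, else None."""
    ok = [o for o in OFFERS
          if o["vcpu"] >= req["vcpu"]
          and o["ram_gb"] >= req["ram_gb"]
          and o["usd_hr"] <= budget_usd_hr]
    return min(ok, key=lambda o: o["usd_hr"]) if ok else None

choice = select_offer({"vcpu": 4, "ram_gb": 8}, budget_usd_hr=0.30)
```

The ontologies in the paper earn their keep where this flat-dictionary version breaks down: heterogeneous vendor formats and composite cost structures.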
Abstract: The long-awaited cloud computing concept is now a reality, owing to the transformation of computer generations. However, security challenges have become the biggest obstacle to the advancement of this emerging technology. This paper defines a well-established policy framework to generate security policies that are compliant with requirements and capabilities. Moreover, a federated policy management schema is introduced, based on the policy definition framework and a multi-level policy application, to create and manage virtual clusters with identical or common security levels. The proposed model consists of a well-established ontology of security mechanisms, a procedure that classifies nodes with common policies into virtual clusters, a policy engine that enhances the process of mapping requests to a specific node and its associated cluster, and a matchmaker engine that eliminates inessential mapping processes. The model has been evaluated on performance and security parameters to prove the efficiency and reliability of this multilayered engine in cloud computing environments during policy definition, application, and mapping procedures.
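The clustering step above, grouping nodes that expose a common policy set into one virtual cluster, can be sketched as a simple grouping by policy set. Node and policy names are made up for illustration:

```python
# Group nodes into virtual clusters keyed by their (identical) policy set.
# Node names and policy labels are hypothetical.

def build_virtual_clusters(node_policies):
    """Map each distinct security-policy set to the nodes sharing it."""
    clusters = {}
    for node, policies in node_policies.items():
        clusters.setdefault(frozenset(policies), []).append(node)
    return clusters

clusters = build_virtual_clusters({
    "node1": {"tls", "audit"},
    "node2": {"tls", "audit"},
    "node3": {"tls"},
})
```

With clusters precomputed, the policy engine can map a request straight to a cluster instead of testing every node, which is the "inessential mapping" the matchmaker engine eliminates.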
Funding: funded by the Natural Science Foundation Committee, China (41364001, 41371435).
Abstract: The degree of spatial similarity plays an important role in map generalization, yet there has been no quantitative research into it. To fill this gap, this study first defines map scale change and the spatial similarity degree/relation in multi-scale map spaces, and then proposes a model for calculating the degree of spatial similarity between a point cloud at one scale and its generalized counterpart at another scale. After validation, the new model yields 16 points with map scale change as the x coordinate and the degree of spatial similarity as the y coordinate. Finally, by curve fitting, the model yields an empirical formula that can calculate the degree of spatial similarity using map scale change as the sole independent variable, and vice versa. This formula can be used to automate algorithms for point feature generalization and to determine when to terminate them during generalization.
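The final curve-fitting step can be sketched as recovering an empirical formula from (scale change, similarity) samples. A power law y = a·x^b is assumed here purely for illustration, fitted by linear regression in log-log space on synthetic data; the paper fits its own functional form to its 16 validated points:

```python
# Fit y = a * x^b to (x, y) samples via least squares in log-log space.
# The sample data are synthetic, not the paper's 16 validated points.
import math

def fit_power_law(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

xs = [1, 2, 4, 8, 16]
# Similarity decays as scale change grows; data generated from y = 2*x^-0.5.
a, b = fit_power_law(xs, [2 * x ** -0.5 for x in xs])
```

Once a and b are known, the formula is invertible, which is what makes the "and vice versa" direction (scale change from similarity) possible.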
Funding: supported by the National Natural Science Foundation of China (61720106012 and 61403215), the Foundation of the State Key Laboratory of Robotics (2006-003), and the Fundamental Research Funds for the Central Universities.
Abstract: The autonomous exploration and mapping of an unknown environment is useful in a wide range of applications and thus holds great significance. Existing methods mostly use range sensors to generate two-dimensional (2D) grid maps. Red/green/blue-depth (RGB-D) sensors provide both color and depth information on the environment, thereby enabling the generation of a three-dimensional (3D) point cloud map that is intuitive for human perception. In this paper, we present a systematic approach with dual RGB-D sensors to achieve the autonomous exploration and mapping of an unknown indoor environment. With the synchronized and processed RGB-D data, location points were generated, and a 3D point cloud map and a 2D grid map were incrementally built. Next, the exploration was modeled as a partially observable Markov decision process. Partial map simulation and global frontier search methods were combined for autonomous exploration, and dynamic action constraints were utilized in motion control. In this way, local optima can be avoided and exploration efficacy ensured. Experiments with single connected and multi-branched regions demonstrated the high robustness, efficiency, and superiority of the developed system and methods.
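One ingredient of the global frontier search is detecting frontier cells on the 2D grid map: free cells bordering unexplored space. A minimal sketch, using the common convention of 0 = free, 1 = occupied, −1 = unknown (the paper's own encoding may differ):

```python
# Frontier detection on a 2D occupancy grid: a frontier is a free cell (0)
# with at least one unknown (-1) 4-neighbour.

def find_frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue  # only free cells can be frontiers
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.add((r, c))
                    break
    return frontiers

frontiers = find_frontiers([[0, 0, -1],
                            [0, 1, -1],
                            [0, 0,  0]])
```

The exploration policy then scores these frontiers (e.g. by expected information gain under the partial map simulation) and drives toward the best one.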
Funding: supported by the National High Technology Research and Development Program of China (863 Program) (2015AA016006) and the National Natural Science Foundation of China (60903220).
Abstract: The large scale and distributed nature of cloud computing storage have become major challenges for file extraction in cloud forensics. Current disk forensic methods do not adapt well to cloud computing, and forensic research on distributed file systems is inadequate. To address these problems, this paper uses the Hadoop distributed file system (HDFS) as a case study and proposes a forensic method for efficient file extraction based on three-level (3L) mapping. First, HDFS is analyzed from the overall architecture down to the local file system. Second, the 3L mapping of an HDFS file, from the HDFS namespace to data blocks on the local file system, is established, and a recovery method for deleted files based on 3L mapping is presented. Third, a multi-node Hadoop framework on the Xen virtualization platform is set up to test the performance of the method. The results indicate that the proposed method can efficiently locate large files stored across data nodes, make selective images of disk data, and achieve a high recovery rate for deleted files.
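The 3L mapping can be pictured as two lookups chained together: the namespace maps an HDFS path to block IDs, and a block map locates each block on a data node's local file system. A toy sketch with entirely hypothetical paths and identifiers:

```python
# Toy three-level (3L) mapping: HDFS path -> block IDs -> (data node,
# local block file). All names and paths below are invented.

NAMESPACE = {"/data/evidence.log": ["blk_1001", "blk_1002"]}
BLOCK_MAP = {
    "blk_1001": ("datanode1", "/hdfs/current/blk_1001"),
    "blk_1002": ("datanode2", "/hdfs/current/blk_1002"),
}

def resolve(hdfs_path):
    """Resolve an HDFS file to [(block id, data node, local path), ...]."""
    return [(b, *BLOCK_MAP[b]) for b in NAMESPACE[hdfs_path]]

locations = resolve("/data/evidence.log")
```

This chained lookup is what lets an examiner image only the disks that actually hold a target file's blocks rather than every node in the cluster.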
Funding: sponsored by the National Natural Science Foundation of China (Grant Nos. 41790463 and 41674058).
Abstract: The Western Yunnan Earthquake Prediction Test Site, set up jointly by the China Earthquake Administration, the National Science Foundation Commission of America, and the United States Geological Survey, has played an important role in the development of early earthquake research in China. Due to various objective reasons, most of the prediction targets at the test site have not been achieved, and its development has been hindered. In recent years, the site has been reconsidered and renamed the "Earthquake Science Experimental Site". Combined with the current development of seismology and the practical needs of disaster prevention and mitigation, we propose adding the "Underground Cloud Map" as a new direction for the experimental site. Using highly repeatable, environmentally friendly, and safe airgun sources, we can send constant seismic signals, realizing continuous monitoring of subsurface velocity changes. Utilizing the high-resolution 3-D crustal structure from ambient noise tomography, we can obtain 4-D (3-D space + 1-D time) images of subsurface structures, which we term the "Underground Cloud Map". The "Underground Cloud Map" can reflect underground velocity and stress changes, providing new means for nationwide earthquake monitoring and forecasting and promoting the conversion of experience-based earthquake prediction to physics-based prediction.
Abstract: Visual simultaneous localization and mapping (SLAM) is a key technology for mobile robots to localize themselves and build maps of their environment. Although SLAM can accurately reconstruct environmental geometry, it struggles to provide the semantic understanding robots need to execute complex tasks. Building information models (BIM) contain rich building information but differ significantly from the robot operating system (ROS) in data format and representation; existing studies mostly perform the conversion manually, which is inefficient and hard to scale, and indoor environments are not static, which affects robot navigation decisions. This paper therefore proposes a method for constructing and dynamically updating ROS indoor semantic maps that integrates BIM data. An automatic converter from industry foundation classes (IFC) to the unified robot description format (URDF) is developed to automate modeling from BIM to the robot simulation environment, and YOLOv8 is fused with the random sample consensus (RANSAC) algorithm to establish a vision-driven mechanism for dynamically updating the semantic map. The results show that the restoration accuracy of static building elements exceeds 98% and the dynamic object recognition precision exceeds 0.9, significantly improving the automation, knowledge richness, and environmental adaptability of the semantic map.
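The IFC-to-URDF direction can be sketched at its smallest scale: emitting a URDF `<link>` with box geometry from a BIM-like element description. The element dictionary below is a stand-in; a real converter would parse IFC entities (e.g. IfcWall) and also emit joints, origins, and materials:

```python
# Emit a minimal URDF <link> element from a BIM-like element description.
# The input dict is hypothetical; real input would come from parsed IFC.
import xml.etree.ElementTree as ET

def element_to_urdf_link(elem):
    """Build <link><visual><geometry><box .../></geometry></visual></link>."""
    link = ET.Element("link", name=elem["name"])
    visual = ET.SubElement(link, "visual")
    geometry = ET.SubElement(visual, "geometry")
    ET.SubElement(geometry, "box",
                  size="{} {} {}".format(*elem["size_xyz_m"]))
    return ET.tostring(link, encoding="unicode")

urdf = element_to_urdf_link({"name": "wall_01", "size_xyz_m": (4.0, 0.2, 2.8)})
```

Generating such links for every static building element, then assembling them under a URDF `<robot>` root, is the automated-modeling step the converter performs.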
Abstract: Mainstream map construction systems are limited in generating dense maps by low localization accuracy and large reprojection errors. In dynamic scenes in particular, real-time performance and high map accuracy cannot coexist, and the back-and-forth movement of objects brings additional difficulty to subsequent improvement of map accuracy. To address these problems, a visual SLAM point cloud map construction method based on closed-loop detection and adaptive downsampling (PCL-LCAD) is proposed. Starting from the mapping perspective of a visual SLAM system, the method incorporates 3D point cloud techniques, builds a closed-loop detection optimization model to enlarge the area of the generated map, and then establishes an adaptive point cloud downsampling model that improves voxel filtering with a KD-tree algorithm. Experimental results show that PCL-LCAD reduces map storage and increases map density while preserving accuracy and real-time performance.
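The voxel filtering being improved here has a simple core: points falling into the same voxel are replaced by their centroid. A minimal sketch with a fixed voxel size (the paper's method adapts the voxel size and accelerates neighbor lookups with a KD-tree):

```python
# Voxel-grid downsampling: bucket 3D points by voxel index and replace
# each bucket with its centroid. Fixed voxel size; the adaptive sizing
# and KD-tree acceleration of PCL-LCAD are not modeled here.

def voxel_downsample(points, voxel):
    buckets = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # integer voxel index
        buckets.setdefault(key, []).append(p)
    # One centroid per occupied voxel.
    return [tuple(sum(c) / len(ps) for c in zip(*ps))
            for ps in buckets.values()]

sampled = voxel_downsample(
    [(0.0, 0.0, 0.0), (0.1, 0.1, 0.1), (1.0, 1.0, 1.0)], voxel=0.5)
```

The two nearby points collapse into one centroid while the distant point survives, which is how the filter trades point count for map storage without flattening structure.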