Spectrogram representations of acoustic scenes have achieved competitive performance for acoustic scene classification. Yet, the spectrogram alone does not take into account a substantial amount of time-frequency information. In this study, we present an approach for exploring the benefits of deep scalogram representations, extracted in segments from an audio stream. The approach presented firstly transforms the segmented acoustic scenes into bump and Morse scalograms, as well as spectrograms; secondly, the spectrograms or scalograms are sent into pre-trained convolutional neural networks; thirdly, the features extracted from a subsequent fully connected layer are fed into (bidirectional) gated recurrent neural networks, which are followed by a single highway layer and a softmax layer; finally, predictions from these three systems are fused by a margin sampling value strategy. We then evaluate the proposed approach using the acoustic scene classification data set of the 2017 IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). On the evaluation set, an accuracy of 64.0% from bidirectional gated recurrent neural networks is obtained when fusing the spectrogram and the bump scalogram, which is an improvement on the 61.0% baseline result provided by the DCASE 2017 organisers. This result shows that extracted bump scalograms are capable of improving the classification accuracy when fused with a spectrogram-based system.
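The fusion step above relies on a margin sampling value, the gap between the two largest class posteriors of each subsystem. A minimal sketch of one plausible reading of that strategy (the two-system setup and all variable names are illustrative, not taken from the paper):

```python
import numpy as np

def margin_sampling_fusion(prob_list):
    """Pick, per sample, the prediction of the subsystem whose
    top-1/top-2 posterior margin is largest (i.e. most confident)."""
    probs = np.stack(prob_list)                       # (n_systems, n_samples, n_classes)
    sorted_p = np.sort(probs, axis=-1)
    margins = sorted_p[..., -1] - sorted_p[..., -2]   # (n_systems, n_samples)
    best_sys = np.argmax(margins, axis=0)             # most confident system per sample
    preds = np.argmax(probs, axis=-1)                 # (n_systems, n_samples)
    return preds[best_sys, np.arange(probs.shape[1])]

# Hypothetical posteriors from a spectrogram-based and a scalogram-based
# system, for 2 samples over 3 classes.
p_spec = np.array([[0.5, 0.4, 0.1], [0.2, 0.3, 0.5]])
p_scal = np.array([[0.9, 0.05, 0.05], [0.4, 0.35, 0.25]])
fused = margin_sampling_fusion([p_spec, p_scal])
```

Per sample, the subsystem with the widest top-1/top-2 margin supplies the final label; other margin-based fusion variants are of course possible.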
Scene recognition is a fundamental task in computer vision, which generally includes three vital stages, namely feature extraction, feature transformation and classification. Early research mainly focuses on feature extraction, but with the rise of Convolutional Neural Networks (CNNs), more and more feature transformation methods are proposed based on CNN features. In this work, a novel feature transformation algorithm called Graph Encoded Local Discriminative Region Representation (GEDRR) is proposed to find discriminative local representations for scene images and explore the relationship between the discriminative regions. In addition, we propose a method using the multi-head attention module to enhance and fuse convolutional feature maps. Combining the two methods and the global representation, a scene recognition framework called Global and Graph Encoded Local Discriminative Region Representation (G2ELDR2) is proposed. The experimental results on three scene datasets demonstrate the effectiveness of our model, which outperforms many state-of-the-art methods.
In dynamic scenarios, visual simultaneous localization and mapping (SLAM) algorithms often incorrectly incorporate dynamic points during camera pose computation, leading to reduced accuracy and robustness. This paper presents a dynamic SLAM algorithm that leverages object detection and regional dynamic probability. Firstly, a parallel thread employs the YOLOX object detection model to gather 2D semantic information and compensate for missed detections. Next, an improved K-means++ clustering algorithm clusters bounding box regions, adaptively determining the threshold for extracting dynamic object contours as dynamic points change. This process divides the image into low dynamic, suspicious dynamic, and high dynamic regions. In the tracking thread, the dynamic point removal module assigns dynamic probability weights to the feature points in these regions. Combined with geometric methods, it detects and removes the dynamic points. The final evaluation on the public TUM RGB-D dataset shows that the proposed dynamic SLAM algorithm surpasses most existing SLAM algorithms, providing better pose estimation accuracy and robustness in dynamic environments.
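The region-based weighting can be pictured as follows; the weight values and the threshold below are invented for illustration, since the abstract does not give the paper's actual numbers:

```python
import numpy as np

# Illustrative dynamic-probability weights per region type. The paper's real
# weighting scheme and removal threshold are not specified in the abstract.
REGION_WEIGHTS = {"low": 0.1, "suspicious": 0.5, "high": 0.9}

def filter_dynamic_points(points, regions, threshold=0.7):
    """Keep only feature points whose dynamic probability stays below
    the removal threshold; points in high-dynamic regions are dropped."""
    w = np.array([REGION_WEIGHTS[r] for r in regions])
    return points[w < threshold]

# Three hypothetical feature points, one per region type.
pts = np.array([[10, 20], [30, 40], [50, 60]])
kept = filter_dynamic_points(pts, ["low", "high", "suspicious"])
```

In the actual system such weights would be combined with geometric checks (e.g. epipolar consistency) before a point is discarded.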
Let F_1 be the virtual field consisting of one element and (Q, I) a string pair. In this paper, we study the representations of string pairs over the virtual field F_1. It is proved, using coefficient quivers, that an indecomposable F_1-representation is either a string representation or a band representation. It is worth noting that for a given band and a positive integer, there exists a unique band representation up to isomorphism.
With the upgrading of tourism consumption patterns, the traditional renovation models of waterfront recreational spaces centered on landscape design can no longer meet the commercial and humanistic demands of modern cultural and tourism development. Based on scene theory as the analytical framework and taking the Xuan'en Night Banquet Project in Enshi as a case study, this paper explores the design pathway for transforming waterfront areas in tourism cities from "spatial reconstruction" to "scene construction". The study argues that waterfront space renewal should transcend mere physical renovation. By implementing three core strategies, namely a spatial narrative framework, ecological industry creation, and cultural empowerment, it is possible to construct integrated scenarios that blend cultural value, consumption spaces, and lifestyle elements. This approach ultimately fosters sustained vitality in waterfront areas and promotes the high-quality development of the culture and tourism industry.
Scene graph prediction has emerged as a critical task in computer vision, focusing on transforming complex visual scenes into structured representations by identifying objects, their attributes, and the relationships among them. Extending this to 3D semantic scene graph (3DSSG) prediction introduces an additional layer of complexity because it requires the processing of point-cloud data to accurately capture the spatial and volumetric characteristics of a scene. A significant challenge in 3DSSG is the long-tailed distribution of object and relationship labels, which causes certain classes to be severely underrepresented and leads to suboptimal performance in these rare categories. To address this, we propose a fusion prototypical network (FPN), which combines the strengths of conventional neural networks for 3DSSG with a prototypical network. The former are known for their ability to handle complex scene graph predictions, while the latter excels in few-shot learning scenarios. By leveraging this fusion, our approach enhances the overall prediction accuracy and substantially improves the handling of underrepresented labels. Through extensive experiments using the 3DSSG dataset, we demonstrate that the FPN achieves state-of-the-art performance in 3D scene graph prediction as a single model and effectively mitigates the impact of the long-tailed distribution, providing a more balanced and comprehensive understanding of complex 3D environments.
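The prototypical-network half of the proposed fusion classifies a sample by its distance to class prototypes, i.e. the mean embeddings of each class's support samples. A generic, self-contained sketch of that mechanism, with made-up embeddings and labels:

```python
import numpy as np

def prototypes(embeddings, labels):
    """Class prototype = mean embedding of that class's support samples."""
    classes = sorted(set(labels))
    protos = np.stack([embeddings[np.array(labels) == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def classify(query, classes, protos):
    """Assign the query to the nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(protos - query, axis=1)
    return classes[int(np.argmin(d))]

# Hypothetical 2-D embeddings of four support samples from two rare classes.
emb = np.array([[0.0, 0.0], [0.0, 2.0], [4.0, 4.0], [4.0, 6.0]])
labels = ["chair", "chair", "lamp", "lamp"]
cls, protos = prototypes(emb, labels)
pred = classify(np.array([0.5, 1.0]), cls, protos)
```

Because a prototype needs only a handful of support samples, this decision rule degrades gracefully for tail classes, which is the motivation for fusing it with a conventional scene graph predictor.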
Crime scene investigation (CSI) is an important link in the criminal justice system, as it serves as a bridge between establishing the happenings during an incident and possibly identifying the accountable persons, providing light in the dark. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) collaborated to develop the ISO/IEC 17020:2012 standard to govern the quality of CSI, a branch of inspection activity. These protocols include the impartiality and competence of the crime scene investigators involved, contemporary recording of scene observations and data obtained, the correct use of resources during scene processing, forensic evidence collection and handling procedures, and the confidentiality and integrity of any scene information obtained from other parties, etc. The preparatory work, the accreditation processes involved, and the implementation of new quality measures in the existing quality management system in order to achieve ISO/IEC 17020:2012 accreditation at the Forensic Science Division of the Government Laboratory in Hong Kong are discussed in this paper.
Remote sensing scene image classification is a prominent research area within remote sensing. Deep learning-based methods have been extensively utilized and have shown significant advancements in this field. Recent progress in these methods primarily focuses on enhancing feature representation capabilities to improve performance. The challenge lies in the limited spatial resolution of small-sized remote sensing images, as well as image blurring and sparse data. These factors contribute to lower accuracy in current deep learning models. Additionally, deeper networks with attention-based modules require a substantial number of network parameters, leading to high computational costs and memory usage. In this article, we introduce ERSNet, a lightweight novel attention-guided network for remote sensing scene image classification. ERSNet is constructed using a depthwise separable convolutional network and incorporates an attention mechanism. It utilizes spatial attention, channel attention, and channel self-attention to enhance feature representation and accuracy, while also reducing computational complexity and memory usage. Experimental results indicate that, compared to existing state-of-the-art methods, ERSNet has a significantly lower parameter count of only 1.2 M and reduced FLOPs. It achieves the highest classification accuracy of 99.14% on the EuroSAT dataset, demonstrating its suitability for application on mobile terminal devices. Furthermore, experimental results on the UCMerced land use dataset and the Brazilian coffee scenes dataset also confirm the strong generalization ability of this method.
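The parameter savings of depthwise separable convolutions, the building block ERSNet is reported to use, can be checked with a quick count; the layer sizes below are generic examples, not ERSNet's actual configuration:

```python
def conv_params(c_in, c_out, k):
    """Standard convolution: one k x k kernel per (input, output) channel pair."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise separable convolution: a k x k depthwise kernel per input
    channel, followed by a 1 x 1 pointwise convolution (biases ignored)."""
    return c_in * k * k + c_in * c_out

# A hypothetical 3 x 3 layer with 128 input and 128 output channels.
std = conv_params(128, 128, 3)
sep = separable_params(128, 128, 3)
ratio = std / sep
```

For this layer the factorization cuts parameters by roughly 8x, which is how such networks reach parameter counts around 1 M while keeping accuracy competitive.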
In the dynamic scenes of autonomous vehicles, monocular depth estimation often faces the problem of inaccurate edge depth estimation. To solve this problem, we propose an unsupervised monocular depth estimation model based on edge enhancement, specifically aimed at the depth perception challenge in dynamic scenes. The model consists of two core networks, a depth prediction network and a motion estimation network, both of which adopt an encoder-decoder architecture. The depth prediction network is based on a ResNet18 U-Net structure and is responsible for generating the depth map of the scene. The motion estimation network is based on a FlowNet U-Net structure, focusing on the motion estimation of dynamic targets. In the decoding stage of the motion estimation network, we innovatively introduce an edge-enhanced decoder, which integrates a convolutional block attention module (CBAM) in the decoding process to enhance the recognition of the edge features of moving objects. In addition, we also design a strip convolution module to improve the model's capture efficiency for discrete moving targets. To further improve performance, we propose a novel edge regularization method based on the Laplace operator, which effectively accelerates the convergence of the model. Experimental results on the KITTI and Cityscapes datasets show that, compared with current advanced dynamic unsupervised monocular models, the proposed model yields a significant improvement in depth estimation accuracy and convergence speed. Specifically, the root mean square error (RMSE) is reduced by 4.8% compared with the DepthMotion algorithm, while the training convergence speed is increased by 36%, which shows the superior performance of the model in the depth estimation task in dynamic scenes.
This paper presents a comprehensive framework that enables communication scene recognition through deep learning and multi-sensor fusion. This study aims to address the challenge that current communication scene recognition methods struggle to adapt in dynamic environments, as they typically rely on post-response mechanisms that fail to detect scene changes before users experience latency. The proposed framework leverages data from multiple smartphone sensors, including acceleration sensors, gyroscopes, magnetic field sensors, and orientation sensors, to identify different communication scenes, such as walking, running, cycling, and various modes of transportation. Extensive experimental comparison with existing methods on the open-source SHL-2018 dataset confirmed the superior performance of our approach in terms of F1 score and processing speed. Additionally, tests using a Microsoft Surface Pro tablet and a self-collected Beijing-2023 dataset have validated the framework's efficiency and generalization capability. The results show that our framework achieved an F1 score of 95.15% on SHL-2018 and 94.6% on Beijing-2023, highlighting its robustness across different datasets and conditions. Furthermore, the computational complexity and power consumption of the algorithm are moderate, making it suitable for deployment on mobile devices.
Semantic segmentation in street scenes is a crucial technology for autonomous driving to analyze the surrounding environment. In street scenes, issues such as the high image resolution caused by wide viewpoints and differences in object scale lead to a decline in real-time performance and difficulties in multi-scale feature extraction. To address this, we propose a bilateral-branch real-time semantic segmentation method based on semantic information distillation (BSDNet) for street scene images. BSDNet consists of a Feature Conversion Convolutional Block (FCB), a Semantic Information Distillation Module (SIDM), and a Deep Aggregation Atrous Convolution Pyramid Pooling (DASP) module. FCB reduces the semantic gap between the backbone and the semantic branch. SIDM extracts high-quality semantic information from the Transformer branch to reduce computational costs. DASP aggregates information lost in atrous convolutions, effectively capturing multi-scale objects. Extensive experiments conducted on Cityscapes, CamVid, and ADE20K, achieving an accuracy of 81.7% Mean Intersection over Union (mIoU) at 70.6 Frames Per Second (FPS) on Cityscapes, demonstrate that our method achieves a better balance between accuracy and inference speed.
As a new research direction in contemporary cognitive science, predictive processing surpasses traditional computational representation and embodied cognition and has emerged as a new paradigm in cognitive science research. The predictive processing theory advocates that the brain is a hierarchical predictive model based on Bayesian inference, whose purpose is to minimize the difference between the predicted world and the actual world, so as to minimize the prediction error. Predictive processing is therefore essentially a context-dependent model representation, an adaptive representational system designed to achieve its cognitive goals through the minimization of prediction error.
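The core mechanism described here, an internal prediction that is continually updated to shrink prediction error, can be illustrated with a toy update loop (purely illustrative; no claim is made about how brains actually implement this):

```python
# Toy predictive-processing loop: an agent holds an internal prediction mu
# and nudges it toward each observation in proportion to the prediction
# error, gradually minimizing the mismatch with the "actual world".
observations = [2.0, 2.4, 1.8, 2.2]   # repeated noisy sensory inputs
mu = 0.0                              # initial internal prediction
lr = 0.1                              # update rate (illustrative)

for _ in range(200):                  # many exposures to the same stream
    for obs in observations:
        error = obs - mu              # prediction error
        mu += lr * error              # update the model to reduce the error
```

After enough exposures the prediction settles near the mean of the observations, the point where the average prediction error is smallest; hierarchical Bayesian accounts generalize this idea across many levels at once.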
Recognizing road scene context from a single image remains a critical challenge for intelligent autonomous driving systems, particularly in dynamic and unstructured environments. While recent advancements in deep learning have significantly enhanced road scene classification, simultaneously achieving high accuracy, computational efficiency, and adaptability across diverse conditions continues to be difficult. To address these challenges, this study proposes HybridLSTM, a novel and efficient framework that integrates deep learning-based, object-based, and handcrafted feature extraction methods within a unified architecture. HybridLSTM is designed to classify four distinct road scene categories, crosswalk (CW), highway (HW), overpass/tunnel (OP/T), and parking (P), by leveraging multiple publicly available datasets, including Places-365, BDD100K, LabelMe, and KITTI, thereby promoting domain generalization. The framework fuses object-level features extracted using YOLOv5 and VGG19, scene-level global representations obtained from a modified VGG19, and fine-grained texture features captured through eight handcrafted descriptors. This hybrid feature fusion enables the model to capture both semantic context and low-level visual cues, which are critical for robust scene understanding. To model spatial arrangements and latent sequential dependencies present even in static imagery, the combined features are processed through a Long Short-Term Memory (LSTM) network, allowing the extraction of discriminative patterns across heterogeneous feature spaces. Extensive experiments conducted on 2725 annotated road scene images, with an 80:20 training-to-testing split, validate the effectiveness of the proposed model. HybridLSTM achieves a classification accuracy of 96.3%, a precision of 95.8%, a recall of 96.1%, and an F1-score of 96.0%, outperforming several existing state-of-the-art methods. These results demonstrate the robustness, scalability, and generalization capability of HybridLSTM across varying environments and scene complexities. Moreover, the framework is optimized to balance classification performance with computational efficiency, making it highly suitable for real-time deployment in embedded autonomous driving systems. Future work will focus on extending the model to multi-class detection within a single frame and optimizing it further for edge-device deployment to reduce computational overhead in practical applications.
The autonomous landing guidance of fixed-wing aircraft in unknown structured scenes presents a substantial technological challenge, particularly regarding the effectiveness of solutions for monocular visual relative pose estimation. This study proposes a novel airborne monocular visual estimation method based on structured scene features to address this challenge. First, a multitask neural network model is established for segmentation, depth estimation, and slope estimation on monocular images, and a comprehensive three-dimensional information metric for monocular images is designed, encompassing length, span, flatness, and slope information. Subsequently, structured edge features are leveraged to adaptively filter candidate landing regions. By leveraging the three-dimensional information metric, the optimal landing region is accurately and efficiently identified. Finally, sparse two-dimensional key points are used to parameterize the optimal landing region for the first time, and a high-precision relative pose estimation is achieved. Additional measurement information is introduced to provide the autonomous landing guidance information between the aircraft and the optimal landing region. Experimental results obtained from both synthetic and real data demonstrate the effectiveness of the proposed method in monocular pose estimation for autonomous aircraft landing guidance in unknown structured scenes.
In a global navigation satellite system denial environment, cross-view geo-localization based on image retrieval presents an exceedingly critical visual localization solution for Unmanned Aerial Vehicle (UAV) systems. The essence of cross-view geo-localization resides in matching images containing the same geographical targets from disparate platforms, such as UAV-view and satellite-view images. However, images of the same geographical targets may suffer from occlusions and geometric distortions due to variations in the capturing platform, view, and timing. Existing methods predominantly extract features by segmenting feature maps, which overlooks the holistic semantic distribution and structural information of objects, resulting in loss of image information. To address these challenges, a dilated neighborhood attention Transformer is employed as the feature extraction backbone, and Multi-feature representations based on Multi-scale Hierarchical Contextual Aggregation (MMHCA) is proposed. In the proposed MMHCA method, the multi-scale hierarchical contextual aggregation method is utilized to extract contextual information from local to global across various granularity levels, establishing feature associations of contextual information with global and local information in the image. Subsequently, the multi-feature representations method is utilized to obtain rich discriminative feature information, bolstering the robustness of the model in scenarios characterized by positional shifts, varying distances, and scale ambiguities. Comprehensive experiments conducted on the extensively utilized University-1652 and SUES-200 benchmarks indicate that the MMHCA method surpasses existing techniques, showing outstanding results in UAV localization and navigation.
Establishing and maintaining protected areas is a pivotal strategy for attaining the post-2020 biodiversity target. The conservation objectives of protected areas have shifted from a narrow emphasis on biodiversity to encompass broader considerations such as ecosystem stability, community resilience to climate change, and enhancement of human well-being. Given these multifaceted objectives, it is imperative to judiciously allocate resources to effectively conserve biodiversity by identifying strategically significant areas for conservation, particularly in mountainous areas. In this study, we evaluated the representativeness of the protected area network in the Qinling Mountains concerning species diversity, ecosystem services, climate stability and ecological stability. The results indicate that some of the ecological indicators are spatially correlated with topographic gradient effects. The conservation priority areas predominantly lie in the northern foothills and the southeastern and southwestern parts of the Qinling Mountains, with areas concentrated at altitudes between 1,500-2,000 m and slopes between 40°-50° as hotspots. The conservation priority areas identified through the framework of inclusive conservation optimization account for 22.9% of the Qinling Mountains. Existing protected areas comprise only 6.1% of the Qinling Mountains and 13.18% of the conservation priority areas. This will play an important role in achieving sustainable development in the region and in meeting the post-2020 biodiversity target. The framework can advance the different objectives of achieving a quadruple win and can also be extended to other regions.
Physics-informed neural networks (PINNs) are promising to replace conventional mesh-based partial differential equation (PDE) solvers by offering more accurate and flexible PDE solutions. However, PINNs are hampered by relatively slow convergence and the need to perform additional, potentially expensive training for new PDE parameters. To address this limitation, we introduce LatentPINN, a framework that utilizes latent representations of the PDE parameters as additional inputs (alongside the coordinates) into PINNs and allows for training over the distribution of these parameters. Motivated by recent progress on generative models, we use latent diffusion models to learn compressed latent representations of the distribution of PDE parameters, which act as input parameters for the NN functional solutions. We use a two-stage training scheme in which, in the first stage, we learn the latent representations for the distribution of PDE parameters. In the second stage, we train a physics-informed neural network over inputs given by randomly drawn samples from the coordinate space within the solution domain and samples from the learned latent representation of the PDE parameters. Considering their importance in capturing evolving interfaces and fronts in various fields, we test the approach on a class of level set equations given, for example, by the nonlinear Eikonal equation. We share results corresponding to three sets of Eikonal parameters (velocity models). The proposed method performs well on new phase velocity models without the need for any additional training.
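For reference, the nonlinear Eikonal equation mentioned above reads |∇T(x)| = 1/v(x), where T is the travel time and v the velocity. Independently of the paper's networks, a quick numerical sanity check verifies the closed-form constant-velocity solution T(x, y) = sqrt(x² + y²)/v against the PDE with finite differences (grid spacing and domain below are arbitrary choices):

```python
import numpy as np

# Eikonal equation |grad T| = 1/v. For constant velocity v and a point
# source at the origin, T(x, y) = sqrt(x^2 + y^2) / v is the exact solution;
# check the PDE residual on a grid using central finite differences.
v = 2.0
h = 0.01
x = np.arange(1.0, 2.0, h)            # stay away from the source singularity
X, Y = np.meshgrid(x, x, indexing="ij")
T = np.sqrt(X**2 + Y**2) / v

Tx = (T[2:, 1:-1] - T[:-2, 1:-1]) / (2 * h)   # dT/dx on interior points
Ty = (T[1:-1, 2:] - T[1:-1, :-2]) / (2 * h)   # dT/dy on interior points
residual = np.abs(np.sqrt(Tx**2 + Ty**2) - 1.0 / v)
max_residual = residual.max()
```

A PINN replaces the finite-difference derivatives with automatic differentiation of the network output and penalizes exactly this residual in its loss.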
Considering that the accuracy of traditional sparse representation models is not high under the influence of multiple complex environmental factors, this study focuses on the improvement of feature extraction and model construction. Firstly, the convolutional neural network (CNN) features of the face are extracted by a trained deep learning network. Next, steady-state and dynamic classifiers for face recognition are constructed based on the CNN features and Haar features respectively, with two-stage sparse representation introduced in the construction of the steady-state classifier, and feature templates with high reliability are dynamically selected as alternative templates from the sparse representation template dictionary constructed using the CNN features. Finally, the face recognition result is given based on the classification results of the steady-state classifier and the dynamic classifier together. On this basis, the feature weights of the steady-state classifier templates are adjusted in real time and the dictionary set is dynamically updated to reduce the probability of irrelevant features entering the dictionary set. The average recognition accuracy of this method is 94.45% on the CMU PIE face database and 96.58% on the AR face database, which is significantly improved compared with that of traditional face recognition methods.
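The decision rule underlying sparse representation classifiers assigns a sample to the class whose template dictionary reconstructs it with the smallest residual. A minimal sketch of that rule follows; note that plain least squares stands in for a true sparse (e.g. l1-regularized) solver, and the dictionaries are random stand-ins, not face templates:

```python
import numpy as np

def residual_classify(y, dicts):
    """Assign y to the class whose template dictionary reconstructs it best.
    Least squares is used in place of a sparse solver for brevity."""
    best, best_r = None, np.inf
    for label, D in dicts.items():
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        r = np.linalg.norm(y - D @ coef)   # reconstruction residual
        if r < best_r:
            best, best_r = label, r
    return best

rng = np.random.default_rng(0)
D_a = rng.normal(size=(20, 5))       # hypothetical class-A template dictionary
D_b = rng.normal(size=(20, 5))       # hypothetical class-B template dictionary
y = D_a @ np.array([1.0, 0.5, 0.0, 0.0, -1.0])   # sample built from class A
pred = residual_classify(y, {"A": D_a, "B": D_b})
```

Dynamically updating the dictionary, as the abstract describes, amounts to swapping columns of each class dictionary in and out based on template reliability.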
Clustering is a pivotal data analysis method for deciphering the charge transport properties of single molecules in break junction experiments. However, given the high dimensionality and variability of the data, feature extraction remains a bottleneck in the development of efficient clustering methods. In this regard, extensive research over the past two decades has focused on feature engineering and dimensionality reduction for break junction conductance data. However, extracting highly relevant features without expert knowledge remains an unresolved challenge. To address this issue, we propose a deep clustering method driven by task-oriented representation learning (CTRL), in which the clustering module serves as a guide for the representation learning (RepL) module. First, we determine an optimal autoencoder (AE) structure through a neural architecture search (NAS) to ensure efficient RepL; second, the RepL process is guided by a joint training strategy that combines the AE reconstruction loss with the clustering objective. The results demonstrate that CTRL achieves excellent performance on both generated and experimental data. Further inspection of the RepL step reveals that joint training robustly learns more compact features than an unconstrained AE or traditional dimensionality reduction methods, significantly reducing the possibility of misclustering. Our method provides a general end-to-end automatic clustering solution for analyzing single-molecule break junction data.
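A joint training objective of the kind described, reconstruction loss plus a clustering term, can be written as L = L_rec + λ·L_cluster. A minimal numpy evaluation of such a combined loss (λ and the k-means-style cluster term are illustrative; the paper's exact formulation is not given in the abstract):

```python
import numpy as np

def joint_loss(x, x_hat, z, centers, lam=0.1):
    """Joint deep-clustering objective: mean squared reconstruction error
    plus the mean squared distance of each latent code to its nearest
    cluster center (lam is an illustrative trade-off weight)."""
    rec = np.mean((x - x_hat) ** 2)
    d = np.linalg.norm(z[:, None, :] - centers[None, :, :], axis=-1)
    clu = np.mean(d.min(axis=1) ** 2)
    return rec + lam * clu

# Toy case: perfect reconstruction and latent codes sitting exactly on
# their cluster centers give zero joint loss.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
x_hat = x.copy()
z = np.array([[0.0, 0.0], [2.0, 2.0]])
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
loss_aligned = joint_loss(x, x_hat, z, centers)
```

In actual training both terms would be differentiated through the encoder, so the cluster term pulls the latent space toward cluster-friendly, compact features rather than acting as a post-hoc score.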
Funding: supported by the German National BMBF IKT2020 Grant (16SV7213) (EmotAsS), the European Union's Horizon 2020 Research and Innovation Programme (688835) (DE-ENIGMA), and the China Scholarship Council (CSC).
Funding: This research is partially supported by the Programme for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, and partially supported by JSPS KAKENHI Grant No. 15K00159.
Abstract: Scene recognition is a fundamental task in computer vision, which generally includes three vital stages, namely feature extraction, feature transformation and classification. Early research mainly focuses on feature extraction, but with the rise of Convolutional Neural Networks (CNNs), more and more feature transformation methods are proposed based on CNN features. In this work, a novel feature transformation algorithm called Graph Encoded Local Discriminative Region Representation (GEDRR) is proposed to find discriminative local representations for scene images and explore the relationship between the discriminative regions. In addition, we propose a method using the multi-head attention module to enhance and fuse convolutional feature maps. Combining the two methods and the global representation, a scene recognition framework called Global and Graph Encoded Local Discriminative Region Representation (G2ELDR2) is proposed. The experimental results on three scene datasets demonstrate the effectiveness of our model, which outperforms many state-of-the-art methods.
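The multi-head attention module mentioned above is, in its standard form, scaled dot-product self-attention applied per head over the feature-map positions. A minimal sketch (identity Q/K/V projections for brevity; real modules learn per-head projection weights, and this paper's exact module may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads):
    """Self-attention over N feature-map positions, split across heads."""
    n, d = x.shape
    assert d % num_heads == 0
    dh = d // num_heads
    heads = []
    for h in range(num_heads):
        q = k = v = x[:, h * dh:(h + 1) * dh]     # identity projections
        attn = softmax(q @ k.T / np.sqrt(dh))     # (n, n) attention weights
        heads.append(attn @ v)                    # weighted mix of positions
    return np.concatenate(heads, axis=1)          # re-join the heads

x = np.random.default_rng(0).normal(size=(49, 8))  # 7x7 feature map, 8 channels
out = multi_head_attention(x, num_heads=2)
print(out.shape)  # (49, 8)
```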
Funding: Supported by the National Natural Science Foundation of China (No. 62063006); the Guangxi Natural Science Foundation (Nos. 2023GXNSFAA026025, AA24010001); the Innovation Fund of Chinese Universities Industry-University-Research (ID: 2023RY018); the Special Guangxi Industry and Information Technology Department, Textile and Pharmaceutical Division (ID: 2021 No. 231); the Special Research Project of Hechi University (ID: 2021GCC028); and the Key Laboratory of AI and Information Processing, Education Department of Guangxi Zhuang Autonomous Region (Hechi University), No. 2024GXZDSY009.
Abstract: In dynamic scenarios, visual simultaneous localization and mapping (SLAM) algorithms often incorrectly incorporate dynamic points during camera pose computation, leading to reduced accuracy and robustness. This paper presents a dynamic SLAM algorithm that leverages object detection and regional dynamic probability. Firstly, a parallel thread employs the YOLOX object detection model to gather 2D semantic information and compensate for missed detections. Next, an improved K-means++ clustering algorithm clusters bounding box regions, adaptively determining the threshold for extracting dynamic object contours as dynamic points change. This process divides the image into low dynamic, suspicious dynamic, and high dynamic regions. In the tracking thread, the dynamic point removal module assigns dynamic probability weights to the feature points in these regions. Combined with geometric methods, it detects and removes the dynamic points. The final evaluation on the public TUM RGB-D dataset shows that the proposed dynamic SLAM algorithm surpasses most existing SLAM algorithms, providing better pose estimation accuracy and robustness in dynamic environments.
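Standard K-means++ (which the paper improves on) differs from plain k-means only in its seeding: each new centre is drawn with probability proportional to its squared distance from the nearest existing centre. A toy sketch on two well-separated point groups, standing in for feature points inside a detected bounding box (data and dimensions are invented):

```python
import numpy as np

def kmeanspp_init(points, k, rng):
    """K-means++ seeding: new centres are sampled proportionally to the
    squared distance from the nearest centre chosen so far."""
    centres = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([np.linalg.norm(points - c, axis=1) ** 2 for c in centres],
                    axis=0)
        centres.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centres)

rng = np.random.default_rng(42)
# Feature points inside one bounding box, forming two spatial clusters:
pts = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
centres = kmeanspp_init(pts, k=2, rng=rng)
print(centres.shape)  # (2, 2)
```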
Abstract: Let F_1 be the virtual field consisting of one element and (Q, I) a string pair. In this paper, we study the representations of string pairs over the virtual field F_1. It is proved that an indecomposable F_1-representation is either a string representation or a band representation by using coefficient quivers. It is worth noting that for a given band and a positive integer, there exists a unique band representation up to isomorphism.
Abstract: With the upgrading of tourism consumption patterns, the traditional renovation models of waterfront recreational spaces centered on landscape design can no longer meet the commercial and humanistic demands of modern cultural and tourism development. Based on scene theory as the analytical framework, and taking the Xuan'en Night Banquet Project in Enshi as a case study, this paper explores the design pathway for transforming waterfront areas in tourism cities from "spatial reconstruction" to "scene construction". The study argues that waterfront space renewal should transcend mere physical renovation. By implementing three core strategies, namely a spatial narrative framework, ecological industry creation, and cultural empowerment, it is possible to construct integrated scenarios that blend cultural value, consumption spaces, and lifestyle elements. This approach ultimately fosters sustained vitality in waterfront areas and promotes the high-quality development of the cultural and tourism industry.
Funding: Supported by the Glocal University 30 Project Fund of Gyeongsang National University in 2025.
Abstract: Scene graph prediction has emerged as a critical task in computer vision, focusing on transforming complex visual scenes into structured representations by identifying objects, their attributes, and the relationships among them. Extending this to 3D semantic scene graph (3DSSG) prediction introduces an additional layer of complexity because it requires the processing of point-cloud data to accurately capture the spatial and volumetric characteristics of a scene. A significant challenge in 3DSSG is the long-tailed distribution of object and relationship labels, which causes certain classes to be severely underrepresented and leads to suboptimal performance in these rare categories. To address this, we propose a fusion prototypical network (FPN), which combines the strengths of conventional neural networks for 3DSSG with a Prototypical Network. The former are known for their ability to handle complex scene graph predictions, while the latter excels in few-shot learning scenarios. By leveraging this fusion, our approach enhances the overall prediction accuracy and substantially improves the handling of underrepresented labels. Through extensive experiments using the 3DSSG dataset, we demonstrated that the FPN achieves state-of-the-art performance in 3D scene graph prediction as a single model and effectively mitigates the impact of the long-tailed distribution, providing a more balanced and comprehensive understanding of complex 3D environments.
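The Prototypical Network half of the fusion works by averaging the support embeddings of each class into a prototype and assigning queries to the nearest prototype, which is why it copes well with rare (few-shot) labels. A minimal sketch with invented 2-D embeddings:

```python
import numpy as np

def prototypes(support_x, support_y):
    """Class prototype = mean of that class's support embeddings."""
    classes = np.unique(support_y)
    return classes, np.stack([support_x[support_y == c].mean(axis=0)
                              for c in classes])

def classify(query, classes, protos):
    """Assign each query to the nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# Toy embeddings: class 0 is a rare relationship label with only 2 examples.
support_x = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
support_y = np.array([0, 0, 1, 1])
classes, protos = prototypes(support_x, support_y)
print(classify(np.array([[0.2, 0.5], [4.8, 5.4]]), classes, protos))  # [0 1]
```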
Abstract: Crime scene investigation (CSI) is an important link in the criminal justice system as it serves as a bridge between establishing the happenings during an incident and possibly identifying the accountable persons, providing light in the dark. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) collaborated to develop the ISO/IEC 17020:2012 standard to govern the quality of CSI, a branch of inspection activity. These protocols include the impartiality and competence of the crime scene investigators involved, contemporary recording of scene observations and data obtained, the correct use of resources during scene processing, forensic evidence collection and handling procedures, and the confidentiality and integrity of any scene information obtained from other parties. The preparatory work, the accreditation processes involved, and the implementation of new quality measures in the existing quality management system in order to achieve ISO/IEC 17020:2012 accreditation at the Forensic Science Division of the Government Laboratory in Hong Kong are discussed in this paper.
Abstract: Remote sensing scene image classification is a prominent research area within remote sensing. Deep learning-based methods have been extensively utilized and have shown significant advancements in this field. Recent progress in these methods primarily focuses on enhancing feature representation capabilities to improve performance. The challenge lies in the limited spatial resolution of small-sized remote sensing images, as well as image blurring and sparse data. These factors contribute to lower accuracy in current deep learning models. Additionally, deeper networks with attention-based modules require a substantial number of network parameters, leading to high computational costs and memory usage. In this article, we introduce ERSNet, a lightweight novel attention-guided network for remote sensing scene image classification. ERSNet is constructed using a depthwise separable convolutional network and incorporates an attention mechanism. It utilizes spatial attention, channel attention, and channel self-attention to enhance feature representation and accuracy, while also reducing computational complexity and memory usage. Experimental results indicate that, compared to existing state-of-the-art methods, ERSNet has a significantly lower parameter count of only 1.2 M and reduced FLOPs. It achieves the highest classification accuracy of 99.14% on the EuroSAT dataset, demonstrating its suitability for application on mobile terminal devices. Furthermore, experimental results from the UCMerced land use dataset and the Brazilian coffee scene dataset also confirm the strong generalization ability of this method.
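The parameter savings that make such networks lightweight come from the usual depthwise separable factorisation: a per-channel k x k depthwise filter followed by a 1 x 1 pointwise convolution, instead of one dense k x k convolution. The arithmetic (channel counts here are illustrative, not ERSNet's actual configuration):

```python
def standard_conv_params(cin, cout, k):
    """Weights of a standard k x k convolution (bias omitted)."""
    return cin * cout * k * k

def depthwise_separable_params(cin, cout, k):
    """Depthwise k x k filter per input channel + 1 x 1 pointwise mixing."""
    return cin * k * k + cin * cout

cin, cout, k = 64, 128, 3
std = standard_conv_params(cin, cout, k)        # 73728 weights
sep = depthwise_separable_params(cin, cout, k)  # 8768 weights
print(std, sep, round(std / sep, 1))            # ~8.4x fewer parameters
```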
Funding: Funded by the Yangtze River Delta Science and Technology Innovation Community Joint Research Project (2023CSJGG1600); the Natural Science Foundation of Anhui Province (2208085MF173); and the Wuhu "ChiZhu Light" Major Science and Technology Project (2023ZD01, 2023ZD03).
Abstract: In the dynamic scenes of autonomous vehicles, the depth estimation of monocular cameras often faces the problem of inaccurate edge depth estimation. To solve this problem, we propose an unsupervised monocular depth estimation model based on edge enhancement, which is specifically aimed at the depth perception challenge in dynamic scenes. The model consists of two core networks: a depth prediction network and a motion estimation network, both of which adopt an encoder-decoder architecture. The depth prediction network is based on the U-Net structure of ResNet18 and is responsible for generating the depth map of the scene. The motion estimation network is based on the U-Net structure of FlowNet, focusing on the motion estimation of dynamic targets. In the decoding stage of the motion estimation network, we innovatively introduce an edge-enhanced decoder, which integrates a convolutional block attention module (CBAM) in the decoding process to enhance the recognition of the edge features of moving objects. In addition, we also designed a strip convolution module to improve the model's efficiency in capturing discrete moving targets. To further improve the performance of the model, we propose a novel edge regularization method based on the Laplace operator, which effectively accelerates the convergence process of the model. Experimental results on the KITTI and Cityscapes datasets show that, compared with current advanced dynamic unsupervised monocular models, the proposed model achieves a significant improvement in depth estimation accuracy and convergence speed. Specifically, the root mean square error (RMSE) is reduced by 4.8% compared with the DepthMotion algorithm, while the training convergence speed is increased by 36%, which shows the superior performance of the model in the depth estimation task in dynamic scenes.
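A Laplace-operator edge regulariser typically penalises curvature in the depth map except where the image itself has edges, so depth discontinuities align with image edges. The sketch below shows one plausible form of such a loss (the paper's exact formulation is not given here):

```python
import numpy as np

def laplacian(img):
    """Discrete 5-point Laplacian with replicate padding."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def edge_reg_loss(depth, image):
    """Penalise depth curvature, down-weighted where the image has edges,
    so depth discontinuities are encouraged to align with image edges."""
    w = np.exp(-np.abs(laplacian(image)))   # small weight at image edges
    return float(np.mean(w * np.abs(laplacian(depth))))

depth = np.ones((8, 8)); depth[:, 4:] = 2.0   # depth step at column 4
flat_image = np.ones((8, 8))                  # no image edge: step is penalised
edgy_image = depth.copy()                     # aligned image edge: step is cheap
print(edge_reg_loss(depth, flat_image) > edge_reg_loss(depth, edgy_image))  # True
```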
Funding: Supported by the National 2011 Collaborative Innovation Center of Wireless Communication Technologies under Grant 2242022k60006.
Abstract: This paper presents a comprehensive framework that enables communication scene recognition through deep learning and multi-sensor fusion. The study aims to address the challenge that current communication scene recognition methods struggle to adapt in dynamic environments, as they typically rely on post-response mechanisms that fail to detect scene changes before users experience latency. The proposed framework leverages data from multiple smartphone sensors, including acceleration sensors, gyroscopes, magnetic field sensors, and orientation sensors, to identify different communication scenes, such as walking, running, cycling, and various modes of transportation. Extensive experimental comparison with existing methods on the open-source SHL-2018 dataset confirmed the superior performance of our approach in terms of F1 score and processing speed. Additionally, tests using a Microsoft Surface Pro tablet and a self-collected Beijing-2023 dataset have validated the framework's efficiency and generalization capability. The results show that our framework achieved an F1 score of 95.15% on SHL-2018 and 94.6% on Beijing-2023, highlighting its robustness across different datasets and conditions. Furthermore, the computational complexity and power consumption of the algorithm are moderate, making it suitable for deployment on mobile devices.
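The F1 scores quoted for this kind of multi-class recognition task are usually macro-averaged: a per-class F1 is computed and then averaged with equal class weight. A small worked example (the labels are invented; the paper does not state its exact averaging convention here):

```python
import numpy as np

def macro_f1(y_true, y_pred, num_classes):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

# 0 = walking, 1 = cycling, 2 = in-vehicle (toy labels):
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])
print(round(macro_f1(y_true, y_pred, 3), 3))  # 0.822
```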
Funding: Supported in part by the National Natural Science Foundation of China [Grant number 62471075]; the Major Science and Technology Project Grant of the Chongqing Municipal Education Commission [Grant number KJZD-M202301901]; and the Graduate Innovation Fund of Chongqing [gzlcx20253235].
Abstract: Semantic segmentation in street scenes is a crucial technology for autonomous driving to analyze the surrounding environment. In street scenes, issues such as the high image resolution caused by large viewpoints and the differences in object scales lead to a decline in real-time performance and difficulties in multi-scale feature extraction. To address this, we propose a bilateral-branch real-time semantic segmentation method based on semantic information distillation (BSDNet) for street scene images. The BSDNet consists of a Feature Conversion Convolutional Block (FCB), a Semantic Information Distillation Module (SIDM), and a Deep Aggregation Atrous Convolution Pyramid Pooling (DASP) module. The FCB reduces the semantic gap between the backbone and the semantic branch. The SIDM extracts high-quality semantic information from the Transformer branch to reduce computational costs. The DASP aggregates information lost in atrous convolutions, effectively capturing multi-scale objects. Extensive experiments conducted on Cityscapes, CamVid, and ADE20K, achieving an accuracy of 81.7% Mean Intersection over Union (mIoU) at 70.6 Frames Per Second (FPS) on Cityscapes, demonstrate that our method achieves a better balance between accuracy and inference speed.
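Mean Intersection over Union (mIoU), the accuracy measure quoted above, is the per-class ratio of correctly labelled pixels to the union of predicted and ground-truth pixels, averaged over classes. A small worked example on flattened toy label maps:

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """mIoU: per-class intersection-over-union, averaged over present classes."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union:                       # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

y_true = np.array([0, 0, 1, 1, 2, 2])   # flattened ground-truth labels
y_pred = np.array([0, 1, 1, 1, 2, 2])   # flattened predicted labels
print(round(mean_iou(y_true, y_pred, 3), 3))  # 0.722
```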
Funding: Supported by the National Social Science Fund of China's project "Philosophical Research on the Challenge of Artificial Cognition to Natural Cognition" (grant number 21&ZD061).
Abstract: As a new research direction in contemporary cognitive science, predictive processing surpasses traditional computational representation and embodied cognition and has emerged as a new paradigm in cognitive science research. The predictive processing theory advocates that the brain is a hierarchical predictive model based on Bayesian inference, whose purpose is to minimize the difference between the predicted world and the actual world, so as to minimize the prediction error. Predictive processing is therefore essentially a context-dependent model representation: an adaptive representational system designed to achieve its cognitive goals through the minimization of prediction error.
Abstract: Recognizing road scene context from a single image remains a critical challenge for intelligent autonomous driving systems, particularly in dynamic and unstructured environments. While recent advancements in deep learning have significantly enhanced road scene classification, simultaneously achieving high accuracy, computational efficiency, and adaptability across diverse conditions continues to be difficult. To address these challenges, this study proposes HybridLSTM, a novel and efficient framework that integrates deep learning-based, object-based, and handcrafted feature extraction methods within a unified architecture. HybridLSTM is designed to classify four distinct road scene categories: crosswalk (CW), highway (HW), overpass/tunnel (OP/T), and parking (P). It leverages multiple publicly available datasets, including Places-365, BDD100K, LabelMe, and KITTI, thereby promoting domain generalization. The framework fuses object-level features extracted using YOLOv5 and VGG19, scene-level global representations obtained from a modified VGG19, and fine-grained texture features captured through eight handcrafted descriptors. This hybrid feature fusion enables the model to capture both semantic context and low-level visual cues, which are critical for robust scene understanding. To model spatial arrangements and latent sequential dependencies present even in static imagery, the combined features are processed through a Long Short-Term Memory (LSTM) network, allowing the extraction of discriminative patterns across heterogeneous feature spaces. Extensive experiments conducted on 2725 annotated road scene images, with an 80:20 training-to-testing split, validate the effectiveness of the proposed model. HybridLSTM achieves a classification accuracy of 96.3%, a precision of 95.8%, a recall of 96.1%, and an F1-score of 96.0%, outperforming several existing state-of-the-art methods. These results demonstrate the robustness, scalability, and generalization capability of HybridLSTM across varying environments and scene complexities. Moreover, the framework is optimized to balance classification performance with computational efficiency, making it highly suitable for real-time deployment in embedded autonomous driving systems. Future work will focus on extending the model to multi-class detection within a single frame and optimizing it further for edge-device deployments to reduce computational overhead in practical applications.
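The fusion step described above can be pictured as plain concatenation of the three per-image feature vectors, then reshaping the fused vector into a short sequence of fixed-size steps for the LSTM. All dimensions below are invented for illustration; the paper does not specify them here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-image feature vectors from the three extractors (batch of 4):
object_feats  = rng.normal(size=(4, 16))  # object-level (YOLOv5 + VGG19)
scene_feats   = rng.normal(size=(4, 32))  # scene-level global (modified VGG19)
texture_feats = rng.normal(size=(4, 8))   # eight handcrafted descriptors

# Fuse by concatenation, then view the fused vector as a sequence of
# 8-dimensional steps that an LSTM can consume:
fused = np.concatenate([object_feats, scene_feats, texture_feats], axis=1)
sequence = fused.reshape(4, 7, 8)
print(fused.shape, sequence.shape)  # (4, 56) (4, 7, 8)
```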
Funding: Co-supported by the Science and Technology Innovation Program of Hunan Province, China (No. 2023RC3023) and the National Natural Science Foundation of China (No. 12272404).
Abstract: The autonomous landing guidance of fixed-wing aircraft in unknown structured scenes presents a substantial technological challenge, particularly regarding the effectiveness of solutions for monocular visual relative pose estimation. This study proposes a novel airborne monocular visual estimation method based on structured scene features to address this challenge. First, a multitask neural network model is established for segmentation, depth estimation, and slope estimation on monocular images, and a comprehensive three-dimensional information metric for monocular images is designed, encompassing length, span, flatness, and slope information. Subsequently, structured edge features are leveraged to adaptively filter candidate landing regions. By leveraging the three-dimensional information metric, the optimal landing region is accurately and efficiently identified. Finally, sparse two-dimensional key points are used to parameterize the optimal landing region for the first time, and high-precision relative pose estimation is achieved. Additional measurement information is introduced to provide autonomous landing guidance information between the aircraft and the optimal landing region. Experimental results obtained from both synthetic and real data demonstrate the effectiveness of the proposed method in monocular pose estimation for autonomous aircraft landing guidance in unknown structured scenes.
Funding: Supported by the National Natural Science Foundation of China (Nos. 12072027, 62103052, 61603346 and 62103379); the Henan Key Laboratory of General Aviation Technology, China (No. ZHKF-230201); the Funding for the Open Research Project of the Rotor Aerodynamics Key Laboratory, China (No. RAL20200101); the Key Research and Development Program of Henan Province, China (Nos. 241111222000 and 241111222900); the Key Science and Technology Program of Henan Province, China (No. 232102220067); and Scholarship Funding from the China Scholarship Council (No. 202206030079).
Abstract: In global navigation satellite system denial environments, cross-view geo-localization based on image retrieval presents an exceedingly critical visual localization solution for Unmanned Aerial Vehicle (UAV) systems. The essence of cross-view geo-localization resides in matching images containing the same geographical targets from disparate platforms, such as UAV-view and satellite-view images. However, images of the same geographical targets may suffer from occlusions and geometric distortions due to variations in the capturing platform, view, and timing. The existing methods predominantly extract features by segmenting feature maps, which overlooks the holistic semantic distribution and structural information of objects, resulting in a loss of image information. To address these challenges, a dilated neighborhood attention Transformer is employed as the feature extraction backbone, and Multi-feature representations based on Multi-scale Hierarchical Contextual Aggregation (MMHCA) is proposed. In the proposed MMHCA method, the multi-scale hierarchical contextual aggregation method is utilized to extract contextual information from local to global across various granularity levels, establishing feature associations of contextual information with the global and local information in the image. Subsequently, the multi-feature representations method is utilized to obtain rich discriminative feature information, bolstering the robustness of the model in scenarios characterized by positional shifts, varying distances, and scale ambiguities. Comprehensive experiments conducted on the extensively utilized University-1652 and SUES-200 benchmarks indicate that the MMHCA method surpasses existing techniques, showing outstanding results in UAV localization and navigation.
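At evaluation time, retrieval-based geo-localization typically embeds the UAV query and all satellite gallery tiles, then measures Recall@1: the fraction of queries whose nearest gallery embedding (by cosine similarity) is the true match. A minimal sketch with invented 2-D embeddings:

```python
import numpy as np

def recall_at_1(query_emb, gallery_emb, gt_index):
    """Fraction of queries whose cosine-nearest gallery item is correct."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    nearest = (q @ g.T).argmax(axis=1)      # index of best cosine match
    return float(np.mean(nearest == gt_index))

# Two UAV-view queries against a gallery of three satellite tiles:
gallery = np.array([[1., 0.], [0., 1.], [1., 1.]])
queries = np.array([[0.9, 0.1], [0.1, 1.0]])
print(recall_at_1(queries, gallery, gt_index=np.array([0, 1])))  # 1.0
```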
基金supported by the National Natural Science Foun-dation of China(Grant No.72349002).
Abstract: Establishing and maintaining protected areas is a pivotal strategy for attaining the post-2020 biodiversity target. The conservation objectives of protected areas have shifted from a narrow emphasis on biodiversity to encompass broader considerations such as ecosystem stability, community resilience to climate change, and enhancement of human well-being. Given these multifaceted objectives, it is imperative to judiciously allocate resources to effectively conserve biodiversity by identifying strategically significant areas for conservation, particularly in mountainous areas. In this study, we evaluated the representativeness of the protected area network in the Qinling Mountains concerning species diversity, ecosystem services, climate stability and ecological stability. The results indicate that some of the ecological indicators are spatially correlated with topographic gradient effects. The conservation priority areas predominantly lie in the northern foothills and the southeastern and southwestern parts of the Qinling Mountains, with areas concentrated at altitudes between 1,500-2,000 m and slopes between 40°-50° as hotspots. The conservation priority areas identified through the framework of inclusive conservation optimization account for 22.9% of the Qinling Mountains. Existing protected areas comprise only 6.1% of the Qinling Mountains and 13.18% of the conservation priority areas. This will play an important role in achieving sustainable development in the region and in meeting the post-2020 biodiversity target. The framework can advance the different objectives of achieving a quadruple win and can also be extended to other regions.
基金King Abdullah University of Science and Technol-ogy(KAUST)for supporting this research and the Seismic Wave Anal-ysis group for the supportive and encouraging environment.
Abstract: Physics-informed neural networks (PINNs) are promising candidates to replace conventional mesh-based partial differential equation (PDE) solvers by offering more accurate and flexible PDE solutions. However, PINNs are hampered by relatively slow convergence and the need to perform additional, potentially expensive training for new PDE parameters. To address this limitation, we introduce LatentPINN, a framework that utilizes latent representations of the PDE parameters as additional (to the coordinates) inputs into PINNs and allows for training over the distribution of these parameters. Motivated by recent progress on generative models, we promote the use of latent diffusion models to learn compressed latent representations of the distribution of PDE parameters, which act as input parameters for NN functional solutions. We use a two-stage training scheme in which, in the first stage, we learn the latent representations for the distribution of PDE parameters. In the second stage, we train a physics-informed neural network over inputs given by randomly drawn samples from the coordinate space within the solution domain and samples from the learned latent representation of the PDE parameters. Considering their importance in capturing evolving interfaces and fronts in various fields, we test the approach on a class of level set equations given, for example, by the nonlinear Eikonal equation. We share results corresponding to three sets of Eikonal parameters (velocity models). The proposed method performs well on new phase velocity models without the need for any additional training.
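The physics loss a PINN minimises for the Eikonal equation is the residual of |∇T(x)| = 1/v(x), where T is traveltime and v is velocity. The grid-based check below evaluates that residual with finite differences on an exact constant-velocity solution (a PINN would instead get the gradient by automatic differentiation; the grid and velocity here are illustrative):

```python
import numpy as np

def eikonal_residual(T, v, h):
    """|grad T| - 1/v on a 2-D grid, central differences, interior points."""
    Tx = (T[1:-1, 2:] - T[1:-1, :-2]) / (2 * h)
    Ty = (T[2:, 1:-1] - T[:-2, 1:-1]) / (2 * h)
    return np.sqrt(Tx**2 + Ty**2) - 1.0 / v[1:-1, 1:-1]

h = 0.01
x, y = np.meshgrid(np.arange(1, 2, h), np.arange(1, 2, h))
v = np.full_like(x, 2.0)        # constant velocity model
T = np.hypot(x, y) / 2.0        # exact traveltime from a source at the origin
res = eikonal_residual(T, v, h)
print(np.abs(res).max() < 1e-3)  # True: exact solution gives a tiny residual
```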
基金the financial support from Natural Science Foundation of Gansu Province(Nos.22JR5RA217,22JR5RA216)Lanzhou Science and Technology Program(No.2022-2-111)+1 种基金Lanzhou University of Arts and Sciences School Innovation Fund Project(No.XJ2022000103)Lanzhou College of Arts and Sciences 2023 Talent Cultivation Quality Improvement Project(No.2023-ZL-jxzz-03)。
Abstract: Considering that the accuracy of traditional sparse representation models is not high under the influence of multiple complex environmental factors, this study focuses on the improvement of feature extraction and model construction. Firstly, the convolutional neural network (CNN) features of the face are extracted by a trained deep learning network. Next, steady-state and dynamic classifiers for face recognition are constructed based on the CNN features and Haar features respectively, with two-stage sparse representation introduced in the process of constructing the steady-state classifier, and the feature templates with high reliability are dynamically selected as alternative templates from the sparse representation template dictionary constructed using the CNN features. Finally, the results of face recognition are given based on the classification results of the steady-state classifier and the dynamic classifier together. On this basis, the feature weights of the steady-state classifier template are adjusted in real time and the dictionary set is dynamically updated to reduce the probability of irrelevant features entering the dictionary set. The average recognition accuracy of this method is 94.45% on the CMU PIE face database and 96.58% on the AR face database, which is significantly improved compared with that of traditional face recognition methods.
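The core idea behind sparse-representation classification is to code a probe face against each class's template sub-dictionary and assign it to the class with the smallest reconstruction residual. The sketch below uses plain least squares per class as a stand-in (classical SRC uses an l1-regularised solver over the full dictionary, and this paper's two-stage scheme is more elaborate); the template data are synthetic:

```python
import numpy as np

def src_classify(dictionary, labels, y):
    """Class-wise residual classification: code y against each class's
    sub-dictionary by least squares, pick the class with smallest residual."""
    residuals = []
    for c in np.unique(labels):
        D = dictionary[:, labels == c]
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
# 2 classes x 3 templates in 16-dim feature space; each class's templates
# are noisy copies of a class-specific base vector:
base0, base1 = rng.normal(size=16), rng.normal(size=16)
D = np.column_stack([base0 + 0.05 * rng.normal(size=16) for _ in range(3)] +
                    [base1 + 0.05 * rng.normal(size=16) for _ in range(3)])
labels = np.array([0, 0, 0, 1, 1, 1])
probe = base1 + 0.05 * rng.normal(size=16)   # a noisy class-1 face
print(src_classify(D, labels, probe))        # 1
```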
Funding: Supported by the Guangxi Science and Technology Program (No. GuiKeAD23026291) and the Guangxi Science and Technology Major Project (No. AA22068057).
Abstract: Clustering is a pivotal data analysis method for deciphering the charge transport properties of single molecules in break junction experiments. However, given the high dimensionality and variability of the data, feature extraction remains a bottleneck in the development of efficient clustering methods. In this regard, extensive research over the past two decades has focused on feature engineering and dimensionality reduction in break junction conductance. However, extracting highly relevant features without expert knowledge remains an unresolved challenge. To address this issue, we propose a deep clustering method driven by task-oriented representation learning (CTRL) in which the clustering module serves as a guide for the representation learning (RepL) module. First, we determine an optimal autoencoder (AE) structure through a neural architecture search (NAS) to ensure efficient RepL; second, the RepL process is guided by a joint training strategy that combines the AE reconstruction loss with the clustering objective. The results demonstrate that CTRL achieves excellent performance on both generated and experimental data. Further inspection of the RepL step reveals that joint training robustly learns more compact features than an unconstrained AE or traditional dimensionality reduction methods, significantly reducing the possibility of misclustering. Our method provides a general end-to-end automatic clustering solution for analyzing single-molecule break junction data.
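The joint training strategy combines two terms: the AE reconstruction error and a clustering objective on the latent codes. One plausible form of such a composite loss, with a toy linear encoder/decoder and a k-means-style distance-to-nearest-centroid term (the paper's actual networks and clustering objective are not specified here):

```python
import numpy as np

def joint_loss(x, encode, decode, centroids, lam=0.1):
    """Joint objective: AE reconstruction error plus a k-means-style
    clustering penalty on the latent codes."""
    z = encode(x)                                    # latent codes
    recon = np.mean((x - decode(z)) ** 2)            # reconstruction term
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    cluster = np.mean(d2.min(axis=1))                # nearest-centroid distance
    return recon + lam * cluster

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(8, 2))   # toy linear encoder/decoder weights
encode = lambda x: x @ W
decode = lambda z: z @ W.T
x = rng.normal(size=(32, 8))        # 32 conductance-trace feature vectors
centroids = np.zeros((3, 2))        # 3 cluster centres in latent space
print(joint_loss(x, encode, decode, centroids) > 0)  # True
```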