Scene graph prediction has emerged as a critical task in computer vision, focusing on transforming complex visual scenes into structured representations by identifying objects, their attributes, and the relationships among them. Extending this to 3D semantic scene graph (3DSSG) prediction introduces an additional layer of complexity because it requires the processing of point-cloud data to accurately capture the spatial and volumetric characteristics of a scene. A significant challenge in 3DSSG is the long-tailed distribution of object and relationship labels, which leaves certain classes severely underrepresented and causes suboptimal performance in these rare categories. To address this, we propose a fusion prototypical network (FPN), which combines the strengths of conventional neural networks for 3DSSG with a Prototypical Network. The former are known for their ability to handle complex scene graph predictions, while the latter excels in few-shot learning scenarios. By leveraging this fusion, our approach enhances overall prediction accuracy and substantially improves the handling of underrepresented labels. Through extensive experiments on the 3DSSG dataset, we demonstrate that the FPN achieves state-of-the-art performance in 3D scene graph prediction as a single model and effectively mitigates the impact of the long-tailed distribution, providing a more balanced and comprehensive understanding of complex 3D environments.
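Since the abstract hinges on the prototypical-network side of the fusion, a minimal sketch of that component may help. The function names, tensor shapes, and the use of negative squared Euclidean distance below are standard Prototypical Network conventions assumed for illustration, not details taken from the FPN paper.

```python
import torch

def prototypical_logits(support_emb, support_labels, query_emb, num_classes):
    """Distance-based logits as in a standard Prototypical Network.

    support_emb:    (S, D) embeddings of labeled support samples
    support_labels: (S,)   integer class labels
    query_emb:      (Q, D) embeddings to classify

    Assumes every class has at least one support sample.
    """
    # Prototype of each class = mean embedding of its support samples.
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])                                          # (C, D)
    # Negative squared Euclidean distance acts as the logit.
    dists = torch.cdist(query_emb, prototypes)  # (Q, C)
    return -dists.pow(2)
```

Rare relationship classes with only a handful of examples still get a usable prototype, which is plausibly what helps on the long-tailed labels.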
Crime scene investigation (CSI) is an important link in the criminal justice system, as it serves as a bridge between establishing what happened during an incident and possibly identifying the accountable persons, shedding light in the dark. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) collaborated to develop the ISO/IEC 17020:2012 standard to govern the quality of CSI, a branch of inspection activity. These protocols cover the impartiality and competence of the crime scene investigators involved, contemporaneous recording of scene observations and data obtained, the correct use of resources during scene processing, forensic evidence collection and handling procedures, and the confidentiality and integrity of any scene information obtained from other parties. This paper discusses the preparatory work, the accreditation processes involved, and the implementation of new quality measures into the existing quality management system in order to achieve ISO/IEC 17020:2012 accreditation at the Forensic Science Division of the Government Laboratory in Hong Kong.
In dynamic scenarios, visual simultaneous localization and mapping (SLAM) algorithms often incorrectly incorporate dynamic points during camera pose computation, leading to reduced accuracy and robustness. This paper presents a dynamic SLAM algorithm that leverages object detection and regional dynamic probability. First, a parallel thread employs the YOLOX object detection model to gather 2D semantic information and compensate for missed detections. Next, an improved K-means++ clustering algorithm clusters bounding box regions, adaptively determining the threshold for extracting dynamic object contours as the dynamic points change. This process divides the image into low dynamic, suspicious dynamic, and high dynamic regions. In the tracking thread, the dynamic point removal module assigns dynamic probability weights to the feature points in these regions and, combined with geometric methods, detects and removes the dynamic points. The final evaluation on the public TUM RGB-D dataset shows that the proposed dynamic SLAM algorithm surpasses most existing SLAM algorithms, providing better pose estimation accuracy and robustness in dynamic environments.
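One plausible reading of the dynamic-point removal step is a per-region prior combined with a geometric residual. The sketch below is an assumption-laden illustration: the region priors, the 50/50 weighting, and the threshold are invented for the example and are not the paper's values.

```python
# Illustrative priors for the three region types named in the abstract.
REGION_PRIOR = {"low": 0.1, "suspicious": 0.5, "high": 0.9}

def keep_feature_point(region, epipolar_error, err_scale=2.0, threshold=0.6):
    """Combine a region's dynamic prior with a geometric residual.

    epipolar_error: distance of the point to its epipolar line (pixels);
    a large residual suggests the point moved between frames.
    """
    geometric_score = min(epipolar_error / err_scale, 1.0)
    dynamic_prob = 0.5 * REGION_PRIOR[region] + 0.5 * geometric_score
    return dynamic_prob < threshold
```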
Remote sensing scene image classification is a prominent research area within remote sensing. Deep learning-based methods have been extensively utilized and have shown significant advancements in this field. Recent progress in these methods primarily focuses on enhancing feature representation capabilities to improve performance. The challenge lies in the limited spatial resolution of small-sized remote sensing images, as well as image blurring and sparse data. These factors contribute to lower accuracy in current deep learning models. Additionally, deeper networks with attention-based modules require a substantial number of network parameters, leading to high computational costs and memory usage. In this article, we introduce ERSNet, a lightweight, novel attention-guided network for remote sensing scene image classification. ERSNet is constructed using a depthwise separable convolutional network and incorporates an attention mechanism. It utilizes spatial attention, channel attention, and channel self-attention to enhance feature representation and accuracy, while also reducing computational complexity and memory usage. Experimental results indicate that, compared to existing state-of-the-art methods, ERSNet has a significantly lower parameter count of only 1.2 M and reduced FLOPs. It achieves the highest classification accuracy of 99.14% on the EuroSAT dataset, demonstrating its suitability for application on mobile terminal devices. Furthermore, experimental results on the UCMerced land use dataset and the Brazilian coffee scene dataset also confirm the strong generalization ability of this method.
In the dynamic scenes of autonomous vehicles, the depth estimation of monocular cameras often faces the problem of inaccurate edge depth estimation. To solve this problem, we propose an unsupervised monocular depth estimation model based on edge enhancement, aimed specifically at the depth perception challenges of dynamic scenes. The model consists of two core networks, a depth prediction network and a motion estimation network, both of which adopt an encoder-decoder architecture. The depth prediction network is based on a ResNet18 U-Net structure and is responsible for generating the depth map of the scene. The motion estimation network is based on a FlowNet U-Net structure and focuses on motion estimation of dynamic targets. In the decoding stage of the motion estimation network, we introduce an edge-enhanced decoder that integrates a convolutional block attention module (CBAM) to strengthen recognition of the edge features of moving objects. In addition, we design a strip convolution module to improve the model's ability to capture discrete moving targets. To further improve performance, we propose a novel edge regularization method based on the Laplace operator, which effectively accelerates the convergence of the model. Experimental results on the KITTI and Cityscapes datasets show that, compared with current advanced dynamic unsupervised monocular models, the proposed model significantly improves depth estimation accuracy and convergence speed. Specifically, the root mean square error (RMSE) is reduced by 4.8% compared with the DepthMotion algorithm, while training convergence speed is increased by 36%, demonstrating the model's superior performance on depth estimation in dynamic scenes.
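A minimal sketch of a Laplace-operator edge regularizer in the spirit described above. The 3x3 kernel, the edge-aware exponential weighting, and the loss form are common choices assumed here for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Discrete 3x3 Laplacian kernel, shaped for a single-channel conv.
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def laplacian_edge_loss(depth, image_gray):
    """Encourage depth edges to align with image edges.

    depth, image_gray: (B, 1, H, W) tensors.
    """
    d_edges = F.conv2d(depth, LAPLACIAN, padding=1).abs()
    i_edges = F.conv2d(image_gray, LAPLACIAN, padding=1).abs()
    # Penalize depth edges where the image is smooth (edge-aware weight).
    return (d_edges * torch.exp(-i_edges)).mean()
```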
Plant species diversity is one of the most widely used indicators in ecosystem management. The relation of species diversity to sample plot size has not been fully determined for Oriental beech (Fagus orientalis Lipsky) forests, a widespread species in the Hyrcanian region. Assessing the impacts of plot size on species diversity is fundamental for an ecosystem-based approach to forest management. This study determined the relation between species diversity and plot size by investigating species richness and abundance of both the canopy and the forest floor. Two hundred and fifty-six sample plots of 625 m² each were laid out in a grid pattern across 16 ha. Base plots (25 m × 25 m) were aggregated at different scales to investigate the effect of plot size on species diversity, giving nine plot sizes of 0.063, 0.125, 0.188, 0.250, 0.375, 0.500, 0.563, 0.750 and 1 ha. Ten biodiversity indices were calculated. The results show that species richness in the different plot sizes was less than the actual value. The estimated value of the Simpson species diversity index was not significantly different from actual values for both canopy and forest floor diversity, and the coefficient of variation of this index was lowest for the 1-ha sample plot. Inverse Hill species diversity showed no significant difference across plot sizes greater than 0.500 ha. The modified Hill evenness index for the 1-ha sample size correctly estimated the 16-ha value for both canopy and forest floor, although estimation precision was higher for the canopy layer. All plots greater than 0.250 ha provided an accurate estimate of the Camargo evenness index for forest floor species, but the index was inaccurate across plot sizes for the canopy layer. The results indicate that the same plot size did not have the same effect across species diversity measurements. Correct estimation of species diversity therefore depends on selecting appropriate indicators and plot sizes, which increases the accuracy of the estimate and can reduce the cost and time of biodiversity management.
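For readers unfamiliar with the indices compared above, two of them have simple closed forms. A minimal sketch computing the (Gini-)Simpson index and the inverse-Simpson Hill number from per-species abundance counts; the other eight indices in the study are not reproduced here.

```python
def simpson_diversity(abundances):
    """Gini-Simpson index: 1 - sum(p_i^2), the probability that two
    randomly drawn individuals belong to different species."""
    total = sum(abundances)
    return 1.0 - sum((n / total) ** 2 for n in abundances)

def inverse_simpson(abundances):
    """Hill number of order 2: 1 / sum(p_i^2), an 'effective number
    of species' weighted toward dominant species."""
    total = sum(abundances)
    return 1.0 / sum((n / total) ** 2 for n in abundances)

# Example: three species with counts 50, 30, 20 in a plot.
print(simpson_diversity([50, 30, 20]))  # 0.62
print(inverse_simpson([50, 30, 20]))    # ~2.63
```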
This paper presents a comprehensive framework that enables communication scene recognition through deep learning and multi-sensor fusion. The study addresses the challenge that current communication scene recognition methods struggle to adapt to dynamic environments, as they typically rely on post-response mechanisms that fail to detect scene changes before users experience latency. The proposed framework leverages data from multiple smartphone sensors, including acceleration sensors, gyroscopes, magnetic field sensors, and orientation sensors, to identify different communication scenes, such as walking, running, cycling, and various modes of transportation. Extensive experimental comparison with existing methods on the open-source SHL-2018 dataset confirmed the superior performance of our approach in terms of F1 score and processing speed. Additionally, tests using a Microsoft Surface Pro tablet and a self-collected Beijing-2023 dataset validated the framework's efficiency and generalization capability. The results show that our framework achieved an F1 score of 95.15% on SHL-2018 and 94.6% on Beijing-2023, highlighting its robustness across different datasets and conditions. Furthermore, the computational complexity and power consumption of the algorithm are moderate, making it suitable for deployment on mobile devices.
Semantic segmentation in street scenes is a crucial technology for autonomous driving to analyze the surrounding environment. In street scenes, issues such as high image resolution caused by wide viewpoints and differences in object scales lead to a decline in real-time performance and difficulties in multi-scale feature extraction. To address this, we propose a bilateral-branch real-time semantic segmentation method based on semantic information distillation (BSDNet) for street scene images. The BSDNet consists of a Feature Conversion Convolutional Block (FCB), a Semantic Information Distillation Module (SIDM), and a Deep Aggregation Atrous Convolution Pyramid Pooling (DASP) module. FCB reduces the semantic gap between the backbone and the semantic branch. SIDM extracts high-quality semantic information from the Transformer branch to reduce computational costs. DASP aggregates information lost in atrous convolutions, effectively capturing multi-scale objects. Extensive experiments conducted on Cityscapes, CamVid, and ADE20K, achieving an accuracy of 81.7% mean Intersection over Union (mIoU) at 70.6 frames per second (FPS) on Cityscapes, demonstrate that our method achieves a better balance between accuracy and inference speed.
Recognizing road scene context from a single image remains a critical challenge for intelligent autonomous driving systems, particularly in dynamic and unstructured environments. While recent advancements in deep learning have significantly enhanced road scene classification, simultaneously achieving high accuracy, computational efficiency, and adaptability across diverse conditions continues to be difficult. To address these challenges, this study proposes HybridLSTM, a novel and efficient framework that integrates deep learning-based, object-based, and handcrafted feature extraction methods within a unified architecture. HybridLSTM is designed to classify four distinct road scene categories, crosswalk (CW), highway (HW), overpass/tunnel (OP/T), and parking (P), by leveraging multiple publicly available datasets, including Places-365, BDD100K, LabelMe, and KITTI, thereby promoting domain generalization. The framework fuses object-level features extracted using YOLOv5 and VGG19, scene-level global representations obtained from a modified VGG19, and fine-grained texture features captured through eight handcrafted descriptors. This hybrid feature fusion enables the model to capture both semantic context and low-level visual cues, which are critical for robust scene understanding. To model spatial arrangements and latent sequential dependencies present even in static imagery, the combined features are processed through a Long Short-Term Memory (LSTM) network, allowing the extraction of discriminative patterns across heterogeneous feature spaces. Extensive experiments conducted on 2725 annotated road scene images, with an 80:20 training-to-testing split, validate the effectiveness of the proposed model. HybridLSTM achieves a classification accuracy of 96.3%, a precision of 95.8%, a recall of 96.1%, and an F1-score of 96.0%, outperforming several existing state-of-the-art methods. These results demonstrate the robustness, scalability, and generalization capability of HybridLSTM across varying environments and scene complexities. Moreover, the framework is optimized to balance classification performance with computational efficiency, making it highly suitable for real-time deployment in embedded autonomous driving systems. Future work will focus on extending the model to multi-class detection within a single frame and optimizing it further for edge-device deployments to reduce computational overhead in practical applications.
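A minimal sketch of the fuse-then-LSTM idea described above. The feature dimension, the choice to stack the three feature groups as a length-3 sequence, and classifying from the last time step are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HybridFusionLSTM(nn.Module):
    """Treat the three feature groups as a short sequence and let an
    LSTM learn dependencies across the heterogeneous feature spaces."""

    def __init__(self, feat_dim=512, hidden=256, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # CW, HW, OP/T, P

    def forward(self, obj_feat, scene_feat, texture_feat):
        # Each input: (B, feat_dim); stack into a length-3 sequence.
        seq = torch.stack([obj_feat, scene_feat, texture_feat], dim=1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])  # classify from the last step
```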
The autonomous landing guidance of fixed-wing aircraft in unknown structured scenes presents a substantial technological challenge, particularly regarding the effectiveness of monocular visual relative pose estimation. This study proposes a novel airborne monocular visual estimation method based on structured scene features to address this challenge. First, a multitask neural network model is established for segmentation, depth estimation, and slope estimation on monocular images, and a comprehensive three-dimensional information metric for monocular images is designed, encompassing length, span, flatness, and slope information. Subsequently, structured edge features are leveraged to filter candidate landing regions adaptively, and the three-dimensional information metric is used to identify the optimal landing region accurately and efficiently. Finally, sparse two-dimensional key points are used to parameterize the optimal landing region for the first time, and a high-precision relative pose estimate is achieved. Additional measurement information is introduced to provide autonomous landing guidance between the aircraft and the optimal landing region. Experimental results on both synthetic and real data demonstrate the effectiveness of the proposed method for monocular pose estimation in autonomous aircraft landing guidance in unknown structured scenes.
Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily due to the elimination of ground-truth depth dependence. However, the prevailing architectures in this domain suffer from inherent limitations: existing pose network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios due to dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically-informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
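Contribution (3) concerns SE(3) parameterization. The standard construction maps a 6-vector twist through the exponential map, sketched below; this is the textbook formula (Rodrigues' rotation plus the left Jacobian for translation), not necessarily the authors' streamlined variant.

```python
import torch

def se3_exp(xi):
    """Map a 6-DoF twist xi = (v, w) to a 4x4 rigid-body transform.

    Small rotations are handled crudely via a clamp; production code
    would switch to a Taylor expansion near theta = 0.
    """
    v, w = xi[:3], xi[3:]
    theta = w.norm().clamp(min=1e-8)
    K = torch.zeros(3, 3)            # skew-symmetric matrix of the axis
    K[0, 1], K[0, 2] = -w[2], w[1]
    K[1, 0], K[1, 2] = w[2], -w[0]
    K[2, 0], K[2, 1] = -w[1], w[0]
    K = K / theta                    # normalize to a unit rotation axis
    I = torch.eye(3)
    # Rodrigues' formula for the rotation part.
    R = I + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)
    # Left Jacobian maps the translation component of the twist.
    V = I + ((1 - torch.cos(theta)) / theta) * K + \
        ((theta - torch.sin(theta)) / theta) * (K @ K)
    T = torch.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T
```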
Understanding local variation in forest biomass allows for a better evaluation of broad-scale patterns and interpretation of forest ecosystems' role in carbon dynamics. This study focuses on patterns of aboveground tree biomass within a fully censused 20-ha forest plot in a temperate forest of northern Alabama, USA. We evaluated the relationship between biomass and topography using ridge and valley landforms along with digitally derived moisture and solar radiation indices. Every live woody stem over 1 cm diameter at breast height within this plot was mapped, measured, and identified to species in 2019-2022, and diameter data were used along with species-specific wood density to map the aboveground biomass at the scale of 20 m × 20 m quadrats. The aboveground tree biomass was 211 Mg·ha⁻¹. Other than small stream areas that experienced recent natural disturbances, total stand biomass was not associated with landform or topographic indices. Dominant species, in contrast, had strong associations with topography. American beech (Fagus grandifolia) and yellow-poplar (Liriodendron tulipifera) dominated the valley landform, with 37% and 54% greater biomass in the valley than their plot averages, respectively. Three other dominant species, white oak (Quercus alba), southern shagbark hickory (Carya carolinae-septentrionalis), and white ash (Fraxinus americana), were more abundant on slopes and benches, thus partitioning the site. Of the six dominant species, only sugar maple (Acer saccharum) was not associated with landform. Moreover, both topographic wetness and potential radiation indices were significant predictors of dominant species biomass within each of the landforms. The study highlights the need to consider species when examining forest productivity across a range of site conditions.
The navigation system plays a pivotal role in guiding aircraft along designated routes, ensuring precise and punctual arrival at destinations. Integrating scene matching with an inertial navigation system enhances the capability to reliably guarantee successful accomplishment of flight missions. Nonetheless, ensuring reliability in scene matching encounters significant challenges in areas characterized by repetitive or weak textures. To tackle these challenges, we propose a novel method to assess the reliability of scene matching based on the distinctive characteristics of correlation peaks. The proposed method leverages the fact that the similarity of the optimal matching result is significantly higher than that of the surrounding area, and three novel indicators (relative height, slope of a correlation peak, and ratio of a sub-peak to the main peak) are determined to jointly evaluate the reliability of scene matching. The method matches a real-time image against a reference image to generate a correlation surface. A correlation peak is then obtained by extracting the portion of the correlation surface exhibiting a significant gradient, and the matching reliability is determined by considering the relative height, slope, and sub-peak ratio collectively. Exhaustive experimental results on two sets of data demonstrate that the proposed method significantly outperforms traditional approaches in terms of precision, recall, and F1-score. These experiments also establish the efficacy of the proposed method in achieving reliable matching in challenging environments characterized by repetitive and weak textures. This enhancement holds the potential to significantly elevate scene-matching-based navigation.
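A minimal sketch of two of the three indicators. The exclusion window, the normalization by the surface statistics, and the omission of the slope computation are assumptions made for the example; the paper defines its indicators on the extracted high-gradient peak region.

```python
import numpy as np

def peak_reliability(corr, exclude=5):
    """Compute simple reliability cues from a correlation surface.

    corr: 2D array of matching scores between the real-time image
    and the reference image.
    """
    flat = corr.ravel()
    main_idx = flat.argmax()
    main = flat[main_idx]
    r, c = np.unravel_index(main_idx, corr.shape)

    # Relative height: main peak versus the rest of the surface.
    relative_height = (main - corr.mean()) / (corr.std() + 1e-8)

    # Sub-peak ratio: mask a window around the main peak, then
    # compare the best remaining score to the main peak.
    masked = corr.copy()
    masked[max(r - exclude, 0):r + exclude + 1,
           max(c - exclude, 0):c + exclude + 1] = -np.inf
    sub_ratio = masked.max() / main

    return relative_height, sub_ratio
```

A high relative height together with a low sub-peak ratio suggests a reliable match; repetitive textures typically push the sub-peak ratio toward 1.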
3D scene understanding and reconstruction aims to obtain a concise scene representation from images and to reconstruct the complete scene, including the scene layout, object bounding boxes, and shapes. Existing holistic scene understanding methods primarily recover scenes from single images, with a focus on indoor scenes. Due to the complexity of the real world, the information provided by a single image is limited, resulting in issues such as object occlusion and omission. Furthermore, data captured from outdoor scenes exhibits sparsity, strong temporal dependencies, and a lack of annotations. Consequently, the task of understanding and reconstructing outdoor scenes is highly challenging. The authors propose a sparse multi-view images-based 3D scene reconstruction framework (SMSR). It divides the scene reconstruction task into three stages: initial prediction, refinement, and fusion. The first two stages extract 3D scene representations from each viewpoint, while the final stage selects, calibrates, and fuses object positions and orientations across viewpoints. SMSR effectively addresses the issue of object omission by utilizing small-scale sequential scene information. Experimental results on the general outdoor scene dataset UrbanScene3D-Art Sci and our proprietary dataset, Software College Aerial Time-series Images, demonstrate that SMSR achieves superior performance in scene understanding and reconstruction.
Deep learning significantly improves the accuracy of remote sensing image scene classification, benefiting from large-scale datasets. However, annotating remote sensing images is time-consuming and difficult even for experts, and deep neural networks trained on a few labeled samples usually generalize poorly to new unseen images. In this paper, we propose a semi-supervised approach for remote sensing image scene classification based on prototype-based consistency, which exploits massive unlabeled images. To this end, we first propose a feature enhancement module to extract discriminative features, achieved by focusing the model on the foreground areas. Then, a prototype-based classifier is introduced to the framework and used to acquire consistent feature representations. We conduct a series of experiments on NWPU-RESISC45 and the Aerial Image Dataset (AID). Our method improves on the state-of-the-art (SOTA) method in terms of accuracy, from 92.03% to 93.08% on NWPU-RESISC45 and from 94.25% to 95.24% on AID.
Camouflaged people are highly adept at actively concealing themselves by effectively exploiting cover and the surrounding environment. Despite advances in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time methods for accurately and efficiently detecting small-sized camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflage detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting spatial-spectral target information. Besides, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. Through experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
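The band-selection step, keeping strong features while minimizing inter-band correlation, can be approximated with a greedy trade-off. The sketch below assumes a (bands, H, W) image cube and a precomputed per-band feature score; it is an illustration, not the paper's actual selection procedure.

```python
import numpy as np

def select_bands(cube, scores, k=12):
    """Greedily pick k bands, trading per-band score against
    redundancy with already-selected bands.

    cube:   (B, H, W) multispectral image stack
    scores: (B,) per-band feature-strength scores
    """
    scores = np.asarray(scores, dtype=float)
    B = cube.shape[0]
    flat = cube.reshape(B, -1).astype(np.float64)
    corr = np.abs(np.corrcoef(flat))          # (B, B) inter-band correlation
    chosen = [int(np.argmax(scores))]
    while len(chosen) < k:
        redundancy = corr[:, chosen].max(axis=1)
        gain = scores - redundancy            # reward score, punish overlap
        gain[chosen] = -np.inf                # never re-pick a band
        chosen.append(int(np.argmax(gain)))
    return chosen
```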
Research on neural radiance fields for novel view synthesis has experienced explosive growth with the development of new models and extensions, and NeRF (Neural Radiance Fields) algorithms suitable for underwater scenes and scattering media are also evolving. However, existing underwater 3D reconstruction systems still face challenges such as long training times and low rendering efficiency. This paper proposes an improved underwater 3D reconstruction system to achieve rapid and high-quality 3D reconstruction. First, we enhance underwater videos captured by a monocular camera to correct the image quality degradation caused by the physical properties of the water medium and to ensure consistency of the enhancement across frames. Then, we perform keyframe selection to optimize resource usage and reduce the impact of dynamic objects on the reconstruction results. After pose estimation using COLMAP, the selected keyframes undergo 3D reconstruction using neural radiance fields (NeRF) based on multi-resolution hash encoding for model construction and rendering. In terms of image enhancement, our method is optimized for certain scenarios, demonstrating effectiveness in image enhancement and better continuity between consecutive frames of the same data. In terms of 3D reconstruction, our method achieves a peak signal-to-noise ratio (PSNR) of 18.40 dB and a structural similarity (SSIM) of 0.6677, indicating a good balance between operational efficiency and reconstruction quality.
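For reference, the PSNR figure quoted above follows directly from the mean squared error between rendered and ground-truth frames; a minimal sketch (SSIM requires the windowed formulation and is omitted here).

```python
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio between a rendered view and the
    ground-truth frame, both arrays scaled to [0, max_val]."""
    mse = np.mean((rendered - reference) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# PSNR = 18.40 dB corresponds to an RMSE of about 0.12 on a [0, 1]
# scale: 10 * log10(1 / 0.12**2) ≈ 18.4.
```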
Scene text detection is an important task in computer vision. In this paper, we present YOLOv5 Scene Text (YOLOv5ST), an optimized architecture based on YOLOv5 v6.0 tailored for fast scene text detection. Our primary goal is to enhance inference speed without sacrificing significant detection accuracy, thereby enabling robust performance on resource-constrained devices like drones, closed-circuit television cameras, and other embedded systems. To achieve this, we propose key modifications to the network architecture to lighten the original backbone and improve feature aggregation, including replacing standard convolution with depth-wise convolution, adopting the C2 sequence module in place of C3, employing Spatial Pyramid Pooling Global (SPPG) instead of Spatial Pyramid Pooling Fast (SPPF), and integrating a Bi-directional Feature Pyramid Network (BiFPN) into the neck. Experimental results demonstrate a remarkable 26% improvement in inference speed compared to the baseline, with only marginal reductions of 1.6% and 4.2% in mean average precision (mAP) at the intersection over union (IoU) thresholds of 0.5 and 0.5:0.95, respectively. Our work represents a significant advancement in scene text detection, striking a balance between speed and accuracy, making it well-suited for performance-constrained environments.
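The first modification listed, replacing standard convolution with depth-wise convolution, is sketched below with illustrative channel counts; the parameter arithmetic in the comment shows where the savings come from.

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, k=3):
    """Depth-wise conv (one filter per channel) followed by a 1x1
    point-wise conv; parameter count drops by roughly a factor of
    k*k versus a standard k x k convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),
        nn.Conv2d(in_ch, out_ch, 1),
    )

# e.g. 256 -> 256 with k=3: a standard conv has 256*256*9 ≈ 590K weights,
# while the separable version has 256*9 + 256*256 ≈ 68K.
```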
The proposed robust reversible watermarking algorithm addresses the compatibility challenge between robustness and reversibility in existing video watermarking techniques by leveraging scene smoothness to group video frames. Grounded in the H.264 video coding standard, the algorithm first employs traditional robust watermark stitching technology to embed watermark information in the low-frequency coefficient domain of the U channel. It then uses histogram migration in the high-frequency coefficient domain of the U channel to embed auxiliary information, enabling successful watermark extraction and lossless recovery of the original video content. Experimental results demonstrate the algorithm's strong imperceptibility: each embedded frame in the experimental videos achieves a mean peak signal-to-noise ratio of 49.3830 dB and a mean structural similarity of 0.9996, improving on the three comparison algorithms by 7.59% and 0.4% on average across the two indexes. At the same time, the proposed algorithm is robust to both offline and online attacks. Under offline attacks, the average normalized correlation coefficient between the extracted watermark and the original watermark is 0.9989 and the average bit error rate is 0.0089; under online attacks, the normalized correlation coefficient is 0.8840 and the mean bit error rate is 0.2269, improving on the three comparison algorithms by 1.27% and 18.16% on average. Furthermore, the algorithm exhibits low computational complexity, with mean encoding and decoding time differentials during experimental video processing of 3.934 and 2.273 s, respectively, underscoring its practical utility.
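A minimal sketch of histogram-shifting (histogram migration) embedding on an integer coefficient array. The peak/zero-bin selection is simplified, and this is not the paper's H.264 U-channel implementation; reversibility requires transmitting the (peak, zero) pair as side information.

```python
import numpy as np

def hs_embed(coeffs, bits):
    """Embed bits at the histogram peak; values between the peak and
    the nearest empty bin above it are shifted by one to make room.
    coeffs: 1D integer array; bits: iterable of 0/1."""
    hist = np.bincount(coeffs - coeffs.min())
    peak = int(hist.argmax()) + coeffs.min()
    zero = peak + 1
    while np.any(coeffs == zero):           # find the nearest empty bin
        zero += 1
    out = coeffs.copy()
    out[(out > peak) & (out < zero)] += 1   # shift to vacate bin peak+1
    it = iter(bits)
    for i in np.flatnonzero(coeffs == peak):
        out[i] += next(it, 0)               # peak carries 0, peak+1 carries 1
    return out, (peak, zero)
```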
Crime scene investigation (CSI) images are key evidence carriers during criminal investigation, and CSI image retrieval can assist the police in obtaining criminal clues. Moreover, with the rapid development of deep learning, the data-driven paradigm has become the mainstream approach to CSI image feature extraction and representation, and in this process datasets provide effective support for CSI retrieval performance. However, there is a lack of systematic research on CSI image retrieval methods and datasets. We therefore present an overview of existing work on one-class and multi-class CSI image retrieval based on deep learning. According to this survey, based on their technical functionalities and implementation methods, CSI image retrieval approaches fall roughly into five categories: feature representation, metric learning, generative adversarial networks, autoencoder networks, and attention networks. Furthermore, we analyze the remaining challenges and discuss future work directions in this field.
Funding (FPN, 3D semantic scene graph prediction): supported by the Glocal University 30 Project Fund of Gyeongsang National University in 2025.
Funding (dynamic SLAM with object detection and regional dynamic probability): the National Natural Science Foundation of China (No. 62063006); the Guangxi Natural Science Foundation under Grants (Nos. 2023GXNSFAA026025, AA24010001); the Innovation Fund of Chinese Universities Industry-University-Research (ID: 2023RY018); the Special Guangxi Industry and Information Technology Department, Textile and Pharmaceutical Division (ID: 2021 No. 231); the Special Research Project of Hechi University (ID: 2021GCC028); and the Key Laboratory of AI and Information Processing, Education Department of Guangxi Zhuang Autonomous Region (Hechi University), No. 2024GXZDSY009.
Funding (edge-enhanced unsupervised monocular depth estimation): funded by the Yangtze River Delta Science and Technology Innovation Community Joint Research Project (2023CSJGG1600); the Natural Science Foundation of Anhui Province (2208085MF173); and the Wuhu "ChiZhu Light" Major Science and Technology Project (2023ZD01, 2023ZD03).
Funding (plot size and species diversity study): funded by Gorgan University of Agricultural Sciences and Natural Resources (grant number 9318124503).
Funding (communication scene recognition framework): supported by the National 2011 Collaborative Innovation Center of Wireless Communication Technologies under Grant 2242022k60006.
Funding (BSDNet): supported in part by the National Natural Science Foundation of China [Grant number 62471075]; the Major Science and Technology Project Grant of the Chongqing Municipal Education Commission [Grant number KJZD-M202301901]; and the Graduate Innovation Fund of Chongqing [gzlcx20253235].
Funding (monocular visual landing guidance): co-supported by the Science and Technology Innovation Program of Hunan Province, China (No. 2023RC3023) and the National Natural Science Foundation of China (No. 12272404).
Funding (dynamic-pose self-supervised depth estimation): supported in part by the National Natural Science Foundation of China under Grant 62071345.
Funding (aboveground tree biomass study): supported in part by the intramural research program of the US Department of Agriculture, National Institute of Food and Agriculture, Evans-Allen #1024525 and Capacity Building Grant #006531; in part by the US National Science Foundation RII Track 2 FEC: Leveraging Intelligent Informatics and Smart Data for Improved Understanding of Northern Forest Ecosystem Resiliency (INSPIRES) #1920908; and by The Lyndhurst Foundation.
Funding (scene matching reliability assessment): supported by the National Natural Science Foundation of China (No. 42271446).
Funding: National Key R&D Program of China (Grant/Award Number: 2021YFC3300203); TaiShan Scholars Program (Grant/Award Number: tsqn202211289); Oversea Innovation Team Project of the "20 Regulations for New Universities" funding program of Jinan (Grant/Award Number: 2021GXRC073); Excellent Youth Scholars Program of Shandong Province (Grant/Award Number: 2022HWYQ-048).
Abstract: 3D scene understanding and reconstruction aims to obtain a concise scene representation from images and to reconstruct the complete scene, including the scene layout and the objects' bounding boxes and shapes. Existing holistic scene understanding methods primarily recover scenes from single images, with a focus on indoor scenes. Due to the complexity of the real world, the information provided by a single image is limited, resulting in issues such as object occlusion and omission. Furthermore, data captured from outdoor scenes exhibit sparsity, strong temporal dependencies, and a lack of annotations. Consequently, the task of understanding and reconstructing outdoor scenes is highly challenging. The authors propose a sparse multi-view images-based 3D scene reconstruction framework (SMSR). It divides the scene reconstruction task into three stages: initial prediction, refinement, and fusion. The first two stages extract 3D scene representations from each viewpoint, while the final stage performs selection, calibration, and fusion of object positions and orientations across different viewpoints. SMSR effectively addresses the issue of object omission by utilizing small-scale sequential scene information. Experimental results on the general outdoor scene dataset UrbanScene3D-Art Sci and our proprietary dataset, Software College Aerial Time-series Images, demonstrate that SMSR achieves superior performance in scene understanding and reconstruction.
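To make the fusion stage concrete, here is a minimal, hypothetical sketch of cross-view object fusion: candidates already calibrated into a shared world frame are greedily grouped by distance and merged by score-weighted averaging. The data layout, distance threshold, and merging rule are illustrative assumptions rather than SMSR's actual procedure.

```python
import numpy as np

def fuse_detections(views, dist_thresh=2.0):
    """Greedy cross-view fusion of 3-D object candidates (illustrative).

    views: list of per-view detection lists; each detection is a dict
    with 'center' (np.array of shape (3,)), 'yaw' (radians), and
    'score', all assumed pre-calibrated into one world frame.
    """
    pool = [d for v in views for d in v]
    fused, used = [], [False] * len(pool)
    for i, d in enumerate(pool):
        if used[i]:
            continue
        group, used[i] = [d], True
        for j in range(i + 1, len(pool)):
            if not used[j] and np.linalg.norm(pool[j]['center'] - d['center']) < dist_thresh:
                group.append(pool[j])
                used[j] = True
        w = np.array([g['score'] for g in group])
        w = w / w.sum()
        fused.append({
            'center': sum(wi * g['center'] for wi, g in zip(w, group)),
            # Naive yaw averaging; ignores angle wrap-around for simplicity.
            'yaw': float(sum(wi * g['yaw'] for wi, g in zip(w, group))),
            'score': float(max(g['score'] for g in group)),
        })
    return fused
```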
Funding: Supported in part by the National Natural Science Foundation of China (No. 12302252).
Abstract: Deep learning significantly improves the accuracy of remote sensing image scene classification, benefiting from large-scale datasets. However, annotating remote sensing images is time-consuming and difficult even for experts. Deep neural networks trained on only a few labeled samples usually generalize poorly to new, unseen images. In this paper, we propose a semi-supervised approach for remote sensing image scene classification based on prototype-based consistency, which exploits massive unlabeled images. To this end, we first propose a feature enhancement module that extracts discriminative features by focusing the model on foreground areas. Then, a prototype-based classifier is introduced into the framework to acquire consistent feature representations. We conduct a series of experiments on NWPU-RESISC45 and the Aerial Image Dataset (AID). In terms of accuracy, our method improves on the state-of-the-art (SOTA) method from 92.03% to 93.08% on NWPU-RESISC45 and from 94.25% to 95.24% on AID.
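A prototype-based classifier of the kind described can be sketched in a few lines: class prototypes are means of normalized labeled features, and predictions are temperature-scaled cosine similarities to those prototypes. The PyTorch functions below are a minimal sketch; the feature extractor, temperature value, and exact consistency loss are assumptions.

```python
import torch
import torch.nn.functional as F

def build_prototypes(features, labels, num_classes):
    """Class prototypes = mean of L2-normalised labelled features.

    Assumes every class appears at least once in the labelled batch.
    """
    feats = F.normalize(features, dim=1)
    protos = torch.zeros(num_classes, feats.size(1), device=feats.device)
    for c in range(num_classes):
        protos[c] = feats[labels == c].mean(dim=0)
    return F.normalize(protos, dim=1)

def prototype_logits(features, prototypes, tau=0.1):
    """Cosine similarity to each prototype, scaled by temperature tau.

    A consistency loss can then pull the predictions for two augmented
    views of the same unlabelled image together (exact form assumed).
    """
    return F.normalize(features, dim=1) @ prototypes.t() / tau
```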
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62005049); the Natural Science Foundation of Fujian Province (Grant Nos. 2020J01451, 2022J05113); and the Education and Scientific Research Program for Young and Middle-aged Teachers in Fujian Province (Grant No. JAT210035).
Abstract: Camouflaged people are extremely adept at actively concealing themselves by effectively exploiting cover and the surrounding environment. Despite advances in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still no effective real-time method for accurately and efficiently detecting small-size camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflage detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting spatial-spectral target information. In addition, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. In experiments on MSCPD, MS-YOLO achieves a mean average precision of 94.31% and real-time detection at 65 frames per second, confirming the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
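Band-subset selection that trades feature strength against inter-band redundancy can be approximated with a simple greedy heuristic, sketched below. Variance as the strength proxy and Pearson correlation as the redundancy measure are assumptions; the paper's actual selection criterion may differ.

```python
import numpy as np

def select_bands(cube, k=12):
    """Greedily pick k bands balancing strength against redundancy.

    cube: (bands, H, W) multispectral stack. Band 'strength' is
    approximated by per-band variance; redundancy by the maximum
    absolute Pearson correlation with already-selected bands.
    """
    bands = cube.shape[0]
    flat = cube.reshape(bands, -1).astype(np.float64)
    corr = np.abs(np.corrcoef(flat))          # (bands, bands)
    strength = flat.var(axis=1)

    selected = [int(np.argmax(strength))]     # seed with strongest band
    while len(selected) < k:
        redundancy = corr[:, selected].max(axis=1)
        score = strength / (redundancy + 1e-8)
        score[selected] = -np.inf             # never re-pick a band
        selected.append(int(np.argmax(score)))
    return sorted(selected)
```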
Funding: This work was supported by the Key Research and Development Program of Hainan Province (Grant Nos. ZDYF2023GXJS163, ZDYF2024GXJS014); the National Natural Science Foundation of China (NSFC) (Grant Nos. 62162022, 62162024); the Major Science and Technology Project of Hainan Province (Grant No. ZDKJ2020012); the Hainan Provincial Natural Science Foundation of China (Grant No. 620MS021); and the Youth Foundation Project of the Hainan Natural Science Foundation (Grant No. 621QN211).
Abstract: Research on neural radiance fields for novel view synthesis has experienced explosive growth with the development of new models and extensions. The NeRF (Neural Radiance Fields) algorithm is also evolving to suit underwater scenes and scattering media. Existing underwater 3D reconstruction systems still face challenges such as long training times and low rendering efficiency. This paper proposes an improved underwater 3D reconstruction system that achieves rapid, high-quality 3D reconstruction. First, we enhance underwater videos captured by a monocular camera to correct the image-quality degradation caused by the physical properties of the water medium and to ensure consistency of the enhancement across frames. Then, we perform keyframe selection to optimize resource usage and reduce the impact of dynamic objects on the reconstruction results. After pose estimation using COLMAP, the selected keyframes undergo 3D reconstruction using neural radiance fields based on multi-resolution hash encoding for model construction and rendering. In terms of image enhancement, our method is optimized for certain scenarios, demonstrating effective enhancement and better continuity between consecutive frames of the same sequence. In terms of 3D reconstruction, our method achieves a peak signal-to-noise ratio (PSNR) of 18.40 dB and a structural similarity (SSIM) of 0.6677, indicating a good balance between operational efficiency and reconstruction quality.
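Keyframe selection of the kind described, which favors sharp frames and discards near-duplicates, might look like the OpenCV sketch below. The sharpness and novelty measures and both thresholds are placeholder assumptions, not the system's actual rules.

```python
import cv2

def select_keyframes(video_path, blur_thresh=100.0, diff_thresh=0.15):
    """Pick sharp, sufficiently novel frames as keyframes (illustrative).

    Sharpness = variance of the Laplacian; novelty = mean absolute
    difference to the last kept frame. Both thresholds are placeholders.
    """
    cap = cv2.VideoCapture(video_path)
    keyframes, last = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_thresh:
            continue  # too blurry to help reconstruction
        if last is not None:
            diff = cv2.absdiff(gray, last).mean() / 255.0
            if diff < diff_thresh:
                continue  # too similar to the previous keyframe
        keyframes.append(frame)
        last = gray
    cap.release()
    return keyframes
```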
Funding: The National Natural Science Foundation of P.R. China (42075130) and Nari Technology Co., Ltd. (4561655965).
Abstract: Scene text detection is an important task in computer vision. In this paper, we present YOLOv5 Scene Text (YOLOv5ST), an optimized architecture based on YOLOv5 v6.0 tailored for fast scene text detection. Our primary goal is to enhance inference speed without sacrificing significant detection accuracy, thereby enabling robust performance on resource-constrained devices like drones, closed-circuit television cameras, and other embedded systems. To achieve this, we propose key modifications to the network architecture that lighten the original backbone and improve feature aggregation, including replacing standard convolution with depth-wise convolution, adopting the C2 sequence module in place of C3, employing Spatial Pyramid Pooling Global (SPPG) instead of Spatial Pyramid Pooling Fast (SPPF), and integrating a Bi-directional Feature Pyramid Network (BiFPN) into the neck. Experimental results demonstrate a remarkable 26% improvement in inference speed compared to the baseline, with only marginal reductions of 1.6% and 4.2% in mean average precision (mAP) at intersection-over-union (IoU) thresholds of 0.5 and 0.5:0.95, respectively. Our work represents a significant advancement in scene text detection, striking a balance between speed and accuracy that makes it well-suited for performance-constrained environments.
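The depth-wise replacement of standard convolution is the most self-contained of these modifications. A minimal PyTorch sketch of such a block follows; the exact layer layout inside YOLOv5ST (normalization and activation placement) is assumed.

```python
import torch.nn as nn

class DWConv(nn.Module):
    """Depth-wise separable replacement for a standard k x k convolution.

    Roughly cuts multiply-adds per output pixel from k*k*Cin*Cout to
    k*k*Cin + Cin*Cout, which is the kind of backbone lightening the
    paper applies (the exact YOLOv5ST block layout may differ).
    """
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        # Spatial filtering per input channel, then a 1x1 channel mix.
        self.dw = nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))
```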
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62202496 and 62272478, and by the Basic Frontier Innovation Project of the Engineering University of the People's Armed Police under Grants WJY202314 and WJY202221.
Abstract: The proposed robust reversible watermarking algorithm addresses the compatibility challenge between robustness and reversibility in existing video watermarking techniques by leveraging scene smoothness to group video frames. Grounded in the H.264 video coding standard, the algorithm first employs traditional robust watermark stitching technology to embed watermark information in the low-frequency coefficient domain of the U channel. It then uses histogram-shifting techniques in the high-frequency coefficient domain of the U channel to embed auxiliary information, enabling successful watermark extraction and lossless recovery of the original video content. Experimental results demonstrate the algorithm's strong imperceptibility, with each embedded frame in the experimental videos achieving a mean peak signal-to-noise ratio of 49.3830 dB and a mean structural similarity of 0.9996; compared with the three comparison algorithms, these two indexes improve by 7.59% and 0.4% on average. The algorithm is also robust to both offline and online attacks: under offline attacks, the average normalized correlation coefficient between the extracted and original watermarks is 0.9989 and the average bit error rate is 0.0089; under online attacks, the normalized correlation coefficient is 0.8840 and the mean bit error rate is 0.2269. Compared with the three comparison algorithms, these two indexes improve by 1.27% and 18.16% on average, highlighting the algorithm's robustness. Furthermore, the algorithm exhibits low computational complexity, with mean encoding and decoding time differentials during experimental video processing of 3.934 and 2.273 s, respectively, underscoring its practical utility.
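The reversible, histogram-based embedding of auxiliary information can be illustrated on a bare coefficient array. The sketch below implements classic histogram shifting; it is a generic illustration of the mechanism, not the paper's H.264 U-channel scheme, and the choice of peak value is an assumption.

```python
import numpy as np

def hs_embed(coeffs, bits, peak):
    """Minimal histogram-shifting embedding on integer coefficients.

    coeffs: 1-D int array (e.g., high-frequency coefficients); peak:
    the histogram peak used for embedding. Values > peak shift by +1
    to open a gap; a '1' bit moves a peak value into that gap.
    Capacity equals the number of coefficients equal to peak.
    """
    out, it = coeffs.copy(), iter(bits)
    for i, v in enumerate(out):
        if v > peak:
            out[i] = v + 1
        elif v == peak:
            out[i] = v + next(it, 0)
    return out

def hs_extract(marked, peak):
    """Recover the bit stream and the original coefficients exactly."""
    bits, rec = [], marked.copy()
    for i, v in enumerate(marked):
        if v == peak + 1:
            bits.append(1)
            rec[i] = peak
        elif v == peak:
            bits.append(0)
        elif v > peak + 1:
            rec[i] = v - 1  # undo the +1 shift
    return bits, rec
```

Because every step is invertible, extraction restores the coefficient array bit-exactly, which is what makes the scheme reversible.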
Abstract: Crime scene investigation (CSI) images are key evidence carriers during criminal investigation, and CSI image retrieval can assist the police in obtaining criminal clues. Moreover, with the rapid development of deep learning, the data-driven paradigm has become the mainstream method of CSI image feature extraction and representation, and in this process, datasets provide effective support for CSI retrieval performance. However, there is a lack of systematic research on CSI image retrieval methods and datasets. Therefore, we present an overview of existing work on one-class and multi-class CSI image retrieval based on deep learning. Based on their technical functionalities and implementation methods, CSI image retrieval approaches are roughly classified into five categories: feature representation, metric learning, generative adversarial networks, autoencoder networks, and attention networks. Furthermore, we analyze the remaining challenges and discuss future research directions in this field.