The agility of Internet of Things (IoT) software engineering is benchmarked by its systematic insights for wide application-support infrastructure developments. Such developments focus on reducing the interfacing complexity with heterogeneous devices through applications. To handle the interfacing complexity problem, this article introduces a Semantic Interfacing Obscuration Model (SIOM) for IoT software-engineered platforms. The model accounts for the interfacing obscuration between heterogeneous devices and application interfaces from testing to real-time validation. Based on the level of obscuration between the infrastructure hardware and the end-user software, modifications are performed through device replacement, capacity amendments, or interface bug fixes. These modifications are based on the level of semantic obscuration observed during the application service intervals. The obscuration level is determined using knowledge learning as a progression from hardware to software semantics. The results were computed using specific metrics obtained from the experimental evaluations: an 8.94% reduction in interfacing complexity and a 15.04% improvement in integration progression. The knowledge of obscurations maps the modifications appropriately to reinstate the agility testing of the hardware/software integrations. This modification-based semantics is verified using semantic error, modification time, and complexity.
Semantic communication (SemCom) aims to achieve high-fidelity information delivery under low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, so developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism and thus more effectively reap the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which capably digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited-bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
Text semantic extraction has been envisioned as a promising solution to improve data transmission efficiency with limited radio resources for autonomous interactions among machines and things in future sixth-generation (6G) wireless networks. In this paper, we propose a Chinese text semantic extraction model, namely T-Pointer, to improve the quality of semantic extraction by integrating the Transformer with the pointer-generator network. The proposed T-Pointer model consists of a semantic encoder and a semantic decoder. In the encoding stage, we use the multi-head attention mechanism of the Transformer to extract semantic features from the input Chinese text. In the decoding stage, we first use the Transformer to extract multi-level global text features. Then, we introduce the pointer-generator network model to directly copy keyword information from the source text. The simulation results demonstrate that the T-Pointer model improves the bilingual evaluation understudy (BLEU) and recall-oriented understudy for gisting evaluation (ROUGE) scores by 14.69% and 14.87% on average, respectively, in comparison with state-of-the-art models. Also, we implement the T-Pointer model on a semantic communication system based on the universal software radio peripheral (USRP) platform. The result shows that the packet delay of semantic transmission can be reduced by 52.05% on average compared to traditional information transmission.
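The T-Pointer encoder above relies on the Transformer's multi-head attention. As an illustrative sketch only (not the paper's implementation; the shapes, random weights, and head count are assumptions), scaled dot-product attention split across heads looks like this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    """Toy multi-head self-attention over a token sequence x: (seq_len, d_model)."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Random projections stand in for learned parameters.
    Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = (q[:, s] @ k[:, s].T) / np.sqrt(d_head)  # scaled dot-product
        heads.append(softmax(scores) @ v[:, s])           # attention-weighted values
    return np.concatenate(heads, axis=1) @ Wo             # concat heads, project out

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 16))   # 6 tokens, 16-dim embeddings
out = multi_head_attention(x, num_heads=4, rng=rng)
print(out.shape)  # (6, 16): output keeps the input shape
```

Each head attends over the full token sequence with its own slice of the model dimension, which is what lets the encoder mix semantic features at several positions at once.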
Knowledge-based Visual Question Answering (VQA) requires the integration of visual information with external knowledge reasoning. Existing approaches typically retrieve information from external corpora and rely on pretrained language models for reasoning. However, their performance is often hindered by the limited capabilities of retrievers and the constrained size of knowledge bases. Moreover, relying on image captions to bridge the modal gap between visual and language modalities can lead to the omission of critical visual details. To address these limitations, we propose the Reflective Chain-of-Thought (ReCoT) method, a simple yet effective framework inspired by metacognition theory. ReCoT effectively activates the reasoning capabilities of Multimodal Large Language Models (MLLMs), providing essential visual and knowledge cues required to solve complex visual questions. It simulates a metacognitive reasoning process that encompasses monitoring, reflection, and correction. Specifically, in the initial generation stage, an MLLM produces a preliminary answer that serves as the model's initial cognitive output. During the reflective reasoning stage, this answer is critically examined to generate a reflective rationale that integrates key visual evidence and relevant knowledge. In the final refinement stage, a smaller language model leverages this rationale to revise the initial prediction, resulting in a more accurate final answer. By harnessing the strengths of MLLMs in visual and knowledge grounding, ReCoT enables smaller language models to reason effectively without dependence on image captions or external knowledge bases. Experimental results demonstrate that ReCoT achieves substantial performance improvements, outperforming state-of-the-art methods by 2.26% on OK-VQA and 5.8% on A-OKVQA.
Weakly Supervised Semantic Segmentation (WSSS), which relies only on image-level labels, has attracted significant attention for its cost-effectiveness and scalability. Existing methods mainly enhance inter-class distinctions and employ data augmentation to mitigate semantic ambiguity and reduce spurious activations. However, they often neglect the complex contextual dependencies among image patches, resulting in incomplete local representations and limited segmentation accuracy. To address these issues, we propose the Context Patch Fusion with Class Token Enhancement (CPF-CTE) framework, which exploits contextual relations among patches to enrich feature representations and improve segmentation. At its core, the Contextual-Fusion Bidirectional Long Short-Term Memory (CF-BiLSTM) module captures spatial dependencies between patches and enables bidirectional information flow, yielding a more comprehensive understanding of spatial correlations. This strengthens feature learning and segmentation robustness. Moreover, we introduce learnable class tokens that dynamically encode and refine class-specific semantics, enhancing discriminative capability. By effectively integrating spatial and semantic cues, CPF-CTE produces richer and more accurate representations of image content. Extensive experiments on PASCAL VOC 2012 and MS COCO 2014 validate that CPF-CTE consistently surpasses prior WSSS methods.
In the age of big data, ensuring data privacy while enabling efficient encrypted data retrieval has become a critical challenge. Traditional searchable encryption schemes face difficulties in handling complex semantic queries. Additionally, they typically rely on honest-but-curious cloud servers, which introduces the risk of repudiation. Furthermore, the combined operations of search and verification increase system load, thereby reducing performance. Traditional verification mechanisms, which rely on complex hash constructions, suffer from low verification efficiency. To address these challenges, this paper proposes a blockchain-based contextual semantic-aware ciphertext retrieval scheme with efficient verification. Building on existing single- and multi-keyword search methods, the scheme uses vector models to semantically train the dataset, enabling it to retain semantic information and achieve context-aware encrypted retrieval, significantly improving search accuracy. Additionally, a blockchain-based updatable master-slave chain storage model is designed, where the master chain stores encrypted keyword indexes and the slave chain stores verification information generated by zero-knowledge proofs, thus balancing system load while improving search and verification efficiency. Finally, an improved non-interactive zero-knowledge proof mechanism is introduced, reducing the computational complexity of verification and ensuring efficient validation of search results. Experimental results demonstrate that the proposed scheme offers stronger security, balanced overhead, and higher search verification efficiency.
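The semantic search described above compares query and keyword embeddings produced by a vector model. A minimal plaintext sketch of that ranking step (the embeddings and the document topics are invented for illustration; the paper's scheme performs this comparison over encrypted indexes):

```python
import numpy as np

def cosine_rank(query_vec, doc_vecs):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores), scores  # best match first

# Toy 4-dim embeddings; a trained vector model would supply these.
docs = np.array([
    [0.9, 0.1, 0.0, 0.0],   # document about "car"
    [0.1, 0.9, 0.0, 0.0],   # document about "fruit"
    [0.8, 0.2, 0.1, 0.0],   # document about "vehicle"
])
query = np.array([1.0, 0.0, 0.0, 0.0])  # query embedding for "automobile"
order, scores = cosine_rank(query, docs)
print(order)  # [0 2 1]: both car-related documents outrank the unrelated one
```

This is what lets a semantically related document match even when it shares no exact keyword with the query, which keyword-only searchable encryption cannot do.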
This study aimed to enhance the performance of semantic segmentation for autonomous driving by improving the 2DPASS model. Two novel improvements were proposed and implemented in this paper: dynamically adjusting the loss function ratio and integrating the Convolutional Block Attention Module (CBAM). First, the loss function weights were adjusted dynamically. A grid search determined the best ratio to be 7:3, giving greater emphasis to the cross-entropy loss and resulting in better segmentation performance. Second, CBAM was applied at different layers of the 2D encoder. Heatmap analysis revealed that introducing it after the second block of 2D image encoding produced the most effective enhancement of important feature representations. The number of training epochs was also tuned experimentally, which improved model convergence and overall accuracy. To evaluate the proposed approach, experiments were conducted on the SemanticKITTI dataset. The results showed that the improved model achieved a segmentation accuracy of 64.31% mIoU, an improvement of 11.47% over the conventional 2DPASS model (baseline: 52.84%). It was more effective at detecting small and distant objects and clearly identifying boundaries between different classes. Issues such as noise and variations in data distribution affected its accuracy, indicating the need for further refinement. Overall, the proposed improvements to the 2DPASS model demonstrate the potential to advance semantic segmentation technology and contribute to more reliable perception of complex, dynamic environments in autonomous vehicles. Accurate segmentation enhances the vehicle's ability to distinguish different objects, and this improvement directly supports safer navigation, robust decision-making, and efficient path planning, making it highly applicable to real-world deployment of autonomous systems in urban and highway settings.
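The 7:3 loss weighting above can be sketched as a weighted sum of cross-entropy and a second loss term, tuned by grid search. This is an illustrative toy only; the auxiliary term, probabilities, and candidate grid are assumptions, not the paper's actual losses:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true class per sample."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def combined_loss(probs, labels, aux_loss, w_ce=0.7):
    """Weighted sum of cross-entropy and an auxiliary loss; 0.7 mirrors
    the 7:3 ratio favouring cross-entropy found by grid search."""
    return w_ce * cross_entropy(probs, labels) + (1.0 - w_ce) * aux_loss

# Toy per-pixel class probabilities (4 pixels, 3 classes) and labels.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6],
                  [0.5, 0.3, 0.2]])
labels = np.array([0, 1, 2, 0])
aux = 0.4  # stand-in value for the auxiliary loss term

# Grid search over candidate cross-entropy weights.
ratios = [0.5, 0.6, 0.7, 0.8]
losses = {r: combined_loss(probs, labels, aux, w_ce=r) for r in ratios}
print(losses[0.7])
```

In practice the winning ratio is the one whose trained model scores best on a validation split, not the one with the lowest raw loss value.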
This article studies the problem of image segmentation-based semantic communication in autonomous driving. In real traffic scenes, detecting objects (e.g., vehicles and pedestrians) is more important for guaranteeing driving safety, which is often ignored in existing works. Therefore, we propose a vehicular image segmentation-oriented semantic communication system, termed VIS-SemCom, focusing on transmitting and recovering the image semantic features of high-importance objects to reduce transmission redundancy. First, we develop a semantic codec based on the Swin Transformer architecture, which expands the perceptual field and thus improves segmentation accuracy. To boost the accuracy on important objects, we propose a multi-scale semantic extraction method that assigns the number of Swin Transformer blocks for diverse-resolution semantic features. Also, an importance-aware loss incorporating importance levels is devised, and an online hard example mining (OHEM) strategy is proposed to handle small-sample issues in the dataset. Finally, experimental results demonstrate that, compared to baseline image communication, the proposed VIS-SemCom achieves significant mean intersection over union (mIoU) performance across SNR regions, reduces the transmitted data volume by about 60% at 60% mIoU, and improves the segmentation accuracy of important objects.
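The OHEM strategy mentioned above keeps only the hardest examples when averaging the loss. A minimal sketch of that selection step (the keep ratio and loss values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def ohem_select(pixel_losses, keep_ratio=0.25):
    """Online hard example mining: average only the hardest fraction
    of per-pixel losses, so rare/difficult samples dominate the gradient."""
    k = max(1, int(len(pixel_losses) * keep_ratio))
    hardest = np.sort(pixel_losses)[::-1][:k]  # largest losses = hardest pixels
    return hardest.mean()

losses = np.array([0.05, 0.10, 2.0, 0.02, 1.5, 0.08, 0.03, 0.01])
print(ohem_select(losses, keep_ratio=0.25))  # mean of the two largest: (2.0 + 1.5) / 2
```

By discarding the many easy pixels, the few hard (often small-object) pixels are no longer drowned out, which is the point of using OHEM on imbalanced datasets.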
This paper presents an intelligent patrol and security robot integrating 2D LiDAR and RGB-D vision sensors to achieve semantic simultaneous localization and mapping (SLAM), real-time object recognition, and dynamic obstacle avoidance. The system employs the YOLOv7 deep-learning framework for semantic detection and SLAM for localization and mapping, fusing geometric and visual data to build a high-fidelity 2D semantic map. This map enables the robot to identify and project object information for improved situational awareness. Experimental results show that object recognition reached 95.4% mAP@0.5. Semantic completeness increased from 68.7% (single view) to 94.1% (multi-view) with an average position error of 3.1 cm. During navigation, the robot achieved 98.0% reliability, avoided moving obstacles in 90.0% of encounters, and replanned paths in 0.42 s on average. The integration of LiDAR-based SLAM with deep-learning-driven semantic perception establishes a robust foundation for intelligent, adaptive, and safe robotic navigation in dynamic environments.
Chinese abbreviations improve communicative efficiency by extracting key components from longer expressions. They are widely used in both daily communication and professional domains. However, existing abbreviation generation methods still face two major challenges. First, sequence-labeling-based approaches often neglect contextual meaning by making binary decisions at the character level, leading to abbreviations that fail to capture semantic completeness. Second, generation-based methods rely heavily on a single decoding process, which frequently produces correct abbreviations but ranks them lower due to inadequate semantic evaluation. To address these limitations, we propose a novel two-stage framework with Generation–Iterative Optimization for Abbreviation (GIOA). In the first stage, we design a Chain-of-Thought prompting strategy and incorporate definitional and situational contexts to generate multiple abbreviation candidates. In the second stage, we introduce a Semantic Preservation Dynamic Adjustment mechanism that alternates between character-level importance estimation and semantic restoration to optimize candidate ranking. Experiments on two public benchmark datasets show that our method outperforms existing state-of-the-art approaches, achieving Hit@1 improvements of 15.15% and 13.01%, respectively, while maintaining consistent results in Hit@3.
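The Hit@1 and Hit@3 metrics reported above measure whether the gold abbreviation appears among the top-k ranked candidates. A minimal sketch (the candidate lists below are invented for illustration):

```python
def hit_at_k(ranked_candidates, gold, k):
    """Hit@k: 1 if the gold abbreviation appears in the top-k candidates."""
    return int(gold in ranked_candidates[:k])

def mean_hit_at_k(predictions, golds, k):
    """Average Hit@k over a dataset of ranked candidate lists."""
    return sum(hit_at_k(p, g, k) for p, g in zip(predictions, golds)) / len(golds)

# Hypothetical ranked candidate lists for three long expressions.
preds = [["北大", "北京大", "京大"],    # gold "北大" ranked 1st
         ["清大", "清华", "华大"],      # gold "清华" ranked 2nd
         ["人大", "中人大", "人民大"]]  # gold "人大" ranked 1st
golds = ["北大", "清华", "人大"]
print(mean_hit_at_k(preds, golds, 1))  # 2/3: two golds are ranked first
print(mean_hit_at_k(preds, golds, 3))  # 1.0: every gold is in the top 3
```

The gap between Hit@1 and Hit@3 is exactly the ranking failure the paper targets: a correct candidate is generated but not ranked first, which the second-stage re-ranking fixes.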
High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information due to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial details pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, transformers utilize multi-head self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is proposed at the end of the encoder to further refine the encoder outputs, thereby enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to enable the modeling of rich multiscale global and local information. We also design a stair fusion mechanism to progressively transmit semantic information from deep to shallow layers. Finally, we propose the semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
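The mIoU figures quoted above are typically computed from a pixel-level confusion matrix. A minimal sketch of that computation (the toy masks are invented for illustration):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union from a pixel-level confusion matrix."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)     # rows: gt class, cols: predicted
    inter = np.diag(cm).astype(float)                # correctly labelled pixels per class
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter  # predicted + actual - overlap
    ious = inter / np.maximum(union, 1)
    return ious[union > 0].mean()                    # average over classes present

gt   = np.array([[0, 0, 1], [1, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 2]])
print(mean_iou(pred, gt, num_classes=3))  # (0.5 + 2/3 + 1.0) / 3
```

Averaging per-class IoUs, rather than pooling all pixels, is what makes mIoU sensitive to small classes that pixel accuracy would hide.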
Advanced traffic monitoring systems encounter substantial challenges in vehicle detection and classification due to the limitations of conventional methods, which often demand extensive computational resources and struggle with diverse data acquisition techniques. This research presents a novel approach for vehicle classification and recognition in aerial image sequences, integrating multiple advanced techniques to enhance detection accuracy. The proposed model begins with preprocessing using Multiscale Retinex (MSR) to enhance image quality, followed by Expectation-Maximization (EM) segmentation for precise foreground object identification. Vehicle detection is performed using the state-of-the-art YOLOv10 framework, while feature extraction incorporates Maximally Stable Extremal Regions (MSER), Dense Scale-Invariant Feature Transform (Dense SIFT), and Zernike moments features to capture distinct object characteristics. Feature optimization is further refined through a hybrid swarm-based optimization algorithm, ensuring optimal feature selection for improved classification performance. The final classification is conducted using a Vision Transformer, leveraging its robust learning capabilities for enhanced accuracy. Experimental evaluations on benchmark datasets, including UAVDT and the Unmanned Aerial Vehicle Intruder Dataset (UAVID), demonstrate the superiority of the proposed approach, achieving an accuracy of 94.40% on UAVDT and 93.57% on UAVID. The results highlight the efficacy of the model in significantly enhancing vehicle detection and classification in aerial imagery, offering a statistically validated improvement over existing methodologies for intelligent traffic monitoring systems.
Weakly supervised semantic segmentation (WSSS) is a tricky task in which only category information is provided for segmentation prediction. Thus, the key stage of WSSS is generating the pseudo labels. Convolutional neural network (CNN)-based methods use class activation mapping (CAM) to obtain the pseudo labels, but CAM concentrates only on the most discriminative parts. Recently, transformer-based methods utilize the attention map from the multi-headed self-attention (MHSA) module to predict pseudo labels, which usually contains obvious background noise and incoherent object areas. To solve the above problems, we use the Conformer as our backbone, a parallel network combining a CNN branch and a Transformer branch. The two branches generate pseudo labels and refine them independently, effectively combining the advantages of CNN and Transformer. However, the parallel structure is not close enough in information communication; this can result in poor detail in the pseudo labels, and the background noise still exists. To alleviate this problem, we propose the enhancing convolution CAM (ECCAM) model, which has three improved modules based on enhancing convolution: a deeper stem (DStem), a convolutional feed-forward network (CFFN), and a feature coupling unit with convolution (FCUConv). ECCAM gives the Conformer tighter interaction between the CNN and Transformer branches. Experimental verification shows that the proposed modules help the network perceive more local information from images, making the final segmentation results more refined. Compared with similar architectures, our modules greatly improve semantic segmentation performance and achieve 70.2% mean intersection over union (mIoU) on the PASCAL VOC 2012 dataset.
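The CAM step at the heart of the pipeline above weights the backbone's final feature maps by the classifier weights of one class. A minimal sketch of vanilla CAM with random stand-in tensors (shapes and threshold are illustrative assumptions, not ECCAM itself):

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Vanilla CAM: weight the final feature maps by one class's classifier weights.
    features: (C, H, W) activations; fc_weights: (num_classes, C)."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0)                                     # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                                    # normalise to [0, 1]
    return cam

rng = np.random.default_rng(1)
features = rng.random((8, 4, 4))   # 8 channels of 4x4 feature maps (stand-in)
fc_weights = rng.random((3, 8))    # 3-class linear classifier (stand-in)
cam = class_activation_map(features, fc_weights, class_idx=0)
pseudo_label = cam > 0.5           # threshold the map into a pseudo foreground mask
print(cam.shape)  # (4, 4)
```

Because the classifier weights are trained to separate classes, the resulting map lights up only the most discriminative regions, which is exactly the incompleteness problem the refinement branches are meant to correct.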
In image analysis, high-precision semantic segmentation predominantly relies on supervised learning. Despite significant advancements driven by deep learning techniques, challenges such as class imbalance and dynamic performance evaluation persist. Traditional weighting methods, often based on pre-computed class counts, tend to overemphasize certain classes while neglecting others, particularly rare sample categories. Approaches like focal loss and other rare-sample segmentation techniques introduce multiple hyperparameters that require manual tuning, leading to increased experimental costs due to their instability. This paper proposes a novel CAWASeg framework to address these limitations. Our approach leverages Grad-CAM to generate class activation maps, identifying the key feature regions that the model focuses on during decision-making. We introduce a Comprehensive Segmentation Performance Score (CSPS) to dynamically evaluate model performance by converting these activation maps into pseudo masks and comparing them with the ground truth. Additionally, we design two adaptive weights for each class: a Basic Weight (BW) and a Ratio Weight (RW), which the model adjusts during training based on real-time feedback. Extensive experiments on the COCO-Stuff, CityScapes, and ADE20k datasets demonstrate that our CAWASeg framework significantly improves segmentation performance for rare sample categories while enhancing overall segmentation accuracy. The proposed method offers a robust and efficient solution for addressing class imbalance in semantic segmentation tasks.
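The core idea of feedback-driven class weighting can be illustrated with a toy scheme: classes whose current performance score is low receive a larger training weight. This sketch is an assumption-laden simplification, not the paper's BW/RW formulation:

```python
import numpy as np

def adaptive_class_weights(class_scores, base=1.0, eps=1e-6):
    """Toy adaptive weighting: classes with a lower performance score get a
    larger loss weight; weights are normalised to average 1 to keep the
    overall loss scale stable."""
    w = base / (np.asarray(class_scores, dtype=float) + eps)
    return w / w.mean()

# Hypothetical per-class segmentation scores (a CSPS-like measure):
# the rare class scoring 0.2 receives the largest weight.
scores = [0.9, 0.7, 0.2]
weights = adaptive_class_weights(scores)
print(weights)
```

Unlike a fixed inverse-frequency weight computed once before training, these weights change as the per-class scores change, which is the "real-time feedback" aspect the abstract describes.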
As a key node of modern transportation networks, the informatization management of road tunnels is crucial to ensuring operational safety and traffic efficiency. However, existing tunnel vehicle modeling methods generally suffer from insufficient 3D scene description capability and low dynamic update efficiency, making it difficult to meet the demand for real-time, accurate management. For this reason, this paper proposes a vehicle twin modeling method for road tunnels. Starting from actual management needs, the approach supports multi-level dynamic modeling, from vehicle type and size to color, by constructing a vehicle model library that can be flexibly invoked. At the same time, semantic constraint rules covering geometric layout, behavioral attributes, and spatial relationships are designed to ensure that the virtual model matches the real one with a high degree of similarity. Finally, a prototype system is constructed and case experiments are conducted in selected case areas, integrating real-time monitoring data with the semantic constraints to realize dynamic updating, precise virtual-real mapping, and three-dimensional visualization of vehicle states in tunnels. The experiments show that the proposed method runs smoothly with an average rendering time of 17.70 ms while guaranteeing modeling accuracy (composite similarity of 0.867), significantly improving the real-time performance and intuitiveness of tunnel management. The research results provide reliable technical support for the intelligent operation of and emergency response in road tunnels, and offer new ideas for digital twin modeling of complex scenes.
The Internet of Vehicles (IoV) has become an important direction in the field of intelligent transportation, in which vehicle positioning is a crucial part. Simultaneous Localization and Mapping (SLAM) technology plays a crucial role in vehicle localization and navigation. Traditional SLAM systems are designed for static environments and can perform poorly in accuracy and robustness in dynamic environments where objects are in constant movement. To address this issue, a new real-time visual SLAM system called MG-SLAM has been developed. Based on ORB-SLAM2, MG-SLAM incorporates a dynamic target detection process that enables the detection of both known and unknown moving objects. In this process, a separate semantic segmentation thread segments dynamic target instances, and the Mask R-CNN algorithm is run on the Graphics Processing Unit (GPU) to accelerate segmentation. To reduce computational cost, only key frames are segmented to identify known dynamic objects. Additionally, a multi-view geometry method is adopted to detect unknown moving objects. The results demonstrate that MG-SLAM improves localization precision from 0.2730 m to 0.0135 m. Moreover, the processing time required by MG-SLAM is significantly reduced compared to other dynamic-scene SLAM algorithms, which illustrates its efficacy in locating objects in dynamic scenes.
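Localization precision figures like 0.2730 m vs. 0.0135 m are commonly reported as absolute trajectory error (ATE). A minimal sketch of the RMSE form of ATE, assuming the trajectories are already aligned and time-synchronized (the toy 2D trajectories are invented for illustration):

```python
import numpy as np

def ate_rmse(est_traj, gt_traj):
    """Absolute trajectory error (RMSE) between estimated and ground-truth
    positions; assumes both trajectories are aligned and time-synced."""
    err = np.linalg.norm(est_traj - gt_traj, axis=1)  # per-pose position error
    return float(np.sqrt((err ** 2).mean()))

gt  = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
est = np.array([[0.0, 0.1], [1.0, 0.1], [2.0, -0.1], [3.0, 0.1]])
print(ate_rmse(est, gt))  # constant 0.1 m offset at every pose -> RMSE 0.1
```

Because the error is squared before averaging, a few poses corrupted by a moving object (the failure mode in dynamic scenes) inflate the RMSE sharply, which is why dynamic-object filtering improves the metric so much.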
Ecological monitoring vehicles are equipped with a range of sensors and monitoring devices designed to gather data on ecological and environmental factors. These vehicles are crucial in various fields, including environmental science research, ecological and environmental monitoring projects, disaster response, and emergency management. A key method employed in these vehicles for achieving high-precision positioning is LiDAR (light detection and ranging)-Visual Simultaneous Localization and Mapping (SLAM). However, maintaining high-precision localization in complex scenarios, such as degraded environments or when dynamic objects are present, remains a significant challenge. To address this issue, we integrate both semantic and texture information from LiDAR and cameras to enhance the robustness and efficiency of data registration. Specifically, semantic information simplifies the modeling of scene elements, reducing the reliance on dense point clouds, which can be less efficient. Meanwhile, visual texture information complements LiDAR-Visual localization by providing additional contextual details. By incorporating semantic and texture details from paired images and point clouds, we significantly improve the quality of data association, thereby increasing the success rate of localization. This approach not only enhances the operational capabilities of ecological monitoring vehicles in complex environments but also contributes to the overall efficiency and effectiveness of ecological monitoring and environmental protection efforts.
Lower back pain is one of the most common medical problems in the world, experienced by a huge percentage of people everywhere. Due to its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging (MRI) is considered the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases. It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture for medical image analysis tasks and achieves high segmentation accuracy. This work proposes a modified U-Net architecture, namely MU-Net, consisting of a Meijering convolutional layer that incorporates the Meijering filter, to perform semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset, publicly available from Mendeley Data. On this dataset, the proposed MU-Net model achieves 98.79% pixel accuracy (PA), a 98.66% dice similarity coefficient (DSC), a 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU).
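The pixel accuracy and Dice metrics above have simple set-overlap definitions. A minimal sketch on toy binary masks (the masks are invented for illustration):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred == gt).mean())

def dice_coefficient(pred, gt, cls):
    """Dice similarity for one class: 2|A ∩ B| / (|A| + |B|)."""
    p, g = (pred == cls), (gt == cls)
    denom = p.sum() + g.sum()
    return float(2.0 * np.logical_and(p, g).sum() / denom) if denom else 1.0

# Toy masks: 0 = background, 1 = vertebra.
gt   = np.array([[0, 1, 1], [0, 1, 0]])
pred = np.array([[0, 1, 1], [1, 1, 0]])
print(pixel_accuracy(pred, gt))       # 5 of 6 pixels agree
print(dice_coefficient(pred, gt, 1))  # 2*3 / (4 + 3)
```

Pixel accuracy can look high even when a small structure is missed, because background dominates the pixel count; Dice weights the overlap of the structure itself, which is why both are reported for vertebra segmentation.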
Semantic segmentation of eye images is a complex task with important applications in human–computer interaction, cognitive science, and neuroscience. Achieving real-time, accurate, and robust segmentation is crucial for computationally limited portable devices such as augmented reality and virtual reality. With the rapid advancements in deep learning, many network models have been developed specifically for eye image segmentation. Some methods divide the segmentation process into multiple stages to achieve model parameter miniaturization while enhancing output through post-processing techniques to improve segmentation accuracy; these approaches significantly increase inference time. Other networks adopt more complex encoding and decoding modules to achieve end-to-end output, which requires substantial computation. Balancing the model's size, accuracy, and computational complexity is therefore essential. To address these challenges, we propose a lightweight asymmetric UNet architecture and a projection loss function. We utilize ResNet-3 layer blocks to enhance feature extraction efficiency in the encoding stage. In the decoding stage, we employ regular convolutions and skip connections to upscale the feature maps from the latent space to the original image size, balancing model size and segmentation accuracy. In addition, we leverage the geometric features of the eye region and design a projection loss function to further improve segmentation accuracy without adding any inference computational cost. We validate our approach on the OpenEDS2019 dataset for virtual reality and achieve state-of-the-art performance with 95.33% mean intersection over union (mIoU). Our model has only 0.63M parameters and runs at 350 FPS, which are 68% and 200% of the state-of-the-art model RITNet, respectively.
Semantic segmentation is a core task in computer vision that allows AI models to interact with and understand their surrounding environment. Similar to how humans subconsciously segment scenes, this ability is crucial for scene understanding. However, a challenge many semantic learning models face is the lack of data. Existing video datasets are limited to short, low-resolution videos that are not representative of real-world examples. Thus, one of our key contributions is a customized semantic segmentation version of the Walking Tours Dataset that features hour-long, high-resolution, real-world data from tours of different cities. Additionally, we evaluate the performance of the open-vocabulary semantic model OpenSeeD on our custom dataset and discuss future implications.
Abstract: The agility of Internet of Things (IoT) software engineering is benchmarked based on its systematic insights for wide application-support infrastructure developments. Such developments are focused on reducing the interfacing complexity with heterogeneous devices through applications. To handle the interfacing complexity problem, this article introduces a Semantic Interfacing Obscuration Model (SIOM) for IoT software-engineered platforms. The interfacing obscuration between heterogeneous devices and application interfaces, from testing to real-time validation, is accounted for in this model. Based on the level of obscuration between the infrastructure hardware and the end-user software, modifications through device replacement, capacity amendments, or interface bug fixes are performed. These modifications are based on the level of semantic obscuration observed during the application service intervals. The obscuration level is determined using knowledge learning as a progression from hardware to software semantics. The reported results were computed using specific metrics obtained from the experimental evaluations: an 8.94% reduction in interfacing complexity and a 15.04% improvement in integration progression. The knowledge of obscurations maps the modifications appropriately to reinstate agility testing of the hardware/software integrations. This modification-based semantics is verified using semantic error, modification time, and complexity.
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2024YFE0200600; in part by the National Natural Science Foundation of China under Grant 62071425; in part by the Zhejiang Key Research and Development Plan under Grant 2022C01093; in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LR23F010005; in part by the National Key Laboratory of Wireless Communications Foundation under Grant 2023KP01601; and in part by the Big Data and Intelligent Computing Key Lab of CQUPT under Grant BDIC-2023-B-001.
Abstract: Semantic communication (SemCom) aims to achieve high-fidelity information delivery under low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, and thus developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism, thus more effectively reaping the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
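The motivation for replacing the bit-level CRC can be made concrete with a toy retransmission decision. This is not the paper's TDA method: the feature-summary comparison and tolerance below are hypothetical stand-ins. A CRC flags any bit change as a failure and forces a re-send, while a semantic check re-sends only when the received content drifts beyond a tolerance:

```python
import zlib

def crc_check(sent: bytes, received: bytes) -> bool:
    # Bit-level check: any flipped or swapped byte fails, even when
    # the image semantics are untouched.
    return zlib.crc32(sent) == zlib.crc32(received)

def semantic_check(sent_feat, recv_feat, tol=0.1):
    # Stand-in for the TDA-based test: compare compact feature
    # summaries instead of raw bits (metric and tolerance invented).
    dist = sum(abs(a - b) for a, b in zip(sent_feat, recv_feat)) / len(sent_feat)
    return dist <= tol

sent = b"semantic payload"
received = b"semantic paylaod"   # two transposed bytes
needs_retx_crc = not crc_check(sent, received)                   # CRC forces a re-send
needs_retx_sem = not semantic_check([0.50, 0.31], [0.52, 0.30])  # semantics still close
```

The point of the sketch is only the decision boundary: a semantics-aware check can accept a corrupted-but-faithful reception that a CRC must reject.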
Funding: National Natural Science Foundation of China under Grants 62122069, 62071431, 62072490, and 62301490; Science and Technology Development Fund of Macao, China under Grant 0158/2022/A; Guangdong Basic and Applied Basic Research Foundation (2022A1515011287); MYRG2020-00107-IOTSC; FDCT SKL-IOTSC(UM)-2021-2023.
Abstract: Text semantic extraction has been envisioned as a promising solution to improve data transmission efficiency with limited radio resources for autonomous interactions among machines and things in future sixth-generation (6G) wireless networks. In this paper, we propose a Chinese text semantic extraction model, namely T-Pointer, to improve the quality of semantic extraction by integrating the Transformer with the pointer-generator network. The proposed T-Pointer model consists of a semantic encoder and a semantic decoder. In the encoding stage, we use the multi-head attention mechanism of the Transformer to extract semantic features from the input Chinese text. In the decoding stage, we first use the Transformer to extract multi-level global text features. Then, we introduce the pointer-generator network to directly copy keyword information from the source text. The simulation results demonstrate that the T-Pointer model improves the bilingual evaluation understudy (BLEU) and recall-oriented understudy for gisting evaluation (ROUGE) scores by 14.69% and 14.87% on average, respectively, in comparison with state-of-the-art models. Also, we implement the T-Pointer model in a semantic communication system based on the universal software radio peripheral (USRP) platform. The results show that the packet delay of semantic transmission can be reduced by 52.05% on average compared to traditional information transmission.
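The copy mechanism that lets T-Pointer lift keywords directly from the source can be sketched as the standard pointer-generator mixture. This is a minimal illustration: the token probabilities, attention weights, and mixing coefficient below are invented, and the real model computes them with the Transformer:

```python
def pointer_generator(p_gen, vocab_dist, attention, src_tokens):
    # Final distribution: p_gen * P_vocab(w) plus (1 - p_gen) times the
    # attention mass on every source position where w occurs.
    final = {w: p_gen * p for w, p in vocab_dist.items()}
    for attn, tok in zip(attention, src_tokens):
        final[tok] = final.get(tok, 0.0) + (1.0 - p_gen) * attn
    return final

vocab_dist = {"the": 0.6, "network": 0.4}   # generator softmax (invented)
attention = [0.7, 0.3]                      # attention over source tokens (invented)
src_tokens = ["5G", "network"]              # "5G" is out-of-vocabulary for the generator
dist = pointer_generator(0.5, vocab_dist, attention, src_tokens)
# Copying gives the OOV keyword "5G" probability mass it could never
# receive from the generator alone.
```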
Funding: Supported by the National Natural Science Foundation of China (Nos. 62572017, 62441232, 62206007) and the R&D Program of Beijing Municipal Education Commission (KZ202210005008).
Abstract: Knowledge-based Visual Question Answering (VQA) requires the integration of visual information with external knowledge reasoning. Existing approaches typically retrieve information from external corpora and rely on pretrained language models for reasoning. However, their performance is often hindered by the limited capabilities of retrievers and the constrained size of knowledge bases. Moreover, relying on image captions to bridge the modal gap between the visual and language modalities can lead to the omission of critical visual details. To address these limitations, we propose the Reflective Chain-of-Thought (ReCoT) method, a simple yet effective framework inspired by metacognition theory. ReCoT effectively activates the reasoning capabilities of Multimodal Large Language Models (MLLMs), providing the essential visual and knowledge cues required to solve complex visual questions. It simulates a metacognitive reasoning process that encompasses monitoring, reflection, and correction. Specifically, in the initial generation stage, an MLLM produces a preliminary answer that serves as the model's initial cognitive output. During the reflective reasoning stage, this answer is critically examined to generate a reflective rationale that integrates key visual evidence and relevant knowledge. In the final refinement stage, a smaller language model leverages this rationale to revise the initial prediction, resulting in a more accurate final answer. By harnessing the strengths of MLLMs in visual and knowledge grounding, ReCoT enables smaller language models to reason effectively without dependence on image captions or external knowledge bases. Experimental results demonstrate that ReCoT achieves substantial performance improvements, outperforming state-of-the-art methods by 2.26% on OK-VQA and 5.8% on A-OKVQA.
Abstract: Weakly Supervised Semantic Segmentation (WSSS), which relies only on image-level labels, has attracted significant attention for its cost-effectiveness and scalability. Existing methods mainly enhance inter-class distinctions and employ data augmentation to mitigate semantic ambiguity and reduce spurious activations. However, they often neglect the complex contextual dependencies among image patches, resulting in incomplete local representations and limited segmentation accuracy. To address these issues, we propose the Context Patch Fusion with Class Token Enhancement (CPF-CTE) framework, which exploits contextual relations among patches to enrich feature representations and improve segmentation. At its core, the Contextual-Fusion Bidirectional Long Short-Term Memory (CF-BiLSTM) module captures spatial dependencies between patches and enables bidirectional information flow, yielding a more comprehensive understanding of spatial correlations. This strengthens feature learning and segmentation robustness. Moreover, we introduce learnable class tokens that dynamically encode and refine class-specific semantics, enhancing discriminative capability. By effectively integrating spatial and semantic cues, CPF-CTE produces richer and more accurate representations of image content. Extensive experiments on PASCAL VOC 2012 and MS COCO 2014 validate that CPF-CTE consistently surpasses prior WSSS methods.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62262073, in part by the Yunnan Provincial Ten Thousand People Program for Young Top Talents under Grant YNWR-QNBJ-2019-237, and in part by the Yunnan Provincial Major Science and Technology Special Program under Grant 202402AD080002.
Abstract: In the age of big data, ensuring data privacy while enabling efficient encrypted data retrieval has become a critical challenge. Traditional searchable encryption schemes face difficulties in handling complex semantic queries. Additionally, they typically rely on honest-but-curious cloud servers, which introduces the risk of repudiation. Furthermore, the combined operations of search and verification increase system load, thereby reducing performance. Traditional verification mechanisms, which rely on complex hash constructions, suffer from low verification efficiency. To address these challenges, this paper proposes a blockchain-based contextual semantic-aware ciphertext retrieval scheme with efficient verification. Building on existing single- and multi-keyword search methods, the scheme uses vector models to semantically train the dataset, enabling it to retain semantic information and achieve context-aware encrypted retrieval, significantly improving search accuracy. Additionally, a blockchain-based updatable master-slave chain storage model is designed, where the master chain stores encrypted keyword indexes and the slave chain stores verification information generated by zero-knowledge proofs, thus balancing system load while improving search and verification efficiency. Finally, an improved non-interactive zero-knowledge proof mechanism is introduced, reducing the computational complexity of verification and ensuring efficient validation of search results. Experimental results demonstrate that the proposed scheme offers stronger security, balanced overhead, and higher search-verification efficiency.
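The context-aware matching step can be illustrated with plain cosine similarity over trained vectors. This is a simplified sketch: in the actual scheme the indexes are encrypted and stored on the master chain, and the embeddings come from the trained vector model; the three documents and their vectors here are invented:

```python
import math

def cosine(u, v):
    # Cosine similarity between two semantic vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def rank(query_vec, index):
    # index: doc_id -> semantic vector (plaintext here for illustration).
    return sorted(index, key=lambda d: cosine(query_vec, index[d]), reverse=True)

index = {"doc_a": [0.9, 0.1, 0.0],
         "doc_b": [0.1, 0.9, 0.2],
         "doc_c": [0.4, 0.4, 0.8]}
order = rank([1.0, 0.0, 0.0], index)   # semantically closest document first
```

Ranking by vector similarity is what lets the scheme match a query to documents that share its meaning rather than its exact keywords.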
Abstract: This study aimed to enhance the performance of semantic segmentation for autonomous driving by improving the 2DPASS model. Two novel improvements were proposed and implemented in this paper: dynamically adjusting the loss function ratio and integrating an attention mechanism (CBAM). First, the loss function weights were adjusted dynamically. The grid search method was used to determine the best ratio of 7:3, which gives greater emphasis to the cross-entropy loss and resulted in better segmentation performance. Second, CBAM was applied at different layers of the 2D encoder. Heatmap analysis revealed that introducing it after the second block of 2D image encoding produced the most effective enhancement of important feature representations. The number of training epochs was optimized experimentally, which improved model convergence and overall accuracy. To evaluate the proposed approach, experiments were conducted on the SemanticKITTI database. The results showed that the improved model achieved a segmentation accuracy of 64.31% mIoU, an 11.47-percentage-point improvement over the conventional 2DPASS model (baseline: 52.84%). It was more effective at detecting small and distant objects and at clearly identifying boundaries between different classes. Issues such as noise and variations in data distribution affected its accuracy, indicating the need for further refinement. Overall, the proposed improvements to the 2DPASS model demonstrate the potential to advance semantic segmentation technology and contribute to more reliable perception of complex, dynamic environments in autonomous vehicles. Accurate segmentation enhances the vehicle's ability to distinguish different objects, and this improvement directly supports safer navigation, robust decision-making, and efficient path planning, making it highly applicable to real-world deployment of autonomous systems in urban and highway settings.
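The loss-ratio tuning reduces to a weighted sum plus a grid search over candidate ratios. In the sketch below, `mock_miou` is a hypothetical stand-in for the real train-and-validate run that scores each ratio; only the 7:3 weighting itself comes from the abstract:

```python
def combined_loss(ce_loss, aux_loss, w_ce=0.7):
    # Weighted sum of the two loss terms; the reported best ratio
    # is 7:3 in favour of the cross-entropy term.
    return w_ce * ce_loss + (1.0 - w_ce) * aux_loss

def mock_miou(w_ce):
    # Hypothetical proxy for validation mIoU as a function of the
    # ratio; it peaks at 0.7 only because the abstract reports 7:3
    # as the best ratio found.
    return 1.0 - abs(w_ce - 0.7)

# Grid search over candidate ratios, as in the paper's tuning step.
candidates = [i / 10 for i in range(1, 10)]
best = max(candidates, key=mock_miou)
```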
Funding: National Natural Science Foundation of China under Grants No. 62171047, U22B2001, 62271065, and 62001051; Beijing Natural Science Foundation under Grant L223027; BUPT Excellent Ph.D. Students Foundation under Grant CX2021114.
Abstract: This article studies the problem of image segmentation-based semantic communication in autonomous driving. In real traffic scenes, the detection of objects (e.g., vehicles and pedestrians) is more important for guaranteeing driving safety, which is often ignored in existing works. Therefore, we propose a vehicular image segmentation-oriented semantic communication system, termed VIS-SemCom, focusing on transmitting and recovering the image semantic features of high-importance objects to reduce transmission redundancy. First, we develop a semantic codec based on the Swin Transformer architecture, which expands the perceptual field and thus improves segmentation accuracy. To improve the accuracy of important objects, we propose a multi-scale semantic extraction method that assigns the number of Swin Transformer blocks for diverse-resolution semantic features. Also, an importance-aware loss incorporating importance levels is devised, and an online hard example mining (OHEM) strategy is proposed to handle small-sample issues in the dataset. Finally, experimental results demonstrate that the proposed VIS-SemCom can achieve significant mean intersection over union (mIoU) performance across SNR regions, reduce the transmitted data volume by about 60% at 60% mIoU, and improve the segmentation accuracy of important objects, compared to baseline image communication.
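The importance-aware loss can be sketched as a per-class weighting in which high-importance classes (vehicles, pedestrians) contribute more to the objective than background classes. The class losses and importance levels below are illustrative values, not the paper's:

```python
def importance_aware_loss(per_class_losses, importance):
    # Scale each class's loss by its importance level so that errors on
    # safety-critical objects cost more than errors on background.
    total = sum(importance[c] * loss for c, loss in per_class_losses.items())
    return total / sum(importance[c] for c in per_class_losses)

losses = {"vehicle": 0.4, "pedestrian": 0.5, "sky": 0.2}   # illustrative values
levels = {"vehicle": 3.0, "pedestrian": 3.0, "sky": 1.0}   # hypothetical importance
weighted = importance_aware_loss(losses, levels)
uniform = importance_aware_loss(losses, {c: 1.0 for c in losses})
# weighted > uniform here: the high-importance classes dominate the objective.
```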
Funding: Supported by the National Science and Technology Council under Grant NSTC 114-2221-E-130-007.
Abstract: This paper presents an intelligent patrol and security robot integrating 2D LiDAR and RGB-D vision sensors to achieve semantic simultaneous localization and mapping (SLAM), real-time object recognition, and dynamic obstacle avoidance. The system employs the YOLOv7 deep-learning framework for semantic detection and SLAM for localization and mapping, fusing geometric and visual data to build a high-fidelity 2D semantic map. This map enables the robot to identify and project object information for improved situational awareness. Experimental results show that object recognition reached 95.4% mAP@0.5. Semantic completeness increased from 68.7% (single view) to 94.1% (multi-view) with an average position error of 3.1 cm. During navigation, the robot achieved 98.0% reliability, avoided moving obstacles in 90.0% of encounters, and replanned paths in 0.42 s on average. The integration of LiDAR-based SLAM with deep-learning-driven semantic perception establishes a robust foundation for intelligent, adaptive, and safe robotic navigation in dynamic environments.
Funding: Supported by the National Key Research and Development Program of China (2020AAA0109300) and the Shanghai Collaborative Innovation Center of Data Intelligence Technology (No. 0232-A1-8900-24-13).
Abstract: Chinese abbreviations improve communicative efficiency by extracting key components from longer expressions. They are widely used in both daily communication and professional domains. However, existing abbreviation generation methods still face two major challenges. First, sequence-labeling-based approaches often neglect contextual meaning by making binary decisions at the character level, leading to abbreviations that fail to capture semantic completeness. Second, generation-based methods rely heavily on a single decoding process, which frequently produces correct abbreviations but ranks them lower due to inadequate semantic evaluation. To address these limitations, we propose a novel two-stage framework with Generation–Iterative Optimization for Abbreviation (GIOA). In the first stage, we design a Chain-of-Thought prompting strategy and incorporate definitional and situational contexts to generate multiple abbreviation candidates. In the second stage, we introduce a Semantic Preservation Dynamic Adjustment mechanism that alternates between character-level importance estimation and semantic restoration to optimize candidate ranking. Experiments on two public benchmark datasets show that our method outperforms existing state-of-the-art approaches, achieving Hit@1 improvements of 15.15% and 13.01%, respectively, while maintaining consistent results in Hit@3.
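The Hit@k metric used for evaluation is straightforward to compute: a prediction scores 1 if the gold abbreviation appears among the top-k ranked candidates. A minimal sketch (the candidate lists below are invented examples):

```python
def hit_at_k(ranked, gold, k):
    # 1 if the gold abbreviation appears among the top-k candidates.
    return 1.0 if gold in ranked[:k] else 0.0

def mean_hit_at_k(predictions, golds, k):
    # Average Hit@k over a test set.
    return sum(hit_at_k(p, g, k) for p, g in zip(predictions, golds)) / len(golds)

preds = [["北大", "北京大", "京大"],   # gold ranked first  -> counts for Hit@1
         ["清华", "清大", "华大"]]     # gold ranked second -> counts for Hit@3 only
golds = ["北大", "清大"]
h1 = mean_hit_at_k(preds, golds, 1)
h3 = mean_hit_at_k(preds, golds, 3)
```

The second example shows exactly the failure mode GIOA's re-ranking stage targets: a correct candidate that the initial decoding ranks too low to count for Hit@1.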
Funding: Provided by the Science Research Project of Hebei Education Department under Grant No. BJK2024115.
Abstract: High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information due to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial details pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, transformers utilize multi-head self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is proposed at the end of the encoder to further refine the encoder outputs, thereby enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to enable the modeling of rich multiscale global and local information. We also design a stair fusion mechanism to progressively transmit deep semantic information from deep to shallow layers. Finally, we propose the semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
Abstract: Advanced traffic monitoring systems encounter substantial challenges in vehicle detection and classification due to the limitations of conventional methods, which often demand extensive computational resources and struggle with diverse data acquisition techniques. This research presents a novel approach for vehicle classification and recognition in aerial image sequences, integrating multiple advanced techniques to enhance detection accuracy. The proposed model begins with preprocessing using Multiscale Retinex (MSR) to enhance image quality, followed by Expectation-Maximization (EM) segmentation for precise foreground object identification. Vehicle detection is performed using the state-of-the-art YOLOv10 framework, while feature extraction incorporates Maximally Stable Extremal Regions (MSER), Dense Scale-Invariant Feature Transform (Dense SIFT), and Zernike moments features to capture distinct object characteristics. Feature optimization is further refined through a hybrid swarm-based optimization algorithm, ensuring optimal feature selection for improved classification performance. The final classification is conducted using a Vision Transformer, leveraging its robust learning capabilities for enhanced accuracy. Experimental evaluations on benchmark datasets, including UAVDT and the Unmanned Aerial Vehicle Intruder Dataset (UAVID), demonstrate the superiority of the proposed approach, achieving an accuracy of 94.40% on UAVDT and 93.57% on UAVID. The results highlight the efficacy of the model in significantly enhancing vehicle detection and classification in aerial imagery, outperforming existing methodologies and offering a statistically validated improvement for intelligent traffic monitoring systems.
Abstract: Weakly supervised semantic segmentation (WSSS) is a tricky task in which only category information is provided for segmentation prediction. Thus, the key stage of WSSS is to generate pseudo labels. In convolutional neural network (CNN)-based methods, class activation mapping (CAM) is used to obtain the pseudo labels, but it concentrates only on the most discriminative parts. Recently, transformer-based methods utilize the attention map from the multi-headed self-attention (MHSA) module to predict pseudo labels, which usually contain obvious background noise and incoherent object areas. To solve the above problems, we use the Conformer as our backbone, which is a parallel network based on a convolutional neural network (CNN) and a Transformer. The two branches generate pseudo labels and refine them independently, and can effectively combine the advantages of CNNs and Transformers. However, the parallel structure is not close enough in its information communication; it can result in poor detail in the pseudo labels, and background noise still remains. To alleviate this problem, we propose the enhancing convolution CAM (ECCAM) model, which has three improved modules based on enhancing convolution: a deeper stem (DStem), a convolutional feed-forward network (CFFN), and a feature coupling unit with convolution (FCUConv). ECCAM gives the Conformer tighter interaction between the CNN and Transformer branches. Experimental verification shows that the proposed modules help the network perceive more local information from images, making the final segmentation results more refined. Compared with similar architectures, our modules greatly improve semantic segmentation performance and achieve 70.2% mean intersection over union (mIoU) on the PASCAL VOC 2012 dataset.
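For reference, the CAM that CNN-based methods use for pseudo labels is just a class-weighted sum of the final convolutional feature maps. A minimal sketch on a 2x2 toy feature grid (all values invented):

```python
def class_activation_map(feature_maps, class_weights):
    # CAM for one class: sum the final conv feature maps, each scaled
    # by that class's classifier weight for the channel.
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * fmap[i][j]
    return cam

fmaps = [[[1.0, 0.0], [0.0, 0.0]],   # channel 0 fires top-left
         [[0.0, 0.0], [0.0, 1.0]]]   # channel 1 fires bottom-right
cam = class_activation_map(fmaps, class_weights=[0.9, 0.1])
# The map is dominated by the top-left region, i.e. the single most
# discriminative part, which is exactly the limitation noted above.
```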
Funding: Supported by the Funds for Central-Guided Local Science and Technology Development (Grant No. 202407AC110005), Key Technologies for the Construction of a Whole-Process Intelligent Service System for Neuroendocrine Neoplasm, and the 2023 Opening Research Fund of the Yunnan Key Laboratory of Digital Communications (YNJTKFB-20230686, YNKLDC-KFKT-202304).
Abstract: In image analysis, high-precision semantic segmentation predominantly relies on supervised learning. Despite significant advancements driven by deep learning techniques, challenges such as class imbalance and dynamic performance evaluation persist. Traditional weighting methods, often based on pre-computed class counts, tend to overemphasize certain classes while neglecting others, particularly rare sample categories. Approaches like focal loss and other rare-sample segmentation techniques introduce multiple hyperparameters that require manual tuning, leading to increased experimental costs due to their instability. This paper proposes a novel CAWASeg framework to address these limitations. Our approach leverages Grad-CAM to generate class activation maps, identifying the key feature regions that the model focuses on during decision-making. We introduce a Comprehensive Segmentation Performance Score (CSPS) to dynamically evaluate model performance by converting these activation maps into pseudo masks and comparing them with the ground truth. Additionally, we design two adaptive weights for each class: a Basic Weight (BW) and a Ratio Weight (RW), which the model adjusts during training based on real-time feedback. Extensive experiments on the COCO-Stuff, Cityscapes, and ADE20K datasets demonstrate that our CAWASeg framework significantly improves segmentation performance for rare sample categories while enhancing overall segmentation accuracy. The proposed method offers a robust and efficient solution for addressing class imbalance in semantic segmentation tasks.
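The feedback loop can be sketched with a plain IoU standing in for the CSPS and a linear weight update. Both are hypothetical simplifications of the paper's actual CSPS formula and BW/RW update rules, kept only to show the direction of the adaptation:

```python
def csps(pseudo_mask, ground_truth):
    # IoU between the Grad-CAM-derived pseudo mask and the ground
    # truth, standing in for the paper's comprehensive score.
    inter = sum(1 for p, g in zip(pseudo_mask, ground_truth) if p and g)
    union = sum(1 for p, g in zip(pseudo_mask, ground_truth) if p or g)
    return inter / union if union else 1.0

def update_weight(base_weight, score, ratio=1.0):
    # Poorly segmented classes (low score) receive larger weights in
    # subsequent training steps; the linear rule is a simplification.
    return base_weight + ratio * (1.0 - score)

score = csps([1, 1, 0, 0], [1, 0, 1, 0])   # IoU = 1/3 for a weak class
weight = update_weight(1.0, score)          # its weight grows above 1.0
```

A rare class that the model currently misses gets a low score and hence a larger weight, which is the mechanism by which training attention shifts toward it.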
Funding: National Natural Science Foundation of China (Nos. 42301473, 42271424, 42171397); Chinese Postdoctoral Innovation Talents Support Program (No. BX20230299); China Postdoctoral Science Foundation (No. 2023M742884); Natural Science Foundation of Sichuan Province (Nos. 24NSFSC2264, 2025ZNSFSC0322); Key Research and Development Project of Sichuan Province (No. 24ZDYF0633).
Abstract: As a key node of modern transportation networks, the information management of road tunnels is crucial to ensuring operational safety and traffic efficiency. However, existing tunnel vehicle modeling methods generally suffer from problems such as insufficient 3D scene description capability and low dynamic update efficiency, making it difficult to meet the demand for real-time, accurate management. To this end, this paper proposes a vehicle twin modeling method for road tunnels. Starting from actual management needs, the approach supports multi-level dynamic modeling, from vehicle type and size to color, by constructing a vehicle model library that can be flexibly invoked. At the same time, semantic constraint rules covering geometric layout, behavioral attributes, and spatial relationships are designed to ensure that the virtual model matches the real vehicle with a high degree of similarity. Finally, a prototype system is constructed and case experiments are conducted in selected case areas, combining real-time monitoring data with the semantic constraints to realize dynamic updating and three-dimensional visualization of vehicle states in tunnels. The experiments show that the proposed method runs smoothly with an average rendering time of 17.70 ms while guaranteeing modeling accuracy (composite similarity of 0.867), significantly improving the real-time performance and intuitiveness of tunnel management. The research results provide reliable technical support for intelligent operation and emergency response of road tunnels, and offer new ideas for digital twin modeling of complex scenes.
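A composite similarity such as the reported 0.867 can be thought of as a weighted agreement across the semantic attributes the constraint rules cover. The attribute set, weights, and scoring rules below are hypothetical illustrations, not the paper's actual formula:

```python
def composite_similarity(real, twin, weights):
    # Weighted agreement over semantic attributes: exact match for
    # type and color, relative difference for size.
    score = 0.0
    if real["type"] == twin["type"]:
        score += weights["type"]
    size_sim = 1.0 - abs(real["size"] - twin["size"]) / max(real["size"], twin["size"])
    score += weights["size"] * size_sim
    if real["color"] == twin["color"]:
        score += weights["color"]
    return score

weights = {"type": 0.4, "size": 0.3, "color": 0.3}       # hypothetical weighting
real = {"type": "truck", "size": 12.0, "color": "red"}   # monitored vehicle
twin = {"type": "truck", "size": 11.4, "color": "red"}   # library model instance
sim = composite_similarity(real, twin, weights)          # close to 1.0 for a good match
```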
Funding: Funded by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 22KJD440001) and the Changzhou Science & Technology Program (Grant No. CJ20220232).
Abstract: The Internet of Vehicles (IoV) has become an important direction in the field of intelligent transportation, in which vehicle positioning is a crucial part. Simultaneous Localization and Mapping (SLAM) technology plays a crucial role in vehicle localization and navigation. Traditional SLAM systems are designed for use in static environments, and they can perform poorly in terms of accuracy and robustness when used in dynamic environments where objects are in constant movement. To address this issue, a new real-time visual SLAM system called MG-SLAM has been developed. Based on ORB-SLAM2, MG-SLAM incorporates a dynamic target detection process that enables the detection of both known and unknown moving objects. In this process, a separate semantic segmentation thread segments dynamic target instances, and the Mask R-CNN algorithm is run on the Graphics Processing Unit (GPU) to accelerate segmentation. To reduce computational cost, only keyframes are segmented to identify known dynamic objects. Additionally, a multi-view geometry method is adopted to detect unknown moving objects. The results demonstrate that MG-SLAM achieves higher precision, with an improvement from 0.2730 m to 0.0135 m. Moreover, the processing time required by MG-SLAM is significantly reduced compared to other dynamic-scene SLAM algorithms, which illustrates its efficacy in locating objects in dynamic scenes.
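The cost saving from segmenting only keyframes is easy to quantify. In the sketch below, a fixed stride stands in for ORB-SLAM2's tracking-based keyframe criteria (which are not stride-based); frames between keyframes would reuse the last keyframe's dynamic-object masks:

```python
def select_keyframes(frame_ids, stride=5):
    # Only every `stride`-th frame is sent to the Mask R-CNN
    # segmentation thread (stride is an illustrative stand-in for
    # ORB-SLAM2's actual keyframe selection criteria).
    return [f for f in frame_ids if f % stride == 0]

frames = list(range(12))
keyframes = select_keyframes(frames)
saved = 1.0 - len(keyframes) / len(frames)   # fraction of segmentations skipped
```

Even this crude schedule skips three quarters of the per-frame segmentation work, which is the kind of saving that keeps the pipeline real-time.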
Funding: Supported by the project "GEF9874: Strengthening Coordinated Approaches to Reduce Invasive Alien Species (IAS) Threats to Globally Significant Agrobiodiversity and Agroecosystems in China" and by the Excellent Talent Training Funding Project in Dongcheng District, Beijing, under Project No. 2024-dchrcpyzz-9.
Abstract: Ecological monitoring vehicles are equipped with a range of sensors and monitoring devices designed to gather data on ecological and environmental factors. These vehicles are crucial in various fields, including environmental science research, ecological and environmental monitoring projects, disaster response, and emergency management. A key method employed in these vehicles for achieving high-precision positioning is LiDAR (light detection and ranging)-Visual Simultaneous Localization and Mapping (SLAM). However, maintaining high-precision localization in complex scenarios, such as degraded environments or when dynamic objects are present, remains a significant challenge. To address this issue, we integrate both semantic and texture information from LiDAR and cameras to enhance the robustness and efficiency of data registration. Specifically, semantic information simplifies the modeling of scene elements, reducing the reliance on dense point clouds, which can be less efficient. Meanwhile, visual texture information complements LiDAR-Visual localization by providing additional contextual details. By incorporating semantic and texture details from paired images and point clouds, we significantly improve the quality of data association, thereby increasing the success rate of localization. This approach not only enhances the operational capabilities of ecological monitoring vehicles in complex environments but also contributes to improving the overall efficiency and effectiveness of ecological monitoring and environmental protection efforts.
Abstract: Lower back pain is one of the most common medical problems in the world, experienced by a large proportion of people everywhere. Because of its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging (MRI) is considered the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnosis of lumbar diseases, yet it is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture for medical image analysis that achieves high segmentation accuracy. This work proposes a modified U-Net architecture, MU-Net, whose Meijering convolutional layer incorporates the Meijering filter, to perform semantic segmentation of the lumbar vertebrae L1 to L5 and the sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset, publicly available from Mendeley Data. On this dataset, the proposed MU-Net model achieves 98.79% pixel accuracy (PA), a 98.66% Dice similarity coefficient (DSC), a 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU).
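The four metrics reported above (PA, DSC, Jaccard/IoU, mean IoU) can be computed directly from integer label maps. This is a minimal sketch; details such as how absent classes are handled may differ from the paper's evaluation protocol.

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Pixel accuracy, mean Dice (DSC), and mean IoU (Jaccard) from integer
    label maps of identical shape. Classes absent from both maps are
    skipped rather than counted as perfect."""
    pa = float(np.mean(pred == gt))  # fraction of correctly labeled pixels
    dices, ious = [], []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        if p.sum() + g.sum() == 0:
            continue  # class absent in both maps: skip
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        dices.append(2 * inter / (p.sum() + g.sum()))  # DSC = 2|P∩G|/(|P|+|G|)
        ious.append(inter / union)                     # IoU = |P∩G|/|P∪G|
    return pa, float(np.mean(dices)), float(np.mean(ious))
```

Note that Dice and IoU are monotonically related (DSC = 2·IoU/(1+IoU)), which is why the reported DSC exceeds the Jaccard coefficient on the same predictions.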
Funding: Supported by the HFIPS Director's Foundation (YZJJ202207-TS), the National Natural Science Foundation of China (82371931), the Natural Science Foundation of Anhui Province (2008085MC69), the Natural Science Foundation of Hefei City (2021033), the General Scientific Research Project of Anhui Provincial Health Commission (AHWJ2021b150), the Collaborative Innovation Program of Hefei Science Center, CAS (2021HSC-CIP013), and the Anhui Province Key Research and Development Project (202204295107020004).
Abstract: Semantic segmentation of eye images is a complex task with important applications in human-computer interaction, cognitive science, and neuroscience. Real-time, accurate, and robust segmentation algorithms are crucial for computationally limited portable devices such as augmented reality and virtual reality headsets. With the rapid advancement of deep learning, many network models have been developed specifically for eye image segmentation. Some methods divide the segmentation process into multiple stages to miniaturize model parameters, enhancing the output through post-processing to improve segmentation accuracy, which significantly increases inference time. Other networks adopt more complex encoding and decoding modules to achieve end-to-end output, which requires substantial computation. Balancing model size, accuracy, and computational complexity is therefore essential. To address these challenges, we propose a lightweight asymmetric UNet architecture and a projection loss function. We use ResNet 3-layer blocks to enhance feature extraction efficiency in the encoding stage. In the decoding stage, we employ regular convolutions and skip connections to upscale the feature maps from the latent space to the original image size, balancing model size against segmentation accuracy. In addition, we leverage the geometric features of the eye region and design a projection loss function that further improves segmentation accuracy without adding any inference-time computational cost. We validate our approach on the OpenEDS2019 dataset for virtual reality and achieve state-of-the-art performance with 95.33% mean intersection over union (mIoU). Our model has only 0.63M parameters and runs at 350 FPS, which are 68% and 200% of those of the state-of-the-art model RITNet, respectively.
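The abstract does not define the projection loss, so the following is only one plausible construction consistent with its description: both masks are projected (summed) onto the image axes, and the L1 gap between the resulting 1-D profiles is penalized, exploiting the roughly elliptical geometry of eye regions. The formulation is an assumption, not the paper's actual loss.

```python
import numpy as np

def projection_loss(pred_mask, gt_mask):
    """Hypothetical axis-projection loss sketch: sum each mask along rows
    and along columns, then penalize the mean absolute difference between
    the predicted and ground-truth 1-D projection profiles."""
    loss = 0.0
    for axis in (0, 1):  # project onto the vertical and horizontal axes
        p = pred_mask.sum(axis=axis)
        g = gt_mask.sum(axis=axis)
        loss += float(np.abs(p - g).mean())
    return loss
```

Because the projections are computed only during training, a term of this shape adds nothing to inference cost, matching the claim in the abstract.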
Abstract: Semantic segmentation is a core task in computer vision that allows AI models to understand and interact with their surrounding environment. Just as humans subconsciously segment scenes, this ability is crucial for scene understanding. However, many semantic learning models face a lack of data: existing video datasets are limited to short, low-resolution videos that are not representative of real-world examples. One of our key contributions is therefore a customized semantic segmentation version of the Walking Tours Dataset, featuring hour-long, high-resolution, real-world footage from tours of different cities. Additionally, we evaluate the performance of the open-vocabulary semantic segmentation model OpenSeeD on our custom dataset and discuss future implications.