Cabin cables, as critical components of an aircraft's electrical system, significantly affect the aircraft's operational efficiency and safety. Existing methods for segmenting cables in civil-aviation cabins are limited, particularly in terms of automation: they depend heavily on large amounts of data and resources and lack the flexibility to adapt to different scenarios. To address these challenges, this paper introduces CableSAM, a novel image segmentation model designed specifically for the automated segmentation of cabin cables. CableSAM improves segmentation efficiency and accuracy through knowledge distillation and employs a context ensemble strategy, allowing it to segment cables accurately in various scenarios with minimal input prompts. Comparative experiments on three cable datasets demonstrate that CableSAM outperforms other advanced cable segmentation methods.
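To illustrate the knowledge-distillation idea mentioned in the abstract above, the following is a minimal, hypothetical sketch rather than the authors' implementation: a student segmentation network is trained to match both the ground-truth masks and the softened per-pixel outputs of a larger teacher. The names, `temperature`, and `alpha` weighting are illustrative assumptions.

```python
# Minimal sketch of per-pixel knowledge distillation for segmentation.
# All names (teacher, student, temperature, alpha) are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, temperature=2.0, alpha=0.5):
    """student_logits, teacher_logits: (N, C, H, W); target: (N, H, W) class indices."""
    # Hard-label segmentation loss against the ground truth.
    ce = F.cross_entropy(student_logits, target)
    # Soft-label loss: the student matches the teacher's softened per-pixel distribution.
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
    return alpha * ce + (1.0 - alpha) * kd
```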
Automatic segmentation and recognition of the content and element information in color geological maps are of great significance for researchers analyzing the distribution of mineral resources and predicting disaster information. This article focuses on color planar raster geological maps (geological maps include planar geological maps, columnar maps, and profiles). Although existing deep learning approaches are often used to segment general images, their performance is limited by the complex elements, diverse regional features, and complicated backgrounds of color geological maps in the geoscience domain. To address this issue, a color geological map segmentation model named GeoMSeg is proposed that combines the Felz clustering algorithm with an improved SE-UNet deep learning network. First, a symmetric encoder-decoder backbone network based on UNet is constructed, and the SENet channel attention mechanism is incorporated to strengthen the network's capacity for feature representation, enabling the model to extract map information purposefully. The SE-UNet network is employed to extract features from the geological map and obtain coarse segmentation results. Second, the Felz clustering algorithm is used for superpixel pre-segmentation of the geological maps, and the coarse segmentation results are refined according to the superpixel pre-segmentation to obtain the final segmentation. GeoMSeg is applied to the constructed dataset, and the experimental results show that the proposed algorithm outperforms other mainstream map segmentation models, with an accuracy of 91.89% and an MIoU of 71.91%.
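Assuming the "Felz clustering algorithm" refers to the Felzenszwalb-Huttenlocher graph-based segmentation available in scikit-image, the sketch below shows a generic way that superpixel pre-segmentation can refine a coarse semantic map by majority vote. The parameters and the voting rule are illustrative, not the exact GeoMSeg procedure.

```python
# Sketch: refine a coarse semantic map with Felzenszwalb superpixels by majority vote.
# Parameters (scale, sigma, min_size) are illustrative; this is not the exact GeoMSeg rule.
import numpy as np
from skimage.segmentation import felzenszwalb

def refine_with_superpixels(image_rgb, coarse_labels, scale=100, sigma=0.8, min_size=50):
    """image_rgb: (H, W, 3) array; coarse_labels: (H, W) integer class map."""
    superpixels = felzenszwalb(image_rgb, scale=scale, sigma=sigma, min_size=min_size)
    refined = coarse_labels.copy()
    for sp_id in np.unique(superpixels):
        mask = superpixels == sp_id
        # Assign the whole superpixel the most frequent coarse class inside it.
        values, counts = np.unique(coarse_labels[mask], return_counts=True)
        refined[mask] = values[np.argmax(counts)]
    return refined
```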
The Internet of Vehicles (IoV) has become an important direction in intelligent transportation, in which vehicle positioning is a crucial part. Simultaneous Localization and Mapping (SLAM) technology plays a crucial role in vehicle localization and navigation. Traditional SLAM systems are designed for static environments and can suffer poor accuracy and robustness in dynamic environments where objects are in constant motion. To address this issue, a new real-time visual SLAM system called MG-SLAM has been developed. Built on ORB-SLAM2, MG-SLAM incorporates a dynamic target detection process that detects both known and unknown moving objects. In this process, a separate semantic segmentation thread segments dynamic target instances, and the Mask R-CNN algorithm runs on the Graphics Processing Unit (GPU) to accelerate segmentation. To reduce computational cost, only keyframes are segmented to identify known dynamic objects. Additionally, a multi-view geometry method is adopted to detect unknown moving objects. The results demonstrate that MG-SLAM achieves higher precision, improving from 0.2730 m to 0.0135 m. Moreover, MG-SLAM requires significantly less processing time than other dynamic-scene SLAM algorithms, which illustrates its efficacy in locating objects in dynamic scenes.
Considering that the three-dimensional (3D) U-Net lacks sufficient local feature extraction and pays little attention to the fusion of high- and low-level features, we propose a new model called 3DMAU-Net, based on the 3D U-Net architecture, for liver region segmentation. Our model replaces the last two layers of the 3D U-Net with a sliding window-based multilayer perceptron (SMLP), enabling better extraction of local image features. We also design a dilated convolution block for high- and low-level feature fusion that focuses on local features and better supplements the contextual information surrounding the target region. This block is embedded throughout the encoding process, ensuring that the network does not simply downsample: before each feature extraction step, the input features are processed by the dilated convolution block. We validate our experiments on the Liver Tumor Segmentation Challenge 2017 (LiTS2017) dataset, where our model achieves a Dice coefficient of 0.95, an improvement of 0.015 over the 3D U-Net model. Furthermore, we compare our results with other segmentation methods, and our model consistently outperforms them.
Brain tumor segmentation is critical in clinical diagnosis and treatment planning. Existing methods for brain tumor segmentation with missing modalities often struggle when several modalities are missing at once, a common scenario in real-world clinical settings. These methods primarily handle a single missing modality at a time, making them insufficiently robust for the additional complexity of incomplete data containing various combinations of missing modalities. Additionally, most existing methods rely on single models, which may limit their performance and increase the risk of overfitting the training data. This work proposes the ensemble adversarial co-training neural network (EACNet) for accurate brain tumor segmentation from multi-modal magnetic resonance imaging (MRI) scans with multiple missing modalities. The proposed method consists of three key modules. The ensemble of pre-trained models captures diverse feature representations from the MRI data. Adversarial learning uses a competitive training approach involving two models: a generator creates realistic missing data, while sub-networks acting as discriminators learn to distinguish real data from the generated "fake" data. The co-training framework uses the information extracted by the multimodal path (trained on complete scans) to guide the learning process in the path that handles missing modalities; through these co-training interactions, the model can potentially compensate for missing information by exploiting the relationships between the available modalities and the tumor segmentation task. EACNet was evaluated on the BraTS2018 and BraTS2020 challenge datasets and achieved state-of-the-art and competitive performance, respectively. Notably, the whole tumor (WT) dice similarity coefficient (DSC) reached 89.27%, surpassing existing methods. The analysis suggests that the ensemble approach offers potential benefits and that the adversarial co-training contributes to the increased robustness and accuracy of EACNet for segmenting MRI scans with missing modalities. The experimental results show that EACNet is promising for this task and a strong candidate for real-world clinical applications.
Medical image segmentation has become a cornerstone of many healthcare applications, allowing the automated extraction of critical information from images such as Computed Tomography (CT) scans, Magnetic Resonance Imaging (MRI), and X-rays. The introduction of U-Net in 2015 significantly advanced segmentation capabilities, especially for the small datasets common in medical imaging. Since then, various modifications to the original U-Net architecture have been proposed to enhance segmentation accuracy and tackle challenges such as class imbalance, data scarcity, and multi-modal image processing. This paper provides a detailed review and comparison of several U-Net-based architectures, focusing on their effectiveness in medical image segmentation tasks. We evaluate performance metrics such as the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) across different U-Net variants, including HmsU-Net, CrossU-Net, mResU-Net, and others. Our results indicate that architectural enhancements such as transformers, attention mechanisms, and residual connections improve segmentation performance across diverse medical imaging applications, including tumor detection, organ segmentation, and lesion identification. The study also identifies current challenges in the field, including data variability, limited dataset sizes, and class imbalance. Based on these findings, the paper suggests potential future directions for improving the robustness and clinical applicability of U-Net-based models in medical image segmentation.
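Since the review compares variants using DSC and IoU, a small reference implementation of both metrics for binary masks is given below; the `eps` smoothing term is a common convention rather than anything specified by the papers under review.

```python
# Reference implementations of the two metrics used throughout the review,
# computed for binary masks; `eps` avoids division by zero.
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (intersection + eps) / (union + eps)
```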
The segmentation of retinal vessels and coronary angiographs is essential for diagnosing conditions such as glaucoma, diabetes, hypertension, and coronary artery disease. However, retinal vessels and coronary angiographs are characterized by low contrast and complex structures, which pose challenges for vessel segmentation. Moreover, CNN-based approaches are limited in capturing long-range pixel relationships because of their focus on local feature extraction, while ViT-based approaches struggle to capture fine local details, which hurts tasks such as vessel segmentation that require precise boundary detection. To address these issues, we propose the Global-Local Hybrid Modulation Network (GLHM-Net), a dual-encoder architecture that combines the strengths of CNNs and ViTs for vessel segmentation. First, the Hybrid Non-Local Transformer Block (HNLTB) is proposed to efficiently consolidate long-range spatial dependencies into a compact feature representation, providing a global perspective while significantly reducing computational overhead. Second, the Collaborative Attention Fusion Block (CAFB) is proposed to integrate local and global vessel features more effectively at the same hierarchical level during encoding. Finally, the Feature Cross-Modulation Block (FCMB) complements the local and global features in the decoding stage, enhancing feature learning and minimizing information loss. Experiments on the DRIVE, CHASEDB1, DCA1, and XCAD datasets achieve AUC values of 0.9811, 0.9864, 0.9915, and 0.9919, F1 scores of 0.8288, 0.8202, 0.8040, and 0.8150, and IoU values of 0.7076, 0.6952, 0.6723, and 0.6878, respectively, demonstrating the strong performance of the proposed network for vessel segmentation.
This paper presents CW-HRNet, a high-resolution, lightweight crack segmentation network designed to address the challenges of complex scenes with slender, deformable, and blurred crack structures. The model incorporates two key modules: Constrained Deformable Convolution (CDC), which stabilizes geometric alignment by applying a tanh limiter and a learnable scaling factor to the predicted offsets, and the Wavelet Frequency Enhancement Module (WFEM), which decomposes features with Haar wavelets to preserve low-frequency structures while enhancing high-frequency boundaries and textures. Evaluations on the CrackSeg9k benchmark demonstrate CW-HRNet's superior performance: it achieves 82.39% mIoU with only 7.49M parameters and 10.34 GFLOPs, outperforming HrSegNet-B48 by 1.83% in segmentation accuracy with minimal additional complexity. The model also shows strong cross-dataset generalization, achieving 60.01% mIoU and 66.22% F1 on Asphalt3k without fine-tuning. These results highlight CW-HRNet's favorable accuracy-efficiency trade-off for real-world crack segmentation tasks.
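The CDC offset constraint can be pictured with a short sketch: raw predicted offsets are squashed by tanh and rescaled by a learnable factor before being passed to a deformable convolution. The module below is an assumption-laden illustration of that idea; the module structure and the use of torchvision's deform_conv2d are ours, not the paper's.

```python
# Sketch of the "constrained offsets" idea in CDC: raw offsets -> tanh -> learnable scale.
# Module layout and the use of torchvision's deform_conv2d are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class ConstrainedDeformConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.scale = nn.Parameter(torch.ones(1))           # learnable scaling factor
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.k = k

    def forward(self, x):
        raw = self.offset_pred(x)
        offset = self.scale * torch.tanh(raw)              # bounded, rescaled offsets
        return deform_conv2d(x, offset, self.weight, padding=self.k // 2)
```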
Neuronal soma segmentation plays a crucial role in neuroscience applications. However, fine structures such as boundaries, small-volume neuronal somata, and fibers are commonly present in cell images and pose a challenge for accurate segmentation. In this paper, we propose a 3D semantic segmentation network for neuronal soma segmentation to address this issue. Using an encoding-decoding structure, we introduce a Multi-Scale feature extraction and Adaptive Weighting fusion module (MSAW) after each encoding block. The MSAW module not only emphasizes fine structures via an upsampling strategy but also provides pixel-wise weights that measure the importance of the multi-scale features. Additionally, dynamic convolution is employed instead of normal convolution to better adapt the network to input data with different distributions. The proposed MSAW-based semantic segmentation network (MSAW-Net) was evaluated on three neuronal soma images from mouse brain and one from macaque brain, demonstrating the efficiency of the proposed method. It achieved F1 scores of 91.8% on the Fezf2-2A-CreER dataset, 97.1% on the LSL-H2B-GFP dataset, 82.8% on the Thy1-EGFP-Mline dataset, and 86.9% on the macaque dataset, improving over the 3D U-Net model by 3.1%, 3.3%, 3.9%, and 2.3%, respectively.
In recent years, video coding has been widely applied in video image processing to remove redundant information and improve data transmission efficiency. However, during the video coding process, irrelevant objects such as background elements are often encoded because of environmental disturbances, wasting computational resources. Existing research on video coding efficiency primarily optimizes encoding units during intra-frame or inter-frame prediction, after the coding units have been generated, and neglects the optimization of video images before coding-unit generation. To address this challenge, this work proposes an image semantic segmentation compression algorithm based on macroblock encoding (ISSC-ME), which consists of three modules. (1) The semantic label generation module generates labels for objects of interest using a grid-based approach to reduce redundant coding of consecutive frames. (2) The image segmentation network module generates a semantic segmentation image using U-Net. (3) The macroblock coding module is a block-segmentation-based video encoding and decoding algorithm used to compress images and improve video transmission efficiency. Experimental results show that the proposed algorithm reduces computational costs while improving overall accuracy by 1.00% and mean intersection over union (IoU) by 1.20%. In addition, the proposed compression algorithm uses macroblock fusion, achieving an image compression rate of 80.64%. The algorithm greatly reduces data storage and transmission and enables fast image compression at the millisecond level.
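A toy illustration of macroblock-level coding follows: split a frame into fixed-size blocks and keep only the blocks that changed beyond a threshold relative to the previous frame. The block size and threshold are illustrative choices and not the ISSC-ME specification.

```python
# Toy macroblock sketch: encode only blocks that changed versus the previous frame.
# Block size and threshold are illustrative and not taken from the ISSC-ME paper.
import numpy as np

def changed_macroblocks(prev_frame, cur_frame, block=16, threshold=5.0):
    """Return (row, col) indices of blocks whose mean absolute change exceeds threshold."""
    h, w = cur_frame.shape[:2]
    changed = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            diff = np.abs(cur_frame[r:r+block, c:c+block].astype(np.float32)
                          - prev_frame[r:r+block, c:c+block].astype(np.float32))
            if diff.mean() > threshold:
                changed.append((r // block, c // block))
    return changed
```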
The key to the success of few-shot semantic segmentation (FSS) lies in the efficient use of a limited annotated support set to accurately segment novel classes in the query set. Because of the few samples in the support set, FSS faces challenges such as intra-class differences, background (BG) mismatches between query and support sets, and ambiguous segmentation between the foreground (FG) and BG in the query set. To address these issues, this paper proposes a multi-module network called CAMSNet, which includes four modules: the General Information Module (GIM), the Class Activation Map Aggregation (CAMA) module, the Self-Cross Attention (SCA) Block, and the Feature Fusion Module (FFM). In CAMSNet, the GIM employs an improved triplet loss that concatenates word embedding vectors and support prototypes as anchors and uses local support features of the FG and BG as positive and negative samples, which helps address intra-class differences. Then, for the first time, the Class Activation Map (CAM) from Weakly Supervised Semantic Segmentation (WSSS) is applied to FSS within the CAMA module, replacing the traditional use of cosine similarity to locate query information. Subsequently, the SCA Block processes the support and query features aggregated by the CAMA module, significantly enhancing the understanding of the input information, leading to more accurate predictions and effectively addressing BG mismatch and ambiguous FG-BG segmentation. Finally, the FFM combines general class information with the enhanced query information to achieve accurate segmentation of the query image. Extensive experiments on PASCAL-5i and COCO-20i demonstrate that CAMSNet yields superior performance and sets a new state of the art.
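The improved triplet loss described above builds on the standard margin-based form, sketched below. The anchor is assumed to be the concatenation of a word embedding and a support prototype, as the abstract states, but the projection of positive/negative features to a matching dimensionality and the margin value are our simplifying assumptions.

```python
# Generic triplet margin loss, as a simplified illustration of the GIM idea (not the exact loss).
# The anchor is assumed to be a concatenated word embedding + support prototype;
# positive/negative are local FG/BG support features projected to the same dimensionality.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.5):
    """anchor, positive, negative: (N, D) feature tensors."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    # Pull the anchor toward FG features and push it away from BG features by at least `margin`.
    return F.relu(d_pos - d_neg + margin).mean()
```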
Brain tumors present significant challenges in medical diagnosis and treatment, where early detection is crucial for reducing morbidity and mortality. This research introduces a novel deep learning model, the Progressive Layered U-Net (PLU-Net), designed to improve the accuracy of brain tumor segmentation from Magnetic Resonance Imaging (MRI) scans. The PLU-Net extends the standard U-Net architecture with progressive layering, attention mechanisms, and multi-scale data augmentation. The progressive layering uses a cascaded structure that refines segmentation masks across multiple stages, allowing the model to capture features at different scales and resolutions. Attention gates within the convolutional layers selectively focus on relevant features while suppressing irrelevant ones, enhancing the model's ability to delineate tumor boundaries. Additionally, multi-scale data augmentation increases the diversity of the training data and boosts the model's generalization capabilities. Evaluated on the BraTS 2021 dataset, the PLU-Net achieved state-of-the-art performance with a Dice coefficient of 0.91, specificity of 0.92, sensitivity of 0.89, and Hausdorff95 of 2.5, outperforming other modified U-Net architectures in segmentation accuracy. These results underscore the effectiveness of the PLU-Net in improving brain tumor segmentation from MRI scans, supporting clinicians in early diagnosis, treatment planning, and the development of new therapies.
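Attention gates of the kind mentioned above are commonly implemented as additive gates in the spirit of Attention U-Net; the sketch below shows that generic form. The channel sizes and the exact placement inside PLU-Net are assumptions, not details from the paper.

```python
# Sketch of a standard additive attention gate (in the spirit of Attention U-Net);
# the exact design used in PLU-Net is not specified here, so this is illustrative only.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, in_ch_x, in_ch_g, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(in_ch_x, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv2d(in_ch_g, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        # x: skip-connection features; g: gating signal at the same spatial size.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn                                    # suppress irrelevant regions
```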
Lower back pain is one of the most common medical problems in the world and is experienced by a large percentage of people everywhere. Because of its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging is considered the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnosis of lumbar diseases, but it is difficult to separate the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture for medical image analysis tasks and achieves high segmentation accuracy. This work proposes a modified U-Net architecture, MU-Net, whose Meijering convolutional layer incorporates the Meijering filter to perform semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset, publicly available from Mendeley Data. The proposed MU-Net model achieves strong performance for semantic segmentation of the lumbar vertebrae, with 98.79% pixel accuracy (PA), 98.66% dice similarity coefficient (DSC), 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU) on this dataset.
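The Meijering filter referenced above is available in scikit-image. The sketch below only illustrates applying it as a ridge-enhancing step on a grayscale slice; the sigma values are illustrative, and this is not the MU-Net "Meijering convolutional layer" itself.

```python
# Sketch: ridge enhancement with the Meijering filter from scikit-image.
# This only illustrates the filter; the MU-Net Meijering convolutional layer is not shown.
import numpy as np
from skimage.filters import meijering

def enhance_ridges(mri_slice):
    """mri_slice: 2-D grayscale array; returns a ridge response in [0, 1]."""
    img = mri_slice.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # normalize to [0, 1]
    return meijering(img, sigmas=range(1, 4), black_ridges=False)
```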
Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed images from labeled and unlabeled inputs loses much of the labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. More precisely, SADT trains the Student Network with pseudo-label-based training from Teacher Network 1 and supervised learning on labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve segmentation in low-contrast and local areas; in this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that the proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
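The bi-directional copy-paste mentioned above can be pictured with a toy sketch: a region from a labeled image is pasted into an unlabeled image and vice versa. Using a single random rectangular mask and omitting the progressive high-entropy filtering are simplifications of ours, not the SADT procedure.

```python
# Toy bi-directional copy-paste between a labeled and an unlabeled image using one
# random rectangular mask; the paper's progressive high-entropy filtering is omitted.
import numpy as np

def bidirectional_copy_paste(labeled_img, unlabeled_img, rng=None):
    """Both images: (H, W) or (H, W, C) arrays of identical shape."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = labeled_img.shape[:2]
    bh, bw = h // 2, w // 2                                # illustrative patch size
    top = rng.integers(0, h - bh + 1)
    left = rng.integers(0, w - bw + 1)
    mask = np.zeros(labeled_img.shape, dtype=bool)
    mask[top:top + bh, left:left + bw] = True
    mixed_a = np.where(mask, unlabeled_img, labeled_img)   # unlabeled patch pasted onto labeled
    mixed_b = np.where(mask, labeled_img, unlabeled_img)   # labeled patch pasted onto unlabeled
    return mixed_a, mixed_b
```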
Retinal blood vessel segmentation is crucial for diagnosing ocular and cardiovascular diseases. Although the introduction of U-Net in 2015 by Olaf Ronneberger significantly advanced this field, issues such as limited training data, imbalanced data distribution, and inadequate feature extraction persist, hindering both segmentation performance and model generalization. Addressing these critical issues, the DEFFA-Unet is proposed, featuring an additional encoder that processes domain-invariant pre-processed inputs, thereby yielding richer feature encoding and enhanced model generalization. A feature filtering fusion module is developed to ensure precise feature filtering and robust hybrid feature fusion. In response to the task-specific need for higher precision, where false positives are very costly, traditional skip connections are replaced with an attention-guided feature reconstructing fusion module. Additionally, novel data augmentation and balancing methods are proposed to counter data scarcity and distribution imbalance, further boosting the robustness and generalization of the model. With a comprehensive suite of evaluation metrics, extensive validation on four benchmark datasets (DRIVE, CHASEDB1, STARE, and HRF) and an SLO dataset (IOSTAR) demonstrates the proposed method's superiority over both baseline and state-of-the-art models. In particular, the proposed method significantly outperforms the compared methods in cross-validation model generalization.
The use of AI technologies for remote sensing (RS) tasks has drawn attention from both the professional and academic domains. This integration promises more accessible interfaces and tools that allow people with little or no experience to interact intuitively with RS data in multiple formats. However, the use of AI and AI agents to automate RS-related tasks is still in its infancy, with some frameworks and interfaces built on top of well-known vision language models (VLM) such as GPT-4, the segment anything model (SAM), and Grounding DINO. These tools show promise and draw guidelines on the potential and limitations of existing solutions that use such models. In this work, state-of-the-art AI foundation models (FM) are reviewed and used in a multi-modal manner to ingest RS imagery and perform zero-shot object detection driven by natural language. The natural language input defines the classes or labels the model should look for, and both inputs are then fed to the pipeline. The pipeline presented in this work compensates for the shortcomings of general-knowledge FMs by stacking pre-processing and post-processing applications on top of them; these applications include tiling, which produces uniform patches of the original image for faster detection, and outlier rejection of redundant bounding boxes using statistical and machine learning methods. The pipeline was tested with UAV, aerial, and satellite images taken over multiple areas. By utilizing the pipeline and techniques proposed in this work, semantic segmentation accuracy improved from the original 64% to approximately 80%-99%. GitHub repository: MohanadDiab/LangRS.
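The tiling step described above, cutting a large RS image into uniform patches before detection, can be sketched as follows. The tile size and the lack of overlap are illustrative choices and not taken from the LangRS repository.

```python
# Sketch of the tiling pre-processing step: cut a large image into uniform patches
# and remember each patch's offset so detections can be mapped back to the full image.
# Tile size and the absence of overlap are illustrative, not the LangRS settings.
import numpy as np

def tile_image(image, tile=512):
    """image: (H, W, C) array. Yields ((row_offset, col_offset), patch) pairs."""
    h, w = image.shape[:2]
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            yield (r, c), image[r:r + tile, c:c + tile]
```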
In this paper, we introduce an innovative method for computer-aided design (CAD) segmentation that concatenates meshes and CAD models. Many previous CAD segmentation methods have achieved impressive performance using single representations such as meshes, CAD models, or point clouds. However, existing methods cannot effectively combine different three-dimensional model types for direct conversion, alignment, and preservation of geometric and topological information. Hence, we propose an integration approach that combines the geometric accuracy of CAD data with the flexibility of mesh representations, and we introduce a unique hybrid representation that combines CAD and mesh models to enhance segmentation accuracy. To combine these two model types, our hybrid system uses advanced neural-network techniques to convert CAD models into mesh models. For complex CAD models, model segmentation is crucial for model retrieval and reuse; in partial retrieval, it aims to decompose a complex CAD model into several simple components. The first component of our hybrid system involves advanced mesh-labeling algorithms that transfer digitized CAD properties to mesh models. The second component integrates labelled face features for CAD segmentation by leveraging the abundant multi-semantic information embedded in CAD models. This combination of mesh and CAD not only refines the accuracy of boundary delineation but also provides a comprehensive understanding of the underlying object semantics. This study uses the Fusion 360 Gallery dataset. Experimental results indicate that our hybrid method segments these models with higher accuracy than other methods that use single representations.
Mountain front faults form the boundary between mountains and adjacent plains. These faults can propagate toward the plains and escalate the seismic hazard for nearby cities. The North Tehran Fault (NTF) is a mountain front fault bordering the Central Alborz with the Tehran and Karaj plains. Structural and morphotectonic data from interpreted aerial photographs, satellite images, airborne geomagnetic data, and field surveying have been used for detailed segmentation and evolution of the North Tehran Fault. This resulted in the identification of the fault segments, from east to west, as the Niknamdeh, Darband, Darakeh-Garmdarreh, and Karaj segments. The active kinematics of these segments include both thrusting and left-lateral components, but the dominant component differs among the segments. The Niknamdeh segment is connected to the Mosha Fault by a hard linkage, while its connection with the Darband segment is a widespread deformation zone. The connection zone between the Darband and Darakeh-Garmdarreh segments has the highest density of minor faults along the North Tehran Fault. The boundary between the Darakeh-Garmdarreh and Karaj segments is controlled by the F-3 transverse fault, which has offset the NTF by ~3 km right-laterally. The NTF was inverted from a normal fault to a dextral oblique fault in the Miocene, and its kinematics changed from dextral to sinistral in the Pliocene-Quaternary. Further regional oblique convergence resulted in minor fault reactivation, such as relay-ramp breaching faults, the propagation of several footwall branches and hangingwall bypasses, geometrical changes of alluvial fans, and the southward transfer of the deformation front to the Tehran and Karaj plains. The findings of this paper are also applicable to other active obliquely converging mountain fronts, inverted mountain front faults, and the transfer of deformation from these structures to the foreland basin.
Reticular structures are the basis of major infrastructure, including bridges, electrical pylons, and airports. However, inspecting and maintaining these structures is both expensive and hazardous, traditionally requiring human involvement. While some research has been conducted in this field, most efforts focus on fault identification from images or on the design of robotic platforms, often neglecting the autonomous navigation of robots through the structure. This study addresses this limitation by proposing methods to detect navigable surfaces in truss structures, thereby enhancing the ability of climbing robots to navigate these environments autonomously. The paper proposes multiple approaches for the binary segmentation of navigable surfaces versus background from 3D point clouds captured from metallic trusses. The approaches fall into two paradigms: analytical algorithms and deep learning methods. Within the analytical approach, an ad hoc algorithm is developed to segment the structures, using several techniques to evaluate the eigendecomposition of planar patches within the point cloud. In parallel, widely used and advanced deep learning models, including PointNet, PointNet++, MinkUNet34C, and PointTransformerV3, are trained and evaluated for the same task. A comparative analysis of these paradigms reveals some key insights. The analytical algorithm offers easier parameter adjustment and performance comparable to that of the deep learning models, despite the latter's higher computational demands. Nevertheless, the deep learning models stand out in segmentation accuracy, with PointTransformerV3 achieving impressive results, such as a Mean Intersection Over Union (mIoU) of approximately 97%. This study highlights the potential of analytical and deep learning approaches to improve the autonomous navigation of climbing robots in complex truss structures. The findings underscore the trade-offs between computational efficiency and segmentation performance, offering valuable insights for future research and practical applications in autonomous infrastructure maintenance and inspection.
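The analytical branch evaluates the eigendecomposition of planar patches. A common way to do this is to take the eigenvalues of a patch's covariance matrix and use the smallest one to score flatness, as in the hedged sketch below; the scoring rule and threshold are our assumptions, not the paper's ad hoc algorithm.

```python
# Sketch: score how planar a local point-cloud patch is via the eigenvalues of its covariance.
# The specific score and threshold are illustrative; the paper's ad hoc rules differ.
import numpy as np

def planarity_score(points):
    """points: (N, 3) array for a local patch. Returns a score in [0, 1]; values near 1 mean planar."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / max(len(points) - 1, 1)
    eigvals = np.sort(np.linalg.eigvalsh(cov))             # ascending: l0 <= l1 <= l2
    # A flat patch has one near-zero eigenvalue (the normal direction) and two larger ones.
    return 1.0 - eigvals[0] / (eigvals.sum() + 1e-12)

def is_navigable(points, threshold=0.98):
    return planarity_score(points) >= threshold
```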
Segmenting breast ultrasound images remains challenging because of speckle noise, operator dependency, and variations in image quality. This paper presents the UltraSegNet architecture, which addresses these challenges through three key technical innovations: (1) a modified ResNet-50 backbone with sequential 3×3 convolutions that preserves the fine anatomical details needed to locate lesion boundaries; (2) a computationally efficient regional attention mechanism that operates on high-resolution features without a transformer's extra memory cost; and (3) an adaptive feature fusion strategy that adjusts local and global features based on how the image is being used. Extensive evaluation on two distinct datasets demonstrates UltraSegNet's superior performance: on the BUSI dataset it obtains a precision of 0.915, a recall of 0.908, and an F1 score of 0.911, and on the UDAIT dataset it achieves robust performance across the board, with a precision of 0.901 and a recall of 0.894. Importantly, these improvements are achieved at clinically feasible computation times, taking 235 ms per image on standard GPU hardware. Notably, UltraSegNet performs remarkably well on difficult small lesions (less than 10 mm), achieving a detection accuracy of 0.891, a large improvement over traditional methods, which struggle with small-scale features and typically achieve only 0.63-0.71 accuracy. This improvement in small-lesion detection is particularly crucial for early-stage breast cancer identification. These results demonstrate that UltraSegNet can be practically deployed in clinical workflows to improve breast cancer screening accuracy.
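For reference, the precision, recall, and F1 figures quoted above follow the standard definitions, computed here for binary masks. This is a plain reference implementation, not the authors' evaluation code.

```python
# Standard precision / recall / F1 for binary segmentation masks, as quoted above.
# A straightforward reference implementation, not the authors' evaluation code.
import numpy as np

def precision_recall_f1(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f1
```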
基金supported by the Innovation Foundation of National Commercial Aircraft Manufacturing Engineering Technology Research Center(No.COMAC-SFGS-2022-1877)in part by the National Natural Science Foundation of China(No.92048301)。
文摘Cabin cables,as critical components of an aircraft's electrical system,significantly impact the operational efficiency and safety of the aircraft.The existing cable segmentation methods in civil aviation cabins are limited,especially in automation,heavily dependent on large amounts of data and resources,lacking the flexibility to adapt to different scenarios.To address these challenges,this paper introduces a novel image segmentation model,CableSAM,specifically designed for automated segmentation of cabin cables.CableSAM improves segmentation efficiency and accuracy using knowledge distillation and employs a context ensemble strategy.It accurately segments cables in various scenarios with minimal input prompts.Comparative experiments on three cable datasets demonstrate that CableSAM surpasses other advanced cable segmentation methods in performance.
基金financially supported by the Natural Science Foundation of China(42301492)the Open Fund of Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering(2022SDSJ04,2024SDSJ03)+1 种基金the Opening Fund of Key Laboratory of Geological Survey and Evaluation of Ministry of Education(GLAB 2023ZR01,GLAB2024ZR08)the Fundamental Research Funds for the Central Universities.
文摘Automatic segmentation and recognition of content and element information in color geological map are of great significance for researchers to analyze the distribution of mineral resources and predict disaster information.This article focuses on color planar raster geological map(geological maps include planar geological maps,columnar maps,and profiles).While existing deep learning approaches are often used to segment general images,their performance is limited due to complex elements,diverse regional features,and complicated backgrounds for color geological map in the domain of geoscience.To address the issue,a color geological map segmentation model is proposed that combines the Felz clustering algorithm and an improved SE-UNet deep learning network(named GeoMSeg).Firstly,a symmetrical encoder-decoder structure backbone network based on UNet is constructed,and the channel attention mechanism SENet has been incorporated to augment the network’s capacity for feature representation,enabling the model to purposefully extract map information.The SE-UNet network is employed for feature extraction from the geological map and obtain coarse segmentation results.Secondly,the Felz clustering algorithm is used for super pixel pre-segmentation of geological maps.The coarse segmentation results are refined and modified based on the super pixel pre-segmentation results to obtain the final segmentation results.This study applies GeoMSeg to the constructed dataset,and the experimental results show that the algorithm proposed in this paper has superior performance compared to other mainstream map segmentation models,with an accuracy of 91.89%and a MIoU of 71.91%.
基金funded by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China(grant number 22KJD440001)Changzhou Science&Technology Program(grant number CJ20220232).
文摘The Internet of Vehicles (IoV) has become an important direction in the field of intelligent transportation, in which vehicle positioning is a crucial part. SLAM (Simultaneous Localization and Mapping) technology plays a crucial role in vehicle localization and navigation. Traditional Simultaneous Localization and Mapping (SLAM) systems are designed for use in static environments, and they can result in poor performance in terms of accuracy and robustness when used in dynamic environments where objects are in constant movement. To address this issue, a new real-time visual SLAM system called MG-SLAM has been developed. Based on ORB-SLAM2, MG-SLAM incorporates a dynamic target detection process that enables the detection of both known and unknown moving objects. In this process, a separate semantic segmentation thread is required to segment dynamic target instances, and the Mask R-CNN algorithm is applied on the Graphics Processing Unit (GPU) to accelerate segmentation. To reduce computational cost, only key frames are segmented to identify known dynamic objects. Additionally, a multi-view geometry method is adopted to detect unknown moving objects. The results demonstrate that MG-SLAM achieves higher precision, with an improvement from 0.2730 m to 0.0135 m in precision. Moreover, the processing time required by MG-SLAM is significantly reduced compared to other dynamic scene SLAM algorithms, which illustrates its efficacy in locating objects in dynamic scenes.
基金supported by the Shandong Provincial Natural Science Foundation (Nos.ZR2023MF062 and ZR2021MF115)the Introduction and Cultivation Program for Young Innovative Talents of Universities in Shandong (No.2021QCYY003)。
文摘Considering the three-dimensional(3D) U-Net lacks sufficient local feature extraction for image features and lacks attention to the fusion of high-and low-level features, we propose a new model called 3DMAU-Net based on the 3D U-Net architecture for liver region segmentation. Our model replaces the last two layers of the 3D U-Net with a sliding window-based multilayer perceptron(SMLP), enabling better extraction of local image features. We also design a high-and low-level feature fusion dilated convolution block that focuses on local features and better supplements the surrounding information of the target region. This block is embedded in the entire encoding process, ensuring that the overall network is not simply downsampling. Before each feature extraction, the input features are processed by the dilated convolution block. We validate our experiments on the liver tumor segmentation challenge 2017(Lits2017) dataset, and our model achieves a Dice coefficient of 0.95, which is an improvement of 0.015 compared to the 3D U-Net model. Furthermore, we compare our results with other segmentation methods, and our model consistently outperforms them.
基金supported by Gansu Natural Science Foundation Programme(No.24JRRA231)National Natural Science Foundation of China(No.62061023)Gansu Provincial Education,Science and Technology Innovation and Industry(No.2021CYZC-04)。
文摘Brain tumor segmentation is critical in clinical diagnosis and treatment planning.Existing methods for brain tumor segmentation with missing modalities often struggle when dealing with multiple missing modalities,a common scenario in real-world clinical settings.These methods primarily focus on handling a single missing modality at a time,making them insufficiently robust for the additional complexity encountered with incomplete data containing various missing modality combinations.Additionally,most existing methods rely on single models,which may limit their performance and increase the risk of overfitting the training data.This work proposes a novel method called the ensemble adversarial co-training neural network(EACNet)for accurate brain tumor segmentation from multi-modal magnetic resonance imaging(MRI)scans with multiple missing modalities.The proposed method consists of three key modules:the ensemble of pre-trained models,which captures diverse feature representations from the MRI data by employing an ensemble of pre-trained models;adversarial learning,which leverages a competitive training approach involving two models;a generator model,which creates realistic missing data,while sub-networks acting as discriminators learn to distinguish real data from the generated“fake”data.Co-training framework utilizes the information extracted by the multimodal path(trained on complete scans)to guide the learning process in the path handling missing modalities.The model potentially compensates for missing information through co-training interactions by exploiting the relationships between available modalities and the tumor segmentation task.EACNet was evaluated on the BraTS2018 and BraTS2020 challenge datasets and achieved state-of-the-art and competitive performance respectively.Notably,the segmentation results for the whole tumor(WT)dice similarity coefficient(DSC)reached 89.27%,surpassing the performance of existing methods.The analysis suggests that the ensemble approach offers potential benefits,and the adversarial co-training contributes to the increased robustness and accuracy of EACNet for brain tumor segmentation of MRI scans with missing modalities.The experimental results show that EACNet has promising results for the task of brain tumor segmentation of MRI scans with missing modalities and is a better candidate for real-world clinical applications.
文摘Medical image segmentation has become a cornerstone for many healthcare applications,allowing for the automated extraction of critical information from images such as Computed Tomography(CT)scans,Magnetic Resonance Imaging(MRIs),and X-rays.The introduction of U-Net in 2015 has significantly advanced segmentation capabilities,especially for small datasets commonly found in medical imaging.Since then,various modifications to the original U-Net architecture have been proposed to enhance segmentation accuracy and tackle challenges like class imbalance,data scarcity,and multi-modal image processing.This paper provides a detailed review and comparison of several U-Net-based architectures,focusing on their effectiveness in medical image segmentation tasks.We evaluate performance metrics such as Dice Similarity Coefficient(DSC)and Intersection over Union(IoU)across different U-Net variants including HmsU-Net,CrossU-Net,mResU-Net,and others.Our results indicate that architectural enhancements such as transformers,attention mechanisms,and residual connections improve segmentation performance across diverse medical imaging applications,including tumor detection,organ segmentation,and lesion identification.The study also identifies current challenges in the field,including data variability,limited dataset sizes,and issues with class imbalance.Based on these findings,the paper suggests potential future directions for improving the robustness and clinical applicability of U-Net-based models in medical image segmentation.
基金supported by Natural Science Research Project of Tianjin Education Commission(Grant 2020KJ124)National Natural Science Foundation of China(Grant 11601372)National Key Research and Development Program of China(Grant 2022YFF0706003).
文摘The segmentation of retinal vessels and coronary angiographs is essential for diagnosing conditions such as glaucoma,diabetes,hypertension,and coronary artery disease.However,retinal vessels and coronary angiographs are characterized by low contrast and complex structures,posing challenges for vessel segmentation.Moreover,CNN-based approaches are limited in capturing long-range pixel relationships due to their focus on local feature extraction,while ViT-based approaches struggle to capture fine local details,impacting tasks like vessel segmentation that require precise boundary detection.To address these issues,in this paper,we propose a Global–Local Hybrid Modulation Network(GLHM-Net),a dual-encoder architecture that combines the strengths of CNNs and ViTs for vessel segmentation.First,the Hybrid Non-Local Transformer Block(HNLTB)is proposed to efficiently consolidate long-range spatial dependencies into a compact feature representation,providing a global perspective while significantly reducing computational overhead.Second,the Collaborative Attention Fusion Block(CAFB)is proposed to more effectively integrate local and global vessel features at the same hierarchical level during the encoding phase.Finally,the proposed Feature Cross-Modulation Block(FCMB)better complements the local and global features in the decoding stage,effectively enhancing feature learning and minimizing information loss.The experiments conducted on the DRIVE,CHASEDB1,DCA1,and XCAD datasets,achieving AUC values of 0.9811,0.9864,0.9915,and 0.9919,F1 scores of 0.8288,0.8202,0.8040,and 0.8150,and IOU values of 0.7076,0.6952,0.6723,and 0.6878,respectively,demonstrate the strong performance of our proposed network for vessel segmentation.
文摘This paper presents CW-HRNet,a high-resolution,lightweight crack segmentation network designed to address challenges in complex scenes with slender,deformable,and blurred crack structures.The model incorporates two key modules:Constrained Deformable Convolution(CDC),which stabilizes geometric alignment by applying a tanh limiter and learnable scaling factor to the predicted offsets,and the Wavelet Frequency Enhancement Module(WFEM),which decomposes features using Haar wavelets to preserve low-frequency structures while enhancing high-frequency boundaries and textures.Evaluations on the CrackSeg9k benchmark demonstrate CW-HRNet’s superior performance,achieving 82.39%mIoU with only 7.49M parameters and 10.34 GFLOPs,outperforming HrSegNet-B48 by 1.83% in segmentation accuracy with minimal complexity overhead.The model also shows strong cross-dataset generalization,achieving 60.01%mIoU and 66.22%F1 on Asphalt3k without fine-tuning.These results highlight CW-HRNet’s favorable accuracyefficiency trade-off for real-world crack segmentation tasks.
基金supported by the STI2030-Major-Projects(No.2021ZD0200104)the National Natural Science Foundations of China under Grant 61771437.
文摘Neuronal soma segmentation plays a crucial role in neuroscience applications.However,the fine structure,such as boundaries,small-volume neuronal somata and fibers,are commonly present in cell images,which pose a challenge for accurate segmentation.In this paper,we propose a 3D semantic segmentation network for neuronal soma segmentation to address this issue.Using an encoding-decoding structure,we introduce a Multi-Scale feature extraction and Adaptive Weighting fusion module(MSAW)after each encoding block.The MSAW module can not only emphasize the fine structures via an upsampling strategy,but also provide pixel-wise weights to measure the importance of the multi-scale features.Additionally,a dynamic convolution instead of normal convolution is employed to better adapt the network to input data with different distributions.The proposed MSAW-based semantic segmentation network(MSAW-Net)was evaluated on three neuronal soma images from mouse brain and one neuronal soma image from macaque brain,demonstrating the efficiency of the proposed method.It achieved an F1 score of 91.8%on Fezf2-2A-CreER dataset,97.1%on LSL-H2B-GFP dataset,82.8%on Thy1-EGFP-Mline dataset,and 86.9%on macaque dataset,achieving improvements over the 3D U-Net model by 3.1%,3.3%,3.9%,and 2.3%,respectively.
文摘In recent years,video coding has been widely applied in the field of video image processing to remove redundant information and improve data transmission efficiency.However,during the video coding process,irrelevant objects such as background elements are often encoded due to environmental disturbances,resulting in the wastage of computational resources.Existing research on video coding efficiency optimization primarily focuses on optimizing encoding units during intra-frame or inter frame prediction after the generation of coding units,neglecting the optimization of video images before coding unit generation.To address this challenge,This work proposes an image semantic segmentation compression algorithm based on macroblock encoding,called image semantic segmentation compression algorithm based on macroblock encoding(ISSC-ME),which consists of three modules.(1)The semantic label generation module generates interesting object labels using a grid-based approach to reduce redundant coding of consecutive frames.(2)The image segmentation network module generates a semantic segmentation image using U-Net.(3)The macroblock coding module,is a block segmentation-based video encoding and decoding algorithm used to compress images and improve video transmission efficiency.Experimental results show that the proposed image semantic segmentation optimization algorithm can reduce the computational costs,and improve the overall accuracy by 1.00%and the mean intersection over union(IoU)by 1.20%.In addition,the proposed compression algorithm utilizes macroblock fusion,resulting in the image compression rate achieving 80.64%.It has been proven that the proposed algorithm greatly reduces data storage and transmission,and enables fast image compression processing at the millisecond level.
基金supported by funding from the following sources:National Natural Science Foundation of China(U1904119)Research Programs of Henan Science and Technology Department(232102210033,232102210054)+3 种基金Chongqing Natural Science Foundation(CSTB2023NSCQ-MSX0070)Henan Province Key Research and Development Project(231111212000)Aviation Science Foundation(20230001055002)supported by Henan Center for Outstanding Overseas Scientists(GZS2022011).
文摘The key to the success of few-shot semantic segmentation(FSS)depends on the efficient use of limited annotated support set to accurately segment novel classes in the query set.Due to the few samples in the support set,FSS faces challenges such as intra-class differences,background(BG)mismatches between query and support sets,and ambiguous segmentation between the foreground(FG)and BG in the query set.To address these issues,The paper propose a multi-module network called CAMSNet,which includes four modules:the General Information Module(GIM),the Class Activation Map Aggregation(CAMA)module,the Self-Cross Attention(SCA)Block,and the Feature Fusion Module(FFM).In CAMSNet,The GIM employs an improved triplet loss,which concatenates word embedding vectors and support prototypes as anchors,and uses local support features of FG and BG as positive and negative samples to help solve the problem of intra-class differences.Then for the first time,the Class Activation Map(CAM)from the Weakly Supervised Semantic Segmentation(WSSS)is applied to FSS within the CAMA module.This method replaces the traditional use of cosine similarity to locate query information.Subsequently,the SCA Block processes the support and query features aggregated by the CAMA module,significantly enhancing the understanding of input information,leading to more accurate predictions and effectively addressing BG mismatch and ambiguous FG-BG segmentation.Finally,The FFM combines general class information with the enhanced query information to achieve accurate segmentation of the query image.Extensive Experiments on PASCAL and COCO demonstrate that-5i-20ithe CAMSNet yields superior performance and set a state-of-the-art.
文摘Brain tumors present significant challenges in medical diagnosis and treatment,where early detection is crucial for reducing morbidity and mortality rates.This research introduces a novel deep learning model,the Progressive Layered U-Net(PLU-Net),designed to improve brain tumor segmentation accuracy from Magnetic Resonance Imaging(MRI)scans.The PLU-Net extends the standard U-Net architecture by incorporating progressive layering,attention mechanisms,and multi-scale data augmentation.The progressive layering involves a cascaded structure that refines segmentation masks across multiple stages,allowing the model to capture features at different scales and resolutions.Attention gates within the convolutional layers selectively focus on relevant features while suppressing irrelevant ones,enhancing the model's ability to delineate tumor boundaries.Additionally,multi-scale data augmentation techniques increase the diversity of training data and boost the model's generalization capabilities.Evaluated on the BraTS 2021 dataset,the PLU-Net achieved state-of-the-art performance with a dice coefficient of 0.91,specificity of 0.92,sensitivity of 0.89,Hausdorff95 of 2.5,outperforming other modified U-Net architectures in segmentation accuracy.These results underscore the effectiveness of the PLU-Net in improving brain tumor segmentation from MRI scans,supporting clinicians in early diagnosis,treatment planning,and the development of new therapies.
文摘Lower back pain is one of the most common medical problems in the world and it is experienced by a huge percentage of people everywhere.Due to its ability to produce a detailed view of the soft tissues,including the spinal cord,nerves,intervertebral discs,and vertebrae,Magnetic Resonance Imaging is thought to be the most effective method for imaging the spine.The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases.It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues,including muscles,ligaments,and intervertebral discs.U-Net is a powerful deep-learning architecture to handle the challenges of medical image analysis tasks and achieves high segmentation accuracy.This work proposes a modified U-Net architecture namely MU-Net,consisting of the Meijering convolutional layer that incorporates the Meijering filter to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1.Pseudo-colour mask images were generated and used as ground truth for training the model.The work has been carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset publicly available from Mendeley Data.The proposed MU-Net model for the semantic segmentation of the lumbar vertebrae gives better performance with 98.79%of pixel accuracy(PA),98.66%of dice similarity coefficient(DSC),97.36%of Jaccard coefficient,and 92.55%mean Intersection over Union(mean IoU)metrics using the mentioned dataset.
Funding: Supported by the Natural Science Foundation of China (No. 41804112; author: Chengyun Song).
Abstract: Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely on copy-paste mixed images from labeled and unlabeled inputs discards much of the labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. Specifically, SADT trains the Student Network using pseudo-label-based training from Teacher Network 1 together with supervised learning on labeled data, which prevents the loss of scarce labeled information. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve segmentation in low-contrast and local areas. In this procedure, the features extracted by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that the proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
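As a minimal sketch of the bi-directional copy-paste mixing described above, the snippet below exchanges a masked region between a labeled and an unlabeled image in both directions. The mask generation and tensor shapes are illustrative assumptions, and the paper's progressive high-entropy filtering is omitted.

```python
# Hedged sketch of bi-directional copy-paste between labeled and unlabeled images.
import torch

def bidirectional_copy_paste(img_l, img_u, mask):
    """img_l, img_u: (B, C, H, W); mask: (B, 1, H, W) binary region to exchange."""
    mixed_in = img_u * mask + img_l * (1 - mask)    # unlabeled patch pasted onto labeled image
    mixed_out = img_l * mask + img_u * (1 - mask)   # labeled patch pasted onto unlabeled image
    return mixed_in, mixed_out

# Example: exchange a fixed rectangular region (a random mask would be used in practice).
B, C, H, W = 2, 1, 256, 256
mask = torch.zeros(B, 1, H, W)
mask[:, :, 64:192, 64:192] = 1.0
x_in, x_out = bidirectional_copy_paste(torch.rand(B, C, H, W), torch.rand(B, C, H, W), mask)
```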
Abstract: Retinal blood vessel segmentation is crucial for diagnosing ocular and cardiovascular diseases. Although the introduction of U-Net by Olaf Ronneberger in 2015 significantly advanced this field, issues such as limited training data, imbalanced data distribution, and inadequate feature extraction persist, hindering both segmentation performance and model generalization. To address these critical issues, DEFFA-Unet is proposed, featuring an additional encoder that processes domain-invariant pre-processed inputs, thereby enabling richer feature encoding and better model generalization. A feature filtering fusion module is developed to ensure precise feature filtering and robust hybrid feature fusion. In response to the task-specific need for higher precision, where false positives are very costly, traditional skip connections are replaced with an attention-guided feature reconstructing fusion module. Additionally, innovative data augmentation and balancing methods are proposed to counter data scarcity and distribution imbalance, further boosting the robustness and generalization of the model. With a comprehensive suite of evaluation metrics, extensive validations on four benchmark datasets (DRIVE, CHASEDB1, STARE, and HRF) and an SLO dataset (IOSTAR) demonstrate the proposed method's superiority over both baseline and state-of-the-art models. In particular, the proposed method significantly outperforms the compared methods in cross-validation model generalization.
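The abstract does not specify how the two encoder streams are fused; the following is only a hedged, generic sketch of gated fusion between a raw-image encoder and a pre-processed-input encoder. The module name, layers, and gating rule are assumptions, not the DEFFA-Unet design.

```python
# Hedged, generic sketch of fusing two encoder streams with a learned channel gate.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_main, f_aux):
        # f_main: features from the raw-image encoder; f_aux: from the pre-processed-input encoder
        cat = torch.cat([f_main, f_aux], dim=1)
        g = self.gate(cat)                          # per-channel weights in [0, 1]
        return self.proj(cat) * g + f_main * (1 - g)
```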
Abstract: The use of AI technologies in remote sensing (RS) tasks has attracted broad attention in both professional and academic domains. This integration promises more accessible interfaces and tools that allow people with little or no experience to interact intuitively with RS data of multiple formats. However, the use of AI and AI agents to automate RS-related tasks is still in its infancy, with some frameworks and interfaces built on top of well-known vision language models (VLMs) such as GPT-4, the segment anything model (SAM), and grounding DINO. These tools show promise and outline the potential and limitations of existing solutions built on such models. In this work, state-of-the-art AI foundation models (FMs) are reviewed and used in a multi-modal manner to ingest RS imagery and perform zero-shot object detection from natural language. The natural-language input defines the classes or labels the model should look for; both inputs are then fed to the pipeline. The pipeline presented in this work compensates for the shortcomings of general-knowledge FMs by stacking pre-processing and post-processing applications on top of them; these include tiling, which produces uniform patches of the original image for faster detection, and outlier rejection of redundant bounding boxes using statistical and machine learning methods. The pipeline was tested with UAV, aerial, and satellite images taken over multiple areas. By utilizing the pipeline and techniques proposed in this work, semantic segmentation accuracy improved from the original 64% to approximately 80%-99%. GitHub Repository: MohanadDiab/LangRS.
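As a hedged sketch of the tiling and redundant-box rejection steps described above, the snippet below slices an image into overlapping windows and suppresses duplicate detections with torchvision's NMS. The tile size, overlap, and IoU threshold are illustrative assumptions, not the repository's actual parameters.

```python
# Hedged sketch: tiling plus non-maximum suppression of redundant boxes.
import torch
from torchvision.ops import nms

def tile_image(image, tile=1024, overlap=128):
    """image: (C, H, W) tensor -> list of (tile_tensor, (y_offset, x_offset))."""
    _, H, W = image.shape
    step = tile - overlap
    tiles = []
    for y in range(0, max(H - overlap, 1), step):
        for x in range(0, max(W - overlap, 1), step):
            tiles.append((image[:, y:y + tile, x:x + tile], (y, x)))
    return tiles

def merge_detections(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) in global image coordinates; scores: (N,). Drops redundant boxes."""
    keep = nms(boxes, scores, iou_thresh)
    return boxes[keep], scores[keep]
```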
Funding: Supported by the National Key Research and Development Program of China (2024YFB3311703), the National Natural Science Foundation of China (61932003), and the Beijing Science and Technology Plan Project (Z221100006322003).
Abstract: In this paper, we introduce an innovative method for computer-aided design (CAD) segmentation by concatenating meshes and CAD models. Many previous CAD segmentation methods have achieved impressive performance using single representations, such as meshes, CAD, and point clouds. However, existing methods cannot effectively combine different three-dimensional model types for the direct conversion, alignment, and integrity maintenance of geometric and topological information. Hence, we propose an integration approach that combines the geometric accuracy of CAD data with the flexibility of mesh representations, and introduce a unique hybrid representation that combines CAD and mesh models to enhance segmentation accuracy. To combine these two model types, our hybrid system utilizes advanced neural network techniques to convert CAD models into mesh models. For complex CAD models, model segmentation is crucial for model retrieval and reuse; in partial retrieval, it aims to segment a complex CAD model into several simple components. The first component of our hybrid system involves advanced mesh-labeling algorithms that harness the digitization of CAD properties to mesh models. The second component integrates labelled face features for CAD segmentation by leveraging the abundant multi-semantic information embedded in CAD models. This combination of mesh and CAD not only refines the accuracy of boundary delineation but also provides a comprehensive understanding of the underlying object semantics. This study uses the Fusion 360 Gallery dataset. Experimental results indicate that our hybrid method can segment these models with higher accuracy than other methods that use single representations.
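The mesh-labeling component above transfers CAD face properties onto mesh elements; as a hedged, generic stand-in, the snippet below propagates face labels from points sampled on CAD faces to mesh triangles via nearest-neighbour lookup. The function name, shapes, and the 1-NN rule are assumptions, not the paper's algorithm.

```python
# Hedged sketch: nearest-neighbour transfer of CAD face labels to mesh triangles.
import numpy as np
from scipy.spatial import cKDTree

def transfer_labels(cad_points, cad_labels, mesh_triangles):
    """cad_points: (N, 3) samples on CAD faces; cad_labels: (N,) face IDs;
    mesh_triangles: (M, 3, 3) triangle vertex coordinates."""
    centroids = mesh_triangles.mean(axis=1)          # (M, 3), one query point per triangle
    _, idx = cKDTree(cad_points).query(centroids)    # nearest CAD sample per triangle
    return cad_labels[idx]                           # per-triangle semantic label
```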
Abstract: Mountain front faults form the boundary between mountains and adjacent plains. These faults can propagate toward the plains and escalate the seismic hazard for nearby cities. The North Tehran Fault (NTF) is a mountain front fault forming the boundary between the Central Alborz and the Tehran and Karaj plains. Structural and morphotectonic data from interpreted aerial photographs, satellite images, and airborne geomagnetic data, as well as field surveying, have been used for detailed segmentation and evolutionary analysis of the North Tehran Fault. This resulted in the identification of four fault segments, from east to west: the Niknamdeh, Darband, Darakeh-Garmdarreh, and Karaj segments. The active kinematics of these segments includes both thrusting and left-lateral components, but the dominant component differs among the segments. The Niknamdeh segment is connected to the Mosha Fault by a hard linkage, while its connection with the Darband segment is a widespread deformation zone. The connection zone between the Darband and Darakeh-Garmdarreh segments has the highest density of minor faults along the North Tehran Fault. The boundary of the Darakeh-Garmdarreh and Karaj segments is controlled by the F-3 transverse fault, which has offset the NTF by about 3 km right-laterally. The NTF was inverted from a normal fault to a dextral oblique fault in the Miocene, and its kinematics changed from dextral to sinistral in the Pliocene-Quaternary. Further regional oblique convergence resulted in minor fault reactivation (such as relay-ramp breaching faults), propagation of several footwall branches and hangingwall bypasses, geometrical change of alluvial fans, and southward transfer of the deformation front to the Tehran and Karaj plains. The findings of this paper are also applicable to other active obliquely converging mountain fronts, inverted mountain front faults, and the transfer of deformation from these structures to the foreland basin.
Funding: Funded by the Spanish Ministry of Science, Innovation and Universities as part of project PID2020-116418RB-I00, funded by MCIN/AEI/10.13039/501100011033.
Abstract: Reticular structures are the basis of major infrastructure, including bridges, electrical pylons, and airports. However, inspecting and maintaining these structures is both expensive and hazardous, traditionally requiring human involvement. While some research has been conducted in this field, most efforts focus on fault identification from images or on the design of robotic platforms, often neglecting the autonomous navigation of robots through the structure. This study addresses that limitation by proposing methods to detect navigable surfaces in truss structures, thereby enhancing the ability of climbing robots to navigate these environments autonomously. The paper proposes multiple approaches for binary segmentation of navigable surfaces versus background in 3D point clouds captured from metallic trusses. The approaches fall into two paradigms: analytical algorithms and deep learning methods. Within the analytical paradigm, an ad hoc algorithm is developed for segmenting the structures, leveraging different techniques to evaluate the eigendecomposition of planar patches within the point cloud. In parallel, widely used and advanced deep learning models, including PointNet, PointNet++, MinkUNet34C, and PointTransformerV3, are trained and evaluated on the same task. A comparative analysis of these paradigms reveals some key insights. The analytical algorithm offers easier parameter adjustment and performance comparable to that of the deep learning models, despite the latter's higher computational demands. Nevertheless, the deep learning models stand out in segmentation accuracy, with PointTransformerV3 achieving impressive results, such as a Mean Intersection over Union (mIoU) of approximately 97%. This study highlights the potential of analytical and deep learning approaches to improve the autonomous navigation of climbing robots in complex truss structures. The findings underscore the trade-offs between computational efficiency and segmentation performance, offering valuable insights for future research and practical applications in autonomous infrastructure maintenance and inspection.
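As a hedged sketch of the kind of per-patch eigendecomposition analysis the analytical algorithm relies on, the snippet below scores local planarity from covariance eigenvalues of each point's neighbourhood. The neighbourhood size and planarity threshold are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: local planarity from covariance eigendecomposition of point neighbourhoods.
import numpy as np
from scipy.spatial import cKDTree

def planarity_mask(points, k=30, threshold=0.6):
    """points: (N, 3) point cloud. Returns a boolean mask of locally planar points."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                         # k nearest neighbours per point
    mask = np.zeros(len(points), dtype=bool)
    for i, nbrs in enumerate(idx):
        patch = points[nbrs] - points[nbrs].mean(axis=0)
        eigvals = np.linalg.eigvalsh(patch.T @ patch / k)    # ascending: l0 <= l1 <= l2
        planarity = (eigvals[1] - eigvals[0]) / max(eigvals[2], 1e-12)
        mask[i] = planarity > threshold
    return mask
```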
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R435), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Segmenting a breast ultrasound image is still challenging due to the presence of speckle noise, operator dependency, and variation in image quality. This paper presents the UltraSegNet architecture, which addresses these challenges through three key technical innovations: (1) a modified ResNet-50 backbone with sequential 3×3 convolutions that preserves the fine anatomical details needed for delineating lesion boundaries; (2) a computationally efficient regional attention mechanism that operates on high-resolution features without a transformer's memory overhead; and (3) an adaptive feature fusion strategy that adapts local and global features to the input image. Extensive evaluation on two distinct datasets demonstrates UltraSegNet's superior performance: on the BUSI dataset, it obtains a precision of 0.915, a recall of 0.908, and an F1 score of 0.911; on the UDAIT dataset, it achieves robust performance across the board, with a precision of 0.901 and a recall of 0.894. Importantly, these improvements are achieved at clinically feasible computation times, taking 235 ms per image on standard GPU hardware. Notably, UltraSegNet performs remarkably well on difficult small lesions (less than 10 mm), achieving a detection accuracy of 0.891, a substantial improvement over traditional methods, which struggle with small-scale features and reach only 0.63-0.71 accuracy. This improvement in small lesion detection is particularly crucial for early-stage breast cancer identification. These results demonstrate that UltraSegNet can be practically deployed in clinical workflows to improve breast cancer screening accuracy.
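One common way to obtain the memory savings described above is to keep queries at full resolution while pooling keys and values into a small set of regions, avoiding the quadratic cost of full self-attention. The sketch below illustrates that idea; the region count, layer shapes, and residual connection are assumptions, not UltraSegNet's exact design.

```python
# Hedged sketch of a memory-light "regional attention" with pooled keys/values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionalAttention(nn.Module):
    def __init__(self, channels, regions=8):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)
        self.kv = nn.Conv2d(channels, 2 * channels, kernel_size=1)
        self.regions = regions
        self.scale = channels ** -0.5

    def forward(self, x):
        B, C, H, W = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)                 # (B, HW, C) full-resolution queries
        pooled = F.adaptive_avg_pool2d(x, self.regions)          # (B, C, r, r) coarse regions
        k, v = self.kv(pooled).flatten(2).chunk(2, dim=1)        # each (B, C, r*r)
        attn = torch.softmax(q @ k * self.scale, dim=-1)         # (B, HW, r*r) attention over regions
        out = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(B, C, H, W)
        return out + x                                           # residual connection
```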