Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data, and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a Purity of 0.8158 and an Accuracy of 0.6860, highlighting its outstanding robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
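As an illustration of how must-link and cannot-link pairs can steer a fuzzy partition, the sketch below scores a soft membership matrix against such constraints: a generic penalty of the kind used in semi-supervised clustering, not SS-MPFC's actual objective. The function name and toy data are invented for the example.

```python
import numpy as np

def pairwise_constraint_penalty(U, must_link, cannot_link):
    """Score how well a fuzzy membership matrix U (n_samples x n_clusters)
    respects pairwise constraints. For a pair (i, j), sum_k U[i,k]*U[j,k]
    is the soft probability that i and j land in the same cluster."""
    def same(i, j):
        return float(U[i] @ U[j])
    ml = sum(1.0 - same(i, j) for i, j in must_link)   # penalize separation
    cl = sum(same(i, j) for i, j in cannot_link)       # penalize co-assignment
    return ml + cl

U = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9]])
# points 0 and 1 should share a cluster; points 0 and 2 should not
print(pairwise_constraint_penalty(U, [(0, 1)], [(0, 2)]))  # smaller is better
```

A term of this shape can be added to a clustering objective so that the membership update is pulled toward satisfying the supervision.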
Accurately estimating depth from underwater monocular images is essential for the target tracking task of unmanned underwater vehicles. This work proposes a method based on the Lpg-Lap Unet architecture. First, the Unet architecture integrates Laplacian pyramid depth residuals and Sobel operators to improve the boundary details in depth images, which may suffer from the feature loss caused by upsampling and the blurriness of underwater images. Multiscale local planar guidance layers then fully exploit the intermediate depth features, and a comprehensive loss function ensures robustness and accuracy. Experimental results on benchmarks demonstrate the effectiveness of Lpg-Lap Unet and its superior performance over state-of-the-art models. An underwater target tracking system is then designed to further validate its real-time capabilities in the AirSim simulation platform.
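The Laplacian pyramid residuals mentioned above store exactly the detail that downsampling discards, which is why reinjecting them can sharpen boundaries lost during decoder upsampling. A minimal sketch with a box-filter downsampler; this is not the paper's implementation, and the helper names are illustrative:

```python
import numpy as np

def downsample(img):
    """2x decimation by averaging 2x2 blocks."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """2x nearest-neighbour expansion."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=2):
    """Each level stores the residual detail lost by one downsampling step."""
    pyr = []
    for _ in range(levels):
        low = downsample(img)
        pyr.append(img - upsample(low))  # high-frequency residual
        img = low
    pyr.append(img)                      # coarsest approximation
    return pyr

def reconstruct(pyr):
    """Adding residuals back while upsampling restores the input exactly."""
    img = pyr[-1]
    for residual in reversed(pyr[:-1]):
        img = upsample(img) + residual
    return img

img = np.arange(16.0).reshape(4, 4)
pyr = laplacian_pyramid(img, levels=2)
print(np.allclose(reconstruct(pyr), img))  # True: residuals restore the input
```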
Multi-view clustering is a critical research area in computer science aimed at effectively extracting meaningful patterns from complex, high-dimensional data that single-view methods cannot capture. Traditional fuzzy clustering techniques, such as Fuzzy C-Means (FCM), face significant challenges in handling uncertainty and the dependencies between different views. To overcome these limitations, we introduce a new multi-view fuzzy clustering approach that integrates picture fuzzy sets with a dual-anchor graph method for multi-view data, aiming to enhance clustering accuracy and robustness, termed Multi-view Picture Fuzzy Clustering (MPFC). In particular, the picture fuzzy set theory extends the capability to represent uncertainty by modeling three membership levels: membership degrees, neutral degrees, and refusal degrees. This allows for a more flexible representation of uncertain and conflicting data than traditional fuzzy models. Meanwhile, dual-anchor graphs exploit the similarity relationships between data points and integrate information across views. This combination improves stability, scalability, and robustness when handling noisy and heterogeneous data. Experimental results on several benchmark datasets demonstrate significant improvements in clustering accuracy and efficiency, outperforming traditional methods. Specifically, the MPFC algorithm demonstrates outstanding clustering performance on a variety of datasets, attaining a Purity (PUR) score of 0.6440 and an Accuracy (ACC) score of 0.6213 on the 3Sources dataset, underscoring its robustness and efficiency. The proposed approach significantly contributes to fields such as pattern recognition, multi-view relational data analysis, and large-scale clustering problems. Future work will focus on extending the method to semi-supervised multi-view clustering, aiming to enhance adaptability, scalability, and performance in real-world applications.
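For readers unfamiliar with picture fuzzy sets: in Cuong's standard formulation, an element carries positive, neutral, and negative degrees whose sum is at most one, with the remainder interpreted as the refusal degree (the abstract's naming of the three levels differs slightly). A one-function sketch of this bookkeeping:

```python
def refusal_degree(mu, eta, nu):
    """Picture fuzzy element: positive membership mu, neutral degree eta,
    negative degree nu, constrained by mu + eta + nu <= 1. The leftover
    mass is the refusal degree (the judgment withheld about the element)."""
    assert 0.0 <= mu and 0.0 <= eta and 0.0 <= nu and mu + eta + nu <= 1.0
    return 1.0 - (mu + eta + nu)

print(round(refusal_degree(0.5, 0.2, 0.2), 10))  # 0.1
```

Compared with a single membership value, the extra degrees let a clustering model distinguish "neutral about this cluster" from "evidence against it" and from "no opinion at all".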
The accurate prediction of drug absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties represents a crucial step in early drug development for reducing failure risk. Current deep learning approaches face challenges with data sparsity and information loss due to single-molecule representation limitations and isolated predictive tasks. This research proposes molecular properties prediction with parallel-view and collaborative learning (MolP-PC), a multi-view fusion and multi-task deep learning framework that integrates 1D molecular fingerprints (MFs), 2D molecular graphs, and 3D geometric representations, incorporating an attention-gated fusion mechanism and a multi-task adaptive learning strategy for precise ADMET property predictions. Experimental results demonstrate that MolP-PC achieves optimal performance in 27 of 54 tasks, with its multi-task learning (MTL) mechanism significantly enhancing predictive performance on small-scale datasets and surpassing single-task models in 41 of 54 tasks. Additional ablation studies and interpretability analyses confirm the significance of multi-view fusion in capturing multi-dimensional molecular information and enhancing model generalization. A case study examining the anticancer compound Oroxylin A demonstrates MolP-PC's effective generalization in predicting key pharmacokinetic parameters such as half-life (T0.5) and clearance (CL), indicating its practical utility in drug modeling. However, the model exhibits a tendency to underestimate volume of distribution (VD), indicating potential for improvement in analyzing compounds with high tissue distribution. This study presents an efficient and interpretable approach for ADMET property prediction, establishing a novel framework for molecular optimization and risk assessment in drug development.
Phenotypic prediction is a promising strategy for accelerating plant breeding. Data from multiple sources (called multi-view data) can provide complementary information to characterize a biological object from various aspects. By integrating multi-view information into phenotypic prediction, a multi-view best linear unbiased prediction (MVBLUP) method is proposed in this paper. To measure the importance of multiple data views, the differential evolution algorithm with an early stopping mechanism is used, by which we obtain a multi-view kinship matrix and then incorporate it into the BLUP model for phenotypic prediction. To further illustrate the characteristics of MVBLUP, we perform empirical experiments on four multi-view datasets in different crops. Compared to the single-view method, the prediction accuracy of the MVBLUP method improved by 0.038-0.201 on average. The results demonstrate that MVBLUP is an effective integrative prediction method for multi-view data.
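To make the kinship-matrix idea concrete, here is a deliberately simplified single-view BLUP sketch: an intercept-only mixed model in which predicted genetic values are shrunken toward the mean in proportion to the variance ratio. MVBLUP's view weighting via differential evolution is not reproduced; all names and toy values are illustrative.

```python
import numpy as np

def kinship_blup(y, K, lam):
    """Minimal kinship-based BLUP sketch.
    Model: y = mu + u + e, with u ~ N(0, K * sigma_u^2) and
    lam = sigma_e^2 / sigma_u^2. For brevity the intercept mu is the
    plain mean; u_hat solves the ridge-type system K (K + lam I)^-1 (y - mu)."""
    n = len(y)
    mu = y.mean()
    V = K + lam * np.eye(n)
    u_hat = K @ np.linalg.solve(V, y - mu)
    return mu + u_hat                   # predicted genetic values

K = np.eye(4) * 0.9 + 0.1               # toy kinship: mildly related individuals
y = np.array([1.0, 1.2, 0.8, 1.5])      # observed phenotypes
print(kinship_blup(y, K, lam=1.0))      # predictions shrunken toward the mean
```

A multi-view variant would replace `K` with a weighted combination of per-view kinship matrices before solving the same system.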
The morphological description of wear particles in lubricating oil is crucial for wear state monitoring and fault diagnosis in aero-engines. Accurately and comprehensively acquiring three-dimensional (3D) morphological data of these particles has become a key focus in wear debris analysis. Herein, we develop a novel multi-view polarization-sensitive optical coherence tomography (PS-OCT) method to achieve accurate 3D morphology detection and reconstruction of aero-engine lubricant wear particles, effectively resolving occlusion-induced information loss while enabling material-specific characterization. The particle morphology is captured by multi-view imaging, followed by filtering, sharpening, and contour recognition. The method integrates advanced registration algorithms with Poisson reconstruction to generate high-precision 3D models. This approach not only provides accurate 3D morphological reconstruction but also mitigates information loss caused by particle occlusion, ensuring model completeness. Furthermore, by collecting the polarization characteristics of typical metals and their oxides in aero-engine lubricants, this work comprehensively characterizes and comparatively analyzes particle polarization properties using Stokes vectors, polarization uniformity, and cumulative phase retardation, and obtains a three-dimensional model containing polarization information. Ultimately, the proposed method enables multidimensional information acquisition for the reliable identification of abrasive particle types.
Traditional sheep identification is based on ear tags. However, the application of ear tags not only causes stress to the animals but also leads to loss of ear tags, which affects the correct recognition of sheep identity. In contrast, the acquisition of sheep face images offers the advantages of being non-invasive and stress-free for the animals. Nevertheless, the extant convolutional neural network-based sheep face identification model is prone to the issue of inadequate refinement, which renders its implementation on farms challenging. To address this issue, this study presented a novel sheep face recognition model that employs advanced feature fusion techniques and precise image segmentation strategies. The images were preprocessed and accurately segmented using deep learning techniques, with a dataset constructed containing sheep face images from multiple viewpoints (left, front, and right faces). In particular, the model employs a segmentation algorithm to delineate the sheep face region accurately, utilizes the Improved Convolutional Block Attention Module (I-CBAM) to emphasize the salient features of the sheep face, and achieves multi-scale fusion of the features through a Feature Pyramid Network (FPN). This process guarantees that the features captured from disparate viewpoints can be efficiently integrated to enhance recognition accuracy. Furthermore, the model guarantees the precise delineation of sheep facial contours by streamlining the image segmentation procedure, thereby establishing a robust basis for the precise identification of sheep identity. The findings demonstrate that the recognition accuracy of the Sheep Face Mask Region-based Convolutional Neural Network (SFMask RCNN) model has been enhanced by 9.64% to 98.65% in comparison to the original model. The method offers a novel technological approach to the management of animal identity in the context of sheep husbandry.
In the dynamic scenes of autonomous vehicles, the depth estimation of monocular cameras often faces the problem of inaccurate edge depth estimation. To solve this problem, we propose an unsupervised monocular depth estimation model based on edge enhancement, which is specifically aimed at the depth perception challenge in dynamic scenes. The model consists of two core networks: a depth prediction network and a motion estimation network, both of which adopt an encoder-decoder architecture. The depth prediction network is based on the U-Net structure of ResNet18, which is responsible for generating the depth map of the scene. The motion estimation network is based on the U-Net structure of FlowNet, focusing on the motion estimation of dynamic targets. In the decoding stage of the motion estimation network, we innovatively introduce an edge-enhanced decoder, which integrates a convolutional block attention module (CBAM) in the decoding process to enhance the recognition ability of the edge features of moving objects. In addition, we also designed a strip convolution module to improve the model's capture efficiency for discrete moving targets. To further improve the performance of the model, we propose a novel edge regularization method based on the Laplace operator, which effectively accelerates the convergence process of the model. Experimental results on the KITTI and Cityscapes datasets show that, compared with current advanced dynamic unsupervised monocular models, the proposed model achieves a significant improvement in depth estimation accuracy and convergence speed. Specifically, the root mean square error (RMSE) is reduced by 4.8% compared with the DepthMotion algorithm, while the training convergence speed is increased by 36%, which shows the superior performance of the model in the depth estimation task in dynamic scenes.
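The Laplace operator referenced above measures local curvature, so penalizing it smooths a depth map, while weights derived from the image can protect genuine edges. The exact loss used in the paper is not specified here; the following is a generic illustrative form with invented names:

```python
import numpy as np

def laplacian(img):
    """5-point discrete Laplacian with replicated borders."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def edge_regularization_loss(depth, image):
    """Illustrative edge-aware term: penalize depth curvature, down-weighted
    where the image itself shows strong second-order structure (likely edges)."""
    w = np.exp(-np.abs(laplacian(image)))   # near 0 at strong image edges
    return float(np.mean(w * np.abs(laplacian(depth))))

depth = np.ones((4, 4))                     # perfectly flat depth
image = np.zeros((4, 4))
print(edge_regularization_loss(depth, image))  # 0.0
```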
Robot-assisted surgery has become an indispensable component in modern neurosurgical procedures. However, existing registration methods for neurosurgical robots often rely on high-end hardware and involve prolonged or unstable registration times, limiting their applicability in dynamic and time-sensitive intraoperative settings. This paper proposes a novel fully automatic monocular-based registration and real-time tracking method. First, dedicated fiducials are designed, and an automatic preoperative and intraoperative detection method for these fiducials is introduced. Second, a geometric representation of the fiducials is constructed based on a 2D KD-Tree. Through a two-stage optimization process, the depth of the 2D fiducials is estimated, and 2D-3D correspondences are established to achieve monocular registration. This approach enables fully automatic intraoperative registration using only a single optical camera. Finally, a six-degree-of-freedom visual servo control strategy inspired by the mass-spring-damper system is proposed. By integrating artificial potential fields and admittance control, the strategy ensures real-time responsiveness and stable tracking. Experimental results demonstrate that the proposed method achieves a registration time of 0.23 s per instance with an average error of 0.58 mm. Additionally, the motion performance of the control strategy has been validated. Preliminary experiments verify the effectiveness of MonoTracker in dynamic tracking scenarios. This method holds promise for enhancing the adaptability of neurosurgical robots and offers significant clinical application potential.
The autonomous landing guidance of fixed-wing aircraft in unknown structured scenes presents a substantial technological challenge, particularly regarding the effectiveness of solutions for monocular visual relative pose estimation. This study proposes a novel airborne monocular visual estimation method based on structured scene features to address this challenge. First, a multitask neural network model is established for segmentation, depth estimation, and slope estimation on monocular images, and a comprehensive three-dimensional information metric for monocular images is designed, encompassing length, span, flatness, and slope information. Subsequently, structured edge features are leveraged to filter candidate landing regions adaptively. By leveraging the three-dimensional information metric, the optimal landing region is accurately and efficiently identified. Finally, sparse two-dimensional key points are used to parameterize the optimal landing region for the first time, and a high-precision relative pose estimation is achieved. Additional measurement information is introduced to provide the autonomous landing guidance information between the aircraft and the optimal landing region. Experimental results obtained from both synthetic and real data demonstrate the effectiveness of the proposed method in monocular pose estimation for autonomous aircraft landing guidance in unknown structured scenes.
Existing multi-view deep subspace clustering methods aim to learn a unified representation from multi-view data, yet the learned representation struggles to preserve the underlying structure hidden in the original samples, especially the high-order neighbor relationships between samples. To overcome these challenges, this paper proposes a novel multi-order neighborhood fusion based multi-view deep subspace clustering model. We creatively integrate the multi-order proximity graph structures of different views into the self-expressive layer by a multi-order neighborhood fusion module. By this design, the multi-order Laplacian matrix supervises the learning of the view-consistent self-representation affinity matrix; then, we can obtain an optimal global affinity matrix where each connected node belongs to one cluster. In addition, a discriminative constraint between views is designed to further improve the clustering performance. A range of experiments on six public datasets demonstrates that the method performs better than other advanced multi-view clustering methods. The code is available at https://github.com/songzuolong/MNF-MDSC (accessed on 25 December 2024).
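Multi-order proximity can be made concrete with powers of a row-normalized adjacency matrix: the k-th power encodes k-hop neighbor relations between samples. A minimal sketch, not the paper's fusion module; the function name and toy graph are illustrative:

```python
import numpy as np

def multi_order_proximity(A, orders=3):
    """Return the first `orders` powers of the row-normalized adjacency
    matrix. The k-th power captures k-hop (higher-order) neighbour
    relations, which first-order affinities alone miss."""
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    P, out = np.eye(len(A)), []
    for _ in range(orders):
        P = P @ A
        out.append(P)
    return out

# a simple path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P1, P2, P3 = multi_order_proximity(A)
print(P2[0])  # two-hop reachability from node 0
```

A fusion module would combine such matrices (e.g., a weighted sum) before using the result to supervise the affinity matrix.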
The increasing prevalence of multi-view data has made multi-view clustering a crucial technique for discovering latent structures from heterogeneous representations. However, traditional fuzzy clustering algorithms show limitations with the inherent uncertainty and imprecision of such data, as they rely on a single-dimensional membership value. To overcome these limitations, we propose an auto-weighted multi-view neutrosophic fuzzy clustering (AW-MVNFC) algorithm. Our method leverages the neutrosophic framework, an extension of fuzzy sets, to explicitly model imprecision and ambiguity through three membership degrees. The core novelty of AW-MVNFC lies in a hierarchical weighting strategy that adaptively learns the contributions of both individual data views and the importance of each feature within a view. Through a unified objective function, AW-MVNFC jointly optimizes the neutrosophic membership assignments, cluster centers, and the distributions of view and feature weights. Comprehensive experiments conducted on synthetic and real-world datasets demonstrate that our algorithm achieves more accurate and stable clustering than existing methods, demonstrating its effectiveness in handling the complexities of multi-view data.
Depth maps play a crucial role in various practical applications such as computer vision, augmented reality, and autonomous driving. How to obtain clear and accurate depth information in video depth estimation is a significant challenge faced in the field of computer vision. However, existing monocular video depth estimation models tend to produce blurred or inaccurate depth information in regions with object edges and low texture. To address this issue, we propose a monocular depth estimation model architecture guided by semantic segmentation masks, which introduces semantic information into the model to correct the ambiguous depth regions. We have evaluated the proposed method, and experimental results show that our method improves the accuracy of edge depth, demonstrating the effectiveness of our approach.
Real-time and accurate drogue pose measurement during docking is basic and critical for Autonomous Aerial Refueling (AAR). Vision measurement is the most practicable technique, but its measurement accuracy and robustness are easily affected by the limited computing power of airborne equipment, complex aerial scenes, and partial occlusion. To address the above challenges, we propose a novel drogue keypoint detection and pose measurement algorithm based on monocular vision, and realize real-time processing on airborne embedded devices. First, a lightweight network is designed with structural re-parameterization to reduce computational cost and improve inference speed, and a sub-pixel level keypoint prediction head and loss functions are adopted to improve keypoint detection accuracy. Second, a closed-form solution of the drogue pose is computed based on double spatial circles, followed by a nonlinear refinement based on Levenberg-Marquardt optimization. Both virtual simulation and physical simulation experiments have been used to test the proposed method. In the virtual simulation, the mean pixel error of the proposed method is 0.787 pixels, which is significantly superior to that of other methods. In the physical simulation, the mean relative measurement error is 0.788%, and the mean processing time is 13.65 ms on embedded devices.
Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily due to the elimination of ground-truth depth dependence. However, the prevailing architectures in this domain suffer from inherent limitations: existing pose network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios due to dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically-informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
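A common way to parameterize SE(3) compactly, as such pose branches typically do, is to regress a 6-vector twist and map it to a rigid transform through the exponential map. Whether this matches the paper's exact parameterization is not stated; the sketch below is the textbook construction via Rodrigues' formula.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Exponential map from a twist xi = (v, w) in R^6 to a 4x4 SE(3) matrix."""
    v, w = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = hat(w)
    if th < 1e-8:                       # near-zero rotation: first-order limit
        R, V = np.eye(3), np.eye(3)
    else:                               # Rodrigues' formula and its V-matrix
        R = np.eye(3) + np.sin(th) / th * W \
            + (1 - np.cos(th)) / th**2 * W @ W
        V = np.eye(3) + (1 - np.cos(th)) / th**2 * W \
            + (th - np.sin(th)) / th**3 * W @ W
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

# unit x-axis linear component plus a 90-degree yaw
T = se3_exp(np.array([1.0, 0.0, 0.0, 0.0, 0.0, np.pi / 2]))
print(np.round(T, 3))
```

Regressing six numbers rather than a full matrix keeps every network output a valid rigid motion and avoids orthogonalization steps.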
Drug repurposing offers a promising alternative to traditional drug development and significantly reduces costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and integrating multi-omics data. We aim to inspire further innovations in multi-source data integration and to support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
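Matrix factorization for association prediction, one ingredient the abstract names, can be sketched as fitting low-rank factors to the observed drug-disease matrix and scoring hidden cells with the learned factors. This toy gradient-descent version is not AMVL itself; the function name, sizes, rates, and data are all illustrative.

```python
import numpy as np

def mf_complete(R, mask, rank=2, lr=0.05, steps=3000, reg=0.01, seed=0):
    """Plain matrix-factorization completion: fit observed drug-disease
    associations R (where mask == 1) by U @ V.T, then score every cell,
    including the unobserved ones, with the learned factors."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], rank))
    V = 0.1 * rng.standard_normal((R.shape[1], rank))
    for _ in range(steps):
        E = mask * (U @ V.T - R)         # error on observed entries only
        U, V = U - lr * (E @ V + reg * U), V - lr * (E.T @ U + reg * V)
    return U @ V.T

R = np.array([[1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
mask = np.ones_like(R)
mask[0, 2] = 0                           # hide one known association
print(mf_complete(R, mask)[0, 2])        # recovered score for the hidden cell
```

Because drug 0 and drug 1 share their observed profile, the learned factors transfer drug 1's known association to the hidden cell.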
Drone swarm systems, equipped with photoelectric imaging and intelligent target perception, are essential for reconnaissance and strike missions in complex and high-risk environments. They excel in information sharing, anti-jamming capabilities, and combat performance, making them critical for future warfare. However, varied perspectives in collaborative combat scenarios pose challenges to object detection, hindering traditional detection algorithms and reducing accuracy. Limited angle-prior data and sparse samples further complicate detection. This paper presents the Multi-View Collaborative Detection System, which tackles the challenges of multi-view object detection in collaborative combat scenarios. The system is designed to enhance multi-view image generation and detection algorithms, thereby improving the accuracy and efficiency of object detection across varying perspectives. First, an observation model for three-dimensional targets through line-of-sight angle transformation is constructed, and a multi-view image generation algorithm based on the Pix2Pix network is designed. For object detection, YOLOX is utilized, and a deep feature extraction network, BA-RepCSPDarknet, is developed to address challenges related to small target scale and feature extraction. Additionally, a feature fusion network, NS-PAFPN, is developed to mitigate the issue of deep feature map information loss in UAV images. A visual attention module (BAM) is employed to manage appearance differences under varying angles, while a feature mapping module (DFM) prevents fine-grained feature loss. These advancements lead to the development of BA-YOLOX, a multi-view object detection network model suitable for drone platforms, enhancing accuracy and effectively targeting small objects.
With the rapid progress of artificial intelligence (AI) technology and the mobile internet, 3D hand pose estimation has become critical to various intelligent application areas, e.g., human-computer interaction. To avoid the low accuracy of single-modal estimation and the high complexity of traditional multi-modal 3D estimation, this paper proposes a novel multi-modal multi-view (MMV) 3D hand pose estimation system, which introduces a joint registration before translation (RT) and translation before registration (TR) conditional generative adversarial network (cGAN) to train a multi-modal registration network, and then employs multi-modal feature fusion to achieve high-quality estimation, with low hardware and software costs both in data acquisition and processing. Experimental results demonstrate that the MMV system is effective and feasible in various scenarios. It is promising for the MMV system to be used in broad intelligent application areas.
High-resolution sub-meter satellite data play an increasingly crucial role in the 3D real-scene China construction initiative. Current research on 3D reconstruction using high-resolution satellite data primarily focuses on two approaches: multi-stereo fusion and multi-view matching. While algorithms based on these two methodologies for multi-view image 3D reconstruction have reached relative maturity, no systematic comparison has been conducted specifically on satellite data to evaluate the relative merits of multi-stereo fusion versus multi-view matching methods. This paper conducts a comparative analysis of the practical accuracy of both approaches using high-resolution satellite datasets from diverse geographical regions. To ensure fairness in accuracy comparison, both methodologies employ non-local dense matching for cost optimization. Results demonstrate that the multi-stereo fusion method outperforms multi-view matching in all evaluation metrics, exhibiting approximately 1.2% higher average matching accuracy and 10.7% superior elevation precision in the experimental datasets. Therefore, for 3D modeling applications using satellite data, we recommend adopting the multi-stereo fusion approach for digital surface model (DSM) product generation.
Funding: funded by the Research Project THTETN.05/24-25, Vietnam Academy of Science and Technology.
Funding: partially supported by the Natural Science Foundation of Shandong Province, China (No. ZR2023ME009) and the National Natural Science Foundation of China (No. 51909252).
Abstract: Accurately estimating depth from underwater monocular images is essential for the target tracking task of unmanned underwater vehicles. This work proposes a method based on the Lpg-Lap Unet architecture. First, the Unet architecture integrates Laplacian pyramid depth residuals and Sobel operators to improve the boundary details in depth images, which may suffer from feature loss caused by upsampling and the blurriness of underwater images. Multiscale local planar guidance layers then fully exploit the intermediate depth features, and a comprehensive loss function ensures robustness and accuracy. Experimental results on benchmarks demonstrate the effectiveness of Lpg-Lap Unet and its superior performance over state-of-the-art models. An underwater target tracking system is then designed to further validate its real-time capabilities in the AirSim simulation platform.
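The Sobel operator mentioned above is a standard 3x3 gradient filter; a minimal pure-Python sketch of the gradient-magnitude computation (not the paper's network, just the operator it embeds) might look like this:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude of a 2D grid via 3x3 Sobel kernels.
    Border cells are left at zero for simplicity."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a vertical step edge the response peaks along the boundary and vanishes on flat regions, which is why such operators help sharpen depth boundaries.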
Funding: funded by the Research Project THTETN.05/24-25, Vietnam Academy of Science and Technology.
Abstract: Multi-view clustering is a critical research area in computer science aimed at effectively extracting meaningful patterns from complex, high-dimensional data that single-view methods cannot capture. Traditional fuzzy clustering techniques, such as Fuzzy C-Means (FCM), face significant challenges in handling uncertainty and the dependencies between different views. To overcome these limitations, we introduce a new multi-view fuzzy clustering approach, termed Multi-view Picture Fuzzy Clustering (MPFC), that integrates picture fuzzy sets with a dual-anchor graph method to enhance clustering accuracy and robustness. In particular, picture fuzzy set theory extends the capability to represent uncertainty by modeling three degrees: membership, neutrality, and refusal. This allows for a more flexible representation of uncertain and conflicting data than traditional fuzzy models. Meanwhile, dual-anchor graphs exploit the similarity relationships between data points and integrate information across views. This combination improves stability, scalability, and robustness when handling noisy and heterogeneous data. Experimental results on several benchmark datasets demonstrate significant improvements in clustering accuracy and efficiency, outperforming traditional methods. Specifically, the MPFC algorithm demonstrates outstanding clustering performance on a variety of datasets, attaining a Purity (PUR) score of 0.6440 and an Accuracy (ACC) score of 0.6213 on the 3Sources dataset, underscoring its robustness and efficiency. The proposed approach contributes to fields such as pattern recognition, multi-view relational data analysis, and large-scale clustering problems. Future work will focus on extending the method to semi-supervised multi-view clustering, aiming to enhance adaptability, scalability, and performance in real-world applications.
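The abstract does not detail the dual-anchor construction, but the generic anchor-graph idea it relies on is easy to sketch: each data point is softly assigned to a small set of anchors, and per-view graphs are fused with view weights. The Gaussian kernel and the simple weighted average below are assumptions for illustration:

```python
import math

def anchor_graph(points, anchors, sigma=1.0):
    """Row-stochastic affinity between data points and anchors (Gaussian kernel)."""
    Z = []
    for p in points:
        w = [math.exp(-sum((pc - ac) ** 2 for pc, ac in zip(p, a))
                      / (2.0 * sigma ** 2))
             for a in anchors]
        s = sum(w)
        Z.append([v / s for v in w])  # each row sums to 1
    return Z

def fuse_views(graphs, weights):
    """Weighted average of per-view anchor graphs into one consensus graph."""
    n, m = len(graphs[0]), len(graphs[0][0])
    return [[sum(w * Z[i][j] for w, Z in zip(weights, graphs)) for j in range(m)]
            for i in range(n)]
```

Clustering then operates on the small n-by-anchors matrix rather than a full n-by-n similarity matrix, which is where the scalability benefit comes from.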
Funding: supported by the research on key technologies for monitoring and identifying drug abuse of anesthetic drugs and psychotropic drugs, and intervention for addiction (No. 2023YFC3304200); the program of a study on the diagnosis of addiction to synthetic cannabinoids and methods of assessing the risk of abuse (No. 2022YFC3300905); the program of ab initio design and generation of AI models for small molecule ligands based on target structures (No. 2022PE0AC03); and ZHIJIANG LAB.
Abstract: The accurate prediction of drug absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties represents a crucial step in early drug development for reducing failure risk. Current deep learning approaches face challenges with data sparsity and information loss due to single-molecule representation limitations and isolated predictive tasks. This research proposes molecular properties prediction with parallel-view and collaborative learning (MolP-PC), a multi-view fusion and multi-task deep learning framework that integrates 1D molecular fingerprints (MFs), 2D molecular graphs, and 3D geometric representations, incorporating an attention-gated fusion mechanism and a multi-task adaptive learning strategy for precise ADMET property predictions. Experimental results demonstrate that MolP-PC achieves optimal performance in 27 of 54 tasks, with its multi-task learning (MTL) mechanism significantly enhancing predictive performance on small-scale datasets and surpassing single-task models in 41 of 54 tasks. Additional ablation studies and interpretability analyses confirm the significance of multi-view fusion in capturing multi-dimensional molecular information and enhancing model generalization. A case study examining the anticancer compound Oroxylin A demonstrates MolP-PC's effective generalization in predicting key pharmacokinetic parameters such as half-life (T0.5) and clearance (CL), indicating its practical utility in drug modeling. However, the model exhibits a tendency to underestimate volume of distribution (VD), indicating potential for improvement in analyzing compounds with high tissue distribution. This study presents an efficient and interpretable approach for ADMET property prediction, establishing a novel framework for molecular optimization and risk assessment in drug development.
Funding: supported by the National Natural Science Foundation of China (32122066, 32201855) and STI2030-Major Projects (2023ZD04076).
Abstract: Phenotypic prediction is a promising strategy for accelerating plant breeding. Data from multiple sources (called multi-view data) can provide complementary information to characterize a biological object from various aspects. By integrating multi-view information into phenotypic prediction, a multi-view best linear unbiased prediction (MVBLUP) method is proposed in this paper. To measure the importance of multiple data views, a differential evolution algorithm with an early stopping mechanism is used, by which we obtain a multi-view kinship matrix and then incorporate it into the BLUP model for phenotypic prediction. To further illustrate the characteristics of MVBLUP, we perform empirical experiments on four multi-view datasets from different crops. Compared to the single-view method, the prediction accuracy of the MVBLUP method improves by 0.038-0.201 on average. The results demonstrate that MVBLUP is an effective integrative prediction method for multi-view data.
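The core prediction step described above, fusing per-view kinship matrices and solving a BLUP-style mixed model, can be sketched with a kernel-ridge formulation. The view weights are treated as given here (the paper learns them by differential evolution), and the function names and the fixed ridge parameter are assumptions:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mvblup_predict(kinships, weights, y, lam=1.0):
    """Fuse per-view kinship matrices into K = sum_v w_v K_v, then make a
    GBLUP-like prediction yhat = K (K + lam*I)^(-1) y."""
    n = len(y)
    K = [[sum(w * Kv[i][j] for w, Kv in zip(weights, kinships))
          for j in range(n)] for i in range(n)]
    A = [[K[i][j] + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    alpha = gauss_solve(A, y)
    return [sum(K[i][j] * alpha[j] for j in range(n)) for i in range(n)]
```

With identity kinships and lam = 1, predictions shrink phenotypes by half, the expected ridge behavior.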
Abstract: The morphological description of wear particles in lubricating oil is crucial for wear state monitoring and fault diagnosis in aero-engines. Accurately and comprehensively acquiring three-dimensional (3D) morphological data of these particles has become a key focus in wear debris analysis. Herein, we develop a novel multi-view polarization-sensitive optical coherence tomography (PS-OCT) method to achieve accurate 3D morphology detection and reconstruction of aero-engine lubricant wear particles, effectively resolving occlusion-induced information loss while enabling material-specific characterization. The particle morphology is captured by multi-view imaging, followed by filtering, sharpening, and contour recognition. The method integrates advanced registration algorithms with Poisson reconstruction to generate high-precision 3D models. This approach not only provides accurate 3D morphological reconstruction but also mitigates information loss caused by particle occlusion, ensuring model completeness. Furthermore, by collecting the polarization characteristics of typical metals and their oxides in aero-engine lubricants, this work comprehensively characterizes and comparatively analyzes particle polarization properties using Stokes vectors, polarization uniformity, and cumulative phase retardation, and obtains a 3D model containing polarization information. Ultimately, the proposed method enables multidimensional information acquisition for the reliable identification of abrasive particle types.
Funding: the Fundamental Research Funds for Inner Mongolia Directly Affiliated Universities (Grant No. BR221032) and the First Class Disciplines Research Special Project (Grant No. YLXKZX-NND-009).
Abstract: Traditional sheep identification is based on ear tags. However, the application of ear tags not only causes stress to the animals but also leads to loss of ear tags, which affects the correct recognition of sheep identity. In contrast, the acquisition of sheep face images offers the advantages of being non-invasive and stress-free for the animals. Nevertheless, extant convolutional neural network-based sheep face identification models are prone to inadequate refinement, which renders their implementation on farms challenging. To address this issue, this study presents a novel sheep face recognition model that employs advanced feature fusion techniques and precise image segmentation strategies. The images were preprocessed and accurately segmented using deep learning techniques, with a dataset constructed containing sheep face images from multiple viewpoints (left, front, and right faces). In particular, the model employs a segmentation algorithm to delineate the sheep face region accurately, utilizes an Improved Convolutional Block Attention Module (I-CBAM) to emphasize the salient features of the sheep face, and achieves multi-scale fusion of the features through a Feature Pyramid Network (FPN). This process guarantees that features captured from disparate viewpoints can be efficiently integrated to enhance recognition accuracy. Furthermore, the model guarantees the precise delineation of sheep facial contours by streamlining the image segmentation procedure, thereby establishing a robust basis for the precise identification of sheep identity. The findings demonstrate that the recognition accuracy of the Sheep Face Mask Region-based Convolutional Neural Network (SFMask RCNN) model has been enhanced by 9.64% to 98.65% in comparison with the original model. The method offers a novel technological approach to the management of animal identity in sheep husbandry.
Funding: funded by the Yangtze River Delta Science and Technology Innovation Community Joint Research Project (2023CSJGG1600), the Natural Science Foundation of Anhui Province (2208085MF173), and the Wuhu "ChiZhu Light" Major Science and Technology Project (2023ZD01, 2023ZD03).
Abstract: In the dynamic scenes encountered by autonomous vehicles, monocular depth estimation often suffers from inaccurate edge depth. To solve this problem, we propose an unsupervised monocular depth estimation model based on edge enhancement, aimed specifically at the depth perception challenge in dynamic scenes. The model consists of two core networks: a depth prediction network and a motion estimation network, both of which adopt an encoder-decoder architecture. The depth prediction network is based on a U-Net structure with a ResNet18 backbone and is responsible for generating the depth map of the scene. The motion estimation network is based on a U-Net structure derived from FlowNet, focusing on the motion estimation of dynamic targets. In the decoding stage of the motion estimation network, we innovatively introduce an edge-enhanced decoder, which integrates a convolutional block attention module (CBAM) in the decoding process to enhance the recognition of the edge features of moving objects. In addition, we design a strip convolution module to improve the model's capture of discrete moving targets. To further improve performance, we propose a novel edge regularization method based on the Laplace operator, which effectively accelerates the convergence of the model. Experimental results on the KITTI and Cityscapes datasets show that, compared with current advanced dynamic unsupervised monocular models, the proposed model achieves a significant improvement in depth estimation accuracy and convergence speed. Specifically, the root mean square error (RMSE) is reduced by 4.8% compared with the DepthMotion algorithm, while the training convergence speed is increased by 36%, demonstrating the superior performance of the model in the depth estimation task in dynamic scenes.
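The Laplace operator underlying the edge regularization above is the standard 4-neighbor discrete Laplacian. A minimal sketch of a Laplacian-based smoothness penalty (an illustration of the operator, not the paper's exact loss) could be:

```python
def laplacian_penalty(depth):
    """Mean absolute 4-neighbor Laplacian of a 2D depth map: zero on locally
    planar regions, large at spikes and noisy edges. Borders are skipped."""
    h, w = len(depth), len(depth[0])
    vals = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (depth[i - 1][j] + depth[i + 1][j] +
                   depth[i][j - 1] + depth[i][j + 1] - 4.0 * depth[i][j])
            vals.append(abs(lap))
    return sum(vals) / len(vals)
```

Because the Laplacian of any linear ramp is exactly zero, the penalty leaves planar surfaces untouched while discouraging isolated depth outliers.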
Funding: supported by the National Natural Science Foundation of China (Grant No. 92148206).
Abstract: Robot-assisted surgery has become an indispensable component of modern neurosurgical procedures. However, existing registration methods for neurosurgical robots often rely on high-end hardware and involve prolonged or unstable registration times, limiting their applicability in dynamic and time-sensitive intraoperative settings. This paper proposes a novel, fully automatic monocular registration and real-time tracking method, MonoTracker. First, dedicated fiducials are designed, and an automatic preoperative and intraoperative detection method for these fiducials is introduced. Second, a geometric representation of the fiducials is constructed based on a 2D KD-tree. Through a two-stage optimization process, the depth of the 2D fiducials is estimated, and 2D-3D correspondences are established to achieve monocular registration. This approach enables fully automatic intraoperative registration using only a single optical camera. Finally, a six-degree-of-freedom visual servo control strategy inspired by the mass-spring-damper system is proposed. By integrating an artificial potential field with admittance control, the strategy ensures real-time responsiveness and stable tracking. Experimental results demonstrate that the proposed method achieves a registration time of 0.23 s per instance with an average error of 0.58 mm. Additionally, the motion performance of the control strategy has been validated. Preliminary experiments verify the effectiveness of MonoTracker in dynamic tracking scenarios. This method holds promise for enhancing the adaptability of neurosurgical robots and offers significant clinical application potential.
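The 2D KD-tree used for the fiducial geometry is a textbook structure; a compact sketch of building one over (x, y) fiducial coordinates and answering nearest-neighbor queries (the correspondence-search primitive, not the paper's full two-stage optimization) is:

```python
def build_kdtree(points, depth=0):
    """Recursively build a 2D KD-tree over (x, y) coordinates,
    splitting on x and y axes alternately."""
    if not points:
        return None
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"pt": pts[mid], "axis": axis,
            "left": build_kdtree(pts[:mid], depth + 1),
            "right": build_kdtree(pts[mid + 1:], depth + 1)}

def nearest(node, q, best=None):
    """Nearest-neighbor query; returns (point, squared_distance)."""
    if node is None:
        return best
    d2 = (q[0] - node["pt"][0]) ** 2 + (q[1] - node["pt"][1]) ** 2
    if best is None or d2 < best[1]:
        best = (node["pt"], d2)
    diff = q[node["axis"]] - node["pt"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, q, best)
    if diff * diff < best[1]:  # search sphere crosses the splitting plane
        best = nearest(far, q, best)
    return best
```

Matching each detected image fiducial against such a tree keeps correspondence search at O(log n) per query instead of brute-force O(n).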
Funding: co-supported by the Science and Technology Innovation Program of Hunan Province, China (No. 2023RC3023) and the National Natural Science Foundation of China (No. 12272404).
Abstract: The autonomous landing guidance of fixed-wing aircraft in unknown structured scenes presents a substantial technological challenge, particularly regarding the effectiveness of solutions for monocular visual relative pose estimation. This study proposes a novel airborne monocular visual estimation method based on structured scene features to address this challenge. First, a multitask neural network model is established for segmentation, depth estimation, and slope estimation on monocular images, and a comprehensive three-dimensional information metric for monocular images is designed, encompassing length, span, flatness, and slope information. Subsequently, structured edge features are leveraged to filter candidate landing regions adaptively, and by leveraging the three-dimensional information metric, the optimal landing region is accurately and efficiently identified. Finally, sparse two-dimensional key points are used to parameterize the optimal landing region for the first time, and a high-precision relative pose estimation is achieved. Additional measurement information is introduced to provide autonomous landing guidance information between the aircraft and the optimal landing region. Experimental results obtained from both synthetic and real data demonstrate the effectiveness of the proposed method in monocular pose estimation for autonomous aircraft landing guidance in unknown structured scenes.
Funding: supported by the National Key R&D Program of China (2023YFC3304600).
Abstract: Existing multi-view deep subspace clustering methods aim to learn a unified representation from multi-view data, but the learned representation struggles to maintain the underlying structure hidden in the original samples, especially the high-order neighbor relationships between samples. To overcome these challenges, this paper proposes a novel multi-order neighborhood fusion based multi-view deep subspace clustering model. We creatively integrate the multi-order proximity graph structures of different views into the self-expressive layer through a multi-order neighborhood fusion module. By this design, the multi-order Laplacian matrix supervises the learning of the view-consistent self-representation affinity matrix; we can then obtain an optimal global affinity matrix in which each connected node belongs to one cluster. In addition, a discriminative constraint between views is designed to further improve clustering performance. A range of experiments on six public datasets demonstrates that the method performs better than other advanced multi-view clustering methods. The code is available at https://github.com/songzuolong/MNF-MDSC (accessed on 25 December 2024).
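Multi-order proximity is typically obtained from powers of the adjacency matrix: A captures direct neighbors, A^2 two-hop neighbors, and so on. The decayed-sum fusion below is a common generic scheme, used here only to illustrate the idea (the paper's exact fusion module is not specified in the abstract):

```python
def matmul(A, B):
    """Dense matrix product for small square matrices."""
    n, m, p = len(A), len(B[0]), len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

def multi_order_proximity(A, orders=3, decay=0.5):
    """Fuse 1st..k-th order neighborhoods: M = sum_k decay^k * A^k.
    Higher-order hops contribute with geometrically decreasing weight."""
    n = len(A)
    M = [[0.0] * n for _ in range(n)]
    P = [row[:] for row in A]  # current power A^k, starting at A^1
    w = decay
    for _ in range(orders):
        for i in range(n):
            for j in range(n):
                M[i][j] += w * P[i][j]
        P = matmul(P, A)
        w *= decay
    return M
```

On a path graph 0-1-2, nodes 0 and 2 are not directly adjacent but become connected in the fused matrix through the second-order term, which is exactly the structure single-order affinities miss.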
Abstract: The increasing prevalence of multi-view data has made multi-view clustering a crucial technique for discovering latent structures from heterogeneous representations. However, traditional fuzzy clustering algorithms show limitations with the inherent uncertainty and imprecision of such data, as they rely on a single-dimensional membership value. To overcome these limitations, we propose an auto-weighted multi-view neutrosophic fuzzy clustering (AW-MVNFC) algorithm. Our method leverages the neutrosophic framework, an extension of fuzzy sets, to explicitly model imprecision and ambiguity through three membership degrees. The core novelty of AW-MVNFC lies in a hierarchical weighting strategy that adaptively learns the contributions of individual data views and the importance of each feature within a view. Through a unified objective function, AW-MVNFC jointly optimizes the neutrosophic membership assignments, cluster centers, and the distributions of view and feature weights. Comprehensive experiments conducted on synthetic and real-world datasets demonstrate that our algorithm achieves more accurate and stable clustering than existing methods, confirming its effectiveness in handling the complexities of multi-view data.
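Auto-weighted schemes of this kind usually assign each view (or each feature within a view) a weight that decreases with its clustering cost. The entropy-regularized softmax rule below is one standard choice, shown purely as an illustration of the hierarchical-weighting idea; the paper's actual update rule is not given in the abstract:

```python
import math

def adaptive_weights(costs, gamma=1.0):
    """Entropy-regularized weight update: w_v proportional to exp(-cost_v / gamma).
    Smaller gamma concentrates weight on the cheapest (best-fitting) view;
    the same rule can be reused for features within a single view."""
    e = [math.exp(-c / gamma) for c in costs]
    s = sum(e)
    return [v / s for v in e]
```

A hierarchical strategy would call this once on per-view costs and again, inside each view, on per-feature costs, so noisy views and irrelevant features are both down-weighted automatically.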
Abstract: Depth maps play a crucial role in practical applications such as computer vision, augmented reality, and autonomous driving, and obtaining clear and accurate depth information in video depth estimation remains a significant challenge in computer vision. Existing monocular video depth estimation models tend to produce blurred or inaccurate depth in regions with object edges and low texture. To address this issue, we propose a monocular depth estimation architecture guided by semantic segmentation masks, which introduces semantic information into the model to correct ambiguous depth regions. We have evaluated the proposed method, and experimental results show that it improves the accuracy of edge depth, demonstrating the effectiveness of our approach.
Funding: supported by the National Science Fund for Distinguished Young Scholars, China (No. 51625501); the Aeronautical Science Foundation of China (No. 20240046051002); and the National Natural Science Foundation of China (No. 52005028).
Abstract: Real-time and accurate drogue pose measurement during docking is fundamental and critical for Autonomous Aerial Refueling (AAR). Vision measurement is the most practicable technique, but its accuracy and robustness are easily affected by the limited computing power of airborne equipment, complex aerial scenes, and partial occlusion. To address these challenges, we propose a novel drogue keypoint detection and pose measurement algorithm based on monocular vision, and realize real-time processing on airborne embedded devices. First, a lightweight network is designed with structural re-parameterization to reduce computational cost and improve inference speed, and a sub-pixel keypoint prediction head and corresponding loss functions are adopted to improve keypoint detection accuracy. Second, a closed-form solution of the drogue pose is computed based on double spatial circles, followed by nonlinear refinement based on Levenberg-Marquardt optimization. Both virtual and physical simulation experiments have been used to test the proposed method. In the virtual simulation, the mean pixel error of the proposed method is 0.787 pixels, significantly superior to that of other methods. In the physical simulation, the mean relative measurement error is 0.788%, and the mean processing time is 13.65 ms on embedded devices.
Funding: supported in part by the National Natural Science Foundation of China under Grant 62071345.
Abstract: Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily because it eliminates the dependence on ground-truth depth. However, the prevailing architectures in this domain suffer from inherent limitations: existing pose network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios due to dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
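The rotational part of an SE(3) transformation is commonly parameterized by a 3-vector through the exponential map (Rodrigues' formula); the translation part is then just a second 3-vector. The sketch below shows only this standard map, not the paper's specific network parameterization:

```python
import math

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector w -> 3x3 rotation matrix.
    R = I + sin(t)*K + (1 - cos(t))*K^2, where t = |w| and K = skew(w / t)."""
    t = math.sqrt(sum(x * x for x in w))
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    if t < 1e-12:
        return I  # near zero, exp(w) is approximately the identity
    k = [x / t for x in w]  # unit rotation axis
    K = [[0.0, -k[2], k[1]],
         [k[2], 0.0, -k[0]],
         [-k[1], k[0], 0.0]]
    K2 = [[sum(K[i][m] * K[m][j] for m in range(3)) for j in range(3)]
          for i in range(3)]
    s, c = math.sin(t), math.cos(t)
    return [[I[i][j] + s * K[i][j] + (1.0 - c) * K2[i][j] for j in range(3)]
            for i in range(3)]
```

Predicting the 3-vector w (plus a translation) instead of a full 3x3 matrix keeps the network output minimal and guarantees the result is a valid rotation.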
Funding: supported by the National Natural Science Foundation of China (Grant No. 62101087); the China Postdoctoral Science Foundation (Grant No. 2021MD703942); the Chongqing Postdoctoral Research Project Special Funding, China (Grant No. 2021XM2016); the Science Foundation of Chongqing Municipal Commission of Education, China (Grant No. KJQN202100642); and the Chongqing Natural Science Foundation, China (Grant No. cstc2021jcyj-msxmX0834).
Abstract: Drug repurposing offers a promising alternative to traditional drug development and significantly reduces costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and the integration of multi-omics data. We aim to inspire further innovations in multi-source data integration and support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
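The matrix factorization component mentioned above is, in its simplest form, a low-rank decomposition of the drug-disease association matrix: unseen pairs are scored by the reconstructed value. The SGD sketch below is a generic toy version with assumed hyperparameters, not AMVL's ensemble formulation:

```python
import random

def factorize(M, rank=2, steps=500, lr=0.05, reg=0.01, seed=0):
    """SGD low-rank factorization M ~ P Q^T. A candidate drug-disease pair
    (i, j) is then scored by the inner product P[i] . Q[j]."""
    rng = random.Random(seed)
    n, m = len(M), len(M[0])
    P = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(m)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                err = M[i][j] - sum(P[i][f] * Q[j][f] for f in range(rank))
                for f in range(rank):
                    pif, qjf = P[i][f], Q[j][f]
                    # gradient step on squared error with L2 regularization
                    P[i][f] += lr * (err * qjf - reg * pif)
                    Q[j][f] += lr * (err * pif - reg * qjf)
    return P, Q
```

In a multi-view setting, per-view similarity matrices would additionally regularize P and Q so that similar drugs (or diseases) receive similar latent factors.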
Funding: supported by the Natural Science Foundation of China, Grant No. 62103052.
Abstract: Drone swarm systems, equipped with photoelectric imaging and intelligent target perception, are essential for reconnaissance and strike missions in complex and high-risk environments. They excel in information sharing, anti-jamming capability, and combat performance, making them critical for future warfare. However, the varied perspectives in collaborative combat scenarios pose challenges to object detection, hindering traditional detection algorithms and reducing accuracy, while limited angle-prior data and sparse samples further complicate detection. This paper presents the Multi-View Collaborative Detection System, which tackles the challenges of multi-view object detection in collaborative combat scenarios. The system is designed to enhance multi-view image generation and detection algorithms, thereby improving the accuracy and efficiency of object detection across varying perspectives. First, an observation model for three-dimensional targets through line-of-sight angle transformation is constructed, and a multi-view image generation algorithm based on the Pix2Pix network is designed. For object detection, YOLOX is utilized, and a deep feature extraction network, BA-RepCSPDarknet, is developed to address challenges related to small target scale and feature extraction. Additionally, a feature fusion network, NS-PAFPN, is developed to mitigate the loss of deep feature map information in UAV images. A visual attention module (BAM) is employed to manage appearance differences under varying angles, while a feature mapping module (DFM) prevents fine-grained feature loss. These advancements lead to BA-YOLOX, a multi-view object detection network model suitable for drone platforms, enhancing accuracy and effectively targeting small objects.
Abstract: With the rapid progress of artificial intelligence (AI) technology and the mobile internet, 3D hand pose estimation has become critical to various intelligent application areas, e.g., human-computer interaction. To avoid the low accuracy of single-modal estimation and the high complexity of traditional multi-modal 3D estimation, this paper proposes a novel multi-modal multi-view (MMV) 3D hand pose estimation system, which introduces a registration before translation (RT)-translation before registration (TR) joint conditional generative adversarial network (cGAN) to train a multi-modal registration network, and then employs multi-modal feature fusion to achieve high-quality estimation, with low hardware and software costs in both data acquisition and processing. Experimental results demonstrate that the MMV system is effective and feasible in various scenarios and promising for broad intelligent application areas.