As joint operations have become a key trend in modern military development, unmanned aerial vehicles (UAVs) play an increasingly important role in enhancing the intelligence and responsiveness of combat systems. However, the heterogeneity of aircraft, partial observability, and dynamic uncertainty in operational airspace pose significant challenges to autonomous collision avoidance using traditional methods. To address these issues, this paper proposes an adaptive collision avoidance approach for UAVs based on deep reinforcement learning. First, a unified uncertainty model incorporating dynamic wind fields is constructed to capture the complexity of joint operational environments. Then, to effectively handle the heterogeneity between manned and unmanned aircraft and the limitations of dynamic observations, a sector-based partial observation mechanism is designed. A Dynamic Threat Prioritization Assessment algorithm is also proposed to evaluate potential collision threats from multiple dimensions, including time to closest approach, minimum separation distance, and aircraft type. Furthermore, a Hierarchical Prioritized Experience Replay (HPER) mechanism is introduced, which classifies experience samples into high, medium, and low priority levels to preferentially sample critical experiences, thereby improving learning efficiency and accelerating policy convergence. Simulation results show that the proposed HPER-D3QN algorithm outperforms existing methods in terms of learning speed, environmental adaptability, and robustness, significantly enhancing collision avoidance performance and convergence rate. Finally, transfer experiments on a high-fidelity battlefield airspace simulation platform validate the proposed method's deployment potential and practical applicability in complex, real-world joint operational scenarios.
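The HPER mechanism described above can be sketched as a three-tier replay buffer that bins transitions by the magnitude of their TD error and samples the tiers with fixed preference. The thresholds, tier probabilities, and fallback rule below are illustrative assumptions, not the paper's actual parameters:

```python
import random

class HierarchicalReplayBuffer:
    """Toy three-tier prioritized replay: transitions are binned by |TD error|."""
    def __init__(self, hi_thresh=1.0, lo_thresh=0.1, tier_probs=(0.6, 0.3, 0.1), seed=0):
        self.tiers = {"high": [], "medium": [], "low": []}
        self.hi, self.lo = hi_thresh, lo_thresh
        self.tier_probs = tier_probs
        self.rng = random.Random(seed)

    def add(self, transition, td_error):
        err = abs(td_error)
        tier = "high" if err >= self.hi else "medium" if err >= self.lo else "low"
        self.tiers[tier].append(transition)

    def sample(self, batch_size):
        names = ("high", "medium", "low")
        batch = []
        for _ in range(batch_size):
            # Draw a tier according to the fixed tier probabilities,
            # skipping empty tiers as the cumulative probability accumulates.
            r, acc, chosen = self.rng.random(), 0.0, None
            for name, p in zip(names, self.tier_probs):
                acc += p
                if r < acc and self.tiers[name]:
                    chosen = name
                    break
            if chosen is None:  # fallback: first non-empty tier
                chosen = next(n for n in names if self.tiers[n])
            batch.append(self.rng.choice(self.tiers[chosen]))
        return batch

buf = HierarchicalReplayBuffer()
buf.add(("s0", "a0", 1.0, "s1"), td_error=2.3)   # high priority
buf.add(("s1", "a1", 0.0, "s2"), td_error=0.4)   # medium priority
buf.add(("s2", "a2", 0.0, "s3"), td_error=0.01)  # low priority
batch = buf.sample(8)
```

In a full D3QN loop, the TD error would be recomputed after each learning step and stale transitions re-binned; this sketch omits that bookkeeping.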
The accurate prediction of drug absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties represents a crucial step in early drug development for reducing failure risk. Current deep learning approaches face challenges with data sparsity and information loss due to single-molecule representation limitations and isolated predictive tasks. This research proposes molecular properties prediction with parallel-view and collaborative learning (MolP-PC), a multi-view fusion and multi-task deep learning framework that integrates 1D molecular fingerprints (MFs), 2D molecular graphs, and 3D geometric representations, incorporating an attention-gated fusion mechanism and a multi-task adaptive learning strategy for precise ADMET property predictions. Experimental results demonstrate that MolP-PC achieves optimal performance in 27 of 54 tasks, with its multi-task learning (MTL) mechanism significantly enhancing predictive performance on small-scale datasets and surpassing single-task models in 41 of 54 tasks. Additional ablation studies and interpretability analyses confirm the significance of multi-view fusion in capturing multi-dimensional molecular information and enhancing model generalization. A case study examining the anticancer compound Oroxylin A demonstrates MolP-PC's effective generalization in predicting key pharmacokinetic parameters such as half-life (T0.5) and clearance (CL), indicating its practical utility in drug modeling. However, the model exhibits a tendency to underestimate volume of distribution (VD), indicating potential for improvement in analyzing compounds with high tissue distribution. This study presents an efficient and interpretable approach for ADMET property prediction, establishing a novel framework for molecular optimization and risk assessment in drug development.
Joint roughness coefficient (JRC) is the most commonly used parameter for quantifying the surface roughness of rock discontinuities in practice. The system composed of multiple roughness statistical parameters used to measure JRC is a nonlinear system with a large amount of overlapping information. In this paper, a dataset of eight roughness statistical parameters covering 112 digital joints is established. Then, the principal component analysis method is introduced to extract the significant information, which solves the information overlap problem of roughness characterization. Based on the two principal components of the extracted features, the white shark optimizer algorithm is introduced to optimize an extreme gradient boosting model, and a new machine learning (ML) prediction model is established. The prediction accuracy of the new model and of 17 other models is measured using statistical metrics. The results show that the predictions of the new model are more consistent with the real JRC values, with higher recognition accuracy and generalization ability.
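The principal component extraction step can be illustrated on synthetic data. The sketch below builds a matrix of 112 samples with eight strongly correlated columns as a stand-in for the real roughness-parameter dataset, then standardizes and projects onto the top two components; the white shark optimizer and gradient boosting stages are omitted:

```python
import numpy as np

def top_principal_components(X, k=2):
    """Standardize the parameter matrix and project it onto its top-k principal axes."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(Xs, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues returned in ascending order
    order = np.argsort(eigvals)[::-1][:k]    # indices of the k largest eigenvalues
    scores = Xs @ eigvecs[:, order]          # component scores used as model inputs
    explained = eigvals[order] / eigvals.sum()
    return scores, explained

rng = np.random.default_rng(0)
base = rng.normal(size=(112, 1))
# Eight noisy copies of one latent factor: heavily overlapping information,
# mimicking the redundancy among roughness statistical parameters.
X = np.hstack([base + 0.05 * rng.normal(size=(112, 1)) for _ in range(8)])
scores, explained = top_principal_components(X, k=2)
```

Because the columns are nearly collinear, the first component captures almost all the variance, which is exactly the redundancy the paper's PCA step removes before regression.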
In the field of optoelectronics, certain types of data may be difficult to accurately annotate, such as high-resolution optoelectronic imaging or imaging in certain special spectral ranges. Weakly supervised learning can provide a more reliable approach in these situations. Current popular approaches mainly adopt classification-based class activation maps (CAM) as initial pseudo labels to solve the task.
Drug repurposing offers a promising alternative to traditional drug development and significantly reduces costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study are open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and the integration of multi-omics data. We aim to inspire further innovations in multi-source data integration and to support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
Aerosol optical depth (AOD) and fine particulate matter with a diameter of less than or equal to 2.5 μm (PM2.5) play crucial roles in air quality, human health, and climate change. However, the complex correlation between AOD and PM2.5 and the limitations of existing algorithms pose a significant challenge to realizing the accurate joint retrieval of these two parameters at the same location. On this point, a multi-task learning (MTL) model, which enables the joint retrieval of PM2.5 concentration and AOD, is proposed, applied to top-of-the-atmosphere reflectance data gathered by the Fengyun-4A Advanced Geosynchronous Radiation Imager (FY-4A AGRI), and compared with two single-task learning models, namely Random Forest (RF) and Deep Neural Network (DNN). Specifically, MTL achieves a coefficient of determination (R²) of 0.88 and a root-mean-square error (RMSE) of 0.10 in AOD retrieval. In comparison to RF, the R² increases by 0.04, the RMSE decreases by 0.02, and the percentage of retrieval results falling within the expected error range (Within-EE) rises by 5.55%. The R² and RMSE of PM2.5 retrieval by MTL are 0.84 and 13.76 μg m⁻³, respectively. Compared with RF, the R² increases by 0.06, the RMSE decreases by 4.55 μg m⁻³, and the Within-EE increases by 7.28%. Additionally, compared to DNN, MTL shows an increase of 0.01 in R² and a decrease of 0.02 in RMSE in AOD retrieval, with a corresponding increase of 2.89% in Within-EE. For PM2.5 retrieval, MTL exhibits an increase of 0.05 in R², a decrease of 1.76 μg m⁻³ in RMSE, and an increase of 6.83% in Within-EE. The evaluation suggests that MTL is able to provide simultaneously improved AOD and PM2.5 retrievals, demonstrating a significant advantage in efficiently capturing the spatial distribution of PM2.5 concentration and AOD.
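A shared-trunk network with two regression heads is the standard way to realize this kind of joint retrieval. The sketch below is a minimal numpy forward pass with made-up dimensions and an equal-weight joint loss, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy shared-trunk network: one hidden layer feeds two regression heads.
# Six input features stand in for TOA reflectance bands (illustrative only).
W_shared = rng.normal(scale=0.1, size=(6, 16))
W_aod    = rng.normal(scale=0.1, size=(16, 1))
W_pm25   = rng.normal(scale=0.1, size=(16, 1))

def forward(x):
    h = np.tanh(x @ W_shared)        # representation shared by both tasks
    return h @ W_aod, h @ W_pm25     # task-specific heads: AOD and PM2.5

def joint_loss(aod_pred, aod_true, pm_pred, pm_true, w_aod=0.5, w_pm=0.5):
    """Weighted sum of per-task MSE losses: the core of multi-task training."""
    mse = lambda p, t: float(np.mean((p - t) ** 2))
    return w_aod * mse(aod_pred, aod_true) + w_pm * mse(pm_pred, pm_true)

x = rng.normal(size=(32, 6))
aod_pred, pm_pred = forward(x)
loss = joint_loss(aod_pred, np.zeros((32, 1)), pm_pred, np.zeros((32, 1)))
```

Gradients of this joint loss flow through the shared trunk, which is the mechanism by which each task regularizes the other's representation.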
Seismic AVO/AVA (amplitude-versus-offset or amplitude-versus-angle) analysis, based on prestack seismic angle gathers and the Zoeppritz equation, has been widely used in seismic exploration. However, conducting multi-parameter AVO/AVA inversion using only PP-wave angle gathers is often highly ill-posed, leading to instability and inaccuracy in the inverted elastic parameters (e.g., P- and S-wave velocities and bulk density). Seismic AVO/AVA analysis simultaneously using both PP-wave (pressure wave down, pressure wave up) and PS-wave (pressure wave down, converted shear wave up) angle gathers has proven to be an effective method for reducing the reservoir interpretation ambiguity associated with using the single wave mode of PP-waves. To avoid complex PS-wave processing, and the risks associated with PP and PS waveform alignment, we developed a method that predicts PS-wave angle gathers from PP-wave angle gathers using a deep learning algorithm, specifically the cGAN deep learning algorithm. Our deep learning model is trained with synthetic data, demonstrating a strong fit between the predicted PS-waves and real PS-waves in a test dataset. Subsequently, the trained deep learning model is applied to actual field PP-waves, maintaining robust performance. In the field data test, the predicted PS-wave angle gather at the well location closely aligns with the synthetic PS-wave angle gather generated using reference well logs. Finally, the P- and S-wave velocities estimated from the joint PP and PS AVA inversion, based on field PP-waves and the predicted PS-waves, display a superior model fit compared to those obtained solely from the PP-wave AVA inversion using field PP-waves. Our contribution lies in first carrying out the joint PP and PS inversion using predicted PS-waves rather than field PS-waves, which removes the limitation of having to acquire PS-wave angle gathers.
Fiber-to-the-Room (FTTR) networks with multi-access point (AP) coordination face significant challenges in implementing Joint Transmission (JT), particularly the high overhead of Channel State Information (CSI) acquisition. While the centralized wireless access network (C-WAN) architecture inherently provides high-precision synchronization through fiber-based clock distribution and centralized scheduling, efficient JT still requires accurate CSI with low signaling cost. In this paper, we propose a deep learning-based hybrid model that synergistically integrates temporal prediction and spatial reconstruction to exploit spatiotemporal correlations in indoor channels. By leveraging the centralized data and computational capability of the C-WAN architecture, the model reduces the sounding frequency and the number of antennas required per sounding instance. Experimental results on a real-world synchronized channel dataset show that the proposed method lowers over-the-air resource consumption while maintaining JT performance close to that achieved with ideal CSI, offering a practical low-overhead solution for high-performance FTTR systems.
To address the shortcomings of traditional network anomaly diagnosis algorithms in unsupervised environments, such as low accuracy in anomaly localization and anomaly data classification, a wireless network anomaly diagnosis method based on an improved Q-learning algorithm is designed. First, data flows of the wireless network are collected based on ADU (Asynchronous Data Unit) units and packet features are extracted. Then, a Q-learning model is constructed to explore the balance point between state values and reward values, and the SA (Simulated Annealing) algorithm is used to accurately identify the next state from a global perspective. Finally, the joint distribution probability of the training samples is determined to improve the approximation performance of the output values and achieve a balance between exploration and cost. Test results show that the improved Q-learning algorithm achieves a mean network anomaly localization accuracy of 99.4%, and also outperforms three traditional network anomaly diagnosis methods in classification accuracy and classification efficiency across different types of network anomalies.
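The combination of tabular Q-learning with a simulated-annealing exploration schedule can be sketched on a toy chain environment. The environment, cooling rate, and acceptance rule below are illustrative assumptions standing in for the paper's wireless-diagnosis setting:

```python
import math
import random

def sa_q_learning(n_states=5, n_actions=2, episodes=300, alpha=0.5, gamma=0.9,
                  t0=1.0, cooling=0.98, seed=1):
    """Tabular Q-learning on a toy chain where action 1 moves right toward a goal reward.
    Exploration follows a simulated-annealing schedule: the temperature decays each
    episode, shifting behavior from random exploration to greedy exploitation."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    temp = t0
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            greedy = max(range(n_actions), key=lambda a: Q[s][a])
            other = rng.randrange(n_actions)
            # SA-style acceptance: take a non-greedy action with probability exp(-ΔQ / T)
            delta = Q[s][greedy] - Q[s][other]
            a = other if rng.random() < math.exp(-delta / max(temp, 1e-8)) else greedy
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
        temp *= cooling  # anneal: exploration probability shrinks over episodes
    return Q

Q = sa_q_learning()
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

Early on the temperature is high and ΔQ is near zero, so non-greedy actions are accepted almost always; as Q-values separate and the temperature cools, the agent settles on the greedy policy.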
In geotechnical and tunneling engineering, accurately determining the mechanical properties of jointed rock holds great significance for project safety assessments. Peak shear strength (PSS), being the paramount mechanical property of joints, has been a focal point in the research field. Current PSS prediction models for jointed rock have several limitations: (i) the models do not comprehensively consider the various influencing factors, and no PSS prediction model has been established that covers seven factors, namely the sampling interval of the joints, the surface roughness of the joints, the normal stress, the basic friction angle, the uniaxial tensile strength, the uniaxial compressive strength, and the joint size for coupled joints; (ii) the datasets used to train the models are relatively limited; and (iii) there is controversy regarding whether compressive or tensile strength should be used as the strength term among the influencing factors. To overcome these limitations, we developed four machine learning models covering these seven influencing factors, three relying on Support Vector Regression (SVR) with different kernel functions (linear, polynomial, and Radial Basis Function (RBF)) and one using deep learning (DL). Based on these seven influencing factors, we compiled a dataset comprising the outcomes of 493 published direct shear tests for the training and validation of these four models. We compared the prediction performance of the four machine learning models with Tang's and Tatone's models. The prediction errors of Tang's and Tatone's models are 21.8% and 17.7%, respectively, while SVR_linear is at 16.6%, SVR_poly at 14.0%, and SVR_RBF at 12.1%. DL outperforms the two existing models with only an 8.5% error. Additionally, we performed shear tests on granite joints to validate the predictive capability of the DL-based model. With the DL approach, the results suggest that uniaxial tensile strength is recommended as the material strength term in the PSS model for more reliable outcomes.
A distributed reinforcement learning (RL) based resource management framework is proposed for a mobile edge computing (MEC) system with both latency-sensitive and latency-insensitive services. We investigate the joint optimization of both computing and radio resources to achieve efficient on-demand matching of multi-dimensional resources to the diverse requirements of users. A multi-objective integer programming problem is formulated and decomposed into two subproblems, i.e., access point (AP) selection and subcarrier allocation, which can be solved jointly by our proposed distributed RL-based approach with a heuristic iteration algorithm. The proposed algorithm reduces complexity since each user needs to consider only its own selection of AP without knowing full global information. Simulation results show that our algorithm can achieve near-optimal performance while reducing computational complexity significantly. Compared with other algorithms that optimize only one of the two subproblems, the proposed algorithm can serve more users with much less power consumption and content delivery latency.
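The idea that each user selects its AP using only its own channel information plus observable AP loads can be sketched as best-response dynamics. The gain-per-load utility below is an illustrative stand-in for the paper's actual objective:

```python
def best_response_ap_selection(gains, max_rounds=20):
    """Each user repeatedly switches to the AP with the best gain-per-load ratio,
    using only its own channel gains and the current (observable) AP loads."""
    n_users, n_aps = len(gains), len(gains[0])
    choice = [0] * n_users                    # everyone starts on AP 0
    for _ in range(max_rounds):
        changed = False
        load = [choice.count(ap) for ap in range(n_aps)]
        for u in range(n_users):
            def utility(ap):
                extra = 0 if choice[u] == ap else 1  # joining adds one to that AP's load
                return gains[u][ap] / (load[ap] + extra)
            best = max(range(n_aps), key=utility)
            if best != choice[u]:
                load[choice[u]] -= 1
                load[best] += 1
                choice[u] = best
                changed = True
        if not changed:   # no user wants to deviate: a stable matching
            break
    return choice

# Two users with strong gains on AP 0, two with strong gains on AP 1.
gains = [[3.0, 1.0], [2.5, 1.2], [0.8, 2.0], [0.5, 2.2]]
assignment = best_response_ap_selection(gains)
```

The loop terminates at a configuration where no single user can improve its own utility by switching, mirroring how a distributed per-user policy avoids needing full global information.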
To solve the problem of many missing valid triples in knowledge graphs (KGs), a novel model based on a convolutional neural network (CNN) called ConvKG is proposed, which employs a joint learning strategy for knowledge graph completion (KGC). Related research has shown the superiority of CNNs in extracting semantic features of triple embeddings. However, these studies use only one single-shaped filter and fail to extract semantic features of different granularity. To solve this problem, ConvKG exploits multi-shaped filters to co-convolute on the triple embeddings, jointly learning semantic features of different granularity. Differently shaped filters cover different sizes on the triple embeddings and capture pairwise interactions of different granularity among triple elements. Experimental results confirm the strength of joint learning: compared with state-of-the-art CNN-based KGC models, ConvKG achieves better mean rank (MR) and Hits@10 metrics on the WN18RR dataset, and a better MR on the FB15k-237 dataset.
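The multi-shaped-filter idea can be sketched as convolutions of different heights sliding over the stacked (head, relation, tail) embedding matrix. The random filters and scalar max-pooling below are simplified stand-ins for learned parameters and full feature maps:

```python
import numpy as np

def multi_filter_features(triple_matrix, widths=(1, 2, 3), seed=0):
    """Convolve a (3, d) triple embedding [head; relation; tail] with filters of
    several heights and max-pool each feature map, mimicking multi-granularity
    interaction capture. Filters here are random stand-ins for learned weights."""
    rng = np.random.default_rng(seed)
    d = triple_matrix.shape[1]
    feats = []
    for w in widths:
        filt = rng.normal(size=(w, d))
        # Slide the height-w filter over the 3 rows of the triple matrix:
        # w=1 sees single elements, w=2 sees pairs (h,r) and (r,t), w=3 sees all three.
        maps = [float((triple_matrix[i:i + w] * filt).sum()) for i in range(3 - w + 1)]
        feats.append(max(maps))  # max-pool one scalar per filter shape
    return feats

d = 8
triple = np.vstack([np.ones((1, d)), 0.5 * np.ones((1, d)), -np.ones((1, d))])
features = multi_filter_features(triple)
```

The concatenated per-shape features would then feed a scoring layer; a width-2 filter is what captures the pairwise head-relation and relation-tail interactions the abstract refers to.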
Intrinsic decomposition, the process of decomposing an image into reflectance and shading, is widely used in virtual and augmented reality tasks. Reflectance and shading often exhibit large gradients at object edges, and the intrinsic properties on the same object tend to be similar. This spatial coherence is closely related to semantic consistency, because objects within the same semantic category often exhibit similar intrinsic properties. Therefore, incorporating semantic segmentation into a deep intrinsic decomposition framework helps the network distinguish between different object instances and understand high-level scene structures. To this end, we design an intrinsic decomposition network jointly trained with a dedicated semantic segmentation module, allowing semantic cues to enhance the decomposition of reflectance and shading. The semantic module provides guidance during training but is removed during inference, improving performance without increasing the inference cost. Additionally, to capture the global contextual dependencies critical for intrinsic decomposition, we adopt a Transformer-based backbone. The proposed backbone enables the model to associate distant regions with similar material properties, thereby maintaining consistency in reflectance and learning smooth illumination patterns across a scene. A convolutional decoder is also designed to output predictions with improved details. Experiments demonstrate that our approach achieves state-of-the-art performance in quantitative evaluations on the Intrinsic Images in the Wild (IIW) and Shading Annotations in the Wild (SAW) datasets.
This study introduces a novel approach to addressing the challenges of high-dimensional variables and strong nonlinearity in reservoir production and layer configuration optimization. For the first time, relational machine learning models are applied in reservoir development optimization. Traditional regression-based models often struggle in complex scenarios, but the proposed relational and regression-based composite differential evolution (RRCODE) method combines a Gaussian naive Bayes relational model with a radial basis function network regression model. This integration effectively captures complex relationships in the optimization process, improving both accuracy and convergence speed. Experimental tests on a multi-layer multi-channel reservoir model, the Egg reservoir model, and a real-field reservoir model (the S reservoir) demonstrate that RRCODE significantly reduces water injection and production volumes while increasing economic returns and cumulative oil recovery. Moreover, the surrogate models employed in RRCODE exhibit lightweight characteristics with low computational overhead. These results highlight RRCODE's superior performance in the integrated optimization of reservoir production and layer configurations, offering more efficient and economically viable solutions for oilfield development.
Machine learning methods are widely used to evaluate the risk of small- and medium-sized enterprises (SMEs) in supply chain finance (SCF). However, there may be problems with data scarcity, feature redundancy, and poor predictive performance. Additionally, data collected over a long time span may cause differences in the data distribution, and classic supervised learning methods may exhibit poor predictive abilities under such conditions. To address these issues, a domain-adaptation-based multistage ensemble learning paradigm (DAMEL) is proposed in this study to evaluate the credit risk of SMEs in SCF. In this methodology, a bagging resampling algorithm is first used to generate a dataset to address data scarcity. Subsequently, a random subspace is applied to integrate various features and reduce feature redundancy. Additionally, a domain adaptation approach is utilized to reduce the cross-domain data distribution discrepancy. Finally, dynamic model selection is developed in the fourth stage to improve the generalization ability of the model. A real-world credit dataset from the Chinese securities market was used to validate the effectiveness and feasibility of the multistage ensemble learning paradigm. The experimental results demonstrate that the proposed domain-adaptation-based multistage ensemble learning paradigm is superior to principal component analysis, joint distribution adaptation, random forest, and other ensemble and transfer learning methods. Moreover, dynamic model selection can improve the model's generalization performance and the prediction precision for minority samples. This can be considered a promising solution for financial institutions evaluating the credit risk of SMEs in SCF.
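The first two stages, bagging resampling and random subspaces, can be sketched as follows; the bag count, subspace fraction, and toy feature matrix are illustrative choices, not the paper's settings:

```python
import random

def bagging_subspace_datasets(X, y, n_bags=3, subspace_frac=0.5, seed=7):
    """Stage 1-2 sketch: bootstrap-resample rows to ease data scarcity, then draw a
    random feature subspace per bag to reduce feature redundancy."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    k = max(1, int(d * subspace_frac))
    bags = []
    for _ in range(n_bags):
        rows = [rng.randrange(n) for _ in range(n)]   # bootstrap sample, with replacement
        feats = sorted(rng.sample(range(d), k))       # random subspace of features
        Xb = [[X[r][f] for f in feats] for r in rows]
        yb = [y[r] for r in rows]
        bags.append((Xb, yb, feats))
    return bags

X = [[i, i * 2, i % 3, 1 - i] for i in range(10)]  # 10 toy SME records, 4 features
y = [i % 2 for i in range(10)]
bags = bagging_subspace_datasets(X, y)
```

Each (Xb, yb, feats) bag would then train one base learner; the later domain-adaptation and dynamic-selection stages operate on that ensemble and are not shown here.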
Dear Editor, This letter investigates the system development of a multi-joint rehabilitation exoskeleton and highlights the subject-adaptive control factors for efficient motor learning. In order to enable the natural mobility of the human upper extremity, we design the shoulder mechanism by arranging three rotational joints at acute angles, and adopt a serial chain structure for the fully constructed system. After the kinematics and dynamics of CASIA-EXO are modelled, a patient-in-the-loop control strategy is proposed for rehabilitation training, consisting of intention-based trajectory planning and performance-based intervention adaptation. Finally, we conduct experiments to validate the efficacy of the control system and further demonstrate the potential of CASIA-EXO in neurorehabilitation. Introduction: Neurological diseases are the leading cause of nontraumatic disability worldwide, and stroke is one of the most commonly encountered neurological injuries, suffered by over 15 million individuals each year; about 70%−80% of these individuals have varying degrees of functional impairment [1]. In order to facilitate motor relearning in the central nervous system, post-stroke patients need to undergo long-term rehabilitation training to promote neural plasticity, thereby enhancing the recovery of motor function in activities of daily living (ADLs). Evidence from clinical studies suggests that robot-assisted rehabilitation integrating neuroscience, biomechanics, and automation control can improve patients' motivation for active participation while improving treatment efficiency, and is therefore expected to become one of the most promising means for neurorehabilitation [2].
Epilepsy is a central nervous system disorder in which brain activity becomes abnormal. Electroencephalogram (EEG) signals, as recordings of brain activity, have been widely used for epilepsy recognition. To study epileptic EEG signals and develop artificial intelligence (AI)-assisted recognition, a multi-view transfer learning (MVTL-LSR) algorithm based on least squares regression is proposed in this study. Compared with most existing multi-view transfer learning algorithms, MVTL-LSR has two merits: (1) traditional transfer learning algorithms leverage knowledge from different sources, which poses a significant risk to data privacy; we therefore develop a knowledge transfer mechanism that can protect the security of source-domain data while guaranteeing performance; (2) when utilizing multi-view data, we embed view weighting and manifold regularization into the transfer framework to measure the views' strengths and weaknesses and improve generalization ability. In the experimental studies, 12 different simulated multi-view and transfer scenarios are constructed from epileptic EEG signals licensed and provided by the University of Bonn, Germany. Extensive experimental results show that MVTL-LSR outperforms the baselines. The source code will be available at https://github.com/didid5/MVTL-LSR.
The emotion cause extraction (ECE) task, which aims at extracting the potential trigger events of certain emotions, has attracted extensive attention recently. However, current work neglects implicit emotion expressed without any explicit emotional keywords, which appears more frequently in application scenarios. The lack of explicit emotion information makes it extremely hard to extract emotion causes using only the local context. Moreover, an entire event usually spans multiple clauses, while existing work merely extracts cause events at the clause level and cannot effectively capture complete cause event information. To address these issues, events are first redefined at the tuple level, and a span-based tuple-level algorithm is proposed to extract events from different clauses. Based on it, a corpus for implicit emotion cause extraction is constructed. The authors propose a knowledge-enriched joint-learning model of implicit emotion recognition and implicit emotion cause extraction tasks (KJ-IECE), which leverages commonsense knowledge from ConceptNet and NRC_VAD to better capture the connections between emotions and corresponding cause events. Experiments on both implicit and explicit emotion cause extraction datasets demonstrate the effectiveness of the proposed model.
Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a Purity of 0.8158 and an Accuracy of 0.6860, highlighting its outstanding robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
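Two ingredients of such an objective can be illustrated in isolation: the picture fuzzy constraint μ + η + ν ≤ 1 (membership, neutrality, non-membership, with the remainder being the refusal degree) and a pairwise must-link/cannot-link penalty. Both functions below are simplified assumptions about how these terms might enter a clustering objective, not the SS-MPFC formulation itself:

```python
def picture_fuzzy_normalize(mu, eta, nu):
    """Scale (membership, neutrality, non-membership) so that mu + eta + nu <= 1,
    the defining constraint of a picture fuzzy set; the remainder is the refusal degree."""
    s = mu + eta + nu
    if s > 1.0:
        mu, eta, nu = mu / s, eta / s, nu / s
    return mu, eta, nu, 1.0 - (mu + eta + nu)

def constraint_penalty(labels, must_link, cannot_link, weight=1.0):
    """Pairwise semi-supervised penalty: each violated must-link or cannot-link
    pair adds a fixed cost to the clustering objective."""
    cost = 0.0
    for i, j in must_link:
        if labels[i] != labels[j]:
            cost += weight
    for i, j in cannot_link:
        if labels[i] == labels[j]:
            cost += weight
    return cost

mu, eta, nu, refusal = picture_fuzzy_normalize(0.6, 0.3, 0.3)
# Four pixels in two clusters; the must-link pair (1, 2) is violated.
penalty = constraint_penalty([0, 0, 1, 1], must_link=[(0, 1), (1, 2)], cannot_link=[(0, 3)])
```

In an iterative clustering scheme, the penalty term would be added to the fuzzy objective so that label updates are pushed toward satisfying the supervision pairs.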
Funding: Supported by the National Key Research and Development Program of China (No. 2022YFB4300902).
Funding: Supported by the research on key technologies for monitoring and identifying drug abuse of anesthetic drugs and psychotropic drugs, and intervention for addiction (No. 2023YFC3304200); the program of a study on the diagnosis of addiction to synthetic cannabinoids and methods of assessing the risk of abuse (No. 2022YFC3300905); the program of ab initio design and generation of AI models for small molecule ligands based on target structures (No. 2022PE0AC03); and ZHIJIANG LAB.
Abstract: The accurate prediction of drug absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties is a crucial step in early drug development for reducing failure risk. Current deep learning approaches face challenges with data sparsity and information loss due to single-molecule representation limitations and isolated predictive tasks. This research proposes molecular property prediction with parallel-view and collaborative learning (MolP-PC), a multi-view fusion and multi-task deep learning framework that integrates 1D molecular fingerprints (MFs), 2D molecular graphs, and 3D geometric representations, incorporating an attention-gated fusion mechanism and a multi-task adaptive learning strategy for precise ADMET property prediction. Experimental results demonstrate that MolP-PC achieves optimal performance on 27 of 54 tasks, with its multi-task learning (MTL) mechanism significantly enhancing predictive performance on small-scale datasets and surpassing single-task models on 41 of 54 tasks. Additional ablation studies and interpretability analyses confirm the significance of multi-view fusion in capturing multi-dimensional molecular information and enhancing model generalization. A case study on the anticancer compound Oroxylin A demonstrates MolP-PC's effective generalization in predicting key pharmacokinetic parameters such as half-life (T0.5) and clearance (CL), indicating its practical utility in drug modeling. However, the model tends to underestimate volume of distribution (VD), indicating room for improvement in analyzing compounds with high tissue distribution. This study presents an efficient and interpretable approach for ADMET property prediction, establishing a novel framework for molecular optimization and risk assessment in drug development.
Funding: Funding from the National Natural Science Foundation of China (Grant No. 42277175); the pilot project of cooperation between the Ministry of Natural Resources and Hunan Province, "Research and demonstration of key technologies for comprehensive remote sensing identification of geological hazards in typical regions of Hunan Province" (Grant No. 2023ZRBSHZ056); and the National Key Research and Development Program of China 2023 Key Special Project (Grant No. 2023YFC2907400).
Abstract: The joint roughness coefficient (JRC) is the most commonly used parameter for quantifying the surface roughness of rock discontinuities in practice. A system composed of multiple roughness statistical parameters for measuring JRC is a nonlinear system with substantial overlapping information. In this paper, a dataset of eight roughness statistical parameters covering 112 digital joints is established. The principal component analysis method is then introduced to extract the significant information, which resolves the information-overlap problem in roughness characterization. Based on the two extracted principal components, the white shark optimizer algorithm is introduced to optimize an extreme gradient boosting model, and a new machine learning (ML) prediction model is established. The prediction accuracy of the new model and of 17 other models is measured using statistical metrics. The results show that the predictions of the new model are more consistent with the real JRC values, with higher recognition accuracy and generalization ability.
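The PCA step above, collapsing correlated roughness parameters into a few informative components, can be illustrated with a minimal power-iteration sketch that extracts the leading principal component. The two-parameter synthetic data below is a toy stand-in for the paper's eight roughness statistics:

```python
import math, random

def first_principal_component(rows, iters=200):
    """Extract the leading principal component of a small dataset by
    power iteration on its covariance matrix (pure-Python sketch)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    # sample covariance matrix
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# two strongly correlated synthetic "roughness parameters": y ~= 2x
random.seed(0)
data = [[t, 2 * t + random.uniform(-0.1, 0.1)]
        for t in [i / 10 for i in range(20)]]
pc1 = first_principal_component(data)  # direction close to (1, 2)/sqrt(5)
```

Because the two columns are nearly collinear, one component captures almost all of the variance, which is exactly the redundancy-removal effect the abstract relies on before regression.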
Abstract: In the field of optoelectronics, certain types of data may be difficult to annotate accurately, such as high-resolution optoelectronic imaging or imaging in certain special spectral ranges. Weakly supervised learning can provide a more reliable approach in these situations. Current popular approaches mainly adopt classification-based class activation maps (CAM) as initial pseudo-labels to solve the task.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62101087); the China Postdoctoral Science Foundation (Grant No. 2021MD703942); the Chongqing Postdoctoral Research Project Special Funding, China (Grant No. 2021XM2016); the Science Foundation of Chongqing Municipal Commission of Education, China (Grant No. KJQN202100642); and the Chongqing Natural Science Foundation, China (Grant No. cstc2021jcyj-msxmX0834).
Abstract: Drug repurposing offers a promising alternative to traditional drug development, significantly reducing costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity-matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and the integration of multi-omics data. We aim to inspire further innovations in multi-source data integration and support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42030708, 42375138, 42030608, 42105128, and 42075079); the Opening Foundation of the Key Laboratory of Atmospheric Sounding, China Meteorological Administration (CMA), and the CMA Research Center on Meteorological Observation Engineering Technology (Grant No. U2021Z03); and the Opening Foundation of the Key Laboratory of Atmospheric Chemistry, CMA (Grant No. 2022B02).
Abstract: Aerosol optical depth (AOD) and fine particulate matter with a diameter of 2.5 μm or less (PM2.5) play crucial roles in air quality, human health, and climate change. However, the complex AOD-PM2.5 correlation and the limitations of existing algorithms pose a significant challenge to the accurate joint retrieval of these two parameters at the same location. To this end, a multi-task learning (MTL) model that enables the joint retrieval of PM2.5 concentration and AOD is proposed and applied to top-of-atmosphere reflectance data gathered by the Fengyun-4A Advanced Geosynchronous Radiation Imager (FY-4A AGRI), and compared with two single-task learning models, namely Random Forest (RF) and a Deep Neural Network (DNN). Specifically, MTL achieves a coefficient of determination (R^2) of 0.88 and a root-mean-square error (RMSE) of 0.10 in AOD retrieval. In comparison to RF, R^2 increases by 0.04, RMSE decreases by 0.02, and the percentage of retrieval results falling within the expected error range (Within-EE) rises by 5.55%. The R^2 and RMSE of PM2.5 retrieval by MTL are 0.84 and 13.76 μg m^-3, respectively. Compared with RF, R^2 increases by 0.06, RMSE decreases by 4.55 μg m^-3, and Within-EE increases by 7.28%. Additionally, compared to DNN, MTL shows an increase of 0.01 in R^2 and a decrease of 0.02 in RMSE for AOD retrieval, with a corresponding increase of 2.89% in Within-EE. For PM2.5 retrieval, MTL exhibits an increase of 0.05 in R^2, a decrease of 1.76 μg m^-3 in RMSE, and an increase of 6.83% in Within-EE. The evaluation suggests that MTL can provide simultaneously improved AOD and PM2.5 retrievals, demonstrating a significant advantage in efficiently capturing the spatial distribution of PM2.5 concentration and AOD.
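The comparison above rests on two standard evaluation metrics, R^2 and RMSE. Their textbook definitions can be computed directly (the numbers below are illustrative toy values, not FY-4A retrievals):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between truth and prediction."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# toy retrieval-vs-truth example
truth = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 19.0, 29.0, 42.0]
# r_squared(truth, pred) -> 0.98; rmse(truth, pred) -> sqrt(2.5)
```

An R^2 closer to 1 and a smaller RMSE both indicate a tighter fit, which is why the abstract reports improvements in both directions simultaneously.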
Funding: Funded by the National Natural Science Foundation of China (Nos. 42325403 and 42122029); the Deep Earth Probe and Mineral Resources Exploration National Science and Technology Major Project (No. 2024ZD1004201); the Young Expert of Taishan Scholars Project (No. tsqn202408095); the Independent Innovation Research Project (Science and Engineering) Youth Fund of China University of Petroleum (No. 24CX06012A); the Qingdao Postdoctoral Funding Program (No. QDBSH20240202023); and the CNPC project "Investigations on fundamental experiments and advanced theoretical methods in geophysical prospecting applications" (2022DQ0604-02).
Abstract: Seismic AVO/AVA (amplitude-versus-offset or amplitude-versus-angle) analysis, based on prestack seismic angle gathers and the Zoeppritz equation, has been widely used in seismic exploration. However, multi-parameter AVO/AVA inversion using only PP-wave angle gathers is often highly ill-posed, leading to instability and inaccuracy in the inverted elastic parameters (e.g., P- and S-wave velocities and bulk density). Seismic AVO/AVA analysis that simultaneously uses both PP-wave (pressure wave down, pressure wave up) and PS-wave (pressure wave down, converted shear wave up) angle gathers has proven effective for reducing the reservoir-interpretation ambiguity associated with using the single PP-wave mode. To avoid complex PS-wave processing and the risks associated with PP and PS waveform alignment, we developed a method that predicts PS-wave angle gathers from PP-wave angle gathers using a deep learning algorithm, specifically the cGAN architecture. Our deep learning model is trained on synthetic data, demonstrating a strong fit between the predicted and real PS-waves on a test dataset. Subsequently, the trained model is applied to actual field PP-waves, maintaining robust performance. In the field data test, the predicted PS-wave angle gather at the well location closely aligns with the synthetic PS-wave angle gather generated using reference well logs. Finally, the P- and S-wave velocities estimated from the joint PP and PS AVA inversion, based on field PP-waves and the predicted PS-waves, display a superior model fit compared to those obtained solely from the PP-wave AVA inversion using field PP-waves. Our contribution lies in first carrying out the joint PP and PS inversion using predicted PS-waves rather than field PS-waves, which removes the limitation of having to acquire PS-wave angle gathers.
Abstract: Fiber-to-the-Room (FTTR) networks with multi-access-point (AP) coordination face significant challenges in implementing Joint Transmission (JT), particularly the high overhead of Channel State Information (CSI) acquisition. While the centralized wireless access network (C-WAN) architecture inherently provides high-precision synchronization through fiber-based clock distribution and centralized scheduling, efficient JT still requires accurate CSI with low signaling cost. In this paper, we propose a deep learning-based hybrid model that synergistically integrates temporal prediction and spatial reconstruction to exploit spatiotemporal correlations in indoor channels. By leveraging the centralized data and computational capability of the C-WAN architecture, the model reduces the sounding frequency and the number of antennas required per sounding instance. Experimental results on a real-world synchronized channel dataset show that the proposed method lowers over-the-air resource consumption while maintaining JT performance close to that achieved with ideal CSI, offering a practical low-overhead solution for high-performance FTTR systems.
Abstract: To address the shortcomings of traditional network anomaly diagnosis algorithms in unsupervised environments, such as low accuracy in locating anomalies and classifying anomalous data, a wireless network anomaly diagnosis method based on an improved Q-learning algorithm is designed. First, the wireless network's data stream is collected in ADU (Asynchronous Data Unit) units and packet features are extracted. Then, a Q-learning model is constructed to explore the balance point between state values and reward values, and an SA (Simulated Annealing) algorithm is used to accurately identify the next state from a global perspective. Finally, the joint distribution probability of the training samples is determined to improve the approximation performance of the output values and strike a balance between exploration and cost. Test results show that the improved Q-learning algorithm achieves an average network anomaly localization accuracy of 99.4%, and it also outperforms three traditional network anomaly diagnosis methods in classification precision and classification efficiency across different types of network anomalies.
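The core loop the abstract describes, a Q-learning update with an annealed exploration schedule, can be sketched on a toy two-state environment. This is an illustrative stand-in (epsilon-greedy with geometric decay approximating the annealing idea), not the paper's actual SA-guided state identification or its diagnosis environment:

```python
import random

def q_learning_demo(episodes=300, alpha=0.5, gamma=0.9):
    """Tabular Q-learning on a toy 2-state chain: action 1 from state 0
    moves to state 1 with reward 1; everything else yields reward 0 and
    returns to state 0. Exploration decays each episode, loosely
    mimicking a simulated-annealing temperature schedule."""
    random.seed(1)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    eps = 1.0
    for _ in range(episodes):
        s = 0
        for _ in range(5):
            # annealed epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda x: Q[(s, x)])
            s2 = 1 if (s == 0 and a == 1) else 0
            r = 1.0 if s2 == 1 else 0.0
            # standard Q-learning bootstrap update
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
                                  - Q[(s, a)])
            s = s2
        eps *= 0.99  # "cool" the exploration temperature
    return Q

Q = q_learning_demo()  # Q[(0, 1)] should dominate Q[(0, 0)]
```

At the fixed point, Q(0, 1) = 1 / (1 - gamma^2) ≈ 5.26, strictly above Q(0, 0) ≈ gamma · Q(0, 1), so the learned policy correctly prefers the rewarding action.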
Funding: Supported by the National Key Research and Development Program of China (2022YFC3080100); the National Natural Science Foundation of China (Nos. 52104090, 52208328, and 12272353); the Open Fund of the Anhui Province Key Laboratory of Building Structure and Underground Engineering, Anhui Jianzhu University (No. KLBSUE-2022-06); and the Open Research Fund of the Key Laboratory of Construction and Safety of Water Engineering of the Ministry of Water Resources, China Institute of Water Resources and Hydropower Research (Grant No. IWHR-ENGI-202302).
Abstract: In geotechnical and tunneling engineering, accurately determining the mechanical properties of jointed rock is of great significance for project safety assessment. Peak shear strength (PSS), the paramount mechanical property of joints, has been a focal point of research. Current PSS prediction models for jointed rock have several limitations: (i) no model comprehensively covers the seven influencing factors of joint sampling interval, joint surface roughness, normal stress, basic friction angle, uniaxial tensile strength, uniaxial compressive strength, and joint size for coupled joints; (ii) the datasets used to train the models are relatively limited; and (iii) it remains controversial whether compressive or tensile strength should serve as the strength term among the influencing factors. To overcome these limitations, we developed four machine learning models covering these seven influencing factors: three based on Support Vector Regression (SVR) with different kernel functions (linear, polynomial, and Radial Basis Function (RBF)) and one using deep learning (DL). Based on these seven factors, we compiled a dataset of 493 published direct shear test results for training and validating the four models. We compared the prediction performance of these four models with Tang's and Tatone's models. The prediction errors of Tang's and Tatone's models are 21.8% and 17.7%, respectively, while SVR_linear is at 16.6%, SVR_poly at 14.0%, and SVR_RBF at 12.1%. DL outperforms the two existing models with only an 8.5% error. Additionally, we performed shear tests on granite joints to validate the predictive capability of the DL-based model. With the DL approach, the results suggest that uniaxial tensile strength is recommended as the material strength term in the PSS model for more reliable outcomes.
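The three SVR variants above differ only in their kernel function. The standard definitions of the linear, polynomial, and RBF kernels are shown below; the hyperparameter values (degree, coef0, gamma) are illustrative defaults, not the study's tuned settings:

```python
import math

def linear_kernel(x, z):
    """Linear kernel: plain dot product."""
    return sum(a * b for a, b in zip(x, z))

def poly_kernel(x, z, degree=3, coef0=1.0):
    """Polynomial kernel: (x . z + coef0)^degree."""
    return (linear_kernel(x, z) + coef0) ** degree

def rbf_kernel(x, z, gamma=0.5):
    """Radial Basis Function kernel: exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)
```

The choice matters because the kernel fixes the shape of the function class: linear kernels can only fit linear PSS trends, while polynomial and RBF kernels capture the nonlinear interactions among the seven factors, which is consistent with the decreasing error from SVR_linear to SVR_RBF reported above.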
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61671074, and in part by Project No. A01B02C01202015D0.
Abstract: A distributed reinforcement learning (RL) based resource management framework is proposed for a mobile edge computing (MEC) system with both latency-sensitive and latency-insensitive services. We investigate the joint optimization of computing and radio resources to achieve efficient on-demand matching of multi-dimensional resources to diverse user requirements. A multi-objective integer programming problem is formulated as two subproblems, i.e., access point (AP) selection and subcarrier allocation, which are solved jointly by our proposed distributed RL-based approach with a heuristic iteration algorithm. The proposed algorithm reduces complexity because each user needs to consider only its own AP selection without knowing full global information. Simulation results show that our algorithm achieves near-optimal performance while significantly reducing computational complexity. Compared with algorithms that optimize only one of the two subproblems, the proposed algorithm serves more users with much lower power consumption and content delivery latency.
Funding: Supported by the National Natural Science Foundation of China (No. 61876144).
Abstract: To address the problem of the many valid triples missing from knowledge graphs (KGs), a novel convolutional neural network (CNN) based model called ConvKG is proposed, which employs a joint learning strategy for knowledge graph completion (KGC). Related work has shown the superiority of CNNs in extracting semantic features from triple embeddings. However, these studies use only a single-shaped filter and fail to extract semantic features of different granularity. To solve this problem, ConvKG exploits multi-shaped filters that co-convolve on the triple embeddings, jointly learning semantic features of different granularity. Differently shaped filters cover different extents of the triple embeddings and capture pairwise interactions of different granularity among triple elements. Experimental results confirm the strength of joint learning: compared with state-of-the-art CNN-based KGC models, ConvKG achieves better mean rank (MR) and Hits@10 metrics on the WN18RR dataset and better MR on the FB15k-237 dataset.
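The multi-shaped-filter idea can be made concrete with a tiny valid 2-D convolution over a triple-embedding matrix (head, relation, and tail embeddings stacked as rows). The matrix, filter shapes, and weights below are illustrative, not ConvKG's learned parameters:

```python
def conv2d_valid(M, F):
    """Valid 2-D cross-correlation of matrix M with filter F
    (pure-Python sketch, no padding or stride)."""
    mr, mc = len(M), len(M[0])
    fr, fc = len(F), len(F[0])
    out = []
    for i in range(mr - fr + 1):
        row = []
        for j in range(mc - fc + 1):
            row.append(sum(M[i + a][j + b] * F[a][b]
                           for a in range(fr) for b in range(fc)))
        out.append(row)
    return out

# a toy triple embedding: 3 rows (head, relation, tail), 4-dim vectors
triple = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, 10, 11, 12]]
# two differently shaped filters see different interaction granularity
f_row = [[1, 1]]          # 1x2: adjacent dimensions within one element
f_col = [[1], [1], [1]]   # 3x1: head-relation-tail interaction per dim
out_row = conv2d_valid(triple, f_row)  # 3x3 feature map
out_col = conv2d_valid(triple, f_col)  # 1x4 feature map
```

The differing output shapes (3x3 vs. 1x4) show how each filter shape summarizes a different slice of the triple's structure, which is the granularity argument the abstract makes.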
Funding: Supported by the Science and Technology Innovation 2030 Major Project of "New Generation Artificial Intelligence" (No. 2022ZD0115901) and the National Natural Science Foundation of China (No. 62332003).
Abstract: Intrinsic decomposition, the process of decomposing an image into reflectance and shading, is widely used in virtual and augmented reality tasks. Reflectance and shading often exhibit large gradients at object edges, and the intrinsic properties of the same object tend to be similar. This spatial coherence is closely related to semantic consistency, because objects within the same semantic category often exhibit similar intrinsic properties. Therefore, incorporating semantic segmentation into a deep intrinsic decomposition framework helps the network distinguish between different object instances and understand high-level scene structure. To this end, we design an intrinsic decomposition network jointly trained with a dedicated semantic segmentation module, allowing semantic cues to enhance the decomposition of reflectance and shading. The semantic module provides guidance during training but is removed during inference, improving performance without increasing inference cost. Additionally, to capture the global contextual dependencies critical for intrinsic decomposition, we adopt a Transformer-based backbone. The backbone enables the model to associate distant regions with similar material properties, thereby maintaining consistency in reflectance and learning smooth illumination patterns across a scene. A convolutional decoder is also designed to output predictions with improved detail. Experiments demonstrate that our approach achieves state-of-the-art performance in quantitative evaluations on the Intrinsic Images in the Wild (IIW) and Shading Annotations in the Wild (SAW) datasets.
Funding: Supported by the National Natural Science Foundation of China under Grants 52325402, 52274057, and 52074340; the National Key R&D Program of China under Grant 2023YFB4104200; the Major Scientific and Technological Projects of CNOOC under Grant CCL2022RCPS0397RSN; the 111 Project under Grant B08028; and the China Scholarship Council under Grant 202306450108.
Abstract: This study introduces a novel approach to addressing the challenges of high-dimensional variables and strong nonlinearity in reservoir production and layer configuration optimization. For the first time, relational machine learning models are applied to reservoir development optimization. Traditional regression-based models often struggle in complex scenarios, but the proposed relational and regression-based composite differential evolution (RRCODE) method combines a Gaussian naive Bayes relational model with a radial basis function network regression model. This integration effectively captures complex relationships in the optimization process, improving both accuracy and convergence speed. Experimental tests on a multi-layer multi-channel reservoir model, the Egg reservoir model, and a real-field reservoir model (the S reservoir) demonstrate that RRCODE significantly reduces water injection and production volumes while increasing economic returns and cumulative oil recovery. Moreover, the surrogate models employed in RRCODE are lightweight, with low computational overhead. These results highlight RRCODE's superior performance in the integrated optimization of reservoir production and layer configurations, offering more efficient and economically viable solutions for oilfield development.
Funding: Supported by grants from the National Natural Science Foundation of China (No. 72361014); the Technical Field Fund of the Basic Research Strengthening Program (Project No. 2021-JCJQ-JJ-0003); the Major Program of the National Social Science Foundation of China (No. 19ZDA103); and the Science and Technology Project of the Jiangxi Provincial Department of Education (No. GJJ2200526).
Abstract: Machine learning methods are widely used to evaluate the risk of small- and medium-sized enterprises (SMEs) in supply chain finance (SCF). However, they may suffer from data scarcity, feature redundancy, and poor predictive performance. Additionally, data collected over a long time span may differ in distribution, and classic supervised learning methods may exhibit poor predictive ability under such conditions. To address these issues, a domain-adaptation-based multistage ensemble learning paradigm (DAMEL) is proposed in this study to evaluate the credit risk of SMEs in SCF. In this methodology, a bagging resampling algorithm is first used to generate a dataset that addresses data scarcity. Subsequently, a random subspace is applied to integrate various features and reduce feature redundancy. A domain adaptation approach is then utilized to reduce the cross-domain data distribution discrepancy. Finally, dynamic model selection is developed in the fourth stage to improve the generalization ability of the model. A real-world credit dataset from the Chinese securities market was used to validate the effectiveness and feasibility of the multistage ensemble learning paradigm. The experimental results demonstrate that the proposed paradigm is superior to principal component analysis, joint distribution adaptation, random forest, and other ensemble and transfer learning methods. Moreover, dynamic model selection can improve generalization performance and the prediction precision of minority samples. This can be considered a promising solution for financial institutions evaluating the credit risk of SMEs in SCF.
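The first two DAMEL stages, bootstrap resampling of the samples (bagging) and a random subspace over the features, amount to drawing, for each base model, a row sample with replacement and a feature subset without replacement. A minimal sketch (model count, subspace size, and the toy data are illustrative, not the paper's settings):

```python
import random

def bagging_subspace_draws(X, feature_count, n_models=5,
                           subspace_size=3, seed=42):
    """For each of n_models base learners, draw a bootstrap sample of
    row indices (with replacement) and a random subspace of feature
    indices (without replacement)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_models):
        rows = [rng.randrange(len(X)) for _ in range(len(X))]    # bagging
        feats = rng.sample(range(feature_count), subspace_size)  # subspace
        draws.append((rows, feats))
    return draws

X = [[i + j for j in range(6)] for i in range(10)]  # 10 samples, 6 features
draws = bagging_subspace_draws(X, feature_count=6)
```

Each base learner would then be trained on `X[r][f]` for its own `(rows, feats)` pair; the diversity across pairs is what lets the ensemble counter both data scarcity and feature redundancy.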
Funding: Supported in part by the National Key Research and Development Program of China (2022YFC3601200); the National Natural Science Foundation of China (62203441, U21A20479); the Beijing Natural Science Foundation (L232005); and the Inner Mongolia Autonomous Region Science and Technology Plan (2023YFDZ0042).
Abstract: Dear Editor, This letter investigates the system development of a multi-joint rehabilitation exoskeleton and highlights the subject-adaptive control factors for efficient motor learning. To enable the natural mobility of the human upper extremity, we design the shoulder mechanism by arranging three rotational joints at acute angles and adopt a serial chain structure for the fully constructed system. After the kinematics and dynamics of CASIA-EXO are modelled, a patient-in-the-loop control strategy is proposed for rehabilitation training, consisting of intention-based trajectory planning and performance-based intervention adaptation. Finally, we conduct experiments to validate the efficacy of the control system and further demonstrate the potential of CASIA-EXO in neurorehabilitation. Introduction: Neurological diseases are the leading cause of nontraumatic disability worldwide, and stroke is one of the most commonly encountered neurological injuries; it is suffered by over 15 million individuals each year, and about 70%-80% of these individuals have varying degrees of functional impairment [1]. To facilitate motor relearning in the central nervous system, post-stroke patients need to undergo long-term rehabilitation training to promote neural plasticity, thereby enhancing the recovery of motor function in activities of daily living (ADLs). Evidence from clinical studies suggests that robot-assisted rehabilitation integrating neuroscience, biomechanics, and automation control can improve patients' motivation for active participation while improving treatment efficiency, and is therefore expected to become one of the most promising means of neurorehabilitation [2].
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 82072019); the Shenzhen Basic Research Program (JCYJ20210324130209023) of the Shenzhen Science and Technology Innovation Committee; the Shenzhen-Hong Kong-Macao S&T Program (Category C) (SGDX20201103095002019); the Natural Science Foundation of Jiangsu Province (No. BK20201441); the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research (SBGJ202103038 and SBGJ202102056); the Henan Province Key R&D and Promotion Project (Science and Technology Research) (222102310015); the Natural Science Foundation of Henan Province (222300420575); the Henan Province Science and Technology Research (222102310322); and the Jiangsu Students' Innovation and Entrepreneurship Training Program (202110304096Y).
Abstract: Epilepsy is a central nervous system disorder in which brain activity becomes abnormal. Electroencephalogram (EEG) signals, as recordings of brain activity, have been widely used for epilepsy recognition. To study epileptic EEG signals and develop artificial intelligence (AI)-assisted recognition, a multi-view transfer learning algorithm based on least squares regression (MVTL-LSR) is proposed in this study. Compared with most existing multi-view transfer learning algorithms, MVTL-LSR has two merits: (1) traditional transfer learning algorithms leverage knowledge from different sources, which poses a significant risk to data privacy; we therefore develop a knowledge transfer mechanism that protects the security of source-domain data while guaranteeing performance; (2) when utilizing multi-view data, we embed view weighting and manifold regularization into the transfer framework to measure the strengths and weaknesses of the views and improve generalization ability. In the experimental studies, 12 different simulated multi-view and transfer scenarios are constructed from epileptic EEG signals licensed and provided by the University of Bonn, Germany. Extensive experimental results show that MVTL-LSR outperforms the baselines. The source code will be available at https://github.com/didid5/MVTL-LSR.
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 61671064, 61732005; National Key Research & Development Program, Grant/Award Number: 2018YFC0831700.
Abstract: The emotion cause extraction (ECE) task, which aims at extracting the potential trigger events of certain emotions, has attracted extensive attention recently. However, current work neglects implicit emotion expressed without any explicit emotional keywords, which appears more frequently in application scenarios. The lack of explicit emotion information makes it extremely hard to extract emotion causes using only the local context. Moreover, an entire event usually spans multiple clauses, while existing work merely extracts cause events at the clause level and cannot effectively capture complete cause-event information. To address these issues, events are first redefined at the tuple level, and a span-based tuple-level algorithm is proposed to extract events from different clauses. Based on it, a corpus for implicit emotion cause extraction is constructed. The authors propose a knowledge-enriched joint-learning model for the implicit emotion recognition and implicit emotion cause extraction tasks (KJ-IECE), which leverages commonsense knowledge from ConceptNet and NRC_VAD to better capture the connections between emotions and their corresponding cause events. Experiments on both implicit and explicit emotion cause extraction datasets demonstrate the effectiveness of the proposed model.
Funding: Funded by the Research Project THTETN.05/24-25, Vietnam Academy of Science and Technology.
Abstract: Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a Purity of 0.8158 and an Accuracy of 0.6860, highlighting its robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
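The must-link and cannot-link constraints mentioned above have a simple semantics that is easy to make concrete: a must-link pair must share a cluster, a cannot-link pair must not. The checker below is a toy illustration of that semantics only; SS-MPFC embeds the constraints as penalty terms inside its fuzzy objective function rather than as a hard post-hoc check:

```python
def violates_constraints(labels, must_link, cannot_link):
    """Return True if a hard cluster assignment breaks any pairwise
    semi-supervised constraint. labels[i] is the cluster of sample i;
    must_link and cannot_link are lists of (i, j) index pairs."""
    for i, j in must_link:
        if labels[i] != labels[j]:
            return True   # must-link pair split across clusters
    for i, j in cannot_link:
        if labels[i] == labels[j]:
            return True   # cannot-link pair placed together
    return False

labels = [0, 0, 1, 1]
ok = violates_constraints(labels, must_link=[(0, 1)], cannot_link=[(0, 2)])
# ok -> False: samples 0 and 1 share cluster 0, and 0 and 2 differ
```

In the soft-clustering setting, the same pairs instead pull the picture fuzzy memberships of must-link samples together and push cannot-link memberships apart during optimization.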