In recent years, with the increase in the price of cryptocurrencies, the amount of malicious cryptomining software has increased significantly. With their powerful spreading ability, cryptomining malware can unknowingly occupy our resources, harm our interests, and damage more legitimate assets. However, although current traditional rule-based malware detection methods have a low false alarm rate, they have a relatively low detection rate when faced with a large volume of emerging malware. Even though common machine learning-based or deep learning-based methods have a certain ability to learn and detect unknown malware, the features they learn are isolated and independent and cannot be learned adaptively. Aiming at the above problems, we propose a deep learning model with multiple inputs of multi-modal features, which can simultaneously accept digital features and image features of different dimensions. The model in turn includes parallel learning of three sub-models and ensemble learning of another specific sub-model. The four sub-models can be processed in parallel on different devices and can be further applied to edge computing environments. The model can adaptively learn multi-modal features and output prediction results. The detection rate of our model is as high as 97.01% and the false alarm rate is only 0.63%. The experimental results prove the advantage and effectiveness of the proposed method.
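As a rough illustration of the multi-input idea described above, the sketch below feeds tabular ("digital") features and flattened image features through separate branches before a shared classification head. All layer sizes, weights, and the two-class output are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Two input branches with hypothetical sizes: 16 tabular features
# and an 8x8 single-channel image patch.
W_num = rng.normal(size=(16, 8)) * 0.1   # tabular branch
W_img = rng.normal(size=(64, 8)) * 0.1   # flattened-image branch
W_out = rng.normal(size=(16, 2)) * 0.1   # fusion head: benign vs. cryptominer

def forward(numeric, image):
    h_num = relu(numeric @ W_num)                        # (batch, 8)
    h_img = relu(image.reshape(len(image), -1) @ W_img)  # (batch, 8)
    fused = np.concatenate([h_num, h_img], axis=1)       # (batch, 16)
    logits = fused @ W_out
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)              # softmax probabilities

probs = forward(rng.normal(size=(4, 16)), rng.normal(size=(4, 8, 8)))
```

In the full model, each branch would be a sub-model trainable on a separate device, with the fusion head playing the role of the ensemble sub-model.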
Gait recognition is a key biometric for long-distance identification, yet its performance is severely degraded by real-world challenges such as varying clothing, carrying conditions, and changing viewpoints. While combining silhouette and skeleton data is a promising direction, effectively fusing these heterogeneous modalities and adaptively weighting their contributions in response to diverse conditions remains a central problem. This paper introduces GaitMAFF, a novel Multi-modal Adaptive Feature Fusion Network, to address this challenge. Our approach first transforms discrete skeleton joints into a dense SkeletonMap representation to align with silhouettes, then employs an attention-based module to dynamically learn the fusion weights between the two modalities. These fused features are processed by a powerful spatio-temporal backbone with Weighted Global-Local Feature Fusion Modules (WFFM) to learn a discriminative representation. Extensive experiments on the challenging CCPG and Gait3D datasets show that GaitMAFF achieves state-of-the-art performance, with an average Rank-1 accuracy of 84.6% on CCPG and 58.7% on Gait3D. These results demonstrate that our adaptive fusion strategy effectively integrates complementary multimodal information, significantly enhancing gait recognition robustness and accuracy in complex scenes and providing a practical solution for real-world applications.
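The adaptive weighting step can be pictured as a softmax over per-modality scores, so the fused feature leans on whichever modality currently scores higher. This is a minimal numpy sketch of the general idea only; GaitMAFF's actual attention module is learned end-to-end and operates on feature maps, not scalar scores.

```python
import numpy as np

def adaptive_fuse(sil, ske, score_w):
    """Fuse silhouette and skeleton features with softmax-normalized weights."""
    # Per-modality scalar scores from a toy linear scorer.
    scores = np.array([sil.mean() * score_w[0], ske.mean() * score_w[1]])
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                      # fusion weights, sum to 1
    return alpha[0] * sil + alpha[1] * ske, alpha

fused, alpha = adaptive_fuse(np.ones(4), np.zeros(4), score_w=(1.0, 1.0))
```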
Autism spectrum disorder (ASD) is a highly heterogeneous neurodevelopmental disorder. Early diagnosis and intervention are crucial for improving outcomes. Traditional single-modality diagnostic methods are subjective, limited, and struggle to reveal the underlying pathological mechanisms. In contrast, multimodal data analysis integrates behavioral, physiological, and neuroimaging information with advanced machine-learning and deep-learning algorithms to overcome these limitations. In this review, we surveyed the recent pediatric ASD literature, highlighting artificial intelligence-driven diagnostic techniques, multimodal data fusion strategies, and emerging trends in ASD assessment. We surveyed studies that integrated two or more modalities and summarized the fusion levels, learning paradigms, tasks, datasets, and metrics. Multimodal approaches outperform single-modality baselines in classification, severity estimation, and subtyping by leveraging complementary information and reducing modality-specific biases. They significantly enhance diagnostic accuracy and comprehensiveness, enabling early screening of ASD, symptom subtyping, severity assessment, and personalized interventions. Advances in multimodal fusion techniques have promoted progress in precision medicine for the treatment of ASD.
In multi-modal emotion recognition, excessive reliance on historical context often impedes the detection of emotional shifts, while modality heterogeneity and unimodal noise limit recognition performance. Existing methods struggle to dynamically adjust cross-modal complementary strength to optimize fusion quality and lack effective mechanisms to model the dynamic evolution of emotions. To address these issues, we propose a multi-level dynamic gating and emotion transfer framework for multi-modal emotion recognition. A dynamic gating mechanism is applied across unimodal encoding, cross-modal alignment, and emotion transfer modeling, substantially improving noise robustness and feature alignment. First, we construct a unimodal encoder based on gated recurrent units and feature-selection gating to suppress intra-modal noise and enhance contextual representation. Second, we design a gated-attention cross-modal encoder that dynamically calibrates the complementary contributions of the visual and audio modalities to the dominant textual features and eliminates redundant information. Finally, we introduce a gated enhanced emotion transfer module that explicitly models the temporal dependence of emotional evolution in dialogues via transfer gating and optimizes continuity modeling with a contrastive learning loss. Experimental results demonstrate that the proposed method outperforms state-of-the-art models on the public MELD and IEMOCAP datasets.
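The gating idea, a sigmoid gate computed from the concatenated modalities that scales how much of an auxiliary modality is added to the dominant text features, can be sketched as follows. Dimensions and weights are illustrative assumptions; in the paper the gates sit inside GRU-based encoders and attention modules.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_complement(text, aux, Wg, bg=0.0):
    """Add an auxiliary modality (audio or visual) to the dominant text
    features, scaled element-wise by a sigmoid gate in (0, 1)."""
    gate = sigmoid(np.concatenate([text, aux]) @ Wg + bg)
    return text + gate * aux

rng = np.random.default_rng(1)
d = 8
Wg = rng.normal(size=(2 * d, d)) * 0.1   # toy gate parameters
fused = gated_complement(np.zeros(d), np.ones(d), Wg)
```

Because the gate is bounded in (0, 1), a noisy auxiliary modality can be attenuated toward zero without being able to overwrite the text representation.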
AIM: To investigate the clinical features and prognosis of patients with orbital inflammatory myofibroblastic tumor (IMT). METHODS: This retrospective study collected clinical data from 22 patients diagnosed with orbital IMT based on histopathological examination. The patients were followed up to assess their prognosis. Clinical data, including age, gender, course of disease, past medical history, primary symptoms, ophthalmologic examination findings, general condition, as well as imaging, laboratory, histopathological, and immunohistochemical results, were collected from digital records. Orbital magnetic resonance imaging (MRI) and/or computed tomography (CT) scans were performed to assess bone destruction by the mass, invasion of surrounding tissues, and any inflammatory changes in periorbital areas. RESULTS: The mean age of patients with orbital IMT was 28.24±3.30y, with a male-to-female ratio of 1.2:1. The main clinical manifestations were proptosis, blurred vision, palpable mass, and pain. Bone destruction and surrounding tissue invasion occurred in 72.73% and 54.55% of cases, respectively. Inflammatory changes in the periorbital site were observed in 77.27% of the patients. Hematoxylin and eosin staining showed proliferation of fibroblasts and myofibroblasts, accompanied by infiltration of lymphocytes and plasma cells. Immunohistochemical staining revealed that smooth muscle actin (SMA) and vimentin were positive in 100% of cases, while anaplastic lymphoma kinase (ALK) showed positivity in 47.37%. The recurrence rate of orbital IMT was 27.27%, and sarcomatous degeneration could occur. There were no significant correlations between recurrence and factors such as age, gender, laterality, duration of the disease, periorbital tissue invasion, bone destruction, periorbital inflammation, tumor size, fever, leukocytosis, or treatment (P>0.05). However, lymphadenopathy and a Ki-67 index of 10% or higher may be risk factors for recurrence (P=0.046; P=0.023). CONCLUSION: Orbital IMT is a locally invasive disease that may recur or lead to sarcomatoid degeneration, primarily affecting young and middle-aged patients. The presence of lymphadenopathy and a Ki-67 index of 10% or higher may signify a poor prognosis.
The detection of steel surface anomalies has become an industrial challenge due to variations in production equipment, processes, and characteristics. To alleviate the problem, this paper proposes a detection and localization method combining 3D depth and 2D RGB features. The framework comprises three stages: defect classification, defect localization, and warpage judgment. The first stage uses a data-efficient image Transformer model, the second stage utilizes reverse knowledge distillation, and the third stage performs feature fusion using 3D depth and 2D RGB features. Experimental results show that the proposed algorithm achieves relatively high accuracy and feasibility, and can be effectively used in industrial scenarios.
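Reverse knowledge distillation typically localizes defects by comparing teacher and student feature maps; a common scoring rule is one minus the per-pixel cosine similarity. The sketch below assumes that rule and illustrative feature shapes; it is not the paper's exact formulation.

```python
import numpy as np

def anomaly_map(teacher_feat, student_feat, eps=1e-8):
    """Per-pixel anomaly score: 1 - cosine similarity between teacher and
    student feature vectors of shape (C, H, W); near 0 where features agree."""
    t = teacher_feat / (np.linalg.norm(teacher_feat, axis=0, keepdims=True) + eps)
    s = student_feat / (np.linalg.norm(student_feat, axis=0, keepdims=True) + eps)
    return 1.0 - (t * s).sum(axis=0)    # (H, W)
```

On normal regions the student reproduces the teacher's features and the map stays near zero; defective regions, which the student never learned, produce high scores.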
Objective To develop a depression recognition model by integrating the spirit-expression diagnostic framework of traditional Chinese medicine (TCM) with machine learning algorithms. The proposed model seeks to establish a TCM-informed tool for early depression screening, thereby bridging traditional diagnostic principles with modern computational approaches. Methods The study included patients with depression who visited the Shanghai Pudong New Area Mental Health Center from October 1, 2022 to October 1, 2023, as well as students and teachers from Shanghai University of Traditional Chinese Medicine during the same period as the healthy control group. Videos of 3–10 s were captured using a Xiaomi Pad 5, and the TCM spirit and expressions were determined by TCM experts (at least 3 out of 5 experts had to agree on the category of TCM spirit and expressions). Basic information, facial images, and interview information were collected through a portable TCM intelligent analysis and diagnosis device, and facial diagnosis features were extracted using the OpenCV computer vision library. Statistical analysis methods such as parametric and non-parametric tests were used to analyze the baseline data, TCM spirit and expression features, and facial diagnosis feature parameters of the two groups, to compare the differences in TCM spirit and expression and facial features. Five machine learning algorithms, including extreme gradient boosting (XGBoost), decision tree (DT), Bernoulli naive Bayes (BernoulliNB), support vector machine (SVM), and k-nearest neighbor (KNN) classification, were used to construct a depression recognition model based on the fusion of TCM spirit and expression features. The performance of the model was evaluated using metrics such as accuracy, precision, and the area under the receiver operating characteristic (ROC) curve (AUC). The model results were explained using Shapley Additive exPlanations (SHAP). Results A total of 93 depression patients and 87 healthy individuals were ultimately included in this study. There was no statistically significant difference in the baseline characteristics between the two groups (P>0.05). The differences in the characteristics of the spirit and expressions in TCM and facial features between the two groups were as follows. (i) Quantitative facial spirit analysis revealed that depression patients exhibited significantly reduced facial spirit and luminance compared with healthy controls (P<0.05), with characteristic features such as sad expressions, facial erythema, and changes in lip color ranging from erythematous to cyanotic. (ii) Depressed patients exhibited significantly lower values in facial complexion L, lip L and a values, and gloss index, but higher values in facial complexion a and b, lip b, low gloss index, and matte index (all P<0.05). (iii) The results of multiple models show that the XGBoost-based depression recognition model, integrating the TCM "spirit-expression" diagnostic framework, achieved an accuracy of 98.61% and significantly outperformed the four benchmark algorithms (DT, BernoulliNB, SVM, and KNN; P<0.01). (iv) The SHAP visualization results show that in the recognition model constructed by the XGBoost algorithm, the complexion b value, categories of facial spirit, high gloss index, low gloss index, categories of facial expression, and texture features contribute significantly to the model. Conclusion This study demonstrates that integrating TCM spirit-expression diagnostic features with machine learning enables the construction of a high-precision depression detection model, offering a novel paradigm for objective depression diagnosis.
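One of the reported metrics, AUC, can be computed directly from classifier scores via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive is scored above a randomly chosen negative. A small self-contained implementation:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the rank (Mann-Whitney) formulation: the fraction of
    positive/negative pairs where the positive is scored higher (ties = 1/2)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])   # 0.75
```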
BACKGROUND SMARCB1/INI1-deficient pancreatic undifferentiated rhabdoid carcinoma is a highly aggressive tumor, and spontaneous splenic rupture (SSR) as its presenting manifestation is rarely reported among pancreatic malignancies. CASE SUMMARY We herein report a rare case of a 59-year-old female who presented with acute left upper quadrant abdominal pain without any history of trauma. Abdominal imaging demonstrated a heterogeneous splenic lesion with hemoperitoneum, raising clinical suspicion of SSR. Emergency laparotomy revealed a pancreatic tumor invading the spleen and left kidney, with associated splenic rupture and dense adhesions, necessitating en bloc resection of the distal pancreas, spleen, and left kidney. Histopathology revealed a biphasic malignancy composed of moderately differentiated pancreatic ductal adenocarcinoma and an undifferentiated carcinoma with rhabdoid morphology and loss of SMARCB1 expression. Immunohistochemical analysis confirmed complete loss of SMARCB1/INI1 in the undifferentiated component, along with a high Ki-67 index (approximately 80%) and CD10 positivity. The ductal adenocarcinoma component retained SMARCB1/INI1 expression and was positive for CK7 and CK-pan. Transitional zones between the two tumor components suggested progressive dedifferentiation and underlying genomic instability. The patient received adjuvant chemotherapy with gemcitabine and nab-paclitaxel and maintained a satisfactory quality of life at the 6-month follow-up. CONCLUSION This study reports a rare case of SMARCB1/INI1-deficient undifferentiated rhabdoid carcinoma of the pancreas combined with ductal adenocarcinoma, presenting as SSR, an exceptionally uncommon initial manifestation of pancreatic malignancy.
To address the challenge of achieving decentralized, scalable, and adaptive control for large-scale multiple unmanned aerial vehicle (multi-UAV) swarms in dynamic urban environments with obstacles and wind perturbations, we propose a hybrid framework integrating adaptive reinforcement learning (RL), multi-modal perception fusion, and enhanced pigeon flock optimization (PFO) with curiosity-driven exploration to enable robust autonomous and formation control. The framework leverages meta-learning to optimize RL policies for real-time adaptation, fuses sensor data for precise state estimation, and enhances PFO with learned leader-follower dynamics and exploration rewards to maintain cohesive formations and explore uncertain areas. For swarms of 10–30 UAVs, it achieves 34% faster convergence, 61% lower stability root mean square error (RMSE), 88% fewer collisions, and 85.6%–92.3% success rates in target detection and encirclement, outperforming standard multi-agent RL, pure PFO, and single-modality RL. Three-dimensional trajectory visualizations confirm cohesive formations, collision-free maneuvers, and efficient exploration in urban search-and-rescue scenarios. Innovations include meta-RL for rapid adaptation, multi-modal fusion for robust perception, and curiosity-driven PFO for scalable, decentralized control, advancing real-world multi-UAV swarm autonomy and coordination.
This study proposes a multimodal deep learning framework for joint prediction of the state of health (SOH) and remaining useful life (RUL) of lithium-ion batteries. Twelve representative impedance features, covering charge-transfer resistance, solid electrolyte interface (SEI) layer impedance, and ion diffusion, are extracted from electrochemical impedance spectroscopy (EIS) and combined with short voltage/current segments to form a compact, interpretable feature set. A residual multi-layer perceptron (ResMLP) is employed for SOH regression, and a temporal convolutional network with attention (TCN-Attention) is used for RUL estimation. Lifetime experiments on two battery types with different chemistries and form factors, evaluated through three rounds of paired cross-validation, validate the approach. Results show that the proposed features significantly reduce dimensionality and computational cost while substantially lowering SOH error, achieving an average normalized root mean square error of 2.3%. The RUL prediction reaches an average error of 14.8%. Overall, the framework balances interpretability, robustness, and feasibility, providing a practical solution for battery management system (BMS) monitoring and life prediction.
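A ResMLP of the kind used here for SOH regression is built from residual blocks of the form x + MLP(x); the skip connection lets each block refine its input rather than replace it. A minimal numpy sketch of one block, with sizes and weights chosen purely for illustration:

```python
import numpy as np

def res_mlp_block(x, W1, b1, W2, b2):
    """One residual MLP block: x + MLP(x). The skip connection means the
    block learns a correction to its input."""
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return x + h @ W2 + b2             # residual (skip) connection
```

A full regressor would stack several such blocks and end with a linear head mapping the features to a scalar SOH estimate.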
Improved delayed detached eddy simulation is performed to explore the flow features and aero-optical effects of turrets with different bottom cylinder heights at a freestream Mach number Ma=0.7. Analysis of both the time-averaged and instantaneous flow features demonstrates that the shock motion causes the oscillation of the separated shear layer. In the flow analysis, two unsteady shock-wake-correlated modes are discerned: the asymmetric shifting mode and the symmetric breathing mode. With the increase of cylinder height, the relative energy of the shock gradually increases, from 26% to 59%. The proper orthogonal decomposition analysis yields a single frequency peak for each of the two dominant modes. The frequency peaks of the shifting mode generally lie at StD<0.23, while those of the breathing mode generally lie at StD>0.26. The dynamic mode decomposition analysis gives the ranges of the frequency peaks: those of the shifting mode lie in the range StD=0.11–0.23, and those of the breathing mode in the range StD=0.26–0.41. Optical distortion analysis indicates that the distortion calculated in the five cases is linked to the breathing mode. When the beam passes through the turbulent wake, it exhibits high-frequency and high-amplitude characteristics.
Hard disk drives (HDDs) serve as the primary storage devices in modern data centers. Once a failure occurs, it often leads to severe data loss, significantly degrading the reliability of storage systems. Numerous studies have proposed machine learning-based HDD failure prediction models. However, the Self-Monitoring, Analysis, and Reporting Technology (SMART) attributes differ across HDD manufacturers. We define hard drives of the same brand and model as homogeneous HDD groups, and those from different brands or models as heterogeneous HDD groups. In practical engineering scenarios, a data center is often composed of a heterogeneous population of HDDs spanning multiple vendors and models. Existing research predominantly focuses on homogeneous datasets, ignoring the model's generalization capability across heterogeneous HDDs. As a result, HDD models with limited samples often suffer from poor training effectiveness and prediction performance. To address this issue, we investigate generalizable SMART predictors across heterogeneous HDD groups. By extracting time-series features within a fixed sliding time window, we propose a Heterogeneous Disk Failure Prediction Method based on Time Series Features (HDFPM) framework. This method is adaptable to HDD models with limited sample sizes, thereby enhancing its applicability and robustness across diverse drive populations. Experimental results show that the proposed model achieves an F1-score of 0.9518 when applied to two different Seagate HDD models, while maintaining the False Positive Rate (FPR) below 1%. After incorporating the Complexity-Ratio Dynamic Time Warping (CDTW)-based feature enhancement method, the best prediction model achieves a True Positive Rate (TPR) of up to 0.93 between the two models. For next-day failure prediction across various Seagate models, the model achieves an F1-score of up to 0.8792. Moreover, the experimental results also show that, within the same brand, the higher the proportion of shared SMART attributes across different models, the better the prediction performance. In addition, HDFPM demonstrates the best stability and most significant performance in heterogeneous environments.
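The CDTW feature enhancement builds on dynamic time warping; the classic DTW distance between two 1-D SMART attribute series can be computed with the standard dynamic program below. The complexity-ratio weighting of CDTW is the paper's extension and is not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series,
    e.g. daily values of one SMART attribute from two drives."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Unlike a point-wise Euclidean distance, DTW tolerates series of different lengths and small temporal shifts, which is useful when comparing drives whose degradation unfolds at different speeds.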
In the field of intelligent air combat, real-time and accurate recognition of within-visual-range (WVR) maneuver actions serves as the foundational cornerstone for constructing autonomous decision-making systems. However, existing methods face two major challenges: traditional feature engineering suffers from insufficient effective dimensionality in the feature space due to kinematic coupling, making it difficult to distinguish essential differences between maneuvers, while end-to-end deep learning models lack controllability in implicit feature learning and fail to model high-order long-range temporal dependencies. This paper proposes a trajectory feature pre-extraction method based on a Long-range Masked Autoencoder (LMAE), incorporating three key innovations: (1) Random Fragment High-ratio Masking (RFH-Mask), which forces the model to learn long-range temporal correlations by masking 80% of the trajectory data while retaining continuous fragments; (2) a Kalman Filter-Guided Objective Function (KFG-OF), integrating trajectory continuity constraints to align the feature space with kinematic principles; and (3) a Two-stage Decoupled Architecture, enabling efficient and controllable feature learning through unsupervised pre-training and frozen-feature transfer. Experimental results demonstrate that LMAE significantly improves the average recognition accuracy for 20-class maneuvers compared to traditional end-to-end models, while significantly accelerating convergence. The contributions of this work lie in introducing high-masking-rate autoencoders into low-information-density trajectory analysis, proposing a feature engineering framework with enhanced controllability and efficiency, and providing a novel technical pathway for intelligent air combat decision-making systems.
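The RFH-Mask idea, masking a high ratio of the trajectory while keeping the visible portion in contiguous fragments, can be sketched as follows. The fragment length and the iterative sampling scheme are illustrative assumptions; only the 80% mask ratio comes from the abstract above.

```python
import numpy as np

def rfh_mask(seq_len, mask_ratio=0.8, frag_len=5, rng=None):
    """Return a boolean mask (True = masked) that hides ~mask_ratio of the
    sequence while the visible part consists of contiguous fragments."""
    if rng is None:
        rng = np.random.default_rng(0)
    keep = np.zeros(seq_len, dtype=bool)
    n_keep = int(round(seq_len * (1 - mask_ratio)))
    while keep.sum() < n_keep:
        start = rng.integers(0, seq_len - frag_len + 1)
        keep[start:start + frag_len] = True   # reveal one whole fragment
    return ~keep

mask = rfh_mask(100)
```

Keeping whole fragments visible (rather than isolated points) forces the autoencoder to reconstruct the long gaps between them, which is what drives it to learn long-range temporal structure.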
Retinal diseases are a serious threat to human visual health, and their early diagnosis is crucial. Currently, most retinal disease diagnostic algorithms are based on a single imaging modality, either fundus color photography (FCP) or optical coherence tomography (OCT). These methods can only reflect retinal diseases to a certain extent, ignoring the specificity of information carried by different imaging modalities. In this research, a new multi-scale feature fusion network (MSFF-Net) model for multi-modal retinal image diagnosis is proposed. The MSFF-Net model employs a dual-branch architecture, enabling efficient learning and extraction of multi-modal feature information related to retinal diseases from FCP and OCT images. MSFF-Net improves disease diagnosis by combining multi-scale features of FCP and OCT images. When evaluated on challenging datasets, the model achieved an accuracy of 95.00% and an F1-score of 95.24% for retinal disease diagnosis. Even under low-quality dataset conditions, it maintained robust performance, with a diagnostic accuracy and F1-score of 71.50% and 71.73%, respectively. In addition, the MSFF-Net model outperformed eight state-of-the-art single- and multi-modal models in the comparison experiments. The proposed MSFF-Net model provides ophthalmologists with a more accurate and efficient diagnostic pathway that helps them detect and treat retinal diseases earlier.
Phishing email detection represents a critical research challenge in cybersecurity. To address this, this paper proposes a novel Double-S (statistical-semantic) feature model based on three core entities involved in email communication: the sender, the recipient, and the email content. We employ strategic game theory to analyze the offensive strategies of phishing attackers and the defensive strategies of protectors, extracting statistical features from these entities. We also leverage the Qwen large language model to mine implicit semantic features (e.g., emotional manipulation and social engineering tactics) from email content. By integrating statistical and semantic features, our model achieves a robust representation of phishing emails. We introduce a hybrid detection model that integrates a convolutional neural network (CNN) module with the XGBoost (Extreme Gradient Boosting) classifier, effectively capturing local correlations in high-dimensional features. Experimental results on real-world phishing email datasets demonstrate the superiority of our approach, achieving an F1-score of 0.9587, precision of 0.9591, and recall of 0.9583, representing improvements of 1.3%–10.6% compared to state-of-the-art methods.
By integrating self-localization, environment mapping, and dynamic object tracking into a unified framework, visual simultaneous localization and mapping with multiple object tracking (SLAMMOT) enhances decision-making and interaction capabilities in applications such as autonomous driving, robotic navigation, and augmented reality. While numerous outstanding visual SLAMMOT methods have been proposed, the majority rely only on point features, overlooking the abundant and stable planar features of artificial objects that can provide valuable constraints. To address this limitation, we propose OP (object planar)-SLAM, an RGB-D SLAMMOT system that leverages planar features to improve object pose estimation and reconstruction accuracy. Specifically, we introduce an accurate object planar feature extraction and association method using normal images, alongside a novel object bundle adjustment framework that incorporates planar constraints for enhanced optimization. The proposed system is evaluated on both synthetic and public real-world datasets, including the Oxford multimotion dataset (OMD) and the KITTI tracking dataset. Especially on the OMD, where planar features are prominent, our method improves object pose estimation accuracy by approximately 60%. Extensive experiments demonstrate its effectiveness in enhancing object pose estimation and reconstruction, achieving notable performance compared with existing methods. Furthermore, OP-SLAM runs in real time, making it suitable for practical robotics and augmented reality applications.
[Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that utilize either image data or environmental data. These methods fail to fully leverage multi-modal data to comprehensively capture the diverse aspects of plant growth. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural networks (RNN) and long short-term memory (LSTM) was developed. The model integrated environment and plant information to provide a holistic understanding of the growth process, employed phenotypic and temporal feature extractors to comprehensively capture both types of features, and enabled a deeper understanding of the interaction between tomato plants and their environment, ultimately leading to highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the two baseline models, with MAPEs of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
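The MAPE metric used throughout these comparisons is straightforward to compute; a small reference implementation:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (in %), the metric reported for the
    height predictions; assumes no true value is zero."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

err = mape([100.0, 200.0], [99.0, 202.0])   # 1.0 (%)
```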
Scene classification of high-resolution remote sensing (HRRS) images is an important research topic and has been applied broadly in many fields. Deep learning methods have shown high potential in this domain, owing to their powerful ability to characterize complex patterns. However, deep learning methods omit some global and local information of the HRRS image. To this end, in this article we adopt explicit global and local information to provide complementary information to deep models. Specifically, we use a patch-based MS-CLBP method to acquire global and local representations, and then we use a pretrained CNN model as a feature extractor and extract deep hierarchical features from fully connected layers. After Fisher vector (FV) encoding, we obtain a holistic visual representation of the scene image. We view scene classification as a reconstruction procedure and train several class-specific stacked denoising autoencoders (SDAEs), i.e., one SDAE per class, and classify the test image according to the reconstruction error. Experimental results show that our combination method outperforms state-of-the-art deep learning classification methods without employing fine-tuning.
Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capabilities and leverages inter-modal correlation to improve recognition performance. The robustness and recognition performance of such a system can be further enhanced by judiciously exploiting the correlation among multi-modal features. Nevertheless, two issues persist in multi-modal feature fusion recognition. First, efforts to improve fusion recognition performance have not comprehensively considered the correlations among distinct modalities. Second, during modal fusion, improper weight selection diminishes the salience of crucial modal features, reducing overall recognition performance. To address these two issues, we introduce an enhanced DenseNet multi-modal recognition network founded on feature-level fusion. The information from the three modalities is fused akin to RGB channels, and the input network augments the correlation between modalities through channel correlation. Within the enhanced DenseNet network, the Efficient Channel Attention Network (ECA-Net) dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature. Depthwise separable convolution markedly reduces the training parameters and further enhances the feature correlation. Experimental evaluations were conducted on four multi-modal databases, comprising six unimodal databases, including the multispectral palmprint and palm vein databases from the Chinese Academy of Sciences. The Equal Error Rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. In comparison with other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially enhances recognition performance, rendering it suitable for high-security environments with practical applicability. The experiments in this article utilized a modest sample database comprising 200 individuals; the next phase involves extending the method to larger databases.
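The EER reported above is the operating point at which the false accept rate (FAR) and false reject rate (FRR) coincide. A minimal threshold-sweep sketch of how it is commonly computed (the score lists are made up, and production code would interpolate between thresholds rather than take the closest one):

```python
def equal_error_rate(genuine, impostor):
    # Sweep decision thresholds over all observed scores; the EER is
    # (approximately) where FAR and FRR cross.
    best_gap, best_eer = None, None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer

print(equal_error_rate([0.9, 0.8, 0.7], [0.1, 0.2, 0.3]))  # 0.0
```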
The multi-modal characteristics of mineral particles play a pivotal role in enhancing classification accuracy, which is critical for obtaining a profound understanding of the Earth's composition and ensuring effective exploitation and utilization of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, thereby limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and challenges in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. Initially, MMGC-Net employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. Subsequently, a cross-polarized intra-modal feature fusion module is designed to refine the spatiotemporal features extracted from the cross-polarized sequence images. Ultimately, an inter-modal feature fusion module integrates the two types of modal features to enhance classification precision. Quantitative and qualitative experimental results indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net demonstrates marked superiority in mineral particle multi-modal feature learning and in four classification evaluation metrics. It also demonstrates better stability than existing models.
Funding: Supported by the Key Research and Development Program of Shandong Province (Soft Science Project) (2020RKB01364).
Abstract: In recent years, with the increase in the price of cryptocurrencies, the number of malicious cryptomining software has increased significantly. With their powerful spreading ability, cryptomining malware can unknowingly occupy our resources, harm our interests, and damage more legitimate assets. However, although traditional rule-based malware detection methods have a low false alarm rate, they have a relatively low detection rate when faced with a large volume of emerging malware. Even though common machine-learning-based or deep-learning-based methods have a certain ability to learn and detect unknown malware, they learn only isolated, independent features and cannot adapt their learning. To address these problems, we propose a deep learning model with multiple inputs of multi-modal features, which can simultaneously accept digital features and image features of different dimensions. The model comprises parallel learning of three sub-models and ensemble learning of another specific sub-model. The four sub-models can be processed in parallel on different devices and can further be applied to edge computing environments. The model can adaptively learn multi-modal features and output prediction results. The detection rate of our model is as high as 97.01% and the false alarm rate is only 0.63%. The experimental results prove the advantage and effectiveness of the proposed method.
Funding: Funded by the Natural Science Foundation of Chongqing Municipality, grant number CSTB2022NSCQ-MSX0503.
Abstract: Gait recognition is a key biometric for long-distance identification, yet its performance is severely degraded by real-world challenges such as varying clothing, carrying conditions, and changing viewpoints. While combining silhouette and skeleton data is a promising direction, effectively fusing these heterogeneous modalities and adaptively weighting their contributions in response to diverse conditions remains a central problem. This paper introduces GaitMAFF, a novel Multi-modal Adaptive Feature Fusion Network, to address this challenge. Our approach first transforms discrete skeleton joints into a dense SkeletonMap representation to align with silhouettes, then employs an attention-based module to dynamically learn the fusion weights between the two modalities. These fused features are processed by a powerful spatio-temporal backbone with Weighted Global-Local Feature Fusion Modules (WFFM) to learn a discriminative representation. Extensive experiments on the challenging CCPG and Gait3D datasets show that GaitMAFF achieves state-of-the-art performance, with an average Rank-1 accuracy of 84.6% on CCPG and 58.7% on Gait3D. These results demonstrate that our adaptive fusion strategy effectively integrates complementary multi-modal information, significantly enhancing gait recognition robustness and accuracy in complex scenes and providing a practical solution for real-world applications.
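The abstract does not specify how the attention-based module turns learned scores into fusion weights; a common minimal form, sketched here purely as an assumption, is a softmax over per-modality scalar scores so the silhouette and skeleton weights always sum to one:

```python
import math

def fusion_weights(score_silhouette, score_skeleton):
    # Softmax over two learned scalar scores gives per-modality weights
    # that sum to 1 (numerically stabilized by subtracting the max).
    m = max(score_silhouette, score_skeleton)
    e_sil = math.exp(score_silhouette - m)
    e_skel = math.exp(score_skeleton - m)
    s = e_sil + e_skel
    return e_sil / s, e_skel / s

print(fusion_weights(0.0, 0.0))  # (0.5, 0.5)
```

With equal scores both modalities contribute equally; a larger silhouette score shifts weight toward the silhouette branch.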
Funding: Supported by the National Key Research and Development Program of China (Research Grant Number: 2023YFC3603600).
Abstract: Autism spectrum disorder (ASD) is a highly heterogeneous neurodevelopmental disorder. Early diagnosis and intervention are crucial for improving outcomes. Traditional single-modality diagnostic methods are subjective, limited, and struggle to reveal the underlying pathological mechanisms. In contrast, multi-modal data analysis integrates behavioral, physiological, and neuroimaging information with advanced machine-learning and deep-learning algorithms to overcome these limitations. In this review, we surveyed the recent pediatric ASD literature, highlighting artificial-intelligence-driven diagnostic techniques, multi-modal data fusion strategies, and emerging trends in ASD assessment. We surveyed studies that integrated two or more modalities and summarized the fusion levels, learning paradigms, tasks, datasets, and metrics. Multi-modal approaches outperform single-modality baselines in classification, severity estimation, and subtyping by leveraging complementary information and reducing modality-specific biases. They significantly enhance diagnostic accuracy and comprehensiveness, enabling early screening of ASD, symptom subtyping, severity assessment, and personalized interventions. Advances in multi-modal fusion techniques have promoted progress in precision medicine for the treatment of ASD.
Funding: Funded by the Fanying Special Program of the National Natural Science Foundation of China (grant number 62341307); the Scientific Research Project of the Jiangxi Provincial Department of Education (grant number GJJ200839); and the Doctoral Startup Fund of Jiangxi University of Technology (grant number 205200100402).
Abstract: In multi-modal emotion recognition, excessive reliance on historical context often impedes the detection of emotional shifts, while modality heterogeneity and unimodal noise limit recognition performance. Existing methods struggle to dynamically adjust cross-modal complementary strength to optimize fusion quality and lack effective mechanisms to model the dynamic evolution of emotions. To address these issues, we propose a multi-level dynamic gating and emotion transfer framework for multi-modal emotion recognition. A dynamic gating mechanism is applied across unimodal encoding, cross-modal alignment, and emotion transfer modeling, substantially improving noise robustness and feature alignment. First, we construct a unimodal encoder based on gated recurrent units and feature-selection gating to suppress intra-modal noise and enhance contextual representation. Second, we design a gated-attention cross-modal encoder that dynamically calibrates the complementary contributions of the visual and audio modalities to the dominant textual features and eliminates redundant information. Finally, we introduce a gated enhanced emotion transfer module that explicitly models the temporal dependence of emotional evolution in dialogues via transfer gating and optimizes continuity modeling with a contrastive learning loss. Experimental results demonstrate that the proposed method outperforms state-of-the-art models on the public MELD and IEMOCAP datasets.
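The gating idea running through this abstract can be illustrated in miniature: a sigmoid gate decides how much of a secondary modality is mixed into the dominant one. This is a generic sketch of the mechanism, not the paper's actual module; the function name and scalar-gate form are assumptions:

```python
import math

def gated_fusion(text_feat, audio_feat, gate_logit):
    # A scalar gate g in (0, 1), produced by a sigmoid over a learned
    # logit, controls how much audio is mixed into the text features.
    g = 1.0 / (1.0 + math.exp(-gate_logit))
    return [g * t + (1.0 - g) * a for t, a in zip(text_feat, audio_feat)]

# gate_logit = 0 gives g = 0.5, i.e., an even blend of the two modalities
print(gated_fusion([1.0, 0.0], [0.0, 1.0], 0.0))  # [0.5, 0.5]
```

In a real model the gate logit would itself be computed from the features, so the blend adapts per utterance.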
Funding: Supported by the National Key R&D Program of China (No. 2023YFC2410203) and the Beijing Hospitals Authority Clinical Medicine Development Special Funding Support (No. ZLRK202503).
Abstract: AIM: To investigate the clinical features and prognosis of patients with orbital inflammatory myofibroblastic tumor (IMT). METHODS: This retrospective study collected clinical data from 22 patients diagnosed with orbital IMT based on histopathological examination. The patients were followed up to assess their prognosis. Clinical data, including age, gender, course of disease, past medical history, primary symptoms, ophthalmologic examination findings, and general condition, as well as imaging, laboratory, histopathological, and immunohistochemical results from digital records, were collected. Orbital magnetic resonance imaging (MRI) and/or computed tomography (CT) scans were performed to assess bone destruction by the mass, invasion of surrounding tissues, and any inflammatory changes in periorbital areas. RESULTS: The mean age of patients with orbital IMT was 28.24±3.30 years, with a male-to-female ratio of 1.2:1. The main clinical manifestations were proptosis, blurred vision, palpable mass, and pain. Bone destruction and surrounding tissue invasion occurred in 72.73% and 54.55% of cases, respectively. Inflammatory changes at the periorbital site were observed in 77.27% of the patients. Hematoxylin and eosin staining showed proliferation of fibroblasts and myofibroblasts, accompanied by infiltration of lymphocytes and plasma cells. Immunohistochemical staining revealed that smooth muscle actin (SMA) and vimentin were positive in 100% of cases, while anaplastic lymphoma kinase (ALK) showed positivity in 47.37%. The recurrence rate of orbital IMT was 27.27%, and sarcomatous degeneration could occur. There were no significant correlations between recurrence and factors such as age, gender, laterality, duration of disease, periorbital tissue invasion, bone destruction, periorbital inflammation, tumor size, fever, leukocytosis, or treatment (P>0.05). However, lymphadenopathy and a Ki-67 index of 10% or higher may be risk factors for recurrence (P=0.046; P=0.023). CONCLUSION: Orbital IMT is a locally invasive disease that may recur or lead to sarcomatoid degeneration, primarily affecting young and middle-aged patients. The presence of lymphadenopathy and a Ki-67 index of 10% or higher may signify a poor prognosis.
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds under Grant No. HC-CN-20221107001.
Abstract: The detection of steel surface anomalies has become an industrial challenge due to variations in production equipment, processes, and characteristics. To alleviate this problem, this paper proposes a detection and localization method combining 3D depth and 2D RGB features. The framework comprises three stages: defect classification, defect localization, and warpage judgment. The first stage uses a data-efficient image Transformer model, the second stage utilizes reverse knowledge distillation, and the third stage performs feature fusion using 3D depth and 2D RGB features. Experimental results show that the proposed algorithm achieves relatively high accuracy and feasibility, and can be effectively used in industrial scenarios.
Funding: General Program of the National Natural Science Foundation of China (82474390); Construction Project of Pudong New Area Famous TCM Studios (National Pilot Zone for TCM Development, Shanghai) (PDZY-2025-0716); Shanghai Municipal Science and Technology Program Project, Shanghai Key Laboratory of Health Identification and Assessment (21DZ2271000).
Abstract: Objective To develop a depression recognition model by integrating the spirit-expression diagnostic framework of traditional Chinese medicine (TCM) with machine learning algorithms. The proposed model seeks to establish a TCM-informed tool for early depression screening, thereby bridging traditional diagnostic principles with modern computational approaches. Methods The study included patients with depression who visited the Shanghai Pudong New Area Mental Health Center from October 1, 2022 to October 1, 2023, as well as students and teachers from Shanghai University of Traditional Chinese Medicine during the same period as the healthy control group. Videos of 3–10 s were captured using a Xiaomi Pad 5, and the TCM spirit and expression categories were determined by TCM experts (at least 3 out of 5 experts had to agree on the category). Basic information, facial images, and interview information were collected through a portable TCM intelligent analysis and diagnosis device, and facial diagnosis features were extracted using the OpenCV computer vision library. Statistical methods such as parametric and non-parametric tests were used to analyze the baseline data, TCM spirit and expression features, and facial diagnosis feature parameters of the two groups, and to compare the differences in TCM spirit and expression and facial features. Five machine learning algorithms, including extreme gradient boosting (XGBoost), decision tree (DT), Bernoulli naive Bayes (BernoulliNB), support vector machine (SVM), and k-nearest neighbor (KNN) classification, were used to construct a depression recognition model based on the fusion of TCM spirit and expression features. The performance of the model was evaluated using metrics such as accuracy, precision, and the area under the receiver operating characteristic (ROC) curve (AUC). The model results were explained using Shapley Additive exPlanations (SHAP). Results A total of 93 depression patients and 87 healthy individuals were ultimately included in this study. There was no statistically significant difference in the baseline characteristics between the two groups (P>0.05). The differences in TCM spirit and expression characteristics and facial features between the two groups were as follows. (i) Quantispirit facial analysis revealed that depression patients exhibited significantly reduced facial spirit and luminance compared with healthy controls (P<0.05), with characteristic features such as sad expressions, facial erythema, and lip color ranging from erythematous to cyanotic. (ii) Depressed patients exhibited significantly lower values in facial complexion L, lip L and a values, and gloss index, but higher values in facial complexion a and b, lip b, low gloss index, and matte index (all P<0.05). (iii) The results of multiple models show that the XGBoost-based depression recognition model, integrating the TCM spirit-expression diagnostic framework, achieved an accuracy of 98.61% and significantly outperformed the four benchmark algorithms DT, BernoulliNB, SVM, and KNN (P<0.01). (iv) The SHAP visualization results show that in the recognition model constructed with the XGBoost algorithm, the complexion b value, categories of facial spirit, high gloss index, low gloss index, categories of facial expression, and texture features contribute significantly to the model. Conclusion This study demonstrates that integrating TCM spirit-expression diagnostic features with machine learning enables the construction of a high-precision depression detection model, offering a novel paradigm for objective depression diagnosis.
Abstract: BACKGROUND SMARCB1/INI1-deficient pancreatic undifferentiated rhabdoid carcinoma is a highly aggressive tumor, and spontaneous splenic rupture (SSR) as its presenting manifestation is rarely reported among pancreatic malignancies. CASE SUMMARY We herein report a rare case of a 59-year-old female who presented with acute left upper quadrant abdominal pain without any history of trauma. Abdominal imaging demonstrated a heterogeneous splenic lesion with hemoperitoneum, raising clinical suspicion of SSR. Emergency laparotomy revealed a pancreatic tumor invading the spleen and left kidney, with associated splenic rupture and dense adhesions, necessitating en bloc resection of the distal pancreas, spleen, and left kidney. Histopathology revealed a biphasic malignancy composed of moderately differentiated pancreatic ductal adenocarcinoma and an undifferentiated carcinoma with rhabdoid morphology and loss of SMARCB1 expression. Immunohistochemical analysis confirmed complete loss of SMARCB1/INI1 in the undifferentiated component, along with a high Ki-67 index (approximately 80%) and CD10 positivity. The ductal adenocarcinoma component retained SMARCB1/INI1 expression and was positive for CK7 and CK-pan. Transitional zones between the two tumor components suggested progressive dedifferentiation and underlying genomic instability. The patient received adjuvant chemotherapy with gemcitabine and nab-paclitaxel and maintained a satisfactory quality of life at the 6-month follow-up. CONCLUSION This report presents a rare case of SMARCB1/INI1-deficient undifferentiated rhabdoid carcinoma of the pancreas combined with ductal adenocarcinoma, presenting as SSR, an exceptionally uncommon initial manifestation of pancreatic malignancy.
Funding: Supported by the National Natural Science Foundation of China (No. 62350048).
Abstract: To address the challenge of achieving decentralized, scalable, and adaptive control of large-scale multiple unmanned aerial vehicle (multi-UAV) swarms in dynamic urban environments with obstacles and wind perturbations, we propose a hybrid framework integrating adaptive reinforcement learning (RL), multi-modal perception fusion, and enhanced pigeon flock optimization (PFO) with curiosity-driven exploration to enable robust autonomous and formation control. The framework leverages meta-learning to optimize RL policies for real-time adaptation, fuses sensor data for precise state estimation, and enhances PFO with learned leader-follower dynamics and exploration rewards to maintain cohesive formations and explore uncertain areas. For swarms of 10–30 UAVs, it achieves 34% faster convergence, 61% lower stability root mean square error (RMSE), 88% fewer collisions, and 85.6%–92.3% success rates in target detection and encirclement, outperforming standard multi-agent RL, pure PFO, and single-modality RL. Three-dimensional trajectory visualizations confirm cohesive formations, collision-free maneuvers, and efficient exploration in urban search-and-rescue scenarios. Innovations include meta-RL for rapid adaptation, multi-modal fusion for robust perception, and curiosity-driven PFO for scalable, decentralized control, advancing real-world multi-UAV swarm autonomy and coordination.
Funding: Financially supported by the National Natural Science Foundation of China (No. U22A20439); the Shenzhen Fundamental Research Program (No. JCYJ20220818100418040); the Guangdong-Hong Kong-Macao Joint Innovation Fund (No. 2024A0505040001); the Guangdong Basic and Applied Basic Research Foundation (2023A1515011122); and Shenzhen ShowMac Network Technology Co., Ltd.
Abstract: This study proposes a multi-modal deep learning framework for joint prediction of the state of health (SOH) and remaining useful life (RUL) of lithium-ion batteries. Twelve representative impedance features, covering charge-transfer resistance, solid electrolyte interface (SEI) layer impedance, and ion diffusion, are extracted from electrochemical impedance spectroscopy (EIS) and combined with short voltage/current segments to form a compact, interpretable feature set. A residual multi-layer perceptron (ResMLP) is employed for SOH regression, and a temporal convolutional network with attention (TCN-Attention) is used for RUL estimation. Lifetime experiments on two battery types with different chemistries and form factors, evaluated through three rounds of paired cross-validation, validate the approach. Results show that the proposed features significantly reduce dimensionality and computational cost while substantially lowering SOH error, achieving an average normalized root mean square error of 2.3%. The RUL prediction reaches an average error of 14.8%. Overall, the framework balances interpretability, robustness, and feasibility, providing a practical solution for battery management system (BMS) monitoring and life prediction.
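The normalized RMSE quoted above is a standard metric, though the abstract does not state the normalizer; one common convention, sketched here as an assumption, divides the RMSE by the range of the true values:

```python
def nrmse(y_true, y_pred):
    # RMSE normalized by the range of the true values, in percent.
    # (Other conventions divide by the mean; the paper's choice is unknown.)
    rmse = (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5
    return 100.0 * rmse / (max(y_true) - min(y_true))

# Illustrative SOH values (fractions of nominal capacity), not from the paper
print(nrmse([0.8, 1.0], [0.81, 0.99]))  # ~5.0
```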
Funding: Funded by the National Key Lab Foundation, China (No. 2020KLF030101); the Innovation Foundation for Doctoral Dissertation of Northwestern Polytechnical University, China (No. CX2025031); and the Shaanxi Innovative Research Team of Artificial Intelligence for Fluid Mechanics, China (No. 2024RS-CXTD-16).
Abstract: Improved delayed detached eddy simulation is performed to explore the flow features and aero-optical effects of turrets with different bottom cylinder heights at a freestream Mach number Ma=0.7. Analysis of both the time-averaged and instantaneous flow features demonstrates that the shock motion causes the oscillation of the separated shear layer. In the flow analysis, two unsteady shock-wake-correlated modes are discerned: the asymmetric shifting mode and the symmetric breathing mode. With increasing cylinder height, the relative energy of the shock gradually increases, from 26% to 59%. Proper orthogonal decomposition analysis yields a single frequency peak for each of the two dominant modes: the frequency peaks of the shifting mode are generally at St_D<0.23, while those of the breathing mode are generally at St_D>0.26. Dynamic mode decomposition analysis gives the range of the frequency peaks: those of the shifting mode lie in the range St_D=0.11-0.23, and those of the breathing mode in the range St_D=0.26-0.41. Optical distortion analysis indicates that the distortion calculated in the five cases is linked to the breathing mode. When the beam passes through the turbulent wake, it exhibits high-frequency, high-amplitude characteristics.
Funding: Supported by the Tianjin Manufacturing High Quality Development Special Foundation (No. 20232185) and the Roycom Foundation (No. 70306901).
Abstract: Hard disk drives (HDDs) serve as the primary storage devices in modern data centers. Once a failure occurs, it often leads to severe data loss, significantly degrading the reliability of storage systems. Numerous studies have proposed machine-learning-based HDD failure prediction models. However, the Self-Monitoring, Analysis, and Reporting Technology (SMART) attributes differ across HDD manufacturers. We define hard drives of the same brand and model as homogeneous HDD groups, and those from different brands or models as heterogeneous HDD groups. In practical engineering scenarios, a data center is often composed of a heterogeneous population of HDDs spanning multiple vendors and models. Existing research predominantly focuses on homogeneous datasets, ignoring the model's generalization capability across heterogeneous HDDs. As a result, HDD models with limited samples often suffer from poor training effectiveness and prediction performance. To address this issue, we investigate generalizable SMART-based predictors across heterogeneous HDD groups. By extracting time-series features within a fixed sliding time window, we propose a Heterogeneous Disk Failure Prediction Method based on Time Series Features (HDFPM) framework. This method is adaptable to HDD models with limited sample sizes, thereby enhancing its applicability and robustness across diverse drive populations. Experimental results show that the proposed model achieves an F1-score of 0.9518 when applied to two different Seagate HDD models, while maintaining the False Positive Rate (FPR) below 1%. After incorporating the Complexity-Ratio Dynamic Time Warping (CDTW) based feature enhancement method, the best prediction model achieves a True Positive Rate (TPR) of up to 0.93 between the two models. For next-day failure prediction across various Seagate models, the model achieves an F1-score of up to 0.8792. Moreover, the experimental results also show that within the same brand, the higher the proportion of shared SMART attributes across different models, the better the prediction performance. In addition, HDFPM demonstrates the best stability and most significant performance in heterogeneous environments.
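The sliding-window time-series feature extraction mentioned above can be sketched in miniature: for each window position, summary statistics of the SMART values inside the window become one feature vector. The choice of statistics here (mean, standard deviation, max) is an illustrative assumption, not the paper's feature set:

```python
def window_features(series, w):
    # Slide a window of length w over the series; emit per-window
    # (mean, population std, max) as a simple time-series feature vector.
    feats = []
    for i in range(len(series) - w + 1):
        win = series[i:i + w]
        m = sum(win) / w
        sd = (sum((x - m) ** 2 for x in win) / w) ** 0.5
        feats.append((m, sd, max(win)))
    return feats

# Toy SMART attribute readings over three days, window of two days
print(window_features([1, 2, 3], 2))  # [(1.5, 0.5, 2), (2.5, 0.5, 3)]
```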
Abstract: In the field of intelligent air combat, real-time and accurate recognition of within-visual-range (WVR) maneuver actions serves as the foundational cornerstone for constructing autonomous decision-making systems. However, existing methods face two major challenges: traditional feature engineering suffers from insufficient effective dimensionality in the feature space due to kinematic coupling, making it difficult to distinguish essential differences between maneuvers, while end-to-end deep learning models lack controllability in implicit feature learning and fail to model high-order long-range temporal dependencies. This paper proposes a trajectory feature pre-extraction method based on a Long-range Masked Autoencoder (LMAE), incorporating three key innovations: (1) Random Fragment High-ratio Masking (RFH-Mask), which forces the model to learn long-range temporal correlations by masking 80% of the trajectory data while retaining continuous fragments; (2) a Kalman Filter-Guided Objective Function (KFG-OF), integrating trajectory continuity constraints to align the feature space with kinematic principles; and (3) a two-stage decoupled architecture, enabling efficient and controllable feature learning through unsupervised pre-training and frozen-feature transfer. Experimental results demonstrate that LMAE significantly improves the average recognition accuracy for 20-class maneuvers compared with traditional end-to-end models, while significantly accelerating convergence. The contributions of this work lie in introducing high-masking-rate autoencoders into low-information-density trajectory analysis, proposing a feature engineering framework with enhanced controllability and efficiency, and providing a novel technical pathway for intelligent air combat decision-making systems.
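One plausible reading of the fragment-based high-ratio masking described above is to cover roughly 80% of the timesteps with randomly placed contiguous fragments, leaving only short continuous runs visible. The fragment length, sampling scheme, and function name below are assumptions for illustration:

```python
import random

def fragment_mask(n, mask_ratio=0.8, frag_len=5, seed=0):
    # Mask at least mask_ratio of n timesteps by repeatedly placing
    # contiguous fragments of length frag_len at random positions.
    rng = random.Random(seed)
    mask = [False] * n
    target = int(n * mask_ratio)
    while sum(mask) < target:
        start = rng.randrange(0, n - frag_len + 1)
        for i in range(start, start + frag_len):
            mask[i] = True
    return mask

m = fragment_mask(50)
print(sum(m) >= 40)  # True: at least 80% of the 50 steps are masked
```

The autoencoder would then be trained to reconstruct the masked steps from the visible ones, which is what pushes it toward long-range temporal structure.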
Funding: Supported by the National Natural Science Foundation of China (Nos. 82472104 and U24B2053); the Natural Science Basic Research Program of Shaanxi (No. 2025JC-JCQN-023); the Key Core Technology Research and Development of Shaanxi (No. 2024QY2-GJHX-03); the Innovation Capability Support Program of Shaanxi (Program No. 2023-CX-TD-54); and the Xidian University Specially Funded Project for Interdisciplinary Exploration (No. TZJHF202510).
Abstract: Retinal diseases are a serious threat to human visual health, and their early diagnosis is crucial. Currently, most retinal disease diagnostic algorithms are based on a single imaging modality: fundus color photography (FCP) or optical coherence tomography (OCT). These methods can only reflect retinal diseases to a certain extent, ignoring the specificity of different imaging modalities. In this research, a new multi-scale feature fusion network (MSFF-Net) model for multi-modal retinal image diagnosis is proposed. The MSFF-Net model employs a dual-branch architecture, enabling efficient learning and extraction of multi-modal feature information related to retinal diseases from FCP and OCT images. MSFF-Net improves disease diagnosis by combining multi-scale features of FCP and OCT images. When evaluated on challenging datasets, the model achieved an accuracy of 95.00% and an F1-score of 95.24% for retinal disease diagnosis. Even under low-quality dataset conditions, it maintained robust performance, with diagnostic accuracy and F1-score of 71.50% and 71.73%, respectively. In addition, the MSFF-Net model outperformed eight state-of-the-art single- and multi-modal models in the comparison experiments. The proposed MSFF-Net model provides ophthalmologists with a more accurate and efficient diagnostic pathway, helping them detect and treat retinal diseases earlier.
Funding: Supported by the National Key Research and Development Program of China (No. 2023YFB3105700).
Abstract: Phishing email detection represents a critical research challenge in cybersecurity. To address this, this paper proposes a novel Double-S (statistical-semantic) feature model based on the three core entities involved in email communication: the sender, the recipient, and the email content. We employ strategic game theory to analyze the offensive strategies of phishing attackers and the defensive strategies of protectors, extracting statistical features from these entities. We also leverage the Qwen large language model to excavate implicit semantic features (e.g., emotional manipulation and social engineering tactics) from email content. By integrating statistical and semantic features, our model achieves a robust representation of phishing emails. We introduce a hybrid detection model that integrates a convolutional neural network (CNN) module with the XGBoost (Extreme Gradient Boosting) classifier, effectively capturing local correlations in high-dimensional features. Experimental results on real-world phishing email datasets demonstrate the superiority of our approach, achieving an F1-score of 0.9587, precision of 0.9591, and recall of 0.9583, improvements of 1.3%–10.6% over state-of-the-art methods.
Funding: Supported by the Major Science and Technology Project of Hubei Province (2022AAA009).
Abstract: By integrating self-localization, environment mapping, and dynamic object tracking into a unified framework, visual simultaneous localization and mapping with multiple object tracking (SLAMMOT) enhances decision-making and interaction capabilities in applications such as autonomous driving, robotic navigation, and augmented reality. While numerous outstanding visual SLAMMOT methods have been proposed, the majority rely only on point features, overlooking the abundant and stable planar features on artificial objects that can provide valuable constraints. To address this limitation, we propose OP (object planar)-SLAM, an RGB-D SLAMMOT system that leverages planar features to improve object pose estimation and reconstruction accuracy. Specifically, we introduce an accurate object planar feature extraction and association method using normal images, alongside a novel object bundle adjustment framework that incorporates planar constraints for enhanced optimization. The proposed system is evaluated on both synthetic and public real-world datasets, including the Oxford multimotion dataset (OMD) and the KITTI tracking dataset. Notably, on the OMD, where planar features are prominent, our method improves object pose estimation accuracy by approximately 60%. Extensive experiments demonstrate its effectiveness in enhancing object pose estimation and reconstruction, achieving notable performance compared with existing methods. Furthermore, OP-SLAM runs in real time, making it suitable for practical robot and augmented reality applications.
Abstract: [Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that use either image data or environmental data. These methods fail to fully leverage multi-modal data to capture the diverse aspects of plant growth. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural networks (RNN) and long short-term memory (LSTM) was developed. The model integrated environmental and plant information to provide a holistic understanding of the growth process; it employed phenotypic and temporal feature extractors to comprehensively capture both types of features, enabling a deeper understanding of the interaction between tomato plants and their environment and ultimately yielding highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the two baselines, with MAPEs of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
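The evaluation protocol above (predict the height several days ahead from a fixed-length history, score with MAPE) is easy to make concrete. The sketch below shows a generic sliding-window setup and the MAPE metric; the window lengths and the synthetic series are illustrative assumptions, not the paper's data.

```python
import numpy as np

def make_windows(series, past, ahead):
    """Pair each `past`-day history with the value `ahead` days later,
    mirroring e.g. the 'past five days -> two days ahead' setting."""
    n = len(series) - past - ahead + 1
    X = np.array([series[i:i + past] for i in range(n)])
    y = np.array([series[i + past + ahead - 1] for i in range(n)])
    return X, y

def mape(y_true, y_pred):
    """Mean absolute percentage error (%), the abstract's metric."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

heights = np.arange(10.0, 22.0)            # 12 days of synthetic heights (cm)
X, y = make_windows(heights, past=5, ahead=2)
print(X.shape, y.shape)                    # (6, 5) (6,)
print(round(mape([100, 110, 120], [101, 110, 118]), 2))  # 0.89
```

Any sequence model (RNN, LSTM, or otherwise) would be trained to map each row of `X` to the corresponding entry of `y` and scored with `mape`.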
Abstract: Scene classification of high-resolution remote sensing (HRRS) images is an important research topic and has been applied broadly in many fields. Deep learning has shown high potential in this domain, owing to its powerful ability to characterize complex patterns. However, deep learning methods can omit some global and local information in HRRS images. To this end, in this article we adopt explicit global and local information to provide complementary cues for deep models. Specifically, we use a patch-based MS-CLBP method to acquire global and local representations, and we treat a pretrained CNN model as a feature extractor, extracting deep hierarchical features from its fully connected layers. After Fisher vector (FV) encoding, we obtain a holistic visual representation of the scene image. We view scene classification as a reconstruction procedure and train several class-specific stacked denoising autoencoders (SDAEs), i.e., one SDAE per class, classifying a test image according to its reconstruction error. Experimental results show that our combination method outperforms state-of-the-art deep learning classification methods without employing fine-tuning.
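The decision rule here, one reconstruction model per class and assignment to the class with the smallest reconstruction error, can be sketched independently of the SDAE itself. Below, a rank-k PCA reconstruction stands in for each class-specific autoencoder; this is a simplified linear stand-in for illustration, not the paper's SDAE.

```python
import numpy as np

def fit_class_model(X, k=1):
    """Linear stand-in for a class-specific SDAE: class mean + rank-k PCA basis."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def recon_error(x, model):
    mu, basis = model
    z = (x - mu) @ basis.T                # "encode"
    x_hat = mu + z @ basis                # "decode"
    return float(np.linalg.norm(x - x_hat))

def classify(x, models):
    """Assign the class whose model reconstructs x with the least error."""
    return int(np.argmin([recon_error(x, m) for m in models]))

# Toy data: class 0 varies along axis 0, class 1 along axis 1.
X0 = np.array([[t, 0.0, 0.0] for t in (-2, -1, 1, 2)])
X1 = np.array([[0.0, t, 0.0] for t in (-2, -1, 1, 2)])
models = [fit_class_model(X0), fit_class_model(X1)]
print(classify(np.array([3.0, 0.1, 0.0]), models))   # 0
```

A sample lying along class 0's direction of variation reconstructs almost perfectly under class 0's model and poorly under class 1's, so the argmin rule picks class 0.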
基金funded by the National Natural Science Foundation of China(61991413)the China Postdoctoral Science Foundation(2019M651142)+1 种基金the Natural Science Foundation of Liaoning Province(2021-KF-12-07)the Natural Science Foundations of Liaoning Province(2023-MS-322).
Abstract: Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capability and leverages inter-modal correlation to improve recognition performance; the robustness of the system can be further strengthened by judiciously exploiting the correlations among multi-modal features. Nevertheless, two issues persist in multi-modal feature-fusion recognition. First, efforts to improve recognition performance have not comprehensively considered the correlations among distinct modalities. Second, during modal fusion, improper weight selection diminishes the salience of crucial modal features, degrading overall recognition performance. To address these two issues, we introduce an enhanced DenseNet multi-modal recognition network founded on feature-level fusion. The information from the three modalities is fused akin to RGB channels, and the input network strengthens inter-modal correlation through channel correlation. Within the enhanced DenseNet network, the Efficient Channel Attention Network (ECA-Net) dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature, while depthwise separable convolution markedly reduces the training parameters and further enhances feature correlation. Experimental evaluations were conducted on four multi-modal databases built from six unimodal databases, including the multispectral palmprint and palm vein databases of the Chinese Academy of Sciences. The equal error rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. Compared with other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially enhances recognition performance, rendering it suitable for high-security environments with practical applicability. The experiments in this article used a modest sample database of 200 individuals; the next phase involves extending the method to larger databases.
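The channel-reweighting idea that ECA-Net contributes can be sketched in a few lines: pool each channel to a scalar, run a small 1D convolution across channels, squash with a sigmoid, and rescale each channel by its gate. The kernel size and feature shapes below are illustrative assumptions for a minimal NumPy version, not the network's actual configuration.

```python
import numpy as np

def eca_attention(features, kernel):
    """ECA-style channel attention on a (C, H, W) tensor: global average
    pool per channel, 1D conv across channels (circular padding, odd
    kernel length >= 3), sigmoid gate, then per-channel rescaling."""
    C = features.shape[0]
    gap = features.mean(axis=(1, 2))                 # (C,) channel descriptors
    k = len(kernel)
    pad = k // 2
    padded = np.concatenate([gap[-pad:], gap, gap[:pad]])
    conv = np.array([np.dot(padded[i:i + k], kernel) for i in range(C)])
    gate = 1.0 / (1.0 + np.exp(-conv))               # sigmoid channel weights
    return features * gate[:, None, None]

feats = np.ones((2, 3, 3))                           # 2 channels of 3x3 maps
out = eca_attention(feats, np.array([0.0, 1.0, 0.0]))  # identity conv kernel
print(out[0, 0, 0])                                  # sigmoid(1) ~ 0.731
```

With the identity kernel the gate reduces to a sigmoid of each channel's mean activation, so uniformly-active channels are all scaled by the same factor; a learned kernel lets neighboring channels influence each other's weights.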
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62071315 and 62271336).
Abstract: The multi-modal characteristics of mineral particles play a pivotal role in classification accuracy, which is critical for a profound understanding of the Earth's composition and for the effective exploitation and utilization of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and difficulty in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. First, MMGC-Net employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. Next, a cross-polarized intra-modal feature fusion module refines spatiotemporal features from the extracted cross-polarized sequence features. Finally, an inter-modal feature fusion module integrates the two modalities' features to enhance classification precision. Quantitative and qualitative experimental results indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net demonstrates marked superiority in mineral particle multi-modal feature learning and in four classification evaluation metrics, and it shows better stability than existing models.
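The overall data flow (a parameter-shared extractor applied to both polarization modalities, temporal pooling of the cross-polarized sequence, then inter-modal concatenation) can be sketched in miniature. The linear-plus-ReLU extractor, image sizes, and mean pooling below are deliberate simplifications standing in for the 2D backbone and fusion modules.

```python
import numpy as np

def shared_extract(img, W):
    """One weight matrix W serves both modalities (parameter sharing),
    keeping plane- and cross-polarized features in an aligned space."""
    return np.maximum(img.reshape(-1) @ W, 0.0)      # linear layer + ReLU

def fuse(plane_img, cross_seq, W):
    """Intra-modal fusion: average cross-polarized sequence features over
    frames. Inter-modal fusion: concatenate with plane-polarized features."""
    plane_feat = shared_extract(plane_img, W)
    cross_feat = np.mean([shared_extract(f, W) for f in cross_seq], axis=0)
    return np.concatenate([plane_feat, cross_feat])

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))                # toy 4x4 image -> 8-dim features
plane = rng.normal(size=(4, 4))
cross = [rng.normal(size=(4, 4)) for _ in range(6)]  # 6 rotation frames
print(fuse(plane, cross, W).shape)          # (16,)
```

Because both branches pass through the same `W`, corresponding dimensions of the two feature vectors are directly comparable, which is the alignment property the shared backbone is meant to provide.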