[Objectives]This study was conducted to achieve rapid and accurate detection of protein content in rice with a particle size of 1.0 mm.[Methods]A multi-model fusion strategy was proposed on the basis of Stacking ensem...[Objectives]This study was conducted to achieve rapid and accurate detection of protein content in rice with a particle size of 1.0 mm.[Methods]A multi-model fusion strategy was proposed on the basis of Stacking ensemble learning.A base learner pool was constructed,containing Partial Least Squares(PLS),Support Vector Machine(SVM),Deep Extreme Learning Machine(DELM),Random Forest(RF),Gradient Boosting Decision Tree(GBDT),and Multilayer Perceptron(MLP).PLS,DELM,and Linear Regression(LR)were used as meta-learner candidates.Employing integer coding technology,systematic dynamic combinations of base learners and meta-learners were generated,resulting in a total of 40 non-repetitive fusion models.The optimal combination was selected through a comprehensive evaluation based on multiple assessment indicators.[Results]The combination"PLS-DELM-MLP-LR"(code 1367)achieved coefficients of determination of 0.9732 and 0.9780 on the validation set and independent test set,respectively,with relative root mean square errors of 2.35%and 2.36%,and residual predictive deviations of 6.1075 and 6.7479,respectively.[Conclusions]The Stacking fusion model significantly enhances the predictive accuracy and robustness of spectral quantitative analysis,providing an efficient and feasible solution for modeling complex agricultural product spectral data.展开更多
Multimodal Sentiment Analysis(SA)is gaining popularity due to its broad application potential.The existing studies have focused on the SA of single modalities,such as texts or photos,posing challenges in effectively h...Multimodal Sentiment Analysis(SA)is gaining popularity due to its broad application potential.The existing studies have focused on the SA of single modalities,such as texts or photos,posing challenges in effectively handling social media data with multiple modalities.Moreover,most multimodal research has concentrated on merely combining the two modalities rather than exploring their complex correlations,leading to unsatisfactory sentiment classification results.Motivated by this,we propose a new visualtextual sentiment classification model named Multi-Model Fusion(MMF),which uses a mixed fusion framework for SA to effectively capture the essential information and the intrinsic relationship between the visual and textual content.The proposed model comprises three deep neural networks.Two different neural networks are proposed to extract the most emotionally relevant aspects of image and text data.Thus,more discriminative features are gathered for accurate sentiment classification.Then,a multichannel joint fusion modelwith a self-attention technique is proposed to exploit the intrinsic correlation between visual and textual characteristics and obtain emotionally rich information for joint sentiment classification.Finally,the results of the three classifiers are integrated using a decision fusion scheme to improve the robustness and generalizability of the proposed model.An interpretable visual-textual sentiment classification model is further developed using the Local Interpretable Model-agnostic Explanation model(LIME)to ensure the model’s explainability and resilience.The proposed MMF model has been tested on four real-world sentiment datasets,achieving(99.78%)accuracy on Binary_Getty(BG),(99.12%)on Binary_iStock(BIS),(95.70%)on Twitter,and(79.06%)on the Multi-View Sentiment Analysis(MVSA)dataset.These results demonstrate the superior performance of our MMF model compared to single-model approaches and current state-of-the-art techniques based on model evaluation criteria.展开更多
Background:Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed,radiation dose,and image quality,forcing clinicians to work with noisy,incomplete data.Existing reconstructio...Background:Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed,radiation dose,and image quality,forcing clinicians to work with noisy,incomplete data.Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches.Methods:We present LDM-PIR,a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging(MRI),CT,and low-photon imaging.Unlike traditional iterative methods,which are computationally expensive,or task-specific deep learning approaches lacking generalizability,integrates three innovations.A physics-conditioned diffusion framework that embeds acquisition operators(Fourier/Radon transforms)and noise models directly into the reconstruction process.A multi-model architecture that unifies denoising,inpainting,and super-resolution via shared weight conditioning.A lightweight design(2.1M parameters)enabling rapid inference(0.8s/image on GPU).Through self-supervised fine-tuning with measurement consistency losses adapts to new imaging modalities using fewer annotated samples.Results:Achieves state-of-the-art performance on fastMRI(peak signal-to-noise ratio(PSNR):34.04 for single-coil/31.50 for multi-coil)and Lung Image Database Consortium and Image Database Resource Initiative(28.83 PSNR under Poisson noise).Clinical evaluations demonstrate superior preservation of anatomical structures,with SSIM improvements of 8.8%for single-coil and 4.36%for multi-coil MRI over uDPIR.Conclusion:It offers a flexible,efficient,and scalable solution for medical image reconstruction,addressing the challenges of noise,undersampling,and modality generalization.The model’s lightweight design allows for rapid inference,while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets,making it suitable for real-world clinical applications.展开更多
The contemporary era is characterized by rapid technological advancements,particularly in the fields of communication and multimedia.Digital media has significantly influenced the daily lives of individuals of all age...The contemporary era is characterized by rapid technological advancements,particularly in the fields of communication and multimedia.Digital media has significantly influenced the daily lives of individuals of all ages.One of the emerging domains in digital media is the creation of cartoons and animated videos.The accessibility of the internet has led to a surge in the consumption of cartoons among young children,presenting challenges in monitoring and controlling the content they view.The prevalence of cartoon videos containing potentially violent scenes has raised concerns regarding their impact,especially on young and impressionableminds.This article contributes to the growing concerns about the impact of animated media on children’s mental health and offers solutions to help mitigate these effects.To address this issue,an intelligent,multi-CNN fusion framework is proposed for detecting and predicting violent content in upcoming frames of animated videos.The framework integrates probabilistic and deep learning methodologies by leveraging a combination of visual and temporal features for violence prediction in future scenes.Two specific convolutional neural network classifiers i.e.,VGG16 and ResNet18 are employed to classify scenes from animated content as violent or non-violent.To enhance decision robustness,this study introduces a fusion strategy based on weighted averaging,combining the outputs of both Convolutional Neural Networks(CNNs)into a single decision stream.The resulting classifications are subsequently fed into Naive Bayes classifier,which analyzes sequential patterns to forecast violence in future scenes.The experimental findings demonstrate that the proposed framework achieved predictive accuracy of 92.84%,highlighting its effectiveness for intelligent content moderation.These results underscore the potential of intelligent data fusion techniques in enhancing the reliability and robustness of automated violence detection systems in animated content.This framework offers a promising solution for safeguarding young audiences by enabling proactive and accurate moderation of animated videos.展开更多
Visible and infrared(RGB-IR)fusion object detection plays an important role in security,disaster relief,etc.In recent years,deep-learning-based RGB-IR fusion detection methods have been developing rapidly,but still st...Visible and infrared(RGB-IR)fusion object detection plays an important role in security,disaster relief,etc.In recent years,deep-learning-based RGB-IR fusion detection methods have been developing rapidly,but still struggle to deal with the complex and changing scenarios captured by drones,mainly due to two reasons:(A)RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability.(B)RGB-IR fusion detectors are susceptible to redundant features that reduce accuracy and efficiency.In this paper,an innovative RGB-IR fusion detection framework based on global-local feature optimization,named GLFDet,is proposed to improve the detection performance and efficiency of drone-captured objects.The key components of GLFDet include a Global Feature Optimization(GFO)module,a Local Feature Optimization(LFO)module and a Channel Separation Fusion(CSF)module.Specifically,GFO calculates the information content of the input image from the frequency domain and optimizes the features holistically.Then,LFO dynamically selects high-value features and filters out low-value features before fusion,which significantly improves the efficiency of fusion.Finally,CSF fuses the RGB and IR features across the corresponding channels,which avoids the rearrangement of the channel relationships and enhances the model stability.Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets Drone Vehicle,VEDAI,and LLVIP.In addition,GLFDet is more lightweight than other comparable models,making it more appealing to edge devices such as drones.The code is available at https://github.com/lao chen330/GLFDet.展开更多
A dual-phase synergistic enhancement method was adopted to strengthen the Al-Mn-Mg-Sc-Zr alloy fabricated by laser powder bed fusion(LPBF)by leveraging the unique advantages of Er and TiB_(2).Spherical powders of 0.5w...A dual-phase synergistic enhancement method was adopted to strengthen the Al-Mn-Mg-Sc-Zr alloy fabricated by laser powder bed fusion(LPBF)by leveraging the unique advantages of Er and TiB_(2).Spherical powders of 0.5wt%Er-1wt%TiB_(2)/Al-Mn-Mg-Sc-Zr nanocomposite were prepared using vacuum homogenization technique,and the density of samples prepared through the LPBF process reached 99.8%.The strengthening and toughening mechanisms of Er-TiB_(2)were investigated.The results show that Al_(3)Er diffraction peaks are detected by X-ray diffraction analysis,and texture strength decreases according to electron backscatter diffraction results.The added Er and TiB_(2)nano-reinforcing phases act as heterogeneous nucleation sites during the LPBF forming process,hindering grain growth and effectively refining the grains.After incorporating the Er-TiB_(2)dual-phase nano-reinforcing phases,the tensile strength and elongation at break of the LPBF-deposited samples reach 550 MPa and 18.7%,which are 13.4%and 26.4%higher than those of the matrix material,respectively.展开更多
The process of nuclear fusion in the presence of a laser field was theoretically analyzed.The analysis is applicable to most fusion reactions and different types of currently available intense lasers,from X-ray free-e...The process of nuclear fusion in the presence of a laser field was theoretically analyzed.The analysis is applicable to most fusion reactions and different types of currently available intense lasers,from X-ray free-electron lasers to solid-state near-infrared lasers.Laser fields were shown to enhance the fusion yields,and the mechanism of this enhancement was explained.Low-frequency lasers are more efficient in enhancing fusion than high-frequency lasers.The calculation results show enhancements of fusion yields by orders of magnitude with currently available intense low-frequency laser fields.The temperature requirement for controlled nuclear fusion may be reduced with the aid of intense laser fields.展开更多
Fault diagnosis of rolling bearings is crucial for ensuring the stable operation of mechanical equipment and production safety in industrial environments.However,due to the nonlinearity and non-stationarity of collect...Fault diagnosis of rolling bearings is crucial for ensuring the stable operation of mechanical equipment and production safety in industrial environments.However,due to the nonlinearity and non-stationarity of collected vibration signals,single-modal methods struggle to capture fault features fully.This paper proposes a rolling bearing fault diagnosis method based on multi-modal information fusion.The method first employs the Hippopotamus Optimization Algorithm(HO)to optimize the number of modes in Variational Mode Decomposition(VMD)to achieve optimal modal decomposition performance.It combines Convolutional Neural Networks(CNN)and Gated Recurrent Units(GRU)to extract temporal features from one-dimensional time-series signals.Meanwhile,the Markovian Transition Field(MTF)is used to transform one-dimensional signals into two-dimensional images for spatial feature mining.Through visualization techniques,the effectiveness of generated images from different parameter combinations is compared to determine the optimal parameter configuration.A multi-modal network(GSTCN)is constructed by integrating Swin-Transformer and the Convolutional Block Attention Module(CBAM),where the attention module is utilized to enhance fault features.Finally,the fault features extracted from different modalities are deeply fused and fed into a fully connected layer to complete fault classification.Experimental results show that the GSTCN model achieves an average diagnostic accuracy of 99.5%across three datasets,significantly outperforming existing comparison methods.This demonstrates that the proposed model has high diagnostic precision and good generalization ability,providing an efficient and reliable solution for rolling bearing fault diagnosis.展开更多
Parkinson’s disease remains a major clinical issue in terms of early detection,especially during its prodromal stage when symptoms are not evident or not distinct.To address this problem,we proposed a new deep learni...Parkinson’s disease remains a major clinical issue in terms of early detection,especially during its prodromal stage when symptoms are not evident or not distinct.To address this problem,we proposed a new deep learning 2-based approach for detecting Parkinson’s disease before any of the overt symptoms develop during their prodromal stage.We used 5 publicly accessible datasets,including UCI Parkinson’s Voice,Spiral Drawings,PaHaW,NewHandPD,and PPMI,and implemented a dual stream CNN–BiLSTM architecture with Fisher-weighted feature merging and SHAP-based explanation.The findings reveal that the model’s performance was superior and achieved 98.2%,a F1-score of 0.981,and AUC of 0.991 on the UCI Voice dataset.The model’s performance on the remaining datasets was also comparable,with up to a 2–7 percent betterment in accuracy compared to existing strong models such as CNN–RNN–MLP,ILN–GNet,and CASENet.Across the evidence,the findings back the diagnostic promise of micro-tremor assessment and demonstrate that combining temporal and spatial features with a scatter-based segment for a multi-modal approach can be an effective and scalable platform for an“early,”interpretable PD screening system.展开更多
A low-temperature-resistant and high-strength stainless-steel jacket is a key component in the superconducting magnet of a fusion reactor.The development of cryogenic structural materials with high strength and toughn...A low-temperature-resistant and high-strength stainless-steel jacket is a key component in the superconducting magnet of a fusion reactor.The development of cryogenic structural materials with high strength and toughness poses a challenge for the future development of high-field superconducting magnets in fusion reactors.The yield strength of the International Thermonuclear Experimental Reactor developed for low-temperature structural materials at 4.2K is below 1100MPa,which fails to meet the demand for structural components with yield strengths exceeding 1500MPa at 4.2K in the future fusion reactors.CHSN01(formerly N50H),which is a low-temperature structural material developed in China,exhibits exceptional strength and toughness,thereby making it highly promising for practical applications.Recently,a 30 t jacket measuring approximately 5000m in total length was produced.Its low-temperature mechanical properties were tested using a sampling method to ensure compliance with application requirements.This paper presents the experimental data of the CHSN01 jacket and tests of the physical properties of the material in the temperature range of 4–300 K.The physical properties were unaffected by magnetic field.Furthermore,this paper discusses the feasibility of employing CHSN01 as a cryogenic structural material capable of withstanding high magnetic fields in next-generation fusion reactors.展开更多
Traffic sign detection is a critical component of driving systems.Single-stage network-based traffic sign detection algorithms,renowned for their fast detection speeds and high accuracy,have become the dominant approa...Traffic sign detection is a critical component of driving systems.Single-stage network-based traffic sign detection algorithms,renowned for their fast detection speeds and high accuracy,have become the dominant approach in current practices.However,in complex and dynamic traffic scenes,particularly with smaller traffic sign objects,challenges such as missed and false detections can lead to reduced overall detection accuracy.To address this issue,this paper proposes a detection algorithm that integrates edge and shape information.Recognizing that traffic signs have specific shapes and distinct edge contours,this paper introduces an edge feature extraction branch within the backbone network,enabling adaptive fusion with features of the same hierarchical level.Additionally,a shape prior convolution module is designed to replaces the first two convolutional modules of the backbone network,aimed at enhancing the model's perception ability for specific shape objects and reducing its sensitivity to background noise.The algorithm was evaluated on the CCTSDB and TT100k datasets,and compared to YOLOv8s,the mAP50 values increased by 3.0%and 10.4%,respectively,demonstrating the effectiveness of the proposed method in improving the accuracy of traffic sign detection.展开更多
AIM:To investigate the effects of binocular fusional C-optotypes(positive/negative)and 2D planar C-optotypes on the amplitude and stability of transient accommodation(TAC)in adults,and to provide a basis for non-conta...AIM:To investigate the effects of binocular fusional C-optotypes(positive/negative)and 2D planar C-optotypes on the amplitude and stability of transient accommodation(TAC)in adults,and to provide a basis for non-contact myopia intervention.METHODS:This was a self-controlled study.Using redblue 3D technology,four experimental stages were set up:Test A[fixating on the 1 m negative fusional C-optotypes,8△base-in(BI)],Test B(fixating on the 5 m planar C-optotypes),Test C(fixating on the 1 m planar C-optotypes),and Test D[fixating on the 1 m positive fusional C-optotypes,20△base-out(BO)].A WAM-5500 open-field autorefractor was used to measure TAC and accommodative microfluctuations[evaluated via interquartile range(IQR)and median-based coefficient of variation(CVmed)].Additionally,the convergence accommodation to convergence(CA/C)ratio was calculated,and a visual fatigue questionnaire was administered to assess participants’subjective visual comfort.RESULTS:A total of 21 subjects(7 males,14 females;aged 23-41y)with normal binocular visual function were enrolled.The results showed that the TAC increased gradually across the four stages,and these values were Test A(-0.35±0.26 D)<Test B(-0.46±0.24 D)<Test C(-0.77±0.32 D)<Test D(-1.38±0.31 D).There were significant overall differences(F=56.136,P<0.001).Compared with Test C,Test A reduced TAC by 0.42 D(P<0.05),while Test D increased it by 0.61 D(P<0.001).There was no significant intergroup difference in accommodative fluctuation amplitude(all P>0.05),but the fluctuation stability of Test D showed a significant difference between the first 20s and the second 20s(P=0.017).The CA/C ratio was significantly higher in Test D(0.05±0.02 D/△)than in Test A(0.03±0.02 D/△,P=0.007),indicating stronger accommodation-convergence linkage during positive fusional fixation.The visual fatigue scores of all stages were low(median 0-1),with Test D slightly higher than Test B and Test C(P<0.05).No linear correlation was found between TAC and age(all r<0.1,P>0.05).CONCLUSION:Negative fusional C-optotypes induce ciliary muscle relaxation to reduce TAC,while positive fusional C-optotypes enhance accommodation-convergence coordination to increase TAC.The red-blue 3D-based noncontact training mode exhibits good safety(median visual fatigue scores:0-1 across all tests)and provides a novel dual-directional(relaxation-activation)strategy for myopia prevention and control.展开更多
Camouflaged Object Detection(COD)aims to identify objects that share highly similar patterns—such as texture,intensity,and color—with their surrounding environment.Due to their intrinsic resemblance to the backgroun...Camouflaged Object Detection(COD)aims to identify objects that share highly similar patterns—such as texture,intensity,and color—with their surrounding environment.Due to their intrinsic resemblance to the background,camouflaged objects often exhibit vague boundaries and varying scales,making it challenging to accurately locate targets and delineate their indistinct edges.To address this,we propose a novel camouflaged object detection network called Edge-Guided and Multi-scale Fusion Network(EGMFNet),which leverages edge-guided multi-scale integration for enhanced performance.The model incorporates two innovative components:a Multi-scale Fusion Module(MSFM)and an Edge-Guided Attention Module(EGA).These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries.Moreover,recognizing the rich contextual information in fused features,we introduce a Dual-Branch Global Context Module(DGCM)to refine features using extensive global context,thereby generatingmore informative representations.Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics.Specifically,on COD10K,our EGMFNet-P improves F_(β)by 4.8 points and reduces mean absolute error(MAE)by 0.006 compared with ZoomNeXt;on NC4K,it achieves a 3.6-point increase in F_(β).OnCAMO and CHAMELEON,it obtains 4.5-point increases in F_(β),respectively.These consistent gains substantiate the superiority and robustness of EGMFNet.展开更多
With the growing demand formore comprehensive and nuanced sentiment understanding,Multimodal Sentiment Analysis(MSA)has gained significant traction in recent years and continues to attract widespread attention in the ...With the growing demand formore comprehensive and nuanced sentiment understanding,Multimodal Sentiment Analysis(MSA)has gained significant traction in recent years and continues to attract widespread attention in the academic community.Despite notable advances,existing approaches still face critical challenges in both information modeling and modality fusion.On one hand,many current methods rely heavily on encoders to extract global features from each modality,which limits their ability to capture latent fine-grained emotional cues within modalities.On the other hand,prevailing fusion strategies often lack mechanisms to model semantic discrepancies across modalities and to adaptively regulate modality interactions.To address these limitations,we propose a novel framework for MSA,termed Multi-Granularity Guided Fusion(MGGF).The proposed framework consists of three core components:(i)Multi-Granularity Feature Extraction Module,which simultaneously captures both global and local emotional features within each modality,and integrates them to construct richer intra-modal representations;(ii)Cross-ModalGuidance Learning Module(CMGL),which introduces a cross-modal scoring mechanism to quantify the divergence and complementarity betweenmodalities.These scores are then used as guiding signals to enable the fusion strategy to adaptively respond to scenarios of modality agreement or conflict;(iii)Cross-Modal Fusion Module(CMF),which learns the semantic dependencies among modalities and facilitates deep-level emotional feature interaction,thereby enhancing sentiment prediction with complementary information.We evaluate MGGF on two benchmark datasets:MVSA-Single and MVSA-Multiple.Experimental results demonstrate that MGGF outperforms the current state-of-the-art model CLMLF on MVSA-Single by achieving a 2.32% improvement in F1 score.On MVSA-Multiple,it surpasses MGNNS with a 0.26% increase in accuracy.These results substantiate the effectiveness ofMGGFin addressing two major limitations of existing methods—insufficient intra-modal fine-grained sentiment modeling and inadequate cross-modal semantic fusion.展开更多
In recent years,with the rapid advancement of artificial intelligence,object detection algorithms have made significant strides in accuracy and computational efficiency.Notably,research and applications of Anchor-Free...In recent years,with the rapid advancement of artificial intelligence,object detection algorithms have made significant strides in accuracy and computational efficiency.Notably,research and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images(ORSIs).However,in the realmof adversarial attacks,developing adversarial techniques tailored to Anchor-Freemodels remains challenging.Adversarial examples generated based on Anchor-Based models often exhibit poor transferability to these new model architectures.Furthermore,the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks.This study presents an improved cross-conv-block feature fusion You Only Look Once(YOLO)architecture,meticulously engineered to facilitate the extraction ofmore comprehensive semantic features during the backpropagation process.To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs,a novel dense bounding box attack strategy is proposed.This approach leverages dense target bounding boxes loss in the calculation of adversarial loss functions.Furthermore,by integrating translation-invariant(TI)and momentum-iteration(MI)adversarial methodologies,the proposed framework significantly improves the transferability of adversarial attacks.Experimental results demonstrate that our method achieves superior adversarial attack performance,with adversarial transferability rates(ATR)of 67.53%on the NWPU VHR-10 dataset and 90.71%on the HRSC2016 dataset.Compared to ensemble adversarial attack and cascaded adversarial attack approaches,our method generates adversarial examples in an average of 0.64 s,representing an approximately 14.5%improvement in efficiency under equivalent conditions.展开更多
Magnesium-rare earth(Mg-RE)alloys are pivotal for lightweight applications in aerospace and advanced engineering due to their high specific strength.However,manufacturing large-scale complex components via monolithic ...Magnesium-rare earth(Mg-RE)alloys are pivotal for lightweight applications in aerospace and advanced engineering due to their high specific strength.However,manufacturing large-scale complex components via monolithic casting is challenging owing to defects such as RE oxides and shrinkage porosity,making fusion welding essential for both defect repair and structural joining.This review comprehensively examines recent advances in fusion welding of Mg-RE alloys,with emphasis on the interplay between their unique physicochemical properties and welding metallurgy.Various fusion welding methods suitable for Mg-RE alloys are compared and analyzed.Detailed characterization of joint regions reveals how thermal gradients and cooling rates govern phase evolution,grain morphology,and defect formation.Moreover,welding parameters and heat treatment strategies are systematically discussed for the microstructural configuration,especially for the inherent conflicts between grain coarsening in fusion zone and eutectic dissolution in heat-affected zone.Future research directions are also outlined.By correlating Mg-RE alloy properties with fusion welding processes,this review provides practical insights for designing reliable welded structures in critical applications.展开更多
文摘[Objectives]This study was conducted to achieve rapid and accurate detection of protein content in rice with a particle size of 1.0 mm.[Methods]A multi-model fusion strategy was proposed on the basis of Stacking ensemble learning.A base learner pool was constructed,containing Partial Least Squares(PLS),Support Vector Machine(SVM),Deep Extreme Learning Machine(DELM),Random Forest(RF),Gradient Boosting Decision Tree(GBDT),and Multilayer Perceptron(MLP).PLS,DELM,and Linear Regression(LR)were used as meta-learner candidates.Employing integer coding technology,systematic dynamic combinations of base learners and meta-learners were generated,resulting in a total of 40 non-repetitive fusion models.The optimal combination was selected through a comprehensive evaluation based on multiple assessment indicators.[Results]The combination"PLS-DELM-MLP-LR"(code 1367)achieved coefficients of determination of 0.9732 and 0.9780 on the validation set and independent test set,respectively,with relative root mean square errors of 2.35%and 2.36%,and residual predictive deviations of 6.1075 and 6.7479,respectively.[Conclusions]The Stacking fusion model significantly enhances the predictive accuracy and robustness of spectral quantitative analysis,providing an efficient and feasible solution for modeling complex agricultural product spectral data.
文摘Multimodal Sentiment Analysis(SA)is gaining popularity due to its broad application potential.The existing studies have focused on the SA of single modalities,such as texts or photos,posing challenges in effectively handling social media data with multiple modalities.Moreover,most multimodal research has concentrated on merely combining the two modalities rather than exploring their complex correlations,leading to unsatisfactory sentiment classification results.Motivated by this,we propose a new visualtextual sentiment classification model named Multi-Model Fusion(MMF),which uses a mixed fusion framework for SA to effectively capture the essential information and the intrinsic relationship between the visual and textual content.The proposed model comprises three deep neural networks.Two different neural networks are proposed to extract the most emotionally relevant aspects of image and text data.Thus,more discriminative features are gathered for accurate sentiment classification.Then,a multichannel joint fusion modelwith a self-attention technique is proposed to exploit the intrinsic correlation between visual and textual characteristics and obtain emotionally rich information for joint sentiment classification.Finally,the results of the three classifiers are integrated using a decision fusion scheme to improve the robustness and generalizability of the proposed model.An interpretable visual-textual sentiment classification model is further developed using the Local Interpretable Model-agnostic Explanation model(LIME)to ensure the model’s explainability and resilience.The proposed MMF model has been tested on four real-world sentiment datasets,achieving(99.78%)accuracy on Binary_Getty(BG),(99.12%)on Binary_iStock(BIS),(95.70%)on Twitter,and(79.06%)on the Multi-View Sentiment Analysis(MVSA)dataset.These results demonstrate the superior performance of our MMF model compared to single-model approaches and current state-of-the-art techniques based on model evaluation criteria.
文摘Background:Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed,radiation dose,and image quality,forcing clinicians to work with noisy,incomplete data.Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches.Methods:We present LDM-PIR,a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging(MRI),CT,and low-photon imaging.Unlike traditional iterative methods,which are computationally expensive,or task-specific deep learning approaches lacking generalizability,integrates three innovations.A physics-conditioned diffusion framework that embeds acquisition operators(Fourier/Radon transforms)and noise models directly into the reconstruction process.A multi-model architecture that unifies denoising,inpainting,and super-resolution via shared weight conditioning.A lightweight design(2.1M parameters)enabling rapid inference(0.8s/image on GPU).Through self-supervised fine-tuning with measurement consistency losses adapts to new imaging modalities using fewer annotated samples.Results:Achieves state-of-the-art performance on fastMRI(peak signal-to-noise ratio(PSNR):34.04 for single-coil/31.50 for multi-coil)and Lung Image Database Consortium and Image Database Resource Initiative(28.83 PSNR under Poisson noise).Clinical evaluations demonstrate superior preservation of anatomical structures,with SSIM improvements of 8.8%for single-coil and 4.36%for multi-coil MRI over uDPIR.Conclusion:It offers a flexible,efficient,and scalable solution for medical image reconstruction,addressing the challenges of noise,undersampling,and modality generalization.The model’s lightweight design allows for rapid inference,while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets,making it suitable for real-world clinical applications.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R138),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The contemporary era is characterized by rapid technological advancements,particularly in the fields of communication and multimedia.Digital media has significantly influenced the daily lives of individuals of all ages.One of the emerging domains in digital media is the creation of cartoons and animated videos.The accessibility of the internet has led to a surge in the consumption of cartoons among young children,presenting challenges in monitoring and controlling the content they view.The prevalence of cartoon videos containing potentially violent scenes has raised concerns regarding their impact,especially on young and impressionableminds.This article contributes to the growing concerns about the impact of animated media on children’s mental health and offers solutions to help mitigate these effects.To address this issue,an intelligent,multi-CNN fusion framework is proposed for detecting and predicting violent content in upcoming frames of animated videos.The framework integrates probabilistic and deep learning methodologies by leveraging a combination of visual and temporal features for violence prediction in future scenes.Two specific convolutional neural network classifiers i.e.,VGG16 and ResNet18 are employed to classify scenes from animated content as violent or non-violent.To enhance decision robustness,this study introduces a fusion strategy based on weighted averaging,combining the outputs of both Convolutional Neural Networks(CNNs)into a single decision stream.The resulting classifications are subsequently fed into Naive Bayes classifier,which analyzes sequential patterns to forecast violence in future scenes.The experimental findings demonstrate that the proposed framework achieved predictive accuracy of 92.84%,highlighting its effectiveness for intelligent content moderation.These results underscore the potential of intelligent data fusion techniques in enhancing the reliability and robustness of automated violence detection systems in animated content.This framework offers a promising solution for safeguarding young audiences by enabling proactive and accurate moderation of animated videos.
基金supported by the National Natural Science Foundation of China(No.62276204)the Fundamental Research Funds for the Central Universities,China(No.YJSJ24011)+1 种基金the Natural Science Basic Research Program of Shaanxi,China(Nos.2022JM-340 and 2023-JC-QN-0710)the China Postdoctoral Science Foundation(Nos.2020T130494 and 2018M633470)。
文摘Visible and infrared(RGB-IR)fusion object detection plays an important role in security,disaster relief,etc.In recent years,deep-learning-based RGB-IR fusion detection methods have been developing rapidly,but still struggle to deal with the complex and changing scenarios captured by drones,mainly due to two reasons:(A)RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability.(B)RGB-IR fusion detectors are susceptible to redundant features that reduce accuracy and efficiency.In this paper,an innovative RGB-IR fusion detection framework based on global-local feature optimization,named GLFDet,is proposed to improve the detection performance and efficiency of drone-captured objects.The key components of GLFDet include a Global Feature Optimization(GFO)module,a Local Feature Optimization(LFO)module and a Channel Separation Fusion(CSF)module.Specifically,GFO calculates the information content of the input image from the frequency domain and optimizes the features holistically.Then,LFO dynamically selects high-value features and filters out low-value features before fusion,which significantly improves the efficiency of fusion.Finally,CSF fuses the RGB and IR features across the corresponding channels,which avoids the rearrangement of the channel relationships and enhances the model stability.Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets Drone Vehicle,VEDAI,and LLVIP.In addition,GLFDet is more lightweight than other comparable models,making it more appealing to edge devices such as drones.The code is available at https://github.com/lao chen330/GLFDet.
基金Shaanxi Province Qin Chuangyuan“Scientist+Engineer”Team Construction Project(2022KXJ-071)2022 Qin Chuangyuan Achievement Transformation Incubation Capacity Improvement Project(2022JH-ZHFHTS-0012)+8 种基金Shaanxi Province Key Research and Development Plan-“Two Chains”Integration Key Project-Qin Chuangyuan General Window Industrial Cluster Project(2023QCY-LL-02)Xixian New Area Science and Technology Plan(2022-YXYJ-003,2022-XXCY-010)2024 Scientific Research Project of Shaanxi National Defense Industry Vocational and Technical College(Gfy24-07)Shaanxi Vocational and Technical Education Association 2024 Vocational Education Teaching Reform Research Topic(2024SZX354)National Natural Science Foundation of China(U24A20115)2024 Shaanxi Provincial Education Department Service Local Special Scientific Research Program Project-Industrialization Cultivation Project(24JC005,24JC063)Shaanxi Province“14th Five-Year Plan”Education Science Plan,2024 Project(SGH24Y3181)National Key Research and Development Program of China(2023YFB4606400)Longmen Laboratory Frontier Exploration Topics Project(LMQYTSKT003)。
文摘A dual-phase synergistic enhancement method was adopted to strengthen the Al-Mn-Mg-Sc-Zr alloy fabricated by laser powder bed fusion(LPBF)by leveraging the unique advantages of Er and TiB_(2).Spherical powders of 0.5wt%Er-1wt%TiB_(2)/Al-Mn-Mg-Sc-Zr nanocomposite were prepared using vacuum homogenization technique,and the density of samples prepared through the LPBF process reached 99.8%.The strengthening and toughening mechanisms of Er-TiB_(2)were investigated.The results show that Al_(3)Er diffraction peaks are detected by X-ray diffraction analysis,and texture strength decreases according to electron backscatter diffraction results.The added Er and TiB_(2)nano-reinforcing phases act as heterogeneous nucleation sites during the LPBF forming process,hindering grain growth and effectively refining the grains.After incorporating the Er-TiB_(2)dual-phase nano-reinforcing phases,the tensile strength and elongation at break of the LPBF-deposited samples reach 550 MPa and 18.7%,which are 13.4%and 26.4%higher than those of the matrix material,respectively.
基金supported by the National Natural Science Foundation of China(Nos.12405288,12374241,12474484,U2330401,12088101)the Natural Science Foundation of Top Talent of SZTU(No.GDRC202526)。
文摘The process of nuclear fusion in the presence of a laser field was theoretically analyzed.The analysis is applicable to most fusion reactions and different types of currently available intense lasers,from X-ray free-electron lasers to solid-state near-infrared lasers.Laser fields were shown to enhance the fusion yields,and the mechanism of this enhancement was explained.Low-frequency lasers are more efficient in enhancing fusion than high-frequency lasers.The calculation results show enhancements of fusion yields by orders of magnitude with currently available intense low-frequency laser fields.The temperature requirement for controlled nuclear fusion may be reduced with the aid of intense laser fields.
基金funded by the Jilin Provincial Department of Science and Technology,grant number 20230101208JC.
文摘Fault diagnosis of rolling bearings is crucial for ensuring the stable operation of mechanical equipment and production safety in industrial environments.However,due to the nonlinearity and non-stationarity of collected vibration signals,single-modal methods struggle to capture fault features fully.This paper proposes a rolling bearing fault diagnosis method based on multi-modal information fusion.The method first employs the Hippopotamus Optimization Algorithm(HO)to optimize the number of modes in Variational Mode Decomposition(VMD)to achieve optimal modal decomposition performance.It combines Convolutional Neural Networks(CNN)and Gated Recurrent Units(GRU)to extract temporal features from one-dimensional time-series signals.Meanwhile,the Markovian Transition Field(MTF)is used to transform one-dimensional signals into two-dimensional images for spatial feature mining.Through visualization techniques,the effectiveness of generated images from different parameter combinations is compared to determine the optimal parameter configuration.A multi-modal network(GSTCN)is constructed by integrating Swin-Transformer and the Convolutional Block Attention Module(CBAM),where the attention module is utilized to enhance fault features.Finally,the fault features extracted from different modalities are deeply fused and fed into a fully connected layer to complete fault classification.Experimental results show that the GSTCN model achieves an average diagnostic accuracy of 99.5%across three datasets,significantly outperforming existing comparison methods.This demonstrates that the proposed model has high diagnostic precision and good generalization ability,providing an efficient and reliable solution for rolling bearing fault diagnosis.
基金supported via funding from Prince Sattam bin Abdulaziz University project number(PSAU/2025/03/32440).
文摘Parkinson’s disease remains a major clinical issue in terms of early detection,especially during its prodromal stage when symptoms are not evident or not distinct.To address this problem,we proposed a new deep learning 2-based approach for detecting Parkinson’s disease before any of the overt symptoms develop during their prodromal stage.We used 5 publicly accessible datasets,including UCI Parkinson’s Voice,Spiral Drawings,PaHaW,NewHandPD,and PPMI,and implemented a dual stream CNN–BiLSTM architecture with Fisher-weighted feature merging and SHAP-based explanation.The findings reveal that the model’s performance was superior and achieved 98.2%,a F1-score of 0.981,and AUC of 0.991 on the UCI Voice dataset.The model’s performance on the remaining datasets was also comparable,with up to a 2–7 percent betterment in accuracy compared to existing strong models such as CNN–RNN–MLP,ILN–GNet,and CASENet.Across the evidence,the findings back the diagnostic promise of micro-tremor assessment and demonstrate that combining temporal and spatial features with a scatter-based segment for a multi-modal approach can be an effective and scalable platform for an“early,”interpretable PD screening system.
基金supported in part by the National Natural Science Foundation of China(No.12305196)Anhui Provincial Natural Science Foundation(No.2308085QA23)+1 种基金Open Fund of Magnetic confinement Fusion Laboratory of Anhui Province(No.2023AMF03003)Science Foundation of Institute of Plasma Physics,Chinese Academy of Sciences(No.DSJJ-2024-10).
文摘A low-temperature-resistant and high-strength stainless-steel jacket is a key component in the superconducting magnet of a fusion reactor.The development of cryogenic structural materials with high strength and toughness poses a challenge for the future development of high-field superconducting magnets in fusion reactors.The yield strength of the International Thermonuclear Experimental Reactor developed for low-temperature structural materials at 4.2K is below 1100MPa,which fails to meet the demand for structural components with yield strengths exceeding 1500MPa at 4.2K in the future fusion reactors.CHSN01(formerly N50H),which is a low-temperature structural material developed in China,exhibits exceptional strength and toughness,thereby making it highly promising for practical applications.Recently,a 30 t jacket measuring approximately 5000m in total length was produced.Its low-temperature mechanical properties were tested using a sampling method to ensure compliance with application requirements.This paper presents the experimental data of the CHSN01 jacket and tests of the physical properties of the material in the temperature range of 4–300 K.The physical properties were unaffected by magnetic field.Furthermore,this paper discusses the feasibility of employing CHSN01 as a cryogenic structural material capable of withstanding high magnetic fields in next-generation fusion reactors.
基金supported by the National Natural Science Foundation of China(Grant Nos.62572057,62272049,U24A20331)Beijing Natural Science Foundation(Grant Nos.4232026,4242020)Academic Research Projects of Beijing Union University(Grant No.ZK10202404).
文摘Traffic sign detection is a critical component of driving systems.Single-stage network-based traffic sign detection algorithms,renowned for their fast detection speeds and high accuracy,have become the dominant approach in current practices.However,in complex and dynamic traffic scenes,particularly with smaller traffic sign objects,challenges such as missed and false detections can lead to reduced overall detection accuracy.To address this issue,this paper proposes a detection algorithm that integrates edge and shape information.Recognizing that traffic signs have specific shapes and distinct edge contours,this paper introduces an edge feature extraction branch within the backbone network,enabling adaptive fusion with features of the same hierarchical level.Additionally,a shape prior convolution module is designed to replaces the first two convolutional modules of the backbone network,aimed at enhancing the model's perception ability for specific shape objects and reducing its sensitivity to background noise.The algorithm was evaluated on the CCTSDB and TT100k datasets,and compared to YOLOv8s,the mAP50 values increased by 3.0%and 10.4%,respectively,demonstrating the effectiveness of the proposed method in improving the accuracy of traffic sign detection.
文摘AIM:To investigate the effects of binocular fusional C-optotypes(positive/negative)and 2D planar C-optotypes on the amplitude and stability of transient accommodation(TAC)in adults,and to provide a basis for non-contact myopia intervention.METHODS:This was a self-controlled study.Using redblue 3D technology,four experimental stages were set up:Test A[fixating on the 1 m negative fusional C-optotypes,8△base-in(BI)],Test B(fixating on the 5 m planar C-optotypes),Test C(fixating on the 1 m planar C-optotypes),and Test D[fixating on the 1 m positive fusional C-optotypes,20△base-out(BO)].A WAM-5500 open-field autorefractor was used to measure TAC and accommodative microfluctuations[evaluated via interquartile range(IQR)and median-based coefficient of variation(CVmed)].Additionally,the convergence accommodation to convergence(CA/C)ratio was calculated,and a visual fatigue questionnaire was administered to assess participants’subjective visual comfort.RESULTS:A total of 21 subjects(7 males,14 females;aged 23-41y)with normal binocular visual function were enrolled.The results showed that the TAC increased gradually across the four stages,and these values were Test A(-0.35±0.26 D)<Test B(-0.46±0.24 D)<Test C(-0.77±0.32 D)<Test D(-1.38±0.31 D).There were significant overall differences(F=56.136,P<0.001).Compared with Test C,Test A reduced TAC by 0.42 D(P<0.05),while Test D increased it by 0.61 D(P<0.001).There was no significant intergroup difference in accommodative fluctuation amplitude(all P>0.05),but the fluctuation stability of Test D showed a significant difference between the first 20s and the second 20s(P=0.017).The CA/C ratio was significantly higher in Test D(0.05±0.02 D/△)than in Test A(0.03±0.02 D/△,P=0.007),indicating stronger accommodation-convergence linkage during positive fusional fixation.The visual fatigue scores of all stages were low(median 0-1),with Test D slightly higher than Test B and Test C(P<0.05).No linear correlation was found between TAC and age(all r<0.1,P>0.05).CONCLUSION:Negative fusional C-optotypes induce ciliary muscle relaxation to reduce TAC,while positive fusional C-optotypes enhance accommodation-convergence coordination to increase TAC.The red-blue 3D-based noncontact training mode exhibits good safety(median visual fatigue scores:0-1 across all tests)and provides a novel dual-directional(relaxation-activation)strategy for myopia prevention and control.
基金financially supported byChongqingUniversity of Technology Graduate Innovation Foundation(Grant No.gzlcx20253267).
文摘Camouflaged Object Detection(COD)aims to identify objects that share highly similar patterns—such as texture,intensity,and color—with their surrounding environment.Due to their intrinsic resemblance to the background,camouflaged objects often exhibit vague boundaries and varying scales,making it challenging to accurately locate targets and delineate their indistinct edges.To address this,we propose a novel camouflaged object detection network called Edge-Guided and Multi-scale Fusion Network(EGMFNet),which leverages edge-guided multi-scale integration for enhanced performance.The model incorporates two innovative components:a Multi-scale Fusion Module(MSFM)and an Edge-Guided Attention Module(EGA).These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries.Moreover,recognizing the rich contextual information in fused features,we introduce a Dual-Branch Global Context Module(DGCM)to refine features using extensive global context,thereby generatingmore informative representations.Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics.Specifically,on COD10K,our EGMFNet-P improves F_(β)by 4.8 points and reduces mean absolute error(MAE)by 0.006 compared with ZoomNeXt;on NC4K,it achieves a 3.6-point increase in F_(β).OnCAMO and CHAMELEON,it obtains 4.5-point increases in F_(β),respectively.These consistent gains substantiate the superiority and robustness of EGMFNet.
基金supported in part by the National Key Research and Development Program of China under Grant 2022YFB3102904in part by the National Natural Science Foundation of China under Grant No.U23A20305 and No.62472440.
文摘With the growing demand formore comprehensive and nuanced sentiment understanding,Multimodal Sentiment Analysis(MSA)has gained significant traction in recent years and continues to attract widespread attention in the academic community.Despite notable advances,existing approaches still face critical challenges in both information modeling and modality fusion.On one hand,many current methods rely heavily on encoders to extract global features from each modality,which limits their ability to capture latent fine-grained emotional cues within modalities.On the other hand,prevailing fusion strategies often lack mechanisms to model semantic discrepancies across modalities and to adaptively regulate modality interactions.To address these limitations,we propose a novel framework for MSA,termed Multi-Granularity Guided Fusion(MGGF).The proposed framework consists of three core components:(i)Multi-Granularity Feature Extraction Module,which simultaneously captures both global and local emotional features within each modality,and integrates them to construct richer intra-modal representations;(ii)Cross-ModalGuidance Learning Module(CMGL),which introduces a cross-modal scoring mechanism to quantify the divergence and complementarity betweenmodalities.These scores are then used as guiding signals to enable the fusion strategy to adaptively respond to scenarios of modality agreement or conflict;(iii)Cross-Modal Fusion Module(CMF),which learns the semantic dependencies among modalities and facilitates deep-level emotional feature interaction,thereby enhancing sentiment prediction with complementary information.We evaluate MGGF on two benchmark datasets:MVSA-Single and MVSA-Multiple.Experimental results demonstrate that MGGF outperforms the current state-of-the-art model CLMLF on MVSA-Single by achieving a 2.32% improvement in F1 score.On MVSA-Multiple,it surpasses MGNNS with a 0.26% increase in accuracy.These results substantiate the effectiveness ofMGGFin addressing two major limitations of existing methods—insufficient intra-modal fine-grained sentiment modeling and inadequate cross-modal semantic fusion.
文摘In recent years,with the rapid advancement of artificial intelligence,object detection algorithms have made significant strides in accuracy and computational efficiency.Notably,research and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images(ORSIs).However,in the realmof adversarial attacks,developing adversarial techniques tailored to Anchor-Freemodels remains challenging.Adversarial examples generated based on Anchor-Based models often exhibit poor transferability to these new model architectures.Furthermore,the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks.This study presents an improved cross-conv-block feature fusion You Only Look Once(YOLO)architecture,meticulously engineered to facilitate the extraction ofmore comprehensive semantic features during the backpropagation process.To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs,a novel dense bounding box attack strategy is proposed.This approach leverages dense target bounding boxes loss in the calculation of adversarial loss functions.Furthermore,by integrating translation-invariant(TI)and momentum-iteration(MI)adversarial methodologies,the proposed framework significantly improves the transferability of adversarial attacks.Experimental results demonstrate that our method achieves superior adversarial attack performance,with adversarial transferability rates(ATR)of 67.53%on the NWPU VHR-10 dataset and 90.71%on the HRSC2016 dataset.Compared to ensemble adversarial attack and cascaded adversarial attack approaches,our method generates adversarial examples in an average of 0.64 s,representing an approximately 14.5%improvement in efficiency under equivalent conditions.
基金supported by the National Natural Science Foundation of China(No.52405394)the Natural Science Foundation of Shanghai(No.24ZR1436500)+2 种基金the Natural Science Foundation of Chongqing(No.CSTB2024NSCQMSX0677)the Science Innovation Foundation of Shanghai Academy of Spaceflight Technology(No.SAST2024-039)the Startup Fund for Young Faculty at SJTU(No.24X010502880).
文摘Magnesium-rare earth(Mg-RE)alloys are pivotal for lightweight applications in aerospace and advanced engineering due to their high specific strength.However,manufacturing large-scale complex components via monolithic casting is challenging owing to defects such as RE oxides and shrinkage porosity,making fusion welding essential for both defect repair and structural joining.This review comprehensively examines recent advances in fusion welding of Mg-RE alloys,with emphasis on the interplay between their unique physicochemical properties and welding metallurgy.Various fusion welding methods suitable for Mg-RE alloys are compared and analyzed.Detailed characterization of joint regions reveals how thermal gradients and cooling rates govern phase evolution,grain morphology,and defect formation.Moreover,welding parameters and heat treatment strategies are systematically discussed for the microstructural configuration,especially for the inherent conflicts between grain coarsening in fusion zone and eutectic dissolution in heat-affected zone.Future research directions are also outlined.By correlating Mg-RE alloy properties with fusion welding processes,this review provides practical insights for designing reliable welded structures in critical applications.