The viscosity of refining slags plays a critical role in metallurgical processes. However, obtaining accurate viscosity data remains challenging due to the complexities of high-temperature experiments, so practice often relies on empirical models with limited predictive capabilities. This study focuses on the influence of optical basicity on viscosity in CaO-Al2O3-based refining slags, leveraging machine learning to address data scarcity and improve prediction accuracy. An automated framework for algorithm integration, parameter tuning, and evaluation ranking (Auto-APE) is employed to develop customized data-driven models for various slag systems, including CaO-Al2O3-SiO2, CaO-Al2O3-CaF2, CaO-Al2O3-SiO2-MgO, and CaO-Al2O3-SiO2-MgO-CaF2. By incorporating optical basicity as a key feature, the models achieve an average validation error of 8.0% to 15.1%, significantly outperforming traditional empirical models. Additionally, symbolic regression is introduced to rapidly construct domain-specific features, such as optical basicity-like descriptors, offering a potential breakthrough in performance prediction for small datasets. This work highlights the critical role of domain-specific knowledge in understanding and predicting viscosity, providing a robust machine learning-based approach for optimizing refining slag properties.
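Optical basicity itself is straightforward to compute from slag composition; a common definition (due to Duffy and Ingram) is the oxygen-weighted average of the component basicities. The sketch below is illustrative only: the Λ values and the 50:50 composition are textbook-style assumptions, not data from this study.

```python
# Optical basicity Lambda of a slag as an oxygen-weighted average:
#   Lambda = sum(x_i * n_i * L_i) / sum(x_i * n_i)
# where x_i is the mole fraction, n_i the number of oxygen atoms per
# formula unit, and L_i the component's optical basicity. The L_i
# values below are illustrative literature values (assumption).
COMPONENTS = {
    "CaO":   {"L": 1.00, "n_oxygen": 1},
    "SiO2":  {"L": 0.48, "n_oxygen": 2},
    "Al2O3": {"L": 0.60, "n_oxygen": 3},
    "MgO":   {"L": 0.78, "n_oxygen": 1},
}

def optical_basicity(mole_fractions):
    """Oxygen-weighted optical basicity of an oxide mixture."""
    num = sum(x * COMPONENTS[c]["n_oxygen"] * COMPONENTS[c]["L"]
              for c, x in mole_fractions.items())
    den = sum(x * COMPONENTS[c]["n_oxygen"]
              for c, x in mole_fractions.items())
    return num / den

# Hypothetical 50:50 (mole basis) CaO-Al2O3 slag
lam = optical_basicity({"CaO": 0.5, "Al2O3": 0.5})  # -> 0.70
```

A feature of exactly this form can then be fed to a regression model alongside temperature and composition.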
Underwater images often degrade the effectiveness of underwater visual tasks due to problems such as light scattering, color distortion, and detail blurring, limiting their application performance. Existing underwater image enhancement methods, although they can improve image quality to some extent, often introduce problems such as detail loss and edge blurring. To address these problems, we propose FENet, an efficient underwater image enhancement method. FENet first obtains images at three different scales by downsampling and then transforms them into the frequency domain to extract the low-frequency and high-frequency spectra, respectively. Then, a distance mask and a mean mask are constructed based on the spectral distance and magnitude mean to enhance the high-frequency part, thus improving image details, while noise in the low-frequency part is suppressed to enhance the overall effect. Because of the light scattering in underwater images, some details are lost if the result is converted directly back to the spatial domain after the frequency-domain operation. For this reason, we propose a multi-stage residual feature aggregation module, which focuses on detail extraction and effectively avoids the information loss caused by global enhancement. Finally, we combine an edge guidance strategy to further enhance the edge details of the image. Experimental results indicate that FENet outperforms current state-of-the-art underwater image enhancement methods in quantitative and qualitative evaluations on multiple publicly available datasets.
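The distance-mask idea can be sketched as follows: split the centered spectrum into low- and high-frequency parts by distance from the DC component, then amplify the high-frequency magnitudes. This is a hypothetical single-scale illustration, not FENet's actual implementation (the mean mask, multi-scale handling, and low-frequency noise suppression are omitted).

```python
import numpy as np

def distance_mask(h, w, radius_frac=0.25):
    """Boolean mask: True where the centered spectral distance from the
    DC component exceeds radius_frac * min(h, w), i.e. high frequencies."""
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    return dist > radius_frac * min(h, w)

def enhance_high_freq(img, gain=1.5, radius_frac=0.25):
    """Amplify high-frequency components; leave low frequencies unchanged."""
    spec = np.fft.fftshift(np.fft.fft2(img))  # DC moved to the center
    mask = distance_mask(*img.shape, radius_frac)
    spec[mask] *= gain                        # boost detail-carrying bands
    return np.fft.ifft2(np.fft.ifftshift(spec)).real
```

Because a constant image has energy only at DC, it passes through unchanged, which is a quick sanity check on the mask geometry.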
Domain adaptation aims to reduce the distribution gap between the training data (source domain) and the target data. This enables effective predictions even for domains not seen during training. However, most conventional domain adaptation methods assume a single source domain, making them less suitable for modern deep learning settings that rely on diverse and large-scale datasets. To address this limitation, recent research has focused on Multi-Source Domain Adaptation (MSDA), which aims to learn effectively from multiple source domains. In this paper, we propose Efficient Domain Transition for Multi-source (EDTM), a novel and efficient framework designed to tackle two major challenges in existing MSDA approaches: (1) integrating knowledge across different source domains and (2) aligning label distributions between source and target domains. EDTM leverages an ensemble-based classifier expert mechanism to enhance the contribution of source domains that are more similar to the target domain. To further stabilize the learning process and improve performance, we incorporate imitation learning into the training of the target model. In addition, Maximum Classifier Discrepancy (MCD) is employed to align class-wise label distributions between the source and target domains. Experiments were conducted on Digits-Five, one of the most representative benchmark datasets for MSDA. The results show that EDTM consistently outperforms existing methods in terms of average classification accuracy. Notably, EDTM achieved significantly higher performance on target domains such as the Modified National Institute of Standards and Technology with blended background images (MNIST-M) and Street View House Numbers (SVHN) datasets, demonstrating enhanced generalization compared to baseline approaches. Furthermore, an ablation study analyzing the contribution of each loss component validated the effectiveness of the framework, highlighting the importance of each module in achieving optimal performance.
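MCD's core quantity is the discrepancy between two task classifiers' predictions on unlabeled target samples: the classifiers are trained to maximize it while the feature extractor is trained to minimize it. A minimal, framework-free sketch of the discrepancy term only (the adversarial min-max training loop is omitted, and the function names are illustrative):

```python
import math

def softmax(logits):
    """Numerically stable softmax over one sample's logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mcd_discrepancy(logits_a, logits_b):
    """Mean L1 distance between two classifiers' softmax outputs
    over a batch of target samples."""
    total = 0.0
    for la, lb in zip(logits_a, logits_b):
        pa, pb = softmax(la), softmax(lb)
        total += sum(abs(x - y) for x, y in zip(pa, pb))
    return total / len(logits_a)
```

When the two classifiers agree exactly, the discrepancy is zero; target samples far from the source support tend to produce disagreement, which is what the adversarial step exploits.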
To address the issue of scarce labeled samples and operational condition variations that degrade the accuracy of fault diagnosis models, this paper proposes a semi-supervised masked contrastive learning and domain adaptation (SSMCL-DA) method for gearbox fault diagnosis under variable conditions. Initially, during the unsupervised pre-training phase, a dual signal augmentation strategy is devised, which simultaneously applies random masking in the time domain and random scaling in the frequency domain to unlabeled samples, thereby constructing more challenging positive sample pairs to guide the encoder in learning intrinsic features robust to condition variations. Subsequently, a ConvNeXt-Transformer hybrid architecture is employed, integrating the superior local detail modeling capacity of ConvNeXt with the robust global perception capability of the Transformer to enhance feature extraction in complex scenarios. Thereafter, a contrastive learning model is constructed with the optimization objective of maximizing feature similarity across different masked instances of the same sample, enabling the extraction of consistent features from multiple masked perspectives and reducing reliance on labeled data. In the final supervised fine-tuning phase, a multi-scale attention mechanism is incorporated for feature rectification, and a domain adaptation module combining Local Maximum Mean Discrepancy (LMMD) with adversarial learning is proposed. This module embodies a dual mechanism: LMMD facilitates fine-grained class-conditional alignment, compelling features of identical fault classes to converge across varying conditions, while the domain discriminator uses adversarial training to guide the feature extractor toward learning domain-invariant features. Working in concert, they markedly diminish the feature distribution discrepancies induced by changes in load, rotational speed, and other factors, thereby boosting the model's adaptability to cross-condition scenarios. Experimental evaluations on the WT planetary gearbox dataset and the Case Western Reserve University (CWRU) bearing dataset demonstrate that the SSMCL-DA model effectively identifies multiple fault classes in gearboxes, with diagnostic performance substantially surpassing that of conventional methods. Under cross-condition scenarios, the model attains fault diagnosis accuracies of 99.21% for the WT planetary gearbox and 99.86% for the bearings, respectively. Furthermore, the model exhibits stable generalization capability in cross-device settings.
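The time-domain half of the dual augmentation can be sketched as zeroing a random contiguous span of an unlabeled vibration signal; the frequency-domain random scaling is the analogous operation on the spectrum. This is a hypothetical illustration of the idea, not the paper's exact masking scheme.

```python
import random

def mask_time(signal, mask_ratio=0.2, seed=None):
    """Zero out one random contiguous span covering mask_ratio of a
    1-D signal, producing a 'masked view' for contrastive pairing."""
    rng = random.Random(seed)
    n = len(signal)
    span = max(1, int(n * mask_ratio))
    start = rng.randrange(0, n - span + 1)
    out = list(signal)
    out[start:start + span] = [0.0] * span
    return out
```

Two independently masked views of the same raw sample would form a positive pair whose encoder features are pulled together by the contrastive objective.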
Human motion modeling is a core technology in computer animation, game development, and human-computer interaction. In particular, generating natural and coherent in-between motion using only the initial and terminal frames remains a fundamental yet unresolved challenge. Existing methods typically rely on dense keyframe inputs or complex prior structures, making it difficult to balance motion quality and plausibility under conditions such as sparse constraints, long-term dependencies, and diverse motion styles. To address this, we propose a motion generation framework based on a frequency-domain diffusion model, which aims to better model complex motion distributions and enhance generation stability under sparse conditions. Our method maps motion sequences to the frequency domain via the Discrete Cosine Transform (DCT), enabling more effective modeling of low-frequency motion structures while suppressing high-frequency noise. A denoising network based on self-attention is introduced to capture long-range temporal dependencies and improve global structural awareness. Additionally, a multi-objective loss function is employed to jointly optimize motion smoothness, pose diversity, and anatomical consistency, enhancing the realism and physical plausibility of the generated sequences. Comparative experiments on the Human3.6M and LaFAN1 datasets demonstrate that our method outperforms state-of-the-art approaches across multiple performance metrics, showing stronger capabilities in generating intermediate motion frames. This research offers a new perspective and methodology for human motion generation and holds promise for applications in character animation, game development, and virtual interaction.
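Mapping a trajectory to the frequency domain and keeping only low-order DCT coefficients is the essence of the low-frequency modeling step. A self-contained 1-D sketch under simplified assumptions (a real implementation would apply this per joint channel and use an optimized DCT library):

```python
import math

def dct(x):
    """DCT-II of a 1-D sequence (unnormalized)."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def idct(c):
    """Inverse (DCT-III) matching dct() above."""
    n = len(c)
    return [(c[0] / 2 + sum(c[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                            for k in range(1, n))) * 2 / n for i in range(n)]

def lowpass(x, keep):
    """Keep only the first `keep` DCT coefficients: the low-frequency
    structure of the trajectory, with high-frequency jitter removed."""
    c = dct(x)
    c = c[:keep] + [0.0] * (len(c) - keep)
    return idct(c)
```

With keep=1 the reconstruction collapses to the sequence mean; with keep=len(x) it round-trips exactly, which brackets the smoothing behavior.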
Precise tuning of magnetic nanoparticle size shapes magnetic domains and, thereby, magnetic properties. However, the dynamic evolution mechanisms of magnetic domain configurations in relation to electromagnetic (EM) attenuation behavior remain poorly understood. To address this gap, a thermodynamically controlled periodic coordination strategy is proposed to achieve precise modulation of magnetic nanoparticle spacing. This approach unveils the evolution of magnetic domain configurations, progressing from individual to coupled and ultimately to crosslinked domain configurations. A unique magnetic coupling phenomenon that surpasses the Snoek limit in the low-frequency range is observed through micromagnetic simulation. The crosslinked magnetic configuration achieves effective low-frequency EM wave absorption at 3.68 GHz, encompassing nearly the entire C-band. This exceptional magnetic interaction significantly enhances radar camouflage and thermal insulation properties. Additionally, a robust gradient metamaterial design extends coverage across the full band (2–40 GHz), effectively mitigating the impact of EM pollution on human health and the environment. This comprehensive study elucidates the evolution mechanisms of magnetic domain configurations, addresses gaps in dynamic magnetic modulation, and provides novel insights for the development of high-performance, low-frequency EM wave absorption materials.
Lithium niobate (LN) has remained at the forefront of academic research and industrial applications due to its rich material properties, which include second-order nonlinear optic, electro-optic, and piezoelectric properties. A further aspect of LN's versatility stems from the ability to engineer ferroelectric domains in LN with micro- and even nano-scale precision, which provides an additional degree of freedom to design acoustic and optical devices with improved performance and is possible in only a handful of other materials. In this review paper, we provide an overview of the domain engineering techniques developed for LN, their principles, and the typical domain size and pattern uniformity they provide, which is important for devices that require high-resolution domain patterns with good reproducibility. The review also highlights each technique's benefits, limitations, and adaptability for an application, along with possible improvements and future advancement prospects. Further, it provides a brief overview of domain visualization methods, which are crucial for gaining insight into domain quality and shape, and explores the adaptability of the proposed domain engineering methodologies for the emerging thin-film lithium-niobate-on-insulator platform, which creates opportunities for developing the next generation of compact and scalable photonic integrated circuits and high-frequency acoustic devices.
To enable proper diagnosis of a patient, medical images must be free of noise and artifacts. The major hurdle lies in acquiring these images in such a manner that extraneous variables, which introduce distortions in the form of noise and artifacts, are kept to a bare minimum. Unexpected changes during the acquisition process directly degrade image quality and indirectly undermine the effectiveness of the diagnostic process, so they must be addressed efficiently and with appropriate expertise. Because these challenges cannot be fully resolved at the acquisition stage, image processing techniques must be adopted. This mandatory pre-processing step underpins the use of traditional and state-of-the-art methods to build functional and robust denoising or recovery tools. This article provides an extensive systematic review of these techniques, presenting a systematic evaluation of their effect on medical images under three different noise distributions, i.e., Gaussian, Poisson, and Rician. A thorough analysis of these methods is conducted using eight evaluation parameters to highlight the unique features of each method. The covered denoising methods are essential in actual clinical scenarios where the preservation of anatomical details is crucial for accurate and safe diagnosis, such as tumor detection in MRI and vascular imaging in CT.
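Among the usual evaluation parameters for denoising, peak signal-to-noise ratio (PSNR) is the most common; a minimal sketch of it follows (the full set of eight metrics used in the review is not reproduced here).

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length
    pixel sequences: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher PSNR after denoising indicates the output is closer to the clean reference; it is usually reported alongside structural metrics such as SSIM, since PSNR alone does not capture anatomical-detail preservation.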
Intelligent Automation & Soft Computing has retracted the article titled “Line Trace Effective Comparison Algorithm Based on Wavelet Domain DTW” [1], Intell Automat Soft Comput. 2019;25(2):359–366, at the request of the authors. DOI: 10.31209/2019.100000097; URL: https://www.techscience.com/iasc/v25n2/39663. The article duplicates significant parts of a paper published in Journal of Intelligent & Fuzzy Systems [2].
Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
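A classic example of the statistical heuristics that LLM-based detectors aim to supersede is character-level Shannon entropy, which tends to be higher for algorithmically generated names than for human-chosen ones. A minimal sketch (thresholds and any real feature pipeline are outside this illustration):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Character-level Shannon entropy of a string, in bits per
    character: a traditional feature for flagging DGA-like domains."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Dictionary-based DGAs defeat exactly this kind of heuristic by composing real words with near-natural entropy, which motivates the contextual modeling the survey reviews.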
The rapid development of the industrial internet of things (IIoT) has brought huge benefits to factories equipped with IIoT technology, each of which represents an IIoT domain. More and more domains are choosing to cooperate with each other to produce better products for greater profits. Therefore, to protect the security and privacy of IIoT devices in cross-domain communication, many cross-domain authentication schemes have been proposed. However, most schemes expose the domain to which an IIoT device belongs, or introduce a single point of failure in multi-domain cooperation, thus introducing unpredictable risks to each domain. We propose a more secure and efficient domain-level anonymous cross-domain authentication (DLCA) scheme based on an alliance blockchain. The proposed scheme uses group signatures with decentralized tracing technology to provide domain-level anonymity to each IIoT device and to allow the public to trace the real identity behind a malicious pseudonym. In addition, DLCA takes into account the limited resources of IIoT devices to design an efficient cross-domain authentication protocol. Security analysis and performance evaluation show that the proposed scheme can be effectively used in cross-domain authentication scenarios of the industrial internet of things.
Determining homogeneous domains statistically is helpful for engineering geological modeling and rock mass stability evaluation. In this study, a technique that can integrate lithology, geotechnical, and structural information is proposed to delineate homogeneous domains. This technique is then applied to a high and steep slope along a road. First, geological and geotechnical domains were described based on lithology, faults, and shear zones. Next, topological manifolds were used to eliminate the incompatibility between orientations and other parameters (i.e., trace length and roughness) so that the data concerning the various properties of each discontinuity can be matched and characterized in the same Euclidean space. Thus, the implicit combined effect of the parameter sequences on the homogeneous domains could be considered. A deep learning technique was employed to quantify abstract features of the characterization images of discontinuity properties and to assess the similarity of rock mass structures. The results show that the technique can effectively distinguish structural variations and outperform conventional methods. It can handle multi-source engineering geological information and multiple discontinuity parameters. The technique can also minimize the interference of human factors and delineate homogeneous domains based on orientations or multiple parameters with arbitrary distributions to satisfy different engineering requirements.
Landslide susceptibility evaluation plays an important role in disaster prevention and reduction. Feature-based transfer learning (TL) is an effective method for solving landslide susceptibility mapping (LSM) in target regions with no available samples. However, as the study area expands, the distribution of landslide types and triggering mechanisms becomes more diverse, leading to performance degradation in models relying on landslide evaluation knowledge from a single source domain due to domain feature shift. To address this, this study proposes a Multi-source Domain Adaptation Convolutional Neural Network (MDACNN), which combines the landslide prediction knowledge learned from two source domains to perform cross-regional LSM in complex large-scale areas. The method is validated through case studies in three regions located in southeastern coastal China and compared with single-source domain TL models (TCA-based models). The results demonstrate that MDACNN effectively integrates transfer knowledge from multiple source domains to learn diverse landslide-triggering mechanisms, thereby significantly reducing the prediction bias inherent to single-source domain TL models and achieving an average improvement of 16.58% across all metrics. Moreover, the landslide susceptibility maps generated by MDACNN accurately quantify the spatial distribution of landslide risks in the target area, providing a powerful scientific and technological tool for landslide disaster management and prevention.
To avoid the laborious annotation process for dense prediction tasks like semantic segmentation, unsupervised domain adaptation (UDA) methods have been proposed to leverage the abundant annotations from a source domain, such as a virtual world (e.g., 3D games), and adapt models to the target domain (the real world) by narrowing the domain discrepancies. However, because of the large domain gap, directly aligning two distinct domains without considering intermediates leads to inefficient alignment and inferior adaptation. To address this issue, we propose a novel learnable evolutionary Category Intermediates (CIs) guided UDA model named Leci, which enables information transfer between the two domains via two processes, i.e., Distilling and Blending. Starting from a random initialization, the CIs learn shared category-wise semantics automatically from the two domains in the Distilling process. Then, the learned semantics in the CIs are sent back to blend the domain features through a residual attentive fusion (RAF) module, such that the category-wise features of both domains shift towards each other. As the CIs progressively and consistently learn from the varying feature distributions during training, they evolve to guide the model toward category-wise feature alignment. Experiments on both the GTA5 and SYNTHIA datasets demonstrate Leci's superiority over prior representative methods.
Funding (refining-slag viscosity study): supported by the National Key Research and Development Program of China (No. 2023YFB3712401), the National Natural Science Foundation of China (No. 52274301), the Aeronautical Science Foundation of China (No. 2023Z0530S6005), and the Ningbo Yongjiang Talent-Introduction Programme (No. 2022A-023-C).
Funding (underwater image enhancement study, FENet): supported in part by the National Natural Science Foundation of China [Grant number 62471075] and the Major Science and Technology Project Grant of the Chongqing Municipal Education Commission [Grant number KJZD-M202301901].
Funding (multi-source domain adaptation study, EDTM): supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00406320) and the Institute of Information & Communications Technology Planning & Evaluation (IITP) Innovative Human Resource Development for Local Intellectualization Program grant funded by the Korea government (MSIT) (IITP-2026-RS-2023-00259678).
Funding (gearbox fault diagnosis study, SSMCL-DA): supported by the National Natural Science Foundation of China Funded Project (Project Name: Research on Robust Adaptive Allocation Mechanism of Human Machine Co-Driving System Based on NMS Features; Project Approval Number: 52172381).
Abstract: To address the scarce labeled samples and operating-condition variations that degrade the accuracy of fault diagnosis models in variable-condition gearbox fault diagnosis, this paper proposes a semi-supervised masked contrastive learning and domain adaptation (SSMCL-DA) method for gearbox fault diagnosis under variable conditions. Initially, during the unsupervised pre-training phase, a dual signal augmentation strategy is devised, which simultaneously applies random masking in the time domain and random scaling in the frequency domain to unlabeled samples, thereby constructing more challenging positive sample pairs that guide the encoder to learn intrinsic features robust to condition variations. Subsequently, a ConvNeXt-Transformer hybrid architecture is employed, integrating the superior local detail modeling capacity of ConvNeXt with the robust global perception capability of the Transformer to enhance feature extraction in complex scenarios. Thereafter, a contrastive learning model is constructed with the optimization objective of maximizing feature similarity across different masked instances of the same sample, enabling the extraction of consistent features from multiple masked perspectives and reducing reliance on labeled data. In the final supervised fine-tuning phase, a multi-scale attention mechanism is incorporated for feature rectification, and a domain adaptation module combining Local Maximum Mean Discrepancy (LMMD) with adversarial learning is proposed. This module embodies a dual mechanism: LMMD performs fine-grained class-conditional alignment, compelling features of identical fault classes to converge across varying conditions, while the domain discriminator uses adversarial training to guide the feature extractor toward domain-invariant features. Working in concert, they markedly diminish the feature distribution discrepancies induced by changes in load, rotational speed, and other factors, thereby boosting the model's adaptability to cross-condition scenarios. Experimental evaluations on the WT planetary gearbox dataset and the Case Western Reserve University (CWRU) bearing dataset demonstrate that the SSMCL-DA model effectively identifies multiple gearbox fault classes, with diagnostic performance substantially surpassing that of conventional methods. Under cross-condition scenarios, the model attains fault diagnosis accuracies of 99.21% on the WT planetary gearbox and 99.86% on the bearings, respectively. Furthermore, the model exhibits stable generalization in cross-device settings.
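The dual signal augmentation strategy above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the mask ratio, scaling range, and the naive DFT are assumptions chosen for readability.

```python
import cmath
import random

def dft(x):
    """Naive discrete Fourier transform (O(N^2)); fine for a short sketch."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def time_mask(x, mask_ratio=0.25, rng=random):
    """Random masking in the time domain: zero out a random contiguous segment."""
    x = list(x)
    m = max(1, int(len(x) * mask_ratio))
    start = rng.randrange(len(x) - m + 1)
    for i in range(start, start + m):
        x[i] = 0.0
    return x

def freq_scale(x, lo=0.8, hi=1.2, rng=random):
    """Random scaling in the frequency domain: perturb each bin, then invert."""
    X = dft(x)
    X = [c * rng.uniform(lo, hi) for c in X]
    return idft(X)

def augment_pair(x, rng):
    """Two independently augmented views of one unlabeled sample,
    forming a positive pair for contrastive pre-training."""
    return (freq_scale(time_mask(x, rng=rng), rng=rng),
            freq_scale(time_mask(x, rng=rng), rng=rng))
```

In the contrastive phase, the encoder would be trained to map both views of `augment_pair` to similar embeddings, which is what pushes it toward condition-robust features.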
Funding: Supported by the National Natural Science Foundation of China (Grant No. 72161034).
Abstract: Human motion modeling is a core technology in computer animation, game development, and human-computer interaction. In particular, generating natural and coherent in-between motion from only the initial and terminal frames remains a fundamental yet unresolved challenge. Existing methods typically rely on dense keyframe inputs or complex prior structures, making it difficult to balance motion quality and plausibility under sparse constraints, long-term dependencies, and diverse motion styles. To address this, we propose a motion generation framework based on a frequency-domain diffusion model, which aims to better model complex motion distributions and enhance generation stability under sparse conditions. Our method maps motion sequences to the frequency domain via the Discrete Cosine Transform (DCT), enabling more effective modeling of low-frequency motion structures while suppressing high-frequency noise. A denoising network based on self-attention is introduced to capture long-range temporal dependencies and improve global structural awareness. Additionally, a multi-objective loss function jointly optimizes motion smoothness, pose diversity, and anatomical consistency, enhancing the realism and physical plausibility of the generated sequences. Comparative experiments on the Human3.6M and LaFAN1 datasets demonstrate that our method outperforms state-of-the-art approaches across multiple performance metrics, showing stronger capability in generating intermediate motion frames. This research offers a new perspective and methodology for human motion generation and holds promise for applications in character animation, game development, and virtual interaction.
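The DCT mapping at the heart of this framework is easy to illustrate. The sketch below implements the standard DCT-II and its inverse on a single 1-D joint trajectory; the `lowpass` helper is only a hypothetical stand-in for "modeling low-frequency motion structure while suppressing high-frequency noise", not the paper's diffusion network.

```python
import math

def dct2(x):
    """Type-II DCT: smooth (low-frequency) motion concentrates in the
    first coefficients."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * (t + 0.5) * k / n) for t in range(n))
            for k in range(n)]

def idct2(X):
    """Exact inverse of dct2 (DCT-III with the matching 1/N, 2/N scaling)."""
    n = len(X)
    return [X[0] / n + (2.0 / n) * sum(X[k] * math.cos(math.pi * (t + 0.5) * k / n)
                                       for k in range(1, n))
            for t in range(n)]

def lowpass(x, keep):
    """Keep only the first `keep` DCT coefficients: a crude model of the
    low-frequency structure a frequency-domain generator operates on."""
    X = dct2(x)
    return idct2([c if k < keep else 0.0 for k, c in enumerate(X)])
```

Because smooth trajectories are nearly fully described by a few leading coefficients, a generator working in this space gets global motion structure almost for free and is less distracted by per-frame jitter.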
Funding: Supported by the National Natural Science Foundation of China (22265021, 52231007, and 12327804), the Aeronautical Science Foundation of China (2020Z056056003), and the Jiangxi Provincial Natural Science Foundation (20232BAB212004).
Abstract: Precisely tuning magnetic nanoparticle size and spacing shapes magnetic domain configurations and, in turn, magnetic properties. However, the dynamic evolution mechanisms of magnetic domain configurations in relation to electromagnetic (EM) attenuation behavior remain poorly understood. To address this gap, a thermodynamically controlled periodic coordination strategy is proposed to achieve precise modulation of magnetic nanoparticle spacing. This approach unveils the evolution of magnetic domain configurations, progressing from individual to coupled and ultimately to crosslinked configurations. A unique magnetic coupling phenomenon that surpasses the Snoek limit in the low-frequency range is observed through micromagnetic simulation. The crosslinked magnetic configuration achieves effective low-frequency EM wave absorption at 3.68 GHz, encompassing nearly the entire C-band. This exceptional magnetic interaction significantly enhances radar camouflage and thermal insulation properties. Additionally, a robust gradient metamaterial design extends coverage across the full band (2–40 GHz), effectively mitigating the impact of EM pollution on human health and the environment. This comprehensive study elucidates the evolution mechanisms of magnetic domain configurations, addresses gaps in dynamic magnetic modulation, and provides novel insights for developing high-performance, low-frequency EM wave absorption materials.
Funding: Supported by the Australian Research Council Centre of Excellence in Optical Microcombs for Breakthrough Science (COMBS, CE230100006) and by Australian Research Council grants DP220100488 and DE230100964, funded by the Australian Government.
Abstract: Lithium niobate (LN) has remained at the forefront of academic research and industrial applications due to its rich material properties, which include second-order nonlinear optical, electro-optic, and piezoelectric properties. A further aspect of LN's versatility stems from the ability to engineer ferroelectric domains in LN with micro- and even nanoscale precision, which provides an additional degree of freedom for designing acoustic and optical devices with improved performance and is possible in only a handful of other materials. In this review paper, we provide an overview of the domain engineering techniques developed for LN, their principles, and the typical domain size and pattern uniformity they provide, which is important for devices that require high-resolution domain patterns with good reproducibility. The review also highlights each technique's benefits, limitations, and adaptability for a given application, along with possible improvements and future advancement prospects. Further, it provides a brief overview of domain visualization methods, which are crucial for gaining insight into domain quality and shape, and explores the adaptability of the described domain engineering methodologies to the emerging thin-film lithium-niobate-on-insulator platform, which creates opportunities for developing the next generation of compact, scalable photonic integrated circuits and high-frequency acoustic devices.
Abstract: To enable proper diagnosis, medical images must be free of noise and artifacts. The major hurdle lies in acquiring images such that extraneous variables, which introduce distortions in the form of noise and artifacts, are kept to a bare minimum. Unexpected variations during acquisition degrade image quality and, indirectly, the effectiveness of the diagnostic process, so they must be addressed with maximum efficiency and appropriate expertise. Meeting these challenges requires image processing at the acquisition stage: this mandatory pre-processing step underpins the traditional state-of-the-art methods used to build functional and robust denoising and recovery tools. This article provides an extensive systematic review of these techniques, systematically evaluating their effect on medical images under three noise distributions, i.e., Gaussian, Poisson, and Rician. A thorough analysis using eight evaluation parameters highlights the unique features of each method. The covered denoising methods are essential in clinical scenarios where preserving anatomical detail is crucial for accurate and safe diagnosis, such as tumor detection in MRI and vascular imaging in CT.
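The three noise distributions the review evaluates can be simulated directly. This is a generic sketch, not tied to any method in the review; pixel intensities and sigma values are illustrative, and `poisson_sample` implements Knuth's classic sampler since the Python standard library has no Poisson draw.

```python
import math
import random

def add_gaussian(pixels, sigma, rng):
    """Additive Gaussian noise: the usual thermal/electronic noise model."""
    return [p + rng.gauss(0.0, sigma) for p in pixels]

def poisson_sample(lam, rng):
    """Knuth's algorithm for drawing from a Poisson distribution."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def add_poisson(pixels, rng):
    """Signal-dependent (shot) noise: each pixel becomes a Poisson draw
    whose mean is the clean intensity, so noise grows with brightness."""
    return [float(poisson_sample(p, rng)) for p in pixels]

def add_rician(pixels, sigma, rng):
    """Rician noise as seen in magnitude MRI: the magnitude of the clean
    signal plus complex Gaussian noise (hence always non-negative)."""
    return [math.hypot(p + rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for p in pixels]
```

The distinction matters for evaluation: Gaussian noise is signal-independent, Poisson noise scales with intensity, and Rician noise biases dark regions upward, so a denoiser tuned for one model can fail under another.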
Abstract: Intelligent Automation & Soft Computing has retracted the article titled "Line Trace Effective Comparison Algorithm Based on Wavelet Domain DTW" [1], Intell Automat Soft Comput. 2019;25(2):359–366, at the request of the authors. DOI: 10.31209/2019.100000097. URL: https://www.techscience.com/iasc/v25n2/39663. The article duplicates significant parts of a paper published in the Journal of Intelligent & Fuzzy Systems [2].
Funding: The authors thank the Deanship of Scientific Research at King Khalid University for funding this work through a large-group project under grant number GRP.2/663/46.
Abstract: Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks on accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing scalable, explainable, and operationally viable LLM-based DGA detection systems.
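A toy sketch makes the character-based versus dictionary-based distinction concrete. The generator and the entropy feature below are generic illustrations (not any real malware family): a seeded PRNG lets bot and C&C server derive the same domain independently, and per-character Shannon entropy is the kind of shallow feature that flags character DGAs but misses dictionary DGAs built from real words.

```python
import math
import random
from collections import Counter

def dga(seed, length=14, tld=".com"):
    """Toy character-based DGA: a shared seed (e.g., derived from the date)
    lets infected hosts and the C&C server compute the same rendezvous domain."""
    rng = random.Random(seed)
    label = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(length))
    return label + tld

def char_entropy(label):
    """Shannon entropy in bits per character of a domain label: a classic
    shallow detection feature."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Dictionary-based DGAs concatenate real words and so keep their entropy and n-gram statistics near those of benign names, which is precisely the failure mode of shallow features that motivates the contextual, semantics-aware LLM detectors this survey reviews.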
Abstract: The rapid development of the Industrial Internet of Things (IIoT) has brought huge benefits to factories equipped with IIoT technology, each of which represents an IIoT domain. More and more domains are choosing to cooperate with each other to produce better products for greater profit. Therefore, to protect the security and privacy of IIoT devices in cross-domain communication, many cross-domain authentication schemes have been proposed. However, most schemes expose the domain to which an IIoT device belongs, or introduce a single point of failure in multi-domain cooperation, thus bringing unpredictable risks to each domain. We propose a more secure and efficient domain-level anonymous cross-domain authentication (DLCA) scheme based on a consortium blockchain. The proposed scheme uses group signatures with decentralized tracing to provide domain-level anonymity to each IIoT device while allowing the public to trace the real identity behind a malicious pseudonym. In addition, DLCA takes the limited resources of IIoT devices into account in the design of an efficient cross-domain authentication protocol. Security analysis and performance evaluation show that the proposed scheme can be used effectively in cross-domain authentication scenarios for the IIoT.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41941017 and U1702241).
Abstract: Statistically determining homogeneous domains is helpful for engineering geological modeling and rock mass stability evaluation. In this study, a technique that integrates lithological, geotechnical, and structural information is proposed to delineate homogeneous domains, and it is applied to a high, steep slope along a road. First, geological and geotechnical domains were described based on lithology, faults, and shear zones. Next, topological manifolds were used to eliminate the incompatibility between orientations and other parameters (i.e., trace length and roughness), so that data on the various properties of each discontinuity can be matched and characterized in the same Euclidean space; thus, the implicit combined effect of the parameter sequences on the homogeneous domains could be considered. A deep learning technique was employed to quantify abstract features of the characterization images of discontinuity properties and to assess the similarity of rock mass structures. The results show that the technique effectively distinguishes structural variations and outperforms conventional methods. It can handle multi-source engineering geological information and multiple discontinuity parameters, minimizes the interference of human factors, and delineates homogeneous domains based on orientations or multiple parameters with arbitrary distributions to satisfy different engineering requirements.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42301002 and 52109118), the Fujian Provincial Water Resources Science and Technology Project (Grant No. MSK202524), and the Guidance Fund for Science and Technology Programs of Fujian Province (Grant No. 2024Y0002).
Abstract: Landslide susceptibility evaluation plays an important role in disaster prevention and reduction. Feature-based transfer learning (TL) is an effective method for landslide susceptibility mapping (LSM) in target regions with no available samples. However, as the study area expands, the distribution of landslide types and triggering mechanisms becomes more diverse, and models relying on landslide evaluation knowledge from a single source domain degrade due to domain feature shift. To address this, this study proposes a Multi-source Domain Adaptation Convolutional Neural Network (MDACNN), which combines the landslide prediction knowledge learned from two source domains to perform cross-regional LSM in complex, large-scale areas. The method is validated through case studies in three regions along the southeastern coast of China and compared with single-source-domain TL models (TCA-based models). The results demonstrate that MDACNN effectively integrates transfer knowledge from multiple source domains to learn diverse landslide-triggering mechanisms, significantly reducing the prediction bias inherent to single-source-domain TL models and achieving an average improvement of 16.58% across all metrics. Moreover, the landslide susceptibility maps generated by MDACNN accurately quantify the spatial distribution of landslide risk in the target area, providing a powerful scientific and technological tool for landslide disaster management and prevention.
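The multi-source idea can be sketched with a deliberately simple stand-in: weight each source domain's predictions by how similar its feature distribution is to the target's, then fuse. MDACNN learns this adaptation end-to-end inside a CNN, so the similarity measure and fusion rule below are purely hypothetical illustrations of why two sources beat one under domain shift.

```python
def domain_weight(target_feats, source_feats):
    """Inverse mean absolute distance between average feature vectors:
    a crude stand-in for domain similarity, deciding how much a source
    domain's landslide knowledge should count for this target."""
    dim = len(target_feats[0])
    t_mean = [sum(f[i] for f in target_feats) / len(target_feats) for i in range(dim)]
    s_mean = [sum(f[i] for f in source_feats) / len(source_feats) for i in range(dim)]
    dist = sum(abs(a - b) for a, b in zip(t_mean, s_mean)) / dim
    return 1.0 / (1.0 + dist)

def fuse(preds_a, preds_b, w_a, w_b):
    """Similarity-weighted fusion of susceptibility predictions from two
    source-domain models; the closer source dominates the output."""
    z = w_a + w_b
    return [(w_a * a + w_b * b) / z for a, b in zip(preds_a, preds_b)]
```

A single-source model is stuck with whatever triggering mechanisms its one source happens to cover; with two sources, the fusion can lean on whichever source better matches each target region, which is the bias reduction the case studies report.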
Funding: Australian Research Council Project (FL-170100117).
Abstract: To avoid the laborious annotation process for dense prediction tasks like semantic segmentation, unsupervised domain adaptation (UDA) methods have been proposed to leverage abundant annotations from a source domain, such as a virtual world (e.g., 3D games), and adapt models to the target domain (the real world) by narrowing the domain discrepancies. However, because of the large domain gap, directly aligning two distinct domains without considering intermediates leads to inefficient alignment and inferior adaptation. To address this issue, we propose a novel learnable evolutionary Category Intermediates (CIs) guided UDA model named Leci, which enables information transfer between the two domains via two processes, Distilling and Blending. Starting from a random initialization, the CIs automatically learn shared category-wise semantics from the two domains in the Distilling process. The learned semantics in the CIs are then sent back to blend the domain features through a residual attentive fusion (RAF) module, such that the category-wise features of both domains shift towards each other. As the CIs progressively and consistently learn from the varying feature distributions during training, they evolve to guide the model toward category-wise feature alignment. Experiments on both the GTA5 and SYNTHIA datasets demonstrate Leci's superiority over prior representative methods.
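A rough sketch of the Distilling/Blending intuition, under loud assumptions: the EMA update and residual shift below are hypothetical simplifications, since Leci learns its CIs jointly with an attention-based RAF module rather than with fixed rules like these.

```python
def distill(cis, feats_by_class, momentum=0.9):
    """Distilling (simplified): pull each category intermediate toward the
    mean feature of that class observed in the current source or target batch,
    so the CI accumulates semantics shared by both domains."""
    new = {}
    for cls, ci in cis.items():
        batch = feats_by_class.get(cls)
        if not batch:
            new[cls] = ci
            continue
        mean = [sum(f[i] for f in batch) / len(batch) for i in range(len(ci))]
        new[cls] = [momentum * c + (1 - momentum) * m for c, m in zip(ci, mean)]
    return new

def blend(feat, ci, alpha=0.5):
    """Blending (simplified): residually shift a domain feature toward its
    category intermediate, so source and target features of the same class
    move toward a common anchor instead of directly toward each other."""
    return [f + alpha * (c - f) for f, c in zip(feat, ci)]
```

The key structural point survives the simplification: because both domains are pulled toward the same per-class anchors, alignment happens category by category rather than as one coarse global distribution match.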