Domain adaptation aims to reduce the distribution gap between the training data (source domain) and the target data, enabling effective predictions even for domains not seen during training. However, most conventional domain adaptation methods assume a single source domain, making them less suitable for modern deep learning settings that rely on diverse and large-scale datasets. To address this limitation, recent research has focused on Multi-Source Domain Adaptation (MSDA), which aims to learn effectively from multiple source domains. In this paper, we propose Efficient Domain Transition for Multi-source (EDTM), a novel and efficient framework designed to tackle two major challenges in existing MSDA approaches: (1) integrating knowledge across different source domains and (2) aligning label distributions between source and target domains. EDTM leverages an ensemble-based classifier expert mechanism to enhance the contribution of source domains that are more similar to the target domain. To further stabilize the learning process and improve performance, we incorporate imitation learning into the training of the target model. In addition, Maximum Classifier Discrepancy (MCD) is employed to align class-wise label distributions between the source and target domains. Experiments were conducted on Digits-Five, one of the most representative benchmark datasets for MSDA. The results show that EDTM consistently outperforms existing methods in terms of average classification accuracy. Notably, EDTM achieved significantly higher performance on target domains such as the Modified National Institute of Standards and Technology dataset with blended background images (MNIST-M) and the Street View House Numbers (SVHN) dataset, demonstrating enhanced generalization compared to baseline approaches. Furthermore, an ablation study analyzing the contribution of each loss component validated the effectiveness of the framework, highlighting the importance of each module in achieving optimal performance.
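The Maximum Classifier Discrepancy step this abstract refers to measures how much two task classifiers disagree on target-domain samples. A minimal numpy sketch of that discrepancy term (function names are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mcd_discrepancy(logits_1, logits_2):
    """Classifier discrepancy used in MCD-style training: mean L1
    distance between the two classifiers' class probabilities."""
    p1, p2 = softmax(logits_1), softmax(logits_2)
    return np.mean(np.abs(p1 - p2))

# Toy check: identical classifiers have zero discrepancy.
logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
print(mcd_discrepancy(logits, logits))  # 0.0
```

In MCD training this quantity is first maximized over the classifiers to expose ambiguous target samples, then minimized over the feature extractor.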
Landslide susceptibility evaluation plays an important role in disaster prevention and reduction. Feature-based transfer learning (TL) is an effective method for solving landslide susceptibility mapping (LSM) in target regions with no available samples. However, as the study area expands, the distribution of landslide types and triggering mechanisms becomes more diverse, leading to performance degradation in models relying on landslide evaluation knowledge from a single source domain due to domain feature shift. To address this, this study proposes a Multi-source Domain Adaptation Convolutional Neural Network (MDACNN), which combines the landslide prediction knowledge learned from two source domains to perform cross-regional LSM in complex large-scale areas. The method is validated through case studies in three regions located in southeastern coastal China and compared with single-source domain TL models (TCA-based models). The results demonstrate that MDACNN effectively integrates transfer knowledge from multiple source domains to learn diverse landslide-triggering mechanisms, thereby significantly reducing the prediction bias inherent to single-source domain TL models and achieving an average improvement of 16.58% across all metrics. Moreover, the landslide susceptibility maps generated by MDACNN accurately quantify the spatial distribution of landslide risks in the target area, providing a powerful scientific and technological tool for landslide disaster management and prevention.
The spatial offset of bridges has a significant impact on the safety, comfort, and durability of high-speed railway (HSR) operations, so it is crucial to rapidly and effectively detect the spatial offset of operational HSR bridges. Drive-by monitoring of bridge uneven settlement demonstrates significant potential due to its practicality, cost-effectiveness, and efficiency. However, existing drive-by methods for detecting bridge offset have limitations such as reliance on a single data source, low detection accuracy, and the inability to identify lateral deformations of bridges. This paper proposes a novel drive-by inspection method for the spatial offset of HSR bridges based on multi-source data fusion from a comprehensive inspection train. Firstly, dung beetle optimizer-variational mode decomposition was employed to achieve adaptive decomposition of non-stationary dynamic signals and explore the hidden temporal relationships in the data. Subsequently, a long short-term memory neural network was developed to achieve feature fusion of multi-source signals and accurate prediction of the spatial settlement of HSR bridges. A dataset of track irregularities and CRH380A high-speed train responses was generated using a 3D train-track-bridge interaction model, and the accuracy and effectiveness of the proposed hybrid deep learning model were numerically validated. Finally, the reliability of the proposed drive-by inspection method was further validated by analyzing actual measurement data obtained from a comprehensive inspection train. The research findings indicate that the proposed approach enables rapid and accurate detection of spatial offset in HSR bridges, ensuring their long-term operational safety.
Benthic habitat mapping is an emerging discipline in the international marine field, providing an effective tool for marine spatial planning, marine ecological management, and decision-making applications. Seabed sediment classification is one of the main components of seabed habitat mapping. In response to the impact of remote sensing imaging quality and the limitations of acoustic measurement range, where a single data source does not fully reflect the substrate type, we proposed a high-precision seabed sediment classification method that integrates data from multiple sources. Based on WorldView-2 multi-spectral remote sensing image data and multibeam bathymetry data, we constructed a random forests (RF) classifier with optimal feature selection. A seabed sediment classification experiment integrating optical and acoustic remote sensing data was carried out in the shallow water area of Wuzhizhou Island, Hainan, South China. Different seabed sediment types, such as sand, seagrass, and coral reefs, were effectively identified, with an overall classification accuracy of 92%. Experimental results show that the RF classifier with feature selection over the fused multi-source remote sensing data outperformed classifiers built on simple combinations of data sources, improving the accuracy of seabed sediment classification. Therefore, the proposed method can be effectively applied to high-precision seabed sediment classification and habitat mapping around islands and reefs.
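The classification step described above can be sketched with a random forest over fused optical and acoustic features. The data below are synthetic stand-ins for the paper's WorldView-2 bands and multibeam depths, so only the workflow, not the results, is meaningful:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins: 4 spectral-band features plus 1 depth feature
# per pixel, with 3 substrate classes (sand, seagrass, coral).
X = rng.normal(size=(300, 5))
y = rng.integers(0, 3, size=300)
X[y == 1, 4] += 2.0   # give each class some separable structure
X[y == 2, 0] += 2.0

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, y)
# Feature importances can drive the "optimal feature selection" step:
# low-importance bands are dropped before refitting.
print(rf.feature_importances_.round(2))
```

In practice each pixel's fused feature vector would concatenate spectral, texture, and bathymetry-derived attributes before training.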
Multi-source domain adaptation utilizes multiple source domains to learn knowledge and transfers it to an unlabeled target domain. To address this problem, most existing methods aim to minimize the domain shift through auxiliary distribution alignment objectives, which reduces the effect of domain-specific features. However, without explicitly modeling the domain-specific features, it is not easy to guarantee that the domain-invariant representation extracted from the input domains contains as little domain-specific information as possible. In this work, we present a different perspective on MSDA, which employs the idea of feature elimination to reduce the influence of domain-specific features. We design two different ways to extract domain-specific features and total features, and construct the domain-invariant representations by eliminating the domain-specific features from the total features. Experimental results on different domain adaptation datasets demonstrate the effectiveness of our method and the generalization ability of our model.
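One concrete (hypothetical) way to realize the feature elimination idea is to project the domain-specific directions out of the total features; the abstract does not specify this exact operator, so treat it as an illustration rather than the paper's method:

```python
import numpy as np

def eliminate(total, domain_specific):
    """Project out the domain-specific directions from the total
    features, leaving a sketch of a domain-invariant residual."""
    # Orthonormal basis of the domain-specific subspace
    Q, _ = np.linalg.qr(domain_specific.T)
    return total - (total @ Q) @ Q.T

rng = np.random.default_rng(1)
total = rng.normal(size=(8, 16))   # batch of total features
spec = rng.normal(size=(3, 16))    # 3 domain-specific directions
inv = eliminate(total, spec)
# The residual is orthogonal to every domain-specific direction.
print(np.abs(inv @ spec.T).max() < 1e-9)  # True
```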
The goal of decentralized multi-source domain adaptation is to conduct unsupervised multi-source domain adaptation in a data decentralization scenario. The challenge of data decentralization is that the source domains and target domain lack cross-domain collaboration during training. On the unlabeled target domain, the target model needs to transfer supervision knowledge with the collaboration of source models, while the domain gap will lead to limited adaptation performance from source models. On the labeled source domain, the source model tends to overfit its domain data in the data decentralization scenario, which leads to the negative transfer problem. For these challenges, we propose dual collaboration for decentralized multi-source domain adaptation by training and aggregating the local source models and local target model in collaboration with each other. On the target domain, we train the local target model by distilling supervision knowledge and fully using the unlabeled target domain data to alleviate the domain shift problem with the collaboration of local source models. On the source domain, we regularize the local source models in collaboration with the local target model to overcome the negative transfer problem. This forms a dual collaboration between the decentralized source domains and target domain, which improves the domain adaptation performance under the data decentralization scenario. Extensive experiments indicate that our method outperforms the state-of-the-art methods by a large margin on standard multi-source domain adaptation datasets.
Accurate estimation of understory terrain has significant scientific importance for maintaining ecosystem balance and biodiversity conservation. Addressing the inadequate representation of spatial heterogeneity when traditional forest topographic inversion methods treat the entire forest as the inversion unit, this study proposes a differentiated modeling approach for forest types based on refined land cover classification. Taking Puerto Rico and Maryland as study areas, a multi-dimensional feature system is constructed by integrating multi-source remote sensing data: ICESat-2 spaceborne LiDAR is used to obtain benchmark values for understory terrain, topographic factors such as slope and aspect are extracted from SRTM data, and vegetation cover characteristics are analyzed using Landsat-8 multispectral imagery. This study incorporates forest type as a classification modeling condition and applies the random forest algorithm to build differentiated topographic inversion models. Experimental results indicate that, compared to traditional whole-area modeling methods (RMSE = 5.06 m), forest type-based classification modeling significantly improves the accuracy of understory terrain estimation (RMSE = 2.94 m), validating the effectiveness of spatial heterogeneity modeling. Further sensitivity analysis reveals that canopy structure parameters (with RMSE variation reaching 4.11 m) exert a stronger regulatory effect on estimation accuracy than forest cover, providing important theoretical support for optimizing remote sensing models of forest topography.
AIM: To address the challenges of data labeling difficulties, data privacy, and the large amount of labeled data required by deep learning methods in diabetic retinopathy (DR) identification, this study aims to develop a source-free domain adaptation (SFDA) method for efficient and effective DR identification from unlabeled data. METHODS: A multi-SFDA method was proposed for DR identification. This method integrates multiple source models, trained on the same source domain, to generate synthetic pseudo labels for the unlabeled target domain. In addition, a softmax-consistency minimization term is utilized to minimize the intra-class distances between the source and target domains and maximize the inter-class distances. Validation is performed using three color fundus photograph datasets (APTOS2019, DDR, and EyePACS). RESULTS: The proposed model was evaluated and provided promising results, with F1-scores of 0.8917 and 0.9795 on the referable and normal/abnormal DR identification tasks, respectively. It demonstrated effective DR identification by minimizing intra-class distances and maximizing inter-class distances between the source and target domains. CONCLUSION: The multi-SFDA method provides an effective approach to overcoming the challenges in DR identification. It not only addresses difficulties in data labeling and privacy issues, but also reduces the need for the large amounts of labeled data required by deep learning methods, making it a practical tool for early detection and preservation of vision in diabetic patients.
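The multi-source pseudo-labeling step can be sketched by averaging the softmax outputs of several source models and keeping only confident predictions; the confidence threshold and function names here are assumptions for illustration, not values from the study:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_pseudo_labels(list_of_logits, threshold=0.8):
    """Average the softmax outputs of several source models and keep
    only confident predictions as pseudo labels (-1 = rejected)."""
    mean_prob = np.mean([softmax(l) for l in list_of_logits], axis=0)
    labels = mean_prob.argmax(axis=-1)
    labels[mean_prob.max(axis=-1) < threshold] = -1
    return labels

# Two source models agree confidently on sample 0, disagree on sample 1.
m1 = np.array([[5.0, 0.0], [0.1, 0.0]])
m2 = np.array([[4.0, 0.0], [0.0, 0.1]])
print(ensemble_pseudo_labels([m1, m2]))  # [ 0 -1]
```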
To elucidate the fracturing mechanism of deep hard rock under complex disturbance environments, this study investigates the dynamic failure behavior of pre-damaged granite subjected to multi-source dynamic disturbances. Blasting vibration monitoring was conducted in a deep-buried drill-and-blast tunnel to characterize in-situ dynamic loading conditions. Subsequently, true triaxial compression tests incorporating multi-source disturbances were performed using a self-developed wide-low-frequency true triaxial system to simulate disturbance accumulation and damage evolution in granite. The results demonstrate that combined dynamic disturbances and unloading damage significantly accelerate strength degradation and trigger shear-slip failure along preferentially oriented blast-induced fractures, with strength reductions of up to 16.7%. Layered failure was observed on the free surface of pre-damaged granite under biaxial loading, indicating a disturbance-induced fracture localization mechanism. Time-stress-fracture-energy coupling fields were constructed to reveal the spatiotemporal characteristics of fracture evolution. Critical precursor frequency bands (105-150, 185-225, and 300-325 kHz) were identified, which serve as diagnostic signatures of impending failure. A dynamic instability mechanism driven by multi-source disturbance superposition and pre-damage evolution was established. Furthermore, a grouting-based wave-absorption control strategy was proposed to mitigate deep dynamic disasters by attenuating disturbance amplitude and reducing excitation frequency.
SiO_(2) inverse opal photonic crystals (PC) with a three-dimensional macroporous structure were fabricated by the sacrificial template method, followed by infiltration of a pyrene derivative, 1-(pyren-8-yl)but-3-en-1-amine (PEA), to obtain a formaldehyde (FA)-sensitive, fluorescence-enhanced sensing film. Exploiting the specific aza-Cope rearrangement reaction between the allylamine of PEA and FA, which generates a strongly fluorescent product emitting at approximately 480 nm, we chose a PC whose blue stopband edge overlapped with the fluorescence emission wavelength. By virtue of the fluorescence enhancement derived from the slow photon effect of the PC, FA was detected with high selectivity and sensitivity. The limit of detection (LoD) was calculated to be 1.38 nmol/L. Furthermore, fast detection of FA (within 1 min) is realized thanks to the interconnected three-dimensional macroporous structure of the inverse opal PC and its high specific surface area. The prepared sensing film can be used for the detection of FA in air, aquatic products, and living cells. The close agreement between the measured indoor-air FA content and the reading of an FA detector, the recovery rate of 101.5% for detecting FA in aquatic products, and fast fluorescence imaging within 2 min in living cells demonstrate the reliability and accuracy of our method in practical applications.
Due to the development of cloud computing and machine learning, users can upload their data to the cloud for machine learning model training. However, dishonest clouds may infer user data, resulting in user data leakage. Previous schemes have achieved secure outsourced computing, but they suffer from low computational accuracy, difficulty in handling heterogeneously distributed data from multiple sources, and high computational cost, which result in an extremely poor user experience and expensive cloud computing costs. To address the above problems, we propose a multi-precision, multi-sourced, and multi-key outsourcing neural network training scheme. Firstly, we design a multi-precision functional encryption computation based on Euclidean division. Second, we design an outsourcing model training algorithm based on multi-precision functional encryption with multi-sourced heterogeneity. Finally, we conduct experiments on three datasets. The results indicate that our framework achieves an accuracy improvement of 6% to 30%. Additionally, it offers a memory space optimization of 1.0×2^(24) times compared to the previous best approach.
Accurate monitoring of track irregularities is very helpful for improving vehicle operation quality and for formulating appropriate track maintenance strategies. Existing methods have the problem that they rely on complex signal processing algorithms and lack multi-source data analysis. Driven by multi-source measurement data, including the axle box, bogie frame, and carbody accelerations, this paper proposes a track irregularities monitoring network (TIMNet) based on deep learning methods. TIMNet uses the feature extraction capability of convolutional neural networks and the sequence mapping capability of the long short-term memory model to explore the mapping relationship between vehicle accelerations and track irregularities. The particle swarm optimization algorithm is used to optimize the network parameters, so that both vertical and lateral track irregularities can be accurately identified in the time and spatial domains. The effectiveness and superiority of the proposed TIMNet are analyzed under different simulation conditions using a vehicle dynamics model. Field tests are conducted to prove the availability of the proposed TIMNet in quantitatively monitoring vertical and lateral track irregularities. Furthermore, comparative tests show that the TIMNet has a better fitting degree and timeliness in monitoring track irregularities (vertical R2 of 0.91, lateral R2 of 0.84, and a time cost of 10 ms) compared to other classical regression models. The tests also prove that the TIMNet has better anti-interference ability than other regression models.
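The particle swarm optimization step mentioned above can be illustrated with a minimal global-best PSO; the quadratic objective stands in for the network's validation loss and is not from the paper:

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Minimal global-best particle swarm optimizer, the kind of
    routine used to tune network parameters or hyperparameters."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy objective standing in for the network's validation loss.
best, val = pso(lambda p: np.sum((p - 3.0) ** 2), dim=2)
print(best.round(3))
```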
In the heterogeneous power internet of things (IoT) environment, data signals are acquired to support different business systems and realize advanced intelligent applications; these signals are massive, multi-source, and heterogeneous. Reliable perception of information and efficient transmission of energy in multi-source heterogeneous environments are crucial issues. Compressive sensing (CS), as an effective method of signal compression and transmission, can accurately recover the original signal from only very few samples. In this paper, we study a new method for multi-source heterogeneous data signal reconstruction in the power IoT based on compressive sensing. Instead of directly recovering multi-source heterogeneous signals with traditional compressive sensing, we fully use the interference subspace information to design the measurement matrix, which directly and effectively eliminates the interference while taking measurements. The measurement matrix is further optimized by minimizing its average cross-coherence, improving the reconstruction performance of the new method. Finally, the effectiveness of the new method under different parameter settings and different multi-source heterogeneous data signal cases is verified using orthogonal matching pursuit (OMP) and sparsity adaptive matching pursuit (SAMP), covering practical environments both with and without prior knowledge of signal sparsity.
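OMP, one of the two recovery algorithms used in the verification, can be sketched in a few lines of numpy; the dictionary and sparse signal below are synthetic:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 100))
A /= np.linalg.norm(A, axis=0)      # unit-norm dictionary columns
x_true = np.zeros(100)
x_true[[7, 42, 90]] = [2.0, -3.0, 1.5]
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.nonzero(x_hat)[0])
```

SAMP follows the same greedy pattern but estimates the sparsity level adaptively instead of taking it as an input.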
This paper discusses in depth the causes of gear howling noise, the identification and analysis of multi-source excitation, the transmission path of dynamic noise, simulation and experimental research, case analysis, and optimization effects, aiming to provide guidance and reference for researchers in this field.
With the accelerating intelligent transformation of energy systems, the monitoring of equipment operation status and the optimization of production processes in thermal power plants face the challenge of multi-source heterogeneous data integration. In view of the heterogeneous characteristics of physical sensor data, including the temperature, vibration, and pressure data generated by boilers, steam turbines, and other key equipment, as well as real-time working condition data from the SCADA system, this paper proposes a multi-source heterogeneous data fusion and analysis platform for thermal power plants based on edge computing and deep learning. By constructing a multi-level fusion architecture, the platform adopts a dynamic weight allocation strategy and a 5D digital twin model to realize the collaborative analysis of physical sensor data, simulation calculation results, and expert knowledge. The data fusion module combines Kalman filtering, wavelet transform, and Bayesian estimation to solve the problems of time-series alignment and dimensional differences in the data. Simulation results show that the data fusion accuracy can be improved to more than 98%, and the calculation delay can be controlled within 500 ms. The data analysis module integrates a Dymola simulation model and the AERMOD pollutant diffusion model, supporting cascade analysis of boiler combustion efficiency prediction and flue gas emission monitoring; the system response time is less than 2 s, and the data consistency verification accuracy reaches 99.5%.
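The Kalman filtering stage of the fusion module can be illustrated with a scalar (random-walk) filter; the noise parameters and readings below are illustrative, not plant data:

```python
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model, the
    classic first stage for smoothing a noisy sensor channel
    before fusing it with other sources."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                  # predict: state drifts slowly
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

# Noisy readings around a true value of 10.0 (e.g. a pressure channel)
zs = [10.3, 9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.1]
est = kalman_1d(zs)
print(est[-1])
```

With several channels filtered this way, a dynamic weight allocation step can then combine the per-channel estimates, e.g. weighting each by its inverse error variance.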
Multi-source data fusion provides the high-precision spatial situational awareness essential for analyzing granular urban social activities. This study used Shanghai's catering industry as a case study, leveraging electronic reviews and consumer data sourced from third-party restaurant platforms collected in 2021. By performing weighted processing on two-dimensional point-of-interest (POI) data, clustering hotspots of high-dimensional restaurant data were identified. A hierarchical network of restaurant hotspots was constructed following the Central Place Theory (CPT) framework, while the Geo-Informatic Tupu method was employed to resolve the challenges posed by network deformation in multi-scale processes. These findings suggest the necessity of enhancing the spatial balance of Shanghai's urban centers by moderately increasing the number and service capacity of suburban centers at the urban periphery. Such measures would contribute to a more optimized urban structure and facilitate the outward dispersion of comfort-oriented facilities such as the restaurant industry. At a finer spatial scale, the distribution of restaurant hotspots demonstrates a polycentric and symmetric spatial pattern, with a developmental trend radiating outward along the city's ring roads. This trend can be attributed to the efforts of restaurants to establish connections with other urban functional spaces, leading to the reconfiguration of urban spaces, the expansion of restaurant-dedicated land use, and the reorganization of associated commercial activities. The results validate the existence of a polycentric urban structure in Shanghai but also highlight the instability of the restaurant hotspot network during cross-scale transitions.
Taking the Ming Tombs Forest Farm in Beijing as the research object, this research applied multi-source data fusion and GIS heat-map overlay analysis techniques, systematically collecting bird observation point data from the Global Biodiversity Information Facility (GBIF), population distribution data from the Oak Ridge National Laboratory (ORNL) in the United States, as well as information on the composition of tree species in forest areas suitable for birds and the forest geographical information of the Ming Tombs Forest Farm, based on literature research and field investigations. Using GIS technology, spatial processing was carried out on bird observation points and population distribution data to identify suitable bird-watching areas in different seasons. These areas were then classified into grades according to their suitability values, from unsuitable to highly suitable. The research findings indicated significant spatial heterogeneity in the bird-watching suitability of the Ming Tombs Forest Farm. The north side of the reservoir was generally a core area with high suitability in all seasons. The deep-aged broad-leaved mixed forests supported the overlapping co-existence of the ecological niches of various bird species, such as Zosterops simplex and Urocissa erythrorhyncha. In contrast, the shallow forest-edge coniferous pure forests and mixed forests were more suitable for specialized species like Carduelis sinica. The southern urban area and the core area of the mausoleums had relatively low suitability due to ecological fragmentation or human interference. Based on these results, this paper proposed a three-level protection framework of "core area conservation, buffer zone management, and isolation zone construction" and a spatio-temporally coordinated human-bird co-existence strategy. It was also suggested that the human-bird co-existence space could be optimized through measures such as constructing sound and light buffer interfaces, restoring ecological corridors, and integrating cultural heritage elements. This research provides an operational technical approach and decision-making support for the scientific planning of bird-watching sites and the coordination of ecological protection and tourism development.
As coal mining progresses to greater depths, controlling the stability of surrounding rock in deep roadways has become an increasingly complex challenge. Although four-dimensional (4D) support theoretically offers unique advantages in maintaining the stability of rock mass, the disaster evolution processes and multi-source information response characteristics in deep roadways with 4D support remain unclear. Consequently, a large-scale physical model testing system and self-designed 4D support components were employed to conduct similarity model tests on the surrounding rock failure process under unsupported (U-1), traditional bolt-mesh-cable support (T-2), and 4D support (4D-R-3) conditions. Combined with multi-source monitoring techniques, including stress-strain, digital image correlation (DIC), acoustic emission (AE), microseismic (MS), parallel electric (PE), and electromagnetic radiation (EMR) monitoring, the mechanical behavior and multi-source information responses were comprehensively analyzed. The results show that the peak stress and displacement of the models are positively correlated with the support strength. The multi-source information exhibits distinct response characteristics under different supports. The response frequency, energy, and fluctuations of the AE, MS, and EMR signals, along with the apparent resistivity (AR) high-resistivity zone, follow the trend U-1 > T-2 > 4D-R-3. Furthermore, the multi-source information exhibits significant differences in sensitivity across different phases. The AE, MS, and EMR signals respond actively to rock mass activity in each phase, whereas the AR signals are only sensitive to fracture propagation during the plastic yield and failure phases. In summary, the 4D support significantly enhances the bearing capacity and plastic deformation of the models, while substantially reducing the frequency, energy, and fluctuations of the multi-source signals.
The viscosity of refining slags plays a critical role in metallurgical processes. However, obtaining accurate viscosity data remains challenging due to the complexities of high-temperature experiments, which often forces reliance on empirical models with limited predictive capabilities. This study focuses on the influence of optical basicity on the viscosity of CaO-Al_(2)O_(3)-based refining slags, leveraging machine learning to address data scarcity and improve prediction accuracy. An automated framework for algorithm integration, parameter tuning, and evaluation ranking (Auto-APE) is employed to develop customized data-driven models for various slag systems, including CaO-Al_(2)O_(3)-SiO_(2), CaO-Al_(2)O_(3)-CaF_(2), CaO-Al_(2)O_(3)-SiO_(2)-MgO, and CaO-Al_(2)O_(3)-SiO_(2)-MgO-CaF_(2). By incorporating optical basicity as a key feature, the models achieve average validation errors of 8.0% to 15.1%, significantly outperforming traditional empirical models. Additionally, symbolic regression is introduced to rapidly construct domain-specific features, such as optical basicity-like descriptors, offering a potential breakthrough in performance prediction for small datasets. This work highlights the critical role of domain-specific knowledge in understanding and predicting viscosity, providing a robust machine learning-based approach for optimizing refining slag properties.
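Optical basicity, the key feature discussed above, is conventionally computed as an oxygen-weighted average of the component basicities. A small sketch using commonly tabulated Duffy-Ingram values (the basicity values and the example composition are illustrative assumptions, not taken from the study):

```python
# Commonly tabulated Duffy-Ingram basicities (illustrative values)
LAMBDA = {"CaO": 1.00, "Al2O3": 0.60, "SiO2": 0.48, "MgO": 0.78}
OXYGENS = {"CaO": 1, "Al2O3": 3, "SiO2": 2, "MgO": 1}

def optical_basicity(mole_fractions):
    """Oxygen-weighted average of component basicities:
    Lambda = sum(x_i * n_i * L_i) / sum(x_i * n_i),
    where n_i is the oxygen count per formula unit."""
    num = sum(x * OXYGENS[c] * LAMBDA[c] for c, x in mole_fractions.items())
    den = sum(x * OXYGENS[c] for c, x in mole_fractions.items())
    return num / den

# Hypothetical CaO-Al2O3-SiO2 refining slag composition
slag = {"CaO": 0.5, "Al2O3": 0.4, "SiO2": 0.1}
print(round(optical_basicity(slag), 3))  # 0.693
```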
To address the issue of scarce labeled samples and operational condition variations that degrade the accuracy of fault diagnosis models in variable-condition gearbox fault diagnosis, this paper proposes a semi-supervised masked contrastive learning and domain adaptation (SSMCL-DA) method for gearbox fault diagnosis under variable conditions. Initially, during the unsupervised pre-training phase, a dual signal augmentation strategy is devised, which simultaneously applies random masking in the time domain and random scaling in the frequency domain to unlabeled samples, thereby constructing more challenging positive sample pairs to guide the encoder in learning intrinsic features robust to condition variations. Subsequently, a ConvNeXt-Transformer hybrid architecture is employed, integrating the superior local detail modeling capacity of ConvNeXt with the robust global perception capability of the Transformer to enhance feature extraction in complex scenarios. Thereafter, a contrastive learning model is constructed with the optimization objective of maximizing feature similarity across different masked instances of the same sample, enabling the extraction of consistent features from multiple masked perspectives and reducing reliance on labeled data. In the final supervised fine-tuning phase, a multi-scale attention mechanism is incorporated for feature rectification, and a domain adaptation module combining Local Maximum Mean Discrepancy (LMMD) with adversarial learning is proposed. This module embodies a dual mechanism: LMMD facilitates fine-grained class-conditional alignment, compelling features of identical fault classes to converge across varying conditions, while the domain discriminator uses adversarial training to guide the feature extractor toward learning domain-invariant features. Working in concert, they markedly diminish feature distribution discrepancies induced by changes in load, rotational speed, and other factors, thereby boosting the model's adaptability to cross-condition scenarios. Experimental evaluations on the WT planetary gearbox dataset and the Case Western Reserve University (CWRU) bearing dataset demonstrate that the SSMCL-DA model effectively identifies multiple fault classes in gearboxes, with diagnostic performance substantially surpassing that of conventional methods. Under cross-condition scenarios, the model attains fault diagnosis accuracies of 99.21% for the WT planetary gearbox and 99.86% for the bearings, respectively. Furthermore, the model exhibits stable generalization capability in cross-device settings.
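The dual signal augmentation described above (random time-domain masking plus random frequency-domain scaling) can be sketched in a few lines. This is an illustrative reading of the strategy, not the authors' code; the mask ratio and scaling range are assumed values.

```python
import numpy as np

def dual_augment(signal, mask_ratio=0.2, scale_range=(0.8, 1.2), rng=None):
    """Sketch of dual augmentation: random time-domain masking plus
    random frequency-domain scaling (parameter values are illustrative)."""
    if rng is None:
        rng = np.random.default_rng()
    x = signal.copy()
    n = len(x)
    # Time domain: zero out a random contiguous segment.
    seg = int(n * mask_ratio)
    start = rng.integers(0, n - seg + 1)
    x[start:start + seg] = 0.0
    # Frequency domain: scale the spectrum by a random factor, then invert.
    spec = np.fft.rfft(x)
    spec *= rng.uniform(*scale_range)
    return np.fft.irfft(spec, n=n)
```

Two independent augmentations of the same signal then form a positive pair for the contrastive objective.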
Funding: supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00406320) and the Institute of Information & Communications Technology Planning & Evaluation (IITP) Innovative Human Resource Development for Local Intellectualization Program grant funded by the Korea government (MSIT) (IITP-2026-RS-2023-00259678).
Abstract: Domain adaptation aims to reduce the distribution gap between the training data (source domain) and the target data. This enables effective predictions even for domains not seen during training. However, most conventional domain adaptation methods assume a single source domain, making them less suitable for modern deep learning settings that rely on diverse and large-scale datasets. To address this limitation, recent research has focused on Multi-Source Domain Adaptation (MSDA), which aims to learn effectively from multiple source domains. In this paper, we propose Efficient Domain Transition for Multi-source (EDTM), a novel and efficient framework designed to tackle two major challenges in existing MSDA approaches: (1) integrating knowledge across different source domains and (2) aligning label distributions between source and target domains. EDTM leverages an ensemble-based classifier expert mechanism to enhance the contribution of source domains that are more similar to the target domain. To further stabilize the learning process and improve performance, we incorporate imitation learning into the training of the target model. In addition, Maximum Classifier Discrepancy (MCD) is employed to align class-wise label distributions between the source and target domains. Experiments were conducted using Digits-Five, one of the most representative benchmark datasets for MSDA. The results show that EDTM consistently outperforms existing methods in terms of average classification accuracy. Notably, EDTM achieved significantly higher performance on target domains such as the Modified National Institute of Standards and Technology with blended background images (MNIST-M) and Street View House Numbers (SVHN) datasets, demonstrating enhanced generalization compared to baseline approaches. Furthermore, an ablation study analyzing the contribution of each loss component validated the effectiveness of the framework, highlighting the importance of each module in achieving optimal performance.
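The MCD component referenced in this abstract trains two classifiers to disagree on target samples while the feature extractor is trained to minimize their disagreement. A minimal numpy sketch of the discrepancy term itself (function names are ours, not the paper's implementation):

```python
import numpy as np

def softmax(logits):
    # Numerically stable row-wise softmax.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mcd_discrepancy(logits_a, logits_b):
    """Classifier discrepancy used by MCD: mean L1 distance between the
    two classifiers' softmax outputs on target samples."""
    return float(np.mean(np.abs(softmax(logits_a) - softmax(logits_b))))
```

In training, this quantity is maximized with respect to the two classifiers and minimized with respect to the shared feature extractor, pushing target features away from class boundaries.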
Funding: the National Natural Science Foundation of China (Grant Nos. 42301002 and 52109118); the Fujian Provincial Water Resources Science and Technology Project (Grant No. MSK202524); the Guidance Fund for Science and Technology Program, Fujian Province (Grant No. 2024Y0002).
Abstract: Landslide susceptibility evaluation plays an important role in disaster prevention and reduction. Feature-based transfer learning (TL) is an effective method for solving landslide susceptibility mapping (LSM) in target regions with no available samples. However, as the study area expands, the distribution of landslide types and triggering mechanisms becomes more diverse, leading to performance degradation in models relying on landslide evaluation knowledge from a single source domain due to domain feature shift. To address this, this study proposes a Multi-source Domain Adaptation Convolutional Neural Network (MDACNN), which combines the landslide prediction knowledge learned from two source domains to perform cross-regional LSM in complex large-scale areas. The method is validated through case studies in three regions located in southeastern coastal China and compared with single-source domain TL models (TCA-based models). The results demonstrate that MDACNN effectively integrates transfer knowledge from multiple source domains to learn diverse landslide-triggering mechanisms, thereby significantly reducing the prediction bias inherent to single-source domain TL models and achieving an average improvement of 16.58% across all metrics. Moreover, the landslide susceptibility maps generated by MDACNN accurately quantify the spatial distribution of landslide risks in the target area, providing a powerful scientific and technological tool for landslide disaster management and prevention.
Funding: sponsored by the National Natural Science Foundation of China (Grant No. 52178100).
Abstract: The spatial offset of bridges has a significant impact on the safety, comfort, and durability of high-speed railway (HSR) operations, so it is crucial to rapidly and effectively detect the spatial offset of operational HSR bridges. Drive-by monitoring of bridge uneven settlement demonstrates significant potential due to its practicality, cost-effectiveness, and efficiency. However, existing drive-by methods for detecting bridge offset have limitations such as reliance on a single data source, low detection accuracy, and the inability to identify lateral deformations of bridges. This paper proposes a novel drive-by inspection method for the spatial offset of HSR bridges based on multi-source data fusion from a comprehensive inspection train. Firstly, dung beetle optimizer-variational mode decomposition was employed to achieve adaptive decomposition of non-stationary dynamic signals and explore the hidden temporal relationships in the data. Subsequently, a long short-term memory neural network was developed to achieve feature fusion of multi-source signals and accurate prediction of the spatial settlement of HSR bridges. A dataset of track irregularities and CRH380A high-speed train responses was generated using a 3D train-track-bridge interaction model, and the accuracy and effectiveness of the proposed hybrid deep learning model were numerically validated. Finally, the reliability of the proposed drive-by inspection method was further validated by analyzing actual measurement data obtained from a comprehensive inspection train. The research findings indicate that the proposed approach enables rapid and accurate detection of spatial offset in HSR bridges, ensuring their long-term operational safety.
Funding: Supported by the National Natural Science Foundation of China (Nos. 42376185 and 41876111) and the Shandong Provincial Natural Science Foundation (No. ZR2023MD073).
Abstract: Benthic habitat mapping is an emerging discipline in the international marine field in recent years, providing an effective tool for marine spatial planning, marine ecological management, and decision-making applications. Seabed sediment classification is one of the main contents of seabed habitat mapping. In response to the impact of remote sensing imaging quality and the limitations of acoustic measurement range, where a single data source does not fully reflect the substrate type, we proposed a high-precision seabed habitat sediment classification method that integrates data from multiple sources. Based on WorldView-2 multi-spectral remote sensing image data and multibeam bathymetry data, we constructed a random forest (RF) classifier with optimal feature selection. A seabed sediment classification experiment integrating optical remote sensing and acoustic remote sensing data was carried out in the shallow water area of Wuzhizhou Island, Hainan, South China. Different seabed sediment types, such as sand, seagrass, and coral reefs, were effectively identified, with an overall classification accuracy of 92%. Experimental results show that the RF classifier optimized by fusing multi-source remote sensing data for feature selection performed better than classifiers trained on simple combinations of data sources, improving the accuracy of seabed sediment classification. Therefore, the method proposed in this paper can be effectively applied to high-precision seabed sediment classification and habitat mapping around islands and reefs.
Funding: supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 61876130 and 61932009).
Abstract: Multi-source domain adaptation (MSDA) utilizes multiple source domains to learn knowledge and transfers it to an unlabeled target domain. To address the problem, most existing methods aim to minimize the domain shift through auxiliary distribution alignment objectives, which reduces the effect of domain-specific features. However, without explicitly modeling the domain-specific features, it is not easy to guarantee that the domain-invariant representation extracted from the input domains contains as little domain-specific information as possible. In this work, we present a different perspective on MSDA, which employs the idea of feature elimination to reduce the influence of domain-specific features. We design two different ways to extract domain-specific features and total features, and construct the domain-invariant representations by eliminating the domain-specific features from the total features. Experimental results on different domain adaptation datasets demonstrate the effectiveness of our method and the generalization ability of our model.
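The elimination step can be read as removing the domain-specific component from the total feature. One simple interpretation, assuming the features are vectors and using an orthogonal projection (the paper's exact elimination operator may differ):

```python
import numpy as np

def eliminate_domain_features(total_feat, domain_feat):
    """Remove the component of each total feature that lies along the
    corresponding domain-specific feature direction (projection sketch)."""
    # Unit vectors along the domain-specific directions, row-wise.
    d = domain_feat / (np.linalg.norm(domain_feat, axis=1, keepdims=True) + 1e-12)
    # Scalar projection of each total feature onto its domain direction.
    coeff = np.sum(total_feat * d, axis=1, keepdims=True)
    # Subtract the projected (domain-specific) component.
    return total_feat - coeff * d
```

The output is orthogonal to the domain-specific direction for every sample, which is one concrete way to make the representation carry less domain-specific information.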
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61876130 and 61932009) and the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study.
文摘The goal of decentralized multi-source domain adaptation is to conduct unsupervised multi-source domain adaptation in a data decentralization scenario. The challenge of data decentralization is that the source domains and target domain lack cross-domain collaboration during training. On the unlabeled target domain, the target model needs to transfer supervision knowledge with the collaboration of source models, while the domain gap will lead to limited adaptation performance from source models. On the labeled source domain, the source model tends to overfit its domain data in the data decentralization scenario, which leads to the negative transfer problem. For these challenges, we propose dual collaboration for decentralized multi-source domain adaptation by training and aggregating the local source models and local target model in collaboration with each other. On the target domain, we train the local target model by distilling supervision knowledge and fully using the unlabeled target domain data to alleviate the domain shift problem with the collaboration of local source models. On the source domain, we regularize the local source models in collaboration with the local target model to overcome the negative transfer problem. This forms a dual collaboration between the decentralized source domains and target domain, which improves the domain adaptation performance under the data decentralization scenario. Extensive experiments indicate that our method outperforms the state-of-the-art methods by a large margin on standard multi-source domain adaptation datasets.
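Transferring supervision from local source models to the local target model, as described above, is commonly done by distilling softened predictions. A hedged sketch of such a distillation loss (the temperature value is illustrative, not the paper's):

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled, numerically stable row-wise softmax.
    z = z / t
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def distill_loss(student_logits, teacher_logits, t=2.0):
    """Soft-label distillation: KL(teacher || student) at temperature t,
    one common way to transfer source-model knowledge to a target model."""
    p = softmax(teacher_logits, t)  # teacher soft labels
    q = softmax(student_logits, t)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)))
```

In the decentralized setting, each source model would act as a teacher on the unlabeled target data, and the target model minimizes a (possibly weighted) sum of such terms.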
Funding: Supported by the National Natural Science Foundation of China (42401488, 42071351); the National Key Research and Development Program of China (2020YFA0608501, 2017YFB0504204); the Liaoning Revitalization Talents Program (XLYC1802027); the Talent Recruited Program of the Chinese Academy of Sciences (Y938091); the Project Supported Discipline Innovation Team of Liaoning Technical University (LNTU20TD-23); the Liaoning Province Doctoral Research Initiation Fund Program (2023-BS-202); the Basic Research Projects of the Liaoning Department of Education (JYTQN2023202).
Abstract: Accurate estimation of understory terrain has significant scientific importance for maintaining ecosystem balance and biodiversity conservation. Addressing the inadequate representation of spatial heterogeneity when traditional forest topographic inversion methods treat the entire forest as the inversion unit, this study proposes a differentiated modeling approach for forest types based on refined land cover classification. Taking Puerto Rico and Maryland as study areas, a multi-dimensional feature system is constructed by integrating multi-source remote sensing data: ICESat-2 spaceborne LiDAR is used to obtain benchmark values for understory terrain, topographic factors such as slope and aspect are extracted based on SRTM data, and vegetation cover characteristics are analyzed using Landsat-8 multispectral imagery. This study incorporates forest type as a classification modeling condition and applies the random forest algorithm to build differentiated topographic inversion models. Experimental results indicate that, compared to traditional whole-area modeling methods (RMSE = 5.06 m), forest type-based classification modeling significantly improves the accuracy of understory terrain estimation (RMSE = 2.94 m), validating the effectiveness of spatial heterogeneity modeling. Further sensitivity analysis reveals that canopy structure parameters (with RMSE variation reaching 4.11 m) exert a stronger regulatory effect on estimation accuracy than forest cover, providing important theoretical support for optimizing remote sensing models of forest topography.
Funding: Supported by the Fund for the Shanxi "1331 Project"; the Fundamental Research Program of Shanxi Province (No. 202203021211006); the Key Research and Development Program of Shanxi Province (No. 201903D311009); the Key Research Program of Taiyuan University (No. 21TYKZ01); the Open Fund of the Shanxi Province Key Laboratory of Ophthalmology (No. 2023SXKLOS04); the Shenzhen Fund for Guangdong Provincial High-Level Clinical Key Specialties (No. SZGSP014); the Sanming Project of Medicine in Shenzhen (No. SZSM202311012); and the Shenzhen Science and Technology Planning Project (No. KCXFZ20211020163813019).
Abstract: AIM: To address the challenges of data labeling difficulty, data privacy, and the large amount of labeled data required by deep learning methods in diabetic retinopathy (DR) identification, this study aims to develop a source-free domain adaptation (SFDA) method for efficient and effective DR identification from unlabeled data. METHODS: A multi-SFDA method was proposed for DR identification. This method integrates multiple source models, trained on the same source domain, to generate synthetic pseudo labels for the unlabeled target domain. In addition, a softmax-consistency minimization term is utilized to minimize the intra-class distances between the source and target domains and maximize the inter-class distances. Validation is performed using three color fundus photograph datasets (APTOS2019, DDR, and EyePACS). RESULTS: The proposed model was evaluated and provided promising results, with F1-scores of 0.8917 and 0.9795 on the referable and normal/abnormal DR identification tasks, respectively. It demonstrated effective DR identification by minimizing intra-class distances and maximizing inter-class distances between the source and target domains. CONCLUSION: The multi-SFDA method provides an effective approach to overcoming the challenges in DR identification. The method not only addresses difficulties in data labeling and privacy issues, but also reduces the need for the large amounts of labeled data required by deep learning methods, making it a practical tool for early detection and preservation of vision in diabetic patients.
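Generating synthetic pseudo labels from multiple source models usually amounts to averaging their class probabilities and keeping only confident predictions. A minimal sketch under that assumption (the confidence threshold is illustrative, not the paper's value):

```python
import numpy as np

def ensemble_pseudo_labels(probs_per_source, threshold=0.9):
    """Average per-source class probabilities, then keep only predictions
    whose averaged confidence clears the threshold."""
    avg = np.mean(probs_per_source, axis=0)   # (n_samples, n_classes)
    labels = avg.argmax(axis=1)               # pseudo label per sample
    keep = avg.max(axis=1) >= threshold       # confidence mask
    return labels, keep
```

Only the retained samples would then feed the target model's self-training step; the rest stay unlabeled.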
Funding: supported by the National Key R&D Program of China (No. 2023YFB2603602) and the National Natural Science Foundation of China (Nos. 52222810 and 52178383).
Abstract: To elucidate the fracturing mechanism of deep hard rock under complex disturbance environments, this study investigates the dynamic failure behavior of pre-damaged granite subjected to multi-source dynamic disturbances. Blasting vibration monitoring was conducted in a deep-buried drill-and-blast tunnel to characterize in-situ dynamic loading conditions. Subsequently, true triaxial compression tests incorporating multi-source disturbances were performed using a self-developed wide-low-frequency true triaxial system to simulate disturbance accumulation and damage evolution in granite. The results demonstrate that combined dynamic disturbances and unloading damage significantly accelerate strength degradation and trigger shear-slip failure along preferentially oriented blast-induced fractures, with strength reductions of up to 16.7%. Layered failure was observed on the free surface of pre-damaged granite under biaxial loading, indicating a disturbance-induced fracture localization mechanism. Time-stress-fracture-energy coupling fields were constructed to reveal the spatiotemporal characteristics of fracture evolution. Critical precursor frequency bands (105-150, 185-225, and 300-325 kHz) were identified, which serve as diagnostic signatures of impending failure. A dynamic instability mechanism driven by multi-source disturbance superposition and pre-damage evolution was established. Furthermore, a grouting-based wave-absorption control strategy was proposed to mitigate deep dynamic disasters by attenuating disturbance amplitude and reducing excitation frequency.
Funding: supported by the National Natural Science Foundation of China (21663032 and 22061041); the Open Sharing Platform for Scientific and Technological Resources of Shaanxi Province (2021PT-004); and the National Innovation and Entrepreneurship Training Program for College Students of China (S202110719044).
Abstract: The SiO_(2) inverse opal photonic crystals (PC) with a three-dimensional macroporous structure were fabricated by the sacrificial template method, followed by infiltration of a pyrene derivative, 1-(pyren-8-yl)but-3-en-1-amine (PEA), to achieve a formaldehyde (FA)-sensitive, fluorescence-enhanced sensing film. Utilizing the specific aza-Cope rearrangement reaction between the allylamine of PEA and FA, which generates a strongly fluorescent product emitting at approximately 480 nm, we chose a PC whose blue stopband edge overlapped with the fluorescence emission wavelength. By virtue of the fluorescence enhancement derived from the slow photon effect of the PC, FA was detected with high selectivity and sensitivity. The limit of detection (LoD) was calculated to be 1.38 nmol/L. Furthermore, fast detection of FA (within 1 min) is realized owing to the interconnected three-dimensional macroporous structure of the inverse opal PC and its high specific surface area. The prepared sensing film can be used for the detection of FA in air, aquatic products, and living cells. The indoor-air FA content very close to the result from an FA detector, the recovery rate of 101.5% for detecting FA in aquatic products, and fast fluorescence imaging within 2 min for living cells demonstrate the reliability and accuracy of our method in practical applications.
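A calculated LoD like the one reported here typically follows the standard 3-sigma convention, LoD = 3·σ(blank) / calibration slope. A small sketch with made-up calibration numbers (not the paper's data):

```python
import numpy as np

def limit_of_detection(blank_signals, slope):
    """Standard 3-sigma estimate: LoD = 3 * std(blank) / calibration slope.
    blank_signals: repeated blank measurements; slope: signal per unit
    concentration from the calibration curve (values here are illustrative)."""
    return 3.0 * np.std(blank_signals, ddof=1) / slope
```

With real data, the slope would come from a linear fit of fluorescence intensity against FA concentration.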
Funding: supported by the Natural Science Foundation of China (Nos. 62303126 and 62362008; author Z.Z, https://www.nsfc.gov.cn/, accessed on 20 December 2024); the Major Scientific and Technological Special Project of Guizhou Province ([2024]014); the Guizhou Provincial Science and Technology Projects (No. ZK[2022]General149; author Z.Z, https://kjt.guizhou.gov.cn/, accessed on 20 December 2024); the Open Project of the Key Laboratory of Computing Power Network and Information Security, Ministry of Education, under Grant 2023ZD037 (author Z.Z, https://www.gzu.edu.cn/, accessed on 20 December 2024); and the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT2024B25; author Z.Z, https://www.gzu.edu.cn/, accessed on 20 December 2024).
Abstract: Due to the development of cloud computing and machine learning, users can upload their data to the cloud for machine learning model training. However, dishonest clouds may infer user data, resulting in user data leakage. Previous schemes have achieved secure outsourced computing, but they suffer from low computational accuracy, difficulty handling heterogeneous distributions of data from multiple sources, and high computational cost, which result in an extremely poor user experience and expensive cloud computing costs. To address the above problems, we propose a multi-precision, multi-sourced, and multi-key outsourcing neural network training scheme. Firstly, we design a multi-precision functional encryption computation based on Euclidean division. Second, we design an outsourcing model training algorithm based on multi-precision functional encryption with multi-sourced heterogeneity. Finally, we conduct experiments on three datasets. The results indicate that our framework achieves an accuracy improvement of 6% to 30%. Additionally, it offers a memory space optimization of 1.0×2^(24) times compared to the previous best approach.
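The "Euclidean division" idea underlying the multi-precision computation can be illustrated by the decomposition step alone: a large value is split into small base-B digits via repeated divmod so that each digit fits a limited plaintext space, and the digits are recombined after per-digit processing. The encryption layer itself is omitted here; this is only a sketch of the arithmetic.

```python
def to_digits(x, base, n_digits):
    """Split non-negative integer x into n_digits base-`base` digits via
    repeated Euclidean division (least-significant digit first)."""
    digits = []
    for _ in range(n_digits):
        x, r = divmod(x, base)
        digits.append(r)
    return digits

def from_digits(digits, base):
    """Recombine base-`base` digits back into the original integer."""
    return sum(d * base**i for i, d in enumerate(digits))
```

Each digit can then be encrypted and operated on independently, which is what keeps per-ciphertext precision requirements small.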
Funding: supported by the Sichuan Science and Technology Program (Nos. 2024JDRC0100 and 2023YFQ0091); the National Natural Science Foundation of China (Nos. U21A20167 and 52475138); and the Scientific Research Foundation of the State Key Laboratory of Rail Transit Vehicle System (No. 2024RVL-T08).
Abstract: Accurate monitoring of track irregularities is very helpful for improving vehicle operation quality and formulating appropriate track maintenance strategies. Existing methods rely on complex signal processing algorithms and lack multi-source data analysis. Driven by multi-source measurement data, including the axle box, bogie frame, and carbody accelerations, this paper proposes a track irregularities monitoring network (TIMNet) based on deep learning methods. TIMNet uses the feature extraction capability of convolutional neural networks and the sequence mapping capability of the long short-term memory model to explore the mapping relationship between vehicle accelerations and track irregularities. The particle swarm optimization algorithm is used to optimize the network parameters, so that both vertical and lateral track irregularities can be accurately identified in the time and spatial domains. The effectiveness and superiority of the proposed TIMNet are analyzed under different simulation conditions using a vehicle dynamics model. Field tests are conducted to prove the availability of the proposed TIMNet in quantitatively monitoring vertical and lateral track irregularities. Furthermore, comparative tests show that TIMNet has a better fitting degree and timeliness in monitoring track irregularities (vertical R2 of 0.91, lateral R2 of 0.84, and a time cost of 10 ms) compared to other classical regression models. The tests also prove that TIMNet has better anti-interference ability than other regression models.
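Particle swarm optimization, used above to tune the network parameters, is straightforward to sketch. The inertia and acceleration coefficients below are common defaults, not values from the paper, and the objective here is a stand-in for the network's validation loss:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization (common default coefficients)."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a setting like TIMNet's, `objective` would train or evaluate the network for a candidate hyperparameter vector and return its validation error.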
Funding: supported by the National Natural Science Foundation of China (12174350) and the Science and Technology Project of State Grid Henan Electric Power Company (5217Q0240008).
Abstract: In the heterogeneous power internet of things (IoT) environment, data signals are acquired to support different business systems and enable advanced intelligent applications; these signals are massive, multi-source, and heterogeneous. Reliable perception of information and efficient transmission of energy in multi-source heterogeneous environments are crucial issues. Compressive sensing (CS), an effective method of signal compression and transmission, can accurately recover the original signal from very few samples. In this paper, we study a new method for multi-source heterogeneous data signal reconstruction in the power IoT based on compressive sensing. Rather than using traditional compressive sensing to directly recover multi-source heterogeneous signals, we make full use of the interference subspace information to design the measurement matrix, which directly and effectively eliminates the interference while taking the measurements. The measurement matrix is further optimized by minimizing its average cross-coherence, which improves the reconstruction performance of the new method. Finally, the effectiveness of the new method with different parameter settings under different multi-source heterogeneous data signal cases is verified using orthogonal matching pursuit (OMP) and sparsity adaptive matching pursuit (SAMP), covering the practical cases with and without prior knowledge of signal sparsity.
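OMP, one of the two recovery algorithms used in the experiments above, greedily selects the dictionary column most correlated with the current residual and then re-fits the coefficients by least squares. A compact numpy sketch (a textbook version, not the paper's implementation):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a `sparsity`-sparse x from y = A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(sparsity):
        # Column of A most correlated with the residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-fit over the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

SAMP follows the same matching-pursuit idea but grows the support adaptively, which is why it needs no prior knowledge of the sparsity level.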
Abstract: This paper discusses in depth the causes of gear howling noise, the identification and analysis of multi-source excitation, the transmission paths of dynamic noise, simulation and experimental research, case analyses, and optimization effects, aiming to provide guidance and reference for relevant researchers.
Abstract: With the accelerating intelligent transformation of energy systems, the monitoring of equipment operation status and the optimization of production processes in thermal power plants face the challenge of multi-source heterogeneous data integration. In view of the heterogeneous characteristics of physical sensor data, including the temperature, vibration, and pressure generated by boilers, steam turbines, and other key equipment, together with real-time working condition data from the SCADA system, this paper proposes a multi-source heterogeneous data fusion and analysis platform for thermal power plants based on edge computing and deep learning. By constructing a multi-level fusion architecture, the platform adopts a dynamic weight allocation strategy and a 5D digital twin model to realize the collaborative analysis of physical sensor data, simulation calculation results, and expert knowledge. The data fusion module combines the Kalman filter, wavelet transform, and Bayesian estimation to solve the problems of time series alignment and dimension differences in the data. Simulation results show that the data fusion accuracy can be improved to more than 98%, and the calculation delay can be controlled within 500 ms. The data analysis module integrates a Dymola simulation model and the AERMOD pollutant diffusion model, supporting cascade analysis of boiler combustion efficiency prediction and flue gas emission monitoring, with a system response time of less than 2 seconds and a data consistency verification accuracy of 99.5%.
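Of the three fusion tools named above (Kalman filter, wavelet transform, Bayesian estimation), the Kalman building block is the easiest to show: inverse-variance fusion of two noisy sensor readings of the same quantity. Sensor values and variances below are illustrative:

```python
def fuse_measurements(z1, var1, z2, var2):
    """Static Kalman (inverse-variance) fusion of two noisy readings of the
    same quantity; the fused variance is always below either input variance."""
    k = var1 / (var1 + var2)             # gain toward the second sensor
    fused = z1 + k * (z2 - z1)           # variance-weighted estimate
    fused_var = var1 * var2 / (var1 + var2)
    return fused, fused_var
```

With equal variances this reduces to a plain average; when one sensor is much noisier, its reading is almost ignored, which is the behavior a dynamic weight allocation strategy relies on.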
Funding: Under the auspices of the Key Program of the National Natural Science Foundation of China (No. 42030409).
Abstract: Multi-source data fusion provides the high-precision spatial situational awareness essential for analyzing granular urban social activities. This study used Shanghai's catering industry as a case study, leveraging electronic reviews and consumer data sourced from third-party restaurant platforms collected in 2021. By performing weighted processing on two-dimensional point-of-interest (POI) data, clustering hotspots of high-dimensional restaurant data were identified. A hierarchical network of restaurant hotspots was constructed following the Central Place Theory (CPT) framework, while the Geo-Informatic Tupu method was employed to resolve the challenges posed by network deformation in multi-scale processes. These findings suggest the necessity of enhancing the spatial balance of Shanghai's urban centers by moderately increasing the number and service capacity of suburban centers at the urban periphery. Such measures would contribute to a more optimized urban structure and facilitate the outward dispersion of comfort-oriented facilities such as the restaurant industry. At a finer spatial scale, the distribution of restaurant hotspots demonstrates a polycentric and symmetric spatial pattern, with a developmental trend radiating outward along the city's ring roads. This trend can be attributed to the efforts of restaurants to establish connections with other urban functional spaces, leading to the reconfiguration of urban spaces, the expansion of restaurant-dedicated land use, and the reorganization of associated commercial activities. The results validate the existence of a polycentric urban structure in Shanghai but also highlight the instability of the restaurant hotspot network during cross-scale transitions.
Funding: Sponsored by the Beijing Youth Innovation Talent Support Program for Urban Greening and Landscaping: The 2024 Special Project for Promoting High-Quality Development of Beijing's Landscaping through Scientific and Technological Innovation (KJCXQT202410).
Abstract: Taking the Ming Tombs Forest Farm in Beijing as the research object, this research applied multi-source data fusion and GIS heat-map overlay analysis techniques. It systematically collected bird observation point data from the Global Biodiversity Information Facility (GBIF), population distribution data from the Oak Ridge National Laboratory (ORNL) in the United States, and information on the composition of tree species in bird-suitable forest areas together with the forest geographical information of the Ming Tombs Forest Farm, based on literature research and field investigations. Using GIS technology, spatial processing was carried out on the bird observation points and population distribution data to identify suitable bird-watching areas in different seasons; according to the suitability value range, these areas were then classified into grades ranging from unsuitable to highly suitable. The findings indicated significant spatial heterogeneity in the bird-watching suitability of the Ming Tombs Forest Farm. The north side of the reservoir was a core area of high suitability in all seasons. The mature broad-leaved mixed forests of the forest interior supported the overlapping co-existence of the ecological niches of bird species such as Zosterops simplex and Urocissa erythrorhyncha, whereas the pure coniferous forests and mixed forests at the forest edge were more suitable for specialized species such as Carduelis sinica. The southern urban area and the core area of the mausoleums had relatively low suitability due to ecological fragmentation or human interference. Based on these results, this paper proposed a three-level protection framework of "core-area conservation, buffer-zone management, and isolation-zone construction" and a spatio-temporally coordinated human-bird co-existence strategy. It was also suggested that the human-bird co-existence space could be optimized through measures such as constructing sound and light buffer interfaces, restoring ecological corridors, and integrating cultural heritage elements. This research provides an operational technical approach and decision-making support for the scientific planning of bird-watching sites and the coordination of ecological protection and tourism development.
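The overlay-and-grade step described above can be sketched numerically. The following is a minimal illustration, not the paper's actual GIS workflow: the layer names, weights, and grade thresholds are hypothetical stand-ins for whatever suitability model the authors used, and rasters are represented as plain NumPy arrays.

```python
import numpy as np

def suitability_grades(bird_density, pop_density, forest_score,
                       weights=(0.5, 0.3, 0.2),
                       thresholds=(0.25, 0.5, 0.75)):
    """Overlay normalized raster layers into a graded suitability map.

    bird_density  : raster of kernel-density values from observation points
    pop_density   : raster of human population density (disturbance proxy)
    forest_score  : raster scoring tree-species composition for birds
    Returns an integer raster: 0 = unsuitable ... 3 = highly suitable.
    All weights and thresholds here are illustrative assumptions.
    """
    def norm(layer):
        rng = layer.max() - layer.min()
        return (layer - layer.min()) / rng if rng else np.zeros_like(layer)

    w_bird, w_pop, w_forest = weights
    # Human density lowers suitability, so its normalized layer is inverted.
    score = (w_bird * norm(bird_density)
             + w_pop * (1.0 - norm(pop_density))
             + w_forest * norm(forest_score))
    # Bin the continuous score into discrete suitability grades.
    return np.digitize(score, thresholds)
```

In a real application each input would be a seasonal raster resampled to a common grid, and the classified output would feed directly into the zoning framework described in the abstract.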
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U22A20598 and 52104107), the "Qinglan Project" of Jiangsu Colleges and Universities, and the Young Elite Scientists Sponsorship Program of Jiangsu Province (Grant No. TJ-2023-086).
Abstract: As coal mining progresses to greater depths, controlling the stability of the surrounding rock in deep roadways has become an increasingly complex challenge. Although four-dimensional (4D) support theoretically offers unique advantages in maintaining rock mass stability, the disaster evolution processes and multi-source information response characteristics of deep roadways under 4D support remain unclear. Consequently, a large-scale physical model testing system and self-designed 4D support components were employed to conduct similarity model tests on the surrounding rock failure process under unsupported (U-1), traditional bolt-mesh-cable support (T-2), and 4D support (4D-R-3) conditions. Combined with multi-source monitoring techniques, including stress-strain, digital image correlation (DIC), acoustic emission (AE), microseismic (MS), parallel electric (PE), and electromagnetic radiation (EMR) monitoring, the mechanical behavior and multi-source information responses were comprehensively analyzed. The results show that the peak stress and displacement of the models are positively correlated with support strength. The multi-source information exhibits distinct response characteristics under the different supports: the response frequency, energy, and fluctuations of the AE, MS, and EMR signals, along with the apparent resistivity (AR) high-resistivity zone, follow the trend U-1 > T-2 > 4D-R-3. Furthermore, the multi-source information exhibits significant differences in sensitivity across phases. The AE, MS, and EMR signals respond actively to rock mass activity in every phase, whereas the AR signals are sensitive only to fracture propagation during the plastic yield and failure phases. In summary, 4D support significantly enhances the bearing capacity and plastic deformation of the models while substantially reducing the frequency, energy, and fluctuations of the multi-source signals.
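The "frequency, energy, and fluctuation" statistics compared across the three support conditions can be illustrated with a toy calculation. This is a simplified sketch on synthetic waveforms, not the study's actual AE processing chain: real AE hit detection uses hit-definition and dead-time windows, and the threshold here is an arbitrary assumption.

```python
import numpy as np

def ae_summary(signal, threshold):
    """Summarize an acoustic-emission (AE) waveform.

    Event count: samples whose absolute amplitude exceeds `threshold`
    (a simplified hit-detection rule). Energy: sum of squared amplitudes.
    """
    events = int(np.sum(np.abs(signal) > threshold))
    energy = float(np.sum(signal ** 2))
    return events, energy

# Synthetic illustration of the reported trend: the unsupported model (U-1)
# emits stronger, more frequent signals than the 4D-supported one (4D-R-3).
rng = np.random.default_rng(0)
u1 = 2.0 * rng.standard_normal(1000)   # large-amplitude emissions
r3 = 0.5 * rng.standard_normal(1000)   # damped emissions under 4D support
e_u1, en_u1 = ae_summary(u1, 1.0)
e_r3, en_r3 = ae_summary(r3, 1.0)
assert e_u1 > e_r3 and en_u1 > en_r3   # trend U-1 > 4D-R-3
```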
Funding: Supported by the National Key Research and Development Program of China (No. 2023YFB3712401), the National Natural Science Foundation of China (No. 52274301), the Aeronautical Science Foundation of China (No. 2023Z0530S6005), and the Ningbo Yongjiang Talent-Introduction Programme (No. 2022A-023-C).
Abstract: The viscosity of refining slags plays a critical role in metallurgical processes. However, obtaining accurate viscosity data remains challenging due to the complexity of high-temperature experiments, so practice often relies on empirical models with limited predictive capability. This study focuses on the influence of optical basicity on the viscosity of CaO-Al₂O₃-based refining slags, leveraging machine learning to address data scarcity and improve prediction accuracy. An automated framework for algorithm integration, parameter tuning, and evaluation ranking (Auto-APE) is employed to develop customized data-driven models for various slag systems, including CaO-Al₂O₃-SiO₂, CaO-Al₂O₃-CaF₂, CaO-Al₂O₃-SiO₂-MgO, and CaO-Al₂O₃-SiO₂-MgO-CaF₂. By incorporating optical basicity as a key feature, the models achieve average validation errors of 8.0% to 15.1%, significantly outperforming traditional empirical models. Additionally, symbolic regression is introduced to rapidly construct domain-specific features, such as optical-basicity-like descriptors, offering a potential breakthrough in property prediction for small datasets. This work highlights the critical role of domain-specific knowledge in understanding and predicting viscosity, providing a robust machine-learning-based approach for optimizing refining slag properties.
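The optical-basicity feature at the heart of this abstract is conventionally computed as an oxygen-weighted average of tabulated per-oxide basicities. A minimal sketch follows, using the commonly cited Duffy-Ingram values; the abstract does not state the paper's exact feature definition, so both the Λ values and the example composition are illustrative assumptions.

```python
# Commonly tabulated (Duffy-Ingram) optical-basicity values for slag oxides.
LAMBDA = {"CaO": 1.00, "Al2O3": 0.60, "SiO2": 0.48, "MgO": 0.78}
# Oxygen atoms contributed per formula unit of each component.
N_OXYGEN = {"CaO": 1, "Al2O3": 3, "SiO2": 2, "MgO": 1}

def optical_basicity(mole_fractions):
    """Oxygen-weighted average of the components' optical basicities."""
    total_o = sum(x * N_OXYGEN[c] for c, x in mole_fractions.items())
    return sum(x * N_OXYGEN[c] * LAMBDA[c]
               for c, x in mole_fractions.items()) / total_o

# Hypothetical CaO-Al2O3-SiO2 refining slag composition (mole fractions).
slag = {"CaO": 0.5, "Al2O3": 0.4, "SiO2": 0.1}
lam = optical_basicity(slag)  # ≈ 0.69 for this composition
```

A scalar feature like `lam` would then be appended to the composition and temperature columns of the training table before model fitting in a pipeline such as Auto-APE.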
Funding: Supported by the National Natural Science Foundation of China (Project: Research on Robust Adaptive Allocation Mechanism of Human-Machine Co-Driving System Based on NMS Features; Grant No. 52172381).
Abstract: To address the scarcity of labeled samples and the operating-condition variations that degrade the accuracy of fault diagnosis models for gearboxes, this paper proposes a semi-supervised masked contrastive learning and domain adaptation (SSMCL-DA) method for gearbox fault diagnosis under variable conditions. First, during the unsupervised pre-training phase, a dual signal augmentation strategy is devised that simultaneously applies random masking in the time domain and random scaling in the frequency domain to unlabeled samples, constructing more challenging positive sample pairs that guide the encoder to learn intrinsic features robust to condition variations. Second, a ConvNeXt-Transformer hybrid architecture is employed, integrating the superior local detail modeling capacity of ConvNeXt with the strong global perception capability of the Transformer to enhance feature extraction in complex scenarios. Third, a contrastive learning model is constructed with the optimization objective of maximizing feature similarity across different masked instances of the same sample, enabling the extraction of consistent features from multiple masked perspectives and reducing reliance on labeled data. In the final supervised fine-tuning phase, a multi-scale attention mechanism is incorporated for feature rectification, and a domain adaptation module combining Local Maximum Mean Discrepancy (LMMD) with adversarial learning is proposed. This module embodies a dual mechanism: LMMD performs fine-grained class-conditional alignment, compelling features of identical fault classes to converge across varying conditions, while the domain discriminator uses adversarial training to guide the feature extractor toward domain-invariant features. Working in concert, they markedly diminish the feature distribution discrepancies induced by changes in load, rotational speed, and other factors, thereby improving the model's adaptability to cross-condition scenarios. Experimental evaluations on the WT planetary gearbox dataset and the Case Western Reserve University (CWRU) bearing dataset demonstrate that the SSMCL-DA model effectively identifies multiple gearbox fault classes, with diagnostic performance substantially surpassing that of conventional methods. Under cross-condition scenarios, the model attains fault diagnosis accuracies of 99.21% on the WT planetary gearbox and 99.86% on the bearings. Furthermore, the model exhibits stable generalization capability in cross-device settings.
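The dual augmentation that builds positive pairs for the contrastive pre-training stage can be sketched as follows. This is a minimal NumPy illustration under assumed parameters (mask fraction, scaling range, single-channel signal); the abstract does not specify SSMCL-DA's actual augmentation settings.

```python
import numpy as np

def augment(signal, rng, mask_frac=0.2, scale_range=(0.8, 1.2)):
    """Dual augmentation: random time-domain masking plus random
    frequency-domain scaling, producing one 'view' of the sample."""
    x = signal.copy()
    # Time domain: zero out a random contiguous segment (random masking).
    n = len(x)
    m = int(mask_frac * n)
    start = rng.integers(0, n - m)
    x[start:start + m] = 0.0
    # Frequency domain: scale the spectrum by a random factor.
    spec = np.fft.rfft(x) * rng.uniform(*scale_range)
    return np.fft.irfft(spec, n=n)

rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0, 8 * np.pi, 512))  # stand-in vibration signal
view_a, view_b = augment(sig, rng), augment(sig, rng)  # one positive pair
```

In the pre-training loop, `view_a` and `view_b` would be fed through the shared encoder and pulled together by the contrastive objective, while views of different samples act as negatives.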