The spatial offset of bridges has a significant impact on the safety, comfort, and durability of high-speed railway (HSR) operations, so it is crucial to rapidly and effectively detect the spatial offset of operational HSR bridges. Drive-by monitoring of bridge uneven settlement demonstrates significant potential due to its practicality, cost-effectiveness, and efficiency. However, existing drive-by methods for detecting bridge offset have limitations such as reliance on a single data source, low detection accuracy, and the inability to identify lateral deformations of bridges. This paper proposes a novel drive-by inspection method for the spatial offset of HSR bridges based on multi-source data fusion of a comprehensive inspection train. Firstly, dung beetle optimizer-variational mode decomposition was employed to achieve adaptive decomposition of non-stationary dynamic signals and to explore the hidden temporal relationships in the data. Subsequently, a long short-term memory neural network was developed to achieve feature fusion of multi-source signals and accurate prediction of the spatial settlement of HSR bridges. A dataset of track irregularities and CRH380A high-speed train responses was generated using a 3D train-track-bridge interaction model, and the accuracy and effectiveness of the proposed hybrid deep learning model were numerically validated. Finally, the reliability of the proposed drive-by inspection method was further validated by analyzing actual measurement data obtained from a comprehensive inspection train. The research findings indicate that the proposed approach enables rapid and accurate detection of spatial offset in HSR bridges, ensuring the long-term operational safety of HSR bridges.
Domain adaptation aims to reduce the distribution gap between the training data (source domain) and the target data. This enables effective predictions even for domains not seen during training. However, most conventional domain adaptation methods assume a single source domain, making them less suitable for modern deep learning settings that rely on diverse and large-scale datasets. To address this limitation, recent research has focused on Multi-Source Domain Adaptation (MSDA), which aims to learn effectively from multiple source domains. In this paper, we propose Efficient Domain Transition for Multi-source (EDTM), a novel and efficient framework designed to tackle two major challenges in existing MSDA approaches: (1) integrating knowledge across different source domains and (2) aligning label distributions between source and target domains. EDTM leverages an ensemble-based classifier expert mechanism to enhance the contribution of source domains that are more similar to the target domain. To further stabilize the learning process and improve performance, we incorporate imitation learning into the training of the target model. In addition, Maximum Classifier Discrepancy (MCD) is employed to align class-wise label distributions between the source and target domains. Experiments were conducted using Digits-Five, one of the most representative benchmark datasets for MSDA. The results show that EDTM consistently outperforms existing methods in terms of average classification accuracy. Notably, EDTM achieved significantly higher performance on target domains such as the Modified National Institute of Standards and Technology dataset with blended background images (MNIST-M) and the Street View House Numbers (SVHN) dataset, demonstrating enhanced generalization compared to baseline approaches. Furthermore, an ablation study analyzing the contribution of each loss component validated the effectiveness of the framework, highlighting the importance of each module in achieving optimal performance.
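As an illustration of the Maximum Classifier Discrepancy criterion mentioned in this abstract, the sketch below computes the discrepancy as the mean L1 distance between two classifiers' class-probability outputs. This is a minimal stdlib sketch of the general idea, not the authors' implementation; all values are illustrative.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classifier_discrepancy(logits_a, logits_b):
    """Mean absolute (L1) difference between two classifiers'
    class-probability outputs, averaged over a batch. In MCD training
    this quantity is maximized w.r.t. the classifiers and minimized
    w.r.t. the shared feature extractor."""
    total = 0.0
    for la, lb in zip(logits_a, logits_b):
        pa, pb = softmax(la), softmax(lb)
        total += sum(abs(x - y) for x, y in zip(pa, pb)) / len(pa)
    return total / len(logits_a)

# Two classifiers that agree produce zero discrepancy;
# disagreement yields a positive value.
agree = classifier_discrepancy([[2.0, 0.0]], [[2.0, 0.0]])
differ = classifier_discrepancy([[2.0, 0.0]], [[0.0, 2.0]])
```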
Benthic habitat mapping is an emerging discipline in the international marine field in recent years, providing an effective tool for marine spatial planning, marine ecological management, and decision-making applications. Seabed sediment classification is one of the main contents of seabed habitat mapping. In response to the impact of remote sensing imaging quality and the limitations of acoustic measurement range, where a single data source does not fully reflect the substrate type, we proposed a high-precision seabed sediment classification method that integrates data from multiple sources. Based on WorldView-2 multi-spectral remote sensing image data and multibeam bathymetry data, we constructed a random forest (RF) classifier with optimal feature selection. A seabed sediment classification experiment integrating optical remote sensing and acoustic remote sensing data was carried out in the shallow water area of Wuzhizhou Island, Hainan, South China. Different seabed sediment types, such as sand, seagrass, and coral reefs, were effectively identified, with an overall classification accuracy of 92%. Experimental results show that the RF classifier optimized by fusing multi-source remote sensing data for feature selection performed better than classifiers trained on simple combinations of data sources, improving the accuracy of seabed sediment classification. Therefore, the method proposed in this paper can be effectively applied to high-precision seabed sediment classification and habitat mapping around islands and reefs.
Improving the accuracy of the anthropogenic volatile organic compounds (VOCs) emission inventory is crucial for reducing atmospheric pollution and formulating air pollution control policy. In this study, an anthropogenic speciated VOCs emission inventory was established for Central China, represented by Henan Province, at a 3 km × 3 km spatial resolution based on the emission factor method. The 2019 VOCs emission in Henan Province was 1003.5 Gg; industrial process sources (33.7%) were the largest emission source, Zhengzhou (17.9%) was the city with the highest emission, and April and August were the months with the highest emissions. High VOCs emission regions were concentrated in downtown areas and industrial parks. Alkanes and aromatic hydrocarbons were the main VOCs contribution groups. The species composition, source contribution, and spatial distribution were verified and evaluated through the tracer ratio (TR) method, the Positive Matrix Factorization (PMF) model, and remote sensing inversion (RSI). Results show that the emission results by the emission inventory (EI) (15.7 Gg) and by the TR method (13.6 Gg) are similar, as are the source contributions by EI and PMF. The spatial distribution of HCHO primary emission based on RSI is basically consistent with that of HCHO emission based on EI, with an R value of 0.73. The verification results show that the VOCs emission inventory and speciated emission inventory established in this study are relatively reliable.
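The emission factor method referenced in this abstract multiplies an activity level by a per-unit emission factor. A minimal sketch of that calculation follows; the source, units, and numbers are purely illustrative, not taken from the inventory above.

```python
def emissions_gg(activity_kt, factor_g_per_kg):
    """Emission-factor method: emissions = activity level x emission factor.
    Illustrative units: activity in kt of product, factor in g VOC per kg
    of product, result in Gg (1 Gg = 1e9 g)."""
    grams = activity_kt * 1e6 * factor_g_per_kg  # kt -> kg, then kg * (g/kg)
    return grams / 1e9                           # g -> Gg

# Hypothetical source: 500 kt of output at 4 g VOC per kg of product.
e = emissions_gg(500.0, 4.0)
```

Real inventories sum such terms over thousands of source categories and then spatially allocate them to the grid.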
Kinship verification is a key biometric recognition task that determines biological relationships based on physical features. Traditional methods predominantly use facial recognition, leveraging established techniques and extensive datasets. However, recent research has highlighted ear recognition as a promising alternative, offering advantages in robustness against variations in facial expressions, aging, and occlusions. Despite its potential, a significant challenge in ear-based kinship verification is the lack of large-scale datasets necessary for training deep learning models effectively. To address this challenge, we introduce the EarKinshipVN dataset, a novel and extensive collection of ear images designed specifically for kinship verification. This dataset consists of 4876 high-resolution color images from 157 multiracial families across different regions, forming 73,220 kinship pairs. EarKinshipVN, a diverse and large-scale dataset, advances kinship verification research using ear features. Furthermore, we propose the Mixer Attention Inception (MAI) model, an improved architecture that enhances feature extraction and classification accuracy. The MAI model fuses Inception-v4 and MLP-Mixer, integrating four attention mechanisms to enhance spatial and channel-wise feature representation. Experimental results demonstrate that MAI significantly outperforms traditional backbone architectures. It achieves an accuracy of 98.71%, surpassing Vision Transformer models while reducing computational complexity by up to 95% in parameter usage. These findings suggest that ear-based kinship verification, combined with an optimized deep learning model and a comprehensive dataset, holds significant promise for biometric applications.
Metal Additive Manufacturing (MAM) technology has become an important means of rapid-prototyping precision manufacturing of special high-dynamic heterogeneous complex parts. In response to micromechanical defects such as porosity, significant deformation, surface cracks, and the challenging control of surface morphology encountered during the selective laser melting (SLM) additive manufacturing (AM) process of specialized Micro-Electromechanical System (MEMS) components, multi-parameter optimization and simulation-based control of the micro powder melt pool and macro-scale mechanical properties of specialized components are conducted. The optimal parameters obtained through high-precision preparation and machining of components and static/high-dynamic verification are: a laser power of 110 W, a laser speed of 600 mm/s, a laser diameter of 75 μm, and a scanning spacing of 50 μm. The density of components fabricated under these parameters can reach 99.15%, the surface hardness can reach 51.9 HRA, the yield strength can reach 550 MPa, the maximum machining error of the components is 4.73%, and the average surface roughness is 0.45 μm. Through dynamic hammering and high-dynamic firing verification, the SLM components meet the requirements for overload resistance. The results prove that MAM technology can provide a new means for processing MEMS components applied in high-dynamic environments, and the parameters obtained can provide a design basis for the additive preparation of MEMS components.
Accurate estimation of understory terrain has significant scientific importance for maintaining ecosystem balance and biodiversity conservation. Addressing the inadequate representation of spatial heterogeneity when traditional forest topographic inversion methods treat the entire forest as the inversion unit, this study proposes a differentiated modeling approach for forest types based on refined land cover classification. Taking Puerto Rico and Maryland as study areas, a multi-dimensional feature system is constructed by integrating multi-source remote sensing data: ICESat-2 spaceborne LiDAR is used to obtain benchmark values for understory terrain, topographic factors such as slope and aspect are extracted from SRTM data, and vegetation cover characteristics are analyzed using Landsat-8 multispectral imagery. This study incorporates forest type as a classification modeling condition and applies the random forest algorithm to build differentiated topographic inversion models. Experimental results indicate that, compared to traditional whole-area modeling methods (RMSE = 5.06 m), forest type-based classification modeling significantly improves the accuracy of understory terrain estimation (RMSE = 2.94 m), validating the effectiveness of spatial heterogeneity modeling. Further sensitivity analysis reveals that canopy structure parameters (with RMSE variation reaching 4.11 m) exert a stronger regulatory effect on estimation accuracy than forest cover, providing important theoretical support for optimizing remote sensing models of forest topography.
Edit distance is a measure of the difference between two strings, usually represented as the minimum number of editing operations required to transform one string into another. The edit distance algorithm involves complex dependencies and constraints, making state management and verification work tedious. This paper proposes a derivation and verification method that avoids directly handling dependencies and constraints by proving the equivalence between the edit distance algorithm and an existing functional model. First, the derivation process of the edit distance algorithm mainly includes: 1) describing the problem specification; 2) inductively deducing recursive relations; 3) formally constructing loop invariants using the optimization techniques (memoization and the optimal decision table) and properties (optimal substructure and overlapping subproblems) of the edit distance algorithm; 4) generating Minimalistic Imperative Programming Language (IMP) code based on the recursive relations. Second, the problem specification, loop invariants, and generated IMP code are input into the Verification Condition Generator (VCG), which automatically generates five verification conditions, and the correctness of the edit distance algorithm is then verified in the Isabelle/HOL theorem prover. The method uses formal techniques and a theorem prover to complete the derivation and verification of the edit distance algorithm, and it can be applied to linear and nonlinear dynamic programming problems.
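The recursive relation and memoization this abstract refers to are the standard Levenshtein recurrence. As a plain executable reference (the paper itself derives and verifies IMP code in Isabelle/HOL, not Python), the recurrence can be sketched as:

```python
from functools import lru_cache

def edit_distance(a: str, b: str) -> int:
    """Levenshtein edit distance via the standard recursive relation,
    memoized so each subproblem (i, j) is solved once -- exploiting the
    optimal substructure and overlapping subproblems properties."""
    @lru_cache(maxsize=None)
    def d(i: int, j: int) -> int:
        if i == 0:
            return j          # insert the remaining j characters
        if j == 0:
            return i          # delete the remaining i characters
        cost = 0 if a[i - 1] == b[j - 1] else 1
        return min(d(i - 1, j) + 1,         # deletion
                   d(i, j - 1) + 1,         # insertion
                   d(i - 1, j - 1) + cost)  # substitution (or match)
    return d(len(a), len(b))
```

The memo table here plays the role of the paper's optimal decision table; the loop-invariant formulation replaces this recursion with an iterative fill of the same table.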
To elucidate the fracturing mechanism of deep hard rock under complex disturbance environments, this study investigates the dynamic failure behavior of pre-damaged granite subjected to multi-source dynamic disturbances. Blasting vibration monitoring was conducted in a deep-buried drill-and-blast tunnel to characterize in-situ dynamic loading conditions. Subsequently, true triaxial compression tests incorporating multi-source disturbances were performed using a self-developed wide-low-frequency true triaxial system to simulate disturbance accumulation and damage evolution in granite. The results demonstrate that combined dynamic disturbances and unloading damage significantly accelerate strength degradation and trigger shear-slip failure along preferentially oriented blast-induced fractures, with strength reductions of up to 16.7%. Layered failure was observed on the free surface of pre-damaged granite under biaxial loading, indicating a disturbance-induced fracture localization mechanism. Time-stress-fracture-energy coupling fields were constructed to reveal the spatiotemporal characteristics of fracture evolution. Critical precursor frequency bands (105-150, 185-225, and 300-325 kHz) were identified, which serve as diagnostic signatures of impending failure. A dynamic instability mechanism driven by multi-source disturbance superposition and pre-damage evolution was established. Furthermore, a grouting-based wave-absorption control strategy was proposed to mitigate deep dynamic disasters by attenuating disturbance amplitude and reducing excitation frequency.
The exponential growth of the Internet of Things (IoT) has revolutionized various domains such as healthcare, smart cities, and agriculture, generating vast volumes of data that require secure processing and storage in cloud environments. However, reliance on cloud infrastructure raises critical security challenges, particularly regarding data integrity. While existing cryptographic methods provide robust integrity verification, they impose significant computational and energy overheads on resource-constrained IoT devices, limiting their applicability in large-scale, real-time scenarios. To address these challenges, we propose the Cognitive-Based Integrity Verification Model (C-BIVM), which leverages Belief-Desire-Intention (BDI) cognitive intelligence and algebraic signatures to enable lightweight, efficient, and scalable data integrity verification. The model incorporates batch auditing, reducing resource consumption in large-scale IoT environments by approximately 35%, while achieving an accuracy of over 99.2% in detecting data corruption. C-BIVM dynamically adapts integrity checks based on real-time conditions, optimizing resource utilization by reducing redundant operations by more than 30%. Furthermore, blind verification techniques safeguard sensitive IoT data, ensuring privacy compliance by preventing unauthorized access during integrity checks. Extensive experimental evaluations demonstrate that C-BIVM reduces computation time for integrity checks by up to 40% compared to traditional bilinear pairing-based methods, making it particularly suitable for IoT-driven applications in smart cities, healthcare, and beyond. These results underscore the effectiveness of C-BIVM in delivering a secure, scalable, and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
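The algebraic signatures this abstract relies on are lightweight checksums of the form sum(d_i · g^i) mod p. The sketch below illustrates the general construction and its single-byte corruption-detection guarantee; it is not the paper's exact scheme, and P and G are chosen only for illustration.

```python
P = 257  # prime slightly larger than any byte value
G = 3    # base for the powers; any 1 < G < P works for this sketch

def algebraic_signature(data: bytes) -> int:
    """Algebraic signature of a block: sum(d_i * G**i) mod P.
    Corrupting one byte changes the sum by delta * G**i mod P, which is
    nonzero because P is prime, G**i is never 0 mod P, and 0 < |delta| < P,
    so any single-byte corruption is detected."""
    sig, power = 0, 1
    for byte in data:
        sig = (sig + byte * power) % P
        power = (power * G) % P
    return sig

original = b"sensor-reading-42"
tampered = b"sensor-reading-43"  # one byte changed
```

Compared with bilinear-pairing-based verification, such signatures need only modular integer arithmetic, which is what makes batch auditing cheap on constrained devices.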
Combining the characteristics of the course “Comprehensive Training of E-Commerce Live Streaming,” this paper embeds the CDIO (Conceive-Design-Implement-Operate) method into the live streaming training process, carries out a virtual-scene “e-commerce live streaming” course design and project-based teaching reform that integrates teaching training with learning effects, and establishes a set of cross-professional student live streaming training procedures guided by the CDIO engineering method. The training results show that the CDIO practical teaching model supported by data feedback plays an important role in improving students’ learning effects, and also provides new experience for integrating engineering thinking into the construction of the new liberal arts.
The SiO_(2) inverse opal photonic crystals (PC) with a three-dimensional macroporous structure were fabricated by the sacrificial template method, followed by infiltration of a pyrene derivative, 1-(pyren-8-yl)but-3-en-1-amine (PEA), to obtain a formaldehyde (FA)-sensitive and fluorescence-enhanced sensing film. Utilizing the specific aza-Cope rearrangement reaction between the allylamine of PEA and FA to generate a strongly fluorescent product emitting at approximately 480 nm, we chose a PC whose blue stopband edge overlapped with the fluorescence emission wavelength. By virtue of the fluorescence enhancement derived from the slow photon effect of the PC, FA was detected highly selectively and sensitively. The limit of detection (LoD) was calculated to be 1.38 nmol/L. Furthermore, fast detection of FA (within 1 min) is realized thanks to the interconnected three-dimensional macroporous structure of the inverse opal PC and its high specific surface area. The prepared sensing film can be used for the detection of FA in air, aquatic products, and living cells. The indoor-air FA content very close to the result from an FA detector, the recovery rate of 101.5% for detecting FA in aquatic products, and fast fluorescence imaging within 2 min for living cells demonstrate the reliability and accuracy of our method in practical applications.
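A limit of detection like the 1.38 nmol/L reported above is commonly estimated with the 3-sigma criterion, LoD = 3σ(blank)/slope. The sketch below shows that calculation with made-up blank readings and a made-up calibration slope; the abstract does not state which LoD formula the authors used, so this is only the conventional estimate.

```python
import statistics

def limit_of_detection(blank_signals, slope):
    """Common 3-sigma limit-of-detection estimate:
    LoD = 3 * stdev(blank measurements) / calibration slope.
    Units of the result are the concentration units of the slope's
    denominator (e.g. nmol/L)."""
    sigma = statistics.stdev(blank_signals)  # sample standard deviation
    return 3.0 * sigma / slope

# Hypothetical blank fluorescence readings and calibration slope
# (signal units per nmol/L).
blanks = [100.2, 100.5, 99.8, 100.1, 100.4]
lod = limit_of_detection(blanks, slope=0.6)
```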
Due to the development of cloud computing and machine learning, users can upload their data to the cloud for machine learning model training. However, dishonest clouds may infer user data, resulting in user data leakage. Previous schemes have achieved secure outsourced computing, but they suffer from low computational accuracy, difficulty in handling heterogeneously distributed data from multiple sources, and high computational cost, which result in an extremely poor user experience and expensive cloud computing costs. To address the above problems, we propose a multi-precision, multi-sourced, and multi-key outsourcing neural network training scheme. Firstly, we design a multi-precision functional encryption computation based on Euclidean division. Second, we design an outsourcing model training algorithm based on multi-precision functional encryption with multi-sourced heterogeneity. Finally, we conduct experiments on three datasets. The results indicate that our framework achieves an accuracy improvement of 6% to 30%. Additionally, it offers a memory space optimization of 1.0×2^(24) times compared to the previous best approach.
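The Euclidean-division idea behind multi-precision computation is radix decomposition: repeated quotient/remainder splits break a value into limbs small enough for a scheme's native precision, and the limbs reassemble losslessly. The sketch below shows this on plain integers; it is only an illustration of the decomposition, not the paper's encrypted construction, and the limb base is an assumption.

```python
BASE = 2 ** 16  # limb size; illustrative, not the paper's parameter

def encode(x: int, n_limbs: int = 3):
    """Decompose a non-negative integer into limbs by repeated Euclidean
    division (x = q * BASE + r at each step), least-significant limb first."""
    limbs = []
    for _ in range(n_limbs):
        x, r = divmod(x, BASE)
        limbs.append(r)
    return limbs

def decode(limbs):
    """Reassemble the integer from its limbs (Horner evaluation)."""
    value = 0
    for limb in reversed(limbs):
        value = value * BASE + limb
    return value
```

In an encrypted setting, each limb would be processed under the scheme's native precision and the results recombined the same way.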
Accurate monitoring of track irregularities is very helpful for improving vehicle operation quality and formulating appropriate track maintenance strategies. Existing methods rely on complex signal processing algorithms and lack multi-source data analysis. Driven by multi-source measurement data, including the axle box, bogie frame, and carbody accelerations, this paper proposes a track irregularities monitoring network (TIMNet) based on deep learning methods. TIMNet uses the feature extraction capability of convolutional neural networks and the sequence mapping capability of the long short-term memory model to explore the mapping relationship between vehicle accelerations and track irregularities. The particle swarm optimization algorithm is used to optimize the network parameters, so that both vertical and lateral track irregularities can be accurately identified in the time and spatial domains. The effectiveness and superiority of the proposed TIMNet are analyzed under different simulation conditions using a vehicle dynamics model. Field tests are conducted to prove the availability of the proposed TIMNet in quantitatively monitoring vertical and lateral track irregularities. Furthermore, comparative tests show that TIMNet achieves a better fit and timeliness in monitoring track irregularities (vertical R2 of 0.91, lateral R2 of 0.84, and a time cost of 10 ms) than other classical regression models. The tests also prove that TIMNet has better anti-interference ability than other regression models.
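The particle swarm optimization used above to tune TIMNet's parameters follows a simple update rule: each particle is pulled toward its personal best and the swarm's global best. A minimal stdlib sketch on a toy objective follows; the hyperparameters are textbook defaults, not the paper's values.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization. Velocity update:
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: the sphere function, global minimum 0 at the origin.
# In the paper the objective would instead be the network's validation loss.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```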
In the heterogeneous power Internet of Things (IoT) environment, data signals are acquired to support different business systems in realizing advanced intelligent applications, and these signals are massive, multi-source, and heterogeneous. Reliable perception of information and efficient transmission of energy in multi-source heterogeneous environments are crucial issues. Compressive sensing (CS), as an effective method of signal compression and transmission, can accurately recover the original signal from only a very small number of samples. In this paper, we study a new method for multi-source heterogeneous data signal reconstruction in the power IoT based on compressive sensing technology. Going beyond the traditional compressive sensing approach of directly recovering multi-source heterogeneous signals, we make full use of the interference subspace information to design the measurement matrix, which directly and effectively eliminates the interference while taking the measurements. The measurement matrix is optimized by minimizing its average cross-coherence, further improving the reconstruction performance of the new method. Finally, the effectiveness of the new method with different parameter settings under different multi-source heterogeneous data signal cases is verified using orthogonal matching pursuit (OMP) and sparsity adaptive matching pursuit (SAMP), considering practical environments both with and without prior knowledge of signal sparsity.
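OMP, the first recovery algorithm named above, greedily picks the dictionary atom most correlated with the residual and subtracts its contribution. The stdlib sketch below is specialized to an orthonormal dictionary, where the orthogonal least-squares step collapses to a single inner product; general OMP re-solves a least-squares problem over the whole support at each step. All data are illustrative.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def omp_orthonormal(y, atoms, k):
    """Orthogonal matching pursuit for an orthonormal dictionary:
    greedily select the atom most correlated with the residual,
    record its coefficient, and subtract its contribution.
    k is the assumed sparsity level."""
    residual = list(y)
    support, coeffs = [], {}
    for _ in range(k):
        j = max(range(len(atoms)),
                key=lambda i: abs(dot(atoms[i], residual)))
        c = dot(atoms[j], residual)
        support.append(j)
        coeffs[j] = c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return support, coeffs

# Orthonormal "dictionary" = standard basis of R^4; the measured signal
# is 2-sparse: 3*e1 - 2*e3.
basis = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
y = [3.0, 0.0, -2.0, 0.0]
support, coeffs = omp_orthonormal(y, basis, k=2)
```

SAMP differs mainly in that it grows the support adaptively instead of assuming the sparsity k in advance.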
This paper discusses in depth the causes of gear howling noise, the identification and analysis of multi-source excitation, the transmission paths of dynamic noise, simulation and experimental research, case analysis, and optimization effects, aiming to provide guidance and reference for relevant researchers.
This paper presents the design and ground verification of vision-based relative navigation systems for microsatellites, offering a comprehensive hardware design solution and a robust experimental verification methodology for the practical implementation of vision-based navigation technology on the microsatellite platform. Firstly, a low-power, lightweight, high-performance vision-based relative navigation optical sensor is designed. Subsequently, a ground verification system is designed for hardware-in-the-loop testing of vision-based relative navigation systems. Finally, the designed vision-based relative navigation optical sensor and the proposed angles-only navigation algorithms are tested on the ground verification system. The results verify that the optical simulator, after geometrical calibration, can meet the requirements of hardware-in-the-loop testing of vision-based relative navigation systems. Based on the experimental results, the relative position accuracy of the angles-only navigation filter at the terminal time is increased by 25.5%, and the relative speed accuracy is increased by 31.3%, compared with those obtained with the optical simulator before geometrical calibration.
With the acceleration of the intelligent transformation of energy systems, the monitoring of equipment operation status and the optimization of production processes in thermal power plants face the challenge of multi-source heterogeneous data integration. In view of the heterogeneous characteristics of physical sensor data, including temperature, vibration, and pressure generated by boilers, steam turbines, and other key equipment, as well as real-time working condition data from the SCADA system, this paper proposes a multi-source heterogeneous data fusion and analysis platform for thermal power plants based on edge computing and deep learning. By constructing a multi-level fusion architecture, the platform adopts a dynamic weight allocation strategy and a 5D digital twin model to realize the collaborative analysis of physical sensor data, simulation calculation results, and expert knowledge. The data fusion module combines the Kalman filter, wavelet transform, and Bayesian estimation methods to solve the problems of time series alignment and dimensional differences in the data. Simulation results show that the data fusion accuracy can be improved to more than 98%, and the calculation delay can be controlled within 500 ms. The data analysis module integrates a Dymola simulation model and the AERMOD pollutant diffusion model, supporting the cascade analysis of boiler combustion efficiency prediction and flue gas emission monitoring; the system response time is less than 2 seconds, and the data consistency verification accuracy reaches 99.5%.
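The Kalman-filter fusion mentioned above can be illustrated with its scalar update step, which blends a prior estimate (e.g. from the simulation model) with a sensor reading, weighting each by the inverse of its variance. This is a minimal textbook sketch, not the platform's actual fusion pipeline; the temperature numbers are hypothetical.

```python
def kalman_fuse(estimate, est_var, measurement, meas_var):
    """One scalar Kalman update: fuse a prior estimate with a new
    measurement. The gain weights the measurement by how much more
    trustworthy (lower-variance) it is than the prior."""
    gain = est_var / (est_var + meas_var)
    fused = estimate + gain * (measurement - estimate)
    fused_var = (1.0 - gain) * est_var  # fused variance never exceeds either input
    return fused, fused_var

# Hypothetical boiler temperature: simulation prior 500.0 (variance 4.0)
# fused with a sensor reading of 504.0 (variance 4.0).
fused, fused_var = kalman_fuse(500.0, 4.0, 504.0, 4.0)
```

With equal variances the result is the midpoint, and the fused variance is halved, which is why stacking independent sources improves accuracy.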
Multi-source data fusion provides the high-precision spatial situational awareness essential for analyzing granular urban social activities. This study used Shanghai’s catering industry as a case study, leveraging electronic reviews and consumer data sourced from third-party restaurant platforms collected in 2021. By performing weighted processing on two-dimensional point-of-interest (POI) data, clustering hotspots of high-dimensional restaurant data were identified. A hierarchical network of restaurant hotspots was constructed following the Central Place Theory (CPT) framework, while the Geo-Informatic Tupu method was employed to resolve the challenges posed by network deformation in multi-scale processes. These findings suggest the necessity of enhancing the spatial balance of Shanghai’s urban centers by moderately increasing the number and service capacity of suburban centers at the urban periphery. Such measures would contribute to a more optimized urban structure and facilitate the outward dispersion of comfort-oriented facilities such as the restaurant industry. At a finer spatial scale, the distribution of restaurant hotspots demonstrates a polycentric and symmetric spatial pattern, with a developmental trend radiating outward along the city’s ring roads. This trend can be attributed to the efforts of restaurants to establish connections with other urban functional spaces, leading to the reconfiguration of urban spaces, expansion of restaurant-dedicated land use, and the reorganization of associated commercial activities. The results validate the existence of a polycentric urban structure in Shanghai but also highlight the instability of the restaurant hotspot network during cross-scale transitions.
Verification and validation (V&V) is a helpful tool for evaluating simulation errors, but its application in unsteady cavitating flow remains a challenging issue due to the difficulty in meeting the requirement of an asymptotic range. Hence, a new V&V approach for large eddy simulation (LES) is proposed. This approach offers a viable solution for the error estimation of simulation data that cannot satisfy the asymptotic range. The simulation errors of cavitating flow around a projectile near the free surface are assessed using the new V&V method. The evident error values are primarily dispersed around the cavity region and the free surface. The increasingly intense cavitating flow increases the error magnitudes. In addition, the modeling error magnitudes of the Dynamic Smagorinsky-Lilly model are substantially smaller than those of the Smagorinsky-Lilly model. The present V&V method can capture the decrease in the modeling errors due to model enhancements, further exhibiting its applicability in cavitating flow simulations. Moreover, the monitoring points where the simulation data are beyond the asymptotic range are primarily dispersed near the cavity region, and the number of such points grows as the cavitating flow intensifies. The simulation outcomes also suggest that the re-entrant jet and shedding cavity collapse are the chief sources of vorticity motions, which remarkably affect the simulation accuracy. The results of this study provide a valuable reference for V&V research.
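The asymptotic-range requirement discussed above comes from classical grid-convergence verification: with three systematically refined solutions, the observed order of accuracy and a Richardson estimate of the converged value are only trustworthy inside that range. The sketch below shows the classical calculation on a manufactured second-order example; it illustrates what the paper's new method relaxes, not the new method itself.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three solutions on grids refined by
    a constant ratio r: p = ln((f3 - f2) / (f2 - f1)) / ln(r), where f1 is
    the finest-grid solution. Valid only in the asymptotic range."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Richardson estimate of the grid-converged value."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

# Manufactured example with exact value 1.0 and a pure h^2 error term,
# f(h) = 1 + h^2, sampled on grids h = 0.4, 0.2, 0.1 (r = 2).
f3, f2, f1 = 1.16, 1.04, 1.01
p = observed_order(f3, f2, f1, r=2.0)
exact = richardson_extrapolate(f2, f1, r=2.0, p=p)
```

When the three solutions are not in the asymptotic range, p comes out far from the scheme's formal order, which is precisely the failure mode motivating the new LES V&V approach.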
基金sponsored by the National Natural Science Foundation of China(Grant No.52178100).
文摘The spatial offset of bridges has a significant impact on the safety, comfort, and durability of high-speed railway (HSR) operations, so it is crucial to detect the spatial offset of operational HSR bridges rapidly and effectively. Drive-by monitoring of bridge uneven settlement demonstrates significant potential due to its practicality, cost-effectiveness, and efficiency. However, existing drive-by methods for detecting bridge offset have limitations such as reliance on a single data source, low detection accuracy, and the inability to identify lateral deformations of bridges. This paper proposes a novel drive-by inspection method for the spatial offset of HSR bridges based on multi-source data fusion from a comprehensive inspection train. Firstly, dung beetle optimizer-variational mode decomposition was employed to achieve adaptive decomposition of non-stationary dynamic signals and to explore the hidden temporal relationships in the data. Subsequently, a long short-term memory neural network was developed to achieve feature fusion of multi-source signals and accurate prediction of the spatial settlement of HSR bridges. A dataset of track irregularities and CRH380A high-speed train responses was generated using a 3D train-track-bridge interaction model, and the accuracy and effectiveness of the proposed hybrid deep learning model were numerically validated. Finally, the reliability of the proposed drive-by inspection method was further validated by analyzing actual measurement data obtained from a comprehensive inspection train. The research findings indicate that the proposed approach enables rapid and accurate detection of spatial offset in HSR bridges, ensuring their long-term operational safety.
基金supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00406320) and the Institute of Information & Communications Technology Planning & Evaluation (IITP)-Innovative Human Resource Development for Local Intellectualization Program Grant funded by the Korea government (MSIT) (IITP-2026-RS-2023-00259678).
文摘Domain adaptation aims to reduce the distribution gap between the training data (source domain) and the target data. This enables effective predictions even for domains not seen during training. However, most conventional domain adaptation methods assume a single source domain, making them less suitable for modern deep learning settings that rely on diverse and large-scale datasets. To address this limitation, recent research has focused on Multi-Source Domain Adaptation (MSDA), which aims to learn effectively from multiple source domains. In this paper, we propose Efficient Domain Transition for Multi-source (EDTM), a novel and efficient framework designed to tackle two major challenges in existing MSDA approaches: (1) integrating knowledge across different source domains and (2) aligning label distributions between source and target domains. EDTM leverages an ensemble-based classifier expert mechanism to enhance the contribution of source domains that are more similar to the target domain. To further stabilize the learning process and improve performance, we incorporate imitation learning into the training of the target model. In addition, Maximum Classifier Discrepancy (MCD) is employed to align class-wise label distributions between the source and target domains. Experiments were conducted using Digits-Five, one of the most representative benchmark datasets for MSDA. The results show that EDTM consistently outperforms existing methods in terms of average classification accuracy. Notably, EDTM achieved significantly higher performance on target domains such as the Modified National Institute of Standards and Technology with blended background images (MNIST-M) and Street View House Numbers (SVHN) datasets, demonstrating enhanced generalization compared to baseline approaches. Furthermore, an ablation study analyzing the contribution of each loss component validated the effectiveness of the framework, highlighting the importance of each module in achieving optimal performance.
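As a minimal numerical sketch of the MCD component only (not the EDTM training loop; the function names here are ours for illustration), the discrepancy between two classifier heads on the same target-domain batch can be measured as the mean absolute difference of their class probabilities:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mcd_discrepancy(logits_f1, logits_f2):
    """L1 discrepancy between the class-probability outputs of two
    classifiers on the same batch, the quantity MCD minimizes (for the
    feature extractor) and maximizes (for the classifiers)."""
    p1, p2 = softmax(logits_f1), softmax(logits_f2)
    return float(np.mean(np.abs(p1 - p2)))

# two heads that agree perfectly have zero discrepancy
z = np.array([[2.0, 0.5, -1.0]])
d_same = mcd_discrepancy(z, z)
```

In MSDA training, this scalar acts as a proxy for how far target samples lie from the source decision boundaries.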
基金Supported by the National Natural Science Foundation of China(Nos.42376185,41876111)the Shandong Provincial Natural Science Foundation(No.ZR2023MD073)。
文摘Benthic habitat mapping is an emerging discipline in the international marine field in recent years, providing an effective tool for marine spatial planning, marine ecological management, and decision-making applications. Seabed sediment classification is one of the main contents of seabed habitat mapping. In response to the impact of remote sensing imaging quality and the limitations of acoustic measurement range, where a single data source does not fully reflect the substrate type, we proposed a high-precision seabed sediment classification method that integrates data from multiple sources. Based on WorldView-2 multi-spectral remote sensing image data and multibeam bathymetry data, we constructed a random forest (RF) classifier with optimal feature selection. A seabed sediment classification experiment integrating optical remote sensing and acoustic remote sensing data was carried out in the shallow water area of Wuzhizhou Island, Hainan, South China. Different seabed sediment types, such as sand, seagrass, and coral reefs, were effectively identified, with an overall classification accuracy of 92%. Experimental results show that the RF classifier optimized by fusing multi-source remote sensing data for feature selection outperformed simple combinations of data sources and improved the accuracy of seabed sediment classification. Therefore, the method proposed in this paper can be effectively applied to high-precision seabed sediment classification and habitat mapping around islands and reefs.
基金supported by Zhengzhou PM_(2.5)and O_(3)Collaborative Control and Monitoring Project(No.20220347A)the 2020 National Supercomputing Zhengzhou Center Innovation Ecosystem Construction Technology Project(No.201400210700).
文摘Improving the accuracy of anthropogenic volatile organic compounds (VOCs) emission inventories is crucial for reducing atmospheric pollution and formulating air pollution control policy. In this study, an anthropogenic speciated VOCs emission inventory was established for Central China, represented by Henan Province, at a 3 km×3 km spatial resolution based on the emission factor method. The 2019 VOCs emission in Henan Province was 1003.5 Gg; industrial process sources (33.7%) were the largest emission source, Zhengzhou (17.9%) was the city with the highest emission, and April and August were the months with the highest emissions. High VOCs emission regions were concentrated in downtown areas and industrial parks. Alkanes and aromatic hydrocarbons were the main VOCs contribution groups. The species composition, source contribution and spatial distribution were verified and evaluated through the tracer ratio (TR) method, the Positive Matrix Factorization (PMF) model and remote sensing inversion (RSI). Results show that the emission results by the emission inventory (EI) (15.7 Gg) and by the TR method (13.6 Gg) are similar, as are the source contributions by EI and PMF. The spatial distribution of primary HCHO emission based on RSI is basically consistent with that of HCHO emission based on EI, with an R-value of 0.73. The verification results show that the VOCs emission inventory and speciated emission inventory established in this study are relatively reliable.
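The emission factor method that the inventory is built on reduces, per source category, to E = A × EF (activity level times emission factor). A toy sketch of that bookkeeping follows; the category names, activity levels and factors here are made up for illustration, not the paper's values:

```python
def inventory(activity, factors):
    """Emission-factor method: E_i = A_i * EF_i for each source
    category i (activity in arbitrary activity units, EF in Gg per
    activity unit -- hypothetical numbers)."""
    return {src: activity[src] * factors[src] for src in activity}

# hypothetical activity data and emission factors
act = {"industrial_process": 100.0, "solvent_use": 40.0}
ef = {"industrial_process": 3.38, "solvent_use": 2.0}
emissions = inventory(act, ef)
total = sum(emissions.values())   # total is approximately 418.0
```

A real inventory repeats this per species, per grid cell and per month, which is what yields the spatial and temporal profiles described above.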
文摘Kinship verification is a key biometric recognition task that determines biological relationships based on physical features.Traditional methods predominantly use facial recognition,leveraging established techniques and extensive datasets.However,recent research has highlighted ear recognition as a promising alternative,offering advantages in robustness against variations in facial expressions,aging,and occlusions.Despite its potential,a significant challenge in ear-based kinship verification is the lack of large-scale datasets necessary for training deep learning models effectively.To address this challenge,we introduce the EarKinshipVN dataset,a novel and extensive collection of ear images designed specifically for kinship verification.This dataset consists of 4876 high-resolution color images from 157 multiracial families across different regions,forming 73,220 kinship pairs.EarKinshipVN,a diverse and large-scale dataset,advances kinship verification research using ear features.Furthermore,we propose the Mixer Attention Inception(MAI)model,an improved architecture that enhances feature extraction and classification accuracy.The MAI model fuses Inceptionv4 and MLP Mixer,integrating four attention mechanisms to enhance spatial and channel-wise feature representation.Experimental results demonstrate that MAI significantly outperforms traditional backbone architectures.It achieves an accuracy of 98.71%,surpassing Vision Transformer models while reducing computational complexity by up to 95%in parameter usage.These findings suggest that ear-based kinship verification,combined with an optimized deep learning model and a comprehensive dataset,holds significant promise for biometric applications.
基金funded by the National Natural Science Foundation of China Youth Fund (Grant No. 62304022), Science and Technology on Electromechanical Dynamic Control Laboratory (China, Grant No. 6142601012304), and the 2022-2024 China Association for Science and Technology Innovation Integration Association Youth Talent Support Project (Grant No. 2022QNRC001).
文摘Metal Additive Manufacturing (MAM) technology has become an important means of rapid prototyping and precision manufacturing of special high-dynamic heterogeneous complex parts. In response to the micromechanical defects encountered during the selective laser melting (SLM) additive manufacturing (AM) process of specialized Micro Electromechanical System (MEMS) components, such as porosity, significant deformation, surface cracks, and challenging control of surface morphology, multi-parameter optimization and simulation of the micro powder melt pool and macro-scale mechanical property control of specialized components are conducted. The optimal parameters obtained through high-precision preparation and machining of components and static/high-dynamic verification are: laser power of 110 W, laser speed of 600 mm/s, laser diameter of 75 μm, and scanning spacing of 50 μm. The density of components produced under this parameter set can reach 99.15%, the surface hardness can reach 51.9 HRA, the yield strength can reach 550 MPa, the maximum machining error of the components is 4.73%, and the average surface roughness is 0.45 μm. Through dynamic hammering and high-dynamic firing verification, the SLM components meet the requirements for overload resistance. The results prove that MAM technology can provide a new means for processing MEMS components applied in high-dynamic environments, and the parameters obtained can provide a design basis for the additive preparation of MEMS components.
基金Supported by the National Natural Science Foundation of China (42401488, 42071351), the National Key Research and Development Program of China (2020YFA0608501, 2017YFB0504204), the Liaoning Revitalization Talents Program (XLYC1802027), the Talent Recruited Program of the Chinese Academy of Science (Y938091), the Project Supported Discipline Innovation Team of the Liaoning Technical University (LNTU20TD-23), the Liaoning Province Doctoral Research Initiation Fund Program (2023-BS-202), and the Basic Research Projects of Liaoning Department of Education (JYTQN2023202).
文摘Accurate estimation of understory terrain has significant scientific importance for maintaining ecosystem balance and biodiversity conservation. Addressing the inadequate representation of spatial heterogeneity when traditional forest topographic inversion methods treat the entire forest as the inversion unit, this study proposes a differentiated modeling approach for forest types based on refined land cover classification. Taking Puerto Rico and Maryland as study areas, a multi-dimensional feature system is constructed by integrating multi-source remote sensing data: ICESat-2 spaceborne LiDAR is used to obtain benchmark values for understory terrain, topographic factors such as slope and aspect are extracted based on SRTM data, and vegetation cover characteristics are analyzed using Landsat-8 multispectral imagery. This study incorporates forest type as a classification modeling condition and applies the random forest algorithm to build differentiated topographic inversion models. Experimental results indicate that, compared to traditional whole-area modeling methods (RMSE=5.06 m), forest type-based classification modeling significantly improves the accuracy of understory terrain estimation (RMSE=2.94 m), validating the effectiveness of spatial heterogeneity modeling. Further sensitivity analysis reveals that canopy structure parameters (with RMSE variation reaching 4.11 m) exert a stronger regulatory effect on estimation accuracy than forest cover, providing important theoretical support for optimizing remote sensing models of forest topography.
基金Supported by the National Natural Science Foundation of China(62462036,62462037)Key Project of Jiangxi Provincial Natural Science Foundation(20242BAB26017)Academic and Major Disciplines in Jiangxi Province Technical Leader Training Project(20232BCJ22013)。
文摘Edit distance is an algorithm to measure the difference between two strings, usually represented as the minimum number of editing operations required to transform one string into another. The edit distance algorithm involves complex dependencies and constraints, making state management and verification work tedious. This paper proposes a derivation and verification method that avoids directly handling dependencies and constraints by proving the equivalence between the edit distance algorithm and an existing functional model. First, the derivation process of the edit distance algorithm mainly includes: 1) describing the problem specification, 2) inductively deducing recursive relations, 3) formally constructing loop invariants using the optimization techniques (memoization and the optimal decision table) and properties (the optimal substructure property and the overlapping subproblems property) of the edit distance algorithm, and 4) generating Minimalistic Imperative Programming Language (IMP) code based on the recursive relations. Second, the problem specification, loop invariants, and generated IMP code are input into the Verification Condition Generator (VCG), which automatically generates five verification conditions; the correctness of the edit distance algorithm is then verified in the Isabelle/HOL theorem prover. The method uses formal techniques and a theorem prover to complete the derivation and verification of the edit distance algorithm, and it can be applied to linear and nonlinear dynamic programming problems.
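The recursive relations derived in step 2) are the standard edit-distance recurrence. A bottom-up Python sketch of that recurrence (illustrative only; the paper itself generates IMP code and verifies it in Isabelle/HOL) looks like:

```python
def edit_distance(s, t):
    """Bottom-up dynamic program over the standard recurrence:
        d[i][j] = min(d[i-1][j] + 1,                       # delete s[i-1]
                      d[i][j-1] + 1,                       # insert t[j-1]
                      d[i-1][j-1] + (s[i-1] != t[j-1]))    # substitute/match
    where d[i][j] is the distance between prefixes s[:i] and t[:j]."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                  # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                  # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (s[i - 1] != t[j - 1]))
    return d[m][n]
```

The table `d` is exactly the memoized state whose loop invariants the paper's method constructs and discharges as verification conditions.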
基金supported by the National Key R&D Program of China(No.2023YFB2603602)the National Natural Science Foundation of China(Nos.52222810 and 52178383).
文摘To elucidate the fracturing mechanism of deep hard rock under complex disturbance environments,this study investigates the dynamic failure behavior of pre-damaged granite subjected to multi-source dynamic disturbances.Blasting vibration monitoring was conducted in a deep-buried drill-and-blast tunnel to characterize in-situ dynamic loading conditions.Subsequently,true triaxial compression tests incorporating multi-source disturbances were performed using a self-developed wide-low-frequency true triaxial system to simulate disturbance accumulation and damage evolution in granite.The results demonstrate that combined dynamic disturbances and unloading damage significantly accelerate strength degradation and trigger shear-slip failure along preferentially oriented blast-induced fractures,with strength reductions up to 16.7%.Layered failure was observed on the free surface of pre-damaged granite under biaxial loading,indicating a disturbance-induced fracture localization mechanism.Time-stress-fracture-energy coupling fields were constructed to reveal the spatiotemporal characteristics of fracture evolution.Critical precursor frequency bands(105-150,185-225,and 300-325 kHz)were identified,which serve as diagnostic signatures of impending failure.A dynamic instability mechanism driven by multi-source disturbance superposition and pre-damage evolution was established.Furthermore,a grouting-based wave-absorption control strategy was proposed to mitigate deep dynamic disasters by attenuating disturbance amplitude and reducing excitation frequency.
基金supported by King Saud University,Riyadh,Saudi Arabia,through Researchers Supporting Project number RSP2025R498.
文摘The exponential growth of the Internet of Things(IoT)has revolutionized various domains such as healthcare,smart cities,and agriculture,generating vast volumes of data that require secure processing and storage in cloud environments.However,reliance on cloud infrastructure raises critical security challenges,particularly regarding data integrity.While existing cryptographic methods provide robust integrity verification,they impose significant computational and energy overheads on resource-constrained IoT devices,limiting their applicability in large-scale,real-time scenarios.To address these challenges,we propose the Cognitive-Based Integrity Verification Model(C-BIVM),which leverages Belief-Desire-Intention(BDI)cognitive intelligence and algebraic signatures to enable lightweight,efficient,and scalable data integrity verification.The model incorporates batch auditing,reducing resource consumption in large-scale IoT environments by approximately 35%,while achieving an accuracy of over 99.2%in detecting data corruption.C-BIVM dynamically adapts integrity checks based on real-time conditions,optimizing resource utilization by minimizing redundant operations by more than 30%.Furthermore,blind verification techniques safeguard sensitive IoT data,ensuring privacy compliance by preventing unauthorized access during integrity checks.Extensive experimental evaluations demonstrate that C-BIVM reduces computation time for integrity checks by up to 40%compared to traditional bilinear pairing-based methods,making it particularly suitable for IoT-driven applications in smart cities,healthcare,and beyond.These results underscore the effectiveness of C-BIVM in delivering a secure,scalable,and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
基金phased research achievement of the Major Project of Philosophy and Social Sciences Research in Jiangsu Universities“Research on the Intervention Mechanism of Short Video Addiction”(2024SJZD145)。
文摘Combining the characteristics of the course“Comprehensive Training of E-Commerce Live Streaming,”this paper embeds the CDIO(Conceive-Design-Implement-Operate)method into the live streaming training process,carries out the virtual scene“e-commerce live streaming”course design and project-based teaching reform that integrates teaching training with learning effects,and establishes a set of cross-professional student live streaming training procedures guided by the CDIO engineering method.The training results show that the CDIO practical teaching model supported by data feedback plays an important role and significance in improving students’learning effects,and also provides some new experiences for integrating engineering thinking into the construction of new liberal arts.
基金supported by the National Natural Science Foundation of China(21663032 and 22061041)the Open Sharing Platform for Scientific and Technological Resources of Shaanxi Province(2021PT-004)the National Innovation and Entrepreneurship Training Program for College Students of China(S202110719044)。
文摘The SiO_(2) inverse opal photonic crystals(PC)with a three-dimensional macroporous structure were fabricated by the sacrificial template method,followed by infiltration of a pyrene derivative,1-(pyren-8-yl)but-3-en-1-amine(PEA),to achieve a formaldehyde(FA)-sensitive and fluorescence-enhanced sensing film.Utilizing the specific Aza-Cope rearrangement reaction of allylamine of PEA and FA to generate a strong fluorescent product emitted at approximately 480 nm,we chose a PC whose blue band edge of stopband overlapped with the fluorescence emission wavelength.In virtue of the fluorescence enhancement property derived from slow photon effect of PC,FA was detected highly selectively and sensitively.The limit of detection(LoD)was calculated to be 1.38 nmol/L.Furthermore,the fast detection of FA(within 1 min)is realized due to the interconnected three-dimensional macroporous structure of the inverse opal PC and its high specific surface area.The prepared sensing film can be used for the detection of FA in air,aquatic products and living cells.The very close FA content in indoor air to the result from FA detector,the recovery rate of 101.5%for detecting FA in aquatic products and fast fluorescence imaging in 2 min for living cells demonstrate the reliability and accuracy of our method in practical applications.
基金supported by Natural Science Foundation of China(Nos.62303126,62362008,author Z.Z,https://www.nsfc.gov.cn/,accessed on 20 December 2024)Major Scientific and Technological Special Project of Guizhou Province([2024]014)+2 种基金Guizhou Provincial Science and Technology Projects(No.ZK[2022]General149) ,author Z.Z,https://kjt.guizhou.gov.cn/,accessed on 20 December 2024)The Open Project of the Key Laboratory of Computing Power Network and Information Security,Ministry of Education under Grant 2023ZD037,author Z.Z,https://www.gzu.edu.cn/,accessed on 20 December 2024)Open Research Project of the State Key Laboratory of Industrial Control Technology,Zhejiang University,China(No.ICT2024B25),author Z.Z,https://www.gzu.edu.cn/,accessed on 20 December 2024).
文摘Due to the development of cloud computing and machine learning, users can upload their data to the cloud for machine learning model training. However, dishonest clouds may infer user data, resulting in user data leakage. Previous schemes have achieved secure outsourced computing, but they suffer from low computational accuracy, difficulty in handling heterogeneous distributions of data from multiple sources, and high computational cost, which result in an extremely poor user experience and expensive cloud computing. To address the above problems, we propose a multi-precision, multi-sourced, and multi-key outsourcing neural network training scheme. Firstly, we design a multi-precision functional encryption computation based on Euclidean division. Second, we design an outsourcing model training algorithm based on multi-precision functional encryption with multi-sourced heterogeneity. Finally, we conduct experiments on three datasets. The results indicate that our framework achieves an accuracy improvement of 6% to 30%. Additionally, it offers a memory space optimization of 1.0×2^(24) times compared to the previous best approach.
基金supported by the Sichuan Science and Technology Program(Nos.2024JDRC0100 and 2023YFQ0091)the National Natural Science Foundation of China(Nos.U21A20167 and 52475138)the Scientific Research Foundation of the State Key Laboratory of Rail Transit Vehicle System(No.2024RVL-T08).
文摘Accurate monitoring of track irregularities is very helpful for improving vehicle operation quality and formulating appropriate track maintenance strategies. Existing methods rely on complex signal processing algorithms and lack multi-source data analysis. Driven by multi-source measurement data, including the axle box, bogie frame and carbody accelerations, this paper proposes a track irregularities monitoring network (TIMNet) based on deep learning methods. TIMNet uses the feature extraction capability of convolutional neural networks and the sequence mapping capability of the long short-term memory model to explore the mapping relationship between vehicle accelerations and track irregularities. The particle swarm optimization algorithm is used to optimize the network parameters, so that both vertical and lateral track irregularities can be accurately identified in the time and spatial domains. The effectiveness and superiority of the proposed TIMNet are analyzed under different simulation conditions using a vehicle dynamics model. Field tests are conducted to prove the availability of the proposed TIMNet in quantitatively monitoring vertical and lateral track irregularities. Furthermore, comparative tests show that TIMNet has a better fitting degree and timeliness in monitoring track irregularities (vertical R^2 of 0.91, lateral R^2 of 0.84 and time cost of 10 ms) than other classical regression models. The tests also prove that TIMNet has a better anti-interference ability than other regression models.
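The particle swarm optimization step can be sketched generically. The snippet below is a minimal global-best PSO minimizing a toy objective, not the paper's TIMNet tuning; the inertia and acceleration constants (w, c1, c2) and swarm size are common textbook defaults, assumed here rather than taken from the paper:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer. Velocity update:
       v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v"""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                 # assumed textbook constants
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()      # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# toy objective: sphere function, global minimum 0 at the origin
best, val = pso_minimize(lambda p: float((p ** 2).sum()), dim=3)
```

In a hyperparameter-tuning setting like TIMNet's, `f` would instead be a validation-loss evaluation of the network under the candidate parameters.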
基金supported by National Natural Science Foundation of China(12174350)Science and Technology Project of State Grid Henan Electric Power Company(5217Q0240008).
文摘In the heterogeneous power internet of things (IoT) environment, data signals are acquired to support different business systems and realize advanced intelligent applications; these signals are massive, multi-source, and heterogeneous. Reliable perception of information and efficient transmission of energy in multi-source heterogeneous environments are crucial issues. Compressive sensing (CS), as an effective method of signal compression and transmission, can accurately recover the original signal from only a few samples. In this paper, we study a new method for multi-source heterogeneous data signal reconstruction in the power IoT based on compressive sensing technology. Rather than directly recovering multi-source heterogeneous signals with traditional compressive sensing, we make full use of the interference subspace information to design the measurement matrix, which directly and effectively eliminates the interference while taking measurements. The measurement matrix is optimized by minimizing its average cross-coherence, further improving the reconstruction performance of the new method. Finally, the effectiveness of the new method with different parameter settings under different multi-source heterogeneous data signal cases is verified using orthogonal matching pursuit (OMP) and sparsity adaptive matching pursuit (SAMP), considering actual environments both with and without prior information on signal sparsity.
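The OMP recovery stage referenced above is standard; a minimal textbook sketch follows (this shows greedy sparse recovery only, not the paper's interference-aware measurement matrix design):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column of the
    measurement matrix A most correlated with the residual, re-fit the
    selected support by least squares, and update the residual."""
    n = A.shape[1]
    residual, support = y.astype(float), []
    x = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# recover a 2-sparse signal from 6 random Gaussian measurements;
# recovery typically succeeds when the matrix columns are incoherent
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 12))
x_true = np.zeros(12)
x_true[[2, 7]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

Lowering the cross-coherence of `A`, as the paper does, directly improves the odds that the greedy atom selection above picks the true support.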
文摘This paper discusses in depth the causes of gear howling noise, the identification and analysis of multi-source excitation, the transmission paths of dynamic noise, simulation and experimental research, case analysis, and optimization effects, aiming to provide guidance and reference for relevant researchers.
基金supported in part by the Doctoral Initiation Fund of Nanchang Hangkong University(No.EA202403107)Jiangxi Province Early Career Youth Science and Technology Talent Training Project(No.CK202403509).
文摘This paper presents the design and ground verification of vision-based relative navigation systems for microsatellites, offering a comprehensive hardware design solution and a robust experimental verification methodology for the practical implementation of vision-based navigation technology on microsatellite platforms. Firstly, a low-power, lightweight, high-performance vision-based relative navigation optical sensor is designed. Subsequently, a ground verification system is designed for hardware-in-the-loop testing of vision-based relative navigation systems. Finally, the designed optical sensor and the proposed angles-only navigation algorithms are tested on the ground verification system. The results verify that the optical simulator, after geometric calibration, can meet the requirements of hardware-in-the-loop testing of vision-based relative navigation systems. Based on the experimental results, the relative position accuracy of the angles-only navigation filter at the terminal time is increased by 25.5%, and the relative speed accuracy is increased by 31.3%, compared with those of the optical simulator before geometric calibration.
文摘With the accelerating intelligent transformation of energy systems, the monitoring of equipment operation status and the optimization of production processes in thermal power plants face the challenge of multi-source heterogeneous data integration. In view of the heterogeneous characteristics of physical sensor data, including the temperature, vibration and pressure data generated by boilers, steam turbines and other key equipment, and the real-time working condition data of the SCADA system, this paper proposes a multi-source heterogeneous data fusion and analysis platform for thermal power plants based on edge computing and deep learning. By constructing a multi-level fusion architecture, the platform adopts a dynamic weight allocation strategy and a 5D digital twin model to realize the collaborative analysis of physical sensor data, simulation calculation results and expert knowledge. The data fusion module combines Kalman filtering, wavelet transform and Bayesian estimation to solve the problems of time series alignment and dimension differences in the data. Simulation results show that the data fusion accuracy can be improved to more than 98%, and the calculation delay can be controlled within 500 ms. The data analysis module integrates a Dymola simulation model and the AERMOD pollutant diffusion model, supporting cascade analysis of boiler combustion efficiency prediction and flue gas emission monitoring; the system response time is less than 2 s, and the data consistency verification accuracy reaches 99.5%.
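The Kalman step of such a fusion module can be sketched in its simplest scalar form. This is a generic illustration (the noise variances, random-walk state model and "temperature" data are assumptions of ours, not the platform's configuration):

```python
import numpy as np

def kalman_1d(zs, r, q, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model, smoothing a
    noisy sensor stream. r = measurement noise variance, q = process
    noise variance, (x0, p0) = initial state estimate and variance."""
    x, p, est = x0, p0, []
    for z in zs:
        p = p + q                  # predict: state held, uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # correct with the new measurement
        p = (1.0 - k) * p
        est.append(x)
    return np.array(est)

# hypothetical noisy readings around a true boiler temperature of 75.0
rng = np.random.default_rng(0)
zs = 75.0 + rng.normal(0.0, 2.0, 200)
est = kalman_1d(zs, r=4.0, q=0.01)
```

In a multi-sensor setting, the same predict/correct cycle runs per stream (vector-valued), and the fused weights play the role the platform's dynamic weight allocation strategy generalizes.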
基金Under the auspices of the Key Program of National Natural Science Foundation of China(No.42030409)。
文摘Multi-source data fusion provides high-precision spatial situational awareness essential for analyzing granular urban social activities.This study used Shanghai’s catering industry as a case study,leveraging electronic reviews and consumer data sourced from third-party restaurant platforms collected in 2021.By performing weighted processing on two-dimensional point-of-interest(POI)data,clustering hotspots of high-dimensional restaurant data were identified.A hierarchical network of restaurant hotspots was constructed following the Central Place Theory(CPT)framework,while the Geo-Informatic Tupu method was employed to resolve the challenges posed by network deformation in multi-scale processes.These findings suggest the necessity of enhancing the spatial balance of Shanghai’s urban centers by moderately increasing the number and service capacity of suburban centers at the urban periphery.Such measures would contribute to a more optimized urban structure and facilitate the outward dispersion of comfort-oriented facilities such as the restaurant industry.At a finer spatial scale,the distribution of restaurant hotspots demonstrates a polycentric and symmetric spatial pattern,with a developmental trend radiating outward along the city’s ring roads.This trend can be attributed to the efforts of restaurants to establish connections with other urban functional spaces,leading to the reconfiguration of urban spaces,expansion of restaurant-dedicated land use,and the reorganization of associated commercial activities.The results validate the existence of a polycentric urban structure in Shanghai but also highlight the instability of the restaurant hotspot network during cross-scale transitions.
基金Supported by the National Key R&D Program of China(2022YFB3303501)the National Natural Science Foundation of China(Project Nos.52176041 and 12102308)the Fundamental Research Funds for the Central Universities(Project Nos.2042023kf0208 and 2042023kf0159).
文摘Verification and validation(V&V)is a helpful tool for evaluating simulation errors,but its application in unsteady cavitating flow remains a challenging issue due to the difficulty in meeting the requirement of an asymptotic range.Hence,a new V&V approach for large eddy simulation(LES)is proposed.This approach offers a viable solution for the error estimation of simulation data that are unable to satisfy the asymptotic range.The simulation errors of cavitating flow around a projectile near the free surface are assessed using the new V&V method.The evident error values are primarily dispersed around the cavity region and free surface.The increasingly intense cavitating flow increases the error magnitudes.In addition,the modeling error magnitudes of the Dynamic Smagorinsky-Lilly model are substantially smaller than that of the Smagorinsky-Lilly model.The present V&V method can capture the decrease in the modeling errors due to model enhancements,further exhibiting its applicability in cavitating flow simulations.Moreover,the monitoring points where the simulation data are beyond the asymptotic range are primarily dispersed near the cavity region,and the number of such points grows as the cavitating flow intensifies.The simulation outcomes also suggest that the re-entrant jet and shedding cavity collapse are the chief sources of vorticity motions,which remarkably affect the simulation accuracy.The results of this study provide a valuable reference for V&V research.
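The asymptotic-range requirement that motivates the new approach comes from classical grid-convergence analysis. As background only (a textbook sketch with a manufactured example, not the paper's LES-specific estimator), the observed order of accuracy and Roache's grid convergence index from three systematically refined solutions are:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy p from solutions on three grids with
    constant refinement ratio r (f1 = finest, f3 = coarsest)."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def gci_fine(f1, f2, p, r, fs=1.25):
    """Fine-grid grid convergence index: a relative error band on the
    fine-grid solution, with factor of safety fs."""
    return fs * abs((f2 - f1) / f1) / (r ** p - 1)

# manufactured second-order sequence f(h) = 1 + 0.5*h^2, ratio r = 2
f1 = 1 + 0.5 * 0.25 ** 2
f2 = 1 + 0.5 * 0.5 ** 2
f3 = 1 + 0.5 * 1.0 ** 2
p = observed_order(f1, f2, f3, r=2)   # -> 2.0, the designed order
```

When p drifts far from the scheme's formal order, the data are outside the asymptotic range, which is exactly the situation at the monitoring points the paper identifies near the cavity region.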