Improving the accuracy of the anthropogenic volatile organic compound (VOCs) emission inventory is crucial for reducing atmospheric pollution and formulating air pollution control policy. In this study, an anthropogenic speciated VOCs emission inventory was established for Central China, represented by Henan Province, at a 3 km × 3 km spatial resolution based on the emission factor method. The 2019 VOCs emission in Henan Province was 1003.5 Gg; industrial process sources (33.7%) were the largest emission source, Zhengzhou (17.9%) was the city with the highest emissions, and April and August were the months with the highest emissions. High VOCs emission regions were concentrated in downtown areas and industrial parks. Alkanes and aromatic hydrocarbons were the main contributing VOCs groups. The species composition, source contribution, and spatial distribution were verified and evaluated through the tracer ratio method (TR), the Positive Matrix Factorization model (PMF), and remote sensing inversion (RSI). Results show that the emission estimated by the emission inventory (EI) (15.7 Gg) is close to that estimated by the TR method (13.6 Gg), and the source contributions given by the EI and PMF are similar. The spatial distribution of primary HCHO emissions based on RSI is basically consistent with that of HCHO emissions based on the EI, with an R value of 0.73. The verification results show that the VOCs emission inventory and speciated emission inventory established in this study are relatively reliable.
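A minimal sketch of the two calculations underlying such an inventory check: the emission-factor bookkeeping (emission = activity × emission factor, summed by source) and a Pearson correlation between two gridded fields, as used for the EI-versus-RSI comparison. NumPy is assumed; the activity data, emission factors, and grids below are illustrative placeholders, not values from the study.

```python
import numpy as np

# Emission-factor method: E = activity * EF, summed by source category.
# Activity data (t/yr) and emission factors (kg VOCs per t) are illustrative placeholders.
activity = {"industrial_process": 1.2e7, "solvent_use": 4.0e6, "on_road_vehicles": 8.0e6}
emission_factor = {"industrial_process": 28.0, "solvent_use": 35.0, "on_road_vehicles": 9.5}
emissions_gg = {s: activity[s] * emission_factor[s] / 1e6 for s in activity}  # kg -> Gg

# Pearson R between two gridded fields (e.g., EI-based vs. RSI-based HCHO),
# flattened over the 3 km x 3 km grid cells.
rng = np.random.default_rng(0)
ei_grid = rng.gamma(2.0, 1.0, size=(120, 160))
rsi_grid = 0.8 * ei_grid + rng.normal(0.0, 0.5, size=ei_grid.shape)
r = np.corrcoef(ei_grid.ravel(), rsi_grid.ravel())[0, 1]

print({k: round(v, 1) for k, v in emissions_gg.items()}, "R = %.2f" % r)
```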
The morphological distribution of the absorbent in composites is as important as the absorbent itself for the overall electromagnetic properties, but it is often ignored. Herein, a comprehensive approach including electromagnetic component regulation, a layered arrangement structure, and a gradient concentration distribution was used to optimize impedance matching and enhance electromagnetic loss. On the microscale, the incorporation of magnetic Ni nanoparticles into MXene nanosheets (Ni@MXene) provides suitable intrinsic permittivity and permeability. On the macroscale, the layered arrangement of Ni@MXene increases the effective interaction area with electromagnetic waves, inducing multiple reflection/scattering effects. On this basis, according to the analysis of the absorption, reflection, and transmission (A-R-T) power coefficients of layered composites, a gradient concentration distribution was constructed to realize impedance matching at the low-concentration surface layer, electromagnetic loss at the middle-concentration interlayer, and microwave reflection at the high-concentration bottom layer. Consequently, the layered gradient composite (LG5-10-15) achieves complete absorption coverage of the X-band at thicknesses of 2.00-2.20 mm, with an RL_min of −68.67 dB at 9.85 GHz at 2.05 mm, which is 199.0%, 12.6%, and 50.6% higher than the non-layered, layered, and layered descending-gradient composites, respectively. Therefore, this work confirms the importance of the layered gradient structure in improving absorption performance and broadens the design of high-performance microwave absorption materials.
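Reflection-loss figures such as RL_min are normally obtained from the measured complex permittivity and permeability through the single-layer, metal-backed transmission-line formula. The sketch below (NumPy assumed) evaluates that standard formula with illustrative, frequency-independent material parameters, not the measured Ni@MXene data.

```python
import numpy as np

c = 3e8  # speed of light, m/s

def reflection_loss_db(eps_r, mu_r, f_hz, d_m):
    """Single-layer, metal-backed reflection loss from transmission-line theory."""
    n = np.sqrt(mu_r * eps_r + 0j)
    z_in = np.sqrt(mu_r / eps_r + 0j) * np.tanh(1j * 2 * np.pi * f_hz * d_m * n / c)
    return 20 * np.log10(np.abs((z_in - 1) / (z_in + 1)))

# Illustrative complex parameters for an absorber layer (assumed, frequency independent).
f = np.linspace(8.2e9, 12.4e9, 200)          # X-band
rl = reflection_loss_db(7.5 - 2.6j, 1.1 - 0.15j, f, 2.05e-3)
print("RL_min = %.2f dB at %.2f GHz" % (rl.min(), f[rl.argmin()] / 1e9))
```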
Metal Additive Manufacturing (MAM) technology has become an important means of rapid, precise manufacturing of specialized high-dynamic, heterogeneous, complex parts. In response to micromechanical defects such as porosity, significant deformation, surface cracks, and difficult control of surface morphology encountered during the selective laser melting (SLM) additive manufacturing (AM) of specialized Micro-Electromechanical System (MEMS) components, multi-parameter optimization and simulation of the micro powder melt pool and macro-scale mechanical property control of specialized components are conducted. The optimal parameters obtained through high-precision preparation and machining of components and static/high-dynamic verification are: laser power of 110 W, laser speed of 600 mm/s, laser diameter of 75 μm, and scanning spacing of 50 μm. The density of components built under these parameters can reach 99.15%, the surface hardness can reach 51.9 HRA, the yield strength can reach 550 MPa, the maximum machining error of the components is 4.73%, and the average surface roughness is 0.45 μm. Dynamic hammering and high-dynamic firing verification show that the SLM components meet the requirements for overload resistance. The results prove that MAM technology can provide a new means for processing MEMS components applied in high-dynamic environments. The parameters obtained in the conclusion can provide a design basis for the additive preparation of MEMS components.
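SLM parameter sets like the one above are often compared through the volumetric energy density E = P/(v·h·t). The short calculation below uses the reported power, scan speed, and hatch spacing; the layer thickness is not reported, so the value used here is an assumption made only to make the arithmetic concrete.

```python
# Volumetric energy density commonly used to compare SLM parameter sets:
#   E = P / (v * h * t)  [J/mm^3]
P = 110.0   # laser power, W (from the abstract)
v = 600.0   # scan speed, mm/s (from the abstract)
h = 0.050   # hatch (scanning) spacing, mm (from the abstract)
t = 0.030   # layer thickness, mm (assumed; not reported)

E = P / (v * h * t)
print(f"Volumetric energy density: {E:.1f} J/mm^3")  # ~122 J/mm^3 with these inputs
```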
BACKGROUND At present, the conventional methods for diagnosing cerebral edema in clinical practice are computed tomography (CT) and magnetic resonance imaging (MRI), which can evaluate the location and degree of peripheral cerebral edema but cannot quantify it. When patients have symptoms of diffuse cerebral edema or high intracranial pressure, CT or MRI findings of cerebral edema often lag and cannot be dynamically monitored in real time. Intracranial pressure monitoring is the gold standard, but it is an invasive operation with high cost and complications. For clinical purposes, ideal cerebral edema monitoring should be non-invasive, real-time, bedside, and continuously dynamic. The disturbance coefficient (DC) was used in this study to dynamically monitor the occurrence, development, and evolution of cerebral edema in patients with cerebral hemorrhage in real time, with head CT or MRI reviewed to evaluate the development of the disease and guide further treatment, so as to improve the prognosis of patients with cerebral hemorrhage. AIM To offer a promising new approach for non-invasive adjuvant therapy in cerebral edema treatment. METHODS A total of 160 patients with hypertensive cerebral hemorrhage admitted to the Department of Neurosurgery, Second Affiliated Hospital of Xi'an Medical University, from September 2018 to September 2019 were recruited. The patients were randomly divided into a control group (n=80) and an experimental group (n=80). Patients in the control group received conventional empirical treatment, while those in the experimental group were treated with mannitol dehydration under the guidance of DC. Subsequently, we compared the two groups with regard to the total dosage of mannitol, the total course of treatment, the incidence of complications, and prognosis. RESULTS The mean daily consumption of mannitol, the total course of treatment, and the mean hospitalization days were 362.7±117.7 mL, 14.8±5.2 days, and 29.4±7.9 days in the control group and 283.1±93.6 mL, 11.8±4.2 days, and 23.9±8.3 days in the experimental group (P<0.05). In the control group, there were 20 patients with pulmonary infection (25%), 30 with electrolyte disturbance (37.5%), 20 with renal impairment (25%), and 16 with stress ulcer (20%). In the experimental group, pulmonary infection occurred in 18 patients (22.5%), electrolyte disturbance in 6 (7.5%), renal impairment in 2 (2.5%), and stress ulcer in 15 (18.8%) (P<0.05). According to the Glasgow coma scale score 6 months after discharge, the prognosis of the control group was good in 20 patients (25%), fair in 26 (32.5%), and poor in 34 (42.5%); the prognosis of the experimental group was good in 32 (40%), fair in 36 (45%), and poor in 12 (15%) (P<0.05). CONCLUSION Using DC for non-invasive dynamic monitoring of cerebral edema demonstrates considerable clinical potential. It reduces mannitol dosage, treatment duration, complication rates, and hospital stays, ultimately lowering hospitalization costs. Additionally, it improves overall patient prognosis, offering a promising new approach for non-invasive adjuvant therapy in cerebral edema treatment.
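The P<0.05 comparisons above are the usual two-group tests: continuous outcomes (e.g., mannitol volume) compared with a t-test and categorical outcomes (e.g., complication counts) with a chi-square test. A hedged sketch using SciPy on synthetic data drawn around the reported summary statistics, not the study's per-patient data, is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic daily mannitol volumes (mL) drawn around the reported group means/SDs.
control = rng.normal(362.7, 117.7, 80)
dc_guided = rng.normal(283.1, 93.6, 80)
t, p_t = stats.ttest_ind(control, dc_guided, equal_var=False)  # Welch's t-test

# Chi-square test on a 2x2 table, e.g. renal impairment: 20/80 vs. 2/80 patients.
table = np.array([[20, 60], [2, 78]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"Welch t-test p = {p_t:.4f}, chi-square p = {p_chi:.4f}")
```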
BACKGROUND Various stone factors can affect the net results of shock wave lithotripsy (SWL). Recently, a new factor called the variation coefficient of stone density (VCSD) has been considered to have an impact on stone-free rates. AIM To assess the role of VCSD in determining the success of SWL in urinary calculi. METHODS Chart review was used for the collection of data variables. The patients were subjected to SWL using an electromagnetic lithotripter. Mean stone density (MSD), stone heterogeneity index (SHI), and VCSD were calculated by generating regions of interest on computed tomography (CT) images. The role of these factors was determined by applying the relevant statistical tests for continuous and categorical variables, and a P value of <0.05 was considered statistically significant. RESULTS There were a total of 407 patients included in the analysis. The mean age of the subjects in this study was 38.89±14.61 years. In total, 165 of the 407 patients could not achieve stone-free status. The successful group had a significantly lower stone volume than the unsuccessful group (P<0.0001). Skin-to-stone distance was not dissimilar between the two groups (P=0.47). MSD was significantly lower in the successful group (P<0.0001). SHI and VCSD were both significantly higher in the successful group (P<0.0001). CONCLUSION VCSD, a useful CT-based parameter, can be utilized to gauge stone fragility and hence the prediction of SWL outcomes.
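These three CT parameters are simple statistics of the Hounsfield-unit values inside the stone region of interest: MSD is the mean, SHI is usually defined as the HU standard deviation, and VCSD as their ratio (a coefficient of variation). A small NumPy sketch under those common definitions, with a synthetic ROI, is:

```python
import numpy as np

def stone_density_metrics(hu_roi):
    """MSD (mean HU), SHI (HU standard deviation) and VCSD (= SHI / MSD) for a stone ROI."""
    hu = np.asarray(hu_roi, dtype=float)
    msd = hu.mean()
    shi = hu.std(ddof=1)
    return msd, shi, shi / msd

# Illustrative ROI values; in practice these come from CT regions of interest.
rng = np.random.default_rng(7)
roi = rng.normal(950.0, 210.0, 500)  # Hounsfield units
msd, shi, vcsd = stone_density_metrics(roi)
print(f"MSD = {msd:.0f} HU, SHI = {shi:.0f} HU, VCSD = {vcsd:.3f}")
```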
Joint roughness coefficient (JRC) is the most commonly used parameter for quantifying the surface roughness of rock discontinuities in practice. The system composed of multiple roughness statistical parameters used to measure JRC is a nonlinear system with substantial overlapping information. In this paper, a dataset of eight roughness statistical parameters covering 112 digital joints is established. Then, the principal component analysis method is introduced to extract the significant information, which solves the information-overlap problem of roughness characterization. Based on the two principal components of the extracted features, the white shark optimizer algorithm is introduced to optimize an extreme gradient boosting model, and a new machine learning (ML) prediction model is established. The prediction accuracy of the new model and of 17 other models is measured using statistical metrics. The results show that the predictions of the new model are more consistent with the real JRC values, with higher recognition accuracy and generalization ability.
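The modelling pipeline described here is essentially dimension reduction followed by a boosted-tree regressor. The sketch below (scikit-learn assumed) reproduces that two-step structure on placeholder data; scikit-learn's GradientBoostingRegressor with fixed hyperparameters stands in for the white-shark-optimized XGBoost model of the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Placeholder dataset: 112 joints x 8 roughness statistics, with a synthetic JRC target.
X = rng.normal(size=(112, 8))
jrc = 8.0 + 4.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.5, 112)
X_tr, X_te, y_tr, y_te = train_test_split(X, jrc, test_size=0.25, random_state=0)

# Step 1: compress the eight correlated statistics into two principal components.
pca = PCA(n_components=2).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# Step 2: gradient-boosted trees on the two components (stand-in for WSO-tuned XGBoost).
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(Z_tr, y_tr)
print("explained variance:", pca.explained_variance_ratio_.round(2),
      "test R2: %.3f" % r2_score(y_te, model.predict(Z_te)))
```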
Let f be a primitive holomorphic cusp form with even integral weight k ≥ 2 for the full modular group Γ = SL(2, Z), and let λ_{sym^j f}(n) be the n-th coefficient of the Dirichlet series of the j-th symmetric power L-function L(s, sym^j f) attached to f. In this paper, we study the mean value distribution over a specific sparse sequence of positive integers of the sum ∑_{a²+b²+c²+d²≤x, (a,b,c,d)∈Z⁴} λ^i_{sym^j f}(a²+b²+c²+d²), where j ≥ 2 is a given positive integer, i = 2, 3, 4, and α is sufficiently large. We utilize Python programming to design algorithms for the higher-power conditions, combining Perron's formula, the latest results on representations of natural integers as sums of squares, and the analytic properties and subconvexity and convexity bounds of automorphic L-functions, to ensure the accuracy and verifiability of the asymptotic formulas. The conclusion we obtain improves previous results and extends them to a more general setting.
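Since the abstract mentions Python experiments, a hedged sketch of the combinatorial side of the sum is given below: it enumerates all quadruples (a, b, c, d) with a²+b²+c²+d² ≤ x and accumulates the i-th power of a coefficient function. The true coefficients λ_{sym^j f}(n) depend on the cusp form f and are replaced here by a bounded placeholder, so only the bookkeeping, not the analytic estimate, is illustrated.

```python
import math
from collections import Counter

def r4_counts(x):
    """Number of representations of each n <= x as a^2+b^2+c^2+d^2 with (a,b,c,d) in Z^4,
    obtained by direct enumeration (signs and order counted, as in the sum)."""
    m = int(math.isqrt(x))
    counts = Counter()
    rng = range(-m, m + 1)
    for a in rng:
        for b in rng:
            s2 = a * a + b * b
            if s2 > x:
                continue
            for c in rng:
                s3 = s2 + c * c
                if s3 > x:
                    continue
                for d in rng:
                    n = s3 + d * d
                    if n <= x:
                        counts[n] += 1
    return counts

def lam(n):
    """Placeholder for lambda_{sym^j f}(n); the true coefficients require the cusp form f."""
    return math.cos(math.sqrt(n))  # illustrative bounded stand-in

x, i = 200, 2
total = sum(cnt * lam(n) ** i for n, cnt in r4_counts(x).items())
print(f"sum over a^2+b^2+c^2+d^2 <= {x} of lambda^{i}: {total:.3f}")
```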
The study of the morphometric parameters of the three most abundant species in the lower course of the Kouilou River (Chrysichthys auratus, Liza falcipinnis, and Pellonula vorax) was carried out. The standard length of Chrysichthys auratus varies between 43.57 and 210 mm, with an average of 96.70 ± 28.63 mm; the weight varies between 2.92 and 140.83 mg, with an average of 73.03 ± 21.62 mg. The condition coefficient is equal to 4.42 ± 1.52. Liza falcipinnis has a standard length that varies between 59.9 mm and 158.08 mm, with an average of 88.15 ± 29.74 mm; its weight varies between 4.77 and 76.21 mg, with an average of 18.61 ± 11.82 mg. The condition coefficient is equal to 2.47 ± 1.57. Pellonula vorax has a standard length that varies between 60.33 mm and 117.72 mm, with an average of 80.48 ± 17.75 mm; the weight varies between 3.61 and 25.17 mg, with an average of 9.03 ± 3.61 mg. The condition coefficient is equal to 2.17 ± 0.57. These three species show minor (negative) allometric growth.
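Condition coefficients and the allometry statement are usually derived from the length-weight relationship W = a·L^b (b < 3 indicating minor, i.e. negative, allometry) and from Fulton's condition factor K = 100·W/L³. The NumPy sketch below applies those common conventions to synthetic length-weight pairs; the exact formulas and units used in the study may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic length (cm) / weight (g) pairs with allometric exponent b < 3,
# standing in for the field measurements.
L_cm = rng.uniform(4.5, 21.0, 120)
W_g = 0.015 * L_cm ** 2.7 * np.exp(rng.normal(0, 0.08, L_cm.size))

# Length-weight relationship W = a * L^b, fitted on log-log axes.
b, log_a = np.polyfit(np.log(L_cm), np.log(W_g), 1)
# Fulton's condition factor K = 100 * W / L^3 (one common convention).
K = 100.0 * W_g / L_cm ** 3
print(f"b = {b:.2f} ({'minor' if b < 3 else 'major'} allometry), mean K = {K.mean():.2f}")
```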
The exponential growth of the Internet of Things (IoT) has revolutionized various domains such as healthcare, smart cities, and agriculture, generating vast volumes of data that require secure processing and storage in cloud environments. However, reliance on cloud infrastructure raises critical security challenges, particularly regarding data integrity. While existing cryptographic methods provide robust integrity verification, they impose significant computational and energy overheads on resource-constrained IoT devices, limiting their applicability in large-scale, real-time scenarios. To address these challenges, we propose the Cognitive-Based Integrity Verification Model (C-BIVM), which leverages Belief-Desire-Intention (BDI) cognitive intelligence and algebraic signatures to enable lightweight, efficient, and scalable data integrity verification. The model incorporates batch auditing, reducing resource consumption in large-scale IoT environments by approximately 35%, while achieving an accuracy of over 99.2% in detecting data corruption. C-BIVM dynamically adapts integrity checks based on real-time conditions, optimizing resource utilization by reducing redundant operations by more than 30%. Furthermore, blind verification techniques safeguard sensitive IoT data, ensuring privacy compliance by preventing unauthorized access during integrity checks. Extensive experimental evaluations demonstrate that C-BIVM reduces computation time for integrity checks by up to 40% compared to traditional bilinear pairing-based methods, making it particularly suitable for IoT-driven applications in smart cities, healthcare, and beyond. These results underscore the effectiveness of C-BIVM in delivering a secure, scalable, and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
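The efficiency argument rests on two properties mentioned above: algebraic signatures are linear in the data blocks, and that linearity lets a batch audit verify one aggregated block instead of many individual ones. The toy sketch below (standard library only) implements a polynomial-evaluation signature over a prime field and the corresponding batch check; it illustrates the mechanism, not the C-BIVM protocol itself.

```python
import secrets

P = (1 << 61) - 1  # prime field for the algebraic signature
R = 1_000_003      # public evaluation point

def signature(block, r=R, p=P):
    """Algebraic signature sig(B) = sum_i B[i] * r^i (mod p); it is linear in the block."""
    acc, power = 0, 1
    for value in block:
        acc = (acc + value * power) % p
        power = (power * r) % p
    return acc

# Data owner keeps per-block signatures; the (untrusted) cloud stores the blocks.
blocks = [secrets.token_bytes(64) for _ in range(8)]
sigs = [signature(b) for b in blocks]

# Batch audit: random challenge coefficients nu_j, the prover aggregates the blocks,
# and the verifier checks a single signature instead of eight.
nu = [secrets.randbelow(P) for _ in blocks]
mu = [sum(v * b[i] for v, b in zip(nu, blocks)) % P for i in range(len(blocks[0]))]  # prover
expected = sum(v * s for v, s in zip(nu, sigs)) % P                                  # verifier
print("batch audit passed:", signature(mu) == expected)
```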
Combining the characteristics of the course "Comprehensive Training of E-Commerce Live Streaming," this paper embeds the CDIO (Conceive-Design-Implement-Operate) method into the live streaming training process, carries out a virtual-scene "e-commerce live streaming" course design and project-based teaching reform that integrates teaching and training with learning effects, and establishes a set of cross-professional student live streaming training procedures guided by the CDIO engineering method. The training results show that the CDIO practical teaching model supported by data feedback plays an important role in improving students' learning outcomes, and also provides new experience for integrating engineering thinking into the construction of the new liberal arts.
This paper presents the design and ground verification of vision-based relative navigation systems for microsatellites, offering a comprehensive hardware design solution and a robust experimental verification methodology for the practical implementation of vision-based navigation technology on the microsatellite platform. Firstly, a low-power, lightweight, and high-performance vision-based relative navigation optical sensor is designed. Subsequently, a ground verification system is designed for hardware-in-the-loop testing of vision-based relative navigation systems. Finally, the designed vision-based relative navigation optical sensor and the proposed angles-only navigation algorithms are tested on the ground verification system. The results verify that the optical simulator, after geometrical calibration, can meet the requirements of hardware-in-the-loop testing of vision-based relative navigation systems. Based on the experimental results, the relative position accuracy of the angles-only navigation filter at the terminal time is increased by 25.5%, and the relative velocity accuracy is increased by 31.3%, compared with those obtained with the optical simulator before geometrical calibration.
Kinship verification is a key biometric recognition task that determines biological relationships based on physical features. Traditional methods predominantly use facial recognition, leveraging established techniques and extensive datasets. However, recent research has highlighted ear recognition as a promising alternative, offering advantages in robustness against variations in facial expressions, aging, and occlusions. Despite its potential, a significant challenge in ear-based kinship verification is the lack of large-scale datasets necessary for training deep learning models effectively. To address this challenge, we introduce the EarKinshipVN dataset, a novel and extensive collection of ear images designed specifically for kinship verification. The dataset consists of 4876 high-resolution color images from 157 multiracial families across different regions, forming 73,220 kinship pairs. EarKinshipVN, a diverse and large-scale dataset, advances kinship verification research using ear features. Furthermore, we propose the Mixer Attention Inception (MAI) model, an improved architecture that enhances feature extraction and classification accuracy. The MAI model fuses Inception-v4 and MLP-Mixer, integrating four attention mechanisms to enhance spatial and channel-wise feature representation. Experimental results demonstrate that MAI significantly outperforms traditional backbone architectures. It achieves an accuracy of 98.71%, surpassing Vision Transformer models while reducing computational complexity by up to 95% in parameter usage. These findings suggest that ear-based kinship verification, combined with an optimized deep learning model and a comprehensive dataset, holds significant promise for biometric applications.
With the evolution of next-generation communication networks, ensuring a robust Core Network (CN) architecture and data security has become paramount. This paper addresses critical vulnerabilities in CN architecture and data security by proposing a novel framework based on blockchain technology that is specifically designed for communication networks. Traditional centralized network architectures are vulnerable to Distributed Denial of Service (DDoS) attacks, particularly in roaming scenarios, where there is also a risk of private data leakage, which imposes significant operational demands. To address these issues, we introduce the Blockchain-Enhanced Core Network Architecture (BECNA) and the Secure Decentralized Identity Authentication Scheme (SDIDAS). BECNA utilizes blockchain technology to decentralize data storage, enhancing network security, stability, and reliability by mitigating Single Points of Failure (SPoF). SDIDAS utilizes Decentralized Identity (DID) technology to secure user identity data and streamline authentication in roaming scenarios, significantly reducing the risk of data breaches during cross-network transmissions. Our framework employs Ethereum, free5GC, Wireshark, and UERANSIM tools to create a robust, tamper-evident system model. A comprehensive security analysis confirms substantial improvements in user privacy and network security. Simulation results indicate that our approach enhances the security and reliability of communication CNs while also ensuring data security.
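At its core, the decentralized-identity idea is that DID documents are anchored on a tamper-evident ledger so that a visited network can resolve and check a subscriber's credentials without trusting a single central store. The standard-library sketch below shows only that anchoring/resolution pattern with a toy hash-linked list; names such as IdentityLedger and did:example:ue-001 are hypothetical, and it does not model BECNA, Ethereum, or free5GC.

```python
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class IdentityLedger:
    """Toy append-only ledger anchoring DID documents (tamper-evident, not a real chain)."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "did": None, "doc": None, "ts": 0}]

    def register(self, did, doc):
        prev = block_hash(self.chain[-1])
        self.chain.append({"index": len(self.chain), "prev": prev,
                           "did": did, "doc": doc, "ts": time.time()})

    def verify_chain(self):
        # Each block must reference the hash of its predecessor.
        return all(b["prev"] == block_hash(self.chain[i])
                   for i, b in enumerate(self.chain[1:]))

    def resolve(self, did):
        # A visited network resolves the subscriber's DID document from the shared ledger.
        return next((b["doc"] for b in reversed(self.chain) if b["did"] == did), None)

ledger = IdentityLedger()
ledger.register("did:example:ue-001",
                {"verificationKeyHash": hashlib.sha256(b"ue-key").hexdigest()})
print(ledger.verify_chain(), ledger.resolve("did:example:ue-001"))
```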
Verification and validation (V&V) is a helpful tool for evaluating simulation errors, but its application to unsteady cavitating flow remains a challenging issue due to the difficulty in meeting the requirement of an asymptotic range. Hence, a new V&V approach for large eddy simulation (LES) is proposed. This approach offers a viable solution for the error estimation of simulation data that cannot satisfy the asymptotic range. The simulation errors of cavitating flow around a projectile near the free surface are assessed using the new V&V method. Evident error values are primarily dispersed around the cavity region and the free surface. Increasingly intense cavitating flow increases the error magnitudes. In addition, the modeling error magnitudes of the dynamic Smagorinsky-Lilly model are substantially smaller than those of the Smagorinsky-Lilly model. The present V&V method can capture the decrease in modeling errors due to model enhancements, further exhibiting its applicability in cavitating flow simulations. Moreover, the monitoring points where the simulation data are beyond the asymptotic range are primarily dispersed near the cavity region, and the number of such points grows as the cavitating flow intensifies. The simulation outcomes also suggest that the re-entrant jet and shedding cavity collapse are the chief sources of vorticity motions, which remarkably affect the simulation accuracy. The results of this study provide a valuable reference for V&V research.
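For context, classical solution verification (which the proposed approach modifies for data outside the asymptotic range) estimates the observed order of accuracy, a Richardson-extrapolated value, and a grid convergence index (GCI) from solutions on three systematically refined grids. A minimal sketch of that standard procedure, with illustrative values, is:

```python
import math

def grid_convergence(f1, f2, f3, r):
    """Observed order, Richardson-extrapolated value and fine-grid GCI from solutions on
    fine (f1), medium (f2) and coarse (f3) grids with constant refinement ratio r,
    assuming monotonic convergence."""
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)
    f_exact = f1 + (f1 - f2) / (r ** p - 1)            # Richardson extrapolation
    gci_fine = 1.25 * abs((f2 - f1) / f1) / (r ** p - 1)
    return p, f_exact, gci_fine

# Illustrative drag-coefficient-like values on three systematically refined grids.
p, f_ex, gci = grid_convergence(0.512, 0.524, 0.551, r=1.3)
print(f"observed order p = {p:.2f}, extrapolated value = {f_ex:.4f}, "
      f"GCI_fine = {100 * gci:.2f}%")
```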
In the foundry industry, process design has traditionally relied on manuals and complex theoretical calculations. With the advent of 3D design in casting, computer-aided design (CAD) has been applied to integrate the features of the casting process, thereby expanding the scope of design options. These technologies use parametric model design techniques for rapid component creation and use databases to access standard process parameters and design specifications. However, 3D models are currently still created by inputting or calling parameters, which requires numerous verification calculations to ensure design rationality. This process may be significantly slowed down by repetitive modifications and extended design time. As a result, there is an increasingly urgent demand for a real-time verification mechanism to address this issue. Therefore, this study proposes a novel closed-loop model and software development method that integrates contextual design with real-time verification, dynamically verifying the relevant rules while 3D casting components are designed. Additionally, the study analyzes three typical closed-loop scenarios of agile design in an independently developed intelligent casting process system. It is believed that foundries can benefit from reduced design cycles and a more competitive product market.
In order to remove the dependence on high-precision centrifuges in the calibration of accelerometer nonlinear coefficients, this paper proposes a system-level calibration method for field conditions. Firstly, a 42-dimensional Kalman filter is constructed to reduce the impact of turntable errors. Then, a biaxial rotation path is designed based on the accelerometer output model, including 22 orthogonal positions and 12 tilted positions, which enhances the gravity excitation of the accelerometer's nonlinear coefficients. Finally, sampling is carried out for calibration and further experiments. The results of static inertial navigation experiments lasting 4000 s show that, compared with the traditional method, the proposed method reduces the position error by about 390 m.
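The idea behind multi-position calibration is that tilting the sensitive axis sweeps the gravity projection across ±1 g, which excites the second-order (nonlinear) term of an accelerometer output model of the form y = K0 + K1·a + K2·a² and makes it observable without a centrifuge. The NumPy sketch below fits that model by ordinary least squares on simulated positions; it is a single-axis stand-in for the paper's 42-state Kalman filter.

```python
import numpy as np

rng = np.random.default_rng(5)
g = 9.80665

# "True" accelerometer model: y = K0 + K1*a + K2*a^2 (bias, scale factor, 2nd-order term).
K0_true, K1_true, K2_true = 2.0e-3, 1.0005, 8.0e-5

# Multi-position test: tilting the sensitive axis makes the gravity projection
# a = g*sin(theta) sweep the +/-1 g range, exciting the nonlinear term.
theta = np.deg2rad(np.linspace(-90.0, 90.0, 34))
a = g * np.sin(theta)
y = K0_true + K1_true * a + K2_true * a ** 2 + rng.normal(0, 2e-4, a.size)

# Least-squares estimate of the coefficients.
A = np.column_stack([np.ones_like(a), a, a ** 2])
K0, K1, K2 = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"K0 = {K0:.2e}, K1 = {K1:.6f}, K2 = {K2:.2e} (true K2 = {K2_true:.2e})")
```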
A systematic verification and validation (V&V) of our previously proposed momentum source wave generation method is performed. Some settings of the previous numerical wave tanks (NWTs) for regular and irregular waves have been optimized. The H2-5 V&V method, involving five mesh sizes with a mesh refinement ratio of 1.225, is used to verify the NWT of regular waves, in which the wave height and mass conservation are mainly considered based on a Lv3 (H_s = 0.75 m) and a Lv6 (H_s = 5 m) regular wave. Additionally, eight different sea states are chosen to validate the wave height, mass conservation, and wave frequency of regular waves. Regarding the NWT of irregular waves, five different sea states with significant wave heights ranging from 0.09 m to 12.5 m are selected to validate the statistical characteristics of irregular waves, including the profile of the wave spectrum, the peak frequency, and the significant wave height. Results show that the verification errors for the Lv3 and Lv6 regular waves on the most refined grid are −0.018 and −0.35 for wave height, respectively, and −0.14 and −0.17 for mass conservation, respectively. The uncertainty estimation analysis shows that the numerical error can be partially balanced out by the modelling error to achieve a smaller validation error by adjusting the mesh size elaborately. The validation errors of the wave height, mass conservation, and dominant frequency of regular waves under different sea states are no more than 7%, 8%, and 2%, respectively. For a Lv3 (H_s = 0.75 m) and a Lv6 (H_s = 5 m) regular wave, simulations are validated on the wave height in the wave development section for safety factors FS ≈ 1 and FS ≈ 0.5-1, respectively. Regarding irregular waves, the validation errors of the significant wave height and peak frequency are both lower than 2%.
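The irregular-wave statistics being validated, significant wave height and peak frequency, are routinely obtained from the elevation record via its spectrum: H_s = 4·√m0, where m0 is the zeroth spectral moment, and f_p is the frequency of the spectral maximum. A NumPy sketch on a synthetic elevation signal (not the NWT output) is:

```python
import numpy as np

rng = np.random.default_rng(11)
fs, T = 10.0, 3600.0                      # sampling rate (Hz) and record length (s)
t = np.arange(0.0, T, 1.0 / fs)

# Synthetic irregular-wave elevation: superposition of components around a 0.1 Hz peak.
freqs = np.linspace(0.05, 0.3, 60)
amps = 0.4 * np.exp(-0.5 * ((freqs - 0.1) / 0.02) ** 2)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
eta = sum(a * np.cos(2.0 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phases))

# One-sided spectral estimate -> zeroth moment m0 -> Hs = 4*sqrt(m0); fp from the maximum.
spec = 2.0 * np.abs(np.fft.rfft(eta)) ** 2 / (fs * eta.size)
f_axis = np.fft.rfftfreq(eta.size, 1.0 / fs)
m0 = np.trapz(spec, f_axis)
print(f"Hs = {4.0 * np.sqrt(m0):.2f} m (cf. 4*std = {4.0 * eta.std():.2f} m), "
      f"fp = {f_axis[spec.argmax()]:.3f} Hz")
```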
The scroll expander, as the core component of a micro compressed-air energy storage and power generation system, directly affects the output efficiency of the system, and the scroll profile plays a central role in determining the output performance of the scroll expander. In this study, to investigate the output characteristics of a variable cross-section scroll expander, numerical simulations and experimental studies were conducted using computational fluid dynamics (CFD) methods and dynamic mesh techniques. The impact of critical parameters on the output performance of the scroll expander was analyzed using the control variable method. It is found that increasing the inlet pressure and temperature within a certain range can improve the output power of the scroll expander. However, increases in temperature and meshing clearance lead to a decline in the overall output performance of the scroll expander, decreasing the volumetric efficiency by 8.43% and 12.79%, respectively. The experiments demonstrate that under equal inlet pressure conditions, increasing the inlet temperature raises both the rotational speed and the torque output of the scroll expander. Specifically, compared with operation at normal temperature, the output torque increases by 21.8% under high-temperature conditions. However, the rate of speed and torque variation decreases as a consequence of enlarged meshing clearance, resulting in increased internal leakage and a reduction in isentropic efficiency.
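The two performance measures quoted above are commonly evaluated as follows: isentropic efficiency compares the shaft power with the ideal (isentropic) enthalpy drop of the gas, and volumetric efficiency compares the measured mass flow with the flow the suction chamber would swallow per revolution. The sketch below applies those textbook definitions with ideal-gas air properties; all operating values are assumed, not taken from the paper.

```python
import math

# Illustrative operating point for a scroll expander test (assumed values).
p_in, p_out = 8.0e5, 1.0e5        # inlet / outlet pressure, Pa
T_in = 393.15                     # inlet temperature, K
mdot = 0.006                      # measured mass flow rate, kg/s
speed_rpm, torque = 2400.0, 2.5   # shaft speed (rpm) and torque (N*m)
V_swept = 3.0e-5                  # suction chamber volume per revolution, m^3

cp, gamma, R = 1005.0, 1.4, 287.0  # ideal-gas air properties

# Isentropic enthalpy drop and isentropic efficiency from the shaft power.
dh_s = cp * T_in * (1.0 - (p_out / p_in) ** ((gamma - 1.0) / gamma))
P_shaft = torque * 2.0 * math.pi * speed_rpm / 60.0
eta_isentropic = P_shaft / (mdot * dh_s)

# Volumetric efficiency: measured flow vs. flow swallowed by the suction chamber.
rho_in = p_in / (R * T_in)
eta_volumetric = mdot / (rho_in * V_swept * speed_rpm / 60.0)
print(f"eta_is = {eta_isentropic:.2f}, eta_vol = {eta_volumetric:.2f}")
```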
To minimize the errors in the calculated sound absorption coefficient that result from inaccurate measurements of flow resistivity, a simple method for determining the sound absorption coefficient of sound-absorbing materials is proposed. Firstly, the sound absorption coefficients of a fibrous sound-absorbing material are measured at two different frequencies using the impedance tube method. Secondly, using the empirical formulas for the wavenumber and acoustic impedance of the fibrous material, the flow resistivity and porosity of the material are back-calculated with an iterative MATLAB routine. Thirdly, based on the values obtained through this reverse calculation, the sound absorption coefficient and the real and imaginary parts of the acoustic impedance of the material at different frequencies are computed theoretically. Finally, the accuracy of these theoretical calculations is verified through experiments. The experimental results indicate that the calculated values are basically consistent with the measured values, demonstrating the feasibility and reliability of this method.
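The "empirical formulas for the wavenumber and acoustic impedance" of fibrous absorbers are typically of the Delany-Bazley type, parameterized by the flow resistivity. The NumPy sketch below implements only the forward step, the normal-incidence absorption coefficient of a rigid-backed layer for an assumed flow resistivity and thickness; the paper's inverse step would wrap this in a search over flow resistivity until the two measured absorption values are matched.

```python
import numpy as np

rho0, c0 = 1.21, 343.0  # air density (kg/m^3) and sound speed (m/s)

def alpha_delany_bazley(f, sigma, d):
    """Normal-incidence absorption coefficient of a rigid-backed fibrous layer, using the
    Delany-Bazley empirical characteristic impedance and wavenumber (X = rho0*f/sigma)."""
    X = rho0 * f / sigma
    Zc = rho0 * c0 * (1 + 0.0571 * X ** -0.754 - 1j * 0.087 * X ** -0.732)
    kc = (2 * np.pi * f / c0) * (1 + 0.0978 * X ** -0.700 - 1j * 0.189 * X ** -0.595)
    Zs = -1j * Zc / np.tan(kc * d)                 # surface impedance of the backed layer
    refl = (Zs - rho0 * c0) / (Zs + rho0 * c0)
    return 1 - np.abs(refl) ** 2

# Forward calculation for an assumed flow resistivity (Pa*s/m^2) and thickness (m).
f = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
print(np.round(alpha_delany_bazley(f, sigma=12000.0, d=0.05), 3))
```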
The physical layer key generation technique provides an efficient method that utilizes the natural dynamics of the wireless channel. However, there are some extremely challenging security scenarios, such as static or quasi-static environments, which lead to low randomness of the generated keys. Meanwhile, the coefficients of a static channel may fall into the guard space and be discarded by the quantization approach, which causes a low key generation rate. To tackle these issues, we propose a random coefficient-moving product based wireless key generation scheme (RCMP-WKG), in which new random resources with remarkable fluctuations can be obtained by applying random coefficients and a moving product at the legitimate nodes. Furthermore, appropriate quantization approaches are used to increase the key generation rate. Moreover, the security of the proposed scheme is evaluated by analyzing different attacks and the eavesdropper's mean square error (MSE). The simulation results reveal that the proposed scheme can achieve better performance in key capacity, key inconsistency rate (KIR), and key generation rate (KGR) compared with prior works in static environments. Besides, the proposed scheme can deteriorate the MSE performance of the eavesdropper and improve the key generation performance of the legitimate nodes by controlling the length of the moving product.
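A hedged sketch of the general pipeline, reciprocal channel estimates, a sliding (moving) product that amplifies the weak fluctuations of a quasi-static channel, and 1-bit quantization into key bits with the key inconsistency rate evaluated afterwards, is given below in NumPy. It omits the random-coefficient exchange and reconciliation steps of RCMP-WKG and is not a reproduction of that scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
N, win = 512, 4

# Reciprocal channel magnitude observed at Alice and Bob (quasi-static: small true
# variation), each corrupted by independent estimation noise.
h_true = 1.0 + 0.02 * np.cumsum(rng.normal(0, 0.01, N))
h_alice = h_true + rng.normal(0, 0.005, N)
h_bob = h_true + rng.normal(0, 0.005, N)

def moving_product(h, w):
    """Product over a sliding window, which amplifies small channel fluctuations."""
    logs = np.convolve(np.log(h), np.ones(w), mode="valid")
    return np.exp(logs)

def quantize(x):
    """1-bit quantization against the median (no guard band, so no samples are dropped)."""
    return (x > np.median(x)).astype(int)

ka = quantize(moving_product(h_alice, win))
kb = quantize(moving_product(h_bob, win))
kir = np.mean(ka != kb)   # key inconsistency rate before information reconciliation
print(f"key length = {ka.size} bits, KIR = {kir:.3f}")
```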
基金supported by Zhengzhou PM_(2.5)and O_(3)Collaborative Control and Monitoring Project(No.20220347A)the 2020 National Supercomputing Zhengzhou Center Innovation Ecosystem Construction Technology Project(No.201400210700).
文摘Improving the accuracy of anthropogenic volatile organic compounds(VOCs)emission inventory is crucial for reducing atmospheric pollution and formulating control policy of air pollution.In this study,an anthropogenic speciated VOCs emission inventory was established for Central China represented by Henan Province at a 3 km×3 km spatial resolution based on the emission factormethod.The 2019 VOCs emission in Henan Provincewas 1003.5 Gg,while industrial process source(33.7%)was the highest emission source,Zhengzhou(17.9%)was the city with highest emission and April and August were the months with the more emissions.High VOCs emission regions were concentrated in downtown areas and industrial parks.Alkanes and aromatic hydrocarbons were the main VOCs contribution groups.The species composition,source contribution and spatial distribution were verified and evaluated through tracer ratio method(TR),Positive Matrix Factorization Model(PMF)and remote sensing inversion(RSI).Results show that both the emission results by emission inventory(EI)(15.7 Gg)and by TRmethod(13.6 Gg)and source contribution by EI and PMF are familiar.The spatial distribution of HCHO primary emission based on RSI is basically consistent with that of HCHO emission based on EI with a R-value of 0.73.The verification results show that the VOCs emission inventory and speciated emission inventory established in this study are relatively reliable.
基金support for this work by Key Research and Development Project of Henan Province(Grant.No.241111232300)the National Natural Science Foundation of China(Grant.No.52273085 and 52303113)the Open Fund of Yaoshan Laboratory(Grant.No.2024003).
文摘The morphological distribution of absorbent in composites is equally important with absorbents for the overall electromagnetic properties,but it is often ignored.Herein,a comprehensive consideration including electromagnetic component regulation,layered arrangement structure,and gradient concentration distribution was used to optimize impedance matching and enhance electromagnetic loss.On the microscale,the incorporation of magnetic Ni nanoparticles into MXene nanosheets(Ni@MXene)endows suitable intrinsic permittivity and permeability.On the macroscale,the layered arrangement of Ni@MXene increases the effective interaction area with electromagnetic waves,inducing multiple reflection/scattering effects.On this basis,according to the analysis of absorption,reflection,and transmission(A-R-T)power coefficients of layered composites,the gradient concentration distribution was constructed to realize the impedance matching at low-concentration surface layer,electromagnetic loss at middle concentration interlayer and microwave reflection at high-concentration bottom layer.Consequently,the layered gradient composite(LG5-10-15)achieves complete absorption coverage of X-band at thickness of 2.00-2.20 mm with RL_(min) of-68.67 dB at 9.85 GHz in 2.05 mm,which is 199.0%,12.6%,and 50.6%higher than non-layered,layered and layered descending gradient composites,respectively.Therefore,this work confirms the importance of layered gradient structure in improving absorption performance and broadens the design of high-performance microwave absorption materials.
基金funded by the National Natural Science Foundation of China Youth Fund(Grant No.62304022)Science and Technology on Electromechanical Dynamic Control Laboratory(China,Grant No.6142601012304)the 2022e2024 China Association for Science and Technology Innovation Integration Association Youth Talent Support Project(Grant No.2022QNRC001).
文摘Metal Additive Manufacturing(MAM) technology has become an important means of rapid prototyping precision manufacturing of special high dynamic heterogeneous complex parts. In response to the micromechanical defects such as porosity issues, significant deformation, surface cracks, and challenging control of surface morphology encountered during the selective laser melting(SLM) additive manufacturing(AM) process of specialized Micro Electromechanical System(MEMS) components, multiparameter optimization and micro powder melt pool/macro-scale mechanical properties control simulation of specialized components are conducted. The optimal parameters obtained through highprecision preparation and machining of components and static/high dynamic verification are: laser power of 110 W, laser speed of 600 mm/s, laser diameter of 75 μm, and scanning spacing of 50 μm. The density of the subordinate components under this reference can reach 99.15%, the surface hardness can reach 51.9 HRA, the yield strength can reach 550 MPa, the maximum machining error of the components is 4.73%, and the average surface roughness is 0.45 μm. Through dynamic hammering and high dynamic firing verification, SLM components meet the requirements for overload resistance. The results have proven that MEM technology can provide a new means for the processing of MEMS components applied in high dynamic environments. The parameters obtained in the conclusion can provide a design basis for the additive preparation of MEMS components.
基金Supported by the Shaanxi Provincial Key Research and Development Plan Project,No.2020ZDLSF01-02.
文摘BACKGROUND At present,the conventional methods for diagnosing cerebral edema in clinical practice are computed tomography(CT)and magnetic resonance imaging(MRI),which can evaluate the location and degree of peripheral cerebral edema,but cannot realize quantification.When patients have symptoms of diffuse cerebral edema or high cranial pressure,CT or MRI often suggests that cerebral edema is lagging and cannot be dynamically monitored in real time.Intracranial pressure monitoring is the gold standard,but it is an invasive operation with high cost and complications.For clinical purposes,the ideal cerebral edema monitoring should be non-invasive,real-time,bedside,and continuous dynamic monitoring.The dis-turbance coefficient(DC)was used in this study to dynamically monitor the occu-rrence,development,and evolution of cerebral edema in patients with cerebral hemorrhage in real time,and review head CT or MRI to evaluate the development of the disease and guide further treatment,so as to improve the prognosis of patients with cerebral hemorrhage.AIM To offer a promising new approach for non-invasive adjuvant therapy in cerebral edema treatment.METHODS A total of 160 patients with hypertensive cerebral hemorrhage admitted to the Department of Neurosurgery,Second Affiliated Hospital of Xi’an Medical University from September 2018 to September 2019 were recruited.The patients were randomly divided into a control group(n=80)and an experimental group(n=80).Patients in the control group received conventional empirical treatment,while those in the experimental group were treated with mannitol dehydration under the guidance of DC.Subsequently,we compared the two groups with regards to the total dosage of mannitol,the total course of treatment,the incidence of complications,and prognosis.RESULTS The mean daily consumption of mannitol,the total course of treatment,and the mean hospitalization days were 362.7±117.7 mL,14.8±5.2 days,and 29.4±7.9 in the control group and 283.1±93.6 mL,11.8±4.2 days,and 23.9±8.3 in the experimental group(P<0.05).In the control group,there were 20 patients with pulmonary infection(25%),30 with electrolyte disturbance(37.5%),20 with renal impairment(25%),and 16 with stress ulcer(20%).In the experimental group,pulmonary infection occurred in 18 patients(22.5%),electrolyte disturbance in 6(7.5%),renal impairment in 2(2.5%),and stress ulcers in 15(18.8%)(P<0.05).According to the Glasgow coma scale score 6 months after discharge,the prognosis of the control group was good in 20 patients(25%),fair in 26(32.5%),and poor in 34(42.5%);the prognosis of the experimental group was good in 32(40%),fair in 36(45%),and poor in 12(15%)(P<0.05).CONCLUSION Using DC for non-invasive dynamic monitoring of cerebral edema demonstrates considerable clinical potential.It reduces mannitol dosage,treatment duration,complication rates,and hospital stays,ultimately lowering hospital-ization costs.Additionally,it improves overall patient prognosis,offering a promising new approach for non-invasive adjuvant therapy in cerebral edema treatment.
文摘BACKGROUND Various stone factors can affect the net results of shock wave lithotripsy(SWL).Recently a new factor called variation coefficient of stone density(VCSD)is being considered to have an impact on stone free rates.AIM To assess the role of VCSD in determining success of SWL in urinary calculi.METHODS Charts review was utilized for collection of data variables.The patients were subjected to SWL,using an electromagnetic lithotripter.Mean stone density(MSD),stone heterogeneity index(SHI),and VCSD were calculated by generating regions of interest on computed tomography(CT)images.Role of these factors were determined by applying the relevant statistical tests for continuous and categorical variables and a P value of<0.05 was gauged to be statistically significant.RESULTS There were a total of 407 patients included in the analysis.The mean age of the subjects in this study was 38.89±14.61 years.In total,165 out of the 407 patients could not achieve stone free status.The successful group had a significantly lower stone volume as compared to the unsuccessful group(P<0.0001).Skin to stone distance was not dissimilar among the two groups(P=0.47).MSD was significantly lower in the successful group(P<0.0001).SHI and VCSD were both significantly higher in the successful group(P<0.0001).CONCLUSION VCSD,a useful CT based parameter,can be utilized to gauge stone fragility and hence the prediction of SWL outcomes.
基金funding from the National Natural Science Foundation of China (Grant No.42277175)the pilot project of cooperation between the Ministry of Natural Resources and Hunan Province“Research and demonstration of key technologies for comprehensive remote sensing identification of geological hazards in typical regions of Hunan Province” (Grant No.2023ZRBSHZ056)the National Key Research and Development Program of China-2023 Key Special Project (Grant No.2023YFC2907400).
文摘Joint roughness coefficient(JRC)is the most commonly used parameter for quantifying surface roughness of rock discontinuities in practice.The system composed of multiple roughness statistical parameters to measure JRC is a nonlinear system with a lot of overlapping information.In this paper,a dataset of eight roughness statistical parameters covering 112 digital joints is established.Then,the principal component analysis method is introduced to extract the significant information,which solves the information overlap problem of roughness characterization.Based on the two principal components of extracted features,the white shark optimizer algorithm was introduced to optimize the extreme gradient boosting model,and a new machine learning(ML)prediction model was established.The prediction accuracy of the new model and the other 17 models was measured using statistical metrics.The results show that the prediction result of the new model is more consistent with the real JRC value,with higher recognition accuracy and generalization ability.
文摘Let f be a primitive holomorphic cusp form with even integral weight k≥2 for the full modular groupΓ=SL(2,Z)andλ_(sym^(j)f)(n)be the n-th coefficient of Dirichlet series of j-th symmetric L-function L(s,sym^(j)f)attached to f.In this paper,we study the mean value distribution over a specific sparse sequence of positive integers of the following sum∑(a^(2)+b^(2)+c^(2)+d^(2)≤x(a,b,c,d)∈Z^(4))λ_(sym^(j))^(i)f(a^(2)+b^(2)+c^(2)+d^(2))where j≥2 is a given positive integer,i=2,3,4 andαis sufficiently large.We utilize Python programming to design algorithms for higher power conditions,combining Perron's formula,latest results of representations of natural integers as sums of squares,as well as analytic properties and subconvexity and convexity bounds of automorphic L-functions,to ensure the accuracy and verifiability of asymptotic formulas.The conclusion we obtained improves previous results and extends them to a more general settings.
文摘The study of the morphometric parameters of the three most abundant species in the lower course of the Kouilou River (Chrysichthys auratus, Liza falcipinnis and Pellonula vorax) was carried out. The standard length of Chrysichthys auratus varies between 43.57 and 210 mm, for an average of 96.70 ± 28.63 mm;the weight varies between 2.92 and 140.83 mg, an average of 73.03 ± 21.62 mg. The condition coefficient is equal to 4.42 ± 1.52. Liza falcipinnis has a standard length which varies between 59.9 mm and 158.08 mm for an average of 88.15 ± 29.74 mm;its weight varies between 4.77 and 76.21 mg, an average of 18.61 ± 11.82 mg. The condition coefficient is equal to 2.47 ± 1.57. Pellonula vorax has a standard length which varies between 60.33 mm and 117.72 mm;for an average of 80.48 ± 17.75 mm;the weight varies between 3.61 and 25.17 mg, an average of 9.03 ± 3.61 mg. The condition coefficient is equal to 2.17 ± 0.57. These three species have a minor allometric growth.
基金supported by King Saud University,Riyadh,Saudi Arabia,through Researchers Supporting Project number RSP2025R498.
文摘The exponential growth of the Internet of Things(IoT)has revolutionized various domains such as healthcare,smart cities,and agriculture,generating vast volumes of data that require secure processing and storage in cloud environments.However,reliance on cloud infrastructure raises critical security challenges,particularly regarding data integrity.While existing cryptographic methods provide robust integrity verification,they impose significant computational and energy overheads on resource-constrained IoT devices,limiting their applicability in large-scale,real-time scenarios.To address these challenges,we propose the Cognitive-Based Integrity Verification Model(C-BIVM),which leverages Belief-Desire-Intention(BDI)cognitive intelligence and algebraic signatures to enable lightweight,efficient,and scalable data integrity verification.The model incorporates batch auditing,reducing resource consumption in large-scale IoT environments by approximately 35%,while achieving an accuracy of over 99.2%in detecting data corruption.C-BIVM dynamically adapts integrity checks based on real-time conditions,optimizing resource utilization by minimizing redundant operations by more than 30%.Furthermore,blind verification techniques safeguard sensitive IoT data,ensuring privacy compliance by preventing unauthorized access during integrity checks.Extensive experimental evaluations demonstrate that C-BIVM reduces computation time for integrity checks by up to 40%compared to traditional bilinear pairing-based methods,making it particularly suitable for IoT-driven applications in smart cities,healthcare,and beyond.These results underscore the effectiveness of C-BIVM in delivering a secure,scalable,and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
基金phased research achievement of the Major Project of Philosophy and Social Sciences Research in Jiangsu Universities“Research on the Intervention Mechanism of Short Video Addiction”(2024SJZD145)。
文摘Combining the characteristics of the course“Comprehensive Training of E-Commerce Live Streaming,”this paper embeds the CDIO(Conceive-Design-Implement-Operate)method into the live streaming training process,carries out the virtual scene“e-commerce live streaming”course design and project-based teaching reform that integrates teaching training with learning effects,and establishes a set of cross-professional student live streaming training procedures guided by the CDIO engineering method.The training results show that the CDIO practical teaching model supported by data feedback plays an important role and significance in improving students’learning effects,and also provides some new experiences for integrating engineering thinking into the construction of new liberal arts.
基金supported in part by the Doctoral Initiation Fund of Nanchang Hangkong University(No.EA202403107)Jiangxi Province Early Career Youth Science and Technology Talent Training Project(No.CK202403509).
文摘This paper presents the design and ground verification for vision-based relative navigation systems of microsatellites,which offers a comprehensive hardware design solution and a robust experimental verification methodology for practical implementation of vision-based navigation technology on the microsatellite platform.Firstly,a low power consumption,light weight,and high performance vision-based relative navigation optical sensor is designed.Subsequently,a set of ground verification system is designed for the hardware-in-the-loop testing of the vision-based relative navigation systems.Finally,the designed vision-based relative navigation optical sensor and the proposed angles-only navigation algorithms are tested on the ground verification system.The results verify that the optical simulator after geometrical calibration can meet the requirements of the hardware-in-the-loop testing of vision-based relative navigation systems.Based on experimental results,the relative position accuracy of the angles-only navigation filter at terminal time is increased by 25.5%,and the relative speed accuracy is increased by 31.3% compared with those of optical simulator before geometrical calibration.
文摘Kinship verification is a key biometric recognition task that determines biological relationships based on physical features.Traditional methods predominantly use facial recognition,leveraging established techniques and extensive datasets.However,recent research has highlighted ear recognition as a promising alternative,offering advantages in robustness against variations in facial expressions,aging,and occlusions.Despite its potential,a significant challenge in ear-based kinship verification is the lack of large-scale datasets necessary for training deep learning models effectively.To address this challenge,we introduce the EarKinshipVN dataset,a novel and extensive collection of ear images designed specifically for kinship verification.This dataset consists of 4876 high-resolution color images from 157 multiracial families across different regions,forming 73,220 kinship pairs.EarKinshipVN,a diverse and large-scale dataset,advances kinship verification research using ear features.Furthermore,we propose the Mixer Attention Inception(MAI)model,an improved architecture that enhances feature extraction and classification accuracy.The MAI model fuses Inceptionv4 and MLP Mixer,integrating four attention mechanisms to enhance spatial and channel-wise feature representation.Experimental results demonstrate that MAI significantly outperforms traditional backbone architectures.It achieves an accuracy of 98.71%,surpassing Vision Transformer models while reducing computational complexity by up to 95%in parameter usage.These findings suggest that ear-based kinship verification,combined with an optimized deep learning model and a comprehensive dataset,holds significant promise for biometric applications.
基金supported by the Beijing Natural Science Foundation(L223025,4242003)Qin Xin Talents Cultivation Program of Beijing Information Science&Technology University(QXTCP B202405)。
文摘With the evolution of next-generation communication networks,ensuring robust Core Network(CN)architecture and data security has become paramount.This paper addresses critical vulnerabilities in the architecture of CN and data security by proposing a novel framework based on blockchain technology that is specifically designed for communication networks.Traditional centralized network architectures are vulnerable to Distributed Denial of Service(DDoS)attacks,particularly in roaming scenarios where there is also a risk of private data leakage,which imposes significant operational demands.To address these issues,we introduce the Blockchain-Enhanced Core Network Architecture(BECNA)and the Secure Decentralized Identity Authentication Scheme(SDIDAS).The BECNA utilizes blockchain technology to decentralize data storage,enhancing network security,stability,and reliability by mitigating Single Points of Failure(SPoF).The SDIDAS utilizes Decentralized Identity(DID)technology to secure user identity data and streamline authentication in roaming scenarios,significantly reducing the risk of data breaches during cross-network transmissions.Our framework employs Ethereum,free5GC,Wireshark,and UERANSIM tools to create a robust,tamper-evident system model.A comprehensive security analysis confirms substantial improvements in user privacy and network security.Simulation results indicate that our approach enhances communication CNs security and reliability,while also ensuring data security.
基金Supported by the National Key R&D Program of China(2022YFB3303501)the National Natural Science Foundation of China(Project Nos.52176041 and 12102308)the Fundamental Research Funds for the Central Universities(Project Nos.2042023kf0208 and 2042023kf0159).
文摘Verification and validation(V&V)is a helpful tool for evaluating simulation errors,but its application in unsteady cavitating flow remains a challenging issue due to the difficulty in meeting the requirement of an asymptotic range.Hence,a new V&V approach for large eddy simulation(LES)is proposed.This approach offers a viable solution for the error estimation of simulation data that are unable to satisfy the asymptotic range.The simulation errors of cavitating flow around a projectile near the free surface are assessed using the new V&V method.The evident error values are primarily dispersed around the cavity region and free surface.The increasingly intense cavitating flow increases the error magnitudes.In addition,the modeling error magnitudes of the Dynamic Smagorinsky-Lilly model are substantially smaller than that of the Smagorinsky-Lilly model.The present V&V method can capture the decrease in the modeling errors due to model enhancements,further exhibiting its applicability in cavitating flow simulations.Moreover,the monitoring points where the simulation data are beyond the asymptotic range are primarily dispersed near the cavity region,and the number of such points grows as the cavitating flow intensifies.The simulation outcomes also suggest that the re-entrant jet and shedding cavity collapse are the chief sources of vorticity motions,which remarkably affect the simulation accuracy.The results of this study provide a valuable reference for V&V research.
基金the financial support of the Natural Science Foundation of Hubei Province,China (Grant No.2022CFB770)。
文摘In the foundry industries,process design has traditionally relied on manuals and complex theoretical calculations.With the advent of 3D design in casting,computer-aided design(CAD)has been applied to integrate the features of casting process,thereby expanding the scope of design options.These technologies use parametric model design techniques for rapid component creation and use databases to access standard process parameters and design specifications.However,3D models are currently still created through inputting or calling parameters,which requires numerous verifications through calculations to ensure the design rationality.This process may be significantly slowed down due to repetitive modifications and extended design time.As a result,there are increasingly urgent demands for a real-time verification mechanism to address this issue.Therefore,this study proposed a novel closed-loop model and software development method that integrated contextual design with real-time verification,dynamically verifying relevant rules for designing 3D casting components.Additionally,the study analyzed three typical closed-loop scenarios of agile design in an independent developed intelligent casting process system.It is believed that foundry industries can potentially benefit from favorably reduced design cycles to yield an enhanced competitive product market.
Funding: Supported by the National Natural Science Foundation of China (42276199).
Abstract: To remove the dependence on high-precision centrifuges in the calibration of accelerometer nonlinear coefficients, this paper proposes a system-level calibration method for field conditions. Firstly, a 42-dimension Kalman filter is constructed to reduce the impact introduced by the turntable. Then, a biaxial rotation path is designed based on the accelerometer output model, consisting of 22 orthogonal positions and 12 tilted positions, which enhances the gravity excitation of the accelerometer nonlinear coefficients. Finally, sampling is carried out for calibration and further experiments. Static inertial navigation experiments lasting 4000 s show that, compared with the traditional method, the proposed method reduces the position error by about 390 m.
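For orientation, a commonly used single-axis accelerometer output model with a second-order nonlinear term (a generic textbook form; the abstract does not state the exact model or the composition of the 42 filter states) is

$$N = K_0 + K_1 a_i + K_2 a_i^2 + \varepsilon,$$

where $N$ is the output, $a_i$ the specific force along the input axis, $K_0$ the bias, $K_1$ the scale factor, $K_2$ the second-order nonlinear coefficient to be calibrated, and $\varepsilon$ the measurement noise. Rotating the instrument through many orientations in the 1 g field varies $a_i$ between $-g$ and $+g$, which excites $K_2$ without requiring a centrifuge.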
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFB3303500).
Abstract: A systematic verification and validation (V&V) of our previously proposed momentum source wave generation method is performed. Some settings of the previous numerical wave tanks (NWTs) for regular and irregular waves have been optimized. The H2-5 V&V method, involving five mesh sizes with a mesh refinement ratio of 1.225, is used to verify the NWT of regular waves, in which the wave height and mass conservation are mainly considered for a Lv3 (H_(s)=0.75 m) and a Lv6 (H_(s)=5 m) regular wave. Additionally, eight different sea states are chosen to validate the wave height, mass conservation and wave frequency of regular waves. For the NWT of irregular waves, five different sea states with significant wave heights ranging from 0.09 m to 12.5 m are selected to validate the statistical characteristics of irregular waves, including the wave spectrum profile, peak frequency and significant wave height. Results show that the verification errors for the Lv3 and Lv6 regular waves on the finest grid are −0.018 and −0.35 for wave height, and −0.14 and −0.17 for mass conservation, respectively. The uncertainty estimation analysis shows that the numerical error can be partially balanced by the modelling error, so a smaller validation error can be achieved by adjusting the mesh size carefully. The validation errors of the wave height, mass conservation and dominant frequency of regular waves under different sea states are no more than 7%, 8% and 2%, respectively. For the Lv3 (H_(s)=0.75 m) and Lv6 (H_(s)=5 m) regular waves, the simulations are validated on the wave height in the wave development section for safety factors FS≈1 and FS≈0.5-1, respectively. For irregular waves, the validation errors of the significant wave height and peak frequency are both below 2%.
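As background for the validation terminology (a standard formulation along the lines of ASME V&V 20, not the specific H2-5 expressions, which the abstract does not give), the comparison error and the validation uncertainty are typically written as

$$E = S - D, \qquad U_{\mathrm{val}} = \sqrt{U_{\mathrm{num}}^2 + U_{\mathrm{input}}^2 + U_D^2},$$

where $S$ is the simulated value, $D$ the benchmark value, and $U_{\mathrm{num}}$, $U_{\mathrm{input}}$, $U_D$ the numerical, input and benchmark uncertainties. A result is considered validated at that level when $|E| \le U_{\mathrm{val}}$, and because $E$ sums the numerical and modelling error contributions, errors of opposite sign can partially cancel, which is the balancing effect noted above.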
Funding: Funded by the National Key Research and Development Program of China (No. 2024YFE0208100).
Abstract: The scroll expander, as the core component of a micro compressed-air energy storage and power generation system, directly affects the output efficiency of the system, and the scroll profile plays a central role in determining the expander's output performance. In this study, to investigate the output characteristics of a variable cross-section scroll expander, numerical simulations and experiments were conducted using Computational Fluid Dynamics (CFD) methods and dynamic mesh techniques. The impact of critical parameters on the output performance of the scroll expander was analyzed with the control variable method. It is found that increasing the inlet pressure and temperature within a certain range can improve the output power of the scroll expander. However, increases in temperature and meshing clearance degrade the overall output performance, reducing the volumetric efficiency by 8.43% and 12.79%, respectively. The experiments demonstrate that, at equal inlet pressure, increasing the inlet temperature raises both the rotational speed and the torque output of the scroll expander; compared with operation at normal temperature, the output torque increases by 21.8% under high-temperature conditions. However, the rate of speed and torque variation decreases as the meshing clearance enlarges, which increases internal leakage and reduces the isentropic efficiency.
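For reference, the two efficiencies quoted above are commonly defined as follows (standard definitions under one common convention for expanders, not taken from the paper):

$$\eta_v = \frac{\dot m_{\mathrm{th}}}{\dot m_{\mathrm{act}}}, \qquad \eta_s = \frac{h_{\mathrm{in}} - h_{\mathrm{out}}}{h_{\mathrm{in}} - h_{\mathrm{out},s}},$$

where $\dot m_{\mathrm{th}}$ is the mass flow rate implied by the swept volume and rotational speed, $\dot m_{\mathrm{act}}$ the actual mass flow rate (so internal leakage, which raises the actual flow, lowers $\eta_v$), and $h_{\mathrm{out},s}$ the outlet enthalpy of an isentropic expansion to the same outlet pressure.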
Funding: National Natural Science Foundation of China (No. 51705545).
Abstract: To minimize the errors in the calculated sound absorption coefficient resulting from inaccurate measurements of flow resistivity, a simple method for determining the sound absorption coefficient of sound-absorbing materials is proposed. Firstly, the sound absorption coefficients of a fibrous sound-absorbing material are measured at two different frequencies using the impedance tube method. Secondly, using the empirical formulas for the wavenumber and acoustic impedance in the fibrous material, the flow resistivity and porosity of the material are back-calculated with a MATLAB iteration program. Thirdly, based on the values obtained from this inverse calculation, the sound absorption coefficient and the real and imaginary parts of the acoustic impedance of the material at other frequencies are computed theoretically. Finally, the accuracy of these theoretical calculations is verified through experiments. The experimental results indicate that the calculated values are basically consistent with the measured values, demonstrating the feasibility and reliability of this method.
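To illustrate how an empirical fibrous-material model ties flow resistivity to the absorption coefficient (the abstract does not name the empirical formulas used, so the sketch below assumes the widely used Delany-Bazley relations and a rigid-backed layer of hypothetical thickness):

import numpy as np

RHO0, C0 = 1.21, 343.0  # air density (kg/m^3) and speed of sound (m/s)

def absorption_coefficient(f, sigma, d):
    """Normal-incidence absorption of a rigid-backed fibrous layer.
    f: frequency (Hz), sigma: flow resistivity (Pa*s/m^2), d: thickness (m).
    Delany-Bazley empirical model, valid roughly for 0.01 < rho0*f/sigma < 1."""
    X = RHO0 * f / sigma
    Zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)   # characteristic impedance
    k = (2 * np.pi * f / C0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)  # wavenumber
    Zs = -1j * Zc / np.tan(k * d)            # surface impedance of the rigid-backed layer
    R = (Zs - RHO0 * C0) / (Zs + RHO0 * C0)  # normal-incidence reflection coefficient
    return 1 - np.abs(R)**2

# Hypothetical example: a 50 mm layer with flow resistivity 15000 Pa*s/m^2 at 1 kHz.
print(absorption_coefficient(1000.0, 15000.0, 0.05))

The inverse step described in the abstract amounts to finding the flow resistivity (and porosity, in the more general model) for which such a forward calculation reproduces the two measured absorption coefficients.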
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62171445, 62471477 and 62201592).
Abstract: Physical layer key generation is an efficient technique that exploits the natural dynamics of the wireless channel. However, in extremely challenging security scenarios such as static or quasi-static environments, the randomness of the generated keys is low. Moreover, the coefficients of a static channel may fall into the guard band and be discarded by the quantization approach, which results in a low key generation rate. To tackle these issues, we propose a random coefficient-moving product based wireless key generation scheme (RCMP-WKG), in which new random resources with remarkable fluctuations are obtained at the legitimate nodes by applying random coefficients and a moving product. Furthermore, appropriate quantization approaches are used to increase the key generation rate. The security of the proposed scheme is evaluated by analyzing different attacks and the eavesdropper's mean square error (MSE). Simulation results reveal that, compared with prior works in static environments, the proposed scheme achieves better key capacity, key inconsistency rate (KIR) and key generation rate (KGR). In addition, the proposed scheme degrades the eavesdropper's MSE performance and improves the key generation performance of the legitimate nodes by controlling the length of the moving product.
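To make the guard-band effect mentioned above concrete, here is a minimal sketch of a generic mean-plus-guard-band quantizer with a fixed guard width (not the specific quantization used in RCMP-WKG, whose details the abstract does not give):

import numpy as np

def guard_band_quantize(samples, guard=0.1):
    """Quantize channel measurements into bits; samples within +/-guard of the
    mean fall in the guard band and are discarded. A quasi-static channel
    concentrates samples in the guard band, so few bits survive."""
    mu = np.mean(samples)
    return [1 if s > mu + guard else 0 for s in samples if abs(s - mu) > guard]

rng = np.random.default_rng(0)
dynamic = rng.normal(0.0, 1.0, 1000)        # rich scattering: large fluctuations
static = 1.0 + rng.normal(0.0, 0.01, 1000)  # quasi-static: tiny fluctuations
print(len(guard_band_quantize(dynamic)), len(guard_band_quantize(static)))
# Nearly all static-channel samples are dropped, i.e., the key generation rate collapses,
# which is the situation the random coefficient and moving product are designed to avoid.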