While programmable networks simplify network management and increase network flexibility through custom packet behavior, security incidents caused by human logic errors seriously threaten their safe operation, so robust verification methods are required to ensure their correctness. As one of the formal methods, symbolic execution offers a viable approach for verifying programmable networks by systematically exploring all possible paths within a program. However, its application in this field encounters scalability issues due to path explosion and complex constraint solving. Therefore, in this paper, we propose NetVerifier, a scalable verification system for programmable networks. To mitigate the path explosion issue, we develop multiple pruning strategies that eliminate irrelevant execution paths while preserving verification integrity by precisely identifying the execution paths related to the verification purpose. To address the complex constraint-solving problem, we introduce an execution-result reuse solution that avoids redundant computation of the same constraints. To apply these solutions intelligently, a matching algorithm is implemented to automatically select appropriate solutions based on the characteristics of the verification requirement. Moreover, Language Aided Verification (LAV), an assertion language, is designed to express verification intentions in a concise form. Experimental results on diverse open-source programs of varying scales demonstrate NetVerifier's improvement in scalability and effectiveness in identifying potential network errors. In the best scenario, compared with ASSERT-P4, NetVerifier reduced the number of execution paths, the verification time, and the memory occupation of the verification process by 99.92%, 94.76%, and 65.19%, respectively.
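The execution-result reuse idea can be illustrated with a toy sketch: cache the result of solving a canonicalized constraint set so that identical constraints are never solved twice. The "solver" below is a stand-in (a one-variable bound checker), not NetVerifier's actual constraint backend; all names and the constraint encoding are assumptions for illustration.

```python
import functools

@functools.lru_cache(maxsize=None)
def is_satisfiable(constraints):
    """Toy 'solver' for a conjunction of bounds ('>=', c) / ('<=', c) on a
    single integer variable. The lru_cache plays the role of an
    execution-result reuse table: identical constraint tuples hit the
    cache instead of being re-solved (NetVerifier would call a real
    constraint solver here)."""
    lo, hi = float('-inf'), float('inf')
    for op, c in constraints:
        if op == '>=':
            lo = max(lo, c)
        else:  # '<='
            hi = min(hi, c)
    return lo <= hi

path1 = (('>=', 0), ('<=', 5))   # feasible path condition
path2 = (('>=', 6), ('<=', 5))   # infeasible path condition
```

Because path conditions in symbolic execution share long common prefixes, even this naive memoization avoids a large fraction of repeated solver calls.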
Hydraulic asphalt concrete (HAC) has been increasingly employed as an impervious structure in hydraulic and hydropower engineering. However, asphalt mortar, usually seen as the matrix of the HAC composite, is particularly prone to damage under combined stress and seepage interactions, and mesoscale investigations of the damage-seepage coupling behavior of HAC under complex stress states remain limited. This research develops a three-dimensional numerical mesoscale model composed of asphalt mortar and polyhedral aggregate to investigate the stress-damage-seepage coupling behavior in HAC. In this model, asphalt mortar follows a viscoelastic continuum damage law and aggregate obeys Mazars' elastic-brittle damage law; simultaneously, the effective permeability coefficient of asphalt mortar is assumed to follow an exponential function of damage. The predicted deviatoric stress-strain and hydraulic gradient-seepage curves are both in good agreement with the reported experimental results, which shows that the proposed model is valid and reasonable. The simulated results indicate that damaged asphalt mortar can induce localized areas of high permeability, which in turn affect the overall impervious performance of HAC.
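The assumed damage-permeability coupling can be written down directly. The abstract states an exponential law; the coefficient names k0 and beta below are illustrative placeholders, not the paper's calibrated values.

```python
import math

def effective_permeability(k0, damage, beta=5.0):
    """Damage-dependent permeability of asphalt mortar, using the assumed
    form k(D) = k0 * exp(beta * D), where D in [0, 1] is the scalar
    damage variable. Undamaged material (D = 0) recovers the intrinsic
    permeability k0; growing damage raises permeability exponentially."""
    return k0 * math.exp(beta * damage)
```

With this form, a localized damage band of D ≈ 0.8 is roughly e^4 ≈ 55 times more permeable than the intact matrix, which is how damage zones become preferential seepage channels.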
Although traditional gamma-gamma density (GGD) logging technology is widely utilized, its potential environmental risks have prompted the development of more environmentally friendly neutron-gamma density (NGD) logging technology. However, NGD measurements are influenced by both neutron and gamma radiation. In the logging environment, variations in formation composition imply different elemental compositions, which affect the neutron-gamma reaction cross-sections and gamma generation. Compared to traditional gamma sources such as Cs-137, these changes significantly affect the generation and transport of neutron-induced inelastic gamma rays and hinder accurate measurements. To address this, a novel method is proposed that incorporates a mass attenuation coefficient function to account for the effects of various lithologies and pore contents on gamma-ray attenuation, thereby achieving more accurate density measurements by clarifying the transport processes of inelastic gamma rays with varying energies and spatial distributions in varied logging environments. The proposed method avoids the complex correction of neutron transport and is verified through Monte Carlo simulations for its applicability across various lithologies and pore contents, demonstrating absolute density errors of less than 0.02 g/cm³ in clean formations and indicating good accuracy. This study clarifies the NGD mechanism and provides theoretical guidance for the application of NGD logging methods. Further studies will be conducted on extreme environmental conditions and tool calibration.
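The role of the mass attenuation coefficient in density logging can be sketched with a single-energy Beer-Lambert inversion. This is a textbook simplification to show why the coefficient matters, not the paper's full NGD algorithm, and the parameter values are assumptions.

```python
import math

def apparent_density(count_ratio, mass_att_coeff, spacing_cm):
    """Invert the Beer-Lambert law I/I0 = exp(-(mu/rho) * rho * x) for the
    bulk density rho, given the detected-to-source count ratio I/I0, the
    mass attenuation coefficient mu/rho (cm^2/g), and the source-detector
    spacing x (cm). If mu/rho is assumed constant while the lithology
    actually changes it, the inferred density is biased, which is why the
    paper models mu/rho as a function of lithology and pore content."""
    return -math.log(count_ratio) / (mass_att_coeff * spacing_cm)
```

A round trip (choose rho, compute the count ratio, invert) recovers the density exactly; plugging in the wrong mu/rho demonstrates the lithology-dependent bias the paper corrects for.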
Reconfigurable array architectures have become an important hardware platform for edge-side deployment of convolutional neural networks due to their high parallelism and flexible programmability. However, traditional multi-branch convolutional networks suffer from computational redundancy, high memory access overhead, and inefficient branch fusion. Therefore, this paper proposes an adaptive multi-branch convolutional module (AMBC) that integrates software-hardware co-optimization. During training, learnable fusion coefficients are introduced to enable adaptive fusion of multi-scale features, while in the inference phase, the multiple branches and their normalization parameters are merged with the fusion coefficients into a single 3×3 convolutional kernel through operator fusion. On the SIREA-288 reconfigurable platform, compared with unoptimized multi-branch networks, the proposed AMBC reduces external memory accesses by 47.91% and inference latency by 47.20%, achieving a 1.90× speedup. This approach maximizes the utilization of the reconfigurable logic while minimizing both reconfiguration and data-movement overheads in edge inference.
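The operator-fusion step can be sketched as structural re-parameterization: parallel branches are folded into one 3×3 kernel at inference time. The sketch below assumes a 3×3 branch plus a 1×1 branch with scalar fusion coefficients; batch-norm folding and the module's exact branch set are omitted, so this illustrates the mechanism rather than reproducing AMBC.

```python
import numpy as np

def fuse_branches(k3, k1, alpha3, alpha1):
    """Fold a parallel 3x3 branch and 1x1 branch, weighted by learnable
    fusion coefficients, into one 3x3 kernel: zero-pad the 1x1 kernel to
    3x3 (center tap only) and form the weighted sum. By linearity of
    convolution, applying the fused kernel to an input equals fusing the
    two branch outputs, so inference needs a single convolution."""
    k1_padded = np.zeros_like(k3)           # (out_ch, in_ch, 3, 3)
    k1_padded[:, :, 1, 1] = k1[:, :, 0, 0]  # 1x1 weight at the center tap
    return alpha3 * k3 + alpha1 * k1_padded
```

This is why the reduction in memory traffic is free at inference: the multi-branch structure exists only during training.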
The ray tracing method is used to study the propagation of collimated beams in a liquid-core cylindrical lens (LCL), which has the dual functions of diffusion cell and image formation. The diffusion images on the focal plane of the LCL are simulated by establishing and solving both linear and nonlinear ray equations; the calculated results indicate that the complex imaging behavior of the LCL in inhomogeneous media can be treated by the law of ray propagation in homogeneous media under the condition of a small refractive index gradient in the diffusion solution. Guided by these calculation conditions, the diffusion process of a triethylene glycol aqueous solution is experimentally studied at room temperature using the LCL. The spatial and temporal concentration profile Ce(z,t) of the diffusion solution is obtained by analyzing the diffusion image appearing on the focal plane of the LCL. Then, the concentration-dependent diffusion coefficient is assumed to be a polynomial D(C) = D0×(1+α1C+α2C²+α3C³+…). The finite difference method is used to solve the Fick diffusion equation and numerically calculate the concentration profiles Cn(z,t). The D(C) of the triethylene glycol aqueous solution is obtained by comparing Cn(z,t) with Ce(z,t). Finally, the obtained polynomial D(C) is used to calculate the refractive index profiles n(z,t) of the diffusion solution in the LCL. Based on the ray propagation law in inhomogeneous media and the calculated n(z,t), the ray tracing method is used again to simulate the dynamic images of the whole experimental diffusion process to verify the correctness of the calculated D(C). The method presented in this work opens up a new way of both measuring and verifying concentration-dependent liquid diffusion coefficients.
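The finite-difference step for Fick's second law with a concentration-dependent coefficient can be sketched conservatively: evaluate the Fickian flux at cell interfaces and difference it. This is a minimal scheme with assumed no-flux boundaries, not necessarily the discretization or boundary conditions used in the paper.

```python
import numpy as np

def ftcs_step(C, D_of_C, dz, dt):
    """One explicit finite-difference step for
        dC/dt = d/dz( D(C) * dC/dz ),
    Fick's second law with concentration-dependent D. Fluxes are computed
    at cell interfaces (interface D is the average of the two cells) and
    set to zero at the domain ends, so total mass is conserved exactly."""
    D = D_of_C(C)
    D_face = 0.5 * (D[:-1] + D[1:])              # interface diffusivities
    flux = -D_face * np.diff(C) / dz             # Fickian flux at interfaces
    flux = np.concatenate(([0.0], flux, [0.0]))  # no-flux boundaries
    return C - dt / dz * np.diff(flux)
```

Iterating this step with D(C) given by the fitted polynomial produces the Cn(z,t) profiles that are matched against the measured Ce(z,t); stability requires dt ≤ dz²/(2·max D).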
Improving the accuracy of anthropogenic volatile organic compound (VOC) emission inventories is crucial for reducing atmospheric pollution and formulating air pollution control policy. In this study, an anthropogenic speciated VOCs emission inventory was established for Central China, represented by Henan Province, at a 3 km×3 km spatial resolution based on the emission factor method. The 2019 VOCs emission in Henan Province was 1003.5 Gg; industrial process sources (33.7%) were the largest emission source, Zhengzhou (17.9%) was the city with the highest emission, and April and August were the months with the most emissions. High VOCs emission regions were concentrated in downtown areas and industrial parks. Alkanes and aromatic hydrocarbons were the main contributing VOCs groups. The species composition, source contribution, and spatial distribution were verified and evaluated through the tracer ratio method (TR), Positive Matrix Factorization (PMF), and remote sensing inversion (RSI). Results show that the emission estimates from the emission inventory (EI) (15.7 Gg) and the TR method (13.6 Gg) are similar, as are the source contributions from EI and PMF. The spatial distribution of primary HCHO emission based on RSI is basically consistent with that of HCHO emission based on EI, with an R-value of 0.73. The verification results show that the VOCs emission inventory and speciated emission inventory established in this study are relatively reliable.
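The emission factor method has a simple core: each source's emission is its activity level times an emission factor, E_i = A_i × EF_i. The sketch below uses made-up source names and values purely to show the bookkeeping; the study's actual activity data and factors are not reproduced here.

```python
def emission_inventory(activity_levels, emission_factors):
    """Basic emission-factor method: E_i = A_i * EF_i per source category.
    activity_levels and emission_factors are dicts keyed by source name
    (illustrative units: activity in Mt of product, EF in Gg VOC per Mt)."""
    return {src: activity_levels[src] * emission_factors[src]
            for src in activity_levels}

activities = {'industrial_process': 2.0, 'road_transport': 1.5}
factors = {'industrial_process': 3.0, 'road_transport': 2.0}
inventory = emission_inventory(activities, factors)
```

Speciation then distributes each category total over individual compounds with source-specific species profiles, which is what makes the inventory "speciated".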
The morphological distribution of an absorbent in composites is as important as the absorbent itself for the overall electromagnetic properties, but it is often ignored. Herein, a comprehensive scheme including electromagnetic component regulation, a layered arrangement structure, and a gradient concentration distribution was used to optimize impedance matching and enhance electromagnetic loss. On the microscale, the incorporation of magnetic Ni nanoparticles into MXene nanosheets (Ni@MXene) provides suitable intrinsic permittivity and permeability. On the macroscale, the layered arrangement of Ni@MXene increases the effective interaction area with electromagnetic waves, inducing multiple reflection/scattering effects. On this basis, according to the analysis of the absorption, reflection, and transmission (A-R-T) power coefficients of layered composites, a gradient concentration distribution was constructed to realize impedance matching at the low-concentration surface layer, electromagnetic loss at the middle-concentration interlayer, and microwave reflection at the high-concentration bottom layer. Consequently, the layered gradient composite (LG5-10-15) achieves complete absorption coverage of the X-band at thicknesses of 2.00-2.20 mm, with an RL_min of -68.67 dB at 9.85 GHz at 2.05 mm, which is 199.0%, 12.6%, and 50.6% higher than the non-layered, layered, and layered descending-gradient composites, respectively. Therefore, this work confirms the importance of the layered gradient structure in improving absorption performance and broadens the design space of high-performance microwave absorption materials.
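The A-R-T analysis rests on a standard power balance: from measured scattering-parameter magnitudes, the reflected and transmitted power fractions are R = |S11|² and T = |S21|², and whatever remains is absorbed. A minimal sketch:

```python
def art_coefficients(s11_mag, s21_mag):
    """A-R-T power decomposition from scattering-parameter magnitudes:
    R = |S11|^2 (reflected fraction), T = |S21|^2 (transmitted fraction),
    A = 1 - R - T (absorbed fraction). This is the standard identity
    behind the paper's layered-composite analysis."""
    R = s11_mag ** 2
    T = s21_mag ** 2
    return 1.0 - R - T, R, T
```

Computing A, R, and T per layer is what motivates the gradient design: a low-concentration surface layer keeps R small (impedance matching) while deeper layers raise A and finally reflect the residual wave back through the lossy stack.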
Kinship verification is a key biometric recognition task that determines biological relationships based on physical features. Traditional methods predominantly use facial recognition, leveraging established techniques and extensive datasets. However, recent research has highlighted ear recognition as a promising alternative, offering advantages in robustness against variations in facial expressions, aging, and occlusions. Despite its potential, a significant challenge in ear-based kinship verification is the lack of large-scale datasets necessary for training deep learning models effectively. To address this challenge, we introduce the EarKinshipVN dataset, a novel and extensive collection of ear images designed specifically for kinship verification. This dataset consists of 4876 high-resolution color images from 157 multiracial families across different regions, forming 73,220 kinship pairs. EarKinshipVN, a diverse and large-scale dataset, advances kinship verification research using ear features. Furthermore, we propose the Mixer Attention Inception (MAI) model, an improved architecture that enhances feature extraction and classification accuracy. The MAI model fuses Inception-v4 and MLP-Mixer, integrating four attention mechanisms to enhance spatial and channel-wise feature representation. Experimental results demonstrate that MAI significantly outperforms traditional backbone architectures. It achieves an accuracy of 98.71%, surpassing Vision Transformer models while reducing computational complexity by up to 95% in parameter usage. These findings suggest that ear-based kinship verification, combined with an optimized deep learning model and a comprehensive dataset, holds significant promise for biometric applications.
In the foundry industries, process design has traditionally relied on manuals and complex theoretical calculations. With the advent of 3D design in casting, computer-aided design (CAD) has been applied to integrate the features of the casting process, thereby expanding the scope of design options. These technologies use parametric model design techniques for rapid component creation and use databases to access standard process parameters and design specifications. However, 3D models are currently still created by inputting or calling parameters, which requires numerous verifications through calculation to ensure design rationality. This process may be significantly slowed down by repetitive modifications and extended design time. As a result, there is an increasingly urgent demand for a real-time verification mechanism to address this issue. Therefore, this study proposes a novel closed-loop model and software development method that integrates contextual design with real-time verification, dynamically verifying relevant rules for designing 3D casting components. Additionally, the study analyzes three typical closed-loop scenarios of agile design in an independently developed intelligent casting process system. It is believed that foundry industries can benefit from reduced design cycles, yielding more competitive products.
Metal Additive Manufacturing (MAM) technology has become an important means of rapid prototyping and precision manufacturing of special high-dynamic heterogeneous complex parts. In response to micromechanical defects such as porosity, significant deformation, surface cracks, and the challenging control of surface morphology encountered during the selective laser melting (SLM) additive manufacturing (AM) of specialized Micro-Electromechanical System (MEMS) components, multiparameter optimization and simulation-based control of the micro powder melt pool and the macro-scale mechanical properties of the components are conducted. The optimal parameters obtained through high-precision preparation and machining of components and static/high-dynamic verification are: laser power of 110 W, laser speed of 600 mm/s, laser diameter of 75 μm, and scanning spacing of 50 μm. The density of components fabricated with these parameters can reach 99.15%, the surface hardness can reach 51.9 HRA, the yield strength can reach 550 MPa, the maximum machining error of the components is 4.73%, and the average surface roughness is 0.45 μm. Through dynamic hammering and high-dynamic firing verification, the SLM components meet the requirements for overload resistance. The results prove that MAM technology can provide a new means for processing MEMS components applied in high-dynamic environments. The parameters obtained in the conclusion can provide a design basis for the additive preparation of MEMS components.
As the first gold mine discovered under the sea in China and the only coastal gold mine currently mined there, Sanshandao Gold Mine faces unique challenges. The mine's safety is under continual threat from its faulted structure coupled with the overlying water. As mining proceeds deeper, the risk of water inrush increases. The mine's maximum water yield reaches 15,000 m³/day, which is attributable to water channels present in fault zones. Predominantly composed of soil-rock mixtures (SRM), these fault zones' seepage characteristics significantly impact water inrush risk. Consequently, investigating the seepage characteristics of SRM is of paramount importance. However, the existing literature mostly concentrates on a single stress state. Therefore, this study examined the characteristics of the permeability coefficient under three distinct stress states: osmotic, osmotic-uniaxial, and osmotic-triaxial pressure. The SRM samples utilized in this study were extracted from in situ fault zones and then reshaped in the laboratory. In addition, the micromechanical properties of the SRM samples were analyzed using computed tomography scanning. The findings reveal that the permeability coefficient is highest under osmotic pressure and lowest under osmotic-triaxial pressure. The sensitivity coefficient shows a higher value when the rock block percentage ranges between 30% and 40%, but it falls below 1.0 when this percentage exceeds 50% under no confining pressure. Notably, rock block percentages of 40% and 60% represent the two peak points of the sensitivity coefficient under osmotic-triaxial pressure. However, SRM samples with a 40% rock block percentage consistently show the lowest permeability coefficient under all stress states. This study establishes that a power function can model the relationship between the permeability coefficient and osmotic pressure, while its relationship with axial pressure can be described using an exponential function. These insights are invaluable for developing water inrush prevention and control strategies in mining environments.
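The power-law relationship between permeability and osmotic pressure can be calibrated by linear least squares in log-log space, a standard trick since log k = log a + b·log P is linear. This is a generic fitting sketch with assumed variable names, not the study's regression procedure.

```python
import math

def fit_power_law(P, k):
    """Least-squares fit of k = a * P^b via linear regression on
    (log P, log k): the slope gives the exponent b and the intercept
    gives log a. P and k are parallel lists of positive measurements
    (e.g. osmotic pressures and permeability coefficients)."""
    lx = [math.log(p) for p in P]
    ly = [math.log(v) for v in k]
    n = len(P)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) \
        / sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b
```

The exponential law against axial pressure, k = c·exp(d·σ), can be fitted the same way by taking logs of k only (semi-log regression).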
To solve variable-coefficient ordinary differential equations on a bounded domain, the Lagrange interpolation method is used to approximate the exact solution of the equation, and the error between the numerical solution and the exact solution is obtained. This error is then compared with the error produced by the difference method, from which it is concluded that the Lagrange interpolation method is more effective for solving variable-coefficient ordinary differential equations.
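The comparison can be reproduced on a toy problem: build the Lagrange interpolant of the exact solution of a variable-coefficient ODE at Chebyshev nodes and compare its maximum error with an explicit-Euler finite-difference solution using the same number of points. The test equation y' = -2xy, y(0) = 1 (exact solution y = e^(-x²)) and the node count are assumptions for illustration, not the paper's test case.

```python
import math

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

exact = lambda x: math.exp(-x * x)   # solution of y' = -2xy, y(0) = 1
n = 8

# Lagrange interpolation at n Chebyshev nodes mapped to [0, 1].
cheb = [0.5 + 0.5 * math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]
yche = [exact(x) for x in cheb]
interp_err = max(abs(lagrange_eval(cheb, yche, k / 200) - exact(k / 200))
                 for k in range(201))

# Explicit Euler (a simple difference method) with n steps on the same ODE.
h, y, euler_err = 1.0 / n, 1.0, 0.0
for step in range(n):
    x = step * h
    y += h * (-2.0 * x * y)
    euler_err = max(euler_err, abs(y - exact(x + h)))
```

For this smooth solution the interpolation error is orders of magnitude below the first-order difference error at equal point counts, which is the effect the abstract reports.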
BACKGROUND At present, the conventional methods for diagnosing cerebral edema in clinical practice are computed tomography (CT) and magnetic resonance imaging (MRI), which can evaluate the location and degree of peripheral cerebral edema but cannot realize quantification. When patients have symptoms of diffuse cerebral edema or high cranial pressure, CT or MRI often suggests that cerebral edema is lagging and cannot be dynamically monitored in real time. Intracranial pressure monitoring is the gold standard, but it is an invasive operation with high cost and complications. For clinical purposes, the ideal cerebral edema monitoring should be non-invasive, real-time, bedside, and continuous dynamic monitoring. The disturbance coefficient (DC) was used in this study to dynamically monitor the occurrence, development, and evolution of cerebral edema in patients with cerebral hemorrhage in real time, with follow-up head CT or MRI to evaluate the development of the disease and guide further treatment, so as to improve the prognosis of patients with cerebral hemorrhage. AIM To offer a promising new approach for non-invasive adjuvant therapy in cerebral edema treatment. METHODS A total of 160 patients with hypertensive cerebral hemorrhage admitted to the Department of Neurosurgery, Second Affiliated Hospital of Xi'an Medical University from September 2018 to September 2019 were recruited. The patients were randomly divided into a control group (n=80) and an experimental group (n=80). Patients in the control group received conventional empirical treatment, while those in the experimental group were treated with mannitol dehydration under the guidance of DC. Subsequently, we compared the two groups with regard to the total dosage of mannitol, the total course of treatment, the incidence of complications, and prognosis. RESULTS The mean daily consumption of mannitol, the total course of treatment, and the mean hospitalization days were 362.7±117.7 mL, 14.8±5.2 days, and 29.4±7.9 days in the control group and 283.1±93.6 mL, 11.8±4.2 days, and 23.9±8.3 days in the experimental group (P<0.05). In the control group, there were 20 patients with pulmonary infection (25%), 30 with electrolyte disturbance (37.5%), 20 with renal impairment (25%), and 16 with stress ulcer (20%). In the experimental group, pulmonary infection occurred in 18 patients (22.5%), electrolyte disturbance in 6 (7.5%), renal impairment in 2 (2.5%), and stress ulcers in 15 (18.8%) (P<0.05). According to the Glasgow coma scale score 6 months after discharge, the prognosis of the control group was good in 20 patients (25%), fair in 26 (32.5%), and poor in 34 (42.5%); the prognosis of the experimental group was good in 32 (40%), fair in 36 (45%), and poor in 12 (15%) (P<0.05). CONCLUSION Using DC for non-invasive dynamic monitoring of cerebral edema demonstrates considerable clinical potential. It reduces mannitol dosage, treatment duration, complication rates, and hospital stays, ultimately lowering hospitalization costs. Additionally, it improves overall patient prognosis, offering a promising new approach for non-invasive adjuvant therapy in cerebral edema treatment.
The scale effect on the shear strength of rock joints is well documented. However, whether scale effects are negative, positive, or even exist at all remains controversial. Joint roughness significantly influences the shear strength of rock joints. Compared to shear tests, using the joint roughness coefficient (JRC) and its roughness parameters offers a more convenient method for describing the scale effect on shear strength. However, it is crucial to understand that the scale effect mechanisms of JRC are distinct from those of shear strength. Therefore, this paper aims to clarify these distinct mechanisms. By digitally extracting roughness parameters from granite samples, it is found that the scale effect of roughness parameters mainly comes from the sampling methods and the geometric characteristics of the parameters. Furthermore, a full data sampling method considering heterogeneity is proposed to obtain more representative roughness parameters. To reveal the scale effect mechanisms of shear strength, Gaussian filtering is first used to separate the waviness and unevenness components of roughness, facilitating a deeper understanding of the geometric characteristics of roughness. It is suggested that the wavelength of the waviness component can reflect the scale effect on shear strength. Second, numerical simulations of ideal artificial joint models are conducted to validate that the wavelength of the waviness component serves as the dividing point between positive and negative scale effects. The mechanical mechanisms of positive and negative scale effects are also interpreted. Finally, these mechanisms successfully elucidate the occurrence patterns of the scale effect on natural joint profiles.
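The waviness/unevenness separation can be sketched as one-dimensional Gaussian low-pass filtering of the measured profile: the filtered signal is the waviness, and the residual is the unevenness. The kernel below is a simple normalized Gaussian weighting; the exact cutoff convention used in the study is an assumption.

```python
import numpy as np

def separate_roughness(profile, dx, cutoff):
    """Split a joint surface profile (heights sampled every dx) into
    waviness (low-pass Gaussian filter with cutoff wavelength `cutoff`)
    and unevenness (the residual), mirroring the paper's Gaussian
    filtering step. Kernel weights follow exp(-pi * (x / cutoff)^2),
    truncated at +/- 3 cutoff and normalized to unit sum."""
    x = np.arange(-3 * cutoff, 3 * cutoff + dx, dx)
    g = np.exp(-np.pi * (x / cutoff) ** 2)
    g /= g.sum()
    waviness = np.convolve(profile, g, mode='same')
    return waviness, profile - waviness
```

Wavelengths well above the cutoff pass through almost unchanged into the waviness component, while short-wavelength unevenness is almost entirely rejected, which is exactly the decomposition the scale-effect argument needs.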
BACKGROUND Various stone factors can affect the net results of shock wave lithotripsy (SWL). Recently, a new factor called the variation coefficient of stone density (VCSD) has been considered to have an impact on stone-free rates. AIM To assess the role of VCSD in determining the success of SWL in urinary calculi. METHODS A chart review was utilized for the collection of data variables. The patients underwent SWL using an electromagnetic lithotripter. Mean stone density (MSD), stone heterogeneity index (SHI), and VCSD were calculated by generating regions of interest on computed tomography (CT) images. The role of these factors was determined by applying the relevant statistical tests for continuous and categorical variables, and a P value of <0.05 was considered statistically significant. RESULTS A total of 407 patients were included in the analysis. The mean age of the subjects was 38.89±14.61 years. In total, 165 of the 407 patients could not achieve stone-free status. The successful group had a significantly lower stone volume than the unsuccessful group (P<0.0001). Skin-to-stone distance did not differ between the two groups (P=0.47). MSD was significantly lower in the successful group (P<0.0001). SHI and VCSD were both significantly higher in the successful group (P<0.0001). CONCLUSION VCSD, a useful CT-based parameter, can be utilized to gauge stone fragility and hence predict SWL outcomes.
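The three CT metrics are straightforward region-of-interest statistics. MSD is conventionally the mean Hounsfield value and SHI its standard deviation; VCSD as SHI/MSD (a coefficient of variation) is assumed here from its name, since the abstract does not spell out the formula.

```python
import statistics

def stone_density_metrics(hu_values):
    """CT texture metrics from a stone region of interest, given its
    Hounsfield-unit samples: MSD = mean stone density, SHI = sample
    standard deviation (stone heterogeneity index), and VCSD = SHI / MSD,
    i.e. a coefficient of variation of stone density (assumed definition)."""
    msd = statistics.mean(hu_values)
    shi = statistics.stdev(hu_values)
    return msd, shi, shi / msd
```

Being a ratio, VCSD is dimensionless and comparable across stones of very different mean densities, which is arguably why it can capture "fragility" independently of MSD.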
The joint roughness coefficient (JRC) is the most commonly used parameter for quantifying the surface roughness of rock discontinuities in practice. The system composed of multiple roughness statistical parameters used to measure JRC is a nonlinear system with a large amount of overlapping information. In this paper, a dataset of eight roughness statistical parameters covering 112 digital joints is established. Then, the principal component analysis method is introduced to extract the significant information, which solves the information overlap problem of roughness characterization. Based on the two principal components of the extracted features, the white shark optimizer algorithm is introduced to optimize an extreme gradient boosting model, and a new machine learning (ML) prediction model is established. The prediction accuracy of the new model and of 17 other models is measured using statistical metrics. The results show that the predictions of the new model are more consistent with the real JRC values, with higher recognition accuracy and generalization ability.
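The de-correlation step can be sketched with generic PCA: standardize the 112×8 roughness-parameter matrix and project it onto the top two principal components, which then feed the downstream regressor. This is a textbook PCA sketch, not the paper's exact pipeline (the white-shark-optimized boosting stage is omitted).

```python
import numpy as np

def top_principal_components(X, k=2):
    """Standardize the samples-by-parameters matrix X, eigendecompose its
    covariance, and project onto the k leading eigenvectors. Returns the
    projected scores and the fraction of total variance they explain;
    overlapping (correlated) parameters collapse into few components."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    order = np.argsort(vals)[::-1]
    scores = Xs @ vecs[:, order[:k]]
    explained = vals[order[:k]].sum() / vals.sum()
    return scores, explained
```

When eight parameters largely restate the same two underlying roughness traits, two components retain nearly all the variance, which is the information-overlap argument the paper makes.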
Let f be a primitive holomorphic cusp form with even integral weight k≥2 for the full modular group Γ=SL(2,Z), and let λ_{sym^j f}(n) be the n-th coefficient of the Dirichlet series of the j-th symmetric L-function L(s, sym^j f) attached to f. In this paper, we study the mean value distribution over a specific sparse sequence of positive integers of the sum ∑_{(a,b,c,d)∈Z^4, a^2+b^2+c^2+d^2≤x} λ^i_{sym^j f}(a^2+b^2+c^2+d^2), where j≥2 is a given positive integer, i=2,3,4, and α is sufficiently large. We utilize Python programming to design algorithms for the higher-power conditions, combining Perron's formula, recent results on representations of natural integers as sums of squares, and analytic properties together with subconvexity and convexity bounds of automorphic L-functions, to ensure the accuracy and verifiability of the asymptotic formulas. The conclusion we obtain improves previous results and extends them to a more general setting.
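The counting function underlying a sum over a²+b²+c²+d² ≤ x is classical: Jacobi's four-square theorem gives the number of ordered representations of n as a sum of four integer squares in closed form, and it is easy to check numerically. This is a small illustration of the "representations as sums of squares" input, not the paper's asymptotic analysis.

```python
def r4(n):
    """Jacobi's four-square theorem: the number of representations of n
    as an ordered sum of four integer squares (signs and order counted)
    equals 8 times the sum of the divisors of n not divisible by 4."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)
```

Grouping the terms of the quadratic-form sum by the value n = a²+b²+c²+d² rewrites it as ∑_{n≤x} r4(n)·λ^i_{sym^j f}(n), which is the form Perron's formula is applied to.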
Dear Editor, This letter presents an improved repetitive controller (IRC) that uses a complex-coefficient filter to enhance the tracking performance of a system for periodic signals. Compared with the low-pass filter used in the conventional repetitive controller (CRC), the complex-coefficient filter causes less change in the phase and amplitude of a signal at the frequencies of the periodic signal, especially at the fundamental frequency, when the two filters have the same cutoff frequency.
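The advantage at the fundamental frequency can be seen from a first-order sketch: a complex-coefficient filter is a real low-pass shifted so that its passband is centered at the fundamental ω0, where its gain is exactly 1 and its phase exactly 0. The standard first-order forms below are assumed for illustration; the letter's exact filter may differ.

```python
def complex_coeff_filter(omega, omega0, omega_c):
    """Frequency response of a first-order complex-coefficient filter,
    H(jw) = wc / (j*(w - w0) + wc): a real low-pass frequency-shifted to
    be centered at the fundamental w0, so H(j*w0) = 1 exactly (unit gain,
    zero phase at the signal frequency of interest)."""
    return omega_c / (1j * (omega - omega0) + omega_c)

def real_lowpass(omega, omega_c):
    """Conventional first-order low-pass, H(jw) = wc / (jw + wc), which
    attenuates and phase-shifts a signal at w0 >> wc."""
    return omega_c / (1j * omega + omega_c)
```

With the same cutoff ωc, the conventional low-pass at ω0 = 10ωc has gain near 0.1 and a phase lag near 90°, while the complex-coefficient filter is transparent there, which is why it degrades the repetitive controller's periodic-signal tracking far less.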
Edit distance is an algorithm that measures the difference between two strings, usually represented as the minimum number of editing operations required to transform one string into another. The edit distance algorithm involves complex dependencies and constraints, making state management and verification tedious. This paper proposes a derivation and verification method that avoids directly handling dependencies and constraints by proving the equivalence between the edit distance algorithm and an existing functional model. First, the derivation process of the edit distance algorithm mainly includes 1) describing the problem specification, 2) inductively deducing recursive relations, 3) formally constructing loop invariants using the optimization techniques (memoization and an optimal decision table) and properties (optimal substructure and overlapping subproblems) of the edit distance algorithm, and 4) generating Minimalistic Imperative Programming Language (IMP) code based on the recursive relations. Second, the problem specification, loop invariants, and generated IMP code are input into a Verification Condition Generator (VCG), which automatically generates five verification conditions, and the correctness of the edit distance algorithm is then verified in the Isabelle/HOL theorem prover. The method utilizes formal techniques and a theorem prover to complete the derivation and verification of the edit distance algorithm, and it can be applied to linear and nonlinear dynamic programming problems.
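The recursive relations and their tabulated (memoized) form are the standard Levenshtein dynamic program: d[i][j] is the minimum number of edits turning the first i characters of s into the first j characters of t, with each cell depending only on its three neighbors. A direct sketch of the imperative algorithm the paper derives and verifies:

```python
def edit_distance(s, t):
    """Bottom-up dynamic program for Levenshtein distance.
    Invariant after filling row i: d[i][j] is the minimum number of
    insert/delete/substitute operations mapping s[:i] to t[:j]."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                     # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete s[i-1]
                          d[i][j - 1] + 1,        # insert t[j-1]
                          d[i - 1][j - 1] + cost) # substitute or match
    return d[m][n]
```

The invariant stated in the docstring is essentially the loop invariant that the paper's VCG turns into verification conditions for Isabelle/HOL.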
The study of the morphometric parameters of the three most abundant species in the lower course of the Kouilou River (Chrysichthys auratus, Liza falcipinnis and Pellonula vorax) was carried out. The standard length of...The study of the morphometric parameters of the three most abundant species in the lower course of the Kouilou River (Chrysichthys auratus, Liza falcipinnis and Pellonula vorax) was carried out. The standard length of Chrysichthys auratus varies between 43.57 and 210 mm, for an average of 96.70 ± 28.63 mm;the weight varies between 2.92 and 140.83 mg, an average of 73.03 ± 21.62 mg. The condition coefficient is equal to 4.42 ± 1.52. Liza falcipinnis has a standard length which varies between 59.9 mm and 158.08 mm for an average of 88.15 ± 29.74 mm;its weight varies between 4.77 and 76.21 mg, an average of 18.61 ± 11.82 mg. The condition coefficient is equal to 2.47 ± 1.57. Pellonula vorax has a standard length which varies between 60.33 mm and 117.72 mm;for an average of 80.48 ± 17.75 mm;the weight varies between 3.61 and 25.17 mg, an average of 9.03 ± 3.61 mg. The condition coefficient is equal to 2.17 ± 0.57. These three species have a minor allometric growth.展开更多
Funding: Supported by the National Key Research and Development Program of China under Grant 2023YFB2903902, and in part by the Science and Technology Innovation Leading Talents Subsidy Project of Central Plains under Grant 244200510038.
Abstract: While programmable networks simplify network management and increase network flexibility through custom packet behavior, security incidents caused by human logic errors seriously threaten their safe operation; robust verification methods are therefore required to ensure their correctness. As one of the formal methods, symbolic execution offers a viable approach for verifying programmable networks by systematically exploring all possible paths within a program. However, its application in this field encounters scalability issues due to path explosion and complex constraint solving. Therefore, in this paper, we propose NetVerifier, a scalable verification system for programmable networks. To mitigate the path-explosion issue, we develop multiple pruning strategies that eliminate irrelevant execution paths while preserving verification integrity by precisely identifying the execution paths related to the verification purpose. To address the complex constraint-solving problem, we introduce an execution-results reuse solution that avoids redundant computation of the same constraints. To apply these solutions intelligently, a matching algorithm automatically selects the appropriate solution based on the characteristics of the verification requirement. Moreover, Language Aided Verification (LAV), an assertion language, is designed to express verification intentions in a concise form. Experimental results on diverse open-source programs of varying scales demonstrate NetVerifier's improvement in scalability and its effectiveness in identifying potential network errors. In the best scenario, compared with ASSERT-P4, NetVerifier reduced the number of execution paths, verification time, and memory occupation of the verification process by 99.92%, 94.76%, and 65.19%, respectively.
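The execution-results reuse idea can be illustrated with a toy solver: identical constraint sets produced by different execution paths are solved once, and the cached result is reused. This is a minimal sketch, not NetVerifier's implementation; the brute-force `solve` and the byte-variable constraint encoding are assumptions for illustration only.

```python
from functools import lru_cache

def _holds(constraint, x):
    # A constraint is an (operator, constant) pair over a single byte variable.
    op, k = constraint
    return x < k if op == "<" else x > k if op == ">" else x == k

@lru_cache(maxsize=None)
def solve(constraints):
    """Return the smallest witness in [0, 256) satisfying all constraints, or None.
    lru_cache plays the role of the execution-results reuse table: a second
    path carrying the same constraint set never reaches the search loop."""
    for x in range(256):
        if all(_holds(c, x) for c in constraints):
            return x
    return None

# Two symbolic paths that accumulated the same branch conditions:
path_a = frozenset({(">", 10), ("<", 100)})
path_b = frozenset({("<", 100), (">", 10)})  # same set, different order
print(solve(path_a), solve(path_b), solve.cache_info().hits)  # 11 11 1
```

A real system would key the cache on canonicalized constraint sets and delegate to an SMT solver; the caching structure stays the same.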
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2022YFC3005603-01) and the Natural Science Foundation of Anhui Province (Grant No. 2308085US02).
Abstract: Hydraulic asphalt concrete (HAC) has been increasingly employed as an impervious structure in hydraulic and hydropower engineering. However, asphalt mortar, usually seen as the matrix of the HAC composite, is particularly prone to damage under combined stress and seepage interactions, and mesoscale investigations of the damage-seepage coupling behavior of HAC under complex stress states remain limited. This research develops a three-dimensional numerical mesoscale model composed of asphalt mortar and polyhedral aggregate to investigate the stress-damage-seepage coupling behavior in HAC. In this model, the asphalt mortar obeys a viscoelastic continuum damage law and the aggregate obeys the Mazars elastic-brittle damage law; simultaneously, the effective permeability coefficient of the asphalt mortar is assumed to follow an exponential function of damage. The predicted deviatoric stress-strain and hydraulic gradient-seepage curves are both in good agreement with reported experimental results, which shows that the proposed model is valid and reasonable. The simulated results indicate that damaged asphalt mortar can induce localized areas of high permeability, which in turn affects the overall impervious performance of HAC.
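An exponential damage-permeability coupling of the kind described can be written as k_eff = k0·exp(α·D). The one-line sketch below uses a hypothetical coefficient α and placeholder values, not parameters from the paper:

```python
import math

def effective_permeability(k0, alpha, damage):
    """Assumed exponential damage-permeability law: k_eff = k0 * exp(alpha * D), D in [0, 1]."""
    return k0 * math.exp(alpha * damage)

# Undamaged mortar keeps its intrinsic permeability; damage amplifies it exponentially.
print(effective_permeability(1e-9, 5.0, 0.0))  # equals k0
print(effective_permeability(1e-9, 5.0, 0.8) > effective_permeability(1e-9, 5.0, 0.4))  # True
```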
Funding: Supported by the National Natural Science Foundation of China (U23B20151 and 52171253).
Abstract: Although traditional gamma-gamma density (GGD) logging technology is widely utilized, its potential environmental risks have prompted the development of more environmentally friendly neutron-gamma density (NGD) logging technology. However, NGD measurements are influenced by both neutron and gamma radiation. In the logging environment, variations in the formation composition imply different elemental compositions, which affect the neutron-gamma reaction cross-sections and gamma generation. Compared to traditional gamma sources such as Cs-137, these changes significantly affect the generation and transport of neutron-induced inelastic gamma rays and hinder accurate measurements. To address this, a novel method is proposed that incorporates a mass attenuation coefficient function to account for the effects of various lithologies and pore contents on gamma-ray attenuation, thereby achieving more accurate density measurements by clarifying the transport processes of inelastic gamma rays with varying energies and spatial distributions in varied logging environments. The proposed method avoids complex correction of neutron transport and is verified through Monte Carlo simulations for its applicability across various lithologies and pore contents, demonstrating absolute density errors of less than 0.02 g/cm^(3) in clean formations and indicating good accuracy. This study clarifies the NGD mechanism and provides theoretical guidance for the application of NGD logging methods. Further studies will address extreme environmental conditions and tool calibration.
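Gamma attenuation through a formation follows the Beer-Lambert law with the mass attenuation coefficient μ_m: I = I0·exp(−μ_m·ρ·x). A minimal sketch of that relation (the numerical values are illustrative, not tool physics from the paper):

```python
import math

def transmitted_intensity(i0, mu_mass_cm2_g, density_g_cm3, path_cm):
    """Beer-Lambert attenuation: I = I0 * exp(-mu_m * rho * x)."""
    return i0 * math.exp(-mu_mass_cm2_g * density_g_cm3 * path_cm)

# A denser formation attenuates more at the same source-detector spacing,
# which is the effect density logging exploits.
i_low = transmitted_intensity(1.0, 0.06, 2.2, 30.0)
i_high = transmitted_intensity(1.0, 0.06, 2.7, 30.0)
print(i_low > i_high)  # True
```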
Funding: Supported by the National Science and Technology Major Project of China (2022ZD0119005) and the Natural Science Project of Shaanxi Province (2025JC-YBMS-754, 2024JC-YBMS-539).
Abstract: Reconfigurable array architectures have become an important hardware platform for edge-side deployment of convolutional neural networks due to their high parallelism and flexible programmability. However, traditional multi-branch convolutional networks suffer from computational redundancy, high memory-access overhead, and inefficient branch fusion. Therefore, this paper proposes an adaptive multi-branch convolutional module (AMBC) that integrates software-hardware co-optimization. During training, learnable fusion coefficients are introduced to enable adaptive fusion of multi-scale features, while in the inference phase, the multiple branches and their normalization parameters are merged with the fusion coefficients into a single 3×3 convolutional kernel through operator fusion. On the SIREA-288 reconfigurable platform, compared with unoptimized multi-branch networks, the proposed AMBC reduces external memory accesses by 47.91% and inference latency by 47.20%, achieving a 1.90× speedup. This approach maximizes the utilization of the reconfigurable logic while minimizing both reconfiguration and data-movement overheads in edge inference.
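The branch-merging step relies on the standard identity for folding batch normalization into a preceding convolution: with scale s = γ/√(σ²+ε), the fused kernel is s·W and the fused bias is (b−μ)·s+β. The 1-D single-channel sketch below only demonstrates this BN-folding arithmetic; AMBC itself fuses full 3×3 branches with learnable fusion coefficients, which is not reproduced here:

```python
def conv1d(x, w, b):
    # Valid-mode 1-D convolution (really cross-correlation) with scalar bias.
    k = len(w)
    return [sum(w[j] * x[i + j] for j in range(k)) + b for i in range(len(x) - k + 1)]

def batchnorm(y, gamma, beta, mean, var, eps=1e-5):
    s = gamma / (var + eps) ** 0.5
    return [(v - mean) * s + beta for v in y]

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN parameters into the conv weight/bias so inference needs one op."""
    s = gamma / (var + eps) ** 0.5
    return [wi * s for wi in w], (b - mean) * s + beta

x = [1.0, 2.0, -1.0, 0.5, 3.0]
w, b = [0.2, -0.1, 0.4], 0.3
g, bt, mu, var = 1.5, -0.2, 0.1, 0.8
slow = batchnorm(conv1d(x, w, b), g, bt, mu, var)   # conv followed by BN
wf, bf = fold_bn_into_conv(w, b, g, bt, mu, var)
fast = conv1d(x, wf, bf)                            # single fused conv
print(all(abs(a - c) < 1e-9 for a, c in zip(slow, fast)))  # True
```

Because the two computations are algebraically identical, the fused path removes one whole pass over the feature map at inference time.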
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11804296), the Joint Key Project of Yunnan Province, China (Grant Nos. 2018FY001-020 and 2018ZI002), and a fund from the Educational Department of Yunnan Province, China (Grant No. 2016CYH05).
Abstract: The ray-tracing method is used to study the propagation of collimated beams in a liquid-core cylindrical lens (LCL), which has the dual functions of diffusion cell and image formation. The diffusion images on the focal plane of the LCL are simulated by establishing and solving both linear and nonlinear ray equations. The calculated results indicate that the complex imaging behavior of the LCL in inhomogeneous media can be treated by the law of ray propagation in homogeneous media under the condition of a small refractive-index gradient in the diffusion solution. Guided by these calculation conditions, the diffusion process of a triethylene glycol aqueous solution is experimentally studied at room temperature using the LCL. The spatial and temporal concentration profile C_e(z,t) of the diffusing solution is obtained by analyzing the diffusion image appearing on the focal plane of the LCL. Then, the concentration-dependent diffusion coefficient is assumed to be a polynomial D(C)=D0×(1+α1C+α2C^2+α3C^3+…). The finite difference method is used to solve the Fick diffusion equation and numerically calculate the concentration profiles C_n(z,t). The D(C) of the triethylene glycol aqueous solution is obtained by comparing C_n(z,t) with C_e(z,t). Finally, the obtained polynomial D(C) is used to calculate the refractive-index profiles n(z,t) of the diffusing solution in the LCL. Based on the law of ray propagation in inhomogeneous media and the calculated n(z,t), the ray-tracing method is used again to simulate the dynamic images of the whole experimental diffusion process to verify the correctness of the calculated D(C). The method presented in this work opens a new way to both measure and verify concentration-dependent liquid diffusion coefficients.
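The numerical core described above, an explicit finite-difference solution of the Fick equation ∂C/∂t = ∂/∂z(D(C)∂C/∂z) with a polynomial D(C), can be sketched as follows. The grid sizes, D0, and the polynomial coefficient are placeholder values, not those fitted in the paper:

```python
def diffuse(c, d_of_c, dz, dt, steps):
    """Explicit finite-difference scheme in conservative (flux) form with
    no-flux boundaries; D(C) is evaluated at cell interfaces by averaging."""
    c = list(c)
    n = len(c)
    for _ in range(steps):
        flux = [0.0] * (n + 1)  # flux[i] sits between cells i-1 and i; boundaries stay 0
        for i in range(1, n):
            d_half = 0.5 * (d_of_c(c[i - 1]) + d_of_c(c[i]))
            flux[i] = -d_half * (c[i] - c[i - 1]) / dz
        # dC/dt = -dF/dz, discretized per cell
        c = [c[i] - dt / dz * (flux[i + 1] - flux[i]) for i in range(n)]
    return c

d0 = 1e-3
d_of_c = lambda conc: d0 * (1.0 + 0.5 * conc)   # D(C) = D0*(1 + a1*C), hypothetical a1
c0 = [1.0] * 10 + [0.0] * 10                    # step-like initial profile
c1 = diffuse(c0, d_of_c, dz=0.1, dt=1.0, steps=200)
print(abs(sum(c1) - sum(c0)) < 1e-9)  # mass is conserved -> True
```

The flux form guarantees discrete mass conservation, and the chosen dt satisfies the explicit stability bound dt ≤ dz²/(2·D_max).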
Funding: Supported by the Zhengzhou PM_(2.5) and O_(3) Collaborative Control and Monitoring Project (No. 20220347A) and the 2020 National Supercomputing Zhengzhou Center Innovation Ecosystem Construction Technology Project (No. 201400210700).
Abstract: Improving the accuracy of anthropogenic volatile organic compound (VOC) emission inventories is crucial for reducing atmospheric pollution and formulating air-pollution control policy. In this study, an anthropogenic speciated VOC emission inventory was established for Central China, represented by Henan Province, at a 3 km×3 km spatial resolution based on the emission factor method. The 2019 VOC emission in Henan Province was 1003.5 Gg; industrial process sources (33.7%) were the largest emission source, Zhengzhou (17.9%) was the city with the highest emission, and April and August were the months with the most emissions. High-emission regions were concentrated in downtown areas and industrial parks. Alkanes and aromatic hydrocarbons were the main contributing VOC groups. The species composition, source contribution, and spatial distribution were verified and evaluated through the tracer ratio method (TR), the Positive Matrix Factorization model (PMF), and remote sensing inversion (RSI). Results show that the emission estimates by the emission inventory (EI) (15.7 Gg) and by the TR method (13.6 Gg) are close, as are the source contributions by EI and PMF. The spatial distribution of primary HCHO emission based on RSI is basically consistent with that based on EI, with an R-value of 0.73. The verification results show that the VOC emission inventory and speciated emission inventory established in this study are relatively reliable.
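The emission factor method underlying the inventory is the simple sum E = Σ A_i × EF_i over per-source activity data and emission factors. A sketch with entirely hypothetical source names, activity levels, and factors (not the study's data):

```python
def total_emissions_gg(activities, emission_factors):
    """E = sum over sources of activity * emission factor (units must cancel to Gg)."""
    return sum(activities[s] * emission_factors[s] for s in activities)

# Hypothetical sources: activity in kt of product, EF in Gg VOCs per kt.
activities = {"industrial_process": 120.0, "solvent_use": 80.0, "transport": 60.0}
efs = {"industrial_process": 0.9, "solvent_use": 0.5, "transport": 0.4}
print(total_emissions_gg(activities, efs))  # 108 + 40 + 24 = 172.0 Gg
```

A gridded inventory repeats this per source category and grid cell before spatial allocation.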
Funding: Supported by the Key Research and Development Project of Henan Province (Grant No. 241111232300), the National Natural Science Foundation of China (Grant Nos. 52273085 and 52303113), and the Open Fund of Yaoshan Laboratory (Grant No. 2024003).
Abstract: The morphological distribution of the absorbent in composites is as important as the absorbent itself for the overall electromagnetic properties, but it is often ignored. Herein, a comprehensive design combining electromagnetic component regulation, a layered arrangement structure, and a gradient concentration distribution was used to optimize impedance matching and enhance electromagnetic loss. On the microscale, the incorporation of magnetic Ni nanoparticles into MXene nanosheets (Ni@MXene) endows suitable intrinsic permittivity and permeability. On the macroscale, the layered arrangement of Ni@MXene increases the effective interaction area with electromagnetic waves, inducing multiple reflection/scattering effects. On this basis, according to an analysis of the absorption, reflection, and transmission (A-R-T) power coefficients of the layered composites, a gradient concentration distribution was constructed to realize impedance matching at the low-concentration surface layer, electromagnetic loss at the middle-concentration interlayer, and microwave reflection at the high-concentration bottom layer. Consequently, the layered gradient composite (LG5-10-15) achieves complete absorption coverage of the X-band at thicknesses of 2.00-2.20 mm, with an RL_(min) of -68.67 dB at 9.85 GHz at 2.05 mm, which is 199.0%, 12.6%, and 50.6% higher than the non-layered, layered, and layered descending-gradient composites, respectively. Therefore, this work confirms the importance of the layered gradient structure in improving absorption performance and broadens the design of high-performance microwave absorption materials.
Abstract: Kinship verification is a key biometric recognition task that determines biological relationships based on physical features. Traditional methods predominantly use facial recognition, leveraging established techniques and extensive datasets. However, recent research has highlighted ear recognition as a promising alternative, offering robustness against variations in facial expressions, aging, and occlusions. Despite its potential, a significant challenge in ear-based kinship verification is the lack of the large-scale datasets necessary for effectively training deep learning models. To address this challenge, we introduce the EarKinshipVN dataset, a novel and extensive collection of ear images designed specifically for kinship verification. This dataset consists of 4876 high-resolution color images from 157 multiracial families across different regions, forming 73,220 kinship pairs. EarKinshipVN, a diverse and large-scale dataset, advances kinship verification research using ear features. Furthermore, we propose the Mixer Attention Inception (MAI) model, an improved architecture that enhances feature extraction and classification accuracy. The MAI model fuses Inceptionv4 and MLP-Mixer, integrating four attention mechanisms to enhance spatial and channel-wise feature representation. Experimental results demonstrate that MAI significantly outperforms traditional backbone architectures. It achieves an accuracy of 98.71%, surpassing Vision Transformer models while reducing computational complexity by up to 95% in parameter usage. These findings suggest that ear-based kinship verification, combined with an optimized deep learning model and a comprehensive dataset, holds significant promise for biometric applications.
Funding: Financial support from the Natural Science Foundation of Hubei Province, China (Grant No. 2022CFB770).
Abstract: In the foundry industries, process design has traditionally relied on manuals and complex theoretical calculations. With the advent of 3D design in casting, computer-aided design (CAD) has been applied to integrate the features of the casting process, thereby expanding the scope of design options. These technologies use parametric model-design techniques for rapid component creation and use databases to access standard process parameters and design specifications. However, 3D models are currently still created by inputting or calling parameters, which requires numerous verification calculations to ensure design rationality. This process can be significantly slowed down by repetitive modifications and extended design time. As a result, there is an increasingly urgent demand for a real-time verification mechanism. Therefore, this study proposed a novel closed-loop model and software development method that integrates contextual design with real-time verification, dynamically checking the relevant rules for designing 3D casting components. Additionally, the study analyzed three typical closed-loop scenarios of agile design in an independently developed intelligent casting process system. Foundry industries can thus benefit from reduced design cycles and a more competitive product market.
Funding: Funded by the National Natural Science Foundation of China Youth Fund (Grant No. 62304022), the Science and Technology on Electromechanical Dynamic Control Laboratory (China, Grant No. 6142601012304), and the 2022-2024 China Association for Science and Technology Innovation Integration Association Youth Talent Support Project (Grant No. 2022QNRC001).
Abstract: Metal Additive Manufacturing (MAM) technology has become an important means of rapid-prototyping precision manufacturing of special, highly dynamic, heterogeneous, complex parts. In response to the micromechanical defects encountered during the selective laser melting (SLM) additive manufacturing (AM) of specialized Micro Electromechanical System (MEMS) components, such as porosity, significant deformation, surface cracks, and poorly controlled surface morphology, multiparameter optimization and simulation-based control of the micro powder melt pool and macro-scale mechanical properties are conducted. The optimal parameters obtained through high-precision preparation and machining of components and static/high-dynamic verification are: laser power of 110 W, laser speed of 600 mm/s, laser diameter of 75 μm, and scanning spacing of 50 μm. The density of components fabricated under these parameters can reach 99.15%, the surface hardness 51.9 HRA, and the yield strength 550 MPa; the maximum machining error of the components is 4.73%, and the average surface roughness is 0.45 μm. Through dynamic hammering and high-dynamic firing verification, the SLM components meet the requirements for overload resistance. The results prove that MAM technology can provide a new means of processing MEMS components for high-dynamic environments. The parameters obtained in the conclusion can serve as a design basis for the additive preparation of MEMS components.
Funding: State Key Research and Development Program of China, Grant/Award Number: 2021YFC3001301.
Abstract: As the first gold mine discovered beneath the sea in China and the only coastal gold mine currently mined there, Sanshandao Gold Mine faces unique challenges. The mine's safety is under continual threat from its faulted structure coupled with the overlying water. As mining proceeds deeper, the risk of water inrush increases. The mine's maximum water yield reaches 15000 m^(3)/day, which is attributable to water channels present in fault zones. Predominantly composed of soil-rock mixtures (SRM), these fault zones' seepage characteristics significantly affect water inrush risk. Consequently, investigating the seepage characteristics of SRM is of paramount importance. However, the existing literature mostly concentrates on a single stress state. Therefore, this study examined the permeability coefficient under three distinct stress states: osmotic, osmotic-uniaxial, and osmotic-triaxial pressure. The SRM samples used in this study were extracted from in situ fault zones and then reshaped in the laboratory. In addition, the micromechanical properties of the SRM samples were analyzed using computed tomography scanning. The findings reveal that the permeability coefficient is highest under osmotic pressure and lowest under osmotic-triaxial pressure. The sensitivity coefficient is higher when the rock block percentage ranges between 30% and 40%, but falls below 1.0 when this percentage exceeds 50% under no confining pressure. Notably, rock block percentages of 40% and 60% represent the two peak points of the sensitivity coefficient under osmotic-triaxial pressure. However, SRM samples with a 40% rock block percentage consistently show the lowest permeability coefficient under all stress states. This study establishes that a power function can model the relationship between the permeability coefficient and osmotic pressure, while its relationship with axial pressure can be described by an exponential function. These insights are invaluable for developing water inrush prevention and control strategies in mining environments.
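A power-function relation k = a·P^b between the permeability coefficient and osmotic pressure can be fitted by linear least squares in log-log space. The sketch below recovers the exponent from synthetic data; the coefficients are illustrative, not the paper's fitted values:

```python
import math

def fit_power_law(pressures, ks):
    """Fit k = a * P**b by least squares on log k = log a + b * log P."""
    xs = [math.log(p) for p in pressures]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

pressures = [0.5, 1.0, 2.0, 4.0]
ks = [2.0 * p ** 0.5 for p in pressures]   # synthetic data from k = 2 * P^0.5
a, b = fit_power_law(pressures, ks)
print(round(a, 6), round(b, 6))  # 2.0 0.5
```

Fitting the exponential relation with axial pressure works the same way, but with only k log-transformed (semi-log regression).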
Abstract: To solve variable-coefficient ordinary differential equations on a bounded domain, the Lagrange interpolation method is used to approximate the exact solution of the equation, and the error between the numerical solution and the exact solution is obtained. Compared with the error produced by the difference method, it is concluded that the Lagrange interpolation method is more effective for solving variable-coefficient ordinary differential equations.
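Lagrange interpolation approximates a solution by the unique polynomial passing through its known values at the nodes. A minimal evaluator (the nodes and test function are illustrative, not the ODE from the paper):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # Lagrange basis polynomial L_i(x)
        total += yi * basis
    return total

# Three nodes reproduce any quadratic exactly, e.g. y = x^2:
nodes = [0.0, 1.0, 2.0]
values = [xi ** 2 for xi in nodes]
print(lagrange_eval(nodes, values, 1.5))  # 2.25
```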
Funding: Supported by the Shaanxi Provincial Key Research and Development Plan Project, No. 2020ZDLSF01-02.
Abstract: BACKGROUND At present, the conventional methods for diagnosing cerebral edema in clinical practice are computed tomography (CT) and magnetic resonance imaging (MRI), which can evaluate the location and degree of peripheral cerebral edema but cannot quantify it. When patients have symptoms of diffuse cerebral edema or high intracranial pressure, CT or MRI findings often lag and cannot be dynamically monitored in real time. Intracranial pressure monitoring is the gold standard, but it is an invasive operation with high cost and complications. For clinical purposes, the ideal cerebral edema monitoring should be non-invasive, real-time, bedside, and continuously dynamic. The disturbance coefficient (DC) was used in this study to dynamically monitor the occurrence, development, and evolution of cerebral edema in patients with cerebral hemorrhage in real time, with follow-up head CT or MRI to evaluate disease progression and guide further treatment, so as to improve the prognosis of patients with cerebral hemorrhage. AIM To offer a promising new approach for non-invasive adjuvant therapy in cerebral edema treatment. METHODS A total of 160 patients with hypertensive cerebral hemorrhage admitted to the Department of Neurosurgery, Second Affiliated Hospital of Xi'an Medical University from September 2018 to September 2019 were recruited. The patients were randomly divided into a control group (n=80) and an experimental group (n=80). Patients in the control group received conventional empirical treatment, while those in the experimental group were treated with mannitol dehydration under the guidance of DC. Subsequently, we compared the two groups with regard to the total dosage of mannitol, the total course of treatment, the incidence of complications, and prognosis. RESULTS The mean daily consumption of mannitol, the total course of treatment, and the mean hospitalization days were 362.7±117.7 mL, 14.8±5.2 days, and 29.4±7.9 in the control group and 283.1±93.6 mL, 11.8±4.2 days, and 23.9±8.3 in the experimental group (P<0.05). In the control group, there were 20 patients with pulmonary infection (25%), 30 with electrolyte disturbance (37.5%), 20 with renal impairment (25%), and 16 with stress ulcer (20%). In the experimental group, pulmonary infection occurred in 18 patients (22.5%), electrolyte disturbance in 6 (7.5%), renal impairment in 2 (2.5%), and stress ulcers in 15 (18.8%) (P<0.05). According to the Glasgow coma scale score 6 months after discharge, the prognosis of the control group was good in 20 patients (25%), fair in 26 (32.5%), and poor in 34 (42.5%); the prognosis of the experimental group was good in 32 (40%), fair in 36 (45%), and poor in 12 (15%) (P<0.05). CONCLUSION Using DC for non-invasive dynamic monitoring of cerebral edema demonstrates considerable clinical potential. It reduces mannitol dosage, treatment duration, complication rates, and hospital stays, ultimately lowering hospitalization costs. Additionally, it improves overall patient prognosis, offering a promising new approach for non-invasive adjuvant therapy in cerebral edema treatment.
Funding: Funded by National Natural Science Foundation Projects (Grant Nos. 41772287 and 42277132) and the Key R&D Project of Zhejiang Province (Grant No. 2021C03159).
Abstract: The scale effect on the shear strength of rock joints is well documented. However, whether scale effects are negative, positive, or even exist at all is still controversial. Joint roughness significantly influences the shear strength of rock joints. Compared with shear tests, using the joint roughness coefficient (JRC) and its roughness parameters offers a more convenient way to describe the scale effect on shear strength. However, it is crucial to understand that the scale-effect mechanisms of JRC are distinct from those of shear strength. Therefore, this paper aims to clarify these distinct mechanisms. By digitally extracting roughness parameters from granite samples, it is found that the scale effect of roughness parameters mainly comes from the sampling methods and the geometric characteristics of the parameters. Furthermore, a full-data sampling method considering heterogeneity is proposed to obtain more representative roughness parameters. To reveal the scale-effect mechanisms of shear strength, Gaussian filtering is first used to separate the waviness and unevenness components of roughness, facilitating a deeper understanding of its geometric characteristics. It is suggested that the wavelength of the waviness component can reflect the scale effect on shear strength. Second, numerical simulations of ideal artificial joint models are conducted to validate that the wavelength of the waviness component serves as the dividing point between positive and negative scale effects. The mechanical mechanisms of positive and negative scale effects are also interpreted. Finally, these mechanisms successfully elucidate the occurrence patterns of the scale effect on natural joint profiles.
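Waviness/unevenness decomposition by Gaussian filtering amounts to convolving the measured profile with a Gaussian kernel (waviness) and subtracting the result from the raw profile (unevenness). A 1-D sketch with an assumed filter width σ; a real roughness workflow would pick σ from a standard cutoff wavelength:

```python
import math

def gaussian_kernel(sigma):
    r = int(3 * sigma)  # truncate at 3 sigma
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k], r  # normalized weights

def separate(profile, sigma=3.0):
    """Return (waviness, unevenness); edges are handled by replication."""
    k, r = gaussian_kernel(sigma)
    n = len(profile)
    waviness = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            acc += w * profile[min(max(i + j - r, 0), n - 1)]
        waviness.append(acc)
    unevenness = [p - w for p, w in zip(profile, waviness)]
    return waviness, unevenness

# Long-wavelength waviness plus short-wavelength unevenness:
profile = [math.sin(0.1 * i) + 0.2 * math.sin(2.0 * i) for i in range(100)]
wav, unev = separate(profile)
# The decomposition is exact by construction: profile = waviness + unevenness.
print(all(abs(p - (w + u)) < 1e-12 for p, w, u in zip(profile, wav, unev)))  # True
```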
Abstract: BACKGROUND Various stone factors can affect the net results of shock wave lithotripsy (SWL). Recently, a new factor called the variation coefficient of stone density (VCSD) has been considered to have an impact on stone-free rates. AIM To assess the role of VCSD in determining the success of SWL for urinary calculi. METHODS Chart review was used for collecting the data variables. The patients underwent SWL using an electromagnetic lithotripter. Mean stone density (MSD), stone heterogeneity index (SHI), and VCSD were calculated by generating regions of interest on computed tomography (CT) images. The roles of these factors were determined by applying the relevant statistical tests for continuous and categorical variables, and a P value of <0.05 was considered statistically significant. RESULTS A total of 407 patients were included in the analysis. The mean age of the subjects was 38.89±14.61 years. In total, 165 of the 407 patients did not achieve stone-free status. The successful group had a significantly lower stone volume than the unsuccessful group (P<0.0001). Skin-to-stone distance did not differ between the two groups (P=0.47). MSD was significantly lower in the successful group (P<0.0001). SHI and VCSD were both significantly higher in the successful group (P<0.0001). CONCLUSION VCSD, a useful CT-based parameter, can be used to gauge stone fragility and hence predict SWL outcomes.
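On the CT side, all three parameters can be computed directly from the Hounsfield-unit samples of a stone region of interest. The definitions below, SHI as the standard deviation of attenuation and VCSD as SHI/MSD, are assumptions for illustration, since the paper's exact formulas are not quoted here:

```python
import statistics

def stone_metrics(hu_values):
    """MSD, SHI, VCSD from the HU samples of a stone ROI (definitions assumed)."""
    msd = statistics.mean(hu_values)    # mean stone density
    shi = statistics.stdev(hu_values)   # stone heterogeneity index: SD of HU (assumed)
    vcsd = shi / msd                    # variation coefficient of stone density (assumed)
    return msd, shi, vcsd

msd, shi, vcsd = stone_metrics([100.0, 200.0, 300.0])
print(msd, shi, vcsd)  # 200.0 100.0 0.5
```

Normalizing SHI by MSD is what lets VCSD compare heterogeneity across stones of very different absolute densities.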
Funding: Funding from the National Natural Science Foundation of China (Grant No. 42277175), the pilot cooperation project between the Ministry of Natural Resources and Hunan Province "Research and demonstration of key technologies for comprehensive remote sensing identification of geological hazards in typical regions of Hunan Province" (Grant No. 2023ZRBSHZ056), and the National Key Research and Development Program of China 2023 Key Special Project (Grant No. 2023YFC2907400).
Abstract: The joint roughness coefficient (JRC) is the parameter most commonly used in practice to quantify the surface roughness of rock discontinuities. The system composed of multiple roughness statistical parameters used to measure JRC is a nonlinear system with substantial overlapping information. In this paper, a dataset of eight roughness statistical parameters covering 112 digital joints is established. Then, the principal component analysis method is introduced to extract the significant information, which solves the information-overlap problem of roughness characterization. Based on the two principal components of the extracted features, the white shark optimizer algorithm is introduced to optimize an extreme gradient boosting model, and a new machine learning (ML) prediction model is established. The prediction accuracy of the new model and of 17 other models is measured using statistical metrics. The results show that the predictions of the new model are more consistent with the real JRC values, with higher recognition accuracy and generalization ability.
Abstract: Let f be a primitive holomorphic cusp form of even integral weight k≥2 for the full modular group Γ=SL(2,Z), and let λ_(sym^(j)f)(n) be the n-th coefficient of the Dirichlet series of the j-th symmetric L-function L(s,sym^(j)f) attached to f. In this paper, we study the mean value distribution over a specific sparse sequence of positive integers of the sum ∑_((a,b,c,d)∈Z^(4), a^(2)+b^(2)+c^(2)+d^(2)≤x) λ^(i)_(sym^(j)f)(a^(2)+b^(2)+c^(2)+d^(2)), where j≥2 is a given positive integer, i=2,3,4, and α is sufficiently large. We utilize Python programming to design algorithms for the higher-power conditions, combining Perron's formula, the latest results on representations of natural integers as sums of squares, and the analytic properties and subconvexity and convexity bounds of automorphic L-functions, to ensure the accuracy and verifiability of the asymptotic formulas. The conclusion we obtain improves previous results and extends them to a more general setting.
Funding: Supported in part by the National Natural Science Foundation of China (61873348, 62303266, 62273200) and JSPS (Japan Society for the Promotion of Science) KAKENHI (22H03998, 23K25252).
Abstract: Dear Editor, This letter presents an improved repetitive controller (IRC) that uses a complex-coefficient filter to enhance the tracking performance of a system for periodic signals. Compared with the low-pass filter used in the conventional repetitive controller (CRC), the complex-coefficient filter causes less change in the phase and amplitude of a signal at the frequencies of the periodic signal, especially at the fundamental frequency, when the two filters have the same cutoff frequency.
Funding: Supported by the National Natural Science Foundation of China (62462036, 62462037), the Key Project of the Jiangxi Provincial Natural Science Foundation (20242BAB26017), and the Academic and Technical Leader Training Project for Major Disciplines in Jiangxi Province (20232BCJ22013).
Abstract: Edit distance is an algorithm that measures the difference between two strings, usually expressed as the minimum number of editing operations required to transform one string into another. The edit distance algorithm involves complex dependencies and constraints, making state management and verification tedious. This paper proposes a derivation and verification method that avoids directly handling dependencies and constraints by proving the equivalence between the edit distance algorithm and an existing functional model. First, the derivation process of the edit distance algorithm mainly includes 1) describing the problem specification, 2) inductively deducing recursive relations, 3) formally constructing loop invariants using optimization theory (the memoization technique and the optimal decision table) and properties (the optimal-substructure and overlapping-subproblems properties) of the edit distance algorithm, and 4) generating Minimalistic Imperative Programming Language (IMP) code based on the recursive relations. Second, the problem specification, loop invariants, and generated IMP code are input into the Verification Condition Generator (VCG), which automatically generates five verification conditions, and then the correctness of the edit distance algorithm is verified in the Isabelle/HOL theorem prover. The method utilizes formal techniques and a theorem prover to complete the derivation and verification of the edit distance algorithm, and it can be applied to linear and nonlinear dynamic programming problems.
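For reference, the imperative dynamic-programming formulation whose correctness such a derivation targets can be sketched as the standard table-filling loop. The informal loop invariant is that dp[i][j] always holds the edit distance between the first i characters of a and the first j characters of b (this Python sketch stands in for the paper's IMP code):

```python
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                    # transform prefix a[:i] into "" by i deletions
    for j in range(n + 1):
        dp[0][j] = j                    # transform "" into b[:j] by j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```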
Abstract: The study of the morphometric parameters of the three most abundant species in the lower course of the Kouilou River (Chrysichthys auratus, Liza falcipinnis and Pellonula vorax) was carried out. The standard length of Chrysichthys auratus varies between 43.57 and 210 mm, for an average of 96.70 ± 28.63 mm; the weight varies between 2.92 and 140.83 mg, for an average of 73.03 ± 21.62 mg. The condition coefficient is equal to 4.42 ± 1.52. Liza falcipinnis has a standard length that varies between 59.9 mm and 158.08 mm, for an average of 88.15 ± 29.74 mm; its weight varies between 4.77 and 76.21 mg, for an average of 18.61 ± 11.82 mg. The condition coefficient is equal to 2.47 ± 1.57. Pellonula vorax has a standard length that varies between 60.33 mm and 117.72 mm, for an average of 80.48 ± 17.75 mm; the weight varies between 3.61 and 25.17 mg, for an average of 9.03 ± 3.61 mg. The condition coefficient is equal to 2.17 ± 0.57. These three species show minor allometric growth.