As the core component of inertial navigation systems, the fiber optic gyroscope (FOG) offers technical advantages such as low power consumption, long lifespan, fast startup, and flexible structural design, and is widely used in aerospace, unmanned driving, and other fields. However, because optical devices are temperature sensitive, environmental temperature introduces errors into the FOG output and greatly limits its accuracy. This work studies machine-learning-based temperature error compensation for FOGs, focusing on the bias errors generated in the fiber ring by the Shupe effect. It proposes a composite model based on k-means clustering, support vector regression, and particle swarm optimization, and significantly reduces redundancy within the samples by adopting interval sequence sampling. Metrics including root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are used to evaluate the model's performance and compensation effectiveness. The method improves the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared with existing methods that use a single machine learning model, the proposed method raises the improvement in bias stability of the compensated FOG from 57.11% to 71.98%, and raises the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the post-compensation accuracy of the FOG and provides theoretical guidance and technical references for sensor error compensation in other fields.
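The compensation pipeline described above (cluster the temperature regimes, then fit one regressor per cluster and subtract its prediction from the raw output) can be sketched as follows. This is a minimal illustration on synthetic data: the temperature/bias relationship, the feature names, and the SVR hyperparameters are all made up for the example, and the particle swarm step that the paper uses to tune the SVR hyperparameters is replaced by fixed values for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic FOG-like data: bias drift driven by temperature and its gradient
# (a crude stand-in for the Shupe effect); units are illustrative only.
T = rng.uniform(-20, 60, 600)                    # temperature, degC
dT = rng.uniform(-0.5, 0.5, 600)                 # temperature gradient, degC/s
bias = 0.01 * T + 0.8 * dT + 0.002 * rng.standard_normal(600)  # deg/h

X = np.column_stack([T, dT])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize before clustering

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xs)

# One SVR per temperature regime; in the paper PSO would tune C/epsilon/gamma.
models = {}
for k in range(3):
    m = km.labels_ == k
    models[k] = SVR(C=10.0, epsilon=0.001).fit(Xs[m], bias[m])

pred = np.empty_like(bias)
for k, svr in models.items():
    m = km.labels_ == k
    pred[m] = svr.predict(Xs[m])

residual = bias - pred                           # compensated output (in-sample)
rmse_raw = np.sqrt(np.mean(bias ** 2))
rmse_comp = np.sqrt(np.mean(residual ** 2))
print(rmse_raw, rmse_comp)
```

Note that the evaluation here is in-sample; a faithful reproduction would hold out a test trajectory and report RMSE, MAE, and Allan-variance-based bias stability on it.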
This study numerically examines the heat and mass transfer characteristics of two ternary nanofluids in converging and diverging channels. It assesses the two ternary nanofluid combinations to determine which configuration provides better heat and mass transfer and lower entropy production while remaining cost efficient, bridging the gap between academic research and industrial feasibility by incorporating cost analysis, entropy generation, and thermal efficiency. To compare the velocity, temperature, and concentration profiles, we examine two ternary nanofluids, TiO₂+SiO₂+Al₂O₃/H₂O and TiO₂+SiO₂+Cu/H₂O, while accounting for nanoparticle shape. Velocity slip and Soret/Dufour effects are taken into consideration, and regression analysis for the Nusselt and Sherwood numbers of the model is carried out. The Runge-Kutta fourth-order method with the shooting technique is employed to obtain the numerical solution of the governing system of ordinary differential equations. The flow pattern attributes of the ternary nanofluids are examined and simulated as the flow-dominating parameters vary, and the influence of these parameters on the flow, temperature, and concentration fields is demonstrated. For variations in the Eckert and Dufour numbers, TiO₂+SiO₂+Al₂O₃/H₂O reaches a higher temperature than TiO₂+SiO₂+Cu/H₂O. The results indicate that the TiO₂+SiO₂+Al₂O₃/H₂O ternary nanofluid has a higher heat transfer rate, lower entropy generation, greater mass transfer rate, and lower cost than the TiO₂+SiO₂+Cu/H₂O ternary nanofluid.
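The numerical scheme named above (fourth-order Runge-Kutta with a shooting technique) can be illustrated on a toy boundary value problem. The ODE below is a deliberately simple stand-in for the paper's similarity equations: we solve y'' = -y with y(0) = 0, y(1) = 1 by guessing the unknown initial slope and correcting it with a secant iteration until the far boundary condition is met.

```python
import numpy as np

def rk4(f, y0, xs):
    """Classical fourth-order Runge-Kutta over grid xs for the system y' = f(x, y)."""
    y = np.array(y0, float)
    out = [y.copy()]
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = f(x0, y)
        k2 = f(x0 + h / 2, y + h / 2 * k1)
        k3 = f(x0 + h / 2, y + h / 2 * k2)
        k4 = f(x1, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

# Toy BVP standing in for the similarity equations: y'' = -y, y(0)=0, y(1)=1.
f = lambda x, y: np.array([y[1], -y[0]])
xs = np.linspace(0.0, 1.0, 101)

def endpoint(s):
    """y(1) as a function of the guessed initial slope y'(0) = s."""
    return rk4(f, [0.0, s], xs)[-1, 0]

# Shooting: secant iteration on the unknown initial slope.
s0, s1 = 0.5, 2.0
for _ in range(20):
    g0, g1 = endpoint(s0) - 1.0, endpoint(s1) - 1.0
    if abs(g1) < 1e-12 or g1 == g0:
        break
    s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)

print(s1)   # analytic answer is y'(0) = 1/sin(1)
```

For the coupled momentum/energy/concentration equations of the actual model, the same machinery applies with a larger state vector and several shooting parameters.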
In this study, we examine sliced inverse regression (SIR), a widely used method for sufficient dimension reduction (SDR). It was designed to find reduced-dimensional versions of multivariate predictors by replacing them with a minimally adequate collection of their linear combinations without loss of information. Recently, regularization methods have been proposed in SIR to incorporate a sparse structure of predictors for better interpretability. However, existing methods rely on convex relaxation to bypass the sparsity constraint, which may not lead to the best subset and, in particular, tends to include irrelevant variables when predictors are correlated. In this study, we approach sparse SIR as a nonconvex optimization problem and tackle the sparsity constraint directly by establishing the optimality conditions and solving them iteratively by means of the splicing technique. Without employing convex relaxation on either the sparsity constraint or the orthogonality constraint, our algorithm exhibits superior empirical merits, as evidenced by extensive numerical studies. Computationally, it is much faster than the relaxed approach for the natural sparse SIR estimator. Statistically, it surpasses existing methods in the accuracy of central subspace estimation and best subset selection, and sustains high performance even with correlated predictors.
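The core SIR step that the sparse variants build on is easy to state: slice the response, average the centered predictors within each slice, and take the leading eigenvectors of the between-slice covariance as the estimated central subspace directions. A minimal sketch on a synthetic single-index model (assuming standardized Gaussian predictors, so the whitening step is trivial; the splicing/sparsity machinery of the paper is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 6
beta = np.zeros(p)
beta[:2] = [1.0, -1.0]
beta /= np.linalg.norm(beta)

X = rng.standard_normal((n, p))              # already standardized by construction
y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(n)

# SIR: slice y, average centered X within each slice, eigendecompose the
# weighted between-slice covariance of those means.
H = 10
order = np.argsort(y)
Xc = X - X.mean(axis=0)
M = np.zeros((p, p))
for chunk in np.array_split(order, H):
    m = Xc[chunk].mean(axis=0)
    M += len(chunk) / n * np.outer(m, m)

w, V = np.linalg.eigh(M)
b_hat = V[:, -1]                             # top eigenvector = estimated direction
cos = abs(b_hat @ beta)                      # alignment with the true direction
print(cos)
```

A best-subset version would restrict `b_hat` to at most k nonzero entries and iterate exchanges (splicing) between the active and inactive sets.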
In recent years, machine learning (ML) techniques have been shown to be effective in accelerating the development of optoelectronic devices. However, as "black box" models, they have limited theoretical interpretability. In this work, we leverage the symbolic regression (SR) technique to discover an explicit symbolic relationship between the structure of an optoelectronic Fabry-Perot (FP) laser and its optical field distribution, which greatly improves model transparency compared with ML. We demonstrate that the expressions found through SR exhibit lower errors on the test set than ML models, suggesting that they have better fitting and generalization capabilities.
Gastric cancer is the third leading cause of cancer-related mortality and remains a major global health issue [1]. Annually, approximately 479,000 individuals in China are diagnosed with gastric cancer, accounting for almost 45% of all new cases worldwide [2].
Triaxial tests, a staple in rock engineering, are labor-intensive, sample-demanding, and costly, making their optimization highly advantageous. These tests are essential for characterizing rock strength, and by adopting a failure criterion they allow the criterion parameters to be derived through regression, facilitating their integration into modeling programs. In this study, we introduce the application of an underutilized statistical technique, orthogonal regression, which is well-suited to analyzing triaxial test data. Additionally, we present an innovation in this technique by minimizing the Euclidean distance while imposing orthogonality between vectors as a constraint, for the case of orthogonal linear regression. We also consider the Modified Least Squares method. We exemplify this approach by developing the equations needed to apply the Mohr-Coulomb, Murrell, Hoek-Brown, and Úcar criteria, and implement them in both spreadsheet calculations and R scripts. Finally, we demonstrate the technique's application on five datasets of varied lithologies from the specialized literature, showcasing its versatility and effectiveness.
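For the linear case, orthogonal regression (total least squares) has a closed form: center the data and take the smallest right singular vector of the stacked residual matrix as the normal of the best-fit line. A minimal sketch on synthetic triaxial-style data, with measurement noise in both the confining and axial stresses (the situation that motivates orthogonal rather than ordinary regression); the numbers are illustrative, not from the paper's datasets:

```python
import numpy as np

def orthogonal_line_fit(x, y):
    """Fit y = a + b*x by minimizing perpendicular (Euclidean) distances:
    total least squares via the smallest right singular vector."""
    xm, ym = x.mean(), y.mean()
    A = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    nx, ny = Vt[-1]                  # normal vector of the best-fit line
    b = -nx / ny                     # slope (assumes the line is not vertical)
    a = ym - b * xm
    return a, b

# Toy Mohr-Coulomb-style data: sigma1 = c0 + c1*sigma3 with noise in BOTH axes.
rng = np.random.default_rng(2)
sigma3 = np.linspace(0.0, 40.0, 30)
sigma1 = 25.0 + 4.0 * sigma3
x_obs = sigma3 + rng.normal(0, 0.5, 30)
y_obs = sigma1 + rng.normal(0, 0.5, 30)

a, b = orthogonal_line_fit(x_obs, y_obs)
print(a, b)
```

Note that this closed form assumes comparable error variances on both axes; with unequal variances the data should be rescaled first (Deming regression).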
To cater to the need for real-time crack monitoring of infrastructure, a CNN-regression model is proposed to estimate crack properties directly from image patches. RGB crack images and their corresponding masks from a public dataset are cropped into patches of 256 square pixels and classified with a pre-trained deep convolutional neural network; the true positives are segmented, and crack properties are extracted using two different methods. The first is based primarily on active contour models and level-set segmentation; the second is a domain adaptation of a mathematical-morphology-based method known as FIL-FINDER. A statistical test compares the two methods, and a database is prepared with the more suitable one. An advanced convolutional-neural-network-based multi-output regression model is then proposed, trained on the prepared database and validated on a held-out dataset, to predict crack length, crack width, and width uncertainty directly from input image patches. The proposed model has been tested on crack patches collected from different locations. Huber loss is used to ensure the robustness of the proposed model, which was selected from a set of 288 variations. Additionally, an ablation study on the top three models demonstrated the influence of each network component on the prediction results. Finally, the best-performing model among the top three, HHc-X, is proposed; it predicted crack properties in close agreement with the ground truth in the test data.
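The Huber loss mentioned above is what makes the regression robust: it is quadratic for small residuals and linear for large ones, so a few grossly mislabeled patches cannot dominate training. A minimal NumPy sketch (delta and the residual values are illustrative):

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: 0.5*r^2 for |r| <= delta, delta*(|r| - 0.5*delta) otherwise."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

r = np.array([0.1, 0.5, 2.0, 10.0])   # the last two act like outliers
sq = 0.5 * r ** 2                     # plain squared-error loss for comparison
hb = huber(r, delta=1.0)
print(hb)   # [0.005, 0.125, 1.5, 9.5]
print(sq)   # [0.005, 0.125, 2.0, 50.0]
```

The outlier at residual 10 contributes 50 under squared loss but only 9.5 under Huber, which is why it stabilizes multi-output regression on noisy crack annotations.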
The impact of different global and local variables on urban development processes requires systematic study to fully comprehend the underlying complexities. The interplay between such variables is crucial for urban growth models to closely reflect reality. Despite extensive research, ambiguity remains about how variations in these input variables influence urban densification. In this study, we conduct a global sensitivity analysis (SA) using a multinomial logistic regression (MNL) model to assess the model's explanatory and predictive power. We examine the influence of global variables, including spatial resolution, neighborhood size, and density classes, under different input combinations at a provincial scale to understand their impact on densification. Additionally, we perform stepwise regression to identify the significant explanatory variables for understanding densification in the Brussels Metropolitan Area (BMA). Our results indicate that finer spatial resolutions of 50 m and 100 m, smaller neighborhood sizes of 5×5 and 3×3, and specific density classes, namely 3 (non-built-up, low and high built-up) and 4 (non-built-up, low, medium and high built-up), optimally explain and predict urban densification. In line with this, the stepwise regression reveals that models with a coarser resolution of 300 m lack significant variables, reflecting lower explanatory power for densification. This approach aids in identifying optimal and significant global variables with high explanatory power for understanding and predicting urban densification. Furthermore, these findings are reproducible in a global urban context, offering valuable insights for planners, modelers and geographers in managing future urban growth and minimizing modelling.
Knowing the influence of dataset size on regression models can help improve the accuracy of solar power forecasts and make the most of renewable energy systems. This research explores the influence of dataset size on the accuracy and reliability of regression models for solar power prediction, contributing to better forecasting methods. The study analyzes data from two solar panels, aSiMicro03036 and aSiTandem72-46, over 7, 14, 17, 21, 28, and 38 days, with each dataset comprising five independent parameters and one dependent parameter, split 80-20 for training and testing. Results indicate that Random Forest consistently outperforms the other models, achieving the highest correlation coefficient of 0.9822 and the lowest mean absolute error (MAE) of 2.0544 on the aSiTandem72-46 panel with 21 days of data. For the aSiMicro03036 panel, the best MAE of 4.2978 was reached using the k-Nearest Neighbor (k-NN) algorithm, configured as instance-based k-nearest neighbors (IBk) in Weka and trained on 17 days of data. Regression performance for most models (excluding IBk) stabilizes at 14 days or more. Compared with the 7-day dataset, increasing to 21 days reduced the MAE by around 20% and improved correlation coefficients by around 2.1%, highlighting the value of moderate dataset expansion. These findings suggest that datasets spanning 17 to 21 days, with 80% used for training, can significantly enhance the predictive accuracy of solar power generation models.
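The study's experimental design (train the same model on progressively longer windows, score on fixed held-out data) can be sketched as below. The data here are synthetic stand-ins, not the panels' measurements: the five inputs, the target function, and the per-day sample count are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)

def make_days(n_days, per_day=48):
    """Synthetic solar-like records: five inputs, one power output."""
    n = n_days * per_day
    X = rng.uniform(0, 1, (n, 5))          # e.g., irradiance, temperature, ...
    y = 100 * X[:, 0] + 10 * X[:, 1] - 5 * X[:, 2] + rng.normal(0, 2, n)
    return X, y

X_test, y_test = make_days(10)             # fixed held-out evaluation set
maes = {}
for days in (7, 14, 21):
    X, y = make_days(days)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    maes[days] = mean_absolute_error(y_test, rf.predict(X_test))
print(maes)   # MAE generally shrinks as the training window grows
```

On real panel data, the curve typically flattens, which is the study's point: beyond roughly two to three weeks the marginal gain from more data becomes small.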
Bigeye tuna is a protein-rich fish that is susceptible to spoilage during cold storage; however, there is limited information on untargeted metabolomic profiling of bigeye tuna with respect to spoilage-associated enzymes and metabolites. This study investigated how cold storage affects the enzyme activities, nutrient composition, tissue microstructure and spoilage metabolites of bigeye tuna. The activities of cathepsins B, H and L increased, Na⁺/K⁺-ATPase and Mg²⁺-ATPase decreased, and α-glucosidase, lipase and lipoxygenase first increased and then decreased during cold storage, suggesting that proteins undergo degradation and ATP metabolism proceeds at a faster rate. Nutrient composition (moisture and lipid content) and total amino acids decreased, indicating that the nutritional value of bigeye tuna was reduced. In addition, a logistic regression equation was established as a food analysis tool to assess the dynamics and correlations of bigeye tuna enzymes during cold storage. Untargeted metabolomic profiling identified a total of 524 metabolites in bigeye tuna, including spoilage metabolites involved in lipid metabolism (glycerophosphocholine and choline phosphate), amino acid metabolism (L-histidine, 5-deoxy-5'-(methylthio)adenosine, 5-methylthioadenosine) and carbohydrate metabolism (D-gluconic acid, α-D-fructose 1,6-bisphosphate, D-glyceraldehyde 3-phosphate). The tissue microstructure of the tuna showed a looser network and visible deterioration of tissue fibers during cold storage. Together, the metabolomic analysis and tissue microstructures provide insight into the spoilage mechanisms of bigeye tuna during cold storage.
In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improving therapeutic effect and safety, and is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through random simulations. Finally, we demonstrate the application of the method by analyzing data from the PA.3 trial, further illustrating its practicality.
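The quantile regression used above rests on the pinball (check) loss: minimizing it at level tau yields the tau-th conditional quantile rather than the mean. A minimal NumPy demonstration that the empirical pinball-loss minimizer is the sample quantile (the skewed toy distribution is illustrative only):

```python
import numpy as np

def pinball(y, q, tau):
    """Check loss underlying quantile regression: asymmetric absolute error."""
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(4)
y = rng.exponential(2.0, 5000)              # skewed responses

# The constant minimizing the tau-pinball loss is the tau-th sample quantile.
grid = np.linspace(0.0, 10.0, 1001)
results = {}
for tau in (0.25, 0.5, 0.9):
    losses = [pinball(y, q, tau) for q in grid]
    results[tau] = grid[int(np.argmin(losses))]
print(results)   # each value is close to np.quantile(y, tau)
```

In the full method, this loss replaces squared error inside a penalized regression, so different quantile levels can reveal treatment-effect heterogeneity in the tails as well as the center of the response distribution.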
This article presents a mathematical model of hybrid nanofluid flow between two infinite parallel plates. One plate remains stationary, while the other moves downward at a squeezing velocity. The space between the plates contains a Darcy-Forchheimer porous medium. A water-based fluid carrying gold (Au) and silicon dioxide (SiO₂) nanoparticles is formulated. In contrast to the conventional Fourier heat flux equation, this study employs the Cattaneo-Christov heat flux equation. A uniform magnetic field is applied perpendicular to the flow direction, invoking magnetohydrodynamic (MHD) effects, and the model accounts for Joule heating, the heat generated when an electric current passes through the fluid. The problem is solved via NDSolve in Mathematica. Numerical and statistical analyses provide insight into the behavior of the nanomaterials between the parallel plates with respect to the flow, energy transport, and skin friction. The findings have potential applications in enhancing cooling systems and optimizing thermal management strategies. It is observed that the squeezing motion generates additional pressure gradients within the fluid, which enhance the flow rate but reduce the frictional drag. Consequently, the fluid is pushed more vigorously between the plates, increasing the flow velocity, and because of the higher flow rate it spends less time in the region between the plates. Thermal relaxation, however, abruptly changes the temperature, leading to a decrease in temperature fluctuations.
Power converters are essential components in modern life, widely used in industry, automation, transportation, and household appliances. In many critical applications, their failure can lead not only to financial losses from operational downtime but also to serious risks to human safety. The capacitors forming the output filter, typically aluminum electrolytic capacitors (AECs), are among the most critical and susceptible components in power converters. The electrolyte in AECs often evaporates over time, causing the internal resistance to rise and the capacitance to drop, ultimately leading to component failure. Detecting this fault normally requires measuring the current in the capacitor, rendering the method invasive and frequently impractical because of spatial constraints or the operational limitations imposed by integrating a current sensor in the capacitor branch. This article proposes an online, non-invasive fault diagnosis technique for estimating the equivalent series resistance (ESR) and capacitance (C) of the capacitor, combining signal processing techniques (SPT) and machine learning (ML) algorithms. The solution relies solely on the converter's input and output signals, making it non-invasive. The ML algorithm used was linear regression, applied to 27 attributes, 21 of which were generated through feature engineering to enhance the model's performance. The proposed solution achieves an R² score greater than 0.99 in the estimation of both ESR and C.
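The key idea above, engineering features from terminal signals so that a plain linear regression can capture an otherwise nonlinear relation, can be sketched as follows. Everything here is synthetic and illustrative: the signal names, the ESR labeling rule, and the engineered features are stand-ins for the paper's 27 attributes, not its actual formulas.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n = 500
# Pretend terminal measurements of a converter's output stage.
v_ripple = rng.uniform(0.05, 0.5, n)   # output voltage ripple (V)
i_ripple = rng.uniform(0.5, 5.0, n)    # current ripple (A)
f_sw = rng.uniform(20e3, 100e3, n)     # switching frequency (Hz)

# Ground truth used only to label the synthetic data (ESR ~ deltaV/deltaI).
esr = 0.8 * v_ripple / i_ripple + rng.normal(0, 1e-4, n)

# Feature engineering: ratios/interactions expose the physically meaningful
# relation to a linear model.
X = np.column_stack([
    v_ripple, i_ripple, f_sw,
    v_ripple / i_ripple,               # the ratio a linear model needs
    v_ripple * f_sw,
])
model = LinearRegression().fit(X, esr)
r2 = r2_score(esr, model.predict(X))
print(r2)   # high, because the informative feature is in X
```

Without the ratio feature, the same linear model would fit poorly; that is the sense in which the paper's 21 engineered attributes carry most of the predictive power.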
Branch size is a crucial characteristic, closely linked to both tree growth and wood quality. A review of existing branch size models reveals various approaches, but the ability to estimate branch diameter and length within the same whorl remains underexplored. In this study, 77 trees were sampled from Northeast China to model the vertical distribution of branch diameter and length within each whorl along the crown. Several commonly used functions were taken as candidate model forms, and the quantile regression method was employed and compared with the classical two-step modeling approach. The analysis incorporated stand, tree, and competition factors, with a particular focus on how these factors influence branches of varying sizes. The modified Weibull function was chosen as the optimal model owing to its excellent performance across all quantiles. Eight quantile regression curves (from 0.20 to 0.85) were combined to predict branch diameter, and seven curves (from 0.20 to 0.80) were used for branch length. The results showed that the quantile regression method outperformed the classical approach in model fitting and validation, likely because it can estimate different rates of change across the entire branch size distribution. Larger branches in each whorl were more sensitive to changes in DBH, crown length (CL), crown ratio (CR) and dominant tree height (H_dom), while slenderness (HDR) more effectively influenced small and medium-sized branches. The effect of stand basal area (BAS) was relatively consistent across branch sizes. The findings indicate that quantile regression is not only a more accurate method for predicting branch size but also a valuable tool for understanding how branch growth responds to stand and tree factors. The models developed in this study are ready to be integrated into tree growth and yield simulation systems, contributing to the assessment and promotion of wood quality.
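A Weibull-type profile of branch diameter against relative crown depth can be fitted with nonlinear least squares as below. The functional form and parameter values are illustrative stand-ins for the paper's modified Weibull function, and this is the classical (mean-curve) fit; the quantile version would replace the squared-error objective with the pinball loss at each quantile level.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_profile(x, a, b, c):
    """Generic Weibull-type curve: diameter rises from 0 at the crown tip
    toward an asymptote a deeper into the crown."""
    return a * (1.0 - np.exp(-b * x ** c))

rng = np.random.default_rng(6)
x = rng.uniform(0.05, 1.0, 300)            # relative depth into the crown
true = weibull_profile(x, 30.0, 2.5, 1.3)  # "true" branch diameter, mm
d = true + rng.normal(0, 1.0, 300)         # noisy field measurements

popt, _ = curve_fit(weibull_profile, x, d, p0=[20.0, 1.0, 1.0], maxfev=10000)
fitted = weibull_profile(x, *popt)
print(popt)   # close to the generating parameters (30, 2.5, 1.3)
```

Fitting the same form at several quantile levels, then stacking the curves, reproduces the paper's picture of how large and small branches respond differently to tree and stand variables.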
The packaging quality of coaxial laser diodes (CLDs) plays a pivotal role in determining their optical performance and long-term reliability. As the core packaging process, high-precision laser welding requires precise control of process parameters to suppress optical power loss. However, the complex nonlinear relationship between welding parameters and optical power loss renders traditional trial-and-error methods inefficient and imprecise. To address this challenge, a physics-informed (PI), data-driven collaborative approach to welding parameter optimization is proposed. First, a thermal-fluid-solid coupled finite element method (FEM) was employed to quantify the sensitivity of physical characteristics, including residual stress, to the welding parameters; this analysis identified the critical factors contributing to optical power loss. Next, a Gaussian process regression (GPR) model incorporating prior knowledge from the finite element simulations was constructed on the selected features. By introducing physics-informed kernel (PIK) functions, stress distribution patterns were embedded into the prediction model, achieving high-precision optical power loss prediction. Finally, a Bayesian optimization (BO) algorithm with an adaptive sampling strategy was implemented for efficient exploration of the parameter space. Experimental results demonstrate that the proposed method establishes explicit physical correlations between welding parameters and optical power loss. The optimized welding parameters reduced optical power loss by 34.1%, providing theoretical guidance and technical support for reliable CLD packaging.
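The GPR-plus-BO loop described above can be sketched generically: fit a Gaussian process surrogate to the welds evaluated so far, then pick the next welding parameter by minimizing a lower-confidence-bound acquisition. The objective, kernel, and exploration constant below are illustrative; the paper's physics-informed kernel and adaptive sampling strategy are replaced by a standard RBF kernel and a fixed LCB rule.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

def power_loss(p):
    """Stand-in for optical power loss vs. a single normalized welding parameter."""
    return (p - 0.6) ** 2 + 0.01 * rng.standard_normal()

# A few initial welds at random parameter settings.
P = rng.uniform(0, 1, 6).reshape(-1, 1)
y = np.array([power_loss(p[0]) for p in P])

kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(1e-4)
grid = np.linspace(0, 1, 201).reshape(-1, 1)

# BO loop: refit the surrogate, then sample where the LCB is lowest.
for _ in range(10):
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(P, y)
    mu, sd = gpr.predict(grid, return_std=True)
    nxt = grid[int(np.argmin(mu - 1.5 * sd))]   # LCB acquisition (minimization)
    P = np.vstack([P, [nxt]])
    y = np.append(y, power_loss(nxt))

best = P[int(np.argmin(y))][0]
print(best)   # should settle near the true optimum at 0.6
```

A physics-informed variant would add a kernel term derived from the simulated stress field, so the surrogate's prior already encodes which parameter regions produce low residual stress.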
Pinus densiflora is a pine species native to the Korean peninsula, and seed orchards have supplied the material needed for afforestation in South Korea. The climate variables affecting seed production have not been identified. The purpose of this study was to determine the climate variables that influence annual seed production in two seed orchards using multiple linear regression (MLR), elastic net regression (ENR) and partial least squares regression (PLSR) models. The PLSR model included 12 climatic variables from 2003 to 2020 and explained 74.3% of the total variation in seed production. It showed better predictive performance (R² = 0.662) than the ENR (0.516) and MLR (0.366) models. Among the 12 climatic variables, July temperature two years prior to seed production and July precipitation one year prior had the strongest influence on seed production. The time periods indicated by these two variables correspond to pollen cone initiation and female gametophyte development. The results will be helpful for developing seed collection plans, selecting new orchard sites with favorable climatic conditions, and investigating the relationships between seed production and climatic factors in related pine species.
Sonic Hedgehog medulloblastoma (SHH-MB) is one of the four primary molecular subgroups of medulloblastoma and is estimated to be responsible for nearly one-third of all MB cases. Using transcriptomic and DNA methylation profiling techniques, recent work in this field has identified four molecular subtypes of SHH-MB. These subtypes show distinct DNA methylation patterns that allow them to be discriminated from overlapping subtypes and that predict clinical outcomes. Class overlap occurs when two or more classes share common features, making it difficult to distinguish them as separate. Using a DNA methylation dataset, a novel classification technique is presented to address the issue of overlapping SHH-MB subtypes. Penalized multinomial regression (PMR), Tomek links (TL), and singular value decomposition (SVD) are smoothly integrated into a single framework. SVD and the group lasso improve computational efficiency, address the problem of high-dimensional datasets, and sharpen class distinctions by removing redundant or irrelevant features that might lead to class overlap. To counter decision-boundary overlap and class imbalance in the classification task, TL improves dataset balance and clarifies decision boundaries by eliminating overlapping samples. Using fivefold cross-validation, the proposed method (TL-SVDPMR) achieved a remarkable overall accuracy of almost 95% in classifying SHH-MB molecular subtypes. The results demonstrate the strong performance of the proposed classification model across the various SHH-MB subtypes, with high average area under the curve (AUC) values. Additionally, statistical significance testing indicates that TL-SVDPMR is more accurate than both SVM and random forest in classifying the overlapping SHH-MB subtypes, highlighting its importance for precision medicine applications. Our findings emphasize the success of combining SVD, TL, and PMR techniques to improve classification performance for biomedical applications with many features and overlapping subtypes.
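The Tomek-link step above has a simple definition: two samples form a link when they are mutual nearest neighbors but carry different labels; removing (or relabeling) such pairs cleans the overlapping boundary region. A minimal NumPy implementation on a tiny 1-D toy dataset (the points and labels are invented to put exactly one pair at the class boundary):

```python
import numpy as np

def tomek_links(X, y):
    """Find Tomek links: pairs of mutual nearest neighbors with different
    labels. Removing them sharpens overlapping decision boundaries."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbor
    nn = d.argmin(axis=1)                  # nearest neighbor of each sample
    return [(i, j) for i, j in enumerate(nn)
            if nn[j] == i and i < j and y[i] != y[j]]

# Two 1-D clusters whose closest members overlap at the boundary.
X = np.array([[0.0], [0.4], [1.0], [1.05], [1.7], [2.2]])
y = np.array([0, 0, 0, 1, 1, 1])
print(tomek_links(X, y))   # [(2, 3)] - the boundary pair forms a Tomek link
```

In the full pipeline this runs after the SVD/group-lasso reduction, so the links are computed in the compact feature space rather than over all methylation probes.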
Seismic fault rupture can extend to the surface, and the resulting surface deformation can severely damage civil engineering structures crossing fault zones. Coseismic surface rupture prediction models (CSRPMs) play a crucial role in the structural design of fault-crossing engineering and in the hazard analysis of fault-intensive areas. In this study, a new global coseismic surface rupture database was constructed by compiling 171 earthquake events (Mw 5.5-7.9) that caused surface rupture. In contrast to the fault classification used in traditional empirical relationships, this study categorizes earthquake events as strike-slip, dip-slip, and oblique-slip. CSRPMs based on Bayesian ridge regression (BRR) were developed to estimate parameters such as surface rupture length, average displacement, and maximum displacement. Grounded in Bayesian theory, BRR combines the benefits of ridge regression and Bayesian linear regression, effectively addressing overfitting while ensuring strong model robustness. The reliability of the CSRPMs was validated through residual analysis and comparison with post-earthquake observations from the 2023 Türkiye earthquake doublet. The BRR-based CSRPMs with the new fault classification criteria are well suited to the probabilistic hazard analysis of complex fault systems and the dislocation design of fault-crossing engineering.
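A Bayesian ridge fit of a magnitude-scaling relation can be sketched in a few lines. The coefficients and noise level below are invented for the synthetic data and are not the paper's CSRPM coefficients; the point is that BRR returns both a regularized point estimate and a predictive standard deviation, which is what makes it usable in probabilistic hazard analysis.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(9)
n = 171                                    # size of the compiled database
Mw = rng.uniform(5.5, 7.9, n)
# Synthetic scaling relation: log10(surface rupture length) grows with Mw.
log_srl = -3.0 + 0.6 * Mw + rng.normal(0, 0.2, n)

model = BayesianRidge().fit(Mw.reshape(-1, 1), log_srl)
mean, std = model.predict(np.array([[7.0]]), return_std=True)
print(model.coef_[0], mean[0], std[0])     # slope near 0.6, plus uncertainty
```

In practice one such model would be fitted per fault class (strike-slip, dip-slip, oblique-slip) and per response (rupture length, average and maximum displacement).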
Damage to electrical equipment in an earthquake can lead to power outages in power systems. Seismic fragility analysis is a common method for assessing the seismic reliability of electrical equipment. To further guarantee the efficiency of the analysis, multi-source uncertainties, including those of the structure itself and of the seismic excitation, need to be considered. This study develops a seismic fragility analysis method that reflects both structural and seismic parameter uncertainty. The proposed method uses random sampling based on Latin hypercube sampling (LHS) to account for structural parameter uncertainty and the group structure characteristics of electrical equipment. Logistic Lasso regression (LLR) is then used to find the seismic fragility surface based on two ground motion intensity measures (IMs). The seismic fragility of a finite element model of a ±1000 kV main transformer (UHVMT) was analyzed with the proposed method. The results show that the resulting seismic fragility function can be used to relate the uncertainty parameters to the failure probability. The seismic fragility surface not only provided the probabilities of seismic damage states under different IMs but also showed better stability than the fragility curve. Furthermore, sensitivity analysis of the structural parameters revealed that the elastic modulus of the bushing and the height of the high-voltage bushing may have the greatest influence.
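Latin hypercube sampling, the first step of the method above, stratifies each parameter range into n equal-probability bins and draws exactly one sample per bin per dimension, covering the space far more evenly than plain Monte Carlo for the same budget. A minimal implementation (the three parameter ranges are invented placeholders, not the transformer model's values):

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """One sample in each of n equal-probability strata per dimension."""
    u = rng.uniform(size=(n, dim))                       # jitter within strata
    strata = np.array([rng.permutation(n) for _ in range(dim)]).T
    return (strata + u) / n                              # values in [0, 1)

rng = np.random.default_rng(10)
n = 50
samples = latin_hypercube(n, 3, rng)      # e.g., 3 uncertain structural parameters

# Scale the unit cube to physical ranges (illustrative: modulus, damping, mass).
lo = np.array([30e9, 0.01, 0.9])
hi = np.array([70e9, 0.05, 1.1])
params = lo + samples * (hi - lo)
print(params.shape)   # (50, 3): one finite element model per row
```

Each sampled row defines one structural realization; pairing the realizations with a suite of ground motions produces the response data that the logistic Lasso regression then turns into a fragility surface.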
Background: COVID-19's impact on influenza activity is of interest for informing future flu prevention and control strategies. Our study aims to examine COVID-19's effects on influenza in Fujian Province, China, using a regression discontinuity (RD) design. Methods: We used the influenza-like illness (ILI) percentage as an indicator of influenza activity, with data from all sentinel hospitals between Week 4, 2020, and Week 51, 2023. The data were divided into two groups: the COVID-19 epidemic period and the post-epidemic period. Statistical analysis was performed in R using robust RD design methods to account for potential confounders, including seasonality, temperature, and influenza vaccination rates. Results: There was a discernible increase in the ILI percentage during the post-epidemic period. The robustness of the findings was confirmed with various RD bandwidth selection methods and placebo tests, with the certwo bandwidth providing the largest estimated effect size: a 14.6-percentage-point increase in the ILI percentage (β = 0.146; 95% CI: 0.096-0.196). Sensitivity analyses and adjustments for confounders consistently pointed to an increased ILI percentage during the post-epidemic period compared with the epidemic period. Conclusion: The 14.6-percentage-point increase in the ILI percentage in Fujian Province, China, after the end of the COVID-19 pandemic suggests that public health measures to control influenza transmission may need to be re-evaluated and possibly enhanced. Further research is needed to fully understand the factors contributing to this rise and to assess the ongoing impacts of post-pandemic behavioral changes.
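The core RD estimator can be sketched without any specialized package: fit separate local linear regressions on each side of the cutoff within a bandwidth and take the difference of the intercepts at the cutoff. The data below are synthetic, with the jump set to 0.146 purely to mirror the reported effect size; the study itself used robust RD methods in R (e.g., data-driven bandwidth selectors such as certwo) rather than this bare-bones version.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 3000
t = rng.uniform(-1.0, 1.0, n)              # running variable, centered at cutoff
jump = 0.146                               # discontinuity to recover
ili = 0.05 + 0.02 * t + jump * (t >= 0) + rng.normal(0, 0.03, n)

def rd_estimate(t, y, bandwidth):
    """Sharp RD: separate local linear fits left/right of the cutoff,
    difference of the fitted values at t = 0."""
    left = (t < 0) & (t > -bandwidth)
    right = (t >= 0) & (t < bandwidth)
    bl = np.polyfit(t[left], y[left], 1)
    br = np.polyfit(t[right], y[right], 1)
    return np.polyval(br, 0.0) - np.polyval(bl, 0.0)

est = rd_estimate(t, ili, bandwidth=0.5)
print(est)   # recovers a jump close to 0.146
```

Production RD analyses add kernel weights, bias correction, and robust confidence intervals; the bandwidth sensitivity checks reported in the study correspond to rerunning `rd_estimate` over a range of bandwidths.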
基金supported by the National Natural Science Foundation of China(62375013).
文摘As the core component of inertial navigation systems, the fiber optic gyroscope (FOG), with technical advantages such as low power consumption, long lifespan, fast startup, and flexible structural design, is widely used in aerospace, unmanned driving, and other fields. However, due to the temperature sensitivity of optical devices, environmental temperature introduces errors into the FOG, greatly limiting its output accuracy. This work investigates machine-learning based temperature error compensation techniques for the FOG, focusing on compensating the bias errors generated in the fiber ring by the Shupe effect. It proposes a composite model based on k-means clustering, support vector regression, and particle swarm optimization algorithms, and significantly reduces redundancy within the samples by adopting interval-sequence sampling. Metrics including root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model’s performance and compensation effectiveness. The approach effectively enhances the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared with existing methods that use a single machine learning model, the proposed method raises the bias stability improvement of the compensated FOG from 57.11% to 71.98% and enhances the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the post-compensation accuracy of the FOG, providing theoretical guidance and technical references for sensor error compensation in other fields.
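The clustering-plus-regression pipeline described above can be sketched in a few lines. This is a minimal illustration on synthetic data: the temperature profile, the Shupe-like drift term, and the fixed SVR hyperparameters (which the paper tunes with particle swarm optimization) are all assumptions, not the authors' values.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic temperature ramp and a bias error that depends on the
# temperature gradient (a stand-in for the Shupe-effect drift).
t = np.linspace(0, 1, 600)
temp = 20 + 40 * t + 2 * np.sin(8 * np.pi * t)
grad = np.gradient(temp, t)
bias = 0.01 * grad + 0.05 * np.sin(temp / 5) + rng.normal(0, 0.005, t.size)

X = np.column_stack([temp, grad])

# Step 1: k-means splits the samples into temperature regimes.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: one SVR per cluster; C, gamma, epsilon would come from PSO,
# fixed illustrative values are used here.
models = {}
for k in range(3):
    mask = km.labels_ == k
    models[k] = SVR(C=10.0, gamma="scale", epsilon=0.01).fit(X[mask], bias[mask])

# Compensation: predict the bias for each sample with its cluster's SVR
# and subtract it from the raw output.
pred = np.empty_like(bias)
for k, m in models.items():
    mask = km.labels_ == k
    pred[mask] = m.predict(X[mask])

residual = bias - pred
print(residual.std() < bias.std())  # compensation shrinks the drift
```

In practice the cluster count and the per-cluster hyperparameters would be selected on held-out data rather than fixed as here.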
基金supported by DST-FIST(Government of India)(Grant No.SR/FIST/MS-1/2017/13)and Seed Money Project(Grant No.DoRDC/733).
文摘This study numerically examines the heat and mass transfer characteristics of two ternary nanofluids via converging and diverging channels.Furthermore,the study aims to assess two ternary nanofluid combinations to determine which configuration can provide better heat and mass transfer and lower entropy production,while ensuring cost efficiency.This work bridges the gap between academic research and industrial feasibility by incorporating cost analysis,entropy generation,and thermal efficiency.To compare the velocity,temperature,and concentration profiles,we examine two ternary nanofluids,i.e.,TiO_(2)+SiO_(2)+Al_(2)O_(3)/H_(2)O and TiO_(2)+SiO_(2)+Cu/H_(2)O,while considering the shape of nanoparticles.The velocity slip and Soret/Dufour effects are taken into consideration.Furthermore,regression analysis for the Nusselt and Sherwood numbers of the model is carried out.The Runge-Kutta fourth-order method with the shooting technique is employed to acquire the numerical solution of the governing system of ordinary differential equations.The flow pattern attributes of the ternary nanofluids are meticulously examined and simulated with the fluctuation of flow-dominating parameters.Additionally,the influence of these parameters is demonstrated in the flow,temperature,and concentration fields.For variation in the Eckert and Dufour numbers,TiO_(2)+SiO_(2)+Al_(2)O_(3)/H_(2)O has a higher temperature than TiO_(2)+SiO_(2)+Cu/H_(2)O.The results obtained indicate that the ternary nanofluid TiO_(2)+SiO_(2)+Al_(2)O_(3)/H_(2)O has a higher heat transfer rate,lower entropy generation,greater mass transfer rate,and lower cost than the TiO_(2)+SiO_(2)+Cu/H_(2)O ternary nanofluid.
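The fourth-order Runge-Kutta solver with the shooting technique named above works as sketched below on a toy boundary-value problem. The nanofluid ODE system itself is not reproduced; the model equation, grid, and tolerance here are illustrative.

```python
import numpy as np

def rk4(f, y0, xs):
    """Classical fourth-order Runge-Kutta march over the grid xs."""
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = f(x0, y)
        k2 = f(x0 + h / 2, y + h / 2 * k1)
        k3 = f(x0 + h / 2, y + h / 2 * k2)
        k4 = f(x1, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

# Model BVP: y'' = -y with y(0) = 0, y(1) = 1 (exact: sin x / sin 1).
f = lambda x, y: np.array([y[1], -y[0]])
xs = np.linspace(0.0, 1.0, 101)

def end_value(slope):
    # Integrate the IVP with a guessed initial slope, return y(1).
    return rk4(f, [0.0, slope], xs)[-1, 0]

# Shooting: adjust the unknown initial slope by the secant method
# until the far-end boundary condition y(1) = 1 is met.
s0, s1 = 0.5, 2.0
for _ in range(20):
    g0, g1 = end_value(s0) - 1.0, end_value(s1) - 1.0
    if abs(g1 - g0) < 1e-14:
        break
    s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)

y = rk4(f, [0.0, s1], xs)[:, 0]
exact = np.sin(xs) / np.sin(1.0)
print(np.max(np.abs(y - exact)))  # RK4 error, well below 1e-6 at this step size
```

For the nanofluid system, `f` would encode the coupled momentum, energy, and concentration equations, with several unknown initial slopes shot simultaneously.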
文摘In this study,we examine the problem of sliced inverse regression(SIR),a widely used method for sufficient dimension reduction(SDR).It was designed to find reduced-dimensional versions of multivariate predictors by replacing them with a minimally adequate collection of their linear combinations without loss of information.Recently,regularization methods have been proposed in SIR to incorporate a sparse structure of predictors for better interpretability.However,existing methods consider convex relaxation to bypass the sparsity constraint,which may not lead to the best subset,and particularly tends to include irrelevant variables when predictors are correlated.In this study,we approach sparse SIR as a nonconvex optimization problem and directly tackle the sparsity constraint by establishing the optimal conditions and iteratively solving them by means of the splicing technique.Without employing convex relaxation on the sparsity constraint and the orthogonal constraint,our algorithm exhibits superior empirical merits,as evidenced by extensive numerical studies.Computationally,our algorithm is much faster than the relaxed approach for the natural sparse SIR estimator.Statistically,our algorithm surpasses existing methods in terms of accuracy for central subspace estimation and best subset selection and sustains high performance even with correlated predictors.
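For reference, the vanilla (unregularized) SIR estimator that the sparse variants above build on can be written in a few lines: standardize the predictors, slice the response, average the predictors within slices, and take the leading eigenvector of the between-slice covariance. The data, slice count, and monotone link below are illustrative, and the paper's splicing-based sparsity step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:2] = [1.0, -1.0]                 # true sparse direction
index = X @ beta
y = index + 0.25 * index ** 3 + 0.1 * rng.normal(size=n)

# Vanilla SIR: standardize, slice the response into H bins,
# and eigendecompose the covariance of the slice means.
Z = (X - X.mean(0)) / X.std(0)
H = 10
order = np.argsort(y)
M = np.zeros((p, p))
for idx in np.array_split(order, H):
    m = Z[idx].mean(0)
    M += len(idx) / n * np.outer(m, m)

w, V = np.linalg.eigh(M)
d = V[:, -1]                           # leading SIR direction

# The estimate should align with the true direction (up to sign/scale).
cos = abs(d @ beta) / (np.linalg.norm(d) * np.linalg.norm(beta))
print(cos)  # close to 1
```

The sparse methods discussed in the abstract replace the dense eigenvector `d` with one constrained to have few nonzero entries, which is where the splicing iterations come in.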
基金supported by the National Natural Science Foundation of China(No.92370117)the CAS Project for Young Scientists in Basic Research(No.YSBR-090)。
文摘In recent years,machine learning(ML)techniques have been shown to be effective in accelerating the development process of optoelectronic devices.However,as "black box" models,they have limited theoretical interpretability.In this work,we leverage the symbolic regression(SR)technique for discovering the explicit symbolic relationship between the structure of the optoelectronic Fabry-Perot(FP)laser and its optical field distribution,which greatly improves model transparency compared to ML.We demonstrated that the expressions explored through SR exhibit lower errors on the test set compared to ML models,which suggests that the expressions have better fitting and generalization capabilities.
基金supported by the Natural Science Foundation of Shanghai(23ZR1463600)Shanghai Pudong New Area Health Commission Research Project(PW2021A-69)Research Project of Clinical Research Center of Shanghai Health Medical University(22MC2022002)。
文摘Gastric cancer is the third leading cause of cancer-related mortality and remains a major global health issue^([1]).Annually,approximately 479,000 individuals in China are diagnosed with gastric cancer,accounting for almost 45%of all new cases worldwide^([2]).
文摘Triaxial tests,a staple in rock engineering,are labor-intensive,sample-demanding,and costly,making their optimization highly advantageous.These tests are essential for characterizing rock strength,and by adopting a failure criterion,they allow for the derivation of criterion parameters through regression,facilitating their integration into modeling programs.In this study,we introduce the application of an underutilized statistical technique—orthogonal regression—well-suited for analyzing triaxial test data.Additionally,we present an innovation in this technique by minimizing the Euclidean distance while incorporating orthogonality between vectors as a constraint,for the case of orthogonal linear regression.Also,we consider the Modified Least Squares method.We exemplify this approach by developing the necessary equations to apply the Mohr-Coulomb,Murrell,Hoek-Brown,andÚcar criteria,and implement these equations in both spreadsheet calculations and R scripts.Finally,we demonstrate the technique's application using five datasets of varied lithologies from specialized literature,showcasing its versatility and effectiveness.
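A minimal sketch of orthogonal linear regression for a linear failure criterion such as Mohr-Coulomb follows. The perpendicular-distance minimizer is the total-least-squares fit, obtained from the SVD of the centered data; the synthetic stress values are illustrative, and the paper's constrained Euclidean-distance formulation and Modified Least Squares variant are not reproduced here.

```python
import numpy as np

def orthogonal_line_fit(x, y):
    """Total-least-squares line fit: minimize perpendicular distances.

    The best-fit direction is the leading right singular vector of the
    centered data, so the line's unit normal is the trailing one.
    """
    pts = np.column_stack([x, y])
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    a, b = Vt[-1]            # unit normal (a, b) of the line
    d = a * c[0] + b * c[1]  # line equation: a*x + b*y = d
    return -a / b, d / b     # slope, intercept

# Triaxial-style data: sigma1 vs sigma3 with a linear criterion,
# and measurement error in BOTH variables (the case TLS is built for).
rng = np.random.default_rng(2)
s3 = np.linspace(0, 30, 40)
s1 = 50 + 4.0 * s3 + rng.normal(0, 1.0, s3.size)
s3 = s3 + rng.normal(0, 1.0, s3.size)

slope, intercept = orthogonal_line_fit(s3, s1)
print(round(slope, 1), round(intercept, 1))  # near 4.0 and 50
```

Ordinary least squares would attenuate the slope when the confining pressure is also noisy; the orthogonal fit treats both axes symmetrically, which is the motivation the abstract gives for the technique.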
文摘To cater to the need for real-time crack monitoring of infrastructural facilities,a CNN-regression model is proposed to directly estimate the crack properties from patches.RGB crack images and their corresponding masks obtained from a public dataset are cropped into patches of 256 square pixels that are classified with a pre-trained deep convolution neural network,the true positives are segmented,and crack properties are extracted using two different methods.The first method is primarily based on active contour models and level-set segmentation and the second method consists of the domain adaptation of a mathematical morphology-based method known as FIL-FINDER.A statistical test has been performed for the comparison of the stated methods and a database prepared with the more suitable method.An advanced convolution neural network-based multi-output regression model has been proposed which was trained with the prepared database and validated with the held-out dataset for the prediction of crack-length,crack-width,and width-uncertainty directly from input image patches.The proposed model has been tested on crack patches collected from different locations.Huber loss has been used to ensure the robustness of the proposed model selected from a set of 288 different variations of it.Additionally,an ablation study has been conducted on the top 3 models that demonstrated the influence of each network component on the prediction results.Finally,the best performing model HHc-X among the top 3 has been proposed that predicted crack properties which are in close agreement to the ground truths in the test data.
基金funded by the INTER program and cofunded by the Fond National de la Recherche,Luxembourg(FNR)and the Fund for Scientific Research-FNRS,Belgium(F.R.S-FNRS),T.0233.20-‘Sustainable Residential Densification’project(SusDens,2020–2024).
文摘The impact of different global and local variables in urban development processes requires a systematic study to fully comprehend the underlying complexities in them.The interplay between such variables is crucial for modelling urban growth that closely reflects reality.Despite extensive research,ambiguity remains about how variations in these input variables influence urban densification.In this study,we conduct a global sensitivity analysis(SA)using a multinomial logistic regression(MNL)model to assess the model’s explanatory and predictive power.We examine the influence of global variables,including spatial resolution,neighborhood size,and density classes,under different input combinations at a provincial scale to understand their impact on densification.Additionally,we perform a stepwise regression to identify the significant explanatory variables that are important for understanding densification in the Brussels Metropolitan Area(BMA).Our results indicate that a finer spatial resolution of 50 m and 100 m,smaller neighborhood sizes of 5×5 and 3×3,and specific density classes—namely 3(non-built-up,low and high built-up)and 4(non-built-up,low,medium and high built-up)—optimally explain and predict urban densification.In line with this,the stepwise regression reveals that models with a coarser resolution of 300 m lack significant variables,reflecting a lower explanatory power for densification.This approach aids in identifying optimal and significant global variables with higher explanatory power for understanding and predicting urban densification.Furthermore,these findings are reproducible in a global urban context,offering valuable insights for planners,modelers and geographers in managing future urban growth and minimizing modelling.
文摘Knowing the influence of the size of datasets for regression models can help in improving the accuracy of a solar power forecast and make the most out of renewable energy systems.This research explores the influence of dataset size on the accuracy and reliability of regression models for solar power prediction,contributing to better forecasting methods.The study analyzes data from two solar panels,aSiMicro03036 and aSiTandem72-46,over 7,14,17,21,28,and 38 days,with each dataset comprising five independent and one dependent parameter,and split 80–20 for training and testing.Results indicate that Random Forest consistently outperforms other models,achieving the highest correlation coefficient of 0.9822 and the lowest Mean Absolute Error(MAE)of 2.0544 on the aSiTandem72-46 panel with 21 days of data.For the aSiMicro03036 panel,the best MAE of 4.2978 was reached using the k-Nearest Neighbor(k-NN)algorithm,which was set up as instance-based k-Nearest neighbors(IBk)in Weka after being trained on 17 days of data.Regression performance for most models(excluding IBk)stabilizes at 14 days or more.Compared to the 7-day dataset,increasing to 21 days reduced the MAE by around 20%and improved correlation coefficients by around 2.1%,highlighting the value of moderate dataset expansion.These findings suggest that datasets spanning 17 to 21 days,with 80%used for training,can significantly enhance the predictive accuracy of solar power generation models.
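The dataset-size experiment described above can be reproduced in outline with a Random Forest on synthetic data. Everything below is illustrative: the five predictors, the power formula, and the sample rate per day are assumptions, not the aSiMicro03036/aSiTandem72-46 measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)

def make_days(n_days, samples_per_day=48):
    """Synthetic solar records: 5 predictors, 1 power target (illustrative)."""
    n = n_days * samples_per_day
    X = rng.uniform(size=(n, 5))            # irradiance, temperature, etc.
    y = (100 * X[:, 0] * (1 - 0.2 * X[:, 1]) + 5 * X[:, 2]
         + rng.normal(0, 2.0, n))
    return X, y

# MAE versus training-window length, using the paper's 80/20 split.
maes = {}
for days in (7, 14, 21):
    X, y = make_days(days)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xtr, ytr)
    maes[days] = mean_absolute_error(yte, rf.predict(Xte))
    print(days, round(maes[days], 2))
```

On real panel data the gain from a longer window also depends on weather variability, which is why the study reports stabilization around 14 to 21 days rather than monotone improvement.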
基金supported by the Shanghai Sailing Program(22YF1416300)Youth Fund Project of National Natural Science Foundation of China(32202117)+1 种基金National Key Research and Development Program of China(2022YFD2100104)the China Agriculture Research System(CARS-47).
文摘Bigeye tuna is a protein-rich fish that is susceptible to spoilage during cold storage;however,there is limited information on untargeted metabolomic profiling of bigeye tuna concerning spoilage-associated enzymes and metabolites.This study aimed to investigate how cold storage affects enzyme activities,nutrient composition,tissue microstructures and spoilage metabolites of bigeye tuna.The activities of cathepsins B,H and L increased,Na^(+)/K^(+)-ATPase and Mg^(2+)-ATPase decreased,and α-glucosidase,lipase and lipoxygenase first increased and then decreased during cold storage,suggesting that proteins undergo degradation and ATP metabolism occurs at a faster rate during cold storage.Nutrient composition(moisture and lipid content)and total amino acids decreased,suggesting that the nutritional value of bigeye tuna was reduced.In addition,a logistic regression equation was established as a food analysis tool to assess the dynamics and correlation of the enzymes of bigeye tuna during cold storage.Based on untargeted metabolomic profiling analysis,a total of 524 metabolites were identified in the bigeye tuna,including several spoilage metabolites involved in lipid metabolism(glycerophosphocholine and choline phosphate),amino acid metabolism(L-histidine,5-deoxy-5′-(methylthio)adenosine,5-methylthioadenosine),and carbohydrate metabolism(D-gluconic acid,α-D-fructose 1,6-bisphosphate,D-glyceraldehyde 3-phosphate).The tissue microstructures of tuna showed a looser network and visible deterioration of tissue fibers during cold storage.Therefore,metabolomic analysis and tissue microstructures provide insight into the spoilage mechanisms of bigeye tuna during cold storage.
基金Supported by the Natural Science Foundation of Fujian Province(2022J011177,2024J01903)the Key Project of Fujian Provincial Education Department(JZ230054)。
文摘In clinical research,subgroup analysis can help identify patient groups that respond better or worse to specific treatments,improve therapeutic effect and safety,and is of great significance in precision medicine.This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers.We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold,and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables.Quantile regression is used to better characterize the global distribution of the response variable and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers.The effectiveness of our proposed methodology for both variable selection and parameter estimation is verified through random simulations.Finally,we demonstrate the application of this method by analyzing data from the PA.3 trial,further illustrating the practicality of the method proposed in this paper.
文摘This article presents a mathematical model addressing a scenario involving a hybrid nanofluid flow between two infinite parallel plates.One plate remains stationary,while the other moves downward at a squeezing velocity.The space between these plates contains a Darcy-Forchheimer porous medium.A mixture of water-based fluid with gold(Au)and silicon dioxide(SiO_(2))nanoparticles is formulated.In contrast to the conventional Fourier's heat flux equation,this study employs the Cattaneo-Christov heat flux equation.A uniform magnetic field is applied perpendicular to the flow direction,invoking magnetohydrodynamic(MHD)effects.Further,the model accounts for Joule heating,which is the heat generated when an electric current passes through the fluid.The problem is solved via NDSolve in MATHEMATICA.Numerical and statistical analyses are conducted to provide insights into the behavior of the nanomaterials between the parallel plates with respect to the flow,energy transport,and skin friction.The findings of this study have potential applications in enhancing cooling systems and optimizing thermal management strategies.It is observed that the squeezing motion generates additional pressure gradients within the fluid,which enhances the flow rate but reduces the frictional drag.Consequently,the fluid is pushed more vigorously between the plates,increasing the flow velocity.As the fluid experiences higher flow rates due to the increased squeezing effect,it spends less time in the region between the plates.The thermal relaxation,however,abruptly changes the temperature,leading to a decrease in the temperature fluctuations.
文摘Power converters are essential components in modern life,being widely used in industry,automation,transportation,and household appliances.In many critical applications,their failure can lead not only to financial losses due to operational downtime but also to serious risks to human safety.The capacitors forming the output filter,typically aluminumelectrolytic capacitors(AECs),are among the most critical and susceptible components in power converters.The electrolyte in AECs often evaporates over time,causing the internal resistance to rise and the capacitance to drop,ultimately leading to component failure.Detecting this fault requires measuring the current in the capacitor,rendering the method invasive and frequently impractical due to spatial constraints or operational limitations imposed by the integration of a current sensor in the capacitor branch.This article proposes the implementation of an online noninvasive fault diagnosis technique for estimating the Equivalent Series Resistance(ESR)and Capacitance(C)values of the capacitor,employing a combination of signal processing techniques(SPT)and machine learning(ML)algorithms.This solution relies solely on the converter’s input and output signals,therefore making it a non-invasive approach.The ML algorithm used was linear regression,applied to 27 attributes,21 of which were generated through feature engineering to enhance the model’s performance.The proposed solution demonstrates an R^(2) score greater than 0.99 in the estimation of both ESR and C.
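The estimation idea above, linear regression on engineered attributes derived from the converter's input/output signals, can be sketched as follows. The observables, the ESR ground-truth formula, and the feature set are invented for illustration; the paper uses 27 attributes from its own signal-processing stage.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 1000
# Hypothetical non-invasive observables of a converter: output ripple
# amplitude, ripple slope, load current, switching frequency.
ripple = rng.uniform(0.01, 0.1, n)
slope = rng.uniform(0.5, 2.0, n)
i_load = rng.uniform(0.5, 5.0, n)
f_sw = rng.uniform(50e3, 200e3, n)

# Synthetic ground truth: ESR grows with ripple per unit current
# (illustrative relationship, not a converter model).
esr = 0.8 * ripple / i_load + 1e-7 * f_sw + rng.normal(0, 1e-3, n)

# Feature engineering: ratios and products of the raw signals, the kind
# of derived attributes fed to the linear model in the paper.
X = np.column_stack([ripple, slope, i_load, f_sw,
                     ripple / i_load, ripple * f_sw, slope / i_load])
Xtr, Xte, ytr, yte = train_test_split(X, esr, random_state=0)
model = LinearRegression().fit(Xtr, ytr)
r2 = r2_score(yte, model.predict(Xte))
print(round(r2, 3))  # close to 1 when the engineered features match the physics
```

The point of the engineered ratios is that a purely linear model in the raw signals could not represent a quantity like ripple-per-ampere; adding it as a feature keeps the regressor linear while capturing the nonlinearity.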
基金supported by the Young Scientists Fund of the National Key R&D Program of China(No.2022YFD2201800)the Youth Science Fund Program of National Natural Science Foundation of China(No.32301581)+2 种基金the Joint Funds for Regional Innovation and Development of the National Natural Science Foundation of China(No.U21A20244)the China Postdoctoral Science Foundation(No.2024M750383)the Heilongjiang Touyan Innovation Team Program(Technology Development Team for High-Efficiency Silviculture of Forest Resources).
文摘Branch size is a crucial characteristic,closely linked to both tree growth and wood quality.A review of existing branch size models reveals various approaches,but the ability to estimate branch diameter and length within the same whorl remains underexplored.In this study,a total of 77 trees were sampled from Northeast China to model the vertical distribution of branch diameter and length within each whorl along the crown.Several commonly used functions were taken as the alternative model forms,and the quantile regression method was employed and compared with the classical two-step modeling approach.The analysis incorporated stand,tree,and competition factors,with a particular focus on how these factors influence branches of varying sizes.The modified Weibull function was chosen as the optimal model,due to its excellent performance across all quantiles.Eight quantile regression curves(ranging from 0.20 to 0.85)were combined to predict branch diameter,while seven curves(ranging from 0.20 to 0.80)were used for branch length.The results showed that the quantile regression method outperformed the classical approach in model fitting and validation,likely due to its ability to estimate different rates of change across the entire branch size distribution.Larger branches in each whorl were more sensitive to changes in DBH,crown length(CL),crown ratio(CR)and dominant tree height(H_(dom)),while slenderness(HDR)more effectively influenced small and medium-sized branches.The effect of stand basal area(BAS)was relatively consistent across different branch sizes.The findings indicate that quantile regression is not only a more accurate method for predicting branch size but also a valuable tool for understanding how branch growth responds to stand and tree factors.The models developed in this study are ready to be further integrated into a tree growth and yield simulation system,contributing to the assessment and promotion of wood quality.
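To illustrate the kind of profile being fitted, a generic Weibull-type curve of branch diameter against relative depth into the crown can be fitted with nonlinear least squares. The functional form, parameter values, and data below are assumptions; the paper's modified Weibull expression and its quantile-regression fitting (one curve per quantile) are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative Weibull-type profile: diameter rises with depth into the
# crown and saturates (the paper's modified form may differ).
def weibull_profile(x, a, b, c):
    return a * (1 - np.exp(-b * x ** c))

rng = np.random.default_rng(10)
x = np.linspace(0.02, 1.0, 60)             # relative depth into the crown
true = weibull_profile(x, 30.0, 3.0, 1.5)  # branch diameter, mm
obs = true + rng.normal(0, 1.0, x.size)

# Ordinary least-squares fit of one central curve; the quantile approach
# would instead minimize the pinball loss at each quantile level.
popt, _ = curve_fit(weibull_profile, x, obs, p0=[20.0, 2.0, 1.0])
print(np.round(popt, 1))  # near the generating values (30, 3, 1.5)
```

Repeating such a fit at several quantile levels yields the family of curves (0.20 to 0.85) that the study combines to describe the whole within-whorl size distribution.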
基金funded by the National Key R&D Program of China,Grant No.2024YFF0504904.
文摘The packaging quality of coaxial laser diodes(CLDs)plays a pivotal role in determining their optical performance and long-term reliability.As the core packaging process,high-precision laser welding requires precise control of process parameters to suppress optical power loss.However,the complex nonlinear relationship between welding parameters and optical power loss renders traditional trial-and-error methods inefficient and imprecise.To address this challenge,a physics-informed(PI)and data-driven collaboration approach for welding parameter optimization is proposed.First,thermal-fluid-solid coupling finite element method(FEM)was employed to quantify the sensitivity of welding parameters to physical characteristics,including residual stress.This analysis facilitated the identification of critical factors contributing to optical power loss.Subsequently,a Gaussian process regression(GPR)model incorporating finite element simulation prior knowledge was constructed based on the selected features.By introducing physics-informed kernel(PIK)functions,stress distribution patterns were embedded into the prediction model,achieving high-precision optical power loss prediction.Finally,a Bayesian optimization(BO)algorithm with an adaptive sampling strategy was implemented for efficient parameter space exploration.Experimental results demonstrate that the proposed method effectively establishes explicit physical correlations between welding parameters and optical power loss.The optimized welding parameters reduced optical power loss by 34.1%,providing theoretical guidance and technical support for reliable CLD packaging.
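The GPR-surrogate-plus-Bayesian-optimization loop named above can be sketched on a one-parameter stand-in objective. The "optical power loss" function, kernel choice, and iteration budget are illustrative; the paper's physics-informed kernel and FEM-derived features are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Stand-in objective: loss as a function of one normalized welding
# parameter, minimal near x ~ 0.6 (the real response comes from FEM/tests).
def loss(x):
    return (x - 0.6) ** 2 + 0.01 * np.sin(20 * x)

X = np.array([[0.1], [0.5], [0.9]])      # initial designs
y = loss(X).ravel()
grid = np.linspace(0, 1, 201).reshape(-1, 1)

kernel = RBF(0.2) + WhiteKernel(1e-6, noise_level_bounds=(1e-9, 1e-2))
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                  alpha=1e-8).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    # Expected improvement acquisition (for minimization).
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, loss(x_next[0]))

print(round(float(X[np.argmin(y), 0]), 2))  # best sampled parameter, near the optimum
```

In the paper's setting the surrogate would take several welding parameters at once, and the adaptive sampling strategy replaces the plain expected-improvement rule used here.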
基金supported by the National Institute of Forest Sciencethe R&D Program for Forest Science Technology (No.2022458B10-2224-0201) of the Korea Forest Service
文摘Pinus densiflora is a pine species native to the Korean peninsula,and seed orchards have supplied material needed for afforestation in South Korea.Climate variables affecting seed production have not been identified.The purpose of this study was to determine climate variables that influence annual seed production of two seed orchards using multiple linear regression(MLR),elastic net regression(ENR)and partial least squares regression(PLSR)models.The PLSR model included 12 climatic variables from 2003 to 2020 and explained 74.3%of the total variation in seed production.It showed better predictive performance(R^(2)=0.662)than the ENR(0.516)and the MLR(0.366)models.Among the 12 climatic variables,July temperature two years prior to seed production and July precipitation one year prior had the strongest influence on seed production.The time periods indicated by the two variables corresponded to pollen cone initiation and female gametophyte development.The results will be helpful for developing seed collection plans,selecting new orchard sites with favorable climatic conditions,and investigating the relationships between seed production and climatic factors in related pine species.
基金funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No.(DGSSR-2024-02-01137).
文摘Sonic Hedgehog Medulloblastoma(SHH-MB)is one of the four primary molecular subgroups of Medulloblastoma.It is estimated to be responsible for nearly one-third of all MB cases.Using transcriptomic and DNA methylation profiling techniques,new developments in this field determined four molecular subtypes for SHH-MB.SHH-MB subtypes show distinct DNA methylation patterns that allow their discrimination from overlapping subtypes and predict clinical outcomes.Class overlapping occurs when two or more classes share common features,making it difficult to distinguish them as separate.Using the DNA methylation dataset,a novel classification technique is presented to address the issue of overlapping SHH-MB subtypes.Penalized multinomial regression(PMR),Tomek links(TL),and singular value decomposition(SVD)were all smoothly integrated into a single framework.SVD and group lasso improve computational efficiency,address the problem of high-dimensional datasets,and clarify class distinctions by removing redundant or irrelevant features that might lead to class overlap.As a method to eliminate the issues of decision boundary overlap and class imbalance in the classification task,TL enhances dataset balance and increases the clarity of decision boundaries through the elimination of overlapping samples.Using fivefold cross-validation,our proposed method(TL-SVDPMR)achieved a remarkable overall accuracy of almost 95%in the classification of SHH-MB molecular subtypes.The results demonstrate the strong performance of the proposed classification model among the various SHH-MB subtypes given a high average of the area under the curve(AUC)values.Additionally,the statistical significance test indicates that TL-SVDPMR is more accurate than both SVM and random forest algorithms in classifying the overlapping SHH-MB subtypes,highlighting its importance for precision medicine applications.Our findings emphasized the success of combining SVD,TL,and PMR techniques to improve the classification performance for biomedical applications with many features and overlapping subtypes.
基金supported by the National Natural Science Foundation of China under Grant Nos. U2139207 and 52378517the Natural Science Foundation of Hubei Province under Grant No. 2023AFB934
文摘Seismic fault rupture can extend to the surface,and the resulting surface deformation can cause severe damage to civil engineering structures crossing the fault zones.Coseismic Surface Rupture Prediction Models(CSRPMs)play a crucial role in the structural design of fault-crossing engineering and in the hazard analysis of fault-intensive areas.In this study,a new global coseismic surface rupture database was constructed by compiling 171 earthquake events(Mw:5.5-7.9)that caused surface rupture.In contrast to the fault classification in traditional empirical relationships,this study categorizes earthquake events as strike-slip,dip-slip,and oblique-slip.CSRPMs utilizing Bayesian ridge regression(BRR)were developed to estimate parameters such as surface rupture length,average displacement,and maximum displacement.Based on Bayesian theory,BRR combines the benefits of both ridge regression and Bayesian linear regression.This approach effectively addresses the issue of overfitting while ensuring the strong model robustness.The reliability of the CSRPMs was validated by residual analysis and comparison with post-earthquake observations from the 2023 Türkiye earthquake doublet.The BRR-CSRPMs with new fault classification criteria are more suitable for the probabilistic hazard analysis of complex fault systems and dislocation design of fault-crossing engineering.
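The Bayesian ridge regression step above is available directly in scikit-learn, which also returns a predictive standard deviation, useful for the probabilistic hazard analysis the abstract mentions. The toy scaling data below use coefficients loosely in the spirit of classical magnitude-rupture-length relationships; they are illustrative numbers, not the paper's fitted values.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(5)
# Toy data: log10 surface-rupture length vs. moment magnitude over the
# database's Mw 5.5-7.9 range (illustrative generating model).
mw = rng.uniform(5.5, 7.9, 150)
log_srl = -3.2 + 0.69 * mw + rng.normal(0, 0.2, mw.size)

# BayesianRidge infers the noise and weight precisions from the data,
# giving ridge-like shrinkage plus a full predictive distribution.
br = BayesianRidge().fit(mw.reshape(-1, 1), log_srl)
mean, std = br.predict(np.array([[7.0]]), return_std=True)
print(round(float(mean[0]), 2), round(float(std[0]), 2))
```

The predictive standard deviation is what distinguishes this from plain ridge regression: it quantifies how uncertain the rupture-length estimate is at a given magnitude, which feeds naturally into fragility and hazard calculations.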
基金National Key R&D Program of China under Grant Nos.2018YFC1504504 and 2018YFC0809404。
文摘Damage to electrical equipment in an earthquake can lead to power outages of power systems.Seismic fragility analysis is a common method to assess the seismic reliability of electrical equipment.To further guarantee the efficiency of the analysis,multi-source uncertainties including the structure itself and seismic excitation need to be considered.A method for seismic fragility analysis that reflects structural and seismic parameter uncertainty was developed in this study.The proposed method used a random sampling method based on Latin hypercube sampling(LHS)to account for the structure parameter uncertainty and the group structure characteristics of electrical equipment.Then,logistic Lasso regression(LLR)was used to find the seismic fragility surface based on double ground motion intensity measures(IM).The seismic fragility based on the finite element model of a±1000 kV main transformer(UHVMT)was analyzed using the proposed method.The results show that the seismic fragility function obtained by this method can be used to construct the relationship between the uncertainty parameters and the failure probability.The seismic fragility surface not only provided the probabilities of seismic damage states under different IMs,but also had better stability than the fragility curve.Furthermore,the sensitivity analysis of the structural parameters revealed that the elastic modulus of the bushing and the height of the high-voltage bushing may have a greater influence.
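The LHS-plus-logistic-Lasso pipeline described above can be sketched end to end on synthetic data. The two structure parameters, the failure model, and the IM range are all invented stand-ins for the transformer's FEM results.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Latin hypercube sample over two normalized structure parameters
# (e.g., bushing elastic modulus and height ratios), plus a ground-motion IM.
sampler = qmc.LatinHypercube(d=2, seed=6)
params = qmc.scale(sampler.random(n=300), [0.8, 0.9], [1.2, 1.1])
im = rng.uniform(0.1, 1.0, 300)          # e.g., PGA in g

# Synthetic failure rule: higher IM and weaker/taller configurations fail
# (a stand-in for running the FEM at each sampled design).
margin = 3.0 * im - 1.5 * params[:, 0] + 2.0 * (params[:, 1] - 1.0)
fail = (margin + rng.normal(0, 0.3, 300) > 0).astype(int)

X = np.column_stack([im, params])
# L1 (Lasso) penalty keeps only the influential parameters, mirroring the
# logistic-Lasso step used to build the fragility surface.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, fail)

# Fragility curve: failure probability vs. IM at the mean parameter values.
grid = np.column_stack([np.linspace(0.1, 1.0, 10),
                        np.full(10, 1.0), np.full(10, 1.0)])
pf = clf.predict_proba(grid)[:, 1]
print(np.round(pf, 2))  # increases with IM
```

Sweeping a second intensity measure instead of holding the parameters fixed would trace out the fragility surface over the double-IM plane discussed in the abstract.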
基金supported by the Youth Scientific Research Project of Fujian Provincial Center for Disease Control and Prevention(2022QN02)the Fujian Provincial Health Youth Scientific Research Project(2023QNA040).
文摘Background:COVID-19’s impact on influenza activity is of interest to inform future flu prevention and control strategies.Our study aims to examine COVID-19’s effects on influenza in Fujian Province,China,using a regression discontinuity(RD)design.Methods:We utilized the influenza-like illness(ILI)percentage as an indicator of influenza activity,with data from all sentinel hospitals between Week 4,2020,and Week 51,2023.The data were divided into two groups:the COVID-19 epidemic period and the post-epidemic period.Statistical analysis was performed with R software using robust RD design methods to account for potential confounders including seasonality,temperature,and influenza vaccination rates.Results:There was a discernible increase in the ILI percentage during the post-epidemic period.The robustness of the findings was confirmed with various RD design bandwidth selection methods and placebo tests,with one bandwidth providing the largest estimated effect size:a 14.6-percentage-point increase in the ILI percentage(β=0.146;95%CI:0.096–0.196).Sensitivity analyses and adjustments for confounders consistently pointed to an increased ILI percentage during the post-epidemic period compared to the epidemic period.Conclusion:The 14.6-percentage-point increase in the ILI percentage in Fujian Province,China,after the end of the COVID-19 pandemic suggests that there may be a need to re-evaluate and possibly enhance public health measures to control influenza transmission.Further research is needed to fully understand the factors contributing to this rise and to assess the ongoing impacts of post-pandemic behavioral changes.
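A sharp regression-discontinuity estimate of the kind reported above amounts to fitting local linear trends on each side of the cutoff and differencing the intercepts. The sketch below simulates a weekly ILI series with a 14.6-percentage-point jump at the cutoff; the series, trend, and bandwidths are illustrative, not the Fujian surveillance data, and the paper's robust estimators and covariate adjustments are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
week = np.arange(-52, 52)                    # weeks, centered at the cutoff
jump = 0.146                                 # simulated discontinuity (14.6 pp)
ili = (0.03 + 0.0002 * week + jump * (week >= 0)
       + rng.normal(0, 0.01, week.size))

def rd_estimate(x, y, bandwidth):
    """Difference of the two local linear fits' intercepts at the cutoff."""
    left = (x < 0) & (x >= -bandwidth)
    right = (x >= 0) & (x < bandwidth)
    b_l = np.polyfit(x[left], y[left], 1)
    b_r = np.polyfit(x[right], y[right], 1)
    return np.polyval(b_r, 0.0) - np.polyval(b_l, 0.0)

# Stability across bandwidths, in the spirit of the bandwidth-selection
# and placebo checks the study performs.
for bw in (20, 30, 40):
    print(bw, round(rd_estimate(week, ili, bw), 3))
```

Placebo tests would repeat `rd_estimate` at fake cutoffs away from week 0, where the estimated jump should be near zero if the design is valid.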