BACKGROUND: Paternal perinatal depression (PPD) is closely associated with maternal mental health challenges, marital strain, and adverse child developmental outcomes. Despite its significant impact, PPD remains under-recognized in family-centered clinical practice. Concurrently, against the backdrop of rising rates of delayed marriage and China's Maternity Incentive Policy, the proportion of women giving birth at an advanced maternal age is increasing. Nevertheless, research specifically examining PPD among spouses of older mothers remains critically scarce, both in China and globally. AIM: To investigate PPD and its influencing factors in Chinese advanced maternal age families. METHODS: This cross-sectional study included 358 participants; it was conducted among fathers of pregnant women of advanced maternal age at five hospitals in the Pearl River Delta region of China from September 2023 to June 2024. Data were collected via a general information questionnaire, the Social Support Rating Scale, and the Edinburgh Postnatal Depression Scale. Latent profile analysis and regression mixture models (RMMs) were adopted to analyze the latent PPD types and the factors that influenced PPD. RESULTS: The incidence of PPD was 16.48%, and three profiles were identified: low-symptomatic (175 cases, 48.89%), monophasic (140 cases, 39.10%), and high-symptomatic (43 cases, 12.01%). The RMM analysis revealed that first pregnancy, low income (<¥3000/month), part-time work, and a history of abnormal pregnancy were positively associated with the high-symptomatic type (P<0.05). Conversely, high subjective support and support utilization were negatively associated with the high-symptomatic type compared with the low-symptomatic type (P<0.05). Good couple relationships, high objective and subjective support, and high support utilization were negatively associated with the monophasic type (P<0.05). CONCLUSION: PPD incidence is high among Chinese fathers with advanced maternal age partners, and the characteristics of depression are varied. Healthcare practitioners should prioritize individuals with low levels of social support.
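As a rough illustration of the analysis pipeline described above (not the authors' actual code), the following Python sketch approximates latent profile analysis with a Gaussian mixture over EPDS item scores and then relates profile membership to covariates with a multinomial logistic regression. The column names, the three-profile choice, and the two-step substitution for a regression mixture model are all assumptions.

```python
# Sketch: approximate latent profile analysis + profile-on-covariate regression.
# Assumes a DataFrame `df` with 10 EPDS item columns (epds_1..epds_10) and
# covariates such as income, first_pregnancy, subjective_support, etc.
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

def fit_profiles(df, n_profiles=3, random_state=0):
    items = df[[f"epds_{i}" for i in range(1, 11)]].to_numpy()
    # Diagonal covariance mimics the local-independence assumption of LPA.
    gmm = GaussianMixture(n_components=n_profiles, covariance_type="diag",
                          random_state=random_state)
    labels = gmm.fit_predict(items)
    return gmm, labels

def relate_profiles_to_covariates(df, labels, covariate_cols):
    X = pd.get_dummies(df[covariate_cols], drop_first=True).astype(float)
    # Multinomial logit of profile membership on covariates stands in for
    # the regression mixture model (a two-step approximation, not an RMM).
    clf = LogisticRegression(max_iter=5000)
    clf.fit(X, labels)
    return pd.DataFrame(clf.coef_, index=clf.classes_, columns=X.columns)
```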
As the core component of inertial navigation systems, fiber optic gyroscopes (FOGs), with technical advantages such as low power consumption, long lifespan, fast startup speed, and flexible structural design, are widely used in aerospace, unmanned driving, and other fields. However, because of the temperature sensitivity of optical devices, environmental temperature introduces errors in FOGs and thereby greatly limits their output accuracy. This work investigates machine-learning-based temperature error compensation techniques for FOGs, focusing on compensating for the bias errors generated in the fiber ring due to the Shupe effect. It proposes a composite model based on k-means clustering, support vector regression, and particle swarm optimization algorithms, and significantly reduces redundancy within the samples by adopting interval sequence sampling. Metrics such as root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model's performance and compensation effectiveness. The approach improves the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared with existing methods that use a single machine learning model, the proposed method increases the improvement in bias stability of the compensated FOG from 57.11% to 71.98% and enhances the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the accuracy of the FOG after compensation, providing theoretical guidance and technical references for sensor error compensation in other fields.
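A minimal sketch of the cluster-then-regress idea is given below. It clusters temperature regimes with k-means and fits one support vector regressor per cluster; the paper's particle swarm optimization of the SVR hyperparameters is replaced here by a plain grid search, and the feature layout is an assumption.

```python
# Sketch: k-means over temperature features, then one SVR per cluster to
# predict FOG bias error.  PSO hyperparameter tuning is approximated by a
# grid search; feature columns (e.g., T, dT/dt) are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def fit_clustered_svr(X, y, n_clusters=3):
    """X: (n, d) temperature features; y: measured bias error."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    models = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        grid = GridSearchCV(SVR(kernel="rbf"),
                            {"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]},
                            cv=3)
        models[c] = grid.fit(X[mask], y[mask]).best_estimator_
    return km, models

def predict_bias(km, models, X_new):
    labels = km.predict(X_new)
    return np.array([models[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(labels, X_new)])
```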
Knowing the influence of the size of datasets for regression models can help in improving the accuracy of a solar power forecast and make the most out of renewable energy systems. This research explores the influence of dataset size on the accuracy and reliability of regression models for solar power prediction, contributing to better forecasting methods. The study analyzes data from two solar panels, aSiMicro03036 and aSiTandem72-46, over 7, 14, 17, 21, 28, and 38 days, with each dataset comprising five independent and one dependent parameter, and split 80–20 for training and testing. Results indicate that Random Forest consistently outperforms other models, achieving the highest correlation coefficient of 0.9822 and the lowest Mean Absolute Error (MAE) of 2.0544 on the aSiTandem72-46 panel with 21 days of data. For the aSiMicro03036 panel, the best MAE of 4.2978 was reached using the k-Nearest Neighbor (k-NN) algorithm, which was set up as instance-based k-Nearest Neighbors (IBk) in Weka, after being trained on 17 days of data. Regression performance for most models (excluding IBk) stabilizes at 14 days or more. Compared to the 7-day dataset, increasing to 21 days reduced the MAE by around 20% and improved correlation coefficients by around 2.1%, highlighting the value of moderate dataset expansion. These findings suggest that datasets spanning 17 to 21 days, with 80% used for training, can significantly enhance the predictive accuracy of solar power generation models.
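A simple way to reproduce this kind of dataset-size experiment is sketched below (the study used Weka; this is a scikit-learn analogue, and the sample layout is an assumption). It sweeps the training-window length, keeps the 80-20 split, and reports MAE and the correlation coefficient.

```python
# Sketch: evaluate how training-window length affects forecast error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

def evaluate_window(X, y, model):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              shuffle=False)  # keep time order
    pred = model.fit(X_tr, y_tr).predict(X_te)
    return mean_absolute_error(y_te, pred), np.corrcoef(y_te, pred)[0, 1]

def sweep_dataset_sizes(X, y, samples_per_day,
                        days_list=(7, 14, 17, 21, 28, 38)):
    results = {}
    for days in days_list:
        n = days * samples_per_day  # assumes samples are in time order
        results[days] = {
            "rf": evaluate_window(X[:n], y[:n],
                                  RandomForestRegressor(random_state=0)),
            "knn": evaluate_window(X[:n], y[:n],
                                   KNeighborsRegressor(n_neighbors=5)),
        }
    return results
```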
This study numerically examines the heat and mass transfer characteristics of two ternary nanofluids via converging and diverging channels. Furthermore, the study aims to assess two ternary nanofluid combinations to determine which configuration can provide better heat and mass transfer and lower entropy production, while ensuring cost efficiency. This work bridges the gap between academic research and industrial feasibility by incorporating cost analysis, entropy generation, and thermal efficiency. To compare the velocity, temperature, and concentration profiles, we examine two ternary nanofluids, i.e., TiO2+SiO2+Al2O3/H2O and TiO2+SiO2+Cu/H2O, while considering the shape of the nanoparticles. The velocity slip and Soret/Dufour effects are taken into consideration. Furthermore, regression analysis for the Nusselt and Sherwood numbers of the model is carried out. The Runge-Kutta fourth-order method with a shooting technique is employed to acquire the numerical solution of the governing system of ordinary differential equations. The flow pattern attributes of the ternary nanofluids are meticulously examined and simulated with the fluctuation of the flow-dominating parameters. Additionally, the influence of these parameters is demonstrated in the flow, temperature, and concentration fields. For variation in the Eckert and Dufour numbers, TiO2+SiO2+Al2O3/H2O has a higher temperature than TiO2+SiO2+Cu/H2O. The results obtained indicate that the ternary nanofluid TiO2+SiO2+Al2O3/H2O has a higher heat transfer rate, lower entropy generation, greater mass transfer rate, and lower cost than the TiO2+SiO2+Cu/H2O ternary nanofluid.
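The governing similarity equations are not reproduced in the abstract, so the snippet below only illustrates the Runge-Kutta-plus-shooting machinery on the classical Blasius boundary-layer equation as a stand-in; for the actual ternary-nanofluid system, `rhs` and the boundary conditions would be replaced by the paper's coupled ODEs.

```python
# Sketch: Runge-Kutta integration with a shooting step, demonstrated on the
# Blasius equation f''' + 0.5*f*f'' = 0 with f(0)=f'(0)=0 and f'(inf)=1,
# used here only as a stand-in boundary-value problem.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0], method="RK45",
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0   # residual of the far-field condition f'(inf)=1

# Find the missing initial curvature f''(0) so the far-field condition holds.
fpp0 = brentq(shoot, 0.1, 1.0)
print(f"f''(0) ~ {fpp0:.4f}")   # classical Blasius value is about 0.4696
```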
In this study, we examine the problem of sliced inverse regression (SIR), a widely used method for sufficient dimension reduction (SDR). It was designed to find reduced-dimensional versions of multivariate predictors by replacing them with a minimally adequate collection of their linear combinations without loss of information. Recently, regularization methods have been proposed in SIR to incorporate a sparse structure of predictors for better interpretability. However, existing methods consider convex relaxation to bypass the sparsity constraint, which may not lead to the best subset, and particularly tends to include irrelevant variables when predictors are correlated. In this study, we approach sparse SIR as a nonconvex optimization problem and directly tackle the sparsity constraint by establishing the optimal conditions and iteratively solving them by means of the splicing technique. Without employing convex relaxation on the sparsity constraint and the orthogonal constraint, our algorithm exhibits superior empirical merits, as evidenced by extensive numerical studies. Computationally, our algorithm is much faster than the relaxed approach for the natural sparse SIR estimator. Statistically, our algorithm surpasses existing methods in terms of accuracy for central subspace estimation and best subset selection and sustains high performance even with correlated predictors.
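For reference, the classical (unpenalized) SIR estimator that the sparse, splicing-based method improves on can be written in a few lines; this sketch is the textbook procedure, not the authors' algorithm.

```python
# Sketch: classical sliced inverse regression (slice means of whitened
# predictors, then leading eigenvectors of their weighted covariance).
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    n, p = X.shape
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T  # whitening matrix
    Z = (X - X.mean(axis=0)) @ Sigma_inv_half
    order = np.argsort(y)                       # slice on the sorted response
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)    # weighted cov. of slice means
    w, v = np.linalg.eigh(M)                    # eigenvalues in ascending order
    B = Sigma_inv_half @ v[:, ::-1][:, :n_dirs]
    return B / np.linalg.norm(B, axis=0)        # estimated SDR directions
```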
In recent years, machine learning (ML) techniques have been shown to be effective in accelerating the development process of optoelectronic devices. However, as "black box" models, they have limited theoretical interpretability. In this work, we leverage the symbolic regression (SR) technique to discover the explicit symbolic relationship between the structure of the optoelectronic Fabry-Perot (FP) laser and its optical field distribution, which greatly improves model transparency compared with ML. We demonstrated that the expressions explored through SR exhibit lower errors on the test set compared with ML models, which suggests that the expressions have better fitting and generalization capabilities.
Gastric cancer is the third leading cause of cancer-related mortality and remains a major global health issue [1]. Annually, approximately 479,000 individuals in China are diagnosed with gastric cancer, accounting for almost 45% of all new cases worldwide [2].
Triaxial tests, a staple in rock engineering, are labor-intensive, sample-demanding, and costly, making their optimization highly advantageous. These tests are essential for characterizing rock strength, and by adopting a failure criterion, they allow for the derivation of criterion parameters through regression, facilitating their integration into modeling programs. In this study, we introduce the application of an underutilized statistical technique—orthogonal regression—well-suited for analyzing triaxial test data. Additionally, we present an innovation in this technique by minimizing the Euclidean distance while incorporating orthogonality between vectors as a constraint, for the case of orthogonal linear regression. Also, we consider the Modified Least Squares method. We exemplify this approach by developing the necessary equations to apply the Mohr-Coulomb, Murrell, Hoek-Brown, and Úcar criteria, and implement these equations in both spreadsheet calculations and R scripts. Finally, we demonstrate the technique's application using five datasets of varied lithologies from the specialized literature, showcasing its versatility and effectiveness.
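The core idea, orthogonal (total least squares) fitting of the linear Mohr-Coulomb envelope in the sigma1-sigma3 plane, is illustrated below; this reproduces the general technique, not the authors' exact spreadsheet or R implementation, and the example stress values are made up.

```python
# Sketch: orthogonal (total least squares) fit of sigma1 vs sigma3 and
# conversion of the fitted line to Mohr-Coulomb cohesion c and friction angle.
import numpy as np

def orthogonal_fit(s3, s1):
    """Fit s1 = a*s3 + b by minimizing orthogonal (Euclidean) distances."""
    pts = np.column_stack([s3 - s3.mean(), s1 - s1.mean()])
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    direction = vt[0]                  # principal direction of the point cloud
    a = direction[1] / direction[0]    # slope of the best orthogonal line
    b = s1.mean() - a * s3.mean()
    return a, b

def mohr_coulomb_params(a, b):
    """Slope/intercept of the sigma1-sigma3 line -> (cohesion, phi in degrees)."""
    sin_phi = (a - 1.0) / (a + 1.0)    # since a = (1+sin)/(1-sin)
    phi = np.arcsin(sin_phi)
    c = b * (1.0 - sin_phi) / (2.0 * np.cos(phi))
    return c, np.degrees(phi)

# Illustrative triaxial data (MPa): confining vs. peak axial stress.
s3 = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
s1 = np.array([35.0, 62.0, 88.0, 140.0, 190.0])
print(mohr_coulomb_params(*orthogonal_fit(s3, s1)))
```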
To cater to the need for real-time crack monitoring of infrastructural facilities, a CNN-regression model is proposed to directly estimate crack properties from image patches. RGB crack images and their corresponding masks obtained from a public dataset are cropped into patches of 256 square pixels that are classified with a pre-trained deep convolutional neural network; the true positives are segmented, and crack properties are extracted using two different methods. The first method is primarily based on active contour models and level-set segmentation, and the second method consists of the domain adaptation of a mathematical morphology-based method known as FIL-FINDER. A statistical test has been performed to compare the stated methods, and a database was prepared with the more suitable method. An advanced convolutional neural network-based multi-output regression model has been proposed, which was trained with the prepared database and validated with a held-out dataset for the prediction of crack length, crack width, and width uncertainty directly from input image patches. The proposed model has been tested on crack patches collected from different locations. Huber loss has been used to ensure the robustness of the proposed model, which was selected from a set of 288 different variations of it. Additionally, an ablation study has been conducted on the top 3 models that demonstrated the influence of each network component on the prediction results. Finally, the best-performing model among the top 3, HHc-X, has been proposed; it predicted crack properties that are in close agreement with the ground truths in the test data.
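A minimal multi-output CNN regressor with Huber loss, in the spirit of the model described above, is sketched below; the layer sizes and names are assumptions and do not reproduce the authors' HHc-X architecture.

```python
# Sketch: compact CNN regressing three targets (crack length, width,
# width uncertainty) from 256x256 RGB patches, trained with Huber loss.
import torch
import torch.nn as nn

class CrackRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 64),
                                  nn.ReLU(), nn.Linear(64, 3))

    def forward(self, x):                # x: (batch, 3, 256, 256) patches
        return self.head(self.features(x))

model = CrackRegressor()
criterion = nn.HuberLoss()               # robust loss used in the study
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random tensors standing in for patches.
patches, targets = torch.randn(8, 3, 256, 256), torch.rand(8, 3)
loss = criterion(model(patches), targets)
loss.backward()
optimizer.step()
```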
The impact of different global and local variables in urban development processes requires a systematic study to fully comprehend the underlying complexities in them. The interplay between such variables is crucial for modelling urban growth so that it closely reflects reality. Despite extensive research, ambiguity remains about how variations in these input variables influence urban densification. In this study, we conduct a global sensitivity analysis (SA) using a multinomial logistic regression (MNL) model to assess the model's explanatory and predictive power. We examine the influence of global variables, including spatial resolution, neighborhood size, and density classes, under different input combinations at a provincial scale to understand their impact on densification. Additionally, we perform a stepwise regression to identify the significant explanatory variables that are important for understanding densification in the Brussels Metropolitan Area (BMA). Our results indicate that a finer spatial resolution of 50 m and 100 m, smaller neighborhood sizes of 5×5 and 3×3, and specific density classes—namely 3 (non-built-up, low and high built-up) and 4 (non-built-up, low, medium and high built-up)—optimally explain and predict urban densification. In line with this, the stepwise regression reveals that models with a coarser resolution of 300 m lack significant variables, reflecting a lower explanatory power for densification. This approach aids in identifying optimal and significant global variables with higher explanatory power for understanding and predicting urban densification. Furthermore, these findings are reproducible in a global urban context, offering valuable insights for planners, modelers and geographers in managing future urban growth and minimizing modelling.
In this investigation, the Gradient Boosting (GB), Linear Regression (LR), Decision Tree (DT), and Voting algorithms were applied to predict the distribution pattern of Au geochemical data. Trace and indicator elements, including Mo, Cu, Pb, Zn, Ag, Ni, Co, Mn, Fe, and As, were used with these machine learning algorithms (MLAs) to predict Au concentration values in the Doostbigloo porphyry Cu-Au-Mo mineralization area. The performance of the models was evaluated using the Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) metrics. The proposed ensemble Voting algorithm outperformed the other models, yielding more accurate predictions according to both metrics. The predicted data from the GB, LR, DT, and Voting MLAs were modeled using the Concentration-Area fractal method, and Au geochemical anomalies were mapped. To compare and validate the results, factors such as the location of the mineral deposits, their surface extent, and mineralization trend were considered. The results indicate that integrating hybrid MLAs with fractal modeling significantly improves geochemical prospectivity mapping. Among the four models, three (DT, GB, Voting) accurately identified both mineral deposits. The LR model, however, only identified Deposit I (central), and its mineralization trend diverged from the field data. The GB and Voting models produced similar results, with their final maps derived from fractal modeling showing the same anomalous areas. The anomaly boundaries identified by these two models are consistent with the two known reserves in the region. The results and plots related to prediction indicators and error rates for these two models also show high similarity, with lower error rates than the other models. Notably, the Voting model demonstrated superior performance in accurately delineating mineral deposit locations and identifying realistic mineralization trends while minimizing false anomalies.
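A scikit-learn sketch of the GB/LR/DT ensemble with a Voting regressor, scored with MAPE and RMSE as in the study, is shown below; the element matrix and Au target are assumptions, and hyperparameters are illustrative.

```python
# Sketch: Voting ensemble over GB, LR, and DT regressors for Au prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error

def fit_voting_model(X, y):
    """X: trace/indicator elements (Mo, Cu, Pb, ...); y: Au concentration."""
    voter = VotingRegressor([
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("lr", LinearRegression()),
        ("dt", DecisionTreeRegressor(max_depth=8, random_state=0)),
    ])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    voter.fit(X_tr, y_tr)
    pred = voter.predict(X_te)
    mape = mean_absolute_percentage_error(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    return voter, mape, rmse
```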
Power converters are essential components in modern life, being widely used in industry, automation, transportation, and household appliances. In many critical applications, their failure can lead not only to financial losses due to operational downtime but also to serious risks to human safety. The capacitors forming the output filter, typically aluminum electrolytic capacitors (AECs), are among the most critical and susceptible components in power converters. The electrolyte in AECs often evaporates over time, causing the internal resistance to rise and the capacitance to drop, ultimately leading to component failure. Detecting this fault requires measuring the current in the capacitor, rendering the method invasive and frequently impractical due to spatial constraints or operational limitations imposed by the integration of a current sensor in the capacitor branch. This article proposes the implementation of an online non-invasive fault diagnosis technique for estimating the Equivalent Series Resistance (ESR) and Capacitance (C) values of the capacitor, employing a combination of signal processing techniques (SPT) and machine learning (ML) algorithms. This solution relies solely on the converter's input and output signals, therefore making it a non-invasive approach. The ML algorithm used was linear regression, applied to 27 attributes, 21 of which were generated through feature engineering to enhance the model's performance. The proposed solution demonstrates an R² score greater than 0.99 in the estimation of both ESR and C.
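The pattern of engineering features from the converter waveforms and regressing ESR and C with plain linear regression can be sketched as follows; the particular features below (DC level, ripple statistics, current RMS) are illustrative guesses and not the 27 attributes used in the article.

```python
# Sketch: linear regression of ESR and C on hand-engineered waveform features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def engineer_features(v_out, i_in):
    """v_out, i_in: (n_windows, n_samples) sampled waveforms per window."""
    ripple = v_out - v_out.mean(axis=1, keepdims=True)
    return np.column_stack([
        v_out.mean(axis=1),                  # DC level of the output voltage
        ripple.std(axis=1),                  # output ripple RMS
        np.ptp(ripple, axis=1),              # peak-to-peak output ripple
        np.sqrt((i_in ** 2).mean(axis=1)),   # input current RMS
    ])

def fit_esr_c(features, esr, cap):
    esr_model = LinearRegression().fit(features, esr)
    c_model = LinearRegression().fit(features, cap)
    return (esr_model, r2_score(esr, esr_model.predict(features)),
            c_model, r2_score(cap, c_model.predict(features)))
```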
Bigeye tuna is a protein-rich fish that is susceptible to spoilage during cold storage; however, there is limited information on untargeted metabolomic profiling of bigeye tuna concerning spoilage-associated enzymes and metabolites. This study aimed to investigate how cold storage affects enzyme activities, nutrient composition, tissue microstructure and spoilage metabolites of bigeye tuna. The activities of cathepsins B, H and L increased, Na+/K+-ATPase and Mg2+-ATPase decreased, and α-glucosidase, lipase and lipoxygenase first increased and then decreased during cold storage, suggesting that proteins undergo degradation and ATP metabolism occurs at a faster rate during cold storage. Nutrient composition (moisture and lipid content) and total amino acids decreased, suggesting that the nutritional value of bigeye tuna was reduced. Besides, a logistic regression equation was established as a food analysis tool to assess the dynamics and correlations of the enzymes of bigeye tuna during cold storage. Based on untargeted metabolomic profiling analysis, a total of 524 metabolites were identified in the bigeye tuna, including several spoilage metabolites involved in lipid metabolism (glycerophosphocholine and choline phosphate), amino acid metabolism (L-histidine, 5-deoxy-5′-(methylthio)adenosine, 5-methylthioadenosine), and carbohydrate metabolism (D-gluconic acid, α-D-fructose 1,6-bisphosphate, D-glyceraldehyde 3-phosphate). The tissue microstructure of the tuna showed a looser network and visible deterioration of tissue fibers during cold storage. Therefore, metabolomic analysis and tissue microstructure observations provide insight into the spoilage mechanism of bigeye tuna during cold storage.
This article presents a mathematical model addressing a scenario involving a hybrid nanofluid flow between two infinite parallel plates. One plate remains stationary, while the other moves downward at a squeezing velocity. The space between these plates contains a Darcy-Forchheimer porous medium. A mixture of a water-based fluid with gold (Au) and silicon dioxide (SiO2) nanoparticles is formulated. In contrast to the conventional Fourier's heat flux equation, this study employs the Cattaneo-Christov heat flux equation. A uniform magnetic field is applied perpendicular to the flow direction, invoking magnetohydrodynamic (MHD) effects. Further, the model accounts for Joule heating, which is the heat generated when an electric current passes through the fluid. The problem is solved via NDSolve in MATHEMATICA. Numerical and statistical analyses are conducted to provide insights into the behavior of the nanomaterials between the parallel plates with respect to the flow, energy transport, and skin friction. The findings of this study have potential applications in enhancing cooling systems and optimizing thermal management strategies. It is observed that the squeezing motion generates additional pressure gradients within the fluid, which enhances the flow rate but reduces the frictional drag. Consequently, the fluid is pushed more vigorously between the plates, increasing the flow velocity. As the fluid experiences higher flow rates due to the increased squeezing effect, it spends less time in the region between the plates. The thermal relaxation, however, abruptly changes the temperature, leading to a decrease in the temperature fluctuations.
In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improving therapeutic effect and safety, and is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of our proposed methodology for both variable selection and parameter estimation is verified through random simulations. Finally, we demonstrate the application of this method by analyzing data from the PA.3 trial, further illustrating the practicality of the method proposed in this paper.
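A simplified, cross-sectional analogue of this setup is sketched below: subgroup membership is defined by thresholding an assumed biomarker combination, and a treatment-by-subgroup interaction is estimated by median regression. The penalized estimation and the longitudinal structure of the paper are omitted, and all data here are simulated.

```python
# Sketch: quantile regression with a treatment-by-subgroup interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "bio1": rng.normal(size=n),
    "bio2": rng.normal(size=n),
    "x1": rng.normal(size=n),
})
score = 0.8 * df.bio1 - 0.5 * df.bio2          # assumed biomarker combination
df["subgroup"] = (score > 0.0).astype(int)     # threshold-defined subgroup
df["y"] = (1.0 + 0.5 * df.treat + 1.5 * df.treat * df.subgroup
           + 0.3 * df.x1 + rng.normal(size=n))

# Median regression; the treat:subgroup term captures effect heterogeneity.
fit = smf.quantreg("y ~ treat * subgroup + x1", df).fit(q=0.5)
print(fit.params)
```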
Branch size is a crucial characteristic, closely linked to both tree growth and wood quality. A review of existing branch size models reveals various approaches, but the ability to estimate branch diameter and length within the same whorl remains underexplored. In this study, a total of 77 trees were sampled from Northeast China to model the vertical distribution of branch diameter and length within each whorl along the crown. Several commonly used functions were taken as the alternative model forms, and the quantile regression method was employed and compared with the classical two-step modeling approach. The analysis incorporated stand, tree, and competition factors, with a particular focus on how these factors influence branches of varying sizes. The modified Weibull function was chosen as the optimal model, due to its excellent performance across all quantiles. Eight quantile regression curves (ranging from 0.20 to 0.85) were combined to predict branch diameter, while seven curves (ranging from 0.20 to 0.80) were used for branch length. The results showed that the quantile regression method outperformed the classical approach in model fitting and validation, likely due to its ability to estimate different rates of change across the entire branch size distribution. Larger branches in each whorl were more sensitive to changes in DBH, crown length (CL), crown ratio (CR) and dominant tree height (Hdom), while slenderness (HDR) more effectively influenced small and medium-sized branches. The effect of stand basal area (BAS) was relatively consistent across different branch sizes. The findings indicate that quantile regression is not only a more accurate method for predicting branch size but also a valuable tool for understanding how branch growth responds to stand and tree factors. The models developed in this study are ready to be further integrated into tree growth and yield simulation systems, contributing to the assessment and promotion of wood quality.
BACKGROUND: Aortic adverse remodeling remains a critical complication following thoracic endovascular aortic repair (TEVAR) for Stanford type B aortic dissection (TBAD), significantly impacting long-term survival. Accurate risk prediction is essential for optimized clinical management. AIM: To develop and validate a logistic regression-based risk prediction model for aortic adverse remodeling following TEVAR in patients with TBAD. METHODS: This retrospective observational cohort study analyzed 140 TBAD patients undergoing TEVAR at a tertiary center (2019–2024). Based on European guidelines, patients were categorized into an adverse remodeling group (aortic growth rate >2.9 mm/year, n=45) and a favorable remodeling group (n=95). Comprehensive variables (clinical/imaging/surgical) were analyzed using multivariable logistic regression to develop a predictive model. Model performance was assessed via the area under the receiver operating characteristic curve (AUC) and the Hosmer-Lemeshow test. RESULTS: Multivariable analysis identified several strong independent predictors of negative aortic remodeling. A larger false lumen diameter at the primary entry tear [odds ratio (OR): 1.561, 95%CI: 1.197–2.035; P=0.001] and patency of the false lumen (OR: 5.639, 95%CI: 4.372–8.181; P=0.004) were significant risk factors. False lumen involvement extending to the thoracoabdominal aorta was identified as the strongest predictor, significantly increasing the risk of adverse remodeling (OR: 11.751, 95%CI: 9.841–15.612; P=0.001). Conversely, false lumen involvement confined to the thoracic aorta demonstrated a significant protective effect (OR: 0.925, 95%CI: 0.614–0.831; P=0.015). The prediction model exhibited excellent discrimination (AUC=0.968) and calibration (Hosmer-Lemeshow P=0.824). CONCLUSION: This validated risk prediction model identifies aortic adverse remodeling with high accuracy using routinely available clinical parameters. False lumen involvement of the thoracoabdominal aorta is the strongest predictor (11.751-fold increased risk). The tool enables preoperative risk stratification to guide tailored TEVAR strategies and improve long-term outcomes.
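The model-building and evaluation steps described above (multivariable logistic regression, ROC-AUC, Hosmer-Lemeshow calibration) can be sketched as follows; the predictor names are assumptions drawn from the abstract, not the study's actual dataset.

```python
# Sketch: logistic risk model with discrimination (AUC) and calibration
# (simple decile-based Hosmer-Lemeshow test).
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, p_hat, n_groups=10):
    """Chi-square over deciles of predicted risk; returns (statistic, p-value)."""
    y_true, p_hat = np.asarray(y_true), np.asarray(p_hat)
    stat = 0.0
    for idx in np.array_split(np.argsort(p_hat), n_groups):
        obs, exp, n = y_true[idx].sum(), p_hat[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, n_groups - 2)

def evaluate_remodeling_model(X, y):
    """X: e.g., false-lumen diameter, patency, extent flags; y: adverse remodeling."""
    model = LogisticRegression(max_iter=2000).fit(X, y)
    p_hat = model.predict_proba(X)[:, 1]
    return (np.exp(model.coef_[0]),            # odds ratios per predictor
            roc_auc_score(y, p_hat),           # discrimination
            hosmer_lemeshow(y, p_hat))         # calibration
```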
The packaging quality of coaxial laser diodes (CLDs) plays a pivotal role in determining their optical performance and long-term reliability. As the core packaging process, high-precision laser welding requires precise control of process parameters to suppress optical power loss. However, the complex nonlinear relationship between welding parameters and optical power loss renders traditional trial-and-error methods inefficient and imprecise. To address this challenge, a physics-informed (PI) and data-driven collaboration approach for welding parameter optimization is proposed. First, a thermal-fluid-solid coupling finite element method (FEM) was employed to quantify the sensitivity of welding parameters to physical characteristics, including residual stress. This analysis facilitated the identification of critical factors contributing to optical power loss. Subsequently, a Gaussian process regression (GPR) model incorporating finite element simulation prior knowledge was constructed based on the selected features. By introducing physics-informed kernel (PIK) functions, stress distribution patterns were embedded into the prediction model, achieving high-precision optical power loss prediction. Finally, a Bayesian optimization (BO) algorithm with an adaptive sampling strategy was implemented for efficient parameter space exploration. Experimental results demonstrate that the proposed method effectively establishes explicit physical correlations between welding parameters and optical power loss. The optimized welding parameters reduced optical power loss by 34.1%, providing theoretical guidance and technical support for reliable CLD packaging.
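The GPR-plus-Bayesian-optimization loop can be sketched generically as below. The physics-informed kernel of the paper is replaced by a standard RBF kernel, the acquisition function is plain expected improvement rather than the paper's adaptive sampling strategy, and the welding-parameter names are illustrative.

```python
# Sketch: Gaussian process surrogate + expected-improvement search over a
# discrete grid of candidate welding parameters (minimizing power loss).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expected_improvement(mu, sigma, best, xi=0.01):
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu - xi) / sigma
    return (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, candidates, n_init=5, n_iter=20, seed=0):
    """candidates: (n, d) array of parameter settings (power, pulse, defocus...)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), n_init, replace=False)
    X = candidates[idx]
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()
```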
Pinus densiflora is a pine species native to the Korean peninsula, and seed orchards have supplied the material needed for afforestation in South Korea. Climate variables affecting seed production have not been identified. The purpose of this study was to determine the climate variables that influence annual seed production of two seed orchards using multiple linear regression (MLR), elastic net regression (ENR) and partial least squares regression (PLSR) models. The PLSR model included 12 climatic variables from 2003 to 2020 and explained 74.3% of the total variation in seed production. It showed better predictive performance (R²=0.662) than the ENR (0.516) and the MLR (0.366) models. Among the 12 climatic variables, July temperature two years prior to seed production and July precipitation one year later had the strongest influence on seed production. The time periods indicated by the two variables corresponded to pollen cone initiation and female gametophyte development. The results will be helpful for developing seed collection plans, selecting new orchard sites with favorable climatic conditions, and investigating the relationships between seed production and climatic factors in related pine species.
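A compact way to reproduce this model comparison is sketched below; it assumes an annual matrix of 12 climate variables and the corresponding seed crop, and it uses leave-one-out cross-validated R² (a natural choice for 18 annual observations), which is not necessarily how the study computed its reported values.

```python
# Sketch: PLSR vs. elastic net vs. multiple linear regression on climate data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import ElasticNetCV, LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

def compare_models(X_climate, y_seed):
    """X_climate: (years x 12 climate variables); y_seed: annual seed crop."""
    models = {
        "PLSR": PLSRegression(n_components=2),
        "ENR": ElasticNetCV(cv=5, max_iter=10000),
        "MLR": LinearRegression(),
    }
    scores = {}
    for name, model in models.items():
        pred = cross_val_predict(model, X_climate, y_seed, cv=LeaveOneOut())
        scores[name] = r2_score(y_seed, np.ravel(pred))
    return scores
```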
Sonic Hedgehog Medulloblastoma (SHH-MB) is one of the four primary molecular subgroups of medulloblastoma. It is estimated to be responsible for nearly one-third of all MB cases. Using transcriptomic and DNA methylation profiling techniques, new developments in this field determined four molecular subtypes for SHH-MB. SHH-MB subtypes show distinct DNA methylation patterns that allow their discrimination from overlapping subtypes and predict clinical outcomes. Class overlapping occurs when two or more classes share common features, making it difficult to distinguish them as separate. Using the DNA methylation dataset, a novel classification technique is presented to address the issue of overlapping SHH-MB subtypes. Penalized multinomial regression (PMR), Tomek links (TL), and singular value decomposition (SVD) were all smoothly integrated into a single framework. SVD and group lasso improve computational efficiency, address the problem of high-dimensional datasets, and clarify class distinctions by removing redundant or irrelevant features that might lead to class overlap. As a method to eliminate the issues of decision boundary overlap and class imbalance in the classification task, TL enhances dataset balance and increases the clarity of decision boundaries through the elimination of overlapping samples. Using fivefold cross-validation, our proposed method (TL-SVDPMR) achieved a remarkable overall accuracy of almost 95% in the classification of SHH-MB molecular subtypes. The results demonstrate the strong performance of the proposed classification model among the various SHH-MB subtypes, given the high average area under the curve (AUC) values. Additionally, the statistical significance test indicates that TL-SVDPMR is more accurate than both SVM and random forest algorithms in classifying the overlapping SHH-MB subtypes, highlighting its importance for precision medicine applications. Our findings emphasized the success of combining SVD, TL, and PMR techniques to improve the classification performance for biomedical applications with many features and overlapping subtypes.
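The TL-SVD-PMR idea can be sketched in scikit-learn/imbalanced-learn terms as below; a plain L1 penalty stands in for the group lasso, the number of SVD components and the regularization strength are illustrative, and the snippet is not the authors' implementation.

```python
# Sketch: Tomek links -> truncated SVD -> penalized multinomial regression,
# evaluated with five-fold cross-validated accuracy.
from imblearn.under_sampling import TomekLinks
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

def tl_svd_pmr(X_methylation, y_subtype, n_components=50):
    # 1) Remove boundary-overlapping samples with Tomek links.
    X_res, y_res = TomekLinks().fit_resample(X_methylation, y_subtype)
    # 2) Compress the high-dimensional methylation matrix with truncated SVD.
    svd = TruncatedSVD(n_components=n_components, random_state=0)
    Z = svd.fit_transform(X_res)
    # 3) Penalized multinomial regression (L1 via saga as a group-lasso proxy).
    clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
    acc = cross_val_score(clf, Z, y_res, cv=StratifiedKFold(5),
                          scoring="accuracy")
    return acc.mean()
```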
Funding: Supported by the Natural Science Foundation of Fujian Province (2022J011177, 2024J01903) and the Key Project of the Fujian Provincial Education Department (JZ230054).
Abstract: In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, thereby improving therapeutic effect and safety, and it is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and we assess the heterogeneity of treatment effects across subgroups through the interaction between subgroup membership and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection over the covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through simulation studies. Finally, we demonstrate the method by analyzing data from the PA.3 trial, further illustrating its practicality.
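As context, a generic penalized quantile-regression objective of the kind described, with an L1 sparsity penalty, can be written as follows; the notation is the standard textbook form rather than the paper's own model, which additionally involves subgroup-by-exposure interaction terms.

```latex
% Generic penalized quantile-regression objective (standard textbook form;
% the notation is generic and not taken from the paper's own model).
\[
\hat{\beta}(\tau) = \arg\min_{\beta}
\sum_{i=1}^{n} \rho_{\tau}\!\left(y_{i} - \mathbf{x}_{i}^{\top}\beta\right)
+ \lambda \sum_{j=1}^{p} \lvert \beta_{j} \rvert,
\qquad
\rho_{\tau}(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right)
\]
```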
Funding: Supported by the Young Scientists Fund of the National Key R&D Program of China (No. 2022YFD2201800), the Youth Science Fund Program of the National Natural Science Foundation of China (No. 32301581), the Joint Funds for Regional Innovation and Development of the National Natural Science Foundation of China (No. U21A20244), the China Postdoctoral Science Foundation (No. 2024M750383), and the Heilongjiang Touyan Innovation Team Program (Technology Development Team for High-Efficiency Silviculture of Forest Resources).
Abstract: Branch size is a crucial characteristic, closely linked to both tree growth and wood quality. A review of existing branch size models reveals various approaches, but the ability to estimate branch diameter and length within the same whorl remains underexplored. In this study, 77 trees were sampled from Northeast China to model the vertical distribution of branch diameter and length within each whorl along the crown. Several commonly used functions were considered as alternative model forms, and the quantile regression method was employed and compared with the classical two-step modeling approach. The analysis incorporated stand, tree, and competition factors, with a particular focus on how these factors influence branches of varying sizes. The modified Weibull function was chosen as the optimal model owing to its excellent performance across all quantiles. Eight quantile regression curves (ranging from 0.20 to 0.85) were combined to predict branch diameter, while seven curves (ranging from 0.20 to 0.80) were used for branch length. The quantile regression method outperformed the classical approach in model fitting and validation, likely because it can estimate different rates of change across the entire branch size distribution. Larger branches in each whorl were more sensitive to changes in diameter at breast height (DBH), crown length (CL), crown ratio (CR), and dominant tree height (H_dom), while slenderness (HDR) more strongly influenced small and medium-sized branches. The effect of stand basal area (BAS) was relatively consistent across branch sizes. These findings indicate that quantile regression is not only a more accurate method for predicting branch size but also a valuable tool for understanding how branch growth responds to stand and tree factors. The models developed in this study are ready to be integrated into tree growth and yield simulation systems, contributing to the assessment and promotion of wood quality.
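To illustrate the multi-quantile fitting idea, the sketch below fits several quantile-regression curves with statsmodels; it uses a simple linear-in-predictors form with synthetic data, whereas the study itself fits a modified Weibull function, so this is only an assumption-laden stand-in.

```python
# Hedged sketch: fitting several quantile-regression curves to branch diameter.
# The study fits a modified Weibull form; here a simple polynomial-in-depth
# model with simulated data is used only to illustrate multi-quantile fitting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
depth = rng.uniform(0.0, 1.0, 300)     # relative depth into the crown (assumed predictor)
dbh = rng.uniform(10.0, 40.0, 300)     # stem diameter at breast height, cm (assumed predictor)
branch_diam = 0.32 * dbh * depth * (1 - depth) + rng.gamma(2.0, 0.3, 300)

X = sm.add_constant(np.column_stack([depth, depth**2, dbh]))
model = sm.QuantReg(branch_diam, X)

for q in (0.20, 0.50, 0.85):           # a few of the quantiles mentioned in the abstract
    fit = model.fit(q=q)
    print(f"tau={q:.2f}  coefficients={np.round(fit.params, 3)}")
```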
Funding: Supported by the Zhangjiajie "Xiao He (Young Talent)" Project (No. 2024XHRC03) and a Jishou University School-Level Research Project.
Abstract: BACKGROUND Aortic adverse remodeling remains a critical complication following thoracic endovascular aortic repair (TEVAR) for Stanford type B aortic dissection (TBAD), significantly impacting long-term survival. Accurate risk prediction is essential for optimized clinical management. AIM To develop and validate a logistic regression-based risk prediction model for aortic adverse remodeling following TEVAR in patients with TBAD. METHODS This retrospective observational cohort study analyzed 140 TBAD patients undergoing TEVAR at a tertiary center (2019-2024). Based on European guidelines, patients were categorized into an adverse remodeling group (aortic growth rate > 2.9 mm/year, n = 45) and a favorable remodeling group (n = 95). Clinical, imaging, and surgical variables were analyzed using multivariable logistic regression to develop a predictive model. Model performance was assessed via the area under the receiver operating characteristic curve (AUC) and the Hosmer-Lemeshow test. RESULTS Multivariable analysis identified several strong independent predictors of adverse aortic remodeling. A larger false lumen diameter at the primary entry tear [odds ratio (OR): 1.561, 95%CI: 1.197-2.035; P = 0.001] and patency of the false lumen (OR: 5.639, 95%CI: 4.372-8.181; P = 0.004) were significant risk factors. False lumen involvement extending to the thoracoabdominal aorta was the strongest predictor, significantly increasing the risk of adverse remodeling (OR: 11.751, 95%CI: 9.841-15.612; P = 0.001). Conversely, false lumen involvement confined to the thoracic aorta had a significant protective effect (OR: 0.925, 95%CI: 0.614-0.831; P = 0.015). The prediction model exhibited excellent discrimination (AUC = 0.968) and calibration (Hosmer-Lemeshow P = 0.824). CONCLUSION This validated risk prediction model identifies aortic adverse remodeling with high accuracy using routinely available clinical parameters. False lumen involvement of the thoracoabdominal aorta is the strongest predictor (11.751-fold increased risk). The tool enables preoperative risk stratification to guide tailored TEVAR strategies and improve long-term outcomes.
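A minimal sketch of the general workflow, fitting a multivariable logistic regression and reporting odds ratios and AUC, is given below; the synthetic data, feature names, and coefficients are illustrative assumptions and do not reproduce the study's cohort or results.

```python
# Hedged sketch: a logistic-regression risk model with AUC evaluation, in the
# spirit of the abstract. Feature names and the synthetic data are illustrative
# assumptions and do not reproduce the study's cohort or coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 140
false_lumen_diam = rng.normal(20, 5, n)        # mm, assumed
false_lumen_patent = rng.integers(0, 2, n)     # 0/1 patency flag, assumed
thoracoabdominal = rng.integers(0, 2, n)       # involvement flag, assumed
logit = -6 + 0.15 * false_lumen_diam + 1.5 * false_lumen_patent + 2.2 * thoracoabdominal
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # synthetic adverse-remodeling labels

X = np.column_stack([false_lumen_diam, false_lumen_patent, thoracoabdominal])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("odds ratios:", np.round(np.exp(clf.coef_[0]), 2))
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```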
Funding: Funded by the National Key R&D Program of China, Grant No. 2024YFF0504904.
Abstract: The packaging quality of coaxial laser diodes (CLDs) plays a pivotal role in determining their optical performance and long-term reliability. As the core packaging process, high-precision laser welding requires precise control of process parameters to suppress optical power loss. However, the complex nonlinear relationship between welding parameters and optical power loss renders traditional trial-and-error methods inefficient and imprecise. To address this challenge, a physics-informed (PI) and data-driven collaborative approach to welding parameter optimization is proposed. First, a thermal-fluid-solid coupled finite element method (FEM) was employed to quantify the sensitivity of physical characteristics, including residual stress, to the welding parameters. This analysis identified the critical factors contributing to optical power loss. Subsequently, a Gaussian process regression (GPR) model incorporating prior knowledge from the finite element simulations was constructed from the selected features. By introducing physics-informed kernel (PIK) functions, stress distribution patterns were embedded into the prediction model, achieving high-precision optical power loss prediction. Finally, a Bayesian optimization (BO) algorithm with an adaptive sampling strategy was implemented for efficient exploration of the parameter space. Experimental results demonstrate that the proposed method effectively establishes explicit physical correlations between welding parameters and optical power loss. The optimized welding parameters reduced optical power loss by 34.1%, providing theoretical guidance and technical support for reliable CLD packaging.
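A rough sketch of the surrogate-modeling step is shown below; a plain RBF kernel and a one-shot grid query stand in for the paper's physics-informed kernel and Bayesian optimization loop, and the welding parameters and loss values are synthetic assumptions.

```python
# Hedged sketch: Gaussian process regression over welding parameters, followed
# by picking the candidate with the lowest predicted optical power loss. The
# physics-informed kernel and adaptive Bayesian optimization of the paper are
# replaced by a plain RBF kernel and a one-shot grid query; data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)
# Assumed welding parameters: laser power (W) and pulse width (ms)
X_train = rng.uniform([50, 1.0], [150, 5.0], size=(30, 2))
loss = (0.002 * (X_train[:, 0] - 100) ** 2
        + 0.5 * (X_train[:, 1] - 3.0) ** 2
        + rng.normal(0, 0.2, 30))              # synthetic optical power loss (%)

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(X_train, loss)

# Query a grid of candidate welding parameters and take the predicted minimum
powers, widths = np.meshgrid(np.linspace(50, 150, 50), np.linspace(1.0, 5.0, 50))
candidates = np.column_stack([powers.ravel(), widths.ravel()])
pred = gpr.predict(candidates)
best = candidates[np.argmin(pred)]
print("suggested (power, pulse width):", np.round(best, 2))
```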
Funding: Supported by the National Institute of Forest Science and the R&D Program for Forest Science Technology (No. 2022458B10-2224-0201) of the Korea Forest Service.
Abstract: Pinus densiflora is a pine species native to the Korean peninsula, and seed orchards have supplied the material needed for afforestation in South Korea. The climate variables affecting seed production have not been identified. The purpose of this study was to determine the climate variables that influence annual seed production in two seed orchards using multiple linear regression (MLR), elastic net regression (ENR), and partial least squares regression (PLSR) models. The PLSR model included 12 climatic variables from 2003 to 2020 and explained 74.3% of the total variation in seed production. It showed better predictive performance (R² = 0.662) than the ENR (0.516) and MLR (0.366) models. Among the 12 climatic variables, July temperature two years prior to seed production and July precipitation of the following year had the strongest influence on seed production. The time periods indicated by these two variables correspond to pollen cone initiation and female gametophyte development. The results will be helpful for developing seed collection plans, selecting new orchard sites with favorable climatic conditions, and investigating the relationships between seed production and climatic factors in related pine species.
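The sketch below compares PLSR, elastic net, and ordinary linear regression with cross-validated R² on synthetic data; the 12 placeholder climate predictors and the response are assumptions, not the orchards' records.

```python
# Hedged sketch: comparing PLSR, elastic net, and ordinary linear regression for
# predicting annual seed production from monthly climate variables. The data are
# synthetic and the 12 climate predictors are placeholders, not the study's own.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import ElasticNet, LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_years, n_climate = 18, 12        # 2003-2020 and 12 climate variables, as in the abstract
X = rng.normal(size=(n_years, n_climate))
y = 2.0 * X[:, 0] - 1.5 * X[:, 5] + rng.normal(0, 1.0, n_years)   # synthetic seed yield

models = {
    "PLSR": PLSRegression(n_components=3),
    "ENR": ElasticNet(alpha=0.1, l1_ratio=0.5),
    "MLR": LinearRegression(),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=3, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```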
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. DGSSR-2024-02-01137.
Abstract: Sonic Hedgehog medulloblastoma (SHH-MB) is one of the four primary molecular subgroups of medulloblastoma and is estimated to account for nearly one-third of all MB cases. Using transcriptomic and DNA methylation profiling techniques, recent developments in this field have defined four molecular subtypes of SHH-MB. These subtypes show distinct DNA methylation patterns that allow them to be discriminated from overlapping subtypes and that predict clinical outcomes. Class overlap occurs when two or more classes share common features, making it difficult to distinguish them as separate. Using a DNA methylation dataset, a novel classification technique is presented to address the issue of overlapping SHH-MB subtypes. Penalized multinomial regression (PMR), Tomek links (TL), and singular value decomposition (SVD) are integrated into a single framework. SVD and the group lasso improve computational efficiency, address the problem of high-dimensional datasets, and clarify class distinctions by removing redundant or irrelevant features that might lead to class overlap. To mitigate decision-boundary overlap and class imbalance in the classification task, TL improves dataset balance and sharpens decision boundaries by eliminating overlapping samples. Using fivefold cross-validation, the proposed method (TL-SVDPMR) achieved a remarkable overall accuracy of almost 95% in classifying the SHH-MB molecular subtypes. The results demonstrate strong performance across the various SHH-MB subtypes, with high average area under the curve (AUC) values. In addition, statistical significance testing indicates that TL-SVDPMR is more accurate than both SVM and random forest algorithms in classifying the overlapping SHH-MB subtypes, highlighting its value for precision medicine applications. These findings emphasize the success of combining SVD, TL, and PMR techniques to improve classification performance in biomedical applications with many features and overlapping subtypes.
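A loose approximation of the TL-SVDPMR pipeline built from off-the-shelf components is sketched below; an L1-penalized multinomial logistic regression stands in for the paper's group-lasso PMR, the Tomek-links step assumes the imbalanced-learn package, and the data are synthetic.

```python
# Hedged sketch: the TL-SVDPMR idea approximated with off-the-shelf pieces:
# TruncatedSVD for dimensionality reduction, Tomek links to remove overlapping
# samples, and an L1-penalized multinomial logistic regression standing in for
# the paper's group-lasso PMR. Data are synthetic; this is not the authors' code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from imblearn.under_sampling import TomekLinks

# Synthetic stand-in for a high-dimensional DNA-methylation matrix (4 subtypes)
X, y = make_classification(n_samples=400, n_features=2000, n_informative=60,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

X_svd = TruncatedSVD(n_components=50, random_state=0).fit_transform(X)
X_tl, y_tl = TomekLinks().fit_resample(X_svd, y)   # drop boundary-overlap samples

pmr = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
acc = cross_val_score(pmr, X_tl, y_tl, cv=5, scoring="accuracy").mean()
print(f"mean 5-fold accuracy: {acc:.3f}")
```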