Funding: Supported by the National Natural Science Foundation of China (No. 50678116), the National Key Technology R&D Program of China (No. 2006BAJ13B02), and the Tianjin Municipal Major Project of Application Foundation and Frontal Technology Research (No. 08JCZDJC19500).
Abstract: A simplified method is proposed for analyzing the overpressure history at an arbitrary point on the walls of a closed cuboid subjected to a point explosion at an arbitrary location inside it. First, the overpressure histories of all nodes on the walls of a cube with a side length of 2 m are computed with the LS-DYNA software for a reference-charge explosion at each node of its interior, and are collected to form a reference database. Next, following the way displacement models are constructed in isoparametric finite elements, an interpolating algorithm...
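Although the abstract is cut off, the interpolation it names follows the standard 8-node isoparametric (trilinear) shape functions. The sketch below is a minimal illustration of that blending step, not the paper's implementation; the reference-database layout (one precomputed history per cell corner) and all array names are assumptions.

```python
import numpy as np

def trilinear_shape_functions(xi, eta, zeta):
    """Standard 8-node hexahedral shape functions on the cube [-1, 1]^3."""
    signs = np.array([[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
                      [-1, -1,  1], [1, -1,  1], [1, 1,  1], [-1, 1,  1]])
    return 0.125 * ((1 + signs[:, 0] * xi)
                    * (1 + signs[:, 1] * eta)
                    * (1 + signs[:, 2] * zeta))

def interpolate_overpressure(node_histories, xi, eta, zeta):
    """Blend the 8 corner-node overpressure histories (shape (8, n_steps))
    at local charge coordinates (xi, eta, zeta) inside one database cell."""
    N = trilinear_shape_functions(xi, eta, zeta)  # weights sum to 1
    return N @ node_histories                     # one history of length n_steps
```

For a charge at the cell centre (xi = eta = zeta = 0), each of the eight nodal histories contributes with equal weight 1/8.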
Funding: Supported by the National Natural Science Foundation of China (Nos. 50678116, 50638030, and 50528808) and the National Key Technology R&D Program (No. 2006BAJ13B02).
Abstract: Based on the way displacement models are constructed in isoparametric finite elements, an extended interpolating algorithm is derived for calculating the overpressure history at an arbitrary point on the walls of a rectangular-section tunnel under a point explosion at an arbitrary location in its interior. Following this principle, the overpressure histories of all nodes on the walls of a tunnel whose width and height both equal 2 m, induced by a reference-charge explosion at each node of the tunnel's cross section, are computed with the LS-DYNA software and gathered into a reference database, which allows the positions of the explosive and of the observation point to be set arbitrarily. In addition, variation factors for the peak values and durations of the wall overpressure, reflecting changes in the charge weight and in the width and height of the section, are built into the algorithm so that the wall overpressure response can be approximated for an arbitrary charge weight and cross-section size. Example analyses demonstrate the speed and validity of the method, which therefore holds good prospects for engineering application.
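For the tunnel cross section, the analogous blending uses 4-node quadrilateral shape functions. The abstract does not give the paper's variation factors, so the duration adjustment below uses the common Hopkinson cube-root scaling as a plainly labeled stand-in; function and variable names are assumptions.

```python
import numpy as np

def bilinear_shape_functions(xi, eta):
    """4-node quadrilateral shape functions on [-1, 1]^2 (tunnel cross section)."""
    return 0.25 * np.array([(1 - xi) * (1 - eta),
                            (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta),
                            (1 - xi) * (1 + eta)])

def interpolate_section(node_histories, xi, eta):
    """Blend the 4 corner-node histories (shape (4, n_steps)) at local
    charge coordinates (xi, eta) within one cross-section database cell."""
    return bilinear_shape_functions(xi, eta) @ node_histories

def stretch_duration(t_ref, w_ratio):
    """Hopkinson-style stand-in for the paper's duration factor: times scale
    with the cube root of the charge-weight ratio W / W_ref."""
    return t_ref * w_ratio ** (1.0 / 3.0)
```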
Funding: Funded by the National Research Council of Thailand (NRCT) and Khon Kaen University (No. N42A650291).
Abstract: The goal of this research is to present simulation studies of a vector-host disease nonlinear system (VHDNS) treated numerically with artificial neural networks (ANNs) supported by Levenberg-Marquardt backpropagation (LMQBP), referred to as ANNs-LMQBP. The mechanism is physically appropriate where the number of infected people is increasing while health services are limited. Furthermore, the biological effects have fading memories and exhibit transition behavior. Initially, the model is developed by considering two and three categories for the humans and the vector species. The VHDNS is constructed with five classes, susceptible humans Sh(t), infected humans Ih(t), recovered humans Rh(t), infected vectors Iv(t), and susceptible vectors Sv(t), as a system of fractional-order nonlinear ordinary differential equations. To solve several variations of the VHDNS, numerical simulations are performed using the stochastic ANNs-LMQBP. The numerical solutions obtained with the stochastic ANNs-LMQBP are reported for the training, validation, and testing data so as to decrease the mean square error (MSE). An extensive analysis is provided using correlation studies, the MSE, error histograms (EHs), state transitions (STs), and regression to assess the accuracy, efficiency, and capability of the computed ANNs-LMQBP.
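As a rough illustration of this workflow, the sketch below builds reference data by integrating an integer-order stand-in for the fractional VHDNS (a fractional-order solver is out of scope here) and then fits a tiny one-hidden-layer surrogate to Ih(t) with SciPy's Levenberg-Marquardt least-squares routine, standing in for the paper's LMQBP training. All rate constants, initial conditions, and network sizes are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def vhdns(t, y, b=0.3, c=0.25, g=0.1, mh=0.02, mv=0.05):
    """Integer-order stand-in for the five-class VHDNS; all rates illustrative."""
    Sh, Ih, Rh, Sv, Iv = y
    dSh = mh - b * Sh * Iv - mh * Sh
    dIh = b * Sh * Iv - (g + mh) * Ih
    dRh = g * Ih - mh * Rh
    dSv = mv - c * Sv * Ih - mv * Sv
    dIv = c * Sv * Ih - mv * Iv
    return [dSh, dIh, dRh, dSv, dIv]

t = np.linspace(0, 50, 200)
ref = solve_ivp(vhdns, (0, 50), [0.9, 0.1, 0.0, 0.95, 0.05], t_eval=t).y

def mlp(w, x, n_hidden=10):
    """One-hidden-layer tanh network; w packs W1, b1, W2, b2."""
    W1, b1 = w[:n_hidden], w[n_hidden:2 * n_hidden]
    W2, b2 = w[2 * n_hidden:3 * n_hidden], w[-1]
    return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

# Fit the surrogate to the Ih(t) reference curve by Levenberg-Marquardt.
w0 = 0.1 * np.random.default_rng(0).standard_normal(3 * 10 + 1)
fit = least_squares(lambda w: mlp(w, t / 50) - ref[1], w0, method="lm")
print("final MSE:", np.mean(fit.fun ** 2))
```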
Funding: Supported by the Central Public-Interest Scientific Institution Basal Research Fund of the Chinese Academy of Agricultural Sciences (Y2022QC10) and the Agricultural Sciences and Technology Innovation Program of the Chinese Academy of Agricultural Sciences (CAAS-IFRZDRW202404, CAAS-ASTIP-2023-IFR-04).
Abstract: Accurate taxonomic classification is essential to understanding microbial diversity and function through metagenomic sequencing. However, this task is complicated by the vast variety of microbial genomes and the computational limitations of bioinformatics tools. The aim of this study was to evaluate the impact of reference-database selection and confidence score (CS) settings on the performance of Kraken2, a widely used k-mer-based metagenomic classifier. We generated simulated metagenomic datasets to systematically evaluate how the choice of reference database, from the compact Minikraken v1 to the expansive nt and GTDB r202, and different CS values (from 0 to 1.0) affect the key performance metrics of Kraken2: classification rate, precision, recall, F1 score, and the accuracy of estimated versus true bacterial abundances. Our results show that a higher CS, which increases the rigor of taxonomic classification by requiring greater k-mer agreement, generally decreases the classification rate. This effect is particularly pronounced for smaller databases such as Minikraken and Standard-16, where no reads could be classified once the CS exceeded 0.4. In contrast, for larger databases such as Standard, nt, and GTDB r202, precision and F1 scores improved significantly with increasing CS, highlighting their robustness under stringent conditions. Recall was mostly stable, indicating consistent detection of species across CS settings. Crucially, the results show that a comprehensive reference database combined with a moderate CS (0.2 or 0.4) significantly improves classification accuracy and sensitivity. This finding underscores the need to select database and CS parameters carefully, tailored to the specific scientific question and the available computational resources, to optimize metagenomic analyses.
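A confidence-score sweep of this kind can be scripted around the Kraken2 CLI (--db, --confidence, --output, and --report are real Kraken2 options). The sketch below is a minimal illustration, not the study's pipeline: the database name, read file, and truth-table format (read ID and true taxid, tab-separated) are assumptions, and precision/recall are computed naively at the exact-taxid level.

```python
import subprocess
from pathlib import Path

# Hypothetical inputs: a Kraken2 database and simulated reads with known taxa.
DB, READS, TRUTH = "standard_db", "sim_reads.fq", "truth.tsv"

truth = dict(line.rstrip("\n").split("\t")[:2] for line in open(TRUTH))

for cs in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    out = Path(f"kraken_cs{cs}.txt")
    subprocess.run(["kraken2", "--db", DB, "--confidence", str(cs),
                    "--output", str(out), "--report", f"report_cs{cs}.txt",
                    READS], check=True)
    tp = fp = unclassified = 0
    for line in open(out):
        # Kraken2 output columns: C/U flag, read ID, assigned taxid, ...
        status, read_id, taxid = line.split("\t")[:3]
        if status == "U":
            unclassified += 1
        elif taxid == truth.get(read_id):
            tp += 1   # classified to the true taxon
        else:
            fp += 1   # classified, but to the wrong taxon
    classified = tp + fp
    precision = tp / classified if classified else 0.0
    recall = tp / len(truth) if truth else 0.0
    print(f"CS={cs}: classified={classified}, "
          f"precision={precision:.3f}, recall={recall:.3f}")
```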