This study demonstrates the complexity and importance of water quality as a measure of the health and sustainability of ecosystems that directly influence biodiversity, human health, and the world economy. The predictability of water quality thus plays a crucial role in managing ecosystems, making informed decisions, and, hence, ensuring proper environmental management. This study addresses these challenges by proposing an effective machine learning methodology applied to the "Water Quality" public dataset. The methodology models the dataset to provide classification predictions with high values of the evaluation metrics, such as accuracy, sensitivity, and specificity. The proposed methodology is based on two approaches: (a) the SMOTE method to deal with unbalanced data and (b) carefully applied classical machine learning models. This paper uses Random Forests, Decision Trees, XGBoost, and Support Vector Machines because they can handle large datasets, train models on skewed data, and provide high accuracy in water quality classification. A key contribution of this work is the use of custom sampling strategies within the SMOTE approach, which significantly enhanced performance metrics and improved class-imbalance handling. The results demonstrate significant improvements in predictive performance, achieving the highest reported metrics: accuracy (98.92% vs. 96.06%), sensitivity (98.3% vs. 71.26%), and F1 score (98.37% vs. 79.74%) using the XGBoost model. These improvements underscore the effectiveness of the custom SMOTE sampling strategies in addressing class imbalance. The findings contribute to environmental management by enabling ecology specialists to develop more accurate strategies for monitoring, assessing, and managing drinking water quality, ensuring better ecosystem and public health outcomes.
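The abstract does not detail the custom sampling strategies, but the core SMOTE step it builds on — interpolating between a minority sample and one of its minority-class nearest neighbors — can be sketched in a minimal pure-Python form (toy data; a real pipeline would use a library implementation):

```python
import math
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating between
    a randomly chosen minority sample and one of its k nearest
    minority-class neighbors (the core SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new = smote(minority, n_new=6)
print(len(new))  # 6
```

Because each synthetic point lies on a segment between two minority samples, it always falls inside the minority region's convex hull.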
Effectively handling imbalanced datasets remains a fundamental challenge in computational modeling and machine learning, particularly when class overlap significantly deteriorates classification performance. Traditional oversampling methods often generate synthetic samples without considering density variations, leading to redundant or misleading instances that exacerbate class overlap in high-density regions. To address these limitations, we propose the Wasserstein Generative Adversarial Network with Variational Density Estimation (WGAN-VDE), a computationally efficient, density-aware adversarial resampling framework that enhances minority-class representation while strategically reducing class overlap. The originality of WGAN-VDE lies in its density-aware sample refinement, which ensures that synthetic samples are positioned in underrepresented regions, thereby improving class distinctiveness. By applying structured feature representation, targeted sample generation, and density-based selection mechanisms, the proposed framework ensures the generation of well-separated and diverse synthetic samples, improving class separability and reducing redundancy. An experimental evaluation on 20 benchmark datasets demonstrates that this approach outperforms 11 state-of-the-art rebalancing techniques, achieving superior results on the F1-score, Accuracy, G-Mean, and AUC metrics. These results establish the proposed method as an effective and robust computational approach, suitable for diverse engineering and scientific applications involving imbalanced data classification and computational modeling.
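WGAN-VDE itself is novel and its internals are not given in the abstract, but the density-based selection idea — keeping only candidate synthetic samples that land in low-density (underrepresented) minority regions — can be illustrated with a crude kernel density estimate (all names and parameters here are hypothetical, not the paper's):

```python
import math
import random

def kde(point, data, bandwidth=0.5):
    """Crude Gaussian kernel density estimate of the minority class at `point`."""
    d = len(point)
    norm = len(data) * (bandwidth * math.sqrt(2 * math.pi)) ** d
    total = sum(math.exp(-sum((p - q) ** 2 for p, q in zip(point, x))
                         / (2 * bandwidth ** 2)) for x in data)
    return total / norm

def density_filter(candidates, minority, keep_fraction=0.5):
    """Keep only the candidate synthetic samples that fall in the
    lowest-density minority regions (underrepresented areas)."""
    ranked = sorted(candidates, key=lambda c: kde(c, minority))
    return ranked[:max(1, int(len(ranked) * keep_fraction))]

rng = random.Random(1)
minority = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(50)]
candidates = [(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in range(40)]
kept = density_filter(candidates, minority)
print(len(kept))  # 20
```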
Monte Carlo (MC) simulations have been performed to refine the estimation of the correction-to-scaling exponent ω in the 2D φ^4 model, which belongs to one of the most fundamental universality classes. If corrections have the form ∝ L^(-ω), then we find ω = 1.546(30) and ω = 1.509(14) as the best estimates. These are obtained from the finite-size scaling of the susceptibility data in the range of linear lattice sizes L ∈ [128, 2048] at the critical value of the Binder cumulant, and from the scaling of the corresponding pseudocritical couplings within L ∈ [64, 2048]. These values agree with several other MC estimates under the assumption of power-law corrections and are comparable with the known results of the ε-expansion. In addition, we have tested the consistency with scaling corrections of the form ∝ L^(-4/3), ∝ L^(-4/3) ln L, and ∝ L^(-4/3)/ln L, which might be expected from some considerations of the renormalization group and the Coulomb gas model. The latter option is consistent with our MC data. Our MC results served as a basis for a critical reconsideration of some earlier theoretical conjectures and scaling assumptions. In particular, we have corrected and refined our previous analysis by grouping Feynman diagrams. The renewed analysis gives ω ≈ 4 - d - 2η as an approximation for spatial dimensions d < 4, or ω ≈ 1.5 in two dimensions.
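A generic form of the finite-size scaling ansatz with a single power-law correction, of the kind used to extract ω from susceptibility data (the paper's exact fit form may differ; γ/ν = 7/4 is the 2D Ising universality-class value), is:

```latex
% susceptibility with leading power-law correction-to-scaling term:
\chi(L) = A\,L^{\gamma/\nu}\left(1 + b\,L^{-\omega}\right), \qquad \gamma/\nu = 7/4 \ \text{(2D)}
% alternative correction forms tested against the data:
\delta\chi(L) \;\propto\; L^{-4/3}, \qquad L^{-4/3}\ln L, \qquad L^{-4/3}/\ln L
```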
Introducing the Flipped Class Model into college tennis courses helps stimulate students' enthusiasm for tennis instruction, strengthens their autonomy in learning, and enhances communication between teachers and students. Applying a SWOT analysis, this paper argues that introducing the Flipped Class Model into college tennis teaching will promote improvements in teachers' instructional ability, a shift in teacher and student roles during the teaching process, and changes in students' learning habits.
Lithofacies classification is essential for oil and gas reservoir exploration and development. The traditional method of lithofacies classification is based on "core calibration logging" and the experience of geologists. This approach has strong subjectivity, low efficiency, and high uncertainty, and this uncertainty may be one of the key factors affecting the results of 3D modeling of tight sandstone reservoirs. In recent years, deep learning, a cutting-edge artificial intelligence technology, has attracted attention from various fields, but the study of deep-learning techniques for lithofacies classification has not been sufficient. Therefore, this paper proposes a novel hybrid deep-learning model that combines the efficient data feature-extraction ability of convolutional neural networks (CNN) with the excellent ability of long short-term memory networks (LSTM) to describe time-dependent features, and uses it to conduct lithofacies-classification experiments. The results of a series of experiments show that the hybrid CNN-LSTM model had an average accuracy of 87.3% and the best classification effect compared with CNN, LSTM, and three commonly used machine learning models (support vector machine, random forest, and gradient boosting decision tree). In addition, the borderline synthetic minority oversampling technique (BSMOTE) is introduced to address the class-imbalance issue of the raw data. The results show that balancing the data can significantly improve the accuracy of lithofacies classification. Beyond that, based on the fine lithofacies constraints, the sequential indicator simulation method is used to establish a three-dimensional lithofacies model, which completes the fine description of the spatial distribution of tight sandstone reservoirs in the study area. According to this comprehensive analysis, the proposed CNN-LSTM model, which eliminates class imbalance, can be effectively applied to lithofacies classification and is expected to improve the realism of the geological model for tight sandstone reservoirs.
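The distinguishing step of borderline SMOTE (BSMOTE) is selecting only the "danger" minority samples — those near the class boundary — before oversampling. A minimal sketch of that selection rule on toy 2D points (the paper's features and k are not given; k = 5 here is illustrative):

```python
import math

def danger_samples(minority, majority, k=5):
    """Step 1 of borderline SMOTE: flag minority samples whose k nearest
    neighbors (over both classes) are mostly, but not entirely, majority
    points, i.e. k/2 <= m < k majority neighbors ("danger" samples)."""
    labeled = [(p, 0) for p in minority] + [(p, 1) for p in majority]
    danger = []
    for x in minority:
        nearest = sorted((d for d in labeled if d[0] is not x),
                         key=lambda d: math.dist(x, d[0]))[:k]
        m = sum(label for _, label in nearest)
        if k / 2 <= m < k:
            danger.append(x)
    return danger

minority = [(0.0, 0.0), (0.1, 0.1), (0.1, 0.0), (0.0, 0.1), (0.9, 0.9)]
majority = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.2, 1.2), (2.0, 2.0)]
print(danger_samples(minority, majority))  # [(0.9, 0.9)]
```

Only the borderline point near the majority cluster is flagged; the safe minority cluster at the origin is left alone, so synthetic samples get concentrated where classification is hardest.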
A model for a bubble column slurry reactor is developed based on the experiment of the Rhenpreussen Koppers demonstration plant for slurry-phase Fischer-Tropsch synthesis reported by Koelble et al. This model is applicable to operation in the churn-turbulent regime and incorporates information on the bubble size. The axial dispersion model is adopted to describe the flow characteristics of the Fischer-Tropsch slurry reactor. With the model developed, simulations are performed to identify the steady-state behavior of a Fischer-Tropsch slurry reactor of commercial size. Predictions of the two-bubble-class model are compared with those of the conventional single-bubble-class model. The results show that under a variety of conditions, the two-bubble-class model gives results different from those of the single-bubble-class model.
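The paper's two-bubble-class Fischer-Tropsch model is far richer, but its axial-dispersion backbone reduces, for a single species with first-order consumption in dimensionless steady-state form, to (1/Pe)·c″ − c′ − Da·c = 0 with Danckwerts inlet and zero-gradient outlet conditions. A sketch solver (Pe and Da values hypothetical) using central differences and the Thomas algorithm:

```python
def axial_dispersion_profile(pe=10.0, da=2.0, n=200):
    """Steady 1D axial dispersion model with first-order reaction:
    (1/Pe) c'' - c' - Da c = 0, Danckwerts inlet, zero-gradient outlet,
    solved with central differences and the Thomas algorithm."""
    h = 1.0 / n
    # tridiagonal system: a[i] c[i-1] + b[i] c[i] + d[i] c[i+1] = r[i]
    a = [0.0] * (n + 1); b = [0.0] * (n + 1)
    d = [0.0] * (n + 1); r = [0.0] * (n + 1)
    b[0], d[0], r[0] = 1.0 + 1.0 / (pe * h), -1.0 / (pe * h), 1.0  # Danckwerts inlet
    for i in range(1, n):
        a[i] = 1.0 / (pe * h * h) + 1.0 / (2.0 * h)
        b[i] = -2.0 / (pe * h * h) - da
        d[i] = 1.0 / (pe * h * h) - 1.0 / (2.0 * h)
    a[n], b[n] = -1.0, 1.0  # zero-gradient outlet: c[n] = c[n-1]
    for i in range(1, n + 1):  # Thomas forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * d[i - 1]
        r[i] -= w * r[i - 1]
    c = [0.0] * (n + 1)
    c[n] = r[n] / b[n]
    for i in range(n - 1, -1, -1):  # back substitution
        c[i] = (r[i] - d[i] * c[i + 1]) / b[i]
    return c

profile = axial_dispersion_profile()
print(round(profile[0], 3), round(profile[-1], 3))  # inlet > outlet; both in (0, 1)
```

The Danckwerts inlet condition produces the characteristic concentration jump c(0) < 1, and the profile decays monotonically toward the outlet.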
Latent class analysis (LCA) is a widely used statistical technique for identifying subgroups in the population based upon multiple indicator variables. It has a number of advantages over other unsupervised grouping procedures such as cluster analysis, including stronger theoretical underpinnings, more clearly defined measures of model fit, and the ability to conduct confirmatory analyses. In addition, it is possible to ascertain whether an LCA solution is equally applicable to multiple known groups, using invariance assessment techniques. This study compared the effectiveness of multiple statistics for detecting group LCA invariance, including a chi-square difference test, a bootstrap likelihood ratio test, and several information indices. Results of the simulation study found that the bootstrap likelihood ratio test was the optimal invariance assessment statistic. In addition to the simulation, LCA group invariance assessment was demonstrated in an application with the Youth Risk Behavior Survey (YRBS). Implications of the simulation results for practice are discussed.
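The mechanics of a parametric bootstrap likelihood ratio test for group invariance can be shown on a deliberately tiny model — a single Bernoulli rate shared across two known groups versus separate rates (a stand-in for the full LCA models, which are much larger):

```python
import math
import random

def loglik(successes, trials, p):
    p = min(max(p, 1e-12), 1 - 1e-12)
    return successes * math.log(p) + (trials - successes) * math.log(1 - p)

def lrt_stat(s1, n1, s2, n2):
    """LR statistic: separate group rates vs. one shared (invariant) rate."""
    pooled = loglik(s1 + s2, n1 + n2, (s1 + s2) / (n1 + n2))
    separate = loglik(s1, n1, s1 / n1) + loglik(s2, n2, s2 / n2)
    return 2.0 * (separate - pooled)

def bootstrap_pvalue(s1, n1, s2, n2, n_boot=2000, seed=0):
    """Parametric bootstrap of the LRT null distribution under invariance."""
    rng = random.Random(seed)
    observed = lrt_stat(s1, n1, s2, n2)
    p0 = (s1 + s2) / (n1 + n2)  # fitted null (invariant) rate
    hits = 0
    for _ in range(n_boot):
        b1 = sum(rng.random() < p0 for _ in range(n1))
        b2 = sum(rng.random() < p0 for _ in range(n2))
        hits += lrt_stat(b1, n1, b2, n2) >= observed
    return hits / n_boot

print(bootstrap_pvalue(30, 100, 60, 100))  # near 0: invariance rejected
```

The same recipe — refit under the restricted model, resample, recompute the statistic — is what the bootstrap LRT applies to the full latent class models.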
Background: Due to the high heterogeneity among hepatocellular carcinoma (HCC) patients receiving transarterial chemoembolization (TACE), the prognosis of patients varies significantly. Decision-making on the initiation and/or repetition of TACE under different liver functions is a matter of concern in clinical practice. Thus, we aimed to develop a prediction model for TACE candidates using risk stratification based on varied liver function. Methods: A total of 222 unresectable HCC patients who underwent TACE as their only treatment were included in this study. Cox proportional hazards regression was performed to select the independent risk factors and establish a predictive model for overall survival (OS). The model was validated in patients with different Child-Pugh classes and compared with previous TACE scoring systems. Results: Five independent risk factors, including alpha-fetoprotein (AFP) level, maximal tumor size, the increase of albumin-bilirubin (ALBI) grade score, tumor response, and the increase of aspartate aminotransferase (AST), were used to build a prognostic model (ASARA). In the training and validation cohorts, the OS of patients with an ASARA score ≤ 2 was significantly higher than that of patients with an ASARA score > 2 (P < 0.001 and P = 0.006, respectively). The ASARA model and its modified version AS(ARA) can effectively distinguish the OS (P < 0.001, P = 0.004) between patients with Child-Pugh class A and B, with C-indexes of 0.687 and 0.706, respectively. For repeated TACE, the ASARA model was superior to the Assessment for Retreatment with TACE (ART) score and to ASAR (ALBI grade, maximal tumor size, AFP, and tumor response) among Child-Pugh class A patients. For the first TACE, the performance of AS(ARA) was better than that of the modified hepatoma arterial-embolization prognostic (mHAP), mHAP3, and ASA(R) models among Child-Pugh class B patients. Conclusions: The ASARA scoring system is valuable in decision-making on TACE repetition for HCC patients, especially Child-Pugh class A patients. The modified AS(ARA) can be used to screen ideal candidates for TACE initiation among Child-Pugh class B patients with poor liver function.
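The stratification logic — one point per adverse factor, then a cut-off at 2 — can be sketched as follows; note the per-factor cut-offs (what counts as "high" AFP, "large" tumor, etc.) are NOT given in the abstract, so the boolean inputs here are hypothetical:

```python
def asara_score(afp_high, size_large, albi_increase, no_response, ast_increase):
    """Toy ASARA-style score: one point per adverse factor (AFP level,
    maximal tumor size, ALBI-grade-score increase, tumor non-response,
    AST increase). The real model's factor cut-offs and weights are
    defined in the paper, not here."""
    return sum([afp_high, size_large, albi_increase, no_response, ast_increase])

def risk_group(score, cutoff=2):
    # the paper reports significantly different OS for score <= 2 vs. > 2
    return "low risk" if score <= cutoff else "high risk"

s = asara_score(afp_high=True, size_large=True, albi_increase=False,
                no_response=True, ast_increase=False)
print(s, risk_group(s))  # 3 high risk
```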
The issue of document management has been raised for a long time, especially with the appearance of office automation in the 1980s, which led to dematerialization and Electronic Document Management (EDM). In the same period, workflow management experienced significant development, but became more focused on industry. It seems to us, however, that document workflows have not received the same interest from the scientific community. Nowadays, the emergence and supremacy of the Internet in electronic exchanges are leading to a massive dematerialization of documents, which requires a conceptual reconsideration of the organizational framework for the processing of those documents in both public and private administrations. This problem seems open to us and deserves the interest of the scientific community. Indeed, EDM has mainly focused on the storage (referencing) and circulation (traceability) of documents; it has paid little attention to the overall behavior of the system in processing documents. The purpose of our research is to model document processing systems. In previous works, we proposed a general model and its specialization to the case of small documents (any document processed by a single person at a time during its processing life cycle), which represent 70% of the documents processed by administrations, according to our study. In this contribution, we extend the model for processing small documents to the case where they are managed in a system comprising document classes organized into subclasses, which is the case for most administrations. We have thus observed that this model is a Markovian M^(L×K)/M^(L×K)/1 queueing network. We have analyzed the constraints of this model and deduced certain characteristics and metrics. In fine, the ultimate objective of our work is to design a document workflow management system integrating a component for predicting global behavior.
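Each node of such a Markovian queueing network is, in isolation and under the usual independence assumptions, an M/M/1 queue, whose standard steady-state metrics follow directly from the utilization ρ and Little's law (the rates below are hypothetical, e.g. documents per hour at one processing desk):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 steady-state formulas for one node of the network."""
    rho = arrival_rate / service_rate      # utilization, must be < 1
    assert rho < 1, "queue is unstable"
    L = rho / (1 - rho)                    # mean number in system
    W = L / arrival_rate                   # mean time in system (Little's law)
    Lq = rho ** 2 / (1 - rho)              # mean queue length
    Wq = Lq / arrival_rate                 # mean waiting time
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

m = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
print(m["rho"], round(m["L"], 6), round(m["W"], 3))  # 0.8 4.0 0.5
```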
A three-step XML Schema modeling method is presented: first establishing a conceptual modeling diagram, then transforming it into a UML class diagram, and finally mapping it to an XML Schema. A case study of handling furniture design data is given to illustrate the details of the conversion process.
A new lattice Bhatnagar-Gross-Krook (LBGK) model for a class of generalized Burgers equations is proposed. It is a general LBGK model for nonlinear Burgers equations with a source term in arbitrary-dimensional space. The linear stability of the model is also studied. The model is numerically tested on three problems in spaces of different dimensions, and the numerical results are compared with either analytic solutions or numerical results obtained by other methods. Satisfactory results are obtained in the numerical simulations.
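A toy one-dimensional flavor of the idea — not the paper's general multi-dimensional model with source term — is a two-velocity (D1Q2) LBGK scheme for u_t + (u²/2)_x = ν u_xx, where the equilibria are chosen so their moments reproduce the density u and the flux u²/2, and the relaxation time τ sets the effective viscosity:

```python
import math

def lbgk_burgers(u0, tau=1.0, steps=200):
    """Minimal D1Q2 lattice-BGK sketch for the 1D Burgers equation on a
    periodic domain (lattice units c = dx = dt = 1). Populations stream
    along velocities +1/-1 and relax toward local equilibria whose
    moments give density u and flux u^2/2."""
    f_plus = [u / 2 + u * u / 4 for u in u0]    # velocity +1 population
    f_minus = [u / 2 - u * u / 4 for u in u0]   # velocity -1 population
    for _ in range(steps):
        u = [fp + fm for fp, fm in zip(f_plus, f_minus)]
        # BGK collision toward local equilibrium
        f_plus = [fp + ((ui / 2 + ui * ui / 4) - fp) / tau
                  for fp, ui in zip(f_plus, u)]
        f_minus = [fm + ((ui / 2 - ui * ui / 4) - fm) / tau
                   for fm, ui in zip(f_minus, u)]
        # streaming: shift populations along their velocities (periodic)
        f_plus = [f_plus[-1]] + f_plus[:-1]
        f_minus = f_minus[1:] + f_minus[:1]
    return [fp + fm for fp, fm in zip(f_plus, f_minus)]

u0 = [0.2 + 0.1 * math.sin(2 * math.pi * i / 64) for i in range(64)]
u = lbgk_burgers(u0)
print(abs(sum(u) - sum(u0)) < 1e-6)  # True: total "mass" is conserved
```

Collision conserves the zeroth moment at each site and periodic streaming only permutes populations, so the scheme is exactly conservative, which the test checks.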
The transportation problem is one of the significant areas of application of the linear programming model. In this paper, a transportation model is used to determine an optimal solution to the transportation problem in a typical world-class university, using Covenant University as a case study. Covenant University is a potential world-class university, and the rapid growth of its campus over the last fourteen years has affected its transportation system. This paper specifically looks at optimizing the time students spend moving from their hostels to lecture rooms. Google Maps was used to compute the distance and travel time between each origin and each destination. The Northwest Corner method, the Least Cost method, and Vogel's Approximation Method were used to determine the initial basic feasible solution, and the MODI method was used to find the optimal solution. The final result demonstrates that the movement of students from hostels to lecture rooms can be optimized by reducing the total time spent.
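The first of those steps, the Northwest Corner method, is simple enough to sketch completely; the supplies and demands below are an invented balanced toy instance, not the paper's Covenant University data:

```python
def northwest_corner(supply, demand):
    """Northwest Corner method: walk the cost table from the top-left cell,
    allocating as much as possible at each step, to get an initial basic
    feasible solution for a balanced transportation problem."""
    supply, demand = supply[:], demand[:]  # work on copies
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        x = min(supply[i], demand[j])
        alloc[i][j] = x
        supply[i] -= x
        demand[j] -= x
        if supply[i] == 0:
            i += 1   # row exhausted: move down
        else:
            j += 1   # column exhausted: move right
    return alloc

# toy instance: 3 hostels (supply = students) x 3 lecture halls (demand = seats)
alloc = northwest_corner([30, 40, 30], [20, 50, 30])
print(alloc)  # [[20, 10, 0], [0, 40, 0], [0, 0, 30]]
```

The resulting allocation satisfies every row (supply) and column (demand) total; the MODI method then iteratively improves it toward optimality.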
Discriminant space defining area classes is an important conceptual construct for uncertainty characterization in area-class maps. Discriminant models have been promoted because they can enhance consistency in area-class mapping and replicability in error modeling. As area classes are rarely completely separable in empirically realized discriminant space, and class inseparability becomes more complicated for change categorization, we seek to quantify uncertainty in area classes (and change classes) due to measurement errors and semantic discrepancy separately, and hence to assess their relative margins objectively. Experiments using real datasets were carried out, and a Bayesian method was used to obtain change maps. We found that there are large differences between uncertainty statistics referring to data classes and to information classes. Therefore, uncertainty characterization in change categorization should be based on discriminant modeling of measurement errors together with semantic mismatch analysis, enabling quantification of uncertainty due to partially random measurement errors and to systematic categorical discrepancies, respectively.
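The Bayesian classification step can be illustrated in one dimension: class-conditional densities over the discriminant variable, combined with priors via Bayes' rule, give per-class posteriors (the class names, Gaussians, and parameters here are invented stand-ins):

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior(x, classes):
    """Bayes rule over area classes: p(c|x) proportional to p(x|c) p(c),
    with 1D Gaussian class-conditional densities standing in for the
    empirically realized discriminant space."""
    joint = {c: gaussian(x, mu, sigma) * prior
             for c, (mu, sigma, prior) in classes.items()}
    z = sum(joint.values())
    return {c: v / z for c, v in joint.items()}

classes = {"water": (0.2, 0.1, 0.5), "forest": (0.6, 0.15, 0.5)}  # mu, sigma, prior
post = posterior(0.25, classes)
print(max(post, key=post.get))  # water
```

Where the two densities overlap, both posteriors stay well away from 0 and 1 — exactly the inseparability the abstract identifies as a source of uncertainty.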
We present a stochastic critical-slope sandpile model in which the number of grains that fall in an overturning event is a stochastic variable. The model is local, conservative, and Abelian. We apply moment analysis to evaluate the critical exponents and the finite-size scaling method to consistently test the obtained results. Numerical results show that this model, the Oslo model, and the one-dimensional Abelian Manna model have the same critical behavior although the three models have different stochastic toppling rules, which provides evidence suggesting that Abelian sandpile models with different stochastic toppling rules are in the same universality class.
The Federal Railroad Administration (FRA)'s Web Based Accident Prediction System (WBAPS) is used by federal, state, and local agencies to get a preliminary idea of safety at a rail-highway grade crossing. It is an interactive and user-friendly tool used to make funding decisions. WBAPS is almost three decades old and involves a three-step approach, making it difficult to interpret the contribution of the variables included in the model. It also does not directly account for regional/local developments and technological advancements pertaining to signals and signs implemented at rail-highway grade crossings. Further, the characteristics of a rail-highway grade crossing vary by track class, which is not explicitly considered by WBAPS. This research therefore examines and develops a method and models to estimate crashes at rail-highway grade crossings by track class using regional/local level data. The method and models, developed for each track class as well as for all track classes together, are based on data for the state of North Carolina. Linear models, as well as count models based on the Poisson and Negative Binomial (NB) distributions, were tested for applicability. Negative binomial models were found to be the best fit for the data used in this research. Models for each track class have better goodness-of-fit statistics than the model considering data for all track classes together, primarily because traffic, design, and operational characteristics at rail-highway grade crossings differ for each track class. The findings from the statistical models in this research are supported by model validation.
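The usual diagnostic behind preferring the Negative Binomial over the Poisson is overdispersion — a sample variance well above the mean, which the Poisson (variance = mean) cannot accommodate. A quick check on invented crash counts:

```python
def dispersion(counts):
    """Sample mean, variance, and dispersion ratio; a ratio well above 1
    (overdispersion) is the usual reason to prefer the Negative Binomial
    over the Poisson for crash counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return mean, var, var / mean

# hypothetical yearly crash counts at grade crossings on one track class
crashes = [0, 0, 1, 0, 2, 0, 0, 5, 1, 0, 0, 3, 0, 0, 7]
mean, var, ratio = dispersion(crashes)
print(ratio > 1)  # True: overdispersed, NB is a better fit than Poisson
```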
Pneumonia is an acute lung infection that has caused many fatalities globally. Radiologists often employ chest X-rays to identify pneumonia, since they are presently the most effective imaging method for this purpose. Computer-aided diagnosis of pneumonia using deep learning techniques is widely used due to its effectiveness and performance. In the proposed method, the Synthetic Minority Oversampling Technique (SMOTE) approach is used to eliminate the class imbalance in the X-ray dataset. To compensate for the paucity of accessible data, pre-trained transfer learning is used, and an ensemble Convolutional Neural Network (CNN) model is developed. The ensemble model consists of all possible combinations of the MobileNetV2, Visual Geometry Group (VGG16), and DenseNet169 models. MobileNetV2 and DenseNet169 performed well as single classifiers, with an accuracy of 94%, while the ensemble model (MobileNetV2 + DenseNet169) achieved an accuracy of 96.9%. Using the data-synchronous parallel model in distributed TensorFlow, the training process accelerated performance by 98.6% and outperformed other conventional approaches.
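The abstract does not say how the member models are combined; one common scheme, shown here purely as an illustration, is soft voting — averaging the per-class probabilities of the member CNNs (the probability values below are invented):

```python
def ensemble_predict(prob_lists):
    """Soft voting: average the class probabilities of several classifiers
    and return the winning class index plus the averaged distribution."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# hypothetical per-model [P(normal), P(pneumonia)] for one X-ray
mobilenet = [0.40, 0.60]
densenet = [0.20, 0.80]
label, avg = ensemble_predict([mobilenet, densenet])
print(label, [round(a, 2) for a in avg])  # 1 [0.3, 0.7]
```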
The presented research aims to design a new prevention class (P) in the HIV nonlinear system, i.e., the HIPV model. Numerical treatment of the newly formulated HIPV model is then performed using stochastic-procedure-based numerical computing schemes that exploit artificial neural network (ANN) modeling together with the optimization competence of a hybrid of global and local search schemes, namely genetic algorithms (GAs) and the active-set approach (ASA), i.e., GA-ASA. The optimization performance of GA-ASA is assessed by presenting an error-based fitness function designed for all the classes of the HIPV model and its corresponding initial conditions, represented as nonlinear systems of ODEs. To check the accuracy of the proposed stochastic scheme, the obtained results are compared with Adams numerical results. For the convergence measures, learning curves are presented for different contact rate values. Moreover, the statistical performance across different operators indicates the stability and reliability of the proposed stochastic scheme for solving the newly designed HIPV model.
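The global-search half of GA-ASA can be sketched with a minimal real-coded genetic algorithm; the fitness below is a toy squared-residual function standing in for the paper's error-based fitness built from the HIPV ODE residuals, and in the real scheme a local active-set refinement would follow the GA:

```python
import random

def genetic_minimize(fitness, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-coded GA: tournament selection, blend (midpoint)
    crossover, Gaussian mutation clipped to the bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament
            a, b = rng.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            child = [(pi + qi) / 2 for pi, qi in zip(p, q)]       # crossover
            child = [min(hi[d], max(lo[d], c + rng.gauss(0, 0.1)))
                     for d, c in enumerate(child)]                # mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = genetic_minimize(lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2,
                        ([-2.0, -2.0], [2.0, 2.0]))
print([round(b, 2) for b in best])  # near [0.3, -0.7]
```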
Stop frequency models, as one of the elements of activity-based models, represent an important part of travel behavior. Unobserved heterogeneity across travelers should be taken into consideration to prevent bias and inconsistency in the estimated parameters of stop frequency models. Additionally, previous studies on stop frequency have mostly been done in larger metropolitan areas, and less attention has been paid to areas with smaller populations. This study addresses these gaps by using 2012 travel data from a medium-sized U.S. urban area, using the work tour as the case study. Stops in the work tour were classified into three groups: the outbound leg, the work-based subtour, and the inbound leg of the commute. Latent Class Poisson Regression Models were used to analyze the data. The results indicate the presence of heterogeneity across commuters. Using latent class models significantly improves the predictive power compared with regular one-class Poisson regression models. In contrast to one-class Poisson models, gender becomes insignificant in predicting the number of tours when unobserved heterogeneity is accounted for. Commuters make more stops on their work-based subtour when the employment density of service-related occupations increases in their work zone, but the employment density of retail employment does not significantly contribute to the stop-making likelihood of commuters. Additionally, an increase in the number of work tours was associated with fewer stops on the inbound leg of the commute. The results of this study suggest accounting for unobserved heterogeneity in stop frequency models and help transportation agencies and policy makers make better inferences from such models.
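The latent-class idea can be shown in its simplest form — a two-component Poisson mixture fitted by EM, where the E-step assigns each traveler a class responsibility and the M-step re-estimates the class rates (the stop counts below are synthetic, and the real models also include regression covariates, which this sketch omits):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def two_class_poisson_em(counts, iters=100):
    """EM for a two-component latent-class Poisson mixture: the E-step
    computes class responsibilities, the M-step re-estimates the class
    rates and class weights from those responsibilities."""
    lam = [min(counts) + 0.5, max(counts) + 0.5]  # crude initialization
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of class 0 for each observation
        r0 = []
        for k in counts:
            p0 = w[0] * poisson_pmf(k, lam[0])
            p1 = w[1] * poisson_pmf(k, lam[1])
            r0.append(p0 / (p0 + p1))
        # M-step: responsibility-weighted means and class weights
        n0 = sum(r0)
        n1 = len(counts) - n0
        lam[0] = sum(r * k for r, k in zip(r0, counts)) / n0
        lam[1] = sum((1 - r) * k for r, k in zip(r0, counts)) / n1
        w = [n0 / len(counts), n1 / len(counts)]
    return lam, w

# synthetic stop counts: a low-rate class and a high-rate class of commuters
counts = [0, 1, 0, 2, 1, 0, 1, 5, 6, 7, 5, 8, 6, 7]
lam, w = two_class_poisson_em(counts)
print(round(min(lam), 1), round(max(lam), 1))  # two well-separated rates
```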
Improving college students' listening and speaking ability is a very important part of college English teaching. Based on foreign and domestic studies on how to improve students' listening and speaking ability, this paper explores the 1 + 1 model for the listening and speaking class, which divides the class into two parts: a small speaking class combined with students' online autonomous learning. Although a one-year experiment and study with two classes found no significant difference between the classes' performances, the study sheds some light on how to vary teaching methods, improve class efficiency and students' autonomy in learning, and build a new assessment system. Further studies could build on this work.
With the development of computer technology, the way languages are taught has changed greatly. In the past, language teaching methods mainly included the direct method, grammar-translation, the audio-lingual method, the structural approach, communicative language teaching, community language learning, and task-based language learning; these methods arose and developed with the need for, and application of, certain theories. Now the computer is widely used in every field and the Internet is an integral part of daily life, so the ways of teaching language are changing accordingly. This paper analyzes a new way of language teaching, the flipped class model. The first part is a general introduction to the flipped class model, and the second part covers the application of the flipped class model to a translation course.
Funding: the WGAN-VDE study was supported by the Ongoing Research Funding Program (ORF-2025-488), King Saud University, Riyadh, Saudi Arabia.
文摘Effectively handling imbalanced datasets remains a fundamental challenge in computational modeling and machine learning,particularly when class overlap significantly deteriorates classification performance.Traditional oversampling methods often generate synthetic samples without considering density variations,leading to redundant or misleading instances that exacerbate class overlap in high-density regions.To address these limitations,we propose Wasserstein Generative Adversarial Network Variational Density Estimation WGAN-VDE,a computationally efficient density-aware adversarial resampling framework that enhances minority class representation while strategically reducing class overlap.The originality of WGAN-VDE lies in its density-aware sample refinement,ensuring that synthetic samples are positioned in underrepresented regions,thereby improving class distinctiveness.By applying structured feature representation,targeted sample generation,and density-based selection mechanisms strategies,the proposed framework ensures the generation of well-separated and diverse synthetic samples,improving class separability and reducing redundancy.The experimental evaluation on 20 benchmark datasets demonstrates that this approach outperforms 11 state-of-the-art rebalancing techniques,achieving superior results in F1-score,Accuracy,G-Mean,and AUC metrics.These results establish the proposed method as an effective and robust computational approach,suitable for diverse engineering and scientific applications involving imbalanced data classification and computational modeling.
Abstract: Monte Carlo (MC) simulations have been performed to refine the estimation of the correction-to-scaling exponent ω in the 2D φ^(4) model, which belongs to one of the most fundamental universality classes. If corrections have the form ∝ L^(-ω), then we find ω = 1.546(30) and ω = 1.509(14) as the best estimates. These are obtained from the finite-size scaling of the susceptibility data in the range of linear lattice sizes L ∈ [128, 2048] at the critical value of the Binder cumulant, and from the scaling of the corresponding pseudocritical couplings within L ∈ [64, 2048]. These values agree with several other MC estimates under the assumption of power-law corrections and are comparable with the known results of the ε-expansion. In addition, we have tested the consistency with scaling corrections of the form ∝ L^(-4/3), ∝ L^(-4/3) ln L, and ∝ L^(-4/3)/ln L, which might be expected from some considerations of the renormalization group and the Coulomb gas model. The latter option is consistent with our MC data. Our MC results served as a basis for a critical reconsideration of some earlier theoretical conjectures and scaling assumptions. In particular, we have corrected and refined our previous analysis based on grouping Feynman diagrams. The renewed analysis gives ω ≈ 4 - d - 2η as an approximation for spatial dimensions d < 4, or ω ≈ 1.5 in two dimensions.
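The finite-size scaling fits described above rest on the standard ansatz with a leading correction term. Here A and B are non-universal amplitudes, and γ/ν = 7/4 (with η = 1/4) are the exactly known 2D Ising-universality-class values:

```latex
\chi(L) \;=\; A\,L^{\gamma/\nu}\!\left(1 + B\,L^{-\omega} + \cdots\right),
\qquad \frac{\gamma}{\nu} = \frac{7}{4}\ \text{in two dimensions.}
```

With the quoted approximation ω ≈ 4 - d - 2η, inserting d = 2 and η = 1/4 indeed gives ω ≈ 4 - 2 - 1/2 = 1.5, consistent with the MC estimates ω = 1.546(30) and ω = 1.509(14) reported above.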
Abstract: Introducing the Flipped Class Model into college tennis courses helps stimulate students' enthusiasm for learning tennis, strengthen their autonomy in learning, and enhance communication between teachers and students. Applying SWOT analysis, this paper argues that introducing the Flipped Class Model into college tennis teaching will promote improvements in teachers' teaching ability, a shift in the roles of teachers and students during instruction, and changes in students' learning habits.
Funding: Supported by the Fundamental Research Funds for the Central Universities (Grant No. 300102278402).
Abstract: Lithofacies classification is essential for oil and gas reservoir exploration and development. The traditional method of lithofacies classification is based on "core calibration logging" and the experience of geologists. This approach has strong subjectivity, low efficiency, and high uncertainty. This uncertainty may be one of the key factors affecting the results of 3D modeling of tight sandstone reservoirs. In recent years, deep learning, a cutting-edge artificial intelligence technology, has attracted attention from various fields. However, the study of deep-learning techniques in the field of lithofacies classification has not been sufficient. Therefore, this paper proposes a novel hybrid deep-learning model that combines the efficient feature-extraction ability of convolutional neural networks (CNN) with the excellent ability of long short-term memory networks (LSTM) to describe time-dependent features, and uses it to conduct lithofacies-classification experiments. The results of a series of experiments show that the hybrid CNN-LSTM model had an average accuracy of 87.3% and the best classification performance compared to the CNN, the LSTM, and three commonly used machine learning models (support vector machine, random forest, and gradient boosting decision tree). In addition, the borderline synthetic minority oversampling technique (BSMOTE) is introduced to address the class-imbalance issue of the raw data. The results show that balancing the data can significantly improve the accuracy of lithofacies classification. Furthermore, based on the fine lithofacies constraints, the sequential indicator simulation method is used to establish a three-dimensional lithofacies model, which completes the fine description of the spatial distribution of tight sandstone reservoirs in the study area. According to this comprehensive analysis, the proposed CNN-LSTM model, which eliminates class imbalance, can be effectively applied to lithofacies classification and is expected to improve the realism of the geological model for tight sandstone reservoirs.
Abstract: A model for a bubble column slurry reactor is developed based on the experiment of the Rhenpreussen Koppers demonstration plant for slurry-phase Fischer-Tropsch synthesis reported by Koelble et al. This model is applicable to operation in the churn-turbulent regime and incorporates information on the bubble size. The axial dispersion model is adopted to describe the flow characteristics of the Fischer-Tropsch slurry reactor. With the model developed, simulations are performed to identify the steady-state behavior of a Fischer-Tropsch slurry reactor of commercial size. Predictions of the two-bubble class model are compared with those of the conventional single-bubble class model. The results show that under a variety of conditions, the two-bubble class model gives results different from those of the single-bubble class model.
Abstract: Latent class analysis (LCA) is a widely used statistical technique for identifying subgroups in the population based upon multiple indicator variables. It has a number of advantages over other unsupervised grouping procedures such as cluster analysis, including stronger theoretical underpinnings, more clearly defined measures of model fit, and the ability to conduct confirmatory analyses. In addition, it is possible to ascertain whether an LCA solution is equally applicable to multiple known groups using invariance assessment techniques. This study compared the effectiveness of multiple statistics for detecting group LCA invariance, including a chi-square difference test, a bootstrap likelihood ratio test, and several information indices. The results of the simulation study showed that the bootstrap likelihood ratio test was the optimal invariance assessment statistic. In addition to the simulation, LCA group invariance assessment was demonstrated in an application with the Youth Risk Behavior Survey (YRBS). Implications of the simulation results for practice are discussed.
Funding: This study was supported by a grant from the Tianjin Key Medical Discipline (Specialty) Construction Project.
Abstract: Background: Due to the high heterogeneity among hepatocellular carcinoma (HCC) patients receiving transarterial chemoembolization (TACE), the prognosis of patients varies significantly. The decision-making on the initiation and/or repetition of TACE under different liver functions is a matter of concern in clinical practice. Thus, we aimed to develop a prediction model for TACE candidates using risk stratification based on varied liver function. Methods: A total of 222 unresectable HCC patients who underwent TACE as their only treatment were included in this study. Cox proportional hazards regression was performed to select the independent risk factors and establish a predictive model for overall survival (OS). The model was validated in patients with different Child-Pugh classes and compared to previous TACE scoring systems. Results: Five independent risk factors, including alpha-fetoprotein (AFP) level, maximal tumor size, the increase of albumin-bilirubin (ALBI) grade score, tumor response, and the increase of aspartate aminotransferase (AST), were used to build a prognostic model (ASARA). In the training and validation cohorts, the OS of patients with an ASARA score ≤ 2 was significantly higher than that of patients with an ASARA score > 2 (P < 0.001 and P = 0.006, respectively). The ASARA model and its modified version "AS(ARA)" can effectively distinguish the OS (P < 0.001, P = 0.004) between patients with Child-Pugh class A and B, with C-indexes of 0.687 and 0.706, respectively. For repeated TACE, the ASARA model was superior to the Assessment for Retreatment with TACE (ART) score and to ASAR (ALBI grade, maximal tumor size, AFP, and tumor response) among Child-Pugh class A patients. For the first TACE, the performance of AS(ARA) was better than that of the modified hepatoma arterial-embolization prognostic (mHAP), mHAP3, and ASA(R) models among Child-Pugh class B patients. Conclusions: The ASARA scoring system is valuable in the decision-making of TACE repetition for HCC patients, especially Child-Pugh class A patients. The modified AS(ARA) can be used to screen ideal candidates for TACE initiation in Child-Pugh class B patients with poor liver function.
Abstract: The issue of document management has been raised for a long time, especially with the appearance of office automation in the 1980s, which led to dematerialization and Electronic Document Management (EDM). In the same period, workflow management experienced significant development, but became more focused on industry. However, it seems to us that document workflows have not received the same interest from the scientific community. Nowadays, the emergence and supremacy of the Internet in electronic exchanges are leading to a massive dematerialization of documents, which requires a conceptual reconsideration of the organizational framework for the processing of those documents in both public and private administrations. This problem seems open to us and deserves the interest of the scientific community. Indeed, EDM has mainly focused on the storage (referencing) and circulation (traceability) of documents. It has paid little attention to the overall behavior of the system in processing documents. The purpose of our research is to model document processing systems. In previous works, we proposed a general model and its specialization to the case of small documents (any document processed by a single person at a time during its processing life cycle), which represent 70% of the documents processed by administrations, according to our study. In this contribution, we extend the model for processing small documents to the case where they are managed in a system comprising document classes organized into subclasses, which is the case for most administrations. We have thus observed that this model is a Markovian M^(L×K)/M^(L×K)/1 queueing network. We have analyzed the constraints of this model and deduced certain characteristics and metrics. In the end, the ultimate objective of our work is to design a document workflow management system integrating a component of global behavior prediction.
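The queueing network above builds on the elementary M/M/1 relations for a single exponential server. A minimal sketch of those base formulas (the arrival and service rates are illustrative, not from the study):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of an M/M/1 queue with Poisson arrival rate
    lam and exponential service rate mu (requires lam < mu for stability)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu         # server utilization
    l = rho / (1.0 - rho)  # mean number of documents in the system
    w = 1.0 / (mu - lam)   # mean time a document spends in the system
    return rho, l, w

# Example: documents arrive at 3 per hour, are processed at 4 per hour.
rho, l, w = mm1_metrics(lam=3.0, mu=4.0)
print(rho, l, w)  # 0.75 3.0 1.0
```

Little's law (L = λW) ties the two output metrics together: 3.0 = 3.0 × 1.0, a quick sanity check that also holds node-by-node in a larger network of such queues.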
Funding: Supported by the National Key Project Foundation of China (No. 2001BA201A06).
Abstract: A three-step XML Schema modeling method is presented: first establishing a conceptual model diagram, then transforming it into a UML class diagram, and finally mapping it to an XML Schema. A case study on handling furniture design data is given to illustrate the details of the conversion process.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 70271069 and 60073044).
Abstract: A new lattice Bhatnagar-Gross-Krook (LBGK) model for a class of generalized Burgers equations is proposed. It is a general LBGK model for nonlinear Burgers equations with a source term in arbitrary-dimensional space. The linear stability of the model is also studied. The model is numerically tested on three problems in spaces of different dimensions, and the numerical results are compared with either analytic solutions or numerical results obtained by other methods. Satisfactory results are obtained by the numerical simulations.
Abstract: Transportation is one of the significant areas of application of the linear programming model. In this paper, a transportation model is used to determine an optimal solution to the transportation problem in a typical world-class university, using Covenant University as a case study. Covenant University is a potential world-class university. The rapid growth of the Covenant University campus over the last fourteen years has affected its transportation system. This paper specifically looks at optimizing the time spent by students moving from their hostels to lecture rooms. Google Maps was used to compute the distance and time between every origin and every destination. The North-West Corner method, the Least Cost method, and Vogel's approximation method were used to determine the initial basic feasible solution, and the MODI method was used to find the optimal solution. The final result shows that the movement of students from hostels to lecture rooms can be optimized, reducing the total time spent.
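The initial-solution step described above can be illustrated with a minimal North-West Corner implementation. The supply/demand figures below are hypothetical (two hostels, three lecture halls), not taken from the study:

```python
def northwest_corner(supply, demand):
    """Initial basic feasible solution of a balanced transportation
    problem via the North-West Corner rule: fill cells from the top-left,
    exhausting each row's supply or column's demand in turn."""
    supply, demand = supply[:], demand[:]  # work on copies
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:  # row exhausted: move down
            i += 1
        else:               # column exhausted: move right
            j += 1
    return alloc

plan = northwest_corner([30, 50], [20, 40, 20])
print(plan)  # [[20, 10, 0], [0, 30, 20]]
```

The rule ignores travel times entirely, which is why the result is only a starting point: the MODI method then iteratively improves it toward the optimal allocation.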
Funding: Supported by the National Natural Science Foundation of China (Nos. 41171346 and 41071286), the Fundamental Research Funds for the Central Universities (No. 20102130103000005), and the National 973 Program of China (No. 2007CB714402-5).
Abstract: Discriminant space defining area classes is an important conceptual construct for uncertainty characterization in area-class maps. Discriminant models were promoted as they can enhance consistency in area-class mapping and replicability in error modeling. As area classes are rarely completely separable in empirically realized discriminant space, where class inseparability becomes more complicated for change categorization, we seek to quantify uncertainty in area classes (and change classes) due to measurement errors and semantic discrepancy separately, and hence to assess their relative margins objectively. Experiments using real datasets were carried out, and a Bayesian method was used to obtain change maps. We found that there are large differences between uncertainty statistics referring to data classes and information classes. Therefore, uncertainty characterization in change categorization should be based on discriminant modeling of measurement errors and semantic mismatch analysis, enabling quantification of uncertainty due to partially random measurement errors and systematic categorical discrepancies, respectively.
Funding: Supported by the National Natural Science Foundation of China and the State Key Laboratory of Laser of China.
Abstract: We present a stochastic critical-slope sandpile model, where the amount of grains that fall in an overturning event is a stochastic variable. The model is local, conservative, and Abelian. We apply moment analysis to evaluate critical exponents and the finite-size scaling method to consistently test the obtained results. Numerical results show that this model, the Oslo model, and the one-dimensional Abelian Manna model have the same critical behavior although the three models have different stochastic toppling rules, which provides evidence suggesting that Abelian sandpile models with different stochastic toppling rules are in the same universality class.
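The stochastic-toppling idea above can be made concrete with a toy relaxation loop. This sketch is height-driven rather than slope-driven (unlike the paper's model) and uses made-up parameters; it only illustrates how a stochastic toppling size and random grain redistribution fit together:

```python
import random

def relax(heights, threshold=2, rng=random.Random(1)):
    """Relax a 1D pile until no site holds >= threshold grains.
    A toppling site sheds a stochastic number of grains (threshold..height),
    each grain moving to a random neighbour; grains stepping off either end
    are dissipated. Returns the number of toppling events (avalanche size)."""
    toppled = 0
    for _ in range(10_000):  # safety cap for the sketch
        active = False
        for i, h in enumerate(heights):
            if h >= threshold:
                active = True
                toppled += 1
                moved = rng.randint(threshold, h)  # stochastic toppling size
                heights[i] -= moved
                for _ in range(moved):             # each grain picks a side
                    j = i + rng.choice((-1, 1))
                    if 0 <= j < len(heights):
                        heights[j] += 1            # else: lost at the boundary
        if not active:
            break
    return toppled

pile = [0, 1, 3, 1, 0]
events = relax(pile)
print(pile, events)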
Abstract: The Federal Railroad Administration (FRA)'s Web Based Accident Prediction System (WBAPS) is used by federal, state, and local agencies to get a preliminary idea of safety at a rail-highway grade crossing. It is an interactive and user-friendly tool used to make funding decisions. WBAPS is almost three decades old and involves a three-step approach, making it difficult to interpret the contribution of the variables included in the model. It also does not directly account for regional/local developments and technological advancements pertaining to signals and signs implemented at rail-highway grade crossings. Further, the characteristics of a rail-highway grade crossing vary by track class, which is not explicitly considered by WBAPS. This research therefore examines and develops a method and models to estimate crashes at rail-highway grade crossings by track class using regional/local level data. The method and models, developed for each track class as well as for all track classes considered together, are based on data for the state of North Carolina. Linear models, as well as count models based on Poisson and Negative Binomial (NB) distributions, were tested for applicability. Negative binomial models were found to be the best fit for the data used in this research. Models for each track class have better goodness-of-fit statistics compared to the model considering data for all track classes together. This is primarily because traffic, design, and operational characteristics at rail-highway grade crossings differ for each track class. The findings from the statistical models in this research are supported by model validation.
Abstract: Pneumonia is an acute lung infection that has caused many fatalities globally. Radiologists often employ chest X-rays to identify pneumonia since they are presently the most effective imaging method for this purpose. Computer-aided diagnosis of pneumonia using deep learning techniques is widely used due to its effectiveness and performance. In the proposed method, the Synthetic Minority Oversampling Technique (SMOTE) approach is used to eliminate the class imbalance in the X-ray dataset. To compensate for the paucity of accessible data, pre-trained transfer learning is used, and an ensemble Convolutional Neural Network (CNN) model is developed. The ensemble model consists of all possible combinations of the MobileNetV2, Visual Geometry Group (VGG16), and DenseNet169 models. MobileNetV2 and DenseNet169 performed well as single-classifier models, with an accuracy of 94%, while the ensemble model (MobileNetV2 + DenseNet169) achieved an accuracy of 96.9%. Using the synchronous data-parallel model in distributed TensorFlow, the training process accelerated performance by 98.6% and outperformed other conventional approaches.
Abstract: The presented research aims to design a new prevention class (P) in the HIV nonlinear system, i.e., the HIPV model. The numerical treatment of the newly formulated HIPV model is then handled using the strength of stochastic-procedure-based numerical computing schemes, exploiting the artificial neural network (ANN) modeling legacy together with the optimization competence of a hybrid of global and local search schemes via genetic algorithms (GAs) and the active-set approach (ASA), i.e., GA-ASA. The optimization performances through GA-ASA are assessed by presenting an error-based fitness function designed for all the classes of the HIPV model and its corresponding initial conditions, represented by nonlinear systems of ODEs. To check the exactness of the proposed stochastic scheme, the obtained results are compared with Adams numerical results. For the convergence measures, learning curves are presented based on different contact rate values. Moreover, the statistical performances through different operators indicate the stability and reliability of the proposed stochastic scheme for solving the novel HIPV model.
Abstract: Stop frequency models, as one of the elements of activity-based models, represent an important part of travel behavior. Unobserved heterogeneity across travelers should be taken into consideration to prevent biasedness and inconsistency in the estimated parameters of stop frequency models. Additionally, previous studies on stop frequency have mostly been done in larger metropolitan areas, and less attention has been paid to areas with smaller populations. This study addresses these gaps by using 2012 travel data from a medium-sized U.S. urban area, with the work tour as the case study. Stops in the work tour were classified into three groups: the outbound leg, the work-based subtour, and the inbound leg of the commute. Latent class Poisson regression models were used to analyze the data. The results indicate the presence of heterogeneity across commuters. Using latent class models significantly improves the predictive power of the models compared to regular one-class Poisson regression models. In contrast to one-class Poisson models, gender becomes insignificant in predicting the number of tours when unobserved heterogeneity is accounted for. Commuters make more stops on their work-based subtour when the employment density of service-related occupations increases in their work zone, but the employment density of retail employment does not significantly contribute to the stop-making likelihood of commuters. Additionally, an increase in the number of work tours was associated with fewer stops on the inbound leg of the commute. The results of this study suggest considering unobserved heterogeneity in stop frequency models and help transportation agencies and policy makers make better inferences from such models.
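The latent-class Poisson idea above amounts to a mixture distribution over stop counts: each latent class has its own Poisson rate, and the observed probability is the class-weighted average. A minimal sketch with hypothetical class weights and rates (not the study's estimates):

```python
from math import exp, factorial

def poisson_pmf(y, lam):
    """P(Y = y) for a Poisson distribution with rate lam."""
    return exp(-lam) * lam ** y / factorial(y)

def mixture_pmf(y, weights, rates):
    """Stop-count probability under a latent-class Poisson mixture:
    each latent class c has weight pi_c and its own Poisson rate lam_c."""
    return sum(pi * poisson_pmf(y, lam) for pi, lam in zip(weights, rates))

# Hypothetical two-class commuter population: 60% make few stops
# (rate 0.5 per tour), 40% make many (rate 3.0 per tour).
weights, rates = [0.6, 0.4], [0.5, 3.0]
p_zero = mixture_pmf(0, weights, rates)
total = sum(mixture_pmf(y, weights, rates) for y in range(50))
print(round(p_zero, 4), round(total, 6))  # 0.3838 1.0
```

Because the two classes pull the distribution in different directions, the mixture can show the overdispersion (variance exceeding the mean) that a single one-class Poisson model cannot capture, which is the statistical motivation for the latent-class specification.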
Abstract: Improving college students' listening and speaking ability is a very important part of college English teaching. Based on foreign and domestic studies on how to improve students' listening and speaking ability, this paper explores the 1 + 1 model in the listening and speaking class, which divides the listening and speaking class into two parts: a small speaking class combined with students' online autonomous learning. Through a one-year experiment and study on two classes, although there was no significant difference between the performances of these classes, this study has shed some light on how to vary teaching methods, how to improve class efficiency and students' autonomy in learning, and how to build a new assessment system. Further studies could be made later based on this study.
Abstract: With the development of computer technology, the way languages are taught has experienced great changes. In the past, the methods of language teaching mainly focused on the direct method, grammar-translation, the audio-lingual method, the structural approach, communicative language teaching, community language learning, task-based language learning, etc. These methods started and developed with the need for and application of certain theories. Now the computer is widely used in every field and the Internet is a compulsory part of daily life, so the ways of teaching languages are changing accordingly. This paper analyzes a new way of language teaching: the flipped class model. The first part is a general introduction to the flipped class model, and the second part is the application of the flipped class model to a translation course.