Local and parallel finite element algorithms based on two-grid discretization for the Navier-Stokes equations in two dimensions are presented. The algorithms are built on a coarse finite element space on the global domain and a fine finite element space on a subdomain. The local algorithm consists of finding the solution of the given nonlinear problem in the coarse finite element space and the solution of a linear problem in the fine finite element space, and then dropping the coarse solution in the region near the boundary. The parallel algorithms are obtained by overlapping domain decomposition. This paper analyzes the errors of these algorithms and derives error estimates that are better than those of the standard finite element method. Numerical experiments are also given. Analysis and comparison of the results show that these algorithms are correct and highly efficient.
This study discusses the generalized Rayleigh quotient and high-efficiency finite element discretization schemes. The main results are as follows: 1) The Rayleigh quotient acceleration technique is extended to nonselfadjoint problems. Generalized Rayleigh quotients in operator form and weak form are defined, and the basic relationship between an approximate eigenfunction and its generalized Rayleigh quotient is established. 2) New error estimates are obtained by replacing the ascent of the exact eigenvalue with the ascent of the finite element approximate eigenvalue. 3) Based on the work of Xu Jinchao and Zhou Aihui, finite element two-grid discretization schemes are established for nonselfadjoint elliptic differential operator eigenvalue problems; these schemes apply to both conforming and nonconforming finite elements. The efficiency of the schemes is demonstrated by both theoretical analysis and numerical experiments. 4) The iterated Galerkin method, the interpolated correction method and gradient recovery for selfadjoint elliptic differential operator eigenvalue problems are extended to nonselfadjoint elliptic differential operator eigenvalue problems.
In this paper, we extend the work of Brenner and Sung [Math. Comp. 59, 321-338 (1992)] and present a regularity estimate for the elastic equations in concave domains. Based on the regularity estimate, we prove that the constants in the error estimates of the nonconforming Crouzeix-Raviart element approximations for the elastic equations/eigenvalue problem are independent of the Lamé constant, which means the nonconforming Crouzeix-Raviart element approximations are locking-free. We also establish two kinds of two-grid discretization schemes for the elastic eigenvalue problem and show that when the mesh sizes of the coarse and fine grids satisfy a certain relationship, the resulting solutions achieve optimal accuracy. Numerical examples are provided to show the efficiency of the two-grid schemes for the elastic eigenvalue problem.
This paper extends the two-grid discretization scheme of the conforming finite elements proposed by Xu and Zhou (Math. Comput., 70 (2001), pp. 17-25) to nonconforming finite elements for eigenvalue problems. In particular, two two-grid discretization schemes based on the Rayleigh quotient technique are proposed. With these new schemes, the solution of an eigenvalue problem on a fine mesh is reduced to the solution of an eigenvalue problem on a much coarser mesh together with the solution of a linear algebraic system on the fine mesh, and the resulting solution still maintains asymptotically optimal accuracy. Compared with the two-grid discretization scheme of the conforming finite elements, the main advantages of the new schemes are twofold when the mesh size is small enough. First, lower bounds of the exact eigenvalues can be obtained from the new two-grid schemes. Second, the first eigenvalue given by the new schemes has much better accuracy than that obtained by solving the eigenvalue problem on the fine mesh directly.
In this paper, we study an efficient scheme for nonlinear reaction-diffusion equations discretized by mixed finite element methods. We are mainly concerned with the case when the pressure coefficients and source terms are nonlinear. To linearize the nonlinear mixed equations, we use a two-grid algorithm: we first solve the nonlinear equations on the coarse grid, and then, on the fine mesh, we solve a linearized problem with a single Newton iteration. It is shown that the algorithm achieves asymptotically optimal approximation as long as the mesh sizes satisfy H = O(h^(1/2)). As a result, solving such a large class of nonlinear equations is not much more difficult than solving one linearized system.
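The coarse-solve-then-one-Newton-step structure described above can be sketched on a toy problem. The following is a minimal illustration under stated assumptions, not the paper's mixed finite element method: it applies the same two-grid idea to the 1D nonlinear boundary value problem -u'' + u^3 = f with a standard finite difference discretization, choosing H = h^(1/2).

```python
import numpy as np

def fd_system(n):
    # second-order finite differences on [0, 1] with zero boundary values
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]          # interior nodes
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2
    return x, A

def newton_solve(n, f, tol=1e-12, maxit=50):
    # full Newton iteration for A u + u^3 = f(x)
    x, A = fd_system(n)
    u = np.zeros(n - 1)
    for _ in range(maxit):
        F = A @ u + u**3 - f(x)
        if np.linalg.norm(F, np.inf) < tol:
            break
        u -= np.linalg.solve(A + np.diag(3.0 * u**2), F)
    return x, u

def two_grid(nH, nh, f):
    # 1) solve the nonlinear problem on the coarse grid (mesh size H = 1/nH)
    xH, uH = newton_solve(nH, f)
    # 2) interpolate the coarse solution to the fine grid
    xh, A = fd_system(nh)
    u0 = np.interp(xh, np.r_[0.0, xH, 1.0], np.r_[0.0, uH, 0.0])
    # 3) a single Newton (linearized) solve on the fine grid
    F = A @ u0 + u0**3 - f(xh)
    return xh, u0 - np.linalg.solve(A + np.diag(3.0 * u0**2), F)

u_true = lambda x: np.sin(np.pi * x)                 # manufactured solution
f = lambda x: np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**3

nh, nH = 256, 16                                     # H = 1/16 = (1/256)^(1/2) = h^(1/2)
xh, u_tg = two_grid(nH, nh, f)
_, u_fine = newton_solve(nh, f)
err_tg = np.abs(u_tg - u_true(xh)).max()             # two-grid error
err_fine = np.abs(u_fine - u_true(xh)).max()         # direct fine-grid error
print(err_tg, err_fine)
```

With H of order h^(1/2), the two-grid error stays comparable to the error of the direct fine-grid nonlinear solve, at the cost of only one linear solve on the fine grid.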
Precast concrete pavements (PCPs) represent an innovative solution in the construction industry, addressing the need for rapid, intelligent, and low-carbon pavement technologies that significantly reduce construction time and environmental impact. However, the integration of prefabricated technology in pavement surface and base layers lacks systematic classification and understanding. This paper aims to fill this gap by introducing a detailed analysis of discretization and assembly connection technology for cement concrete pavement (CCP) structures. Through a comprehensive review of domestic and international literature, the study classifies prefabricated pavement technology based on discrete assembly structural layers and presents the following conclusions: (i) surface layer discrete units are categorized into bottom plates, top plates, plate-rod separated assemblies, and prestressed connections, with optimal material compositions identified to enhance mechanical properties; (ii) base layer discrete units include block-type, plate-type, and beam-type elements, highlighting their contributions to sustainability by incorporating recycled materials; (iii) planar assembly connection types are assessed and ranked by load transfer efficiency, with specific dimensions provided for optimal performance; and (iv) vertical assembly connections are defined by their leveling and sealing layers, suitable for both new constructions and repairs of existing roads. The insights gained from this review not only clarify the distinctions between the various structural layers but also provide practical guidelines for enhancing the design and implementation of PCPs. This work contributes to advancing sustainable and resilient road construction practices, making it a significant reference for researchers and practitioners in the field.
The discretization of random fields is the first and most important step in the stochastic analysis of engineering structures with spatially dependent random parameters. The essential step of discretization is solving the Fredholm integral equation to obtain the eigenvalues and eigenfunctions of the covariance functions of the random fields. The collocation method, which requires fewer integral operations, accomplishes this task more efficiently than the time-consuming Galerkin method, and it is more suitable for engineering applications with complex geometries and a large number of elements. With the help of isogeometric analysis, which preserves exact geometry in analysis, the isogeometric collocation method can efficiently achieve results with sufficient accuracy. An adaptive moment abscissa is proposed to calculate the coordinates of the collocation points and further improve the accuracy of the collocation method. The adaptive moment abscissae lead to more accurate results than the classical Greville abscissae when the moment parameter is optimized with intelligent algorithms. Numerical and engineering examples illustrate the advantages of the proposed isogeometric collocation method based on the adaptive moment abscissae over existing methods in terms of accuracy and efficiency.
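The core computation described above, solving the Fredholm integral eigenproblem for a covariance kernel, can be illustrated with a simple collocation (Nyström-type) scheme. The sketch below is a generic midpoint-rule version with an assumed exponential covariance and correlation length, not the paper's isogeometric or adaptive-abscissa method:

```python
import numpy as np

# Fredholm eigenproblem  ∫_0^1 C(s, t) φ(t) dt = λ φ(s)
# for the exponential covariance C(s, t) = exp(-|s - t| / ell).
n, ell = 400, 0.5                                   # grid size, correlation length (assumed)
t = (np.arange(n) + 0.5) / n                        # midpoint collocation points
w = 1.0 / n                                         # uniform quadrature weight
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)
lam = np.linalg.eigvalsh(C * w)[::-1]               # eigenvalues, descending
print(lam[:4])                                      # leading Karhunen-Loève eigenvalues
print(lam.sum())                                    # trace check: ∫ C(t, t) dt = 1
```

Replacing the quadrature/collocation points (here midpoints) with Greville or moment abscissae of a spline basis is exactly the degree of freedom the adaptive scheme in the paper exploits.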
In contrast to cyclic polymers with ring-like backbones, side-chain cyclization is another intriguing structural feature that has not been extensively studied. In this study, a library of orthogonally protected monomers featuring monocyclic, dicyclic, or tricyclic pendant motifs was designed and prepared based on malic acid derivatives. Polyesters with precise chemical structures and uniform chain lengths were prepared modularly through iterative growth. Meticulous control over the chemical details allows a close investigation of the topological effects on the polymer properties. Compared to their linear side-chain counterparts, the presence of cyclic pendant groups has a significant impact on chain conformation, leading to a reduction in hydrodynamic volume and an enhancement of the glass transition temperature. These results underscore the potential of tailoring polymer properties through rational engineering of side-chain topology.
The strong convergence of an explicit fully discrete scheme is investigated for the stochastic Burgers-Huxley equation driven by additive space-time white noise, which possesses both Burgers-type and cubic nonlinearities. To discretize the continuous problem in space, we utilize a spectral Galerkin method. Subsequently, we introduce a nonlinear-tamed exponential integrator scheme, resulting in a fully discrete scheme. Within the framework of semigroup theory, this study provides precise estimates of the Sobolev regularity, the L^∞ regularity in space, and the Hölder continuity in time for the mild solution, as well as for its semi-discrete and fully discrete approximations. Building upon these results, we establish moment boundedness for the numerical solution and obtain strong convergence rates in both the spatial and temporal dimensions. A numerical example is presented to validate the theoretical findings.
Fractures are typically characterized by roughness that significantly affects the mechanical and hydraulic characteristics of reservoirs. However, hydraulic fracturing mechanisms under the influence of fracture morphology remain largely unexplored. Leveraging the advantages of the finite-discrete element method (FDEM) for explicitly simulating fracture propagation and the strengths of the unified pipe model (UPM) for efficiently modeling dual-permeability seepage, we propose a new hydromechanical (HM) coupling approach for modeling hydraulic fracturing. Validated against benchmark examples, the proposed FDEM-UPM model is further augmented by incorporating a Fourier-based methodology for reconstructing non-planar fractures, enabling quantitative analysis of hydraulic fracturing behavior within rough discrete fracture networks (DFNs). The FDEM-UPM model demonstrates computational advantages in accurately capturing transient hydraulic seepage phenomena, while the asynchronous time-stepping schemes between the hydraulic and mechanical analyses substantially enhance computational efficiency without compromising accuracy. Our results show that fracture morphology can affect both the macroscopic fracture networks and the microscopic interaction types between hydraulic fractures (HFs) and natural fractures (NFs). In an isotropic stress field, the initiation azimuth, propagation direction and microcracking mechanism are significantly influenced by fracture roughness. In an anisotropic stress field, HFs invariably propagate parallel to the direction of the maximum principal stress, reducing the overall complexity of the stimulated fracture networks. Additionally, stress concentration and perturbation attributed to fracture morphology tend to diminish as the leak-off increases, while the breakdown and propagation pressures remain insensitive to fracture morphology. These findings provide new insights into the hydraulic fracturing mechanisms of fractured reservoirs containing complex rough DFNs.
This paper presents a procedure for assessing the reinforcement force of geosynthetics required to maintain the dynamic stability of a steep soil slope. The procedure combines the discretization technique with the kinematic analysis of plasticity theory, i.e. a discretization-based kinematic analysis. The discretization technique allows the analyzed slope to be discretized into various components and a kinematically admissible failure mechanism to be generated based on an associated flow rule. Accordingly, variations in soil properties, including soil cohesion, internal friction angle and unit weight, are accounted for with ease, while the conventional kinematic analysis fails to consider such changes in soil properties. The spatio-temporal effects of dynamic accelerations, represented by primary and shear seismic waves, are considered using the pseudo-dynamic approach. In the presence of geosynthetic reinforcement, tensile failure is discussed, provided that the geosynthetics are installed with sufficient length. Equating the total rate of work done by external forces to the internal rate of work yields the upper bound solution for the required reinforcement force, below which the slope fails. The reinforcement force is sought by optimizing the objective function with respect to the independent variables, and is presented in normalized form. The pseudo-static analysis is a special case that is readily recovered from the pseudo-dynamic analysis. Comparisons of the pseudo-static and pseudo-dynamic solutions calculated in this study are highlighted. Although the pseudo-static approach yields a conservative solution, its ability to give a reasonable result is substantiated for steep slopes. To provide a more meaningful stability analysis, the pseudo-dynamic approach is recommended because it accounts for the spatio-temporal effects of the earthquake input.
A new method for the discretization of continuous attributes is put forward to overcome the limitation of traditional rough sets, which cannot deal with continuous attributes. The method is based on an improved algorithm for producing candidate cut points and a reduction algorithm based on variable precision rough information entropy. While guaranteeing the consistency of the decision system, the method reduces the number of cut points and improves the efficiency of reduction. By adopting variable precision rough information entropy as the measure criterion, it has good tolerance to noise. Experiments show that the algorithm yields satisfactory reduction results.
Rough set theory plays an important role in knowledge discovery but cannot deal with continuous attributes, so discretization is a problem that cannot be neglected. Discretization of decision systems in rough set theory has some particular characteristics: consistency must be satisfied, and the set of cuts used for discretization should be as small as possible. The consistent and minimal discretization problem is NP-complete. In this paper, an immune algorithm for the problem is proposed, and its correctness and effectiveness are shown in experiments. The discretization method presented in this paper can also be used as a data pre-processing step for symbolic knowledge discovery or machine learning methods other than rough set theory.
How to extract knowledge from a decision table based on rough set theory is being widely studied, and a key problem is how to discretize a decision table with continuous attributes. To obtain more reasonable discretization results, a discretization algorithm is proposed that arranges half-global discretization based on the correlation coefficient of each continuous attribute while considering the uniqueness of rough set theory. When choosing heuristic information, stability is combined with rough entropy. In terms of stability, the possibility of classifying objects belonging to a certain sub-interval of a given attribute into neighboring sub-intervals is minimized; by doing this, rational discrete intervals can be determined. Rough entropy is employed to decide the optimal cut points while guaranteeing the consistency of the decision table after discretization. The idea of the algorithm is illustrated on the Iris data, and experiments are reported comparing the outcomes on four discretized datasets computed by the proposed algorithm and by four other typical discretization algorithms. Classification rules are then deduced and summarized through rough set based classifiers. The results show that the proposed discretization algorithm generates optimal classification accuracy while minimizing the number of discrete intervals, and it displays its superiority especially when dealing with decision tables having a large number of attributes.
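To give a flavor of entropy-guided cut-point selection, the sketch below is a generic toy, not the stability-plus-rough-entropy heuristic of the paper: candidate cuts are placed at midpoints between adjacent attribute values whose decision classes differ, and the cut minimizing the weighted entropy of the induced intervals is chosen.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a class-label multiset, in bits
    n = len(labels)
    return sum((c / n) * math.log2(n / c) for c in Counter(labels).values())

def candidate_cuts(values, labels):
    # midpoints between adjacent distinct values whose class labels differ
    pairs = sorted(zip(values, labels))
    return [(v1 + v2) / 2
            for (v1, c1), (v2, c2) in zip(pairs, pairs[1:])
            if v1 != v2 and c1 != c2]

def best_cut(values, labels):
    # pick the cut minimizing the weighted entropy of the two induced intervals
    n = len(values)
    best, best_score = None, float("inf")
    for c in candidate_cuts(values, labels):
        left = [d for v, d in zip(values, labels) if v <= c]
        right = [d for v, d in zip(values, labels) if v > c]
        score = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if score < best_score:
            best, best_score = c, score
    return best, best_score

# hypothetical single-attribute decision table
values = [1.0, 1.2, 1.4, 3.0, 3.2, 3.5]
labels = ["a", "a", "a", "b", "b", "b"]
cut, score = best_cut(values, labels)
print(cut, score)
```

Here the single candidate cut at 2.2 separates the two classes perfectly, so the weighted entropy drops to zero; real discretizers apply such a criterion recursively or globally across attributes.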
A computational technique is proposed for the Galerkin discretization of axially moving strings with geometric nonlinearity. The Galerkin discretization is based on the eigenfunctions of stationary strings. The discretized equations are simplified by regrouping nonlinear terms to reduce the computational work, and the scheme can be easily implemented in practical programming. Numerical results show the effectiveness of the technique. The results also highlight a feature of the Galerkin discretization of gyroscopic continua: the number of terms in the Galerkin discretization should be even. The technique is generalized from elastic strings to viscoelastic strings.
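The even-term-number observation can be made concrete by inspecting the gyroscopic coupling integrals. In the sketch below (a generic illustration with unnormalized stationary-string modes sin(nπx), the normalization being an assumption), the matrix G[m, n] = ∫₀¹ sin(mπx) · nπ cos(nπx) dx vanishes whenever m + n is even, so modes couple only in even-odd pairs; truncating at an odd number of terms leaves the last retained mode without its coupling partner.

```python
import numpy as np

# G[m, n] = ∫_0^1 sin(mπx) · nπ cos(nπx) dx, the integrals that arise when the
# mixed-derivative (Coriolis) term of an axially moving string is projected
# onto the stationary-string eigenfunctions sin(nπx).
N, M = 4, 20000
x = (np.arange(M) + 0.5) / M                       # midpoint quadrature nodes
G = np.empty((N, N))
for m in range(1, N + 1):
    for n in range(1, N + 1):
        G[m - 1, n - 1] = np.mean(np.sin(m * np.pi * x) * n * np.pi * np.cos(n * np.pi * x))
print(np.round(G, 3))
# entries with m + n even (including the diagonal) vanish: the sine modes
# couple only across even-odd pairs, a checkerboard pattern
```

The checkerboard structure is why an even truncation is the natural choice for gyroscopic continua: each retained odd mode then has its even partner in the basis.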
Gaze estimation has become an important field of image and information processing. Estimating gaze from full-face images using convolutional neural networks (CNNs) has achieved fine accuracy. However, estimating gaze from eye images is very challenging because eye images contain less information than full-face images, yet it remains important since eye-image-based methods have wider applications. In this paper, we propose the discretization-gaze network (DGaze-Net) to improve monocular three-dimensional (3D) gaze estimation accuracy through feature discretization and an attention mechanism. The gaze predictor of DGaze-Net is optimized based on feature discretization: by discretizing the gaze angle into K bins, a classification constraint is added to the gaze predictor, so the gaze angle is first subjected to a binned classification before being regressed against the real gaze angle, which improves gaze estimation accuracy. In addition, the attention mechanism is applied to the backbone to enhance its ability to extract eye features related to gaze. The proposed method is validated on three gaze datasets and achieves encouraging gaze estimation accuracy.
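The binning idea can be sketched independently of the network. Below, a continuous gaze angle is discretized into K bins to form an auxiliary classification target, and a soft-argmax over bin probabilities decodes a continuous angle back; the bin count and angle range are illustrative assumptions, not values from the paper.

```python
import numpy as np

K = 18                                   # number of bins (assumed, not from the paper)
lo, hi = -90.0, 90.0                     # assumed gaze-angle range in degrees
edges = np.linspace(lo, hi, K + 1)       # bin boundaries
centers = 0.5 * (edges[:-1] + edges[1:])

def angle_to_bin(theta):
    # classification target for the auxiliary binned loss
    return int(np.clip(np.digitize(theta, edges) - 1, 0, K - 1))

def decode(logits):
    # soft-argmax: expected bin center under the softmax distribution
    p = np.exp(logits - np.max(logits))
    p /= p.sum()
    return float(p @ centers)

theta = 37.0
b = angle_to_bin(theta)                  # class label used by the binned loss
est = decode(-2.0 * np.abs(centers - theta))   # a sharply peaked mock logit vector
print(b, centers[b], round(est, 3))
```

In training, the classification loss on b constrains the predictor coarsely while the regression loss refines the continuous angle; the soft-argmax shows how a bin distribution still yields a continuous estimate.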
The commonly used discretization approaches for distributed hydrological models can be broadly categorized into four types, based on the nature of the discrete components: regular meshes, Triangular Irregular Networks (TINs), Representative Elementary Watersheds (REWs) and Hydrologic Response Units (HRUs). In this paper, a new discretization approach for landforms that have similar hydrologic properties is developed and discussed for the Integrated Hydrologic Model (IHM), which combines the simulation of surface and groundwater processes, accounting for the interaction between the two systems. The approach used in the IHM is to disaggregate basin parameters into discrete landforms that have similar hydrologic properties. These landforms may be impervious areas, related areas, areas with high or low clay or organic fractions, areas with significantly different depths to the water table, and areas with different types of land cover or different land uses. Incorporating discrete landforms within basins allows significant distributed-parameter analysis but requires an efficient computational structure. The IHM integration represents a new approach to interpreting fluxes across the model interface and storages near the interface for transfer to the appropriate model component, accounting for the disparate discretization while rigidly maintaining mass conservation. The discretization approaches employed in the IHM provide ideas and insights that should be helpful to researchers working on integrated models of surface-groundwater interaction.
AIM: To evaluate the use of short-duration transient visual evoked potentials (VEP) and color reflectivity discretization analysis (CORDA) in glaucomatous eyes, eyes suspected of having glaucoma, and healthy eyes. METHODS: The study included 136 eyes from 136 subjects: 49 eyes with glaucoma, 45 glaucoma suspect eyes, and 42 healthy eyes. Subjects underwent Humphrey visual field (VF) testing and VEP testing, as well as peripapillary retinal nerve fiber layer optical coherence tomography imaging with post-acquisition CORDA applied. Statistical analysis was performed using means and ranges, ANOVA, post-hoc comparisons with Tukey's adjustment, Fisher's exact test, area under the curve (AUC), and Spearman correlation coefficients. RESULTS: Parameters from VEP and CORDA correlated significantly with VF mean deviation (MD) (P<0.05). In distinguishing glaucomatous eyes from controls, VEP demonstrated AUC values of 0.64-0.75 for amplitude and 0.67-0.81 for latency. The CORDA HR1 parameter was highly discriminative for glaucomatous eyes vs controls (AUC=0.94). CONCLUSION: Significant correlations are found between MD and parameters of short-duration transient VEP and CORDA, diagnostic modalities which warrant further consideration in identifying glaucoma characteristics.
The selection of a suitable discretization method (DM) for discretizing spatially continuous variables (SCVs) is critical in ML-based natural hazard susceptibility assessment. However, few studies consider the influence of the selected DM or how to efficiently select a suitable DM for each SCV. These issues are addressed in this study. The information loss rate (ILR), an index based on information entropy, appears usable for selecting the optimal DM for each SCV. However, the ILR fails to show the actual influence of discretization because it only considers the total amount of information by which the discretized variable departs from the original SCV. To address this issue, we propose a new index, the information change rate (ICR), that focuses on the amount of information changed by the discretization at each cell, enabling identification of the optimal DM. We develop a case study with Random Forest (training/testing ratio of 7:3) to assess flood susceptibility in Wanan County, China. Area-under-the-curve-based and susceptibility-map-based approaches are presented to compare the ILR and ICR. The results show that the ICR-based optimal DMs are more rational than the ILR-based ones in both cases. Moreover, we observed that the ILR values are unnaturally small (<1%), whereas the ICR values are clearly more in line with general recognition (usually 10%-30%). These results demonstrate the superiority of the ICR. We consider that this study fills existing research gaps and improves ML-based natural hazard susceptibility assessments.
Discretization based on rough set theory aims to seek the minimum possible number of cuts without weakening the indiscernibility of the original decision system. Optimization of discretization is an NP-complete problem, and the genetic algorithm is an appropriate method to solve it. To achieve optimal discretization, the choice of the initial cut set is discussed first, because a good initial cut set can enhance the efficiency and quality of the subsequent algorithm. Second, an effective heuristic genetic algorithm for the discretization of continuous attributes of the decision table is proposed, which takes the significance of cut points as heuristic information and introduces a novel operator to maintain the indiscernibility of the original decision system and enhance the local search ability of the algorithm. The algorithm therefore converges quickly and has global optimizing ability. Finally, the effectiveness of the algorithm is validated through experiments.
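A minimal genetic-algorithm sketch for cut-set selection follows; this is a generic toy under assumed data, without the paper's significance-based heuristic or its novel indiscernibility-preserving operator. Individuals are bit masks over candidate cuts, and fitness rewards consistency first and fewer cuts second.

```python
import random

# Toy decision table: (continuous attribute value, decision class), hypothetical.
data = [(0.1, "A"), (0.3, "A"), (0.5, "B"), (0.7, "B"), (0.9, "A"), (1.1, "A")]
cuts = [0.4, 0.8]        # candidate cuts at class-changing midpoints

def consistent(mask):
    # consistent iff no interval contains objects of different classes
    chosen = [c for c, bit in zip(cuts, mask) if bit]
    seen = {}
    for v, d in data:
        key = sum(v > c for c in chosen)           # interval index of object
        if seen.setdefault(key, d) != d:
            return False
    return True

def fitness(mask):
    # consistency is mandatory; among consistent masks, fewer cuts score higher
    return (len(cuts) - sum(mask)) if consistent(mask) else -1

random.seed(0)
pop = [[random.randint(0, 1) for _ in cuts] for _ in range(8)]
for _ in range(20):                                # generations
    pop.sort(key=fitness, reverse=True)
    pop = pop[:4]                                  # elitist selection
    children = []
    for parent in pop:
        child = parent.copy()
        child[random.randrange(len(cuts))] ^= 1    # single-point mutation
        children.append(child)
    pop += children
best = max(pop, key=fitness)
print(best, fitness(best))
```

On this toy table both cuts are needed for consistency, so the search settles on the full mask; on real tables the second fitness term is what drives the cut count down toward the minimal consistent set.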
Funding: Project supported by the National Natural Science Foundation of China (No. 10371096)
Funding: Supported by the National Natural Science Foundation of China (Grant No. 10761003) and the Governor's Special Foundation of Guizhou Province for Outstanding Scientific Education Personnel (Grant No. [2005]155)
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11761022)
Funding: Supported by the National Natural Science Foundation of China (No. 10761003) and the Foundation of Guizhou Province Scientific Research for Senior Personnel, China
Funding: National Natural Science Foundation of China (11271145), the Foundation for Talent Introduction of Guangdong Provincial University, the Specialized Research Fund for the Doctoral Program of Higher Education (20114407110009), and the Project of the Department of Education of Guangdong Province (2012KJCX0036).
Abstract: In this paper, we study an efficient scheme for nonlinear reaction-diffusion equations discretized by mixed finite element methods. We mainly consider the case where the pressure coefficients and source terms are nonlinear. To linearize the nonlinear mixed equations, we use a two-grid algorithm: we first solve the nonlinear equations on the coarse grid; then, on the fine mesh, we solve a linearized problem using one Newton iteration. It is shown that the algorithm achieves asymptotically optimal approximation as long as the mesh sizes satisfy H = O(h^{1/2}). As a result, solving such a large class of nonlinear equations is not much more difficult than solving one linearized system.
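The coarse-solve/interpolate/one-Newton-step recipe above can be sketched on a much simpler setting than the paper's mixed-FEM framework. The code below (an illustrative analogue, not the authors' scheme) applies it to the 1D model problem -u'' + u^3 = f with a manufactured solution u(x) = sin(pi x): Newton iteration to convergence on a coarse grid with H = 1/8, linear interpolation to a fine grid with h = H^2 = 1/64, and a single Newton step on the fine grid. All grid sizes and iteration counts are assumptions.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0]*n, [0.0]*n
    cp[0] = c[0]/b[0]; dp[0] = d[0]/b[0]
    for i in range(1, n):
        m = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/m
        dp[i] = (d[i] - a[i]*dp[i-1])/m
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

def newton_step(u, h, f):
    """One Newton iteration for -u'' + u^3 = f with zero boundary values."""
    n = len(u)
    F = []
    for i in range(n):
        um = u[i-1] if i else 0.0
        up = u[i+1] if i < n-1 else 0.0
        F.append((-um + 2*u[i] - up)/h**2 + u[i]**3 - f[i])
    a = [-1.0/h**2]*n
    b = [2.0/h**2 + 3*u[i]**2 for i in range(n)]   # Jacobian diagonal
    c = [-1.0/h**2]*n
    delta = thomas(a, b, c, [-Fi for Fi in F])
    return [u[i] + delta[i] for i in range(n)]

def rhs(n, h):
    """Manufactured source so that u(x) = sin(pi x) solves the problem."""
    return [math.pi**2*math.sin(math.pi*(i+1)*h) + math.sin(math.pi*(i+1)*h)**3
            for i in range(n)]

# coarse grid: solve the full nonlinear problem by Newton iteration
NH, H = 7, 1.0/8
fH = rhs(NH, H)
uH = [0.0]*NH
for _ in range(12):
    uH = newton_step(uH, H, fH)

# prolongate to the fine grid h = H^2 = 1/64 by linear interpolation
Nh, h = 63, 1.0/64
coarse = [0.0] + uH + [0.0]
uI = []
for j in range(1, Nh + 1):
    x = j*h/H
    k = min(int(x), NH)
    t = x - k
    uI.append((1 - t)*coarse[k] + t*coarse[k + 1])

# one Newton iteration on the fine grid, linearized at the interpolant
uh = newton_step(uI, h, rhs(Nh, h))

err = max(abs(uh[j-1] - math.sin(math.pi*j*h)) for j in range(1, Nh + 1))
print("max error after one fine-grid Newton step:", err)
```

The single fine-grid Newton step recovers near fine-grid accuracy because the quadratic convergence of Newton's method squares the coarse error, mirroring the H = O(h^{1/2}) balance stated in the abstract.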
Funding: Supported by the Research Program of the Wuhan Building Energy Efficiency Office (grant number 202331).
Abstract: Precast concrete pavements (PCPs) represent an innovative solution in the construction industry, addressing the need for rapid, intelligent, and low-carbon pavement technologies that significantly reduce construction time and environmental impact. However, the integration of prefabricated technology in pavement surface and base layers lacks systematic classification and understanding. This paper aims to fill this gap by introducing a detailed analysis of discretization and assembly connection technology for cement concrete pavement (CCP) structures. Through a comprehensive review of domestic and international literature, the study classifies prefabricated pavement technology based on discrete assembly structural layers and presents specific conclusions: (i) surface layer discrete units are categorized into bottom plates, top plates, plate-rod separated assemblies, and prestressed connections, with optimal material compositions identified to enhance mechanical properties; (ii) base layer discrete units include block-type, plate-type, and beam-type elements, highlighting their contributions to sustainability by incorporating recycled materials; (iii) planar assembly connection types are assessed and ranked by load transfer efficiency, with specific dimensions provided for optimal performance; and (iv) vertical assembly connections are defined by their leveling and sealing layers, suitable for both new constructions and repairs of existing roads. The insights gained from this review not only clarify the distinctions between various structural layers but also provide practical guidelines for enhancing the design and implementation of PCPs. This work contributes to advancing sustainable and resilient road construction practices, making it a significant reference for researchers and practitioners in the field.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U22A6001 and 52375273), the Major Project of Science and Technology Innovation 2030 (Grant No. 2021ZD0113100), and the Zhejiang Provincial Natural Science Foundation of China (Grant No. LZ24E050005).
Abstract: The discretization of random fields is the first and most important step in the stochastic analysis of engineering structures with spatially dependent random parameters. The essential step of discretization is solving the Fredholm integral equation to obtain the eigenvalues and eigenfunctions of the covariance functions of the random fields. The collocation method, which involves fewer integral operations, accomplishes this task more efficiently than the time-consuming Galerkin method, and it is more suitable for engineering applications with complex geometries and a large number of elements. With the help of isogeometric analysis, which preserves exact geometry in the analysis, the isogeometric collocation method can efficiently achieve results with sufficient accuracy. An adaptive moment abscissa is proposed to calculate the coordinates of the collocation points to further improve the accuracy of the collocation method. The adaptive moment abscissae lead to more accurate results than the classical Greville abscissae when the moment parameter is optimized with intelligent algorithms. Numerical and engineering examples illustrate the advantages of the proposed isogeometric collocation method based on the adaptive moment abscissae over existing methods in terms of accuracy and efficiency.
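For intuition only, the much-simplified sketch below solves the Fredholm eigenproblem for an assumed exponential covariance C(s,t) = exp(-|s-t|/l) on [0,1] by a plain Nyström/collocation discretization with uniform points and trapezoid weights (not the paper's isogeometric adaptive moment abscissae), extracting the dominant Karhunen-Loève eigenpair by power iteration. The kernel, domain, and point count are all assumptions.

```python
import math

def cov(s, t, ell=1.0):
    """Exponential covariance kernel with correlation length ell."""
    return math.exp(-abs(s - t)/ell)

n = 101
pts = [i/(n - 1) for i in range(n)]          # uniform collocation points on [0,1]
w = [1.0/(n - 1)]*n                          # trapezoid quadrature weights
w[0] = w[-1] = 0.5/(n - 1)

# Nystrom matrix K[i][j] = C(s_i, s_j) * w_j discretizes the integral operator
K = [[cov(pts[i], pts[j])*w[j] for j in range(n)] for i in range(n)]

# power iteration for the dominant eigenpair (max-norm normalization)
phi = [1.0]*n
lam1 = 0.0
for _ in range(200):
    v = [sum(K[i][j]*phi[j] for j in range(n)) for i in range(n)]
    lam1 = max(abs(x) for x in v)
    phi = [x/lam1 for x in v]

print("dominant KL eigenvalue ~", lam1)
```

Since the kernel has unit variance, the eigenvalues sum to the trace 1, and the dominant eigenvalue captures most of the field's variance; refining the collocation (or using better abscissae, as the paper proposes) sharpens the eigenpair estimates.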
Funding: Financially supported by the National Natural Science Foundation of China (No. 22273026), the Scientific Research Innovation Capability Support Project for Young Faculty (No. ZYGXQNJSKYCXNLZCXM-I15), the Basic and Applied Basic Research Foundation of Guangdong Province (2024A1515012401), the GJYC program of Guangzhou (No. 2024D03J0002), the China Postdoctoral Science Foundation (No. 2024M750938), and the Postdoctoral Fellowship Program of CPSF (No. GZC20240492).
Abstract: In contrast to cyclic polymers with ring-like backbones, side-chain cyclization is another intriguing structural feature that has not been extensively studied. In this study, a library of orthogonally protected monomers featuring monocyclic, dicyclic, or tricyclic pendant motifs was designed and prepared based on malic acid derivatives. Polyesters with precise chemical structures and uniform chain lengths were prepared modularly through iterative growth. Meticulous control over the chemical details allows for a close investigation of the topological effects on the polymer properties. Compared to their linear side-chain counterparts, the presence of cyclic pendant groups has a significant impact on chain conformation, leading to a reduction in hydrodynamic volume and an enhancement of the glass transition temperature. These results underscore the potential of tailoring polymer properties through rational engineering of side-chain topology.
Funding: Partially supported by the National Natural Science Foundation of China (Grant No. 12071073) and by the Jiangsu Provincial Scientific Research Center of Applied Mathematics (Grant No. BK20233002).
Abstract: The strong convergence of an explicit fully discrete scheme is investigated for the stochastic Burgers-Huxley equation driven by additive space-time white noise, which possesses both Burgers-type and cubic nonlinearities. To discretize the continuous problem in space, we utilize a spectral Galerkin method. Subsequently, we introduce a nonlinearity-tamed exponential integrator scheme, resulting in a fully discrete scheme. Within the framework of semigroup theory, this study provides precise estimates of the Sobolev regularity, the L^∞ regularity in space, and the Hölder continuity in time for the mild solution, as well as for its semi-discrete and fully discrete approximations. Building upon these results, we establish moment boundedness for the numerical solution and obtain strong convergence rates in both the spatial and temporal dimensions. A numerical example is presented to validate the theoretical findings.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52574103 and 42277150).
Abstract: Fractures are typically characterized by roughness that significantly affects the mechanical and hydraulic characteristics of reservoirs. However, hydraulic fracturing mechanisms under the influence of fracture morphology remain largely unexplored. Leveraging the advantages of the finite-discrete element method (FDEM) for explicitly simulating fracture propagation and the strengths of the unified pipe model (UPM) for efficiently modeling dual-permeability seepage, we propose a new hydromechanical (HM) coupling approach for modeling hydraulic fracturing. Validated against benchmark examples, the proposed FDEM-UPM model is further augmented by incorporating a Fourier-based methodology for reconstructing non-planar fractures, enabling quantitative analysis of hydraulic fracturing behavior within rough discrete fracture networks (DFNs). The FDEM-UPM model demonstrates computational advantages in accurately capturing transient hydraulic seepage phenomena, while asynchronous time-stepping schemes between the hydraulic and mechanical analyses substantially enhance computational efficiency without compromising accuracy. Our results show that fracture morphology can affect both the macroscopic fracture networks and the microscopic interaction types between hydraulic fractures (HFs) and natural fractures (NFs). In an isotropic stress field, the initiation azimuth, propagation direction, and microcracking mechanism are significantly influenced by fracture roughness. In an anisotropic stress field, HFs invariably propagate parallel to the direction of the maximum principal stress, reducing the overall complexity of the stimulated fracture networks. Additionally, the stress concentration and perturbation attributed to fracture morphology tend to be compromised as the leak-off increases, while the breakdown and propagation pressures remain insensitive to fracture morphology. These findings provide new insights into the hydraulic fracturing mechanisms of fractured reservoirs containing complex rough DFNs.
Funding: Financial support for the first author's PhD program by the President's Graduate Fellowship in Singapore.
Abstract: This paper presents a procedure for assessing the reinforcement force of geosynthetics required for maintaining the dynamic stability of a steep soil slope. The procedure combines the discretization technique with the kinematic analysis of plasticity theory, i.e. a discretization-based kinematic analysis. The discretization technique allows discretization of the analyzed slope into various components and generation of a kinematically admissible failure mechanism based on an associated flow rule. Accordingly, variations in soil properties including soil cohesion, internal friction angle, and unit weight are accounted for with ease, while the conventional kinematic analysis fails to consider changes in soil properties. The spatial-temporal effects of dynamic accelerations represented by primary and shear seismic waves are considered using the pseudo-dynamic approach. In the presence of geosynthetic reinforcement, tensile failure is discussed, provided that the geosynthetics are installed with sufficient length. Equating the total rate of work done by external forces to the internal rate of work yields the upper-bound solution of the required reinforcement force, below which slopes fail. The reinforcement force is sought by optimizing the objective function with respect to the independent variables, and is presented in a normalized form. Pseudo-static analysis is a special case and hence readily derived from the pseudo-dynamic analysis. Comparisons of the pseudo-static/dynamic solutions calculated in this study are highlighted. Although the pseudo-static approach yields a conservative solution, its ability to give a reasonable result is substantiated for steep slopes. In order to provide a more meaningful stability analysis, the pseudo-dynamic approach is recommended because it considers the spatial-temporal effect of the earthquake input.
Abstract: A new method for the discretization of continuous attributes is put forward to overcome a limitation of traditional rough sets, which cannot deal with continuous attributes. The method is based on an improved algorithm for producing candidate cut points and a reduction algorithm based on variable precision rough information entropy. While guaranteeing the consistency of the decision system, the method reduces the number of cut points and improves the efficiency of reduction. Adopting variable precision rough information entropy as the measure criterion, it has good tolerance to noise. Experiments show that the algorithm yields satisfying reduction results.
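The paper's variable-precision rough-entropy criterion is more elaborate, but the common core of such methods — generating candidate cut points midway between class changes and scoring them by the class entropy of the induced intervals — can be sketched as follows on a toy single-attribute decision table. The data and the plain Shannon-entropy score are illustrative assumptions, not the paper's exact measure.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c/n)*math.log2(c/n) for c in Counter(labels).values())

def candidate_cuts(values, labels):
    """Midpoints between adjacent objects of different classes."""
    pairs = sorted(zip(values, labels))
    cuts = []
    for (v1, l1), (v2, l2) in zip(pairs, pairs[1:]):
        if l1 != l2 and v1 != v2:
            cuts.append((v1 + v2)/2)
    return cuts

def best_cut(values, labels):
    """Pick the candidate cut minimizing the weighted class entropy."""
    n = len(values)
    best, best_h = None, float("inf")
    for c in candidate_cuts(values, labels):
        left = [l for v, l in zip(values, labels) if v <= c]
        right = [l for v, l in zip(values, labels) if v > c]
        h = (len(left)/n)*entropy(left) + (len(right)/n)*entropy(right)
        if h < best_h:
            best, best_h = c, h
    return best, best_h

# toy decision table: one continuous attribute, two classes
values = [1.0, 1.2, 1.4, 3.0, 3.2, 3.5, 5.0, 5.1]
labels = ['a', 'a', 'a', 'b', 'b', 'b', 'a', 'a']
cut, h = best_cut(values, labels)
print("best cut:", cut, "weighted entropy:", h)
```

A full discretizer would select cuts greedily (or by reduction) until the decision table is consistent; the consistency and noise-tolerance aspects in the abstract refine exactly this selection step.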
Funding: Project supported by the National Basic Research Program (973) of China (No. 2002CB312106), the China Postdoctoral Science Foundation (No. 2004035715), the Science & Technology Program of Zhejiang Province (No. 2004C31098), and the Postdoctoral Foundation of Zhejiang Province (No. 2004-bsh-023), China.
Abstract: Rough set theory plays an important role in knowledge discovery but cannot deal with continuous attributes, so discretization is a problem that cannot be neglected. Discretization of decision systems in rough set theory has some particular characteristics: consistency must be satisfied, and the set of cuts for discretization is expected to be as small as possible. The consistent and minimal discretization problem is NP-complete. In this paper, an immune algorithm for the problem is proposed. Its correctness and effectiveness are shown in experiments. The discretization method presented in this paper can also be used as a data pre-treatment step for symbolic knowledge discovery or machine learning methods other than rough set theory.
Abstract: How to extract knowledge from a decision table based on rough set theory is being widely studied. A novel problem is how to discretize a decision table with continuous attributes. In order to obtain more reasonable discretization results, a discretization algorithm is proposed that arranges half-global discretization based on the correlation coefficient of each continuous attribute while considering the uniqueness of rough set theory. When choosing heuristic information, stability is combined with rough entropy. In terms of stability, the possibility of classifying objects belonging to a certain sub-interval of a given attribute into neighboring sub-intervals is minimized. By doing this, rational discrete intervals can be determined. Rough entropy is employed to decide the optimal cut points while guaranteeing the consistency of the decision table after discretization. The idea of this algorithm is elaborated through the Iris data, and experiments comparing the outcomes of four discretized datasets are also given, calculated by the proposed algorithm and four other typical discretization algorithms respectively. After that, classification rules are deduced and summarized through rough-set-based classifiers. Results show that the proposed discretization algorithm is able to generate optimal classification accuracy while minimizing the number of discrete intervals. It displays superiority especially when dealing with a decision table having a large number of attributes.
Funding: Supported by the National Outstanding Young Scientists Fund of China (No. 10725209), the National Natural Science Foundation of China (No. 10672092), the Shanghai Subject Chief Scientist Project (No. 09XD1401700), the Shanghai Municipal Education Commission Scientific Research Project (No. 07ZZ07), the Shanghai Leading Academic Discipline Project (No. S30106), and the Changjiang Scholars and Innovative Research Team in University Program (No. IRT0844).
Abstract: A computational technique is proposed for the Galerkin discretization of axially moving strings with geometric nonlinearity. The Galerkin discretization is based on the eigenfunctions of stationary strings. The discretized equations are simplified by regrouping nonlinear terms to reduce the computational work. The scheme can be easily implemented in practical programming. Numerical results show the effectiveness of the technique. The results also highlight a feature of the Galerkin discretization of gyroscopic continua: the number of terms in the Galerkin discretization should be even. The technique is generalized from elastic strings to viscoelastic strings.
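The projection step underlying such a Galerkin discretization — expanding the transverse displacement in the stationary-string eigenfunctions sin(k*pi*x) — can be sketched as follows. The sample displacement, term count, and midpoint quadrature are illustrative assumptions, and the dynamics (gyroscopic and nonlinear terms) are omitted.

```python
import math

def galerkin_coeffs(f, n_terms, n_quad=400):
    """q_k = 2 * integral_0^1 f(x) sin(k*pi*x) dx, by the midpoint rule."""
    h = 1.0/n_quad
    coeffs = []
    for k in range(1, n_terms + 1):
        s = sum(f((i + 0.5)*h)*math.sin(k*math.pi*(i + 0.5)*h)
                for i in range(n_quad))
        coeffs.append(2*h*s)
    return coeffs

def reconstruct(coeffs, x):
    """Evaluate the truncated eigenfunction expansion at x."""
    return sum(q*math.sin((k + 1)*math.pi*x) for k, q in enumerate(coeffs))

f = lambda x: x*(1 - x)               # sample transverse displacement profile
q = galerkin_coeffs(f, 8)             # even term number, as the abstract advises
err = max(abs(reconstruct(q, j/100) - f(j/100)) for j in range(101))
print("first coefficients:", q[:2], "max reconstruction error:", err)
```

For this symmetric profile the first coefficient is 8/pi^3 and the even-mode coefficients vanish; in a moving-string simulation these q_k become the unknowns of the discretized ODE system.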
Funding: Supported by the Major Science and Technology Special Plan Projects of Yunnan (No. 202002AD080001).
Abstract: Gaze estimation has become an important field of image and information processing. Estimating gaze from full-face images using convolutional neural networks (CNNs) has achieved good accuracy. However, estimating gaze from eye images is very challenging because eye images contain less information than full-face images, and it remains important since eye-image-based methods have wider applications. In this paper, we propose the discretization-gaze network (DGaze-Net) to optimize monocular three-dimensional (3D) gaze estimation accuracy through feature discretization and an attention mechanism. The gaze predictor of DGaze-Net is optimized based on feature discretization: by discretizing the gaze angle into K bins, a classification constraint is added to the gaze predictor. In the gaze predictor, a binned classification is applied to the gaze angle before it is regressed against the real gaze angle, improving gaze estimation accuracy. In addition, an attention mechanism is applied to the backbone to enhance the ability to extract eye features related to gaze. The proposed method is validated on three gaze datasets and achieves encouraging gaze estimation accuracy.
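The abstract does not give the bin count K, the angle range, or the loss details, so the following generic sketch (all constants assumed) only shows the basic angle-discretization idea: map a continuous gaze angle to a bin index for the classification constraint, and recover a continuous estimate as the expectation of bin centers under the predicted class probabilities.

```python
import math

K = 18                                 # assumed number of bins
LO, HI = -math.pi/2, math.pi/2         # assumed angle range (radians)
WIDTH = (HI - LO)/K

def angle_to_bin(theta):
    """Classification target: index of the bin containing theta."""
    b = int((theta - LO)/WIDTH)
    return min(max(b, 0), K - 1)

def bin_expectation(probs):
    """Soft regression output: expectation of bin centers under probs."""
    centers = [LO + (i + 0.5)*WIDTH for i in range(K)]
    return sum(p*c for p, c in zip(probs, centers))

theta = 0.3
b = angle_to_bin(theta)
# a one-hot "prediction" recovers the bin-center approximation of theta
probs = [1.0 if i == b else 0.0 for i in range(K)]
approx = bin_expectation(probs)
print("bin:", b, "recovered angle:", approx)
```

In a trained network the probabilities come from a softmax head, and the classification loss on the bin index is combined with a regression loss on the recovered angle, which is the constraint the abstract describes.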
Funding: Under the auspices of the National Natural Science Foundation of China (No. 40901026), the Beijing Municipal Science & Technology New Star Project Funds (No. 2010B046), the Beijing Municipal Natural Science Foundation (No. 8123041), and the Southwest Florida Water Management District (SFWMD) Project.
Abstract: The commonly used discretization approaches for distributed hydrological models can be broadly categorized into four types, based on the nature of the discrete components: regular meshes, Triangular Irregular Networks (TINs), Representative Elementary Watersheds (REWs), and Hydrologic Response Units (HRUs). In this paper, a new discretization approach for landforms that have similar hydrologic properties is developed and discussed for the Integrated Hydrologic Model (IHM), which combines the simulation of surface and groundwater processes, accounting for the interaction between the systems. The approach used in the IHM is to disaggregate basin parameters into discrete landforms that have similar hydrologic properties. These landforms may be impervious areas, related areas, areas with high or low clay or organic fractions, areas with significantly different depths to the water table, and areas with different types of land cover or different land uses. Incorporating discrete landforms within basins allows significant distributed-parameter analysis, but requires an efficient computational structure. The IHM integration represents a new approach to interpreting fluxes across the model interface and storages near the interface for transfer to the appropriate model component, accounting for the disparate discretizations while rigidly maintaining mass conservation. The discretization approaches employed in the IHM provide ideas and insights helpful to researchers working on integrated models of surface-groundwater interaction.
Abstract: AIM: To evaluate the use of short-duration transient visual evoked potentials (VEP) and color reflectivity discretization analysis (CORDA) in glaucomatous eyes, eyes suspected of having glaucoma, and healthy eyes. METHODS: The study included 136 eyes from 136 subjects: 49 eyes with glaucoma, 45 glaucoma suspect eyes, and 42 healthy eyes. Subjects underwent Humphrey visual field (VF) testing, VEP testing, and peripapillary retinal nerve fiber layer optical coherence tomography imaging with post-acquisition CORDA applied. Statistical analysis was performed using means and ranges, ANOVA, post-hoc comparisons with Tukey's adjustment, Fisher's exact test, area under the curve (AUC), and Spearman correlation coefficients. RESULTS: Parameters from VEP and CORDA correlated significantly with VF mean deviation (MD) (P<0.05). In distinguishing glaucomatous eyes from controls, VEP demonstrated AUC values of 0.64-0.75 for amplitude and 0.67-0.81 for latency. The CORDA HR1 parameter was highly discriminative for glaucomatous eyes vs controls (AUC=0.94). CONCLUSION: Significant correlations are found between MD and parameters of short-duration transient VEP and CORDA, diagnostic modalities that warrant further consideration in identifying glaucoma characteristics.
Abstract: The selection of a suitable discretization method (DM) to discretize spatially continuous variables (SCVs) is critical in machine-learning-based natural hazard susceptibility assessment. However, few studies consider the influence of the selected DM or how to efficiently select a suitable DM for each SCV. These issues are addressed in this study. The information loss rate (ILR), an index based on information entropy, seems usable for selecting the optimal DM for each SCV. However, the ILR fails to show the actual influence of discretization because it only considers the total amount of information by which the discretized variable departs from the original SCV. Facing this issue, we propose an index, the information change rate (ICR), that focuses on the changed amount of information due to the discretization of each cell, enabling identification of the optimal DM. We develop a case study with Random Forest (training/testing ratio of 7:3) to assess flood susceptibility in Wanan County, China. Area-under-the-curve-based and susceptibility-map-based approaches are presented to compare the ILR and the ICR. The results show that the ICR-based optimal DMs are more rational than the ILR-based ones in both cases. Moreover, we observed that the ILR values are unnaturally small (<1%), whereas the ICR values are much more in line with general recognition (usually 10%-30%). These results demonstrate the superiority of the ICR. We consider that this study fills the existing research gaps, improving ML-based natural hazard susceptibility assessments.
Abstract: Discretization based on rough set theory aims to seek the minimum possible number of cut points without weakening the indiscernibility of the original decision system. Optimization of discretization is an NP-complete problem, and the genetic algorithm is an appropriate method to solve it. In order to achieve optimal discretization, the choice of the initial cut set is discussed first, because a good initial cut set can enhance the efficiency and quality of the follow-up algorithm. Second, an effective heuristic genetic algorithm for the discretization of continuous attributes of the decision table is proposed, which takes the significance of cut points as heuristic information and introduces a novel operator to maintain the indiscernibility of the original decision system and enhance the local search ability of the algorithm. Thus the algorithm converges quickly and has global optimizing ability. Finally, the effectiveness of the algorithm is validated through experiments.
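As an illustrative toy version (not the paper's heuristic operator or significance measure), the sketch below runs a small genetic algorithm over the candidate cut points of a single continuous attribute: a bitstring selects cuts, fitness rewards consistency of the discretized decision table and penalizes the number of cuts, and the search uses elitism, tournament selection, single-point crossover, and bit-flip mutation. The dataset, GA parameters, and fitness form are all assumptions.

```python
import random
from collections import defaultdict

random.seed(0)

# toy decision table: one continuous attribute with alternating classes
values = [1, 2, 3, 4, 5, 6, 7, 8]
labels = ['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b']
cand = [(values[i] + values[i+1])/2 for i in range(len(values)-1)]  # all midpoints

def consistent(bits):
    """All objects falling in the same interval must share one class."""
    cuts = [c for c, b in zip(cand, bits) if b]
    groups = defaultdict(set)
    for v, l in zip(values, labels):
        groups[sum(v > c for c in cuts)].add(l)
    return all(len(s) == 1 for s in groups.values())

def fitness(bits):
    # consistency is mandatory; among consistent subsets, fewer cuts is better
    return -sum(bits) if consistent(bits) else -len(cand) - 1

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in cand] for _ in range(30)]
for _ in range(80):
    best = max(pop, key=fitness)             # elitism: keep the current best
    nxt = [best]
    while len(nxt) < len(pop):
        p1, p2 = tournament(pop), tournament(pop)
        x = random.randrange(1, len(cand))   # single-point crossover
        child = p1[:x] + p2[x:]
        child = [b ^ (random.random() < 0.05) for b in child]  # bit-flip mutation
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print("selected cuts:", [c for c, b in zip(cand, best) if b])
```

For this table the minimal consistent cut set is the three midpoints at the class changes (2.5, 4.5, 6.5); the heuristic information and novel operator in the paper serve to steer exactly this search more efficiently on full decision tables.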