In Advances in Pure Mathematics (www.scirp.org/journal/apm), Vol. 1, No. 4 (July 2011), pp. 136-154, the mathematical structure of the much discussed problem of probability known as the Monty Hall problem was mapped in detail. It is styled here as Monty Hall 1.0. The proposed analysis was then generalized to related cases involving any number of doors (d), cars (c), and opened doors (o) (Monty Hall 2.0) and one specific case involving more than one picked door (p) (Monty Hall 3.0). In cognitive terms, this analysis was interpreted in terms of the presumed digital nature of rational thought and language. In the present paper, Monty Hall 1.0 and 2.0 are briefly reviewed (§§2-3). Additional generalizations of the problem are then presented in §§4-7. They concern expansions of the problem to the following items: (1) to any number of picked doors, with p denoting the number of doors initially picked and q the number of doors picked when switching doors after doors have been opened to reveal goats (Monty Hall 3.0; see §4); (3) to the precise conditions under which one’s chances increase or decrease in instances of Monty Hall 3.0 (Monty Hall 3.2; see §6); and (4) to any number of switches of doors (s) (Monty Hall 4.0; see §7). The aforementioned article in APM, Vol. 1, No. 4 may serve as a useful introduction to the analysis of the higher variations of the Monty Hall problem offered in the present article. The body of the article is by Leo Depuydt. An appendix by Richard D. Gill (see §8) provides additional context by building a bridge to modern probability theory in its conventional notation and by pointing to the benefits of certain interesting and relevant tools of computation now available on the Internet. The cognitive component of the earlier investigation is extended in §9 by reflections on the foundations of mathematics. It will be proposed, in the footsteps of George Boole, that the phenomenon of mathematics needs to be defined in empirical terms as something that happens to the brain or something that the brain does. It is generally assumed that mathematics is a property of nature or reality or whatever one may call it. There is not the slightest intention in this paper to falsify this assumption, because it cannot be falsified, just as it cannot be empirically or positively proven. But there is no way that this assumption can be a factual observation. It can be no more than an altogether reasonable, yet fully secondary, inference derived mainly from the fact that mathematics appears to work, even if some may deem the fact of this match to constitute proof. On the deepest empirical level, mathematics can only be directly observed and therefore directly analyzed as an activity of the brain. The study of mathematics therefore becomes an essential part of the study of cognition and human intelligence. The reflections on mathematics as a phenomenon offered in the present article will serve as a prelude to planned articles on how to redefine the foundations of probability as one type of mathematics in cognitive fashion and on how exactly Boole’s theory of probability subsumes, supersedes, and completes classical probability theory. §§2-7 combined, on the one hand, and §9, on the other hand, are both self-sufficient units and can be read independently from one another.
The ultimate design of the larger project of which this paper is part remains to increase the digitalization of the analysis of rational thought and language, that is, of (rational, not emotional) human intelligence. To reach out to other disciplines, an effort is made to describe the mathematics more explicitly than is usual.
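As a concrete companion to the generalized problem just described, the following Monte Carlo sketch estimates the stay and switch winning chances for any number of doors d, cars c, and opened doors o. It is an illustration added here, not the article’s Boolean analysis; it assumes the host opens o goat doors outside the player’s pick (so o ≤ d − c − 1) and that a switcher moves to one uniformly random unopened, unpicked door.

```python
import random

def monty_hall_trial(d, c, o, switch):
    """One round: d doors, c cars, host opens o goat doors the player did not pick."""
    doors = range(d)
    cars = set(random.sample(doors, c))
    pick = random.randrange(d)
    # The host opens o doors that are neither picked nor hiding a car;
    # this requires o <= d - c - 1.
    openable = [x for x in doors if x != pick and x not in cars]
    opened = set(random.sample(openable, o))
    if switch:
        pick = random.choice([x for x in doors if x != pick and x not in opened])
    return pick in cars

def estimate(d, c, o, trials=100_000):
    stay = sum(monty_hall_trial(d, c, o, False) for _ in range(trials)) / trials
    switch = sum(monty_hall_trial(d, c, o, True) for _ in range(trials)) / trials
    return stay, switch

# Classic case (3 doors, 1 car, 1 opened): approximately 1/3 vs 2/3.
print(estimate(3, 1, 1))
```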
The Monty Hall problem has received its fair share of attention in mathematics. Recently, an entire monograph has been devoted to its history. There has been a multiplicity of approaches to the problem. These approaches are not necessarily mutually exclusive. The design of the present paper is to add one more approach by analyzing the mathematical structure of the Monty Hall problem in digital terms. The structure of the problem is described as much as possible in the tradition and the spirit—and as much as possible by means of the algebraic conventions—of George Boole’s Investigation of the Laws of Thought (1854), the Magna Charta of the digital age, and of John Venn’s Symbolic Logic (second edition, 1894), which is squarely based on Boole’s Investigation and elucidates it in many ways. The focus is not only on the digital-mathematical structure itself but also on its relation to the presumed digital nature of cognition as expressed in rational thought and language. The digital approach is outlined in part 1. In part 2, the Monty Hall problem is analyzed digitally. To ensure the generality of the digital approach and demonstrate its reliability and productivity, the Monty Hall problem is extended and generalized in parts 3 and 4 to related cases in light of the axioms of probability theory. In the full mapping of the mathematical structure of the Monty Hall problem and any extensions thereof, a digital or non-quantitative skeleton is fleshed out by a quantitative component. The pertinent mathematical equations are developed, presented, and illustrated by means of examples.
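For reference alongside the digital analysis, the classic three-door case in conventional probability notation (a standard textbook computation, not the Boolean algebra the paper develops) reads:

```latex
P(\text{win} \mid \text{stay}) = P(\text{first pick hides the car}) = \tfrac{1}{3}, \qquad
P(\text{win} \mid \text{switch}) = P(\text{first pick hides a goat}) = \tfrac{2}{3},
```

since, once the host has opened a goat door, switching wins exactly when the first pick was a goat.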
This article refers to the “Mathematics of Harmony” by Alexey Stakhov (2009), a new interdisciplinary direction of modern science. The main goal of the article is to describe two modern scientific discoveries—the New Geometric Theory of Phyllotaxis (Bodnar’s Geometry) and Hilbert’s Fourth Problem based on the Hyperbolic Fibonacci and Lucas Functions and “Golden” Fibonacci λ-Goniometry (λ > 0 is a given positive real number). Although these discoveries refer to different areas of science (mathematics and theoretical botany), they are based on one and the same scientific idea—the “golden mean,” which was introduced by Euclid in his Elements—and its generalization, the “metallic means,” which have been studied recently by the Argentinian mathematician Vera Spinadel. The article is a confirmation of the interdisciplinary character of the “Mathematics of Harmony”, which originates from Euclid’s Elements.
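For orientation, the key objects named in this abstract are usually defined as follows in Stakhov’s and Spinadel’s work (notation assumed here for the reader’s convenience):

```latex
% Metallic mean of order \lambda: the positive root of x^2 = \lambda x + 1
% (\lambda = 1 gives the golden mean).
\Phi_\lambda = \frac{\lambda + \sqrt{4 + \lambda^2}}{2}, \qquad \lambda > 0;
% Stakhov's hyperbolic Fibonacci sine and cosine of order \lambda:
sF_\lambda(x) = \frac{\Phi_\lambda^{x} - \Phi_\lambda^{-x}}{\sqrt{4 + \lambda^2}}, \qquad
cF_\lambda(x) = \frac{\Phi_\lambda^{x} + \Phi_\lambda^{-x}}{\sqrt{4 + \lambda^2}}.
```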
Our paper presents a project that involves two research questions: does the choice of a related problem by the tutorial system allow the problem-solving process, which is blocked for the student, to be restarted? What information about learning do the related problems returned by the system provide us? We answer the first question according to didactic engineering, whose mode of validation is internal and based on the confrontation between an a priori analysis and an a posteriori analysis that relies on data from experiments in schools. We consider the student as a subject whose adaptation processes are conditioned by the problem and the possible interactions with the computer environment, and also by his knowledge, usually implicit, of the institutional norms that condition his relationship with geometry. Choosing a set of good problems within the system is therefore an essential element of the learning model. Since the source of a problem depends on the student’s actions with the computer tool, it is necessary to wait and see which related problems are returned to him before being able to identify patterns and assess the learning. With the simultaneity of collecting and analysing interactions in each class, we answer the second question according to a grounded theory analysis. By approaching the problems posed by the system and the designs in play at learning blockages, our analysis links the characteristics of problems to the design components in order to theorize on the decisional, epistemological, representational, didactic and instrumental aspects of the subject-milieu system in interaction.
Two new regularization algorithms for solving the first-kind Volterra integral equation which describes the pressure-rate deconvolution problem in well test data interpretation are developed in this paper. The main features of the problem are the strongly nonuniform scale of the solution and large errors (up to 15%) in the input data. In both algorithms, the solution is represented as a decomposition over special basis functions which satisfy the given a priori information on the solution, and this idea allows us to significantly improve the quality of the approximate solution and to simplify the minimization problem. The theoretical details of the algorithms, as well as the results of numerical experiments proving the robustness of the algorithms, are presented.
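The paper’s algorithms rest on a problem-specific basis decomposition; as a baseline for comparison, the sketch below applies plain Tikhonov regularization to a discretized first-kind Volterra equation with a synthetic kernel and 15%-level noise. The kernel, data, and regularization parameter are illustrative assumptions, not the pressure-rate model.

```python
import numpy as np

n, T = 200, 1.0
t = np.linspace(0, T, n)
h = t[1] - t[0]

# Lower-triangular collocation matrix for \int_0^t K(t, s) x(s) ds = y(t),
# with an assumed smooth kernel K(t, s) = exp(-(t - s)).
A = np.array([[np.exp(-(ti - si)) * h if si <= ti else 0.0 for si in t] for ti in t])

x_true = np.sin(2 * np.pi * t)                 # synthetic solution
y_noisy = A @ x_true
y_noisy = y_noisy + 0.15 * np.abs(y_noisy).max() * np.random.randn(n)  # ~15% noise

# Tikhonov: minimize ||A x - y||^2 + alpha ||x||^2 via the normal equations.
alpha = 1e-2
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_noisy)
print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```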
Various optimal boundary control problems for linear infinite order distributed hyperbolic systems involving constant time lags are considered. Constraints on controls are imposed. Necessary and sufficient optimality conditions for the Neumann problem with the quadratic performance functional are derived.
The distributed Lagrange multiplier/fictitious domain (DLM/FD)-mixed finite element method is developed and analyzed in this paper for a transient Stokes interface problem with jump coefficients. The semi- and fully discrete DLM/FD-mixed finite element schemes are developed for the first time for this problem with a moving interface, where the arbitrary Lagrangian-Eulerian (ALE) technique is employed to deal with the moving and immersed subdomain. Stability and optimal convergence properties are obtained for both schemes. Numerical experiments are carried out for different scenarios of jump coefficients, and all theoretical results are validated. (Funding: P. Sun was supported by NSF Grant DMS-1418806; C. S. Zhang was partially supported by the National Key Research and Development Program of China (Grant No. 2016YFB0201304), the Major Research Plan of the National Natural Science Foundation of China (Grant Nos. 91430215, 91530323), and the Key Research Program of Frontier Sciences of CAS.)
Optimal control problems (OCPs) related to PDEs are a very active area of research. These problems deal with processes in mechanical engineering, heat, aeronautics, physics, hydro- and gas dynamics, the physics of plasma, and other real-life problems. In this paper, we deal with a class of constrained OCPs for parabolic systems. The problem is converted to a new unconstrained OCP by adding a penalty function to the cost functional. The existence of a solution of the considered parabolic optimal control problem (POCP) is established, and a uniqueness theorem for the POCP is given. A theorem providing sufficient differentiability conditions is also proved.
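A minimal finite-dimensional sketch of the conversion step described above, assuming a toy quadratic cost and a single equality constraint (the PDE setting is replaced by scipy.optimize on vectors):

```python
import numpy as np
from scipy.optimize import minimize

def cost(u):
    return np.sum(u ** 2)            # stand-in for the cost functional

def g(u):
    return np.sum(u) - 1.0           # stand-in for the constraint, g(u) = 0

def penalized(u, eps):
    # Unconstrained functional: original cost plus a penalty term that
    # grows as the constraint is violated; eps -> 0 tightens the penalty.
    return cost(u) + (1.0 / eps) * g(u) ** 2

u = np.zeros(5)
for eps in [1.0, 1e-2, 1e-4]:        # decreasing penalty parameter
    u = minimize(lambda v: penalized(v, eps), u).x
print(u)                             # approaches the constrained optimum [0.2]*5
```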
We deal with the Copenhagen problem where the two big bodies of equal masses are also magnetic dipoles, and we study some aspects of the dynamics of a charged particle which moves in the electromagnetic field produced by the primaries. We investigate the equilibrium positions of the particle and their parametric variations, as well as the basins of attraction for various numerical methods and various values of the parameter λ.
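Basins of attraction for a numerical method can be pictured by recording, for each starting point, which fixed point the iteration reaches. The sketch below does this for Newton’s method on the toy equation z³ = 1 in the complex plane; the paper’s setting (equilibria of the charged Copenhagen problem) is replaced by this simple stand-in.

```python
import numpy as np

roots = np.array([np.exp(2j * np.pi * k / 3) for k in range(3)])

def attractor(z, iters=40):
    # Newton iteration for f(z) = z^3 - 1; returns index of the nearest root.
    for _ in range(iters):
        z = z - (z ** 3 - 1) / (3 * z ** 2)
    return int(np.argmin(np.abs(roots - z)))

xs = np.linspace(-2, 2, 300)
grid = np.array([[attractor(complex(x, y) or 1e-9) for x in xs] for y in xs])
print(np.bincount(grid.ravel()))   # pixel count attracted to each root
```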
Magneto-acoustic Tomography with Current Injection (MAT-CI) is a new biological electrical impedance imaging technique that combines Electrical Impedance Tomography (EIT) with Ultrasonic Imaging (UI), and so possesses the non-invasiveness and high contrast of EIT and the high resolution of UI. MAT-CI is expected to acquire high-quality images and has a wide range of prospective applications. Its principle is to place a conductive sample in a static magnetic field (SMF) and inject a time-varying current; the SMF and the current interact and generate a Lorentz force that excites ultrasonic signals, received by ultrasonic transducers positioned around the sample. According to a related reconstruction algorithm applied to the ultrasonic signal, an electrical conductivity image is then obtained. In this paper, a mathematical model of the MAT-CI forward problem is set up to derive the theoretical equations of the electromagnetic field and to solve for the sound source distribution by means of Green’s function. Secondly, restoration of the sound field by Wiener filtering and reconstruction of the current density by the time-rotating method lead to the Laplace equation satisfied by the current density, from which the electrical conductivity distribution image of the sample is obtained by an iteration method. Finally, double-loop coil experiments are conducted to verify the feasibility of the approach.
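The sound-field restoration step mentioned above is a deconvolution; a minimal 1-D Wiener-filter sketch (the Gaussian blur, additive noise, and SNR are illustrative assumptions, not the MAT-CI acoustics) looks like this:

```python
import numpy as np

n = 512
t = np.arange(n)
source = (np.abs(t - 200) < 10).astype(float)         # synthetic source signal
psf = np.exp(-0.5 * ((t - n // 2) / 5.0) ** 2)        # assumed blur kernel
psf /= psf.sum()

H = np.fft.fft(np.fft.ifftshift(psf))                 # kernel spectrum
measured = np.real(np.fft.ifft(np.fft.fft(source) * H))
measured += 0.01 * np.random.randn(n)                 # additive noise

snr = 100.0                                           # assumed signal-to-noise ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener filter spectrum
restored = np.real(np.fft.ifft(np.fft.fft(measured) * wiener))
print("recovered peak near:", restored.argmax())      # close to 200
```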
Several problems arising in science and engineering are modeled by differential equations that involve conditions specified at more than one point. The non-linear two-point boundary value problem (TPBVP) (Bratu’s equation, Troesch’s problem) occurs in engineering and science, including the modeling of chemical reactions, diffusion processes and heat transfer. An analytical expression for the concentration of substrate is obtained using the homotopy perturbation method for all values of the parameters. These approximate analytical results were found to be in good agreement with the simulation results.
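As a numerical reference against which an HPM series for Bratu-type problems can be checked, the sketch below solves u'' + λe^u = 0, u(0) = u(1) = 0 with scipy’s collocation BVP solver (λ = 1 is an assumed illustrative value):

```python
import numpy as np
from scipy.integrate import solve_bvp

lam = 1.0

def rhs(x, y):
    # y[0] = u, y[1] = u'; Bratu's equation gives u'' = -lam * exp(u).
    return np.vstack([y[1], -lam * np.exp(y[0])])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])   # u(0) = u(1) = 0

x = np.linspace(0, 1, 50)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))    # zero initial guess
print("u(0.5) =", sol.sol(0.5)[0])    # lower branch, approx. 0.1405
```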
In this note we consider some basic, yet unusual, issues pertaining to the accuracy and stability of numerical integration methods to follow the solution of first order and second order initial value problems (IVP). Included are remarks on multiple solutions, multi-step methods, effect of initial value perturbations, as well as slowing and advancing the computed motion in second order problems.
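One of the stability issues alluded to above can be seen on the scalar test equation y' = −50y (an assumed illustrative choice): forward Euler blows up when the step exceeds the stability limit 2/50, and below that limit an initial-value perturbation is damped along with the solution.

```python
def euler(f, y0, h, n):
    # Forward Euler: y_{k+1} = y_k + h * f(y_k).
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

f = lambda y: -50.0 * y
print(euler(f, 1.0, 0.05, 40))     # h > 0.04: amplification, |y| grows
print(euler(f, 1.0, 0.01, 200))    # h < 0.04: stable decay toward 0
print(euler(f, 1.0 + 1e-6, 0.01, 200) - euler(f, 1.0, 0.01, 200))  # perturbation damped
```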
We study waiting time problems for first-order Markov dependent trials via conditional probability generating functions. Our models involve α frequency cells and β run cells with prescribed quotas and an additional γ slack cells without quotas. For any given thresholds, in our Model I we determine the waiting time until at least a given number of frequency cells and at least a given number of run cells reach their quotas. For any given τ ≤ α + β, in our Model II we determine the waiting time until τ cells reach their quotas. Computer algorithms are developed to calculate the distributions, expectations and standard deviations of the waiting time random variables of the two models. Numerical results demonstrate the efficiency of the algorithms.
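A Monte Carlo stand-in for Model I conveys the flavor of the waiting-time quantity being computed; the transition matrix, quotas, and uniform initial distribution below are illustrative assumptions, and the paper’s exact PGF-based algorithms are not reproduced here.

```python
import random, statistics

states = [0, 1, 2]                                  # two quota cells, one slack cell
P = {0: [0.5, 0.3, 0.2], 1: [0.2, 0.5, 0.3], 2: [0.3, 0.3, 0.4]}
quota = {0: 3, 1: 2}                                # cell 2 carries no quota

def waiting_time():
    counts = {s: 0 for s in states}
    s = random.choice(states)                        # assumed uniform start
    counts[s] += 1
    n = 1
    while any(counts[c] < q for c, q in quota.items()):
        s = random.choices(states, weights=P[s])[0]  # first-order Markov step
        counts[s] += 1
        n += 1
    return n

samples = [waiting_time() for _ in range(20_000)]
print(statistics.mean(samples), statistics.stdev(samples))
```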
The cosmological constant problem arises because the magnitude of the vacuum energy density predicted by Quantum Field Theory is about 120 orders of magnitude larger than the value implied by cosmological observations of accelerating cosmic expansion. We point out that the fractal nature of quantum space-time with negative Hausdorff-Colombeau dimensions can resolve this tension. Canonical Quantum Field Theory is widely believed to break down at some fundamental high-energy cutoff, and therefore the quantum fluctuations in the vacuum can be treated classically only up to this high-energy cutoff. In this paper we argue that Quantum Field Theory in fractal space-time with negative Hausdorff-Colombeau dimensions gives a high-energy cutoff in a natural way. We argue that there exists a hidden physical mechanism which cancels divergences in canonical QED4, QCD4, Higher-Derivative Quantum Gravity, etc.; in fact, we argue that the corresponding supermassive Pauli-Villars ghost fields really exist. This means that there is a ghost-driven acceleration of the universe hidden in the cosmological constant. In order to obtain the desired physical result, we apply the canonical Pauli-Villars regularization up to Λ*. This would fit the observed value of the dark energy needed to explain the accelerated expansion of the universe if we choose a highly symmetric mass distribution between standard matter and ghost matter below the scale Λ*. The small value of the cosmological constant is explained by a tiny violation of the symmetry between standard matter and ghost matter. The nature of dark matter is also explained by a common origin of the dark energy and dark matter phenomena.
During the use of robotics in applications such as antiterrorism or combat, a motion-constrained pursuer vehicle, such as a Dubins unmanned surface vehicle (USV), must get close enough (within a prescribed zero or positive distance) to a moving target as quickly as possible, resulting in the extended minimum-time intercept problem (EMTIP). Existing research has primarily focused on the zero-distance intercept problem, the MTIP, establishing the necessary or sufficient conditions for MTIP optimality and utilizing analytic algorithms, such as root-finding algorithms, to calculate the optimal solutions. However, these approaches depend heavily on the properties of the analytic algorithm, making them inapplicable when problem settings change, such as in the case of a positive effective range or complicated target motions beyond uniform rectilinear motion. In this study, an approach employing a high-accuracy and quality-guaranteed mixed-integer piecewise-linear program (QG-PWL) is proposed for the EMTIP. This program can accommodate different effective interception ranges and complicated target motions (variable velocity or complicated trajectories). The high accuracy and quality guarantees of QG-PWL originate from elegant strategies such as piecewise linearization and other developed operation strategies. The approximate error in the intercept path length is proved to be bounded by h²/(4√2), where h is the piecewise length. (Funding: supported by the National Natural Science Foundation of China (Grant No. 62306325).)
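The h² error behavior behind that bound can be seen on any smooth curve: halving the piece length roughly quarters the deviation of the piecewise-linear interpolant. The sketch below checks this scaling on f(x) = sin x (an assumed test curve; the paper’s constant h²/(4√2) is specific to its interception-path construction).

```python
import numpy as np

f = np.sin
x_fine = np.linspace(0, np.pi, 100_001)

for n_pieces in [4, 8, 16, 32]:
    knots = np.linspace(0, np.pi, n_pieces + 1)
    pwl = np.interp(x_fine, knots, f(knots))     # piecewise-linear interpolant
    err = np.abs(pwl - f(x_fine)).max()
    h = knots[1] - knots[0]
    print(f"h = {h:.4f}  max error = {err:.2e}  error/h^2 = {err / h ** 2:.3f}")
# error/h^2 stays roughly constant (about 1/8 for sin), i.e., the error is O(h^2)
```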
On the basis of the similar structure of solutions of ordinary differential equation (ODE) boundary value problems, the similar construction method was put forward for solving problems of fluid flow in porous media through a homogeneous reservoir. It is shown that the pressure distributions of the dimensionless reservoir and the bottom hole in Laplace space, which exhibit radial flow, also show a similar structure, and the internal relationship between the above solutions is illustrated in detail.
In the following pages I will try to give a solution to this well-known unsolved problem of the theory of numbers. The solution is given here with an important analysis of the proof of formula (4.18), with the introduction of special intervals between squares of prime numbers that I call silver intervals. I also introduce another new mathematical phenomenon, the logical proposition “In mathematics nothing happens without reason”, for which I use the ancient Greek term “catholic information”. From the theorem of prime numbers we know that the expected multitude of prime numbers in an interval is given by the formula ∫ dx/ln x, considering that interval as a continuous distribution of real numbers that represents an elementary natural-number interval. From that we find that in the elementary interval around a natural number ν we easily get, by setting dx = 1, the probability that ν is a prime number. From the last formula one can see that the second part of formula (4.18) is absolutely in agreement with the above theorem of prime numbers. But the benefit of (4.18) is that this formula enables correct calculations in the set N for finding the multitude of twin prime numbers, in contrast to the above logarithmic relation, which is an approximation and must tend to be correct as ν tends to infinity. Using the relationship (4.18) we calculate here the multitude of twins in N, concluding that this multitude tends to infinity. But for the validity of the computation, the distribution of the primes in a random silver interval is examined, proving on the basis of catholic information that the density of primes in the same random silver interval is statistically constant. Below, in the introduction, we define this concept of “catholic information”, stemming from “information theory” [1]; it is defined to use only general forms in the set N, because these represent the set N and not finite parts of it. This concept must be correlated to the Riemann Hypothesis.
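The prime-number-theorem estimate invoked above is easy to test empirically on intervals between consecutive prime squares; the sketch below compares the actual prime count in [p_k², p_{k+1}²) with the sum of 1/ln ν over the interval (a numerical stand-in for ∫ dx/ln x; this checks only the standard estimate, not the paper’s formula (4.18)).

```python
from math import log
from sympy import prime, primerange

for k in [10, 25, 50]:
    a, b = prime(k) ** 2, prime(k + 1) ** 2       # a "silver interval"
    actual = len(list(primerange(a, b)))          # primes actually in [a, b)
    estimate = sum(1.0 / log(nu) for nu in range(a, b))
    print(f"[{a}, {b}): {actual} primes, PNT estimate {estimate:.1f}")
```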
The VRP is classified as an NP-hard problem. Hence it may be difficult for exact optimization methods to solve these problems in acceptable CPU times when the problem involves real-world data sets that are very large. To get solutions in determining routes which are realistic and very close to the actual solution, we use heuristics and metaheuristics of the combinatorial optimization type. A literature review of the VRPTW, the TDVRP, and metaheuristics such as the genetic algorithm was conducted. In this paper, the implementation of the VRPTW and its extension, the time-dependent VRPTW (TDVRPTW), has been carried out using the model as well as metaheuristics such as the genetic algorithm (GA). The algorithms were implemented using Matlab and the HeuristicLab optimization software. A plugin was developed using Visual C# and the .NET Framework 4.5. Results were tested using Solomon’s 56 benchmark instances classified into groups C1, C2, R1, R2, RC1, and RC2, with 100 customer nodes, 25 vehicles, and a capacity of 200 per vehicle. The results were comparable to earlier algorithms, and in some cases the current algorithm yielded better results in terms of total distance travelled and the average number of vehicles used.
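The GA ingredients named above (permutation encoding, crossover, mutation, selection) can be sketched compactly; the version below is a single-vehicle, distance-only skeleton with random coordinates, order crossover, and swap mutation, without the time-window or capacity handling of the actual implementation.

```python
import random

random.seed(0)
CUSTOMERS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
DEPOT = (50.0, 50.0)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def route_length(perm):
    # Depot -> customers in the order given by perm -> depot.
    pts = [DEPOT] + [CUSTOMERS[i] for i in perm] + [DEPOT]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def order_crossover(p1, p2):
    # OX: copy a slice of p1, fill the remaining positions in p2's order.
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(perm, rate=0.1):
    if random.random() < rate:                    # swap two customers
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [random.sample(range(30), 30) for _ in range(60)]
for _ in range(300):
    pop.sort(key=route_length)
    elite = pop[:20]                              # truncation selection
    pop = elite + [mutate(order_crossover(*random.sample(elite, 2)))
                   for _ in range(40)]
print("best route length:", round(route_length(min(pop, key=route_length)), 1))
```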
A meshfree method, namely the element-free Galerkin (EFG) method, is presented in this paper for the solution of the governing equations of 2-D potential problems. The EFG method is a numerical method which uses nodal points to discretize the computational domain, but in which connectivity is absent. The unknowns in the problems are approximated by means of the connectivity-free technique known as moving least squares (MLS) approximation. The effect of an irregular distribution of nodal points on the accuracy of the EFG method is the main focus of this paper, as a complement to previous research, by proposing an irregularity index (II) in order to analyze some 2-D benchmark examples; the results of a sensitivity analysis on the parameters of the method are also presented.
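A minimal 1-D version of the MLS approximation at the heart of EFG, with a linear basis p(x) = [1, x], Gaussian weights, and deliberately irregular nodes (the support radius and node count are illustrative assumptions; this is only the approximation step, not the Galerkin solver):

```python
import numpy as np

rng = np.random.default_rng(1)
nodes = np.sort(rng.uniform(0, 1, 25))          # irregularly distributed nodes
u_nodes = np.sin(2 * np.pi * nodes)             # sampled field values

def mls(x, support=0.15):
    # Weighted moving least squares fit of a linear polynomial around x.
    w = np.exp(-(((x - nodes) / support) ** 2))  # Gaussian weight per node
    P = np.vstack([np.ones_like(nodes), nodes]).T
    A = P.T @ (w[:, None] * P)                   # moment matrix
    b = P.T @ (w * u_nodes)
    a0, a1 = np.linalg.solve(A, b)
    return a0 + a1 * x                           # local fit evaluated at x

xs = np.linspace(0.05, 0.95, 7)
err = np.array([mls(x) for x in xs]) - np.sin(2 * np.pi * xs)
print("max |error| at sample points:", np.abs(err).max())
```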