Journal Articles
3,380 articles found
F-GEOMETRIC ERGODIC OF CONTINUOUS TIME MARKOV PROCESSES BY COUPLING METHOD
1
Authors: ZHU Zhi-feng, ZHOU Jun-chao — 《数学杂志》 (Journal of Mathematics), 2025, No. 6, pp. 493-501 (9 pages)
In this paper, we study the geometric ergodicity of continuous-time Markov processes in a general state space. For geometrically ergodic continuous-time Markov processes, the condition π(f^p) < ∞, p > 1, is added. Using the coupling method, we obtain the existence of a full absorbing set on which the continuous-time Markov processes are f-geometrically ergodic.
Keywords: Markov process; coupling; f-norm; geometric ergodicity; f-geometric ergodicity
Comments on “The theory of two-parameter Markov processes”
2
《Chinese Science Bulletin》, SCIE, CAS, 1998, No. 10, pp. 876-877 (2 pages)
Keywords: comments; the theory of two-parameter Markov processes
Transformation of state space for two-parameter Markov processes
3
Author: 周健伟 — 《Science China Mathematics》, SCIE, 1996, No. 10, pp. 1058-1064 (7 pages)
Let X = (X_r) be a two-parameter *-Markov process with a transition function (p_1, p_2, p), where X_r takes values in the state space (E_r, ℰ_r), r ∈ T = [0, ∞)^2. For each r ∈ T, let f_r be a measurable transformation of (E_r, ℰ_r) into the state space (E′_r, ℰ′_r). Set Y_r = f_r(X_r), r ∈ T. A sufficient condition is given, in terms of the transition function (p_1, p_2, p) and the f_r, for the process Y = (Y_r) still to be a two-parameter *-Markov process with a transition function. A similar problem is also discussed for *-Markov families of two-parameter processes with a transition function.
Keywords: two-parameter Markov processes; Markov fields; transformation of the state space
A COUNTEREXAMPLE ON TWO-PARAMETER MARKOV PROCESSES
4
Author: 黄长全 — 《Chinese Science Bulletin》, SCIE, EI, CAS, 1989, No. 18, pp. 1503-1506 (4 pages)
I. INTRODUCTION AND DEFINITIONS. In this report, we give a simple counterexample that negates Theorem 1 and Proposition 3(c)(ii) in [1], and we explain the difference between the large-past Markov property and the *-Markov property, thereby clearing up some mistakes.
Keywords: two-parameter stochastic process; large-past Markov property; *-Markov property; large-future Markov property; i-Markov property (i = 1, 2)
Hausdorff Dimension of Range and Graph for General Markov Processes
5
Author: CHEN Zhi-He — 《应用概率统计》 (Chinese Journal of Applied Probability and Statistics), CSCD, PKU Core, 2024, No. 6, pp. 942-956 (15 pages)
We establish the Hausdorff dimension of the graph of general Markov processes on R^d, based on probability estimates of the processes staying in or leaving small balls in small time. In particular, our results indicate that, for symmetric diffusion processes (with α = 2) or symmetric α-stable-like processes (with α ∈ (0, 2)) on R^d, it holds almost surely that dim_H Gr X([0,1]) = 1_{α<1} + (2 − 1/α)·1_{α≥1, d=1} + (d ∧ α)·1_{α≥1, d≥2}. We also systematically prove the corresponding results about the Hausdorff dimension of the range of the processes.
Keywords: Markov process; Hausdorff dimension; range; graph
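The dimension formula quoted in this abstract is a simple piecewise expression in α and d. A minimal Python helper (illustrative only; the function name is ours, not the paper's) evaluates it:

```python
def graph_dimension(alpha, d):
    """Hausdorff dimension of the graph Gr X([0,1]) for a symmetric
    diffusion (alpha = 2) or alpha-stable-like process (alpha in (0,2))
    on R^d, per the formula quoted in the abstract."""
    if alpha < 1:
        return 1.0                    # term 1_{alpha < 1}
    if d == 1:
        return 2.0 - 1.0 / alpha      # term (2 - 1/alpha) 1_{alpha >= 1, d = 1}
    return float(min(d, alpha))       # term (d ^ alpha) 1_{alpha >= 1, d >= 2}

# Brownian motion on R (alpha = 2, d = 1): graph dimension 3/2.
print(graph_dimension(2, 1))   # 1.5
```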
On approximating multifractal traffic burstiness with Markov modulated Poisson processes (Cited: 1)
6
Author: 纪其进 — 《Journal of Southeast University (English Edition)》, EI, CAS, 2004, No. 4, pp. 436-441 (6 pages)
We investigate the approximating capability of Markov modulated Poisson processes (MMPP) for modeling multifractal Internet traffic. The choice of MMPP is motivated by its ability to capture the variability and correlation in moderate time scales while remaining analytically tractable. Important statistics of traffic burstiness are described, and a customized moment-based procedure for fitting an MMPP to traffic traces is presented. In addition to a goodness-of-fit test of the MMPP, our methodology is to examine whether the MMPP can predict the performance of a queue, to which MMPP sample paths and measured traffic traces are fed for comparison. Numerical results and simulations show that the fitted MMPP can approximate multifractal traffic quite well, i.e., accurately predict the queueing performance.
Keywords: multifractal traffic; Markov modulated Poisson processes; queueing delay; packet loss rate
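To make the MMPP idea in this abstract concrete, here is a toy discrete-time sketch: in each slot, the arrival count is Poisson with a rate chosen by a hidden two-state Markov chain. All parameter values and names are illustrative assumptions, not the paper's fitted model:

```python
import math
import random

def simulate_mmpp2(rates, switch, steps, seed=0):
    """Slot-by-slot sketch of a 2-state Markov modulated Poisson process:
    in state i, the slot's arrival count is Poisson(rates[i]); afterwards
    the modulating chain flips state with probability switch[i]."""
    rng = random.Random(seed)
    state, counts = 0, []
    for _ in range(steps):
        # Draw Poisson(rates[state]) by CDF inversion (fine for small rates).
        lam, u = rates[state], rng.random()
        k, p = 0, math.exp(-lam)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k
            cdf += p
        counts.append(k)
        if rng.random() < switch[state]:
            state = 1 - state
    return counts

# Bursty traffic: a quiet state (rate 0.5) and a busy state (rate 5.0).
counts = simulate_mmpp2(rates=[0.5, 5.0], switch=[0.1, 0.3], steps=1000)
print(sum(counts) / len(counts))   # slot mean, between the two rates
```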
A Markov Switching Epidemic Model Incorporating Innate Immunity
7
Author: 陈丽君 — 《四川大学学报(自然科学版)》 (Journal of Sichuan University, Natural Science Edition), PKU Core, 2025, No. 4, pp. 812-822 (11 pages)
Some infectious diseases are characterized by rapid mutation, high severity, fast transmission, and insidious onset. To better prevent and control the spread of such diseases, studying their transmission dynamics with mathematical models is a basic approach. This paper establishes a Markov switching epidemic model that simultaneously incorporates vaccination effects, a Beddington-DeAngelis incidence rate, and saturated innate immunity. Combining stopping-time theory with a constructed Lyapunov function, we prove that the solution of the model is globally positive. Under suitable conditions, using the general Itô formula, we show that when the basic reproduction number is greater than 1, the solution of the model is a stationary, ergodic Markov process; when the basic reproduction number is less than 1, the numbers of exposed and infected individuals tend to extinction. Numerical simulations verify the theoretical results.
Keywords: stochastic epidemic model; innate immunity; vaccination; stationary Markov process; extinction
Modeling and Design of Real-Time Pricing Systems Based on Markov Decision Processes (Cited: 4)
8
Authors: Koichi Kobayashi, Ichiro Maruta, Kazunori Sakurama, Shun-ichi Azuma — 《Applied Mathematics》, 2014, No. 10, pp. 1485-1495 (11 pages)
A real-time pricing system for electricity charges different prices for different hours of the day and for different days, and is effective for reducing the peak and flattening the load curve. In this paper, using a Markov decision process (MDP), we propose a modeling method and an optimal control method for real-time pricing systems. First, the outline of real-time pricing systems is explained. Next, a model of a set of customers is derived as a multi-agent MDP. Furthermore, the optimal control problem is formulated and reduced to a quadratic programming problem. Finally, a numerical simulation is presented.
Keywords: Markov decision process; optimal control; real-time pricing system
SMALL PERTURBATION CRAMER METHODS AND MODERATE DEVIATIONS FOR MARKOV PROCESSES (Cited: 2)
9
Author: 高付清 — 《Acta Mathematica Scientia》, SCIE, CSCD, 1995, No. 4, pp. 394-405 (12 pages)
This paper presents a small perturbation Cramer method for obtaining the large deviation principle of a family of measures (β, ε > 0) on a topological vector space. As an application, we obtain moderate deviation estimates for uniformly ergodic Markov processes.
Keywords: large deviations; Cramer methods; Markov processes; moderate deviations
Convergence of Invariant Measures of Truncation Approximations to Markov Processes (Cited: 2)
10
Authors: Andrew G. Hart, Richard L. Tweedie — 《Applied Mathematics》, 2012, No. 12, pp. 2205-2215 (11 pages)
Let Q be the Q-matrix of an irreducible, positive recurrent Markov process on a countable state space. We show that, under a number of conditions, the stationary distributions of the n × n north-west corner augmentations of Q converge in total variation to the stationary distribution of the process. Two conditions guaranteeing such convergence are exponential ergodicity and stochastic monotonicity of the process. The same also holds for processes dominated by a stochastically monotone Markov process. In addition, we show that finite perturbations of stochastically monotone processes may be viewed as being dominated by a stochastically monotone process, thus extending the scope of these results to a larger class of processes. Consequently, the augmentation method provides an attractive, intuitive method for approximating the stationary distributions of a large class of Markov processes on countably infinite state spaces from a finite amount of known information.
Keywords: invariant measure; truncation; approximation; augmentation; exponential ergodicity; stochastic monotonicity; Markov process
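To make the augmentation idea concrete, here is a small pure-Python sketch for the M/M/1 generator (birth rate λ, death rate μ): take the n × n north-west corner, fold the lost outflow back into the diagonal so rows sum to zero, and power-iterate the uniformized chain. The example instance and function name are our own, not from the paper:

```python
def truncated_stationary(lam, mu, n, iters=5000):
    """Stationary law of the n x n north-west corner augmentation of the
    M/M/1 generator Q (q(i, i+1) = lam, q(i, i-1) = mu)."""
    # Truncated generator with diagonal augmentation (row sums zero).
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i + 1 < n:
            Q[i][i + 1] = lam
        if i > 0:
            Q[i][i - 1] = mu
        Q[i][i] = -sum(Q[i])
    # Uniformize: P = I + Q / c is a stochastic matrix, then iterate pi P.
    c = max(-Q[i][i] for i in range(n)) + 1.0
    P = [[(i == j) + Q[i][j] / c for j in range(n)] for i in range(n)]
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# With lam/mu = 0.5 the exact M/M/1 law is geometric(1/2); already at n = 10
# the augmented truncation puts mass ~0.5005 on state 0, close to the exact 0.5.
pi = truncated_stationary(lam=1.0, mu=2.0, n=10)
```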
A GENERAL FORM OF THE INCREMENTS OF A TWO-PARAMETER WIENER PROCESS (Cited: 2)
11
Authors: 林正炎, 陆传荣 — 《Applied Mathematics (A Journal of Chinese Universities)》, SCIE, CSCD, 1993, No. 1, pp. 54-63 (10 pages)
In this paper, we consider a general form of the increments of a two-parameter Wiener process. Both the Csörgő-Révész increments and a class of lag increments are special cases of this general form. Our results imply the theorem given by Csörgő and Révész (1978), and some of their conditions are removed.
Keywords: two-parameter Wiener process; increments
Solving Markov Decision Processes with Downside Risk Adjustment (Cited: 1)
12
Authors: Abhijit Gosavi, Anish Parulekar — 《International Journal of Automation and Computing》, EI, CSCD, 2016, No. 3, pp. 235-245 (11 pages)
Markov decision processes (MDPs) and their variants are widely studied in the theory of controls for stochastic discrete-event systems driven by Markov chains. Much of the literature focuses on the risk-neutral criterion, in which the expected rewards, either average or discounted, are maximized. Some literature on MDPs takes risks into account, much of it addressing the exponential utility (EU) function and mechanisms to penalize different forms of variance of the rewards. EU functions have some numerical deficiencies, while variance measures variability both above and below the mean reward; the variability above the mean is usually beneficial and should not be penalized or avoided. As such, risk metrics that account for pre-specified targets (thresholds) for rewards have been considered in the literature, where the goal is to penalize the risk of revenues falling below those targets. Existing work on MDPs that takes targets into account seeks to minimize risks of this nature. Minimizing risks can lead to poor solutions where the risk is zero or near zero but the average rewards are also rather low. In this paper, therefore, we study a risk-averse criterion, in particular the so-called downside risk, which equals the probability of the revenues falling below a given target; in contrast to minimizing such risks, we only reduce this risk at the cost of slightly lowered average rewards. A solution where the risk is low and the average reward is quite high, although not at its maximum attainable value, is very attractive in practice. To be more specific, in our formulation, the objective function is the expected value of the rewards minus a scalar times the downside risk. In this setting, we analyze the infinite-horizon MDP, the finite-horizon MDP, and the infinite-horizon semi-MDP (SMDP). We develop dynamic programming and reinforcement learning algorithms for the finite and infinite horizons. The algorithms are tested in numerical studies and show encouraging performance.
Keywords: downside risk; Markov decision processes; reinforcement learning; dynamic programming; targets; thresholds
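The objective described in this abstract — expected reward minus a scalar times the probability of falling below a target — is easy to state as a scoring rule over sampled returns. A minimal sketch with made-up numbers (function name, policies, and parameter values are ours, not the paper's):

```python
def downside_adjusted_score(rewards, target, theta):
    """Expected reward minus theta times the downside risk, where the
    downside risk is the empirical probability of falling below target."""
    mean = sum(rewards) / len(rewards)
    risk = sum(r < target for r in rewards) / len(rewards)
    return mean - theta * risk

# Policy A earns more on average but sometimes dips below the target;
# policy B earns slightly less but never does.
policy_a = [10, 12, 11, 2, 13]
policy_b = [9, 9, 10, 9, 10]
score_a = downside_adjusted_score(policy_a, target=5, theta=20)
score_b = downside_adjusted_score(policy_b, target=5, theta=20)
print(score_a < score_b)   # True: the risk-averse criterion prefers B
```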
ON THE HITTING PROBABILITY AND POLARITY FOR A CLASS OF SELF-SIMILAR MARKOV PROCESSES (Cited: 1)
13
Authors: 熊双平, 刘禄勤 — 《Acta Mathematica Scientia》, SCIE, CSCD, 1999, No. 2, pp. 226-233 (8 pages)
The authors investigate the hitting probability, polarity, and the relationship between polarity and Hausdorff dimension for self-similar Markov processes with state space (0, ∞) and increasing paths.
Keywords: self-similar Markov process; hitting probability; polar set; essential polar set; Hausdorff dimension
Robust analysis of discounted Markov decision processes with uncertain transition probabilities (Cited: 3)
14
Authors: LOU Zhen-kai, HOU Fu-jun, LOU Xu-ming — 《Applied Mathematics (A Journal of Chinese Universities)》, SCIE, CSCD, 2020, No. 4, pp. 417-436 (20 pages)
Optimal policies in Markov decision problems may be quite sensitive to the transition probabilities, and in practice some transition probabilities may be uncertain. The goals of the present study are to find the robust range for a certain optimal policy and to obtain value intervals of the exact transition probabilities. Our research yields contributions for Markov decision processes (MDPs) with uncertain transition probabilities. We first propose a method for estimating unknown transition probabilities based on maximum likelihood. Since the estimation may be far from accurate, and the highest expected total reward of the MDP may be sensitive to these transition probabilities, we analyze the robustness of an optimal policy and propose an approach for robust analysis. After defining a robust optimal policy, with uncertain transition probabilities represented as sets of numbers, we formulate a model to obtain the optimal policy. Finally, we define the value intervals of the exact transition probabilities and construct models to determine the lower and upper bounds. Numerical examples are given to show the practicability of our methods.
Keywords: Markov decision processes; uncertain transition probabilities; robustness and sensitivity; robust optimal policy; value interval
Variance minimization for continuous-time Markov decision processes: two approaches (Cited: 1)
15
Author: ZHU Quan-xin — 《Applied Mathematics (A Journal of Chinese Universities)》, SCIE, CSCD, 2010, No. 4, pp. 400-410 (11 pages)
This paper studies the limit average variance criterion for continuous-time Markov decision processes in Polish spaces. Based on two approaches, this paper proves not only the existence of solutions to the variance minimization optimality equation and the existence of a variance minimal policy that is canonical, but also the existence of solutions to the two variance minimization optimality inequalities and the existence of a variance minimal policy that may not be canonical. An example is given to illustrate all of our conditions.
Keywords: continuous-time Markov decision process; Polish space; variance minimization; optimality equation; optimality inequality
Wind power time series simulation model based on typical daily output processes and Markov algorithm (Cited: 3)
16
Authors: Zhihui Cong, Yuecong Yu, Linyan Li, Jie Yan — 《Global Energy Interconnection》, EI, CAS, CSCD, 2022, No. 1, pp. 44-54 (11 pages)
The simulation of wind power time series is a key process in renewable power allocation planning, operation mode calculation, and safety assessment. Traditional single-point modeling methods generate wind power discretely at each moment; however, they ignore daily output characteristics and cannot balance modeling accuracy and efficiency. To resolve this problem, a wind power time series simulation model based on typical daily output processes and a Markov algorithm is proposed. First, a classification method for typical daily output processes based on time series similarity and a modified K-means clustering algorithm is presented. Second, taking the typical daily output processes as state variables, a wind power time series simulation model based on the Markov algorithm is constructed. Finally, a case study based on measured data from a wind farm in China is analyzed, and the proposed model is compared with traditional methods to verify its effectiveness and applicability. The comparison results indicate that the statistical characteristics, probability distributions, and autocorrelation characteristics of the wind power time series generated by the proposed model are better than those of the traditional methods; moreover, modeling efficiency improves considerably.
Keywords: wind power; time series; typical daily output processes; Markov algorithm; modified K-means clustering algorithm
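The second stage of this model — treating typical-daily-output classes as Markov states — can be sketched in a few lines: estimate the transition matrix from the observed sequence of day-type labels by counting, then simulate. The labels, names, and values below are illustrative assumptions; the clustering stage that would produce the labels is taken as already done:

```python
import random

def fit_and_simulate(day_types, n_states, horizon, seed=0):
    """Estimate a day-type transition matrix by transition counting
    (with tiny smoothing so no row is all zero), then simulate a
    day-type path of the given length."""
    counts = [[1e-9] * n_states for _ in range(n_states)]
    for a, b in zip(day_types, day_types[1:]):
        counts[a][b] += 1.0
    P = [[c / sum(row) for c in row] for row in counts]
    rng = random.Random(seed)
    state, path = day_types[-1], []
    for _ in range(horizon):
        state = rng.choices(range(n_states), weights=P[state])[0]
        path.append(state)
    return P, path

# A hypothetical run of day-type labels from the clustering stage.
observed = [0, 0, 1, 2, 1, 0, 0, 1, 1, 2, 0, 0, 1, 2, 2, 0]
P, path = fit_and_simulate(observed, n_states=3, horizon=30)
```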
THE EQUILIBRIUM PROBLEM AND CAPACITY FOR JUMP MARKOV PROCESSES (Cited: 1)
17
Author: Liu Luqin — 《Acta Mathematica Scientia》, SCIE, CSCD, 1995, No. 1, pp. 15-30 (16 pages)
Let X = (Ω, ℱ, ℱ_t, X_t, θ_t, P^x) be a jump Markov process with q-pair q(x)-q(x, A). In this paper, the equilibrium principle is established, and equilibrium functions, energy, capacity, and related problems are investigated in terms of the q-pair q(x)-q(x, A).
Keywords: Markov process; jump process; equilibrium principle; energy; capacity; equilibrium function
Optimal Policies for Quantum Markov Decision Processes (Cited: 2)
18
Authors: Ming-Sheng Ying, Yuan Feng, Sheng-Gang Ying — 《International Journal of Automation and Computing》, EI, CSCD, 2021, No. 3, pp. 410-421 (12 pages)
The Markov decision process (MDP) offers a general framework for modelling sequential decision making where outcomes are random; in particular, it serves as a mathematical framework for reinforcement learning. This paper introduces an extension of MDPs, namely the quantum MDP (qMDP), that can serve as a mathematical model of decision making about quantum systems. We develop dynamic programming algorithms for policy evaluation and for finding optimal policies for qMDPs in the finite-horizon case. The results obtained in this paper provide useful mathematical tools for reinforcement learning techniques applied to the quantum world.
Keywords: quantum Markov decision processes; quantum machine learning; reinforcement learning; dynamic programming; decision making
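For the classical-MDP backbone this paper builds on, finite-horizon policy evaluation is a short backward recursion. A toy sketch of the classical case only (the transition and reward values are hypothetical, and this does not touch the quantum extension):

```python
def evaluate_policy(P, R, policy, horizon):
    """Finite-horizon value of a fixed policy by backward dynamic
    programming: V_{k+1}[s] = R[s][a] + sum_t P[a][s][t] * V_k[t],
    with a = policy[s] and V_0 = 0."""
    n = len(R)
    V = [0.0] * n
    for _ in range(horizon):
        V = [R[s][policy[s]]
             + sum(P[policy[s]][s][t] * V[t] for t in range(n))
             for s in range(n)]
    return V

# Two states, two actions: action 0 stays put, action 1 flips the state.
P = [[[1.0, 0.0], [0.0, 1.0]],
     [[0.0, 1.0], [1.0, 0.0]]]
R = [[1.0, 0.0], [0.0, 2.0]]          # R[s][a]
print(evaluate_policy(P, R, policy=[0, 1], horizon=3))   # [3.0, 4.0]
```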
Stability Estimation for Markov Control Processes with Discounted Cost (Cited: 1)
19
Author: Jaime Eduardo Martínez-Sánchez — 《Applied Mathematics》, 2020, No. 6, pp. 491-509 (19 pages)
This article explores Markov control processes on Borel spaces: stationary, homogeneous, discrete-time processes with infinite horizon, bounded cost functions, and the expected total discounted cost criterion. The problem of estimating the stability of this type of process is posed. The central objective is to obtain a bounded stability index expressed in terms of the Lévy-Prokhorov metric; likewise, sufficient conditions are provided for the existence of such inequalities.
Keywords: discrete-time Markov control process; expected total discounted cost; stability index; probabilistic metric; Lévy-Prokhorov metric
Non-Recursive Base Conversion Using a Deterministic Markov Process
20
Author: Louis M. Houston — 《Journal of Applied Mathematics and Physics》, 2024, No. 6, pp. 2112-2118 (7 pages)
We prove that non-recursive base conversion can always be implemented using a deterministic Markov process. Our paper discusses the pros and cons of recursive and non-recursive methods in general, and we include a comparison between non-recursion and a deterministic Markov process, proving that the Markov process is twice as efficient.
Keywords: base conversion; recursion; Euclidean division; geometric series; Markov process
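The non-recursive method at issue here is repeated Euclidean division, where each step depends only on the current quotient — the "state" of a deterministic Markov process. A short sketch (our own illustration, not the paper's code):

```python
def to_base(n, b):
    """Convert a non-negative integer n to base b iteratively: each step
    applies divmod to the current quotient, so the next state depends
    only on the present one (a deterministic Markov step)."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, b)
        digits.append(r)
    return digits[::-1]        # most significant digit first

print(to_base(2024, 16))   # [7, 14, 8], i.e. 0x7E8
```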