Lecture10-handout

wi4525TU
Monte Carlo Simulation and Stochastic Processes

L.E. Meester

26 November 2024, Lecture 10


Looking back and ahead

8 (Further) Conditioning techniques
9 Two models and some stochastic processes we find in there
10 Describing/modelling evolution in time; the Markov property
11 Discrete time Markov chains, long-term behaviour
12 Continuous time Markov chains
13 Brownian motion and stochastic differential equations
14 Processes and properties. Repair shop finale.


Today’s program

Theme:
Describing/modelling the evolution in time of the repair shop.
(Keep in mind the questions we have about the lost production.)

1 A series of modelling attempts (details on the “blackboard”)
2 Markov chains in discrete time: definition
    Definition
    Transition probabilities


Model description

The repair shop model


m identical machines;
the time between start-up of a machine and the next instant
it requires maintenance, the uptime, is modelled by a
nonnegative r.v. U, assumed to follow an Exp(λ) distribution;
one repair crew that works FCFS (first come, first served);
repair times: modelled by a nonnegative random variable R,
distribution not further specified;
all uptimes are iid;
all repair times are iid, and independent of the uptimes.
Description complete once we specify λ and the distribution of R.

(Unit of time: day.)
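
A minimal sampling sketch in Python, to make this concrete. The rate λ = 0.5 per day and the exponential choice (mean 2 days) for R are illustrative assumptions only; the handout deliberately leaves λ and the distribution of R open.

import numpy as np

# Illustrative parameters only: the model leaves lambda and R unspecified.
rng = np.random.default_rng(seed=1)
lam = 0.5                                     # failure rate per machine, per day
uptimes = rng.exponential(1.0 / lam, size=5)  # U ~ Exp(lam): mean 1/lam days
repair_times = rng.exponential(2.0, size=5)   # one possible choice for R
print(uptimes)
print(repair_times)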



States and state space(s)

X_i(t): state of machine i at time t.

Machine states:
u: up
d: down, delayed waiting to be repaired
r : in repair

System state: X(t) = (X_1(t), . . . , X_m(t)).

System process: {X(t), t ≥ 0}.

So, the state space is

S = {u, d, r}^m = {all m-vectors of symbols from {u, d, r}}.

Exercise. For m = 10 there are 3^10 = 59049 states in S.
Some of them are not visited. Which? How many?
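
A brute-force sketch for the exercise, in Python. The reachability criterion encoded below is an assumption read off from the model: with a single FCFS repair crew, at most one machine can be in repair, and a machine can only be waiting (d) while another one is in repair (r).

from itertools import product

m = 10
states = list(product("udr", repeat=m))   # |S| = 3**m = 59049 for m = 10

def reachable(state):
    # at most one machine in 'r'; 'd' requires some machine in 'r'
    return state.count("r") <= 1 and ("d" not in state or "r" in state)

print(len(states), sum(reachable(s) for s in states))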

Model building attempts:

Purely describe, for:

1 m = 1?
2 m > 1, considering machine number 1 only?
3 m > 1, the whole system?

More ambitious approach, addressing the question:

Is knowledge of X(t) = x, plus
the kind of information we “put” in the state x, sufficient
to know how the system will further evolve after t?
N.B. Not predict the future, but describe it probabilistically.

4 m = 1 (will skip, because too modest).
5 m > 1, consider machine number 1 only.
  Cases: X_1(t) = u, X_1(t) = r, X_1(t) = d.
6 m > 1, whole system: undoable ...

Stuck? We look for a new approach:

the machines are identical;
can we use this to simplify the model?

We model a different quantity:

N(t): the number of machines in state u at time t.

The state space must be redefined: S = {0, 1, . . . , m}.

Question. A considerable reduction in complexity, it seems.
What price do we pay, if any?

7 m > 1, whole system: still undoable ...
8 m > 1: looking at special (event) times.
9 m > 1: description based on a discrete time Markov chain
  (see the sketch below).
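
The blackboard details are not reproduced in the handout, so the following Python sketch of attempt 9 rests on assumptions: we observe N just after each repair completion, and the parameter values (m = 10, λ = 0.5) and the exponential repair time are placeholders only.

import numpy as np

rng = np.random.default_rng(seed=1)
m, lam = 10, 0.5            # illustrative: 10 machines, failure rate per day

def failures_during(u, repair_time, lam, rng):
    """Count how many of the u running machines fail during one repair."""
    t, down = 0.0, 0
    while u > 0:
        t += rng.exponential(1.0 / (u * lam))   # next failure among u machines
        if t > repair_time:
            break
        u -= 1
        down += 1
    return down

def step(n_up, rng):
    """One transition of the embedded chain, seen just after a repair completes.
    If all m machines are up, the crew idles until the first failure; otherwise
    the next repair (FCFS) starts immediately."""
    u = n_up - 1 if n_up == m else n_up          # machines up while repair runs
    R = rng.exponential(2.0)                     # assumed repair time, mean 2 days
    return u - failures_during(u, R, lam, rng) + 1   # repaired machine restarts

path = [m]
for _ in range(20):
    path.append(step(path[-1], rng))
print(path)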


Discrete time Markov chain (DTMC): definition

X = {X_n : n = 0, 1, 2, . . .}:
stochastic process with finite or countable state space S.
Default assumption: states labelled so that S = {0, 1, 2, . . .}.
Markov chain
If, for all states i_0, . . . , i_{n−1}, all i, j, and n ≥ 0,

P(X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, . . . , X_0 = i_0) = P(X_{n+1} = j | X_n = i),   (1)

then the process X is called a (discrete time) Markov chain.

Relation (1) is called the Markov property; intuitively, it says:

Given the present (X_n = i), the future (X_{n+1}) is independent
of the past (X_{n−1} = i_{n−1}, . . . , X_0 = i_0).

This statement is still true if X_{n+1} is replaced by X_{n+1}, . . . , X_{n+m}.

Transition probabilities

Default assumption we make:

P(X_{n+1} = j | X_n = i) does not depend on n.

The chain is then called time-homogeneous, and we may define:

Transition probabilities
For a time-homogeneous Markov chain, p_{ij} = P(X_{n+1} = j | X_n = i)
is called the transition probability from state i to state j.
The transition matrix is defined by P = (p_{ij})_{i,j∈S}.

A discrete time Markov chain is completely specified by:

the state space S,
the initial state X_0,
the transition probability matrix P.
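
To make this specification concrete, a minimal simulation sketch in Python; the 3-state matrix below is an arbitrary illustration, not a model from the lecture.

import numpy as np

P = np.array([[0.5, 0.5, 0.0],    # rows must sum to 1
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

def simulate(P, x0, n_steps, rng):
    """Draw a path X_0, ..., X_n: each step samples X_{n+1} from row P[X_n]."""
    path = [x0]
    for _ in range(n_steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

rng = np.random.default_rng(seed=1)
print(simulate(P, x0=0, n_steps=20, rng=rng))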


Wrap-up

Today’s lecture is almost completely self-contained. Next week we
will start from the Markov chain definition.
