In: Electrical Engineering
Describe the steady-state solution of Markov chains and the state probabilities:
(Typed answer only, no handwritten answers.)
Let us consider the following:
Let the sequence X0, X1, X2, X3, ..., Xt, ... form a Markov chain.
Let s1, s2, ..., sk be the k states associated with this Markov chain.
Also let the state transition matrix be T.
Here we consider the case where the transition probabilities are independent of the time t; that is, X is known to have stationary (time-homogeneous) transition probabilities.
The ith column of matrix T gives us the probabilities on all the branches converging on the ith state si.
The ith row of matrix T gives us the probabilities on all the branches emerging from the ith state si.
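As a minimal numerical sketch of this row/column convention, assuming a hypothetical 2-state transition matrix (not given in the question):

```python
import numpy as np

# Hypothetical 2-state transition matrix for illustration.
# Row i holds the probabilities of branches leaving state s_(i+1),
# so each row sums to 1; column j holds the probabilities of
# branches entering state s_(j+1).
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])

print(T.sum(axis=1))  # each row sums to 1
print(T[:, 0])        # probabilities on branches converging on s1
```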
Also let vt be the probability distribution of Xt: vt = (vt[1], vt[2], ..., vt[k]), where vt[i] = P(Xt = si).
From the law of total probability, vt+1[j] = Σi vt[i] T[i][j], summing over all states si from which the chain can move to sj.
Thus we can write the following equation in matrix form: vt+1 = vt T.
Suppose X0 has some distribution v0; then v1 = v0 T, v2 = v0 T^2, v3 = v0 T^3, ..., vn = v0 T^n, ...
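The iteration vn = v0 T^n can be carried out by repeated vector–matrix multiplication. A minimal sketch, again assuming a hypothetical 2-state transition matrix and an initial distribution concentrated on s1:

```python
import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1)
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])

v = np.array([1.0, 0.0])  # initial distribution v0: start in state s1
for _ in range(100):      # compute v_n = v0 T^n step by step
    v = v @ T

print(v)  # after many steps, v approaches the steady-state distribution
```

For this particular T the limit works out to (5/6, 1/6), which the loop reaches to machine precision well before 100 steps.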
The steady state distribution v* of the Markov chain will satisfy:
v* = v* T, together with the normalization Σi v*[i] = 1.
Steady state theorem:
If the Markov chain is irreducible and aperiodic, then irrespective of the initial distribution v0, there exists a unique steady state distribution v* such that vt approaches v* as t → ∞.
Solving the above linear equations gives us the steady state probabilities for the random variable X.
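Solving these linear equations directly can be sketched as follows, assuming a hypothetical 3-state transition matrix: since v*(T − I) = 0 gives only k − 1 independent equations, we replace one of them with the normalization constraint Σi v*[i] = 1 and solve the resulting k × k system.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1)
T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
k = T.shape[0]

# v* T = v*  <=>  (T^T - I) v*^T = 0.  The rows of (T^T - I) are
# linearly dependent, so replace the last equation with the
# sum-to-one constraint and solve the square system.
A = T.T - np.eye(k)
A[-1, :] = 1.0          # normalization row: v*[1] + ... + v*[k] = 1
b = np.zeros(k)
b[-1] = 1.0
v_star = np.linalg.solve(A, b)

print(v_star)  # steady-state probabilities; entries sum to 1
```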
Some points to ponder:
The steady state distribution will be an eigenvector of the transpose of the transition matrix with eigenvalue 1, normalized so that its entries sum to 1.
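This eigenvector view can be sketched numerically as well, assuming the same hypothetical 2-state matrix as an example: take the eigenvector of T^T whose eigenvalue is (numerically) 1 and normalize it to a probability vector.

```python
import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1)
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])

eigvals, eigvecs = np.linalg.eig(T.T)
# pick the eigenvector whose eigenvalue is closest to 1
idx = np.argmin(np.abs(eigvals - 1.0))
v_star = np.real(eigvecs[:, idx])
v_star = v_star / v_star.sum()   # normalize so entries sum to 1

print(v_star)  # approximately [0.8333, 0.1667], i.e. (5/6, 1/6)
```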
Now, given that the process has converged to steady state, one can say that the probability of X being in some state si after n steps from now does not depend on n.