An analysis of the results of a football team reveals that whether it wins its next game depends on the results of its previous two games. If it won both the last game and the last-but-one game, it will win the next game with probability 0.6; if it won the last-but-one game but not the last, it will win with probability 0.8; if it did not win the last-but-one game but won the last, it will win with probability 0.4; and if it won neither, it will win with probability 0.2. The dynamics of consecutive pairs of results for the team follow a discrete-time Markov chain with state space S = {(W, W), (L, W), (W, L), (L, L)}, where W and L mean the team won and lost, respectively. To simplify the notation, put 1 ≡ (W, W), 2 ≡ (L, W), 3 ≡ (W, L) and 4 ≡ (L, L), so that the state space becomes S = {1, 2, 3, 4}.
i. Write down the transition probability matrix for the chain.
ii. Find the mean number of consecutive games the team won.
We are given the probability that the team wins its next game as a function of the results of its previous two games, written as pairs (last, last-but-one):
(W, W): 0.6
(L, W): 0.8 (won the last-but-one game, lost the last)
(W, L): 0.4 (lost the last-but-one game, won the last)
(L, L): 0.2
where W = win and L = loss.
The dynamics of consecutive pairs of results follow a discrete-time Markov chain with state space S = {(W, W), (L, W), (W, L), (L, L)}. To simplify the notation, put 1 ≡ (W, W), 2 ≡ (L, W), 3 ≡ (W, L) and 4 ≡ (L, L), so that the state space becomes S = {1, 2, 3, 4}.
We now calculate the one-step transition probability matrix (TPM) for the four states. The key observation is that one transition of the chain corresponds to playing exactly one new game: if the current pair is (last, last-but-one) and the next game has result r, the new pair is (r, last). Each state can therefore move to only two states, and the transition probability is simply the win (or loss) probability for the current state:
1 = (W, W), win probability 0.6: win (0.6) -> (W, W) = 1; lose (0.4) -> (L, W) = 2
2 = (L, W), win probability 0.8: win (0.8) -> (W, L) = 3; lose (0.2) -> (L, L) = 4
3 = (W, L), win probability 0.4: win (0.4) -> (W, W) = 1; lose (0.6) -> (L, W) = 2
4 = (L, L), win probability 0.2: win (0.2) -> (W, L) = 3; lose (0.8) -> (L, L) = 4
All other one-step transitions are impossible, because the new pair always keeps the old "last" result as its second component; those entries are 0.
Therefore, we have our TPM as:
(i)
| 1 | 2 | 3 | 4 | Row Sum |
1 | 0.6 | 0.4 | 0 | 0 | 1 |
2 | 0 | 0 | 0.8 | 0.2 | 1 |
3 | 0.4 | 0.6 | 0 | 0 | 1 |
4 | 0 | 0 | 0.2 | 0.8 | 1 |
Every row sums to 1, which is a required property of a TPM, so the matrix is consistent.
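As a quick numerical cross-check, the matrix can be rebuilt programmatically from the stated win probabilities and the rule "new pair = (result, last)" rather than typed in by hand. This is a small sketch assuming NumPy is available; the names `states`, `win_prob` and `P` are ours, not from the problem:

```python
# Rebuild the one-step TPM from the win probabilities.
import numpy as np

# States as pairs (last, last-but-one), in the order 1, 2, 3, 4.
states = [('W', 'W'), ('L', 'W'), ('W', 'L'), ('L', 'L')]
# P(win next game | current pair), as given in the problem.
win_prob = {('W', 'W'): 0.6, ('L', 'W'): 0.8,
            ('W', 'L'): 0.4, ('L', 'L'): 0.2}

P = np.zeros((4, 4))
for i, (last, _) in enumerate(states):
    p = win_prob[states[i]]
    # After the next game, the new pair is (result, last).
    P[i, states.index(('W', last))] = p        # team wins the next game
    P[i, states.index(('L', last))] = 1 - p    # team loses the next game

print(P)
assert np.allclose(P.sum(axis=1), 1)   # every row sums to 1
```

Printing `P` reproduces the table above, and the final assertion verifies the row-sum property of a TPM.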
(ii) Now to find the mean number of consecutive wins, note that a winning run starts when the team wins a game immediately after a loss, which puts the chain in state 3 = (W, L). From there:
a second win occurs with probability 0.4, moving the chain to state 1 = (W, W);
once in state 1, each further win occurs with probability 0.6, and the chain remains in state 1 for as long as the run lasts.
Let N be the length of a winning run. Then P(N ≥ 1) = 1 and P(N ≥ k) = 0.4 × 0.6^(k−2) for k ≥ 2. Using the tail-sum formula for the mean,
Mean = E[N] = Σ_{k≥1} P(N ≥ k) = 1 + 0.4 × (1 + 0.6 + 0.6² + ...) = 1 + 0.4 / (1 − 0.6) = 1 + 1
= 2
So the mean number of consecutive games the team won is 2.
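The value 2 can be double-checked by simulating the chain. Below is a minimal sketch using only Python's standard `random` module (all variable names are ours); it computes the mean both from the geometric series above and from the winning runs observed in a long simulated sequence of games:

```python
# Mean length of a winning run, computed two ways.
import random

# P(win next game | current pair), pairs written as (last, last-but-one).
win_prob = {('W', 'W'): 0.6, ('L', 'W'): 0.8,
            ('W', 'L'): 0.4, ('L', 'L'): 0.2}

# Exact: E[N] = sum over k >= 1 of P(N >= k) = 1 + 0.4 * (1 + 0.6 + 0.6^2 + ...)
exact_mean = 1 + 0.4 / (1 - 0.6)   # = 2 by the geometric series

# Monte Carlo: play many games and average the lengths of winning runs.
random.seed(0)
state = ('L', 'L')
run_lengths, current_run = [], 0
for _ in range(200_000):
    won = random.random() < win_prob[state]
    if won:
        current_run += 1
    elif current_run > 0:
        run_lengths.append(current_run)   # a winning run just ended
        current_run = 0
    state = ('W' if won else 'L', state[0])   # new pair = (result, last)

sim_mean = sum(run_lengths) / len(run_lengths)
print(exact_mean, round(sim_mean, 3))   # both close to 2
```

With 200,000 simulated games the Monte Carlo estimate agrees with the exact value to two decimal places, which is a reassuring consistency check on both the TPM and the run-length argument.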