Question

In: Math


2. Set up both the vector of state probabilities and the matrix of transition probabilities given the following information: Store 1 currently has 40% of the market; store 2 currently has 60% of the market. In each period, store 1 customers have an 80% chance of returning and a 20% chance of switching to store 2. In each period, store 2 customers have a 90% chance of returning and a 10% chance of switching to store 1.

a. Find the percentage of the market for each store after 2 periods.

b. Find the equilibrium conditions of the 2 stores (limiting probabilities). What is the meaning of these probabilities?

Solutions

Expert Solution

(a) The vector of state probabilities is π(0) = (0.40, 0.60), and the matrix of transition probabilities is

P = [ 0.8  0.2 ]
    [ 0.1  0.9 ]

where row 1 gives the probabilities for store 1 customers (80% return, 20% switch to store 2) and row 2 gives the probabilities for store 2 customers (10% switch to store 1, 90% return). The R code is:

# Initial state probabilities: store 1 has 40% of the market, store 2 has 60%
mat1 <- matrix(c(0.4, 0.6), nrow = 1, ncol = 2)

# Transition matrix: row 1 = store 1 customers (80% return, 20% switch),
# row 2 = store 2 customers (10% switch, 90% return)
mat2 <- matrix(c(0.8, 0.2,
                 0.1, 0.9), nrow = 2, ncol = 2, byrow = TRUE)

# Multiply the state vector by the transition matrix for 2 periods
i <- 0
while (i < 2) {
  mat1 <- mat1 %*% mat2
  i <- i + 1
}
mat1

The percentage of the market for store 1 after 2 periods = 36.6%

The percentage of the market for store 2 after 2 periods = 63.4%

(b) At equilibrium the state vector no longer changes from one period to the next, i.e. π P = π with π1 + π2 = 1. The R code below approximates this by multiplying the state vector by the transition matrix until it stops changing:

# Initial state probabilities and transition matrix (same as in part a)
mat1 <- matrix(c(0.4, 0.6), nrow = 1, ncol = 2)
mat2 <- matrix(c(0.8, 0.2,
                 0.1, 0.9), nrow = 2, ncol = 2, byrow = TRUE)

# Keep multiplying by the transition matrix until the state vector
# stops changing (within a small tolerance)
mat3 <- mat1 %*% mat2
while (max(abs(mat1 - mat3)) > 0.0001) {
  mat1 <- mat3            # previous state vector
  mat3 <- mat1 %*% mat2   # next state vector
}
mat3

The limiting probability of the market for store 1 = 1/3 ≈ 33.33%

The limiting probability of the market for store 2 = 2/3 ≈ 66.67%

This means that, in the long run, the market shares of store 1 and store 2 settle at approximately 33.33% and 66.67%, respectively, regardless of the starting shares.
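The same limiting probabilities can also be obtained exactly, without iteration, by solving the equilibrium conditions π P = π together with π1 + π2 = 1. A minimal sketch of this approach (the names P, A, b and pi_star are illustrative, not part of the solution above):

# Equilibrium conditions: pi %*% P = pi and sum(pi) = 1
P <- matrix(c(0.8, 0.2,
              0.1, 0.9), nrow = 2, byrow = TRUE)

# (t(P) - I) %*% pi = 0 has rank 1, so append the normalisation sum(pi) = 1
A <- rbind(t(P) - diag(2), c(1, 1))
b <- c(0, 0, 1)
pi_star <- qr.solve(A, b)   # solves the stacked system by least squares
pi_star                     # approximately (0.3333, 0.6667)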

