In a small town there are two places to eat: 1) a Chinese restaurant and 2) a pizza place. Everyone in town eats dinner at one of these two places or eats dinner at home.
Assume that 20% of those who eat at the Chinese restaurant go to the pizza place the next time and 40% eat at home. Of those who eat at the pizza place, 50% go to the Chinese restaurant and 30% eat at home the next time. Of those who eat at home, 20% go to the Chinese restaurant and 40% to the pizza place next time. We call this situation a system. This system can be modeled as a discrete-time Markov chain with three states.
a. Let
C - Eat at the Chinese restaurant,
P - Eat at the pizza place,
H - Eat at home
be the 3 states. Using the shorthand $p_{ij} = P(X_{n+1} = j \mid X_n = i)$ for the one-step transition probabilities, we have the following.

We know that $p_{CP} = 0.2$ and $p_{CH} = 0.4$. Since the probabilities out of each state must sum to 1, $p_{CC} = 1 - 0.2 - 0.4 = 0.4$.

Also, we know that $p_{PC} = 0.5$ and $p_{PH} = 0.3$. Therefore, $p_{PP} = 1 - 0.5 - 0.3 = 0.2$.

Also, we know that $p_{HC} = 0.2$ and $p_{HP} = 0.4$. Therefore, $p_{HH} = 1 - 0.2 - 0.4 = 0.4$.

The transition probability matrix can then be written as (C - first row and column, P - second row and column, H - third row and column)

$$P = \begin{pmatrix} 0.4 & 0.2 & 0.4 \\ 0.5 & 0.2 & 0.3 \\ 0.2 & 0.4 & 0.4 \end{pmatrix}$$
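As a quick numerical check, the matrix can be built and verified to be row-stochastic (a minimal NumPy sketch; the state order C, P, H matches the convention above):

```python
import numpy as np

# Transition matrix: rows = today's state, columns = tomorrow's state,
# in the order C (Chinese restaurant), P (pizza place), H (home).
P = np.array([
    [0.4, 0.2, 0.4],  # from C: stay 40%, to pizza 20%, to home 40%
    [0.5, 0.2, 0.3],  # from P: to Chinese 50%, stay 20%, to home 30%
    [0.2, 0.4, 0.4],  # from H: to Chinese 20%, to pizza 40%, stay 40%
])

# Every row of a transition probability matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
print(P)
```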
--------------------------------------------------------
b. Suppose the family eats at the Chinese restaurant today. Therefore, the probability vector is $x_0 = (1, 0, 0)$, and after one day it is $x_1 = x_0 P = (0.4, 0.2, 0.4)$.
Then, after two days, the probability vector will be
$$x_2 = x_1 P = (0.4, 0.2, 0.4) \begin{pmatrix} 0.4 & 0.2 & 0.4 \\ 0.5 & 0.2 & 0.3 \\ 0.2 & 0.4 & 0.4 \end{pmatrix} = (0.34, 0.28, 0.38).$$
Thus, the probability that the family will eat dinner at home in two days is 0.38.
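The two-step distribution can be checked numerically (a minimal sketch, assuming the family eats at the Chinese restaurant today, i.e. $x_0 = (1,0,0)$):

```python
import numpy as np

P = np.array([[0.4, 0.2, 0.4],
              [0.5, 0.2, 0.3],
              [0.2, 0.4, 0.4]])

x0 = np.array([1.0, 0.0, 0.0])          # assumed start: Chinese restaurant
x2 = x0 @ np.linalg.matrix_power(P, 2)  # distribution over (C, P, H) after two days
print(x2)                               # -> [0.34 0.28 0.38]
```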
c. We want to find the expected number of days until the family next eats dinner at home. Let $\mu_C$ be the expected number of days until the next home dinner given that the family eats at the Chinese restaurant today; similarly, we define $\mu_P$ (starting from the pizza place) and $\mu_H$ (starting from home).

Now, condition on where the family eats tomorrow (first-step analysis): one day always passes, and the expected remaining time is 0 if tomorrow's dinner is at home. Therefore,
$$\mu_C = 1 + 0.4\mu_C + 0.2\mu_P + 0.4 \cdot 0.$$
Similarly, we can construct
$$\mu_P = 1 + 0.5\mu_C + 0.2\mu_P, \qquad \mu_H = 1 + 0.2\mu_C + 0.4\mu_P.$$
Collecting the above three equations and rearranging their terms gives the following 3 equations:
$$0.6\mu_C - 0.2\mu_P = 1, \qquad -0.5\mu_C + 0.8\mu_P = 1, \qquad -0.2\mu_C - 0.4\mu_P + \mu_H = 1.$$
In matrix form,
$$\begin{pmatrix} 0.6 & -0.2 & 0 \\ -0.5 & 0.8 & 0 \\ -0.2 & -0.4 & 1 \end{pmatrix} \begin{pmatrix} \mu_C \\ \mu_P \\ \mu_H \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
The solution to this equation is
$$\mu_C = \tfrac{50}{19} \approx 2.63, \qquad \mu_P = \tfrac{55}{19} \approx 2.89, \qquad \mu_H = \tfrac{51}{19} \approx 2.68.$$
Therefore, starting from a home dinner, the family next eats dinner at home after $\tfrac{51}{19} \approx 2.68$ days.
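The linear system above can be solved numerically (a minimal NumPy sketch of the first-step equations for the expected days until the next home dinner, with unknowns ordered $\mu_C, \mu_P, \mu_H$):

```python
import numpy as np

# Rearranged first-step equations, A @ mu = b, mu = [mu_C, mu_P, mu_H].
A = np.array([[ 0.6, -0.2, 0.0],
              [-0.5,  0.8, 0.0],
              [-0.2, -0.4, 1.0]])
b = np.ones(3)

mu = np.linalg.solve(A, b)
print(mu)  # approximately [50/19, 55/19, 51/19] = [2.632, 2.895, 2.684]
```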
------------------------------------
d. Now consider the analogous question for the Chinese restaurant: the expected number of days until the family next eats there. Define $\nu_P$ and $\nu_H$ as the expected number of days until the next Chinese-restaurant dinner given that the family eats at the pizza place and at home today, respectively, and $\nu_C$ given that they eat at the Chinese restaurant today. Then, by the Markov property,
$$\nu_P = 1 + 0.2\nu_P + 0.3\nu_H, \qquad \nu_H = 1 + 0.4\nu_P + 0.4\nu_H, \qquad \nu_C = 1 + 0.2\nu_P + 0.4\nu_H.$$
Rearranging, using similar reasoning as in Part (c), we get the matrix form
$$\begin{pmatrix} 0.8 & -0.3 & 0 \\ -0.4 & 0.6 & 0 \\ -0.2 & -0.4 & 1 \end{pmatrix} \begin{pmatrix} \nu_P \\ \nu_H \\ \nu_C \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
The solution to this equation is
$$\nu_P = \tfrac{5}{2} = 2.5, \qquad \nu_H = \tfrac{10}{3} \approx 3.33, \qquad \nu_C = \tfrac{17}{6} \approx 2.83.$$
Therefore, starting from a Chinese-restaurant dinner, the family next eats there after $\tfrac{17}{6} \approx 2.83$ days.
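This system can be verified the same way (a minimal sketch of the first-step equations for the expected days until the next Chinese-restaurant dinner, with unknowns ordered $\nu_P, \nu_H, \nu_C$):

```python
import numpy as np

# Rearranged first-step equations, A @ nu = b, nu = [nu_P, nu_H, nu_C].
A = np.array([[ 0.8, -0.3, 0.0],
              [-0.4,  0.6, 0.0],
              [-0.2, -0.4, 1.0]])
b = np.ones(3)

nu = np.linalg.solve(A, b)
print(nu)  # approximately [5/2, 10/3, 17/6] = [2.5, 3.333, 2.833]
```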
---------