1 Linear Algebra in NumPy
(1) Create a random 100-by-100 matrix M using the NumPy call
np.random.randn(100, 100), so that each element is drawn from the
standard normal distribution.
(2) Calculate the mean and variance of all the elements in M.
(3) Use a for loop to calculate the mean and variance of each row
of M.
(4) Use a matrix operation instead of a for loop to calculate the
mean of each row of M. Hint: create a vector of ones using
np.ones((100, 1)).
(5) Calculate the inverse matrix M^-1.
(6) Verify that M^-1 M = M M^-1 = I. Are the off-diagonal elements
exactly 0? Why?
The code is as follows:
# coding: utf-8
# In[10]:
import numpy as np
M=np.random.randn(100,100)
# In[31]:
mean=np.mean(M)
variance=np.var(M)
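As a quick sanity check on part (2): since the entries are i.i.d. standard normal, the overall mean should be near 0 and the variance near 1. A minimal sketch (the fixed seed is an assumption for reproducibility, not part of the original solution):

```python
import numpy as np

np.random.seed(0)  # fixed seed for reproducibility (assumption)
M = np.random.randn(100, 100)

mean = np.mean(M)
variance = np.var(M)

# With 10,000 i.i.d. standard-normal samples, the sample mean has
# standard error 1/sqrt(10000) = 0.01, so mean should sit within a
# few hundredths of 0 and variance within a few hundredths of 1.
print(mean, variance)
```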
# In[32]:
means = []      # stores the mean of each row
variances = []  # stores the variance of each row
for i in range(len(M)):
    means.append(np.mean(M[i]))
    variances.append(np.var(M[i]))
# In[16]:
means_with_ops=np.mean(M,axis=1)
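The hint in part (4) suggests computing the row means with an explicit matrix product rather than np.mean(..., axis=1): multiplying M by a column vector of ones sums each row, and dividing by the number of columns turns the sums into means. A sketch of that approach:

```python
import numpy as np

M = np.random.randn(100, 100)

# M @ ones sums across each row; dividing by the number of columns
# (100) converts the row sums into row means.
ones = np.ones((100, 1))
row_means = (M @ ones) / 100  # shape (100, 1)

# Agrees with the built-in reduction up to floating-point rounding.
print(np.allclose(row_means.ravel(), np.mean(M, axis=1)))
```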
# In[26]:
M1=np.linalg.inv(M)
# In[27]:
M1M=np.dot(M1,M)
# In[28]:
MM1=np.dot(M,M1)
# In[29]:
# check that both products are numerically equal to the identity
np.allclose(M1M, np.eye(100), atol=1e-5)
np.allclose(MM1, np.eye(100), atol=1e-5)
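For part (6): the off-diagonal elements of M^-1 M are not exactly 0, because np.linalg.inv works in finite-precision floating-point arithmetic, so the products only approximate the identity; rounding error leaves tiny nonzero off-diagonal entries. A sketch measuring how large that residue actually is (the fixed seed is an assumption, not part of the original solution):

```python
import numpy as np

np.random.seed(0)  # fixed seed for reproducibility (assumption)
M = np.random.randn(100, 100)
M_inv = np.linalg.inv(M)

P = M_inv @ M
# Zero out the diagonal to isolate the off-diagonal residue.
off_diag = P - np.diag(np.diag(P))
print(np.max(np.abs(off_diag)))  # tiny, but not exactly 0
```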