Using a Python program, find the numerical derivative at the indicated point using the backward, forward, and centered difference formulas. Use h = 0.05, then compute the error in each case. Which one is more accurate?
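The assignment does not spell out the function or the point, so the sketch below uses a hypothetical example, f(x) = sin(x) at x0 = 1, whose exact derivative cos(x0) is known; swap in the actual function and point from the problem statement.

import math

def f(x):
    # hypothetical example function; replace with the one given in the assignment
    return math.sin(x)

x0 = 1.0   # hypothetical "indicated point"
h = 0.05

forward  = (f(x0 + h) - f(x0)) / h           # forward difference,  error O(h)
backward = (f(x0) - f(x0 - h)) / h           # backward difference, error O(h)
centered = (f(x0 + h) - f(x0 - h)) / (2*h)   # centered difference, error O(h^2)

exact = math.cos(x0)  # exact derivative of sin(x), used to measure the error

for name, approx in [("forward", forward), ("backward", backward), ("centered", centered)]:
    print(f"{name:8s}: {approx:.6f}   error = {abs(approx - exact):.6f}")

The centered formula is normally the most accurate of the three, because its truncation error is O(h^2) while the one-sided formulas are only O(h).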
Using a Python program, compute the value of π by Monte Carlo integration.
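A minimal sketch of the classic hit-or-miss Monte Carlo estimate of π: sample uniform points in the unit square and count the fraction that land inside the quarter circle of radius 1 (the sample size of 1,000,000 is an arbitrary choice).

import math
import random

def estimate_pi(num_samples):
    inside = 0
    for _ in range(num_samples):
        x = random.uniform(0.0, 1.0)
        y = random.uniform(0.0, 1.0)
        if x*x + y*y <= 1.0:      # point falls inside the quarter circle
            inside += 1
    # (area of quarter circle) / (area of unit square) = pi/4
    return 4.0 * inside / num_samples

pi_estimate = estimate_pi(1_000_000)
print(f"Monte Carlo estimate of pi: {pi_estimate}")
print(f"Error: {abs(pi_estimate - math.pi)}")

The error of this estimator shrinks roughly as 1/sqrt(N), so with 10^6 samples the estimate is typically within a few thousandths of the true value.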
The code below goes a step further and demonstrates Monte Carlo integration with importance sampling: it uses an exponential weight function g(x) = A*e^(-lambda*x) and searches for the lambda that minimizes the variance of the estimator. The helpers f_of_x and get_rand_number were not included in the original post, so assumed definitions are supplied here so the script runs; replace f_of_x with the actual integrand.

import math
import random
import numpy as np
from IPython.display import clear_output

# assumed example integrand (not given in the original post); replace with the actual f(x)
def f_of_x(x):
    return math.exp(-1*x) / (1 + (x - 1)**2)

# assumed helper: draw a uniform random number between min_value and max_value
def get_rand_number(min_value, max_value):
    return random.uniform(min_value, max_value)

# this is the template of our weight function g(x)
def g_of_x(x, A, lamda):
    e = 2.71828
    return A*math.pow(e, -1*lamda*x)

# inverse of the CDF of g(x), used to sample x values distributed according to g
def inverse_G_of_r(r, lamda):
    return (-1 * math.log(float(r)))/lamda

def get_IS_variance(lamda, num_samples):
    """
    This function calculates the variance of a Monte Carlo estimate
    using importance sampling.
    Args:
    - lamda (float): lambda value of g(x) being tested
    - num_samples (int): number of Monte Carlo samples to draw
    Return:
    - Variance
    """
    A = lamda
    int_max = 5
    # get sum of squares
    running_total = 0
    for i in range(num_samples):
        x = get_rand_number(0, int_max)
        running_total += (f_of_x(x)/g_of_x(x, A, lamda))**2
    sum_of_sqs = running_total / num_samples
    # get squared average
    running_total = 0
    for i in range(num_samples):
        x = get_rand_number(0, int_max)
        running_total += f_of_x(x)/g_of_x(x, A, lamda)
    sq_ave = (running_total/num_samples)**2
    return sum_of_sqs - sq_ave

# get variance as a function of lambda by testing many different lambdas
test_lamdas = [i*0.05 for i in range(1, 61)]
variances = []
for i, lamda in enumerate(test_lamdas):
    print(f"lambda {i+1}/{len(test_lamdas)}: {lamda}")
    A = lamda
    variances.append(get_IS_variance(lamda, 10000))
    clear_output(wait=True)

optimal_lamda = test_lamdas[np.argmin(np.asarray(variances))]
IS_variance = variances[np.argmin(np.asarray(variances))]
print(f"Optimal Lambda: {optimal_lamda}")
print(f"Optimal Variance: {IS_variance}")
print(f"Error: {(IS_variance/10000)**0.5}")
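The script above only selects the lambda that minimizes the estimator's variance; as a rough sketch (reusing f_of_x, g_of_x, inverse_G_of_r, and get_rand_number from above, including the assumed f_of_x), the importance-sampled estimate of the integral itself could then be computed like this:

def importance_sampling_estimate(lamda, num_samples):
    # draw x from g(x) by inverse transform sampling, then average f(x)/g(x)
    A = lamda
    running_total = 0
    for _ in range(num_samples):
        r = get_rand_number(0, 1)
        x = inverse_G_of_r(r, lamda)
        running_total += f_of_x(x)/g_of_x(x, A, lamda)
    return running_total / num_samples

approximation = importance_sampling_estimate(optimal_lamda, 10000)
print(f"Importance-sampled estimate of the integral: {approximation}")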