Question

In: Statistics and Probability

How do you use R to derive the parameters that maximize the log-likelihood?

Solutions

Expert Solution

To calculate the maximum likelihood estimate (MLE) of the parameters of a given distribution, we can use the maxlogL() function from the EstimationTools package.

The R code is:

install.packages("EstimationTools")
library(EstimationTools)
maxlogL(x, dist, optimizer, lower = NULL, upper = NULL)

So, first we install and load the EstimationTools package. maxlogL() takes the data x, the name of the density function dist, and optionally an optimizer and box constraints (lower, upper) on the parameters.

For example, let us apply this to data simulated from a normal distribution with mean 10 and standard deviation 1.

The R code is:

set.seed(1000)
z <- rnorm(n = 1000, mean = 10, sd = 1)
fit1 <- maxlogL(x = z, dist = 'dnorm', start = c(2, 3),
                lower = c(-15, 0), upper = c(15, 10))
summary(fit1)

The output is:

> set.seed(1000)
> z <- rnorm(n = 1000, mean = 10, sd = 1)
> fit1 <- maxlogL(x = z, dist = 'dnorm', start=c(2, 3),
+ lower=c(-15, 0), upper=c(15, 10))
> summary(fit1)
---------------------------------------------------------------
Optimization routine: nlminb
Standard Error calculation: Hessian from optim
---------------------------------------------------------------
AIC BIC
2804.033 2800.033
---------------------------------------------------------------
Estimate Std. Error
mean 9.98752 0.0310
sd 0.98126 0.0219
-----

Thus the MLE of the mean is 9.98752 and the MLE of the standard deviation is 0.98126, close to the true values of 10 and 1 used to simulate the data.
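To see what maxlogL() is doing under the hood, here is a sketch (not from the original answer) that maximizes the normal log-likelihood directly with base R's optim(). Since optim() minimizes by default, we minimize the negative log-likelihood; the starting values and bounds mirror those passed to maxlogL() above.

```r
set.seed(1000)
z <- rnorm(n = 1000, mean = 10, sd = 1)

# Negative log-likelihood of N(mu, sigma) for the sample x
negloglik <- function(par, x) {
  mu <- par[1]
  sigma <- par[2]
  -sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
}

# L-BFGS-B supports the same kind of box constraints as maxlogL();
# sigma is bounded away from 0 to keep the density well-defined
fit2 <- optim(par = c(2, 3), fn = negloglik, x = z,
              method = "L-BFGS-B",
              lower = c(-15, 1e-6), upper = c(15, 10))
fit2$par  # estimates of mean and sd
```

The resulting estimates should agree with maxlogL()'s 9.98752 and 0.98126 up to optimizer tolerance, since both approaches maximize the same log-likelihood.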
