The Maximum Likelihood Estimator (MLE) uses gathered sample data to estimate the population parameter, i.e., the parameter of the true distribution.
The likelihood function $L(\theta)$ is the product of the pdf evaluated at the data points: start from 1 and multiply by the pdf at each data point.
Steps to find the MLE
- Find L(θ) (also known as the likelihood function)
- Find ln L(θ)
- Calculate the Score Function
$\frac{\partial}{\partial \theta} \ln L(\theta)$
- Find the Score equation by setting the Score Function = 0
- Solve for the parameter
- Check that the second derivative of ln L(θ) is negative, that is
$\frac{\partial^2}{\partial \theta^2} \ln L(\theta) < 0$
- Check the support
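The steps above can be sketched numerically. This is a minimal sketch, assuming the Laplace-type density used in the worked example below; the data is synthetic, and a crude grid search stands in for solving the score equation analytically:

```python
import math
import random

# Synthetic data we pretend came from a Laplace-type density
# f(x | sigma) = 1/(2*sigma) * exp(-|x|/sigma)  (the example worked below).
random.seed(0)
data = [random.uniform(-5, 5) for _ in range(200)]

def log_likelihood(sigma, xs):
    # ln L(sigma) = -n*ln(2) - n*ln(sigma) - (1/sigma) * sum(|x_i|)
    n = len(xs)
    return -n * math.log(2) - n * math.log(sigma) - sum(abs(x) for x in xs) / sigma

# Grid search over candidate sigmas stands in for solving the score equation.
candidates = [0.01 * k for k in range(1, 1001)]
sigma_numeric = max(candidates, key=lambda s: log_likelihood(s, data))

# Closed-form answer derived below: sigma_hat = (1/n) * sum(|x_i|)
sigma_closed = sum(abs(x) for x in data) / len(data)

print(sigma_numeric, sigma_closed)  # agree up to the grid resolution
```

The numeric maximizer should match the closed-form MLE to within the 0.01 grid spacing.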
Example
$$f(x_i \mid \sigma) = \frac{1}{2\sigma} e^{-\frac{|x_i|}{\sigma}}$$
$$L(\theta) = \prod_{i=1}^{n} f(x_i \mid \theta = \sigma) = \prod_{i=1}^{n} \frac{1}{2\sigma} e^{-\frac{|x_i|}{\sigma}}$$
$$L(\theta) = \left[\frac{1}{2\sigma} e^{-\frac{|x_1|}{\sigma}}\right]\left[\frac{1}{2\sigma} e^{-\frac{|x_2|}{\sigma}}\right] \cdots \left[\frac{1}{2\sigma} e^{-\frac{|x_n|}{\sigma}}\right]$$
$$L(\theta) = \left(\frac{1}{2\sigma}\right)^{n} e^{-\frac{1}{\sigma}\sum_{i=1}^{n}|x_i|}$$
---
$$\ln L(\theta) = \ln\left[\left(\frac{1}{2\sigma}\right)^{n} e^{-\frac{1}{\sigma}\sum_{i=1}^{n}|x_i|}\right]$$
$$\ln L(\theta) = \ln\left[\left(\frac{1}{2\sigma}\right)^{n}\right] + \ln\left[e^{-\frac{1}{\sigma}\sum_{i=1}^{n}|x_i|}\right]$$
$$\ln L(\theta) = n\ln(1) - n\ln(2) - n\ln(\sigma) + \ln\left[e^{-\frac{1}{\sigma}\sum_{i=1}^{n}|x_i|}\right]$$
$$\ln L(\theta) = 0 - n\ln(2) - n\ln(\sigma) - \frac{1}{\sigma}\sum_{i=1}^{n}|x_i|$$
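As a quick numerical sanity check, the log of the product form of $L(\theta)$ matches the expanded sum form above. The σ value and data points here are toy values chosen only for illustration:

```python
import math

# Toy values (assumed for illustration only).
sigma = 1.5
xs = [0.5, -1.0, 2.0]
n = len(xs)

# Product form of the likelihood: L = prod_i (1/(2*sigma)) * exp(-|x_i|/sigma)
L = math.prod(1 / (2 * sigma) * math.exp(-abs(x) / sigma) for x in xs)

# Expanded sum form: ln L = -n*ln(2) - n*ln(sigma) - (1/sigma) * sum(|x_i|)
lnL_expanded = -n * math.log(2) - n * math.log(sigma) - sum(abs(x) for x in xs) / sigma

print(math.log(L), lnL_expanded)  # agree up to floating-point error
```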
---
The last term contains $\sigma$ raised to the power $-1$, so we can differentiate it with the power rule.
$$\text{Score} = \frac{d\ell(\theta)}{d\theta} = \frac{d}{d\sigma}\left[-n\ln(2) - n\ln(\sigma) - \frac{1}{\sigma}\sum_{i=1}^{n}|x_i|\right]$$
$$\text{Score} = -\frac{n}{\sigma} + \frac{\sum_{i=1}^{n}|x_i|}{\sigma^2}$$
Maxima and minima occur where the score equals 0, so we set the score equal to 0 (the score equation) and solve.
$$-\frac{n}{\sigma} + \frac{\sum_{i=1}^{n}|x_i|}{\sigma^2} = 0$$
$$\frac{\sum_{i=1}^{n}|x_i|}{\sigma^2} = \frac{n}{\sigma}$$
$$n\sigma = \sum_{i=1}^{n}|x_i|$$
$$\hat{\sigma} = \frac{\sum_{i=1}^{n}|x_i|}{n}$$
This is our Maximum Likelihood Estimator; we can estimate $\sigma$ by summing the absolute values of all our data points and dividing by $n$. However, before concluding, we need to check concavity to confirm that this critical point really is a maximum (the second derivative must be negative).
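A small numeric check that $\hat{\sigma}$ solves the score equation, using toy data assumed for illustration:

```python
def score(sigma, xs):
    # Score(sigma) = -n/sigma + sum(|x_i|)/sigma^2
    n = len(xs)
    return -n / sigma + sum(abs(x) for x in xs) / sigma**2

xs = [1.0, -2.0, 3.0, -4.0]                    # toy data (assumed)
sigma_hat = sum(abs(x) for x in xs) / len(xs)  # MLE: (1/n) * sum(|x_i|) = 2.5

print(score(sigma_hat, xs))  # ~0: sigma_hat satisfies the score equation
```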
---
$$\frac{d^2\ell}{d\theta^2} < 0 \;\rightarrow\; \frac{d\,\text{Score}}{d\theta} < 0$$
$$\frac{d}{d\sigma}\left[-\frac{n}{\sigma} + \frac{\sum_{i=1}^{n}|x_i|}{\sigma^2}\right] = \frac{n}{\sigma^2} - \frac{2\sum_{i=1}^{n}|x_i|}{\sigma^3} < 0$$
We must combine the two fractions over a common denominator in order to determine whether the expression is less than 0:
$$\frac{n}{\sigma^2} - \frac{2\sum_{i=1}^{n}|x_i|}{\sigma^3} = \frac{n\sigma}{\sigma^3} - \frac{2\sum_{i=1}^{n}|x_i|}{\sigma^3} = \frac{n\sigma - 2\sum_{i=1}^{n}|x_i|}{\sigma^3}$$
Substituting the solution of the score equation, $\hat{\sigma} = \frac{\sum_{i=1}^{n}|x_i|}{n}$, into the second derivative:
$$\frac{n \cdot \frac{\sum_{i=1}^{n}|x_i|}{n} - 2\sum_{i=1}^{n}|x_i|}{\sigma^3} = \frac{\sum_{i=1}^{n}|x_i| - 2\sum_{i=1}^{n}|x_i|}{\sigma^3} = -\frac{\sum_{i=1}^{n}|x_i|}{\sigma^3} < 0$$
Because the numerator is a sum of absolute values (always positive) and $\sigma > 0$, the second derivative is always less than 0.
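The sign of the second derivative can also be confirmed numerically, reusing the same toy data as above (chosen for illustration only):

```python
def second_deriv(sigma, xs):
    # d(Score)/d(sigma) = n/sigma^2 - 2*sum(|x_i|)/sigma^3
    n = len(xs)
    return n / sigma**2 - 2 * sum(abs(x) for x in xs) / sigma**3

xs = [1.0, -2.0, 3.0, -4.0]                    # toy data (assumed)
sigma_hat = sum(abs(x) for x in xs) / len(xs)

# At sigma_hat this reduces to -sum(|x_i|)/sigma_hat^3, which is negative,
# confirming that sigma_hat is a maximum.
print(second_deriv(sigma_hat, xs))
```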
---
Final answer: $\hat{\sigma} = \frac{\sum_{i=1}^{n}|x_i|}{n}$ is the MLE of $\sigma$.