## Chi-Square

### LogLikelihood Ratio has a *Chi-Square* Distribution

#### Confidence Criterion

The Central Limit Theorem assures that Maximum Likelihood Estimators have, asymptotically, a multivariate normal density. As a consequence of that normal behavior, Log Likelihood Ratios have a *Chi-Square* density. That means we can use \(\chi^2_{df}\) as a measure of closeness in the neighborhood of the MLE itself, and thus we have a criterion for constructing confidence bounds.

#### Still Hate Statistics?

**(It’s kinda useful.)**

The normal distribution is the parent density for many **other distributions**, and has a close familial relationship with many more. For example, sums of squares of samples from the standard normal distribution have a *chi-square* distribution.
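A quick simulation can check that claim. The sketch below (Python with NumPy and SciPy; the sample size, seed, and degrees of freedom are arbitrary choices for illustration) sums the squares of standard normal draws and compares the result against the matching chi-square distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df = 2  # sum of squares of 2 standard normal draws per sample

# 100,000 samples, each the sum of df squared standard normals.
samples = rng.standard_normal((100_000, df))
sum_sq = (samples ** 2).sum(axis=1)

# Kolmogorov-Smirnov comparison against chi-square with df degrees of freedom;
# a small statistic means the empirical and theoretical distributions agree.
ks = stats.kstest(sum_sq, stats.chi2(df).cdf)
print(f"KS statistic vs chi-square({df}): {ks.statistic:.4f}")
```

With this many samples the KS statistic comes out tiny, as expected if the sums of squares really do follow a chi-square law.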

Now, if something has a probability density, then we can evaluate the probability that it takes on values as extreme as, or more extreme than, the value we are interested in. In our case we have the distribution of the logarithm of the ratio of the likelihood at alternate values of the Weibull parameters, \(\eta, \beta \), to the likelihood at its maximum, attained at the MLEs. We can move the Weibull eta, Weibull beta pair away from their maximum likelihood values and see the effect on the Weibull model. If we don’t move too far, the resulting model will still be plausible, but not optimal (given the data).

How far is too far? If we choose a 95% confidence neighborhood, then the allowed distance, measured as a drop in log likelihood below its maximum, is \(\chi^2_{0.95,\,2}/2\) (the chi-square 95th percentile, evaluated at 2 degrees of freedom since we have two model parameters).
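As a concrete check of that number, assuming SciPy is available, the 95% cutoff at 2 degrees of freedom works out to roughly 3 units of log likelihood:

```python
from scipy.stats import chi2

# 95% confidence, 2 estimated parameters (eta and beta):
# allowed drop in log likelihood below its maximum.
cutoff = chi2.ppf(0.95, df=2) / 2
print(f"allowed log-likelihood drop: {cutoff:.4f}")  # about 2.996
```

Any (\(\eta, \beta\)) pair whose log likelihood is within that distance of the maximum stays inside the 95% neighborhood.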

We now have everything we need to construct a confidence neighborhood around the MLEs for the (\(\eta, \beta \)) pair. Every point on that boundary will correspond to an (\(\eta, \beta \)) pair, and each pair represents a Weibull model. Construct all the models: the locus of their extremes is the corresponding 95% confidence bound on the Weibull model.
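One way to sketch that construction, on simulated data with made-up true parameters: fit the MLEs, scan a grid of (\(\eta, \beta\)) pairs, keep the ones whose log likelihood is within \(\chi^2_{0.95,2}/2\) of the maximum, and take the envelope of some quantity of interest over all the plausible models. Everything here, including the grid ranges and the choice of the B10 life as the bounded quantity, is illustrative rather than a prescribed method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
beta_true, eta_true = 2.0, 100.0            # made-up shape and scale
data = eta_true * rng.weibull(beta_true, size=50)

# Maximum likelihood fit; location fixed at zero for a 2-parameter Weibull.
beta_hat, _, eta_hat = stats.weibull_min.fit(data, floc=0)

def loglik(beta, eta):
    return stats.weibull_min.logpdf(data, c=beta, scale=eta).sum()

ll_max = loglik(beta_hat, eta_hat)
cutoff = stats.chi2.ppf(0.95, df=2) / 2     # allowed drop in log likelihood

# Scan a grid around the MLEs and keep the plausible (beta, eta) pairs.
betas = np.linspace(0.5 * beta_hat, 1.8 * beta_hat, 120)
etas = np.linspace(0.7 * eta_hat, 1.4 * eta_hat, 120)
inside = [(b, e) for b in betas for e in etas
          if loglik(b, e) >= ll_max - cutoff]

# Envelope of the B10 life (10th percentile) over all plausible models.
b10 = [stats.weibull_min.ppf(0.10, c=b, scale=e) for b, e in inside]
b10_mle = stats.weibull_min.ppf(0.10, c=beta_hat, scale=eta_hat)
print(f"MLE B10: {b10_mle:.1f}")
print(f"95% likelihood-ratio bound on B10: [{min(b10):.1f}, {max(b10):.1f}]")
```

Tracing the boundary of the `inside` set (rather than its interior) and repeating the quantile envelope at every probability level gives the full confidence bound on the Weibull model.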

You can see two examples, here and here.

**Notes:** The textbook definition of the loglikelihood ratio places the likelihood at the alternate parameter values in the numerator and the likelihood at the MLEs in the denominator, so that the quantity \(-2 \log(\text{likelihood ratio})\) is distributed asymptotically as \(\chi^2_{df}\), with \(df\) equal to the number of parameters in the model. Computationally that is equivalent to comparing alternate values of \(\eta, \beta \) against their MLEs (so the maximum of the ratio is 1 and its log is zero), in which case the drop in log likelihood below its maximum is distributed as \(\chi^2_{df}/2.\)