This method is based on maximising the log likelihood with respect to the variance parameters while treating the fixed effects, α, as constants. Having obtained the variance parameter estimates, the fixed effects estimates are then obtained by treating the variance parameters as fixed and finding the values of α that maximise the log likelihood. This method produces variance parameter estimates that are biased downwards to some degree. This can be illustrated with a very simple example. Suppose we have a simple random sample, x₁, x₂, ..., xₙ, and wish to estimate the mean and variance. If μ̂ is the sample mean, then the ML variance estimator would be Σᵢ(xᵢ − μ̂)²/n rather than the unbiased estimator Σᵢ(xᵢ − μ̂)²/(n − 1). The bias is greatest when small numbers of degrees of freedom are used for estimating the variance parameters.
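The downward bias of the ML variance estimator can be checked numerically. The sketch below (the sample size, true variance, and replication count are arbitrary choices for the demonstration) draws many small samples and compares the average of the ML estimator, which divides by n, against the unbiased estimator, which divides by n − 1:

```python
import numpy as np

# Arbitrary settings for the demonstration: small n makes the bias visible.
rng = np.random.default_rng(42)
n, true_var, reps = 5, 4.0, 200_000

# Draw `reps` independent samples of size n from a normal distribution.
samples = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=(reps, n))

ml_var = samples.var(axis=1, ddof=0)        # sum((x - mean)^2) / n
unbiased_var = samples.var(axis=1, ddof=1)  # sum((x - mean)^2) / (n - 1)

# E[ML estimator] = true_var * (n - 1) / n, i.e. 3.2 here, versus 4.0.
print(round(ml_var.mean(), 1))        # ~3.2, biased downwards
print(round(unbiased_var.mean(), 1))  # ~4.0, close to the true variance
```

With n = 5 the ML estimator underestimates the true variance by a factor of (n − 1)/n = 0.8 on average, consistent with the bias being greatest when few degrees of freedom are available.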