Maximum likelihood and REML

To predict the random effects, $\beta$ (for simplicity we use the term random effects to refer to either random effects or coefficients), we define a likelihood function in terms of $\alpha$, $\beta$ and $\gamma$ ($\gamma$ is the vector of variance parameters). This can be written as the product of the likelihood for $y$ conditional on the value of $\beta$ and the likelihood for $\beta$:

\[
L(\alpha, \beta, \gamma; y) = L(\alpha, \gamma_R; y \mid \beta)\, L(\gamma_G; \beta),
\]
where $\gamma_R$ = variance parameters in the $R$ matrix and $\gamma_G$ = variance parameters in the $G$ matrix.

Using multivariate normal distributions for $y \mid \beta$ and $\beta$ we have

\[
L(\alpha, \beta, \gamma; y) \propto |R|^{-1/2} \exp\!\big(-\tfrac{1}{2}(y - X\alpha - Z\beta)'R^{-1}(y - X\alpha - Z\beta)\big) \times |G|^{-1/2} \exp\!\big(-\tfrac{1}{2}\beta'G^{-1}\beta\big),
\]
giving the corresponding log likelihood as
\[
\log(L) = -\tfrac{1}{2}\big[\log|R| + (y - X\alpha - Z\beta)'R^{-1}(y - X\alpha - Z\beta) + \log|G| + \beta'G^{-1}\beta\big] + K.
\]
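To make the pieces of this joint log likelihood concrete, the following is a minimal numerical sketch (omitting the constant $K$). It assumes dense NumPy arrays for $R$ and $G$; the function name and argument layout are illustrative rather than taken from any package.

```python
import numpy as np

def joint_log_likelihood(y, X, Z, alpha, beta, R, G):
    """Joint log likelihood of (y, beta), up to the additive constant K.

    Illustrative sketch: R and G are passed as full (dense) covariance
    matrices for the residuals and the random effects respectively.
    """
    r = y - X @ alpha - Z @ beta              # y - X*alpha - Z*beta
    _, logdet_R = np.linalg.slogdet(R)        # log|R|
    _, logdet_G = np.linalg.slogdet(G)        # log|G|
    quad_R = r @ np.linalg.solve(R, r)        # (y - Xa - Zb)' R^{-1} (y - Xa - Zb)
    quad_G = beta @ np.linalg.solve(G, beta)  # beta' G^{-1} beta
    return -0.5 * (logdet_R + quad_R + logdet_G + quad_G)
```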

The ML solution for $\hat{\beta}$ can be obtained by differentiating this log likelihood with respect to $\beta$ and setting the resulting expression to zero:
\[
\frac{\partial \log(L)}{\partial \beta} = Z'R^{-1}(y - X\alpha - Z\beta) - G^{-1}\beta.
\]
Setting to zero gives

\[
(Z'R^{-1}Z + G^{-1})\hat{\beta} = Z'R^{-1}(y - X\hat{\alpha}), \qquad \hat{\beta} = (Z'R^{-1}Z + G^{-1})^{-1}Z'R^{-1}(y - X\hat{\alpha}). \quad \text{(A)}
\]
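As a sketch of how solution (A) could be evaluated numerically, assuming $\hat{\alpha}$ and the variance matrices $R$ and $G$ have already been estimated (the function and argument names below are purely illustrative):

```python
import numpy as np

def predict_random_effects(y, X, Z, alpha_hat, R, G):
    """Solution (A): beta_hat = (Z'R^{-1}Z + G^{-1})^{-1} Z'R^{-1}(y - X*alpha_hat)."""
    Rinv_Z = np.linalg.solve(R, Z)                     # R^{-1} Z
    lhs = Z.T @ Rinv_Z + np.linalg.inv(G)              # Z'R^{-1}Z + G^{-1}
    rhs = Z.T @ np.linalg.solve(R, y - X @ alpha_hat)  # Z'R^{-1}(y - X*alpha_hat)
    return np.linalg.solve(lhs, rhs)                   # shrunken predictions
```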

As discussed in Chapter 1, the estimates are 'shrunken' compared with what they would have been if fitted as fixed. Note that since the estimates are centred about zero, the intercept estimate would need to be added in order to obtain mean random effects estimates. In random effects models the $R$ matrix is diagonal, $R = \sigma^2 I$, and we can alternatively write
\[
\hat{\beta} = (Z'Z + \sigma^2 G^{-1})^{-1}Z'(y - X\hat{\alpha}).
\]

Compared with the OLS solution for a fixed effects model, $\hat{\alpha} = (X'X)^{-1}X'y$, we notice the additional term, $\sigma^2 G^{-1}$, in the denominator. It is this term that causes the estimates to be shrunken towards zero.
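The shrinkage is easy to see in code. A small sketch under the assumption $R = \sigma^2 I$, with $\sigma^2$ and $G$ treated as known (names are illustrative): dropping the extra term $\sigma^2 G^{-1}$ recovers an unshrunken least-squares fit of the same effects.

```python
import numpy as np

def shrunken_effects(y, X, Z, alpha_hat, sigma2, G):
    """beta_hat = (Z'Z + sigma^2 * G^{-1})^{-1} Z'(y - X*alpha_hat) when R = sigma^2 * I."""
    resid = y - X @ alpha_hat
    shrink = sigma2 * np.linalg.inv(G)            # the extra term sigma^2 * G^{-1}
    return np.linalg.solve(Z.T @ Z + shrink, Z.T @ resid)

def unshrunken_effects(y, X, Z, alpha_hat):
    """The same least-squares fit with the shrinkage term removed
    (assumes Z'Z is invertible), for comparison."""
    resid = y - X @ alpha_hat
    return np.linalg.solve(Z.T @ Z, Z.T @ resid)
```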

Also an alternative, more compact, form for $\hat{\beta}$ can be obtained from the solution (A) above using matrix manipulation and recalling that $V = ZGZ' + R$:
\[
\hat{\beta} = GZ'V^{-1}(y - X\hat{\alpha}).
\]

The variance of $\hat{\beta}$ can be obtained as
\[
\operatorname{var}(\hat{\beta}) = GZ'V^{-1}ZG - GZ'V^{-1}X(X'V^{-1}X)^{-1}X'V^{-1}ZG.
\]

As with $\operatorname{var}(\hat{\alpha})$, this formula is based on the assumption that $V$ is known. Because $V$ is, in fact, estimated, there will be some downward bias in $\operatorname{var}(\hat{\beta})$, although this is usually small (see Section 2.4.3). Again, the Bayesian approach (Section 2.3) avoids having to make this assumption and the problem of bias does not arise.
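A sketch of the compact form and its variance, with $V$ treated as known and formed from dense $Z$, $G$ and $R$ (the names below are illustrative assumptions). On the same inputs, the predictions returned here should agree with solution (A) up to rounding error.

```python
import numpy as np

def compact_blup_and_variance(y, X, Z, alpha_hat, R, G):
    """beta_hat = G Z'V^{-1}(y - X*alpha_hat) with V = Z G Z' + R, plus var(beta_hat)."""
    V = Z @ G @ Z.T + R
    Vinv = np.linalg.inv(V)
    beta_hat = G @ Z.T @ Vinv @ (y - X @ alpha_hat)

    # var(beta_hat) = GZ'V^{-1}ZG - GZ'V^{-1}X (X'V^{-1}X)^{-1} X'V^{-1}ZG
    A = G @ Z.T @ Vinv
    var_beta = A @ Z @ G - A @ X @ np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv @ Z @ G
    return beta_hat, var_beta
```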
