When to uncorrelate random effects

    • #204
      João Santiago
      Participant

      After reading a substantial amount of literature about LMMs, I still don’t have a good intuition for when to uncorrelate random effects (for example, using the double pipe ||, as in (a + b || z)).

      What is a practical example of this? A research question that explicitly calls for it would be ideal, for example.
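
      For concreteness, here is a minimal sketch of what I mean (y, a, b, z, and dat are made-up names):

      library(lme4)
      # correlated random intercept and slopes for the grouping factor z
      m_corr   <- lmer(y ~ a + b + (a + b | z), data = dat)
      # the same random effects, but with the correlations suppressed
      m_uncorr <- lmer(y ~ a + b + (a + b || z), data = dat)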

    • #205
      henrik
      Keymaster

      There probably are research questions that explicitly call for suppressed correlation estimates. In practice, however, correlations are mostly suppressed to avoid convergence problems. We wrote in our chapter on mixed models:

      For the limited sample sizes that are common in psychology and related disciplines, a common problem is that the maximal model is not fully identified (Bates, Kliegl, Vasishth, & Baayen, 2015), especially for mixed models with complicated random effects structures. Even though the optimization algorithm converges to the optimum (i.e., the maximum-likelihood estimate for a given data set), the variance-covariance matrix of the random effects parameters at the optimum is degenerate or singular. At least for models estimated with lme4 this is often signified by convergence warnings. Other signs of singular fits are variance estimates at or near zero and correlation estimates of $\pm 1$. The occurrence of such situations is due to the fact that the parameters associated with the random effects (e.g., $\sigma^2_{S_\delta}$) are more difficult to estimate than fixed effects (e.g., $\beta_{\delta}$).

      The important part for your question is the last sentence. Random-effects parameters are more difficult to estimate than fixed-effects parameters. We need considerably more data to estimate a variance or a correlation parameter than a simple fixed effect. Furthermore, correlations require even more data than variances.
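
      To make the signs mentioned in the quote concrete, here is a minimal sketch of how one could inspect them in lme4 (fit stands for an already fitted merMod object):

      library(lme4)
      # TRUE if the variance-covariance matrix of the random effects is (near) singular
      isSingular(fit)
      # look for variance estimates at or near zero and correlations of +/- 1
      VarCorr(fit)
      # principal components analysis of the random-effects structure (Bates et al., 2015);
      # components with essentially zero variance indicate an overparameterized model
      summary(rePCA(fit))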

      Therefore, if a model shows convergence problems (indicated by warnings or by problematic parameter values, as mentioned in the quote), a good first strategy is to remove the correlations among the random-effects parameters. We do this because they are the most difficult to estimate and are often not of primary interest. In addition, removing them often comes at comparatively little cost in terms of model precision.
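
      As a rough sketch of that workflow (rt, condition, subject, and dat are hypothetical names; afex::lmer_alt is shown because the plain lme4 double-bar syntax does not suppress correlations among the levels of a factor):

      library(lme4)
      library(afex)
      # maximal model: correlated random intercept and slope for subject
      m_max <- lmer(rt ~ condition + (condition | subject), data = dat)
      # if m_max is singular or produces convergence warnings, drop the correlations
      m_nocorr <- lmer(rt ~ condition + (condition || subject), data = dat)
      # if condition is a factor, use afex::lmer_alt, which expands factors into
      # numeric covariates before applying || and therefore removes all correlations
      m_nocorr2 <- lmer_alt(rt ~ condition + (condition || subject), data = dat)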

      More details and some really instructive examples of this are given in the “Parsimonious Mixed Models” paper by Bates, Kliegl, Vasishth and Baayen (2015): https://arxiv.org/abs/1506.04967
      See especially Figures 3 and 5.

      There is also some relevant discussion on stats.stackexchange: https://stats.stackexchange.com/a/323341/442
