Compute effect sizes for mixed() objects



  • #293

    blazko m. (b1azk0)
    Participant

    Hello,

    With the help of the discussion we had here: Finding the optimal structure[…], I got my article reviewed – so a big thanks for the hints.

    Now, to satisfy one of the reviewers, I was asked to add partial eta squared effect sizes to each of the F/t tests reported in the paper.
    My question is this: is there any automatic or semi-automatic method for computing partial eta squared for anova(mixed()) objects, as well as for simple effects or contrasts such as pairs(emmeans(m0, ~A*B|C), interaction=TRUE)?

    I couldn’t find any package that computes effect sizes for lmerMod objects, but since the afex ANOVA functions (e.g., aov_ez) can do it quite easily (see the sketch below), I thought I would ask here.
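
    For comparison, this is the kind of one-liner I mean for a standard ANOVA (a sketch only; the data set d and the variables rt, A, B, and id are hypothetical):

        library(afex)

        # for a standard repeated-measures ANOVA, afex reports partial eta
        # squared directly via the anova_table argument
        a1 <- aov_ez(id = "id", dv = "rt", data = d, within = c("A", "B"),
                     anova_table = list(es = "pes"))
        a1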

    Any help with this issue would be great.

    All the best

  • #294

    blazko m. (b1azk0)
    Participant

    In case someone over there knows an answer, I have also posted this question on CrossValidated

    I’m actively monitoring CV as well as this topic.

    All the best

  • #295

    henrik
    Keymaster

    Unfortunately this is currently not possible. The problem is that the effect on the response scale needs to be normalized by some estimate of variability (e.g., a standard deviation), and it is not really clear which estimate to take in the case of a mixed model, as there are usually several. This is also one of the reasons why there is no easy way to calculate R^2 in LMMs: http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#how-do-i-compute-a-coefficient-of-determination-r2-or-an-analogue-for-glmms
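
    To make the problem concrete, here is a minimal sketch (assuming a hypothetical data set d with dependent variable rt, factors A and B, and random factor id). Even this simple model already provides several candidate estimates of variability:

        library(afex)  # also loads lme4

        m <- mixed(rt ~ A * B + (A | id), data = d)

        # several candidate SDs, none of which is obviously "the" denominator
        # for a standardized effect size:
        VarCorr(m$full_model)   # random-intercept SD, random-slope SD(s), correlations
        sigma(m$full_model)     # residual SD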

    I believe that most of these problems are also discussed in a recent Psych Methods paper which can be found here:
    Rights, J. D., & Sterba, S. K. (2018). Quantifying explained variance in multilevel models: An integrative framework for defining R-squared measures. Psychological Methods. Advance online publication. http://dx.doi.org/10.1037/met0000184

    The fact that calculating a global measure of model fit (such as R2) is already riddled with complications, and that no simple single number can be found, should be a hint that doing so for a subset of the model parameters (i.e., main effects or interactions) is even more difficult. Given this, I would not recommend trying to find a measure of standardized effect size for mixed models.
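
    For the global R2, the decomposition discussed in the FAQ linked above is implemented, for example, in MuMIn, and tellingly it already returns two numbers rather than one (continuing the sketch with the hypothetical model m from above):

        library(MuMIn)

        # marginal R2 (fixed effects only) and conditional R2 (fixed + random effects)
        r.squaredGLMM(m$full_model)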

    It is also important to note that the APA in fact recommends unstandardized over standardized effect sizes. This is even mentioned in the first paragraph of the Wikipedia article on effect sizes: https://en.wikipedia.org/wiki/Effect_size

    I believe that a similar message, of reporting unstandardized effect sizes, is conveyed in a different recent Psych Methods paper:
    Pek, J., & Flora, D. B. (2018). Reporting effect sizes in original psychological research: A discussion and tutorial. Psychological Methods, 23(2), 208-225. http://dx.doi.org/10.1037/met0000126

    I know you still need to handle the reviewer somehow. My first suggestion is to report unstandardized effect sizes and cite the corresponding APA recommendation (we did this, e.g., here, Table 2; a short sketch of how to obtain such estimates follows at the end of this post). Alternatively, you could try to follow some of the recommendations in the Rights and Sterba paper. Finally, if this also does not help you, you might tell the reviewer something like:

    Unfortunately, due to the way that variance is partitioned in linear mixed models (e.g., Rights & Sterba, in press, Psych Methods), there does not exist an agreed-upon way to calculate standardized effect sizes for individual model terms such as main effects or interactions. We nevertheless decided to primarily employ mixed models in our analysis because mixed models are vastly superior to alternative approaches in controlling Type I errors, and consequently results from mixed models are more likely to generalize to new observations (e.g., Barr, Levy, Scheepers, & Tily, 2013; Judd, Westfall, & Kenny, 2012). Whenever possible, we report unstandardized effect sizes, which is in line with general recommendations on how to report effect sizes (e.g., Pek & Flora, 2018).

    References:
    Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255–278. https://doi.org/10.1016/j.jml.2012.11.001
    Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54–69. https://doi.org/10.1037/a0028347
    Pek, J., & Flora, D. B. (2018). Reporting effect sizes in original psychological research: A discussion and tutorial. Psychological Methods, 23, 208–225. https://doi.org/10.1037/met0000126
    Rights, J. D., & Sterba, S. K. (in press). Quantifying explained variance in multilevel models: An integrative framework for defining R-squared measures. Psychological Methods. https://doi.org/10.1037/met0000184
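
    As a rough sketch of the first suggestion, unstandardized effects (estimated marginal means and their pairwise differences, with confidence intervals) can be obtained directly from emmeans for a mixed object (again using the hypothetical model m from above):

        library(emmeans)

        # estimated marginal means and pairwise differences on the response scale
        emmeans(m, ~ A)
        confint(pairs(emmeans(m, ~ A * B)))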

    • #300

      Xinming Xu
      Participant

      Then how should one perform a power analysis, as also required by journals, if a standardized effect size is not available (e.g., for PANGEA)? Thanks.

      • #306

        henrik
        Keymaster

        Good question, and honestly, I am not sure. PANGEA seems to require some d-type measure of standardized effect size. I think you could use one of two approaches:
        1. Use some reasonable default value (e.g., a small effect size) and explain that this gives something of a lower bound on power, because you do not expect a smaller effect but likely a larger one.
        2. Alternatively, standardize the observed mean difference from a previous study by a reasonable measure of standard deviation for that specific difference; in the mixed-model case, perhaps the by-participant random-slope standard deviation (see the sketch below). How exactly to do this really depends on the specific case, but if one makes a reasonable argument for why it is acceptable for the sake of the power analysis, that should be fine.
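
        For example, something along these lines (a sketch only; m_prev, the grouping factor id, and the slope name A1 are hypothetical and depend on the previous model and its contrast coding):

            library(lme4)

            # variance components of the previous study's fitted model
            vc <- as.data.frame(VarCorr(m_prev$full_model))

            # by-participant random-slope SD for the effect of interest
            slope_sd <- vc$sdcor[vc$grp == "id" & vc$var1 == "A1" & is.na(vc$var2)]

            # observed condition difference from the previous study (hypothetical value)
            mean_diff <- 50

            mean_diff / slope_sd  # d-type value to enter into PANGEA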

        As is perhaps clear from this paragraph, I do not often use power analysis myself. For highly parameterized models like mixed models, it requires so many assumptions that it is really unclear what its value is. If possible, I would avoid it and make other arguments for why I decided to collect a specific number of participants (e.g., prior sample sizes, money, or time restrictions).

        • #307

          Xinming Xu
          Participant

          Thanks for your suggestions. I might just use some default values then.

  • #296

    blazko m. (b1azk0)
    Participant

    Henrik, thank you very much!
    This is a really insightful answer and it clears up a lot for me.
    Great help.
