Printing tables with the sjt.lmer function (Mixed Models forum)

    • #191
      Jarkko Hautala
      Participant

      I’ve managed to run my analyses with the afex mixed() function.
      Now I want to print result tables, and the most convenient way to do so is the sjt.lmer() function from the sjPlot package.
      However, it does not seem to find the Type III test p-values but instead reports different p-values and possibly (Wald) confidence intervals.
      Is this something that can be solved on the afex side, or is it solely an issue of how sjPlot handles the object returned by mixed()?

    • #192
      henrik
      Keymaster

      I do not think afex and the sjt. or sjp. functions are compatible. The main reason is that the sj functions focus on the standard lmer output and its fixed-effects parameters. In contrast, afex focuses on tests of effects, and in many cases an effect involves more than one parameter.

      However, afex provides several functions that support nice printing of the "mixed" objects returned by mixed(), mainly nice() and anova(). See the following example:

      library("afex")
      data("Machines", package = "MEMSS") 
      
      # simple model with random-slopes for repeated-measures factor
      m1 <- mixed(score ~ Machine + (Machine|Worker), data=Machines)
      m1
      # Mixed Model Anova Table (Type 3 tests, KR-method)
      # 
      # Model: score ~ Machine + (Machine | Worker)
      # Data: Machines
      #    Effect   df        F p.value
      # 1 Machine 2, 4 32.80 **    .003
      # ---
      # Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘+’ 0.1 ‘ ’ 1
      
      nice(m1)
      # Mixed Model Anova Table (Type 3 tests, KR-method)
      # 
      # Model: score ~ Machine + (Machine | Worker)
      # Data: Machines
      #    Effect   df        F p.value
      # 1 Machine 2, 4 32.80 **    .003
      # ---
      # Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘+’ 0.1 ‘ ’ 1
      
      knitr::kable(nice(m1))
      # |Effect  |df   |F        |p.value |
      # |:-------|:----|:--------|:-------|
      # |Machine |2, 4 |32.80 ** |.003    |
      
      anova(m1)
      # Mixed Model Anova Table (Type 3 tests, KR-method)
      # 
      # Model: score ~ Machine + (Machine | Worker)
      # Data: Machines
      #         num Df den Df      F   Pr(>F)   
      # Machine      2      4 32.803 0.003302 **
      # ---
      # Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
      
      knitr::kable(anova(m1))
      # |        | num Df| den Df|        F|    Pr(>F)|
      # |:-------|------:|------:|--------:|---------:|
      # |Machine |      2|      4| 32.80302| 0.0033024|

      Please let me know if this is not enough. If you definitely want to use the sj family of functions, you would need to contact the maintainer and ask whether he is willing to support the objects returned by afex.

    • #193
      Jarkko Hautala
      Participant

      Thanks, I also opened an issue on the sjPlot GitHub page.

      I believe I need a few other values, at least confidence intervals and the variance explained by the fixed and random parts of the model.
      What would be the best way to get these?

      I’ve used:

      r.squaredGLMM(fit$full_model)
      confint(fit$full_model)

      but confint, at least, seems to take a lot of computation time, especially if continuous predictors are involved.

    • #194
      henrik
      Keymaster

      I do not know exactly which package r.squaredGLMM comes from, but I expect that, similar to confint, it works on the level of the model parameters and not on the level of the model effects (or ‘model terms’). So they are in principle incompatible with afex. If in your case all effects consist of only one parameter, then you can of course use them.
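
      To illustrate with the m1 Machines model from the earlier post (just a sketch): confint() operates on the lmer fit stored in the full_model slot and returns one interval per parameter, so the three-level Machine factor yields two parameter-level intervals (with afex’s default sum-to-zero coding) rather than a single interval for the Machine effect.

      # Wald intervals for the fixed-effect parameters of the underlying lmer fit;
      # parm = "beta_" restricts confint.merMod to the fixed effects
      confint(m1$full_model, parm = "beta_", method = "Wald")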

      In such a case, the simplest way would be to add these values to the data.frames returned by either nice() or anova(). Continuing from the previous example:

      df1 <- nice(m1)
      df1$new_col <- 2
      df1
      # Mixed Model Anova Table (Type 3 tests, KR-method)
      # 
      # Model: score ~ Machine + (Machine | Worker)
      # Data: Machines
      #    Effect   df        F p.value new_col
      # 1 Machine 2, 4 32.80 **    .003       2
      # ---
      # Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘+’ 0.1 ‘ ’ 1
      
      df2 <- anova(m1)
      df2$new_col <- 2
      df2
      # Mixed Model Anova Table (Type 3 tests, KR-method)
      # 
      # Model: score ~ Machine + (Machine | Worker)
      # Data: Machines
      #         num Df den Df      F    Pr(>F) new_col
      # Machine      2      4 32.803 0.0033024       2

      As you can see, both functions simply return a data.frame for which specific print methods exist, and you can add additional columns to it. If you want to get rid of the specific print methods, wrap the returned object in an as.data.frame() call. For example:

      as.data.frame(df2)
      #         num Df den Df        F      Pr(>F) new_col
      # Machine      2      4 32.80302 0.003302373       2

      Note that you can also get confidence intervals via the methods implemented in emmeans, which work nicely with afex; see: https://cran.r-project.org/package=emmeans
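
      For example, a minimal sketch continuing with the m1 model from above (assuming emmeans is installed):

      library("emmeans")
      # estimated marginal means for the three machines, with confidence intervals
      confint(emmeans(m1, "Machine"))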

    • #195
      Jarkko Hautala
      Participant

      Ok, so emmeans seems to be the way to go, as I have rather complex models to report, and in some models I have factors with three levels (= multiple parameters?).

    • #196
      henrik
      Keymaster

      In general, I would give up on the idea of having something like a standardized effect size on the level of the effects (i.e., model terms) in mixed models. They generally do not appear to work as soon as a model includes random slopes or multiple random-effects grouping factors. The same applies to something like R-squared.

      Confidence intervals can be obtained on the level of the cell means (i.e., factor levels or linear combinations thereof) using emmeans. For linear mixed models, emm_options(lmer.df = ...) allows you to choose how the degrees of freedom are calculated, which can have dramatic consequences for speed. Three options exist: lmer.df = c("kenward-roger", "satterthwaite", "asymptotic").
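
      For example (a quick sketch, again using the m1 Machines model from the earlier post):

      library("emmeans")
      # "asymptotic" skips the expensive degrees-of-freedom computation entirely
      emm_options(lmer.df = "asymptotic")
      emmeans(m1, "Machine")  # intervals are now z-based (df = Inf)
      # restore the slower but more accurate default afterwards
      emm_options(lmer.df = "kenward-roger")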

      It is not always trivial to combine the table of effects with a measure of effect size, but one can sometimes find ways. For example, see Table 2 in: http://singmann.org/download/publications/2015_kellen_singmann_vogt_klauer.pdf

    • #197
      Jarkko Hautala
      Participant

      Many thanks, I was now able to plot figures:

      R <- ref_grid(fit, type = "response", at = list(a = c(-2, -1, 0, 1, 2)))
      A2 <- emmeans(R, specs = c("a", "b", "c"))
      emmip(A2, b ~ a | c, CIs = TRUE)

      And also confidence intervals for a result table, e.g.:
      R1 <- ref_grid(fit, type = "response")
      A3 <- emmeans(R1, specs = c("a"))

      Here a is a continuous predictor, and b and c are categorical predictors.

      It’s a pity if no variance-explained or equivalent value can be computed; such a measure would be highly appreciated by researchers.
      Of course, we get a sense of how strong the predictors are by looking at the figures.

    • #198
      Jarkko Hautala
      Participant

      One thing more:

      Is there a way to back-transform the values returned by the summary() function for a mixed object?

    • #199
      henrik
      Keymaster

      I am sorry, but I do not understand what you want to back-transform into what. In general, a reproducible example (e.g., http://stackoverflow.com/q/5963269/289572) helps a lot in this regard. But you at least have to show the output you have and then tell me exactly what you want.

    • #200
      Jarkko Hautala
      Participant

      fit <- mixed(log(A) ~ B + C + (1 + B | id) + (1 + B | item), method = "S", data = dat2, progress = TRUE, cl = cl)
      summary(fit)

      Here I would like to get the regression estimates and the random-effect variances on the original scale, not on the log-transformed scale.

      Another problem I’m encountering is that when I try to use the emmeans, emmip, or summary functions, it fails if the model includes the expand_re = TRUE specification:

      fit <- mixed(log(A) ~ B + C + (1 + B || id) + (1 + B || item), method = "S", data = dat2, progress = TRUE, expand_re = TRUE, cl = cl)
      Grid <- ref_grid(fit, at = list(C = c(-2, -1, 0, 1, 2)))
      EMM <- emmeans(Grid, specs = c("C", "B"))
      emmip(EMM, B ~ C, CIs = TRUE, type = "response")

      Error in eval(predvars, data, env) : object ‘re1.B1’ not found

    • #203
      henrik
      Keymaster

      The output from summary() for mixed() objects is identical to the output of summary() for lmer() objects (this is why you get output on the level of the parameters and not on the level of the model terms/effects). So if this is possible for normal lmer() models, it is possible here as well (whether it makes sense in your case is not something I can answer). If you want to get at the contents of the summary output, you can save it and work with it:

      summ <- summary(fit)                # save the summary object
      as.data.frame(summ$varcor)          # random-effects variances and correlations
      summ$coefficients                   # fixed-effects estimates, SEs, and tests
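
      If what you are after are estimates on the original response scale, one possibility (a sketch only; whether it is appropriate for your model is for you to decide) is to let emmeans do the back-transformation: because log(A) is part of the model formula, emmeans detects the transformation and can undo it when type = "response" is requested.

      library("emmeans")
      # back-transformed estimated marginal means (and CIs) on the original A scale;
      # this assumes B is one of the factors in your model
      emmeans(fit, "B", type = "response")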
      

      The problem when using expand_re = TRUE is that it creates a new data.frame that is used for fitting. When something later tries to access these expanded columns in your original data, it fails. There are several ways around this.
      1. You could avoid accessing it: emmeans only tries to access the data if lmer.df is "satterthwaite":

      library("afex")
      data("Machines", package = "MEMSS") 
      m1 <- mixed(score ~ Machine + (Machine  || Worker), data=Machines, expand_re = TRUE)
      
      emm_options(lmer.df = "kenward-roger")
      emmeans(m1, "Machine")  #works
      #  Machine   emmean       SE    df lower.CL upper.CL
      #  A       52.35556 2.397324  7.35 46.74087 57.97024
      #  B       60.32222 2.633283  9.04 54.36897 66.27548
      #  C       66.27222 2.836796 11.60 60.06746 72.47698
      # 
      # Degrees-of-freedom method: kenward-roger 
      # Confidence level used: 0.95 
      
      emm_options(lmer.df = "satterthwaite")
      emmeans(m1, "Machine") #fails
      # Error in eval(predvars, data, env) : object 're1.Machine1' not found
      
      emm_options(lmer.df = "asymptotic")
      emmeans(m1, "Machine") #works
      #  Machine   emmean       SE  df asymp.LCL asymp.UCL
      #  A       52.35556 2.397324 Inf  47.65689  57.05422
      #  B       60.32222 2.633283 Inf  55.16108  65.48336
      #  C       66.27222 2.836796 Inf  60.71220  71.83224
      # 
      # Degrees-of-freedom method: asymptotic 
      # Confidence level used: 0.95

      2. You can overwrite your existing data.frame with one that contains the missing columns (note that this might change the number of rows if you have missing data). I would only do this on a copy of my real data:

      m1 <- mixed(score ~ Machine + (Machine  || Worker), data=Machines, expand_re = TRUE)
      Machines <- mixed(score ~ Machine + (Machine  || Worker), data=Machines, expand_re = TRUE, return = "data")
      emm_options(lmer.df = "satterthwaite")
      emmeans(m1, "Machine") # works now
      #  Machine   emmean       SE    df lower.CL upper.CL
      #  A       52.35556 2.397324  7.35 46.74094 57.97017
      #  B       60.32222 2.633283  9.04 54.36898 66.27546
      #  C       66.27222 2.836796 11.56 60.06544 72.47901
      # 
      # Degrees-of-freedom method: satterthwaite 
      # Confidence level used: 0.95 