Logistic models using mixed()

    • #320
      David Sidhu
      Participant

      I just read your excellent new chapter. The note at the end “However, due to the nonlinear nature of most link functions, the interpretations of most model predictions, specifically of lower-order effects in factorial designs, can be quite challenging.” gave me pause. I have used glmer() to analyze 2×2 designs in the past. I was hoping to use mixed() to analyze a 2×2×2 (all within-subjects) experiment in which subjects made binary decisions to stimuli. I am interested in fitting all possible fixed effects (i.e., all main effects, two-way interactions, and the three-way interaction).

      Is there any reason I should not use mixed() to estimate parameters and p values for all effects (using effects coding) and then interpret the main effects?
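
      For concreteness, here is a sketch of the kind of call I have in mind (the data frame d, factors A, B, and C, and the variable names are placeholders):

      # Sketch: 2x2x2 within-subjects design, binary response,
      # all fixed effects, fit as a logistic GLMM via afex::mixed()
      library(afex)

      m <- mixed(
        response ~ A * B * C + (A * B * C | subject),  # maximal random slopes
        data   = d,
        family = binomial,  # logistic link for binary decisions
        method = "LRT"      # p values via likelihood-ratio tests
      )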

    • #322
      David Sidhu
      Participant

      I’ll add that I don’t have > 40 levels in my random effects, so I assume I should use parametric bootstrapping? Is the afex package, used in this way, the best option available to me?

    • #323
      henrik
      Keymaster

      There are several questions here:

      1. Is afex the best option for me?

      If you want to stay in the frequentist realm, then afex is probably as easy as it gets (but I might of course be biased).

      If you can consider going Bayesian, then both rstanarm and brms are pretty good options. However, note that if you use those packages you have to make sure to use the correct factor coding (e.g., via afex::set_sum_contrasts()) when running the model. afex does so automatically, but the other packages do not.
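
      For example, a minimal sketch with brms (the data frame d, factors A, B, and C, and the binary response are hypothetical):

      library(afex)
      library(brms)

      # brms does not apply effects coding automatically, so set it first:
      afex::set_sum_contrasts()

      fit <- brm(
        response ~ A * B * C + (A * B * C | subject),
        data   = d,
        family = bernoulli()  # logistic regression in brms
      )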

      2. Should I use parametric bootstrapping?

      If computationally feasible, that would be great. If not, LRTs are your only remaining option.
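
      As a sketch (formula and data hypothetical), the parametric bootstrap is requested via method = "PB":

      m_pb <- mixed(
        response ~ A * B * C + (A * B * C | subject),
        data      = d,
        family    = binomial,
        method    = "PB",
        args_test = list(nsim = 1000)  # number of bootstrap samples
      )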

      3. However, due to the nonlinear nature of most link functions, the interpretations of most model predictions, specifically of lower-order effects in factorial designs, can be quite challenging.

      What we mean here is that, due to the non-linear nature of the link function, the lower-order effects in the model might not faithfully represent the corresponding lower-order patterns in the data. So it is worth checking whether the lower-order effects actually reflect patterns that are present in the data and are not artifacts. To do so, compare the marginal estimates on the response scale with the corresponding values in the data. If these agree, you should be mostly fine.
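
      One way to carry out this check, sketched with emmeans (the fitted model object m and factor A are hypothetical):

      library(emmeans)

      # model-implied marginal means, back-transformed to the probability scale
      emmeans(m, ~ A, type = "response")

      # raw condition proportions in the data, for comparison
      aggregate(response ~ A, data = d, FUN = mean)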

    • #324
      David Sidhu
      Participant

      Thanks very much for the reply! I was wondering if you could comment on parametric bootstrapping (and potentially LRTs, though I don’t seem to have enough levels in my random effects to make those work) vs. the p values that glmer() generates? I believe those are based on Wald tests. I also seem to be getting convergence errors when using afex that don’t occur when just using lme4 and glmer(). Does this mean that the results of glmer() shouldn’t be trusted?

      Note that this is after setting:

      control = glmerControl(optCtrl = list(maxfun = 1e6))

      as well as trying

      all_fit = TRUE
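
      For reference, a sketch of the full call these settings go into (formula and data are placeholders):

      m <- mixed(
        response ~ A * B * C + (A * B * C | subject),
        data    = d,
        family  = binomial,
        method  = "PB",
        control = glmerControl(optCtrl = list(maxfun = 1e6)),  # more evaluations
        all_fit = TRUE  # additionally try all available optimizers
      )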

      I get two types of convergence errors:

      Model failed to converge with max|grad| = 0.00533471 (tol = 0.001, component 1)

      and

      unable to evaluate scaled gradient
      Model failed to converge: degenerate Hessian with 1 negative eigenvalues

    • #326
      henrik
      Keymaster

      I seem to be getting convergence errors when using afex that don’t occur when just using lme4 and glmer(). Does this mean that the results of glmer() shouldn’t be trusted?

      No, this is not a good reason. Note that afex uses glmer() under the hood. So if the model uses the same parameterization (which can be achieved by running afex::set_sum_contrasts() before fitting with glmer()), then the (full) model should be identical. In this case, running summary() on both models (the one fitted with afex and the one fitted with glmer()) will reveal that they are identical.
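
      A quick sketch of that check (objects and formula hypothetical):

      afex::set_sum_contrasts()  # same parameterization afex uses internally

      m_afex  <- mixed(response ~ A * B + (1 | subject),
                       data = d, family = binomial, method = "LRT")
      m_glmer <- glmer(response ~ A * B + (1 | subject),
                       data = d, family = binomial)

      summary(m_afex$full_model)  # should match...
      summary(m_glmer)            # ...this output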

      The problem is really that Wald tests for generalized models are not particularly trustworthy. In my opinion, LRTs are considerably better. However, as said above, if computationally feasible, the parametric bootstrap is the better choice. But of course, if fitting the model takes very long, then it is not a good option (as the parametric bootstrap refits the model many times, preferably 1000 times or more).

      Note that the convergence warnings can be false positives; some pointers on that:
      https://rdrr.io/cran/lme4/man/convergence.html
      https://biologyforfun.wordpress.com/2018/04/09/help-i-have-convergence-warnings/ (note that this blog seems somewhat too alarmist for my taste)
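
      One practical way to probe whether a warning is a false positive (a sketch; m_glmer is a hypothetical fitted glmer model) is to refit with all available optimizers and compare the estimates:

      library(lme4)

      af <- allFit(m_glmer)
      summary(af)$fixef  # if the fixed effects agree across optimizers,
                         # the warning is most likely a false positive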
