Logistic models using mixed()
Tagged: logistic regression
 This topic has 4 replies, 2 voices, and was last updated 1 year, 5 months ago by henrik.



October 24, 2018 at 23:35 GMT+0000 #320 | David Sidhu (Participant)
I just read your excellent new chapter. The note at the end, “However, due to the nonlinear nature of most link functions, the interpretations of most model predictions, specifically of lower-order effects in factorial designs, can be quite challenging,” gave me pause. I have used glmer() to analyze 2×2 designs in the past. I was hoping to use mixed() to analyze a 2×2×2 (all within subjects) experiment in which subjects made binary decisions to stimuli. I am interested in fitting all possible fixed effects (i.e., all main effects, two-way interactions, and the three-way interaction).
Is there any reason I should not use mixed() to estimate parameters and p values for all effects (using effects coding) and then interpret the main effects?

October 24, 2018 at 23:46 GMT+0000 #322 | David Sidhu (Participant)
I’ll add that I don’t have > 40 levels of my random effects, so I assume I should use parametric bootstrapping? Is the afex package, used in this way, the best option available to me?

October 25, 2018 at 18:49 GMT+0000 #323 | henrik (Keymaster)
There are several questions here:
1. Is afex the best option for me?
If you want to stay in the frequentist realm, then afex is probably as easy as it gets (but I might of course be biased). If you can consider going Bayesian, then both rstanarm and brms are pretty good options. However, note that with those packages you have to make sure to use the correct coding (e.g., via afex::set_sum_contrasts()) when running the model; afex does so automatically, but the other packages do not.
2. Should I use parametric bootstrapping?
If computationally feasible, that would be great. If not, LRTs are your only remaining option.
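For concreteness, a mixed() call for a 2×2×2 within-subjects binomial design with parametric bootstrapping might look roughly like this (a sketch only; the data frame dat, binary response resp, factors A, B, C, and subject identifier id are hypothetical names, and the random-effects structure may need simplifying if the maximal model does not converge):

```r
library(afex)  # loads lme4; mixed() sets sum-to-zero contrasts automatically

# Hypothetical data: 'dat' with a 0/1 response 'resp', within-subject
# factors A, B, and C, and subject identifier 'id'.
m <- mixed(
  resp ~ A * B * C + (A * B * C | id),  # maximal by-subject random effects
  data      = dat,
  family    = binomial,                 # logistic link
  method    = "PB",                     # parametric-bootstrap p values
  args_test = list(nsim = 1000)         # number of bootstrap samples
)
m  # prints bootstrap tests for all main effects and interactions
```

With method = "PB", each test refits the model nsim times, so expect this to be slow for anything but small models.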
3. However, due to the nonlinear nature of most link functions, the interpretations of most model predictions, specifically of lower-order effects in factorial designs, can be quite challenging.
What we mean here is that, due to the nonlinear nature of the link function, the estimated lower-order effects might not faithfully represent the lower-order effects in the data. So it is worth checking whether a lower-order effect actually represents a pattern that is in the data and is not an artifact. To do so, compare the marginal estimates on the response scale with those in the data. If these agree, you should be mostly fine.
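One way to do this comparison is via emmeans, which afex supports; a sketch (assuming a fitted mixed() object m and the hypothetical data frame dat with response resp and factor A from above):

```r
library(emmeans)

# Model-implied marginal probabilities for factor A, back-transformed
# from the logit to the response scale:
emmeans(m, ~ A, type = "response")

# Observed condition proportions for comparison:
aggregate(resp ~ A, data = dat, FUN = mean)
```

If the back-transformed marginal estimates and the observed proportions tell the same story, the lower-order effect is likely not a link-function artifact.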

October 25, 2018 at 20:08 GMT+0000 #324 | David Sidhu (Participant)
Thanks very much for the reply! I was wondering if you could just comment on parametric bootstrapping (and potentially LRTs, though I don’t seem to have the number of levels in my random effects to make this work) vs. the p values that glmer() generates? I believe that these are based on Wald tests. I seem to be getting convergence errors when using afex that don’t occur when just using lme4 and glmer(). Does this mean that the results of glmer() shouldn’t be trusted?
Note that this is after setting:
control = glmerControl(optCtrl = list(maxfun = 1e6))
as well as trying
all_fit = TRUE
I get two types of convergence errors:
Model failed to converge with max|grad| = 0.00533471 (tol = 0.001, component 1)
and
unable to evaluate scaled gradient
Model failed to converge: degenerate Hessian with 1 negative eigenvalues

October 26, 2018 at 17:37 GMT+0000 #326 | henrik (Keymaster)
I seem to be getting convergence errors when using afex that don’t occur when just using lme4 and glmer(). Does this mean that the results of glmer() shouldn’t be trusted?
No, this is not a good reason. Note that afex uses glmer(). So if the model uses the same parameterization (which can be achieved by running afex::set_sum_contrasts() before fitting with glmer()), then the (full) models should be identical. In this case, running summary() on both models (the one fitted with afex and the one fitted with glmer()) will reveal that they are identical.
The problem is really that Wald tests for generalized models are not particularly trustworthy. In my opinion, LRTs are considerably better. However, as said above, if computationally feasible, the parametric bootstrap is the better choice. But of course, if fitting the model takes very long, then it is not a good option (as the parametric bootstrap refits the model many times, preferably 1000 times or more).
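To illustrate the equivalence, a sketch with hypothetical names (dat, resp, A, B, id); after setting sum contrasts, the full model fitted by mixed() and the model fitted directly with glmer() should have identical estimates:

```r
library(afex)
library(lme4)

set_sum_contrasts()  # make glmer() use the same effects coding as afex

m_afex  <- mixed(resp ~ A * B + (1 | id), data = dat,
                 family = binomial, method = "LRT")
m_glmer <- glmer(resp ~ A * B + (1 | id), data = dat, family = binomial)

# Compare the two fits; coefficients and log-likelihoods should match:
summary(m_afex$full_model)
summary(m_glmer)
```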
Note that the convergence warnings can be false positives, some more on that:
https://rdrr.io/cran/lme4/man/convergence.html
https://biologyforfun.wordpress.com/2018/04/09/help-i-have-convergence-warnings/ (note that this blog seems somewhat too alarmist for my taste)
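One way to probe whether such a warning is a false positive is lme4's allFit(), which refits the model with several optimizers so you can check whether they all land on essentially the same solution (a sketch; m_glmer stands for a previously fitted glmer() model):

```r
library(lme4)

# Refit the same model with all available optimizers:
aa <- allFit(m_glmer)
ss <- summary(aa)

ss$llik   # log-likelihoods across optimizers (should be nearly identical)
ss$fixef  # fixed-effect estimates across optimizers (should agree closely)
```

If all optimizers agree, the convergence warning is most likely benign.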

