Forum Replies Created
henrik
Keymaster
Solving specific problems without a reproducible example is generally difficult. At least for the data in the vignette, it still works:
library("afex") # needed for mixed() and attaches lme4 automatically. library("emmeans") # emmeans is needed for follow-up tests library("multcomp") # for advanced control for multiple testing/Type 1 errors. data("fhch2010") # load fhch <- droplevels(fhch2010[ fhch2010$correct,]) # remove errors str(fhch2010) # structure of the data m9s <- mixed(log_rt ~ task*stimulus*density*frequency + (stimulus+frequency||id)+ (task|item), fhch, control = lmerControl(optCtrl = list(maxfun = 1e6)), expand_re = TRUE) emm_options(lmer.df = "asymptotic") emm_i1 <- emmeans(m9s, "frequency", by = c("stimulus", "task")) update(pairs(emm_i1), by = NULL, adjust = "holm") summary(as.glht(update(pairs(emm_i1), by = NULL)), test = adjusted("free"))
This produces:
Note: adjust = "tukey" was changed to "sidak"
because "tukey" is only appropriate for one set of pairwise comparisons

     Simultaneous Tests for General Linear Hypotheses

Linear Hypotheses:
                                 Estimate Std. Error z value Pr(>|z|)
low - high, word, naming == 0     0.05877    0.01491   3.941 0.000162 ***
low - high, nonword, naming == 0 -0.20699    0.01497 -13.823  < 2e-16 ***
low - high, word, lexdec == 0     0.06353    0.01669   3.807 0.000162 ***
low - high, nonword, lexdec == 0  0.11086    0.01675   6.619 1.08e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Adjusted p values reported -- free method)
henrik
Keymaster
If you do want the sequential-type tests in which each test is not adjusted for effects of a higher order, you can simply use
type = 2
, as this is exactly what it does. In mixed
the implementation of Type II depends on the specific method, but the general approach is the same across methods. So in this case I would run Type II tests (i.e.,
type = 2
) and then look at the highest-order effect that is significant. Then, refit a model that only includes those effects and use this model to inspect the effects. However, if there are interactions with continuous variables, I would suggest using emmeans::emtrends
for this instead of looking at the parameter estimates directly (i.e., emmeans::emtrends
for looking at conditional slopes of interactions of continuous with categorical/continuous variables).
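To make this concrete, a minimal sketch of that workflow; the data set dat, the factors a and b, the continuous covariate x, and the grouping factor id are placeholders (not from the original question), and calling emtrends on a mixed object assumes a reasonably recent afex/emmeans:

library("afex")     # for mixed()
library("emmeans")  # for emtrends()

# Type II tests: each effect is not adjusted for higher-order terms
m_t2 <- mixed(dv ~ a * b * x + (1 | id), data = dat, type = 2)
m_t2

# suppose the highest-order significant effect involving x is the a:x interaction;
# refit with only the retained terms and inspect the conditional slopes of x
m_red <- mixed(dv ~ a * x + b + (1 | id), data = dat, type = 2)
emtrends(m_red, "a", var = "x")   # slope of x within each level of a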
henrik
Keymaster
If you have interactions involving categorical variables, I do not see why you would use treatment contrasts. In my eyes sum-to-zero contrasts just make more sense. So I would not spend too much time thinking about what treatment contrasts would mean in that case.
The answer to your question only depends on the balance of the two categorical variables and is essentially the same as the Type III versus Type II sums-of-squares question. In the balanced case it does not matter; both models would lead to the same outcome. In the case of imbalance, removing the three-way interaction is the same as using Type II tests.
In case of imbalance, refitting without the three-way interaction does not adjust (or “control”) the two-way interactions for the three-way interaction. In case of observational data that is probably preferred. In case of experimental data, I would use the full model with the three-way interaction.
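A sketch of the two equivalent routes under imbalance, using hypothetical factors A, B, C and a placeholder random-effects structure:

library("afex")

# (1) Type II tests of the full model: two-way interactions are not adjusted
#     for the three-way interaction
m_t2  <- mixed(dv ~ A * B * C + (1 | id), data = dat, type = 2)

# (2) refit without the three-way interaction and use the default Type III tests;
#     for the two-way interactions this should give essentially the same tests
m_no3 <- mixed(dv ~ (A + B + C)^2 + (1 | id), data = dat)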
henrik
Keymaster
Without additional information it is difficult to say what the issue is here. It might be that in the original fit you used an older version of
lme4
and they have changed some optimizer settings. But it seems unlikely that an older result was more reliable than one from the current version. In other words, if you get different results with the recent version, I would probably trust those recent results more. You might try to set
all_fit = TRUE
to see if that helps. And in the second model you might want to remove the correlation, as you do in the first model. One more comment: usually one enters variables one wants to adjust for (i.e., gender and age) only as main effects. I am not sure the interaction of those with group is a good idea.
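Both suggestions in sketch form (model formula and variable names are placeholders):

library("afex")

# refit with all available optimizers to see whether results depend on the optimizer
m_all <- mixed(dv ~ group * time + (time | id), data = dat, all_fit = TRUE)

# drop the slope-intercept correlation via the double-bar syntax;
# expand_re = TRUE is needed for || with factors in mixed()
m_nc  <- mixed(dv ~ group * time + (time || id), data = dat, expand_re = TRUE)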
henrik
Keymaster
I do not feel competent to answer this question. More specifically, I have no experience in changing the
nAGQ
setting, nor am I aware of any study evaluating its effect on the error rates of a procedure. However, if the results are consistent across different settings of
nAGQ
and random-effects structures, that would encourage confidence in the results.
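If one wanted to run such a consistency check, a sketch with lme4 directly could look like this (placeholder names; note that nAGQ > 1 is only available for models with a single scalar random effect):

library("lme4")

m_laplace <- glmer(resp ~ cond + (1 | id), data = dat, family = binomial, nAGQ = 1)
m_agq     <- glmer(resp ~ cond + (1 | id), data = dat, family = binomial, nAGQ = 10)

# if the fixed-effect estimates and standard errors hardly change,
# the Laplace (nAGQ = 1) results look robust
summary(m_laplace)$coefficients
summary(m_agq)$coefficients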
henrik
Keymaster
The best approach is always to report specific contrasts from
emmeans
(i.e., the estimate from the contrast). They directly answer your research question and can easily be interpreted. I do not really understand your further suggestions. There is no such thing as one
b
in the case of more than two levels. Consider for example the case of the Machines
data with three levels.

library("afex")
library("emmeans")
data("Machines", package = "MEMSS")
emm_options(lmer.df = "asymptotic")

# simple model with random-slopes for repeated-measures factor
m1 <- mixed(score ~ Machine + (Machine|Worker), data = Machines)
em1 <- emmeans(m1, "Machine")
em1
#  Machine emmean   SE df lower.CL upper.CL
#  A         52.4 1.68  5     48.0     56.7
#  B         60.3 3.53  5     51.3     69.4
#  C         66.3 1.81  5     61.6     70.9
#
# Degrees-of-freedom method: kenward-roger
# Confidence level used: 0.95
The parameters that are actually estimated in the model are the differences from the intercept (i.e., the unweighted grand mean), as the following code shows:
round(summary(m1)$coefficients, 3)
#             Estimate Std. Error    df t value Pr(>|t|)
# (Intercept)   59.650      2.145 4.999  27.811    0.000
# Machine1      -7.294      1.079 4.999  -6.760    0.001
# Machine2       0.672      1.539 4.999   0.437    0.681

summary(em1)$emmean - mean(summary(em1)$emmean)
# [1] -7.2944444  0.6722222  6.6222222
We see that the parameters (the two
b
s), Machine1 and Machine2, are the first two differences. The third difference is the negative sum of the two. Thus, the average of the three differences is:

mean(summary(em1)$emmean - mean(summary(em1)$emmean))
# [1] 0
So try to figure out which contrasts are of interest to you or answer your research question and report those.
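Staying with the Machines example, specific contrasts can be obtained directly from the emmeans object; the custom contrast weights below are only an illustration:

pairs(em1)  # all pairwise differences (A-B, A-C, B-C), with multiplicity adjustment
contrast(em1, list("A vs. mean of B and C" = c(1, -0.5, -0.5)))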
henrik
Keymaster
The warning does not seem to be emitted from
afex
, but from the package performing the tests (lmerTest
or car
), so I cannot really explain it. Try using a different method
(e.g., method = "S"
) and hopefully this works.
henrik
Keymaster
scale()
does not return a vector. It is safer to do the centering directly. One of the following should work:
df$blockID.c <- df$blockID - 2.5
df$blockID.c <- df$blockID - mean(df$blockID)
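For completeness, a quick illustration of the scale() issue and a wrapped alternative, in case you prefer to keep using scale():

str(scale(df$blockID))  # returns a one-column matrix with attributes, not a plain vector
df$blockID.c <- as.numeric(scale(df$blockID, scale = FALSE))  # center only, coerce to vector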
henrik
Keymaster
As far as I understand your design, the
SM.blockDiff
and blockID
interaction only varies across participants. If this is indeed the case, it cannot receive by-participant random slopes. In this case, (blockID + SM.blockDiff|VP)
is indeed the maximal random-effects structure justified by the design for the by-participant grouping factor. For more on this I recommend our chapter:
I also do not see how centering
blockID
would do any harm. That is usually a good idea for continuous covariates. Again, I recommend reading:
Dalal, D. K., & Zickar, M. J. (2012). Some Common Myths About Centering Predictor Variables in Moderated Multiple Regression and Polynomial Regression. Organizational Research Methods, 15(3), 339–362. https://doi.org/10.1177/1094428111430540
henrik
Keymaster
mixed
should only emit a warning if you enter a continuous covariate, not an error. This warning tells you that all model terms blockID interacts with are now evaluated when blockID is zero. In your case this refers to all terms of your m1 (i.e., the main effects of condition and SM.blockDiff as well as their interaction); they are now all simple effects when blockID = 0. That is why centering makes sense, as the terms are then evaluated at the mean of blockID (if centered at the mean). For more on this issue see:
Dalal, D. K., & Zickar, M. J. (2012). Some Common Myths About Centering Predictor Variables in Moderated Multiple Regression and Polynomial Regression. Organizational Research Methods, 15(3), 339–362. https://doi.org/10.1177/1094428111430540
You can avoid the issue of centering by only adding
blockID
as a main effect:
quest ~ blockID + condition * SM.blockDiff + (blockID + SM.blockDiff|VP)
However, I am generally not a big fan of “controlling for” factors in this sense. It is often unclear to me what kind of substantive questions are answered by that if shared variance is indeed removed. Note that in any case, adding
blockID
only accounts for linear effects of time. If you want to take the idea of exhaustion or attention seriously, you need to consider more complicated models that allow for non-linear effects. See:
Baayen, H., Vasishth, S., Kliegl, R., & Bates, D. (2017). The cave of shadows: Addressing the human factor with generalized additive mixed models. Journal of Memory and Language, 94, 206–234. https://doi.org/10.1016/j.jml.2016.11.006

January 16, 2020 at 06:58 GMT+0000 in reply to: cl causes an error when passing weights variable #394
henrik
Keymaster
Sorry for the slow reply. Can you try using
weights = performance.expt2$wt
? If this still gives an error you might have to export
performance.expt2
to cl
first:
clusterExport(cl, "performance.expt2")
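For context, a sketch of the full sequence; the number of cores and the mixed() call are placeholders for your actual setup:

library("parallel")
cl <- makeCluster(2)                      # or however many cores you want to use
clusterExport(cl, "performance.expt2")    # make the data visible on the worker processes
# then pass the cluster to mixed() via its cl argument, e.g.
# mixed(..., data = performance.expt2, weights = performance.expt2$wt,
#       method = "PB", cl = cl)
stopCluster(cl)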
November 1, 2019 at 17:15 GMT+0000 in reply to: Mixed-design experiment with non-binary non-ordinal response variable #382
henrik
Keymaster
If the DV is categorical with five levels, you cannot use such a mixed model. Binomial models only support categorical data with two categories. For more categories you need to use multinomial logistic models. I find these models quite advanced and their results are not easy to interpret.
You can, however, split your categories into what are called nested dichotomies and analyse each of these with a binomial model. The theory is described in the books by John Fox.
Let’s say you only have three categories, A, B, C. Then you first pick one category of interest, such as A. You then make a binary variable with A versus the rest (i.e., B and C). You analyse this with a binomial model. In the next step you discard all observations in which the response is A and only analyse the remaining data with a new binomial variable B versus C. We did exactly this in one paper once: http://singmann.org/download/publications/Douven_Elqayam_Singmann_Wijnbergen-Huitink-HIT_in_press.pdf
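A sketch of how the two binomial models could be set up in R; the data frame dat, the response column resp, the predictor condition, and the grouping factor id are hypothetical:

library("afex")

# dichotomy 1: A versus (B or C), using all observations
dat$d1 <- ifelse(dat$resp == "A", 1, 0)
m_d1 <- mixed(d1 ~ condition + (1 | id), data = dat,
              family = binomial, method = "LRT")

# dichotomy 2: B versus C, using only the observations that are not A
dat_bc <- droplevels(dat[dat$resp != "A", ])
dat_bc$d2 <- ifelse(dat_bc$resp == "B", 1, 0)
m_d2 <- mixed(d2 ~ condition + (1 | id), data = dat_bc,
              family = binomial, method = "LRT")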
October 31, 2019 at 14:41 GMT+0000 in reply to: Mixed-design experiment with non-binary non-ordinal response variable #376
henrik
Keymaster
I am not sure I fully understand the question, but I guess the simple answer is yes. The question of how to map aspects of your design onto the random-effects part of the model (i.e., for which grouping factors you want random intercepts and for which fixed factors you want random slopes) is independent of the choice of family or link function. So you can easily do this for a response variable that is assumed to have a conditional normal distribution.
For an introduction to this topic see our chapter:
henrik
Keymaster
The idea of unstandardised effects is to simply report the effect on the response scale. For example, if the dependent variable is response time in seconds you report the effect in time (e.g., “the difference between conditions was 0.1 seconds” or “the estimated slope was 1.5 seconds”).
henrik
Keymaster
The reason I have not implemented the approach you suggest is that I believe it is not statistically fully appropriate. Specifically, the Mauchly test, like any significance test, has a specific power and Type I error rate (the latter of which is controlled by the alpha/significance level). I do not believe it is worthwhile to incorporate these additional error probabilities into an analysis. Instead, I believe that the small loss in power from always using Greenhouse-Geisser is overall a more appropriate and more conservative strategy.
Consider the case in which the Mauchly test fails to correctly detect a violation because the power is too low in a specific case. If this happens, you get inflated Type I error rates. I consider this a potentially larger drawback than occasionally committing a Type II error in case Greenhouse-Geisser is too strict.
I believe that, in general, using significance tests to test assumptions is not a good idea. For some arguments why, see:
https://stats.stackexchange.com/q/2824/442
https://stats.stackexchange.com/q/2492/442
Nevertheless, the current development version of
afex
now makes it easier to get assumption tests, see:
https://github.com/singmann/afex/pull/69
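For reference, always applying Greenhouse-Geisser is simply a matter of the anova_table argument, shown here with the obk.long example data (to my knowledge "GG" is also the default):

library("afex")
data(obk.long, package = "afex")
aov_ez("id", "value", obk.long,
       between = "treatment", within = c("phase", "hour"),
       anova_table = list(correction = "GG"))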
henrik
Keymaster
It does not matter how many levels each of your factors has. What matters is whether each participant sees all levels of a factor (i.e., whether it varies within participants). If this is the case, corresponding random slopes should be included.
henrik
Keymaster
That can happen due to hierarchical shrinkage, which nudges the individual-level effects to follow a normal distribution. If the individual-level effects do follow a normal distribution around their mean, which is one of the assumptions of the mixed-model framework, this should not have too dramatic an effect. Your plot suggests that this assumption is violated here. In any case, it also suggests that the specific pattern across levels of threat is not very strong.
It is difficult to say more without additional details (at least the corresponding standard errors). But removing random slopes should be done only sparingly. Try removing the correlations among slopes first. Please have a look at the discussion in my chapter: http://singmann.org/download/publications/singmann_kellen-introduction-mixed-models.pdf
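A sketch of both suggestions, with placeholder names (dv, threat, id, dat):

library("afex")

# keep the random slopes but drop their correlations with the intercept;
# expand_re = TRUE is required for the || syntax with factors in mixed()
m_nc <- mixed(dv ~ threat + (threat || id), data = dat, expand_re = TRUE)

# the estimated individual-level deviations (conditional modes) can also be
# inspected directly, e.g. to judge whether they look roughly normal around zero
re <- lme4::ranef(m_nc$full_model)$id
head(re)
hist(re[[2]])  # which column corresponds to which slope depends on the coding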
henrik
Keymaster
I forgot to say one more thing. If any of your factors or additional variables varies within participants, it should be added as a random slope. Random-intercept-only models, such as the one you fit, are generally quite dubious. Please see my discussion of this issue at: http://singmann.org/download/publications/singmann_kellen-introduction-mixed-models.pdf
henrik
Keymaster
I have some answers to your questions, but it makes sense to read up on variable selection in regression modeling, as this is probably the problem you are struggling with and it is not a trivial one. A good starting point may be ‘Regression Modeling Strategies’ by Frank Harrell (2015): https://www.springer.com/gb/book/9781441929181
Note that this book deals a lot with simple regression models (i.e., not mixed models), but the problem you have holds in the same manner. Model selection using AIC and hypothesis testing using significance tests do different things. One reason why they can diverge is that they use different cut-offs. However, usually a variable that dramatically improves AIC should also be significant in a significance test, because what such a test essentially does is fit a restricted model in which this variable is withheld and then compare the fits using something akin to a likelihood-ratio test of two nested models.
If a variable improves AIC alone but not jointly, this is probably a case of multicollinearity: https://en.wikipedia.org/wiki/Multicollinearity
Your additional predictor variables are probably themselves correlated with each other. So the predictive ability of one variable can often be covered by other variables that are correlated with it.
To answer your question directly: a variable should not really be non-significant yet improve the model considerably in terms of AIC, at least not when all the other variables are in the model and only this one is withheld.
To repeat: variable selection is generally a difficult problem, and many more complicated procedures (not implemented in afex or lme4) exist to tackle it in a more principled manner.
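To see the connection between the two criteria, one can compare the nested models directly (a placeholder model with covariate x and additional predictors z1, z2):

library("lme4")

m_full <- lmer(dv ~ x + z1 + z2 + (1 | id), data = dat, REML = FALSE)
m_red  <- lmer(dv ~     z1 + z2 + (1 | id), data = dat, REML = FALSE)

# reports AIC for both models and the likelihood-ratio test for dropping x,
# so the AIC difference and the p-value can be judged side by side
anova(m_red, m_full)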
henrik
Keymaster
Sorry for the slow reply, your post somehow fell through the cracks. You need to set
factorize = FALSE
in the call to use numerical covariates. For the example data that would be:
data(obk.long, package = "afex")
aov_ez("id", "value", obk.long,
       between = c("treatment", "gender"),
       within = c("phase", "hour"),
       covariate = "age", observed = c("gender", "age"),
       factorize = FALSE)
In your case this would be:
fit.grat2 <- aov_car(grat ~ tgrat + condition*time + Error(PPT/time),
                     data = mf_long2, covariate = "centgrat", factorize = FALSE)
I should probably add a corresponding note to the documentation.
March 5, 2019 at 16:51 GMT+0000 in reply to: Mixed model specification and centering of predictor #343
henrik
Keymaster
Treating something as a random-effects grouping factor (i.e., to the right side of
|
) leads to shrinkage. Levels of this factor (e.g., specific electrodes) for which the effect differs quite strongly from the mean are shrunk towards the overall mean. If you are willing to assume that an electrode whose effect diverges relatively strongly from the others most likely reflects measurement error, then it makes sense to treat it as a random effect.
The benefit of treating a factor as a random-effects grouping factor is that under this assumption (i.e., that the effect across the different electrodes is randomly distributed around the grand-mean effect) the overall prediction will be better.
There are two downsides: (1) if the assumption is false, you miss out on “true” differences. (2) You are unable to easily check for differences between electrodes.
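In sketch form, the two options might look as follows (variable names other than FrB, Cluster, and PatID are hypothetical):

library("afex")

# electrode as a random-effects grouping factor: electrode-specific effects
# are shrunk towards the grand mean
m_rand  <- mixed(amplitude ~ FrB * Cluster + (FrB | PatID) + (FrB | Electrode),
                 data = dat)

# electrode as a fixed factor: differences between electrodes can be tested directly
m_fixed <- mixed(amplitude ~ FrB * Cluster * Electrode + (FrB | PatID), data = dat)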
Having said that,
m2
looks somewhat reasonable. However, the same caveat as before regarding the continuous covariates holds. The main effects are tested when the covariates are 0. So 0 needs to be meaningful given the data (and one way to achieve this is by centering them).
I only wonder why not have the FrB
and Distress
interaction as random slopes:
... + (FrB * Cluster | PatID)
henrik
Keymaster
Thanks for the bug report. I have fixed this in the development version on github, which you can get via:
devtools::install_github("singmann/afex@master")
It might take some time until it gets on CRAN as there are some things I still want to fix before that.
henrik
Keymaster
“I seem to be getting convergence errors when using afex that don’t occur when just using lme4 and glmer(). Does this mean that the results of glmer() shouldn’t be trusted?”
No, this is not a good reason. Note that afex uses glmer. So if the model uses the same parameterization (which can be achieved by running afex::set_sum_contrasts() before fitting with glmer), then the (full) model should be identical. In this case, running summary on both models (the one fitted with afex and the one fitted with glmer) will reveal that they are identical.
The problem is really that Wald tests for generalized models are not particularly trustworthy. In my opinion, LRTs are much better in that case. However, as said above, if computationally feasible, parametric bootstrap is the better choice. But of course, if fitting the model takes very long, then this is not a good option (as parametric bootstrap fits the model several times, preferably 1000 times or more).
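A sketch of that check, with a placeholder model (afex attaches lme4, so glmer is available):

library("afex")

set_sum_contrasts()   # sum-to-zero coding, as mixed() uses internally
m_glmer <- glmer(resp ~ a * b + (1 | id), data = dat, family = binomial)
m_afex  <- mixed(resp ~ a * b + (1 | id), data = dat, family = binomial,
                 method = "LRT")

summary(m_glmer)
summary(m_afex$full_model)  # the fixed-effect estimates should match m_glmer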
Note that the convergence warnings can be false positives; for some more on that, see:
https://rdrr.io/cran/lme4/man/convergence.html
https://biologyforfun.wordpress.com/2018/04/09/help-i-have-convergence-warnings/ (note that this blog seems somewhat too alarmist for my taste)
henrik
Keymaster
There are several questions here:
1. Is afex the best option for me?
If you want to stay in the frequentist realm, then
afex
is probably as easy as it gets (but I might of course be biased). If you can consider going Bayesian, then both packages,
rstanarm
or brms
are pretty good options. However, note that if you use those packages you have to make sure to use the correct coding (e.g., afex::set_sum_contrasts()
) when running the model. afex
does so automatically, but not the other packages.
2. Should I use parametric bootstrapping?
If computationally feasible, that would be great. If not, LRTs are your only remaining option.
3. However, due to the nonlinear nature of most link functions, the interpretations of most model predictions, specifically of lower-order effects in factorial designs, can be quite challenging.
What we mean here is that, due to the non-linear nature of the link function, the lower-order effects might not faithfully represent your lower-order effects in the data. So it is worth checking whether the lower-order effects actually represent patterns that are in the data and are not an artifact. So compare the marginal estimates on the response scale with those in the data. If these make sense, you should be mostly fine.
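One way to do this check, sketched with a placeholder binomial model m (e.g., the full model from mixed()) and factors a and b:

library("emmeans")

# model-implied marginal means, back-transformed to the response (probability) scale
emmeans(m, ~ a * b, type = "response")

# compare with the simple aggregated proportions in the data
aggregate(resp ~ a + b, data = dat, FUN = mean)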
henrik
Keymaster
Yeah, that should probably be better documented. You need to pass the whole vector:
mixed(pCorrect ~ touchType + cued + (1 | exptPID), weights = touchCompare$nTotal,
      family = binomial, data = touchCompare)