warning: missing cells for some factors
March 23, 2018 at 15:26 GMT+0000 #215
statmerkur (Participant)

When I work with numeric covariates instead of contrasts in mixed() I get the following warning message:

In createDesignMat(rho) : missing cells for some factors (combinations of factors)
  care must be taken with type III hypothesis.

I think this is a false positive, because setting the same contrasts via set_sum_contrasts() doesn't produce any warnings:

library(afex)
data(sk2011.1)
d <- aggregate(response ~ id + inference, data = sk2011.1, FUN = mean)

set_sum_contrasts()
contrast_mat <- contr.sum(4)
d$c1 <- contrast_mat[, 1][d$inference]
d$c2 <- contrast_mat[, 2][d$inference]
d$c3 <- contrast_mat[, 3][d$inference]

summary(mixed(response ~ c1 + c2 + c3 + (1|id), d))  # warning displayed
summary(mixed(response ~ inference + (1|id), d))     # runs without warning

Am I right in thinking that the warning can be ignored, or is there something wrong with my contrast specification?
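As a sanity check (a minimal sketch building on the code above; the object names m_cov and m_fac are only for illustration), both specifications appear to fit the same underlying model:

m_cov <- mixed(response ~ c1 + c2 + c3 + (1|id), d)  # numeric-covariate version (warns)
m_fac <- mixed(response ~ inference + (1|id), d)     # factor version (no warning)

logLik(m_cov$full_model)
logLik(m_fac$full_model)
# the log-likelihoods should be identical, since c1-c3 are exactly the
# sum-contrast columns of inference, i.e. the fixed-effects design matrices match
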
March 24, 2018 at 19:09 GMT+0000 #216
henrik (Keymaster)

The question here really is what you want to achieve. The main goal of mixed is to provide tests of effects, such as main effects or interactions (also called model terms). Splitting a factor into its individual columns is therefore somewhat orthogonal to that goal. The reason this is not directly apparent from your call is the use of summary: summary gives you output based on the parameters and not based on the terms, unlike the other functions that deal with mixed objects, such as nice or anova (or even print). Invoking one of those makes the difference between the two calls apparent:

library(afex)
set_sum_contrasts()

data(sk2011.1)
d <- aggregate(response ~ id + inference, data = sk2011.1, FUN = mean)

contrast_mat <- contr.sum(4)
d$c1 <- contrast_mat[, 1][d$inference]
d$c2 <- contrast_mat[, 2][d$inference]
d$c3 <- contrast_mat[, 3][d$inference]

m1 <- mixed(response ~ c1 + c2 + c3 + (1|id), d)
nice(m1)
# Mixed Model Anova Table (Type 3 tests, KR-method)
#
# Model: response ~ c1 + c2 + c3 + (1 | id)
# Data: d
#   Effect     df        F p.value
# 1     c1 1, 117  7.79 **    .006
# 2     c2 1, 117     0.67     .41
# 3     c3 1, 117 10.52 **    .002
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘+’ 0.1 ‘ ’ 1

m2 <- mixed(response ~ inference + (1|id), d)
nice(m2)
# Mixed Model Anova Table (Type 3 tests, KR-method)
#
# Model: response ~ inference + (1 | id)
# Data: d
#      Effect     df       F p.value
# 1 inference 3, 117 5.15 **    .002
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘+’ 0.1 ‘ ’ 1

If, for some reason, you are actually interested in p-values for the individual parameters (which seems quite questionable for factors with more than two levels, as in the example), you can also get them via lmerTest's summary method:

summary(m2$full_model)
# [...]
# Fixed effects:
#             Estimate Std. Error      df t value Pr(>|t|)
# (Intercept)   79.141      2.495  39.000  31.725  < 2e-16 ***
# inference1     8.372      3.000 117.000   2.791  0.00614 **
# inference2    -2.459      3.000 117.000  -0.820  0.41395
# inference3    -9.728      3.000 117.000  -3.243  0.00154 **
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Note that for this to work you need to invoke set_sum_contrasts() beforehand. Alternatively, set set_data_arg = FALSE. One final comment: you might also want to take a look at the per_parameter argument.
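For example, a minimal sketch of what a per_parameter call could look like, using the objects from the code above (output omitted here):

# per_parameter requests Type 3 tests per parameter of the named term,
# rather than a single multi-df test for the whole term
m3 <- mixed(response ~ inference + (1|id), d,
            per_parameter = "inference")
nice(m3)  # should list one row per parameter (inference1, inference2, inference3)
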
March 27, 2018 at 16:16 GMT+0000 #221
statmerkur (Participant)

I am indeed interested in the individual parameters, as I have specific hypotheses for different a priori contrasts. Maybe using contr.sum() was not the best example to illustrate this. But thanks for reminding me that afex is designed to provide tests of (main) effects; it is now clear to me that using your package for this purpose seems inappropriate. However, when I use set_data_arg = FALSE I still get the same results.
March 27, 2018 at 16:30 GMT+0000 #222
henrik (Keymaster)

There are several different ways to test prespecified contrasts. This can be done in afex in the way described here. As you correctly notice, the p-values of the different methods are identical and the warnings appear to be inconsequential. You could also fit the model with normal (i.e., sum-to-zero) contrasts and then set up the contrasts afterwards via emmeans, as sketched below. There are surely further possibilities (e.g., fit the model with lmerTest::lmer and use summary). All of those should give you the same results (as long as you use the same method for calculating the df).
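A minimal sketch of the emmeans route (assuming the data d from the first post and an afex version with emmeans support; the contrast weights are purely illustrative and not tied to any particular hypothesis):

library(afex)
library(emmeans)

m2 <- mixed(response ~ inference + (1|id), d)  # d as constructed in the first post
em <- emmeans(m2, "inference")                 # estimated marginal means per level of inference

# a priori contrasts as named weight vectors (weights chosen for illustration only)
contrast(em, list(
  first_vs_rest   = c(3, -1, -1, -1) / 3,
  second_vs_third = c(0, 1, -1, 0)
))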