November 8, 2017 at 09:00 UTC #144
I have a question regarding using sum coding with an lmer analysis and a related warning message in afex (which I have just started trying to use).
I have a 2 (Finiteness: Finite/NonFinite) × 2 (Coherence: Coherent/NonCoherent) Latin square design. There are four versions of each “item”, rotated over four presentation lists, and each participant sees only one list. Thus each factor level has data points in each list, but different participants see different levels of each item (the typical linguistics-style Latin square approach). The response is a rating on a 7-point scale, and all participants respond using every rating choice.
I ran the model with treatment coding and discovered a Finiteness effect, but only when the intercept was set to NonCoherent: with that reference level the simple effect of Finiteness was significant, but with the other it was not. There was, however, no significant interaction.
I then decided to rerun the model with sum coding to see whether there is a main effect of Finiteness, and to do this I used afex.
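For context, this is how I understand the difference between the two codings (a hypothetical sketch; `dat` and the contrast assignments are just illustration, since afex sets contrasts itself):

```r
# Treatment coding: the Finiteness coefficient is the simple effect
# of Finiteness at the *reference level* of Coherence.
contrasts(dat$Coherence)  <- contr.treatment(2)
contrasts(dat$Finiteness) <- contr.treatment(2)

# Sum coding: the Finiteness coefficient is (half of) the main effect
# of Finiteness, averaged over the levels of Coherence.
contrasts(dat$Coherence)  <- contr.sum(2)
contrasts(dat$Finiteness) <- contr.sum(2)
```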
My issue: I have four lists, but not exactly the same number of participants per list, so my data are unbalanced. I could of course just remove some subjects to balance the lists, but I would like to understand better what is going on.
Question 1: Is it still valid to use sum coding with unbalanced data? I have very little understanding of how the various anova functions work together with lmer output. I read the link below, but I’m still fairly confused:
Question 2: In its output, afex returned a warning message about contrasts dropped from my Item factor. Could you please explain in more detail what the function is doing here?
Output is provided below:
> mixed(RatingZ ~ Coherence*Finiteness + (1 + Coherence + Finiteness | Item) + (Coherence | Subject), test, method = afex_options("S"))
Contrasts set to contr.sum for the following variables: Coherence, Finiteness, Item, Subject
Fitting one lmer() model. [DONE]
Calculating p-values. [DONE]
Mixed Model Anova Table (Type 3 tests, KR-method)
Model: RatingZ ~ Coherence * Finiteness + (1 + Coherence + Finiteness |
Model: Item) + (Coherence | Subject)
Effect df F p.value
1 Coherence 1, 50.67 10.73 ** .002
2 Finiteness 1, 37.39 7.03 * .01
3 Coherence:Finiteness 1, 3089.32 1.73 .19
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1
1: contrasts dropped from factor Item due to missing levels
2: contrasts dropped from factor Item due to missing levels
3: contrasts dropped from factor Item due to missing levels
4: contrasts dropped from factor Item due to missing levels
Thank you for your time,
November 15, 2017 at 22:51 UTC #147
Question 1: In principle, yes. If the imbalance is small and random, go ahead. If, however, the imbalance is structural and carries information (e.g., participants dropped out of one condition because it was more demanding than the other), then it is time to think about it more carefully (you should probably use Type 2 sums of squares then). In any case, with sum contrasts and unbalanced groups, the intercept simply represents the unweighted grand mean (i.e., the mean of the cell means, not the overall grand mean).
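A toy sketch with made-up data illustrates the point about the intercept. In a saturated two-way model with sum contrasts, the intercept equals the unweighted mean of the four cell means, which differs from the overall mean when cell sizes are unequal:

```r
set.seed(1)
# Unbalanced 2x2 design: cell sizes 5, 3, 4, 6
d <- expand.grid(A = factor(c("a1", "a2")), B = factor(c("b1", "b2")))
d <- d[rep(1:4, times = c(5, 3, 4, 6)), ]
d$y <- rnorm(nrow(d), mean = 2 * as.numeric(d$A) + as.numeric(d$B))

contrasts(d$A) <- contr.sum(2)
contrasts(d$B) <- contr.sum(2)
m <- lm(y ~ A * B, data = d)

unname(coef(m)["(Intercept)"])                  # sum-coding intercept
mean(tapply(d$y, interaction(d$A, d$B), mean))  # mean of cell means: same value
mean(d$y)                                       # overall mean: differs here
```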
Question 2: I am not sure what produces this warning in this case. To fully understand what is going on, I would need a reproducible example (see here). Note that the output indicates that both Coherence and Finiteness have two levels, which seems to be intended. You might try running droplevels() on the data before fitting; maybe this removes the warning.
Also note that it should be method = "S" (and not method = afex_options("S")).
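Putting the two suggestions together, the call would look something like this (a sketch based on the call quoted above; test is your data frame):

```r
library(afex)

# Drop any unused factor levels (e.g., empty levels of Item) before fitting
test <- droplevels(test)

m <- mixed(RatingZ ~ Coherence * Finiteness +
             (1 + Coherence + Finiteness | Item) +
             (Coherence | Subject),
           data = test, method = "S")
```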
Hope that helps.