ITEM RANDOM EFFECTS, N OBSERVATIONS OR LEVELS, WHICH IS MORE IMPORTANT?
July 2, 2018 at 19:01 GMT+0000 #283
Aaron Gardony
Participant
This is not an afex-specific question but one that I’ve been wrestling with as I run mixed models with afex.
For the purpose of this question let’s assume the following psychology experiment on facial emotion perception.
Participants see a set of faces during a pre-test where they rapidly judge the emotion depicted on the face. Let’s say there are 80 faces that span 4 emotions (20 faces per emotion). They see each face exactly once. Then they either do a facial emotion training task or a filler task, and then take the same test again (post-test). The goal of the experiment is to determine whether the training task improves post-test performance relative to the filler task.
Let’s say the dependent variable is response time to correctly identify the face’s emotion.
The fixed effects are: session (w/i Ss, pre- and post-test), training (b/w Ss, training vs. filler task)
The random effects are: participant (p#) and item.
It seems like there are two possible ways to specify the random effects of item (face) for this experiment.
1. Specify the individual face image (face_id) as the random effect (80 levels)
lmm_model = mixed(RT ~ session * training + (1+session|p#) + (1+session|face_id))
2. Specify the facial emotion category (face_emotion) each face image belongs to (4 levels)
lmm_model = mixed(RT ~ session * training + (1+session|p#) + (1+session|face_emotion))
In Henrik’s paper (http://singmann.org/download/publications/singmann_kellen-introduction-mixed-models.pdf) it says:
“An important rule of thumb is that random effects can only be specified practically for grouping factors which have at least five or six different levels”
and later it says
“… One can only estimate a specific random effects parameter if there are multiple observations for each level of the random effects grouping factor and the fixed effects parameter to which one wants to add the random effects parameter …”
In the first case there are 80 levels of the item random effect, satisfying the first rule of thumb. However, there are only two observations at each level of the item random effect (one at pre-test and one at post-test). Hardly multiple observations …
In the second case there are 4 levels of the item random effect. This fails to satisfy the first rule of thumb. However, now there are 40 observations (20 pre- and 20 post-test) at each level of the item random effect, satisfying the multiple-observations requirement.
Which is preferable in this case? Do both rules of thumb have to be met, and would it therefore be inappropriate to specify item random effects for this experiment?
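To put numbers on the two rules of thumb, one way is simply to tabulate the trials per level of each candidate grouping factor. A minimal sketch, assuming a data frame dat with placeholder columns face_id, face_emotion, and session:
xtabs(~ face_id + session, data = dat)       # observations per face_id in each session
xtabs(~ face_emotion + session, data = dat)  # observations per face_emotion in each session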
July 4, 2018 at 14:28 GMT+0000 #286
henrik
Keymaster
In your case, the source of random variability most likely comes from the identity of the face. So this would be the natural random-effects grouping factor (i.e., face_id). The important question now is whether each emotion exists for each face_id. If this is the case, then it should be added to the model as well, together with the other random slopes you are missing. That is, a reasonable model could be:
lmm_model <- mixed(RT ~ session * training * face_emotion + (session*face_emotion|p#) + (session*training*face_emotion|face_id))
The important thing to keep in mind in this kind of design is that the two random-effects grouping factors are crossed. This means the question of multiple observations needs to be answered for each combination of random-effects grouping factor and fixed effect individually. So, for the face question you need to ask yourself: Do I have multiple observations per face_id for each level of session (or emotion, or training), across participants? That is, whether or not these multiple observations come from the same participant does not play any role. And I guess that across your experiment you have multiple observations for each face_id for each of your factors. Hence, the random-effects structure I suggested above.
Said more explicitly: For the by-participant random-effects structure you have to ask yourself which factors vary within participants, ignoring the face_id factor. Conversely, for the by-face_id random-effects structure you have to ask yourself which factors vary within face_id, ignoring the participant factor.
Hope that helps!
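A quick way to check this on the data is to cross-tabulate each grouping factor against each fixed factor. A sketch, again assuming a data frame dat with placeholder columns p_id, face_id, session, training, and face_emotion:
xtabs(~ p_id + session, data = dat)      # session varies within each participant
xtabs(~ p_id + training, data = dat)     # training is between participants (only one non-zero column per row)
xtabs(~ face_id + session, data = dat)   # each face_id is observed in both sessions
xtabs(~ face_id + training, data = dat)  # across participants, each face_id occurs under both training conditions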
July 5, 2018 at 13:51 GMT+0000 #289
Aaron Gardony
Participant
Henrik,
That is helpful, thanks. I hadn’t thought about including face_emotion as a fixed effect.
If each emotion does NOT exist for each face_id, would it be appropriate to consider face_id as nested within face_emotion, and hence to fit this model?
lmm_model = mixed(RT ~ session * training + (1+session|p#) + (1+session*training|face_id/face_emotion))
July 5, 2018 at 14:08 GMT+0000 #290
henrik
Keymaster
Not really. (1|face_id/face_emotion) is just a shorthand for (1|face_id) + (1|face_id:face_emotion), see: https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#model-specification
This holds analogously for the random slopes, of course.
Please see also: https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#nested-or-crossed
I do not see why treating it as nested would make sense in your case. Nesting really only matters when you have a truly hierarchical structure, for example students in classrooms, where each student can occur in exactly one classroom. In your case, however, the lower-level factor (i.e., emotion) occurs in different faces. So it does not really apply.
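To make the expansion concrete, the two calls below would specify the same random-effects structure (a sketch only; dat and p_id are placeholders for the actual data frame and participant identifier):
lmm_a <- mixed(RT ~ session * training + (1 + session | p_id) + (1 | face_id/face_emotion), data = dat)
lmm_b <- mixed(RT ~ session * training + (1 + session | p_id) + (1 | face_id) + (1 | face_id:face_emotion), data = dat)
# lmm_a and lmm_b fit identical models; the / syntax is purely notational shorthand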