Forum Replies Created

    in reply to: Contrast coding, LMMS and interactions #421
    Thomas
    Participant

    EDIT to my previous post: please ignore my final query about differences in output. This was caused by REML vs ML fitting.
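
    (For anyone who finds this later: in lmerTest::lmer() the estimation criterion is controlled by the REML argument, which defaults to TRUE. A minimal illustration of the point, with placeholder names rather than my actual variables and data:)

        library(lmerTest)   # also loads lme4

        # REML (the lmer default) vs. ML fits of the same model;
        # with unbalanced data the fixed-effect estimates typically differ slightly
        m_reml <- lmer(y ~ x * A * B + (1 | id), data = d)
        m_ml   <- lmer(y ~ x * A * B + (1 | id), data = d, REML = FALSE)
        fixef(m_reml) - fixef(m_ml)   # small but non-zero differences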

    in reply to: Contrast coding, LMMS and interactions #420
    Thomas
    Participant

    Dear Henrik,

    This is great, many thanks.

    I assume that by "balanced" you mean the balance of the categories across subjects (e.g. an equal number of males and females). If that is the case, then yes, the two categorical variables are unbalanced. You suggest that removing the 3-way interaction may be preferable in the case of observational data; why do you believe this is the case?

    In terms of interrogating the effects: I do indeed have observational data and wish to examine the relationship between two continuous variables, and whether this relationship is moderated by two categorical (binary) variables. Note that these categorical variables, as well as the nuisance categorical variables in the model (binary and 3-level), are strongly unbalanced. Am I right in saying that:
    1. One would first inspect the 3-way interaction in the ANOVA table (Type 3 SS). If it is non-significant, one would remove this interaction, refit the model with the two 2-way interactions, and again inspect the ANOVA table. If one of them is significant, that 2-way interaction is to be interpreted/investigated. To this end, the interaction would be visualised, and one could take the regression estimate from the summary() table to interpret the strength/direction of this interaction?

    2. If none of the 2-way interactions is "significant" either, one would remove them and refit the model again. One would then use summary() to interpret the regression coefficient of the variable of interest. (A rough code sketch of both steps follows below.)
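
    To make sure I am describing the procedure correctly, here is a minimal sketch of what I have in mind; the names (y and x for the continuous variables, A and B for the binary moderators, N1 and N2 for the nuisance factors, id for the grouping factor, d for the data) are placeholders, not my actual data:

        library(afex)        # provides mixed() and set_sum_contrasts()
        set_sum_contrasts()  # sum-to-zero contrasts for the Type 3 tests

        # Step 1: full model including the 3-way interaction
        m3 <- mixed(y ~ x * A * B + N1 + N2 + (1 | id),
                    data = d, type = 3, method = "S")
        anova(m3)            # inspect x:A:B first

        # If x:A:B is not significant, refit with only the two 2-way interactions
        m2 <- mixed(y ~ x * A + x * B + N1 + N2 + (1 | id),
                    data = d, type = 3, method = "S")
        anova(m2)            # inspect x:A and x:B
        summary(m2)          # regression estimates for interpreting a significant 2-way term

        # Step 2: if neither 2-way interaction is significant, drop them as well
        m1 <- mixed(y ~ x + A + B + N1 + N2 + (1 | id),
                    data = d, type = 3, method = "S")
        summary(m1)          # coefficient of x is then the effect of interest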

    Finally, I have fitted the same 3-way interaction model with afex::mixed and lmerTest::lmer (with set_sum_contrasts()), and noticed that the effects (betas, t-values, dfs) are slightly different, and so is the ANOVA (SS3, method = "S") output. Do you know why this is the case?
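
    For completeness, this is roughly how I set up the comparison (again with placeholder names for my variables and data):

        library(afex)        # for mixed() and set_sum_contrasts()
        set_sum_contrasts()

        # afex fit: Type 3 ANOVA with Satterthwaite degrees of freedom
        m_afex <- mixed(y ~ x * A * B + N1 + N2 + (1 | id),
                        data = d, type = 3, method = "S")

        # "plain" lmerTest fit of (what I assume is) the same model
        m_lmer <- lmerTest::lmer(y ~ x * A * B + N1 + N2 + (1 | id), data = d)

        summary(m_afex)                                     # betas, t-values, dfs
        summary(m_lmer)
        anova(m_afex)                                       # Type 3 ANOVA (afex)
        anova(m_lmer, type = "III", ddf = "Satterthwaite")  # Type III ANOVA (lmerTest)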

    Thanks in advance.
