Mixed Models › Mixed model specification and centering of predictor
 This topic has 8 replies, 3 voices, and was last updated 4 years, 2 months ago by SeeArrr.



June 20, 2018 at 15:03 GMT+0000 · #271 · cbrnr (Participant)
First of all, thank you for your great package – I really love how simple and intuitive it is to work with linear mixed models using your package! Here’s a question I have about a recent analysis. In our experiment, participants solved problems with 3 difficulty levels (easy/medium/hard). We recorded their EEG response during this task from 64 channels. We also have an intelligence score for each person.
I am fitting a linear mixed model with afex::mixed using the following syntax:

m <- mixed(erds ~ difficulty * ist + (difficulty | id) + (difficulty | chan), erds, method = "S")
The dependent variable erds contains the EEG measure. I am interested in whether this measure depends on task difficulty (difficulty, an ordinal factor with 3 levels) and intelligence score (ist, a continuous variable ranging from 29 to 60 points). Random effects are id (participants) and chan (EEG channel), each with an intercept and a slope for difficulty.

(1) Is the model correctly specified? I’m not sure, because you could also say that chan is nested in id, since we recorded the same 64 EEG channels for each subject.

(2) I get different results depending on whether or not I scale the continuous predictor ist. Here is the model summary when I center and scale ist:

REML criterion at convergence: 67163.2

Scaled residuals:
    Min      1Q  Median      3Q     Max
-5.3697 -0.5441  0.0538  0.4629 19.1396

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 chan     (Intercept) 118.182  10.871
          difficulty1  42.313   6.505   0.22
          difficulty2   3.688   1.920   0.18  1.00
 id       (Intercept) 263.001  16.217
          difficulty1 237.357  15.406   0.27
          difficulty2  47.142   6.866   0.15  0.77
 Residual             531.519  23.055
Number of obs: 7296, groups: chan, 64; id, 38

Fixed effects:
                Estimate Std. Error       df t value Pr(>|t|)
(Intercept)     22.76893    2.97330 55.14690   7.658 3.11e-10 ***
difficulty1     13.38370    2.65576 43.43775   5.039 8.70e-06 ***
difficulty2      3.94702    1.20163 38.80733   3.285 0.002169 **
ist              9.50434    2.64466 36.00053   3.594 0.000968 ***
difficulty1:ist  0.08757    2.52829 36.00016   0.035 0.972560
difficulty2:ist  0.36387    1.17743 36.00023   0.309 0.759075
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
            (Intr) dffcl1 dffcl2 ist    dffc1:
difficulty1 0.197
difficulty2 0.103  0.756
ist         0.000  0.000  0.000
dffclty1:st 0.000  0.000  0.000  0.270
dffclty2:st 0.000  0.000  0.000  0.137  0.745
And here is the summary if I do *not* center and scale ist:

REML criterion at convergence: 67176.5

Scaled residuals:
    Min      1Q  Median      3Q     Max
-5.3697 -0.5441  0.0538  0.4629 19.1396

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 chan     (Intercept) 118.182  10.871
          difficulty1  42.313   6.505   0.22
          difficulty2   3.688   1.920   0.18  1.00
 id       (Intercept) 263.005  16.217
          difficulty1 237.358  15.406   0.27
          difficulty2  47.142   6.866   0.15  0.77
 Residual             531.519  23.055
Number of obs: 7296, groups: chan, 64; id, 38

Fixed effects:
                 Estimate Std. Error        df t value Pr(>|t|)
(Intercept)     24.478655  13.479162 36.740451   1.816 0.077532 .
difficulty1     12.948358  12.846050 36.289679   1.008 0.320146
difficulty2      2.138164   5.975277 36.116137   0.358 0.722550
ist              1.051791   0.292672 35.999760   3.594 0.000968 ***
difficulty1:ist  0.009691   0.279791 35.999975   0.035 0.972560
difficulty2:ist  0.040267   0.130300 36.000001   0.309 0.759075
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
            (Intr) dffcl1 dffcl2 ist    dffc1:
difficulty1 0.267
difficulty2 0.135  0.745
ist         0.975  0.264  0.134
dffclty1:st 0.263  0.978  0.729  0.270
dffclty2:st 0.134  0.729  0.980  0.137  0.745
Why does centering/scaling the ist predictor affect the estimates, and even more noticeably the p-values, of the difficulty levels? In the first case both difficulty1 and difficulty2 are highly significant, whereas in the latter case these factors are no longer significant. What is going on? What is the correct way to deal with this situation?
July 4, 2018 at 14:17 GMT+0000 · #285 · henrik (Keymaster)
> (1) Is the model correctly specified? I’m not sure because you could also say that chan is nested in id since we recorded the same 64 EEG channels for each subject.

If you measured the same 64 channels for each subject, these two variables are crossed, not nested. Nested means that specific levels of one factor (e.g., EEG channel) only appear within specific levels of another factor (e.g., ID). So, for example, a nesting would exist if each participant had an idiosyncratic set of EEG channels, which of course seems quite unlikely.
However, one thing to consider is that ist also varies within each channel. So maybe

m <- mixed(erds ~ difficulty * ist + (difficulty | id) + (difficulty * ist | chan), erds, method = "S")

might be more appropriate (i.e., reflect the maximal random-effects structure justified by the design).

> (2) I get different results whether or not I scale the continuous predictor ist. […]
> Why does centering/scaling the ist predictor affect the estimates, and more noticeably the p-values of the difficulty levels? Whereas in the first case, both difficulty1 and difficulty2 are highly significant, these factors are not significant in the latter case anymore. What is going on? What is the correct way to deal with this situation?

A few things to consider here.
First, afex tries to discourage you from inspecting the parameter estimates via summary. These are often not very helpful, especially in cases such as yours, where you have factors with more than two levels. I would highly suggest using the print method (i.e., just m), nice(), or anova() when your interest is in the fixed effects. You can then use the interplay with emmeans to look at specific effects.

And now, to your specific question: yes, centering can have a dramatic effect on the interpretation of your model. This is, for example, discussed on Cross Validated: https://stats.stackexchange.com/q/65898/442
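A minimal sketch of that workflow (assuming the fitted model object m from above; the illustration values 30, 45, and 60 for ist are arbitrary and not from the thread):

```r
library(afex)
library(emmeans)

# Omnibus tests for the fixed effects, instead of summary():
m          # or nice(m) / anova(m)

# Estimated marginal means for the three difficulty levels:
em <- emmeans(m, "difficulty")
em

# Pairwise comparisons between difficulty levels,
# with p-values adjusted for multiple testing:
pairs(em)

# How the difficulty means change across the ist range:
emmeans(m, "difficulty", by = "ist", at = list(ist = c(30, 45, 60)))
```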
However, there are also numerous papers discussing this, some specifically in the context of mixed models. For example, two papers that I know of are:

Dalal, D. K., & Zickar, M. J. (2012). Some common myths about centering predictor variables in moderated multiple regression and polynomial regression. Organizational Research Methods, 15(3), 339–362. https://doi.org/10.1177/1094428111430540

Wang, L., & Maxwell, S. E. (2015). On disaggregating between-person and within-person effects with longitudinal data using multilevel models. Psychological Methods, 20(1), 63–83. https://doi.org/10.1037/met0000030
The thing to keep in mind is that each variable is tested with the other variables set to 0. So the 0 value should be meaningful. Centering often makes it meaningful, because 0 then lies at the mean. But other placements of 0 can be meaningful as well (e.g., the midpoint of a scale).
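As a small illustration of different ways to place that zero point (the variable names mirror the thread; the value 30 is the theoretical scale midpoint that comes up later in this discussion):

```r
# Three ways to place the zero point of a continuous predictor:
erds$ist_meanc <- erds$ist - mean(erds$ist)   # centered at the sample mean
erds$ist_z     <- scale(erds$ist)             # centered and scaled (z-scores)
erds$ist_mid   <- erds$ist - 30               # centered at a theoretical midpoint

# The test of ist itself is unaffected by these codings (note the identical
# t = 3.594 in both summaries above); what changes is where the *other*
# effects are evaluated, because they are tested at ist = 0.
```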

July 9, 2018 at 11:22 GMT+0000 · #291 · cbrnr (Participant)
Thank you for your detailed response, this is really helpful!
> If you measured the same 64 channels for each subject, these two variables are crossed, not nested. Nested means that specific levels of one factor (e.g., EEG channel) only appear within specific levels of another factor (e.g., ID). So, for example, a nesting would exist if each participant had an idiosyncratic set of EEG channels, which of course seems quite unlikely.
You are right, thanks for clarifying the difference between nested and crossed variables.
> However, one thing to consider is that ist also varies within each channel. So maybe
>
> m <- mixed(erds ~ difficulty * ist + (difficulty | id) + (difficulty * ist | chan), erds, method = "S")
>
> might be more appropriate (i.e., reflect the maximal random-effects structure justified by the design).

Is this really appropriate? IST is a person-specific measure like IQ, measured with a paper-and-pencil test before the EEG session. Therefore, channels do not contribute to random variation in IST (which is constant within each person).
Anyway, even when I try to run the extended model as you suggested, I get convergence errors. I guess this means that even if this model were justified by the design, I would have to reduce it to get a stable fit (which would bring me back to the model I proposed initially).
Regarding the centering issue, I can now see how centering can affect the regression coefficients of main effects. I have two additional questions:

First, centering on the sample mean implies that this process depends on the sample. This might not be ideal if someone tries to replicate the experiment with a different sample – which mean does the other person need to subtract, my sample mean or the mean of the new sample? Alternatively, I could center on some theoretical value (30 in my case, because the IST ranges from 0 to 60), but then I’d still get the warning from afex that the variable is not centered on 0. Or maybe this is not a problem at all? What is your recommendation on centering?

Second, I wonder why I don’t get a significant interaction, given that the results for the difficulty main effect change so drastically depending on whether I center IST or not. I was expecting either a significant interaction, given the differences under centering, or no dramatic changes in the regression coefficients, given a non-significant interaction.

July 11, 2018 at 09:30 GMT+0000 · #292 · henrik (Keymaster)
1. The warning from afex is just there to remind you that this is a non-trivial issue. I do not recommend mean centering as a general solution. If you have thought about a good point to center at (maybe 30 in your case), then you can ignore the warning. It is just there to force you to think about your problem.

2. That is the problem with continuous variables. You need to make sure that the story your results tell is not contingent on some arbitrary value you chose to center at. Maybe it makes sense to explore the possible range of centering points and report the results throughout (in the sense of a multiverse analysis). You should give the reader the chance to fully understand the consequences of your more or less arbitrary choices for your results.
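One way to sketch such a sensitivity check (the specific centering points and the looped refit are illustrative assumptions, not a prescribed afex workflow):

```r
library(afex)

# Refit the model at several plausible centering points for ist and
# collect the fixed-effects tests, multiverse-style:
centers <- c(30, 45, mean(erds$ist))  # theoretical midpoint, ..., sample mean

results <- lapply(centers, function(c0) {
  erds$ist_c <- erds$ist - c0
  m <- mixed(erds ~ difficulty * ist_c + (difficulty | id) + (difficulty | chan),
             erds, method = "S")
  nice(m)
})

names(results) <- paste0("center_", round(centers, 1))
results  # compare how the tests shift with the centering point
```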
Finally, you say:
> Is this really appropriate? IST is a person-specific measure like IQ, measured with a paper-and-pencil test before the EEG session. Therefore, channels do not contribute to random variation in IST (which is constant within each person).
>
> Anyway, even when I try to run the extended model as you suggested, I get convergence errors. I guess this means that even if this model were justified by the design, I would have to reduce it to get a stable fit (which would bring me back to the model I proposed initially).
Yes, it is of course appropriate. The ist | chan part allows the effect of ist to differ idiosyncratically per channel. Maybe the channels right above the area responsible for ist react more strongly to the corresponding signal. You really have to think about the effect of ist independently of the effect of id here.

And if the model shows convergence warnings, try suppressing the estimation of the correlations via || and expand_re = TRUE.
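In afex syntax that suggestion might look as follows (a sketch; the || operator removes the random-effect correlation parameters, and expand_re = TRUE is needed for || to work correctly with factors in mixed):

```r
library(afex)

# Same random-effects terms as before, but without correlation parameters:
m_nocorr <- mixed(erds ~ difficulty * ist +
                    (difficulty || id) +
                    (difficulty * ist || chan),
                  erds, method = "S", expand_re = TRUE)
```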
March 4, 2019 at 11:22 GMT+0000 · #339 · SeeArrr (Participant)
I stumbled upon this post with great curiosity. Since I also work with EEG, under a somewhat different but nevertheless similar approach, I wondered whether there is a published paper on your computations and considerations?

@henrik, please excuse me for using your forum for such a purpose. @cbrnr, please excuse my direct request, but your help would be a great support.

Thank you

March 4, 2019 at 11:32 GMT+0000 · #341 · cbrnr (Participant)
Unfortunately, I haven’t had the time to finish the work on this particular data set, so there is no publication (yet). There are still several open questions I have myself, and once I have time to revisit the analysis I would appreciate advice from @henrik. For example, I was wondering what to do if I want to see in which channels I get differences in activation – then I can’t use channel as a random effect. I’d probably need to perform some kind of cluster-based permutation test, but I haven’t been able to wrap my head around this so far.

@SeeArrr, could you share your research question and your planned analysis? I’d be interested in how it is similar to what I was trying to achieve.
March 4, 2019 at 13:26 GMT+0000 · #342 · SeeArrr (Participant)
Thank you for your fast reply! At the same time, my hopes are crushed.

I’m working with a disease that causes alterations in the resting-state EEG pattern. In the literature, however, it has not yet been possible to make a clear statement as to which alteration takes place in which brain region. In addition, the experience of the disease can vary a lot between individuals, which results in quite a heterogeneous EEG pattern. Therefore, I think the approach of using not only the patients but also the electrodes as random effects might be more appropriate. However, I haven’t found anything on this in the literature yet. In addition, I use a questionnaire that records the distress caused by the disease (which is hopefully mirrored in the EEG pattern).
The dependent variable Power contains the EEG measure. I am interested in whether this measure depends on distress (questionnaire score, continuous) and FrB (8 frequency bands, factorial), as well as on the duration of the disease. Random effects are PatID (participants) and Electrodes (65):

m1 <- mixed(Power ~ FrB * Distress + Duration + (FrB | PatID) + (1 | Electrodes), data, method = "S")
As far as I can tell, other studies work with electrode clusters (rather than single electrodes). In my opinion, this would result in:

m2 <- mixed(Power ~ Cluster * FrB * Distress + Duration + (FrB + Cluster | PatID), data, method = "S")

What do you think?

March 5, 2019 at 16:51 GMT+0000 · #343 · henrik (Keymaster)
Treating something as a random-effects grouping factor (i.e., putting it to the right of |) leads to shrinkage: levels of this factor (e.g., specific electrodes) for which the effect differs quite strongly from the mean are shrunk towards the overall mean. If this is an assumption you are willing to make (i.e., that an electrode for which the effect diverges relatively strongly from the others probably reflects measurement error), then it makes sense to treat it as a random effect.

The benefit of treating a factor as a random-effects grouping factor is that under this assumption (i.e., that the effect across the different electrodes is randomly distributed around the grand-mean effect) the overall prediction will be better.

There are two downsides: (1) if the assumption is false, you miss out on “true” differences; (2) you cannot easily check for differences between electrodes.
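Shrinkage can be seen directly by comparing per-group regressions with the mixed-model estimates. A small demonstration with lme4’s built-in sleepstudy data (this dataset and lmList are standard lme4 tools; electrodes in the EEG setting would behave analogously to subjects here):

```r
library(lme4)

# Per-subject slopes from separate regressions (no pooling):
separate <- coef(lmList(Reaction ~ Days | Subject, sleepstudy))

# Per-subject slopes from a mixed model (partial pooling):
fit    <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
pooled <- coef(fit)$Subject

# The mixed-model estimates are pulled towards the fixed-effect mean,
# so their spread is smaller than that of the separate regressions:
sd(separate$Days)  # larger
sd(pooled$Days)    # smaller: shrinkage
```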
Having said that, m2 looks somewhat reasonable. However, the same caveat as before regarding the continuous covariates holds: the main effects are tested with the covariates at 0, so 0 needs to be meaningful given the data (and one way to achieve this is by centering them).

I only wonder why you do not have the FrB and Cluster interaction as random slopes:

... + (FrB * Cluster | PatID)
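Spelled out, that suggestion would correspond to something like the following sketch (re-using the variable names from m2 above; the name m3 is just for illustration):

```r
library(afex)

# m2 with the FrB-by-Cluster interaction in the random slopes:
m3 <- mixed(Power ~ Cluster * FrB * Distress + Duration +
              (FrB * Cluster | PatID),
            data, method = "S")
```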

March 18, 2019 at 08:14 GMT+0000 · #346 · SeeArrr (Participant)
Thanks for the explanation – it helped a lot. I decided to go with m2 with the interaction in the random slopes.

As a next step I would like to introduce time as another IV. I have a baseline resting-state EEG recording (time 0) and a post-treatment EEG recording (time 1), as well as pre- and post-treatment distress scores. Given that I only have two time points, I wonder whether distress might be a mediating variable. I haven’t found anything about mediation in the documentation, so I wonder if there is a package you would recommend, Henrik?

Thanks again!!

