Testing Predictive Relationships with Random Slopes Extracted from Linear Mixed-Effects Modeling (LMM)

Zhiyi Wu

This dataset is a subset of the data for the Simon task and the vowel and consonant perception tasks from Huensch (2022), publicly available on OSF (osf.io/fxzvj/). The original study is available at doi.org/10.1017/S0272263124000238.

# Set the working directory to the location of this R Markdown file
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))

# Import dataset for random slope extraction
d = read.csv("S-data.csv")

# Load necessary libraries for data manipulation
library(tidyr)
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ purrr     1.0.2
## ✔ forcats   1.0.0     ✔ readr     2.1.5
## ✔ ggplot2   3.5.1     ✔ stringr   1.5.1
## ✔ lubridate 1.9.3     ✔ tibble    3.2.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors

Step 1: Fitting a Model to the Dataset

library(optimx)
library(blme)
## Loading required package: lme4
## Loading required package: Matrix
## 
## Attaching package: 'Matrix'
## The following objects are masked from 'package:tidyr':
## 
##     expand, pack, unpack
# Fit a Bayesian-regularized LMM (blme::blmer) to the negative inverse RTs (-1/RT),
# with Congruency as the fixed effect and by-subject and by-item random
# intercepts and slopes, using nloptwrap's Nelder-Mead optimizer
m = blmer(-1/RT ~ Congruency + (1+Congruency|Subject) + 
            (1+Congruency|ItemNo),
          data = d, 
          control=lmerControl(optimizer = "nloptwrap",
                              optCtrl = list(algorithm = 
                                               "NLOPT_LN_NELDERMEAD", 
                                             maxit = 2e5)))
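
Although it is not part of the original script, it can be worth confirming, before extracting the random effects, that the model converged to a non-degenerate solution. A minimal check, assuming the lme4/blme objects created above, might look like this:

# Inspect the fitted model (optional sanity check)
summary(m)            # fixed effect of Congruency and variance components
lme4::isSingular(m)   # TRUE would indicate a degenerate random-effects estimate
VarCorr(m)            # by-subject and by-item intercept/slope variances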

Step 2: Extracting Random Slopes

This chunk extracts the random effects (particularly the random slopes) for each subject from the model. These random slopes represent each participant’s individual deviation from the fixed effect of Congruency and, based on previous examination, they seem to provide a more reliable measure of the congruency effect than traditional scoring methods.

# Get the random slopes for all the individuals
rs = data.frame(ranef(m)$Subject)
rs$Participant = row.names(rs)
rs$Participant = factor(rs$Participant) # converted to a factor variable for proper handling in subsequent analyses
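
Before merging, a quick look at the extracted slopes can help confirm that they vary sensibly across participants. A minimal sketch, using the slope column name as it appears in the output in Step 3 (CongruencyIncongruent):

# Distribution of the by-subject random slopes (illustrative check)
head(rs)
hist(rs$CongruencyIncongruent,
     main = "By-subject random slopes for Congruency",
     xlab = "Deviation from the fixed congruency effect (-1/RT scale)")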

Step 3: Merging with Full Dataset

Now, we prepare the dataset for analysis by merging the extracted random slopes with the outcome variables.

# Import full dataset with the outcome variables 
all = read.csv("S-alldata.csv")

# Ensure the Participant column in the full dataset is a factor
all$Participant = factor(all$Participant)

# Merge the random slopes with the full dataset
all = left_join(rs, all, by = c("Participant" = "Participant"))
head(all)
##    X.Intercept. CongruencyIncongruent Participant  X Xlex    simon_log
## 1 -6.445980e-05         -4.334402e-05         101  1 2750 -0.009050021
## 2  3.204003e-04         -3.814187e-05         102  2 1400  0.055450810
## 3 -1.714075e-04         -1.843714e-06         103  3 3100  0.094712583
## 4 -8.181009e-05          5.225611e-05         105 NA   NA           NA
## 5  4.844722e-04         -4.853616e-05         106  4 2250  0.048872123
## 6 -2.993777e-04          5.096861e-05         108  5 3300  0.071281532
##        V_err     C_err
## 1 0.18750000 0.1250000
## 2 0.23076923 0.1875000
## 3 0.18750000 0.0625000
## 4         NA        NA
## 5 0.06250000 0.2666667
## 6 0.06666667 0.0000000
names(all)
## [1] "X.Intercept."          "CongruencyIncongruent" "Participant"          
## [4] "X"                     "Xlex"                  "simon_log"            
## [7] "V_err"                 "C_err"
# Keep only the relevant columns (Participant, Xlex, simon_log, the random slope, V_err, C_err)
all = all %>% select(Participant, Xlex, simon_log, CongruencyIncongruent, V_err, C_err) %>%
  filter(!is.na(Xlex))   # remove the one participant without outcome data
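
As a quick sanity check (not part of the original script), the merged data should now contain one complete row per participant; the 57 observations reported by partial_Spearman below provide a reference point.

# Check the merged data (illustrative)
nrow(all)                   # expected to match the 57 observations reported below
sum(!complete.cases(all))   # should be 0 after dropping the unmatched participant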

Step 4: Correlation between Random Slopes and Outcome Variables

Now that we have an individual-level index of the congruency effect (the random slopes) alongside the outcome variables, we can examine how Simon task performance relates to the phonological perception measures and compare the random slopes against the original scoring method.

Here, we first perform partial Spearman correlations between the Simon task measures of inhibitory control and phonological perception, both for vowels and for consonants, while controlling for lexical knowledge (Xlex).

The correlations are larger in magnitude when the random slopes are used as the predictor (-.225 vs. -.103 for vowels, -.153 vs. .049 for consonants), suggesting that the random slopes may be a more sensitive measure that reveals previously masked relationships, although neither partial correlation reaches conventional significance.

# Partial Spearman Correlation between Phonological Perception and Simon Task
library(PResiduals)

### Vowel perception
#original scoring method vs. random slopes
partial_Spearman(V_err|simon_log ~ Xlex, data=all) # -.103
##                         est    stderr         p   lower CI upper CI
## partial Spearman -0.1029011 0.1396265 0.4643148 -0.3625844  0.17161
## Fisher Transform: TRUE 
## Confidence Interval: 95%
## Number of Observations: 57
partial_Spearman(V_err|CongruencyIncongruent ~ Xlex, data=all) # -.225
##                         est    stderr         p   lower CI  upper CI
## partial Spearman -0.2248977 0.1343452 0.1058788 -0.4669384 0.0484924
## Fisher Transform: TRUE 
## Confidence Interval: 95%
## Number of Observations: 57
### Consonant perception
partial_Spearman(C_err|simon_log ~ Xlex, data=all) # .049
##                         est    stderr         p   lower CI  upper CI
## partial Spearman 0.04903388 0.1241247 0.6932831 -0.1923662 0.2848382
## Fisher Transform: TRUE 
## Confidence Interval: 95%
## Number of Observations: 57
partial_Spearman(C_err| CongruencyIncongruent ~ Xlex, data=all) # -.153
##                         est    stderr         p  lower CI  upper CI
## partial Spearman -0.1531931 0.1326467 0.2556466 -0.397469 0.1113583
## Fisher Transform: TRUE 
## Confidence Interval: 95%
## Number of Observations: 57
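
A simple visual comparison of the two Simon task indices can complement the partial correlations. The sketch below (not part of the original analysis) plots vowel perception errors against each index on its own scale:

# Scatterplots of vowel errors against each Simon task index (illustrative)
all %>%
  pivot_longer(c(simon_log, CongruencyIncongruent),
               names_to = "index", values_to = "value") %>%
  ggplot(aes(x = value, y = V_err)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  facet_wrap(~ index, scales = "free_x") +
  labs(x = "Simon task index", y = "Vowel perception error rate")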

Step 5: Regression Analysis

In this step, we fit linear regression models to examine whether the Simon task indices predict the phonological perception measures, for vowels and for consonants. Again, we compare the original method (using simon_log) with the random slopes (using CongruencyIncongruent).

The findings suggest that random slopes extracted from an LMM, as a more reliable measure of the congruency effect, can reveal previously masked relationships between the Simon task and phonological perception: the random-slope predictor is significant for vowel perception (p = .038) where the original score is not (p = .270), while neither index significantly predicts consonant perception.

### Vowel perception
# original method
mo.v = lm(V_err ~ Xlex + simon_log, data = all)
summary(mo.v) # p = .270
## 
## Call:
## lm(formula = V_err ~ Xlex + simon_log, data = all)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.16950 -0.09527 -0.02629  0.07064  0.39762 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.095e-01  6.803e-02   4.550 3.08e-05 ***
## Xlex        -3.642e-05  2.576e-05  -1.414    0.163    
## simon_log   -2.959e-01  2.654e-01  -1.115    0.270    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1308 on 54 degrees of freedom
## Multiple R-squared:  0.06277,    Adjusted R-squared:  0.02805 
## F-statistic: 1.808 on 2 and 54 DF,  p-value: 0.1737
# random slopes
mr.v = lm(V_err ~ Xlex + CongruencyIncongruent, data = all)
summary(mr.v) # p = .038
## 
## Call:
## lm(formula = V_err ~ Xlex + CongruencyIncongruent, data = all)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.19996 -0.08681 -0.02249  0.05867  0.38592 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            2.867e-01  6.428e-02   4.460 4.19e-05 ***
## Xlex                  -3.767e-05  2.490e-05  -1.513    0.136    
## CongruencyIncongruent -5.378e+02  2.529e+02  -2.127    0.038 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1271 on 54 degrees of freedom
## Multiple R-squared:  0.1153, Adjusted R-squared:  0.08251 
## F-statistic: 3.518 on 2 and 54 DF,  p-value: 0.03663
### Consonant perception
# original method
mo.c = lm(C_err ~ Xlex + simon_log, data = all)
summary(mo.c) # p = .891
## 
## Call:
## lm(formula = C_err ~ Xlex + simon_log, data = all)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.20144 -0.07849 -0.03680  0.07521  0.34627 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  3.497e-01  6.929e-02   5.046 5.44e-06 ***
## Xlex        -7.487e-05  2.624e-05  -2.854  0.00611 ** 
## simon_log    3.727e-02  2.703e-01   0.138  0.89084    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1332 on 54 degrees of freedom
## Multiple R-squared:  0.1314, Adjusted R-squared:  0.09926 
## F-statistic: 4.086 on 2 and 54 DF,  p-value: 0.02227
# random slopes
mr.c = lm(C_err ~ Xlex + CongruencyIncongruent, data = all)
summary(mr.c) # p = .536
## 
## Call:
## lm(formula = C_err ~ Xlex + CongruencyIncongruent, data = all)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.21279 -0.08069 -0.02921  0.06677  0.34188 
## 
## Coefficients:
##                         Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            3.504e-01  6.716e-02   5.218 2.95e-06 ***
## Xlex                  -7.395e-05  2.602e-05  -2.842  0.00631 ** 
## CongruencyIncongruent -1.647e+02  2.642e+02  -0.623  0.53576    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1327 on 54 degrees of freedom
## Multiple R-squared:  0.1373, Adjusted R-squared:  0.1054 
## F-statistic: 4.298 on 2 and 54 DF,  p-value: 0.01853
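
Because the two indices are on very different scales (simon_log is presumably a log-based score, as its name suggests, while the random slopes live on the -1/RT scale), the raw regression coefficients above are not directly comparable. One way to put them on a common footing, sketched below with z-scored predictors (an illustration, not part of the original script), is to refit the vowel models:

# Refit the vowel models with standardized predictors (illustrative sketch)
all_z = all %>%
  mutate(simon_log_z = as.numeric(scale(simon_log)),
         slope_z     = as.numeric(scale(CongruencyIncongruent)),
         Xlex_z      = as.numeric(scale(Xlex)))
summary(lm(V_err ~ Xlex_z + simon_log_z, data = all_z))
summary(lm(V_err ~ Xlex_z + slope_z, data = all_z))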