Corresponding Author Information: Thomas M. Crow
Session Abstract: Background and Purpose: Supervised machine learning algorithms are powerful tools for data analysis, especially with complex data, but their complexity often limits their use for inference relative to classical statistics. Furthermore, it remains unclear how these approaches compare when used to answer the same empirical questions in the same dataset. Using newer software that allows deeper interpretation of machine learning models, we compared algorithmic (i.e., machine learning) models with classical statistical analyses for predicting personality assessment outcomes. Specifically, we compared both overall model performance and the relative importance/strength of the same predictors across model types.
Subjects: 1238 outpatients referred for psychological testing at an academic medical center in the northeastern USA.
Method and Materials: Participants completed the Personality Assessment Inventory (PAI), as well as demographic and life history questions, as part of a multi-method battery assessing psychological functioning.
Analyses: To compare models with a continuous outcome variable ("regression models"), the continuous variable PAI_SCZ (PAI Schizophrenia scale) was chosen; for models with a binary outcome variable ("classification models"), the categorical variable PAI_INVALID01 (invalid PAI profile: no/yes) was chosen. The 16 predictors in these models were taken from the demographic and life history portion of the initial interview and included age, gender, race, psychiatric functioning questions, and medical history questions, among others.
For the regression models predicting PAI schizophrenia scores, multiple linear regression was compared with two machine learning algorithms, random forest regression and elastic net regression. For the classification models predicting invalid PAI profiles, logistic regression was compared with random forest classification and gradient boosting machines.
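A comparison of this kind can be set up uniformly in scikit-learn. The sketch below is illustrative only (not the authors' code): it fits the three regression-model types named above on synthetic data standing in for the 16 interview predictors and the continuous PAI_SCZ outcome, and compares out-of-sample R².

```python
# Hedged sketch: comparing multiple linear regression with two machine
# learning regressors on synthetic stand-in data (1238 cases, 16 predictors,
# mirroring the sample and predictor count described in the abstract).
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, ElasticNetCV
from sklearn.ensemble import RandomForestRegressor

# Synthetic data in place of the real demographic/life-history predictors
# and the continuous PAI_SCZ outcome.
X, y = make_regression(n_samples=1238, n_features=16, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "linear_regression": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "elastic_net": ElasticNetCV(cv=5, random_state=0),
}

# Fit each model on the training split and score R^2 on the held-out split,
# so overall performance is compared on the same footing across model types.
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
for name, r2 in scores.items():
    print(f"{name}: test R^2 = {r2:.3f}")
```

The classification comparison (logistic regression vs. random forest and gradient boosting) follows the same pattern with classifiers and an accuracy or AUC metric in place of R².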
Results: Preliminary results suggest that several theoretically congruent predictors showed consistent predictive power across modeling approaches. For example, participants' self-reported 0-10 depression rating at the initial interview robustly predicted their score on the PAI Schizophrenia scale, regardless of modeling technique. However, the rank importance of predictors differed across models, and strong predictors in one model were not always influential in others.
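One model-agnostic way to compare the relative importance of the same predictors across model types, as done conceptually above, is permutation importance, which works identically for any fitted estimator. The sketch below is an assumption about the workflow, not the authors' analysis, and uses synthetic data in place of the binary PAI_INVALID01 outcome.

```python
# Hedged sketch: ranking the same 16 predictors across the three
# classification-model types via permutation importance on a held-out split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic binary outcome standing in for PAI_INVALID01 (invalid profile no/yes).
X, y = make_classification(n_samples=1238, n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}

rankings = {}
for name, model in classifiers.items():
    model.fit(X_tr, y_tr)
    # Shuffle each predictor column and measure the drop in test performance;
    # larger drops indicate more important predictors for that model.
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    rankings[name] = np.argsort(imp.importances_mean)[::-1]
    print(name, "top predictors (by column index):", rankings[name][:3])
```

Comparing the resulting rankings side by side makes the abstract's point concrete: the same predictor can rank highly for one model type and low for another.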
Conclusions: Researchers should not expect the effects of predictors in machine learning models to mirror those of traditional approaches. Nevertheless, depending on researchers' goals and the nature of the data, these techniques can be powerful analytic tools, whether primary or supplementary to traditional statistical analyses.
Thomas M. Crow | Harvard Medical School/Massachusetts General Hospital, Boston, MA
Michelle B. Stein | Harvard Medical School/Massachusetts General Hospital, Boston, MA
Mark A. Blais | Harvard Medical School/Massachusetts General Hospital, Boston, MA
Dr. Michelle B. Stein
Michelle B. Stein, Ph.D., is a Psychologist at Massachusetts General Hospital (MGH) and an Assistant Professor at Harvard Medical School (HMS). She is Director of the Adult-Track Psychology Internship Training Program and Director of the Inpatient Psychology Service, where she leads the inpatient portion of the psychology internship training and conducts individual and group psychotherapy. She is a senior staff member of the Psychological Evaluation and Assessment Research Laboratory (PEaRL), where she provides supervision and conducts outpatient psychological assessments. She received her doctorate from the Derner School of Psychology at Adelphi University, completed her pre-doctoral internship at Sagamore Children’s Psychiatric Center, and completed her post-doctoral fellowship in psychological assessment at MGH/HMS. Her research has focused on the Social Cognition and Object Relations Scale-Global Rating Method (SCORS-G), and she is considered the leading expert on this measure. She has authored numerous peer-reviewed papers, is an active member of the Society for Personality Assessment, and recently published a book on the scoring and clinical implications of the SCORS-G rating system.