Session Abstract: Assessing the credibility of presented cognitive and psychological complaints is a core component of forensic mental health assessment. To advance knowledge in this field, this symposium describes a series of simulation studies (experimental malingering paradigm; ExpMAL) conducted using several different instruments. Pimentel, Kiss et al. will start off the session by presenting a study investigating whether selected scores derived from the Rorschach task can distinguish individuals attempting to appear insane from forensic inpatients who genuinely suffer from psychosis and from healthy individuals completing the task under standard conditions. Pimentel, Meyer et al. will then continue the session by presenting the results of a similar but independent investigation, conducted using the same research design and methods as the study described above, but with a different sample from a different country and language. Next, Whitman et al. will describe the results of a Minnesota Multiphasic Personality Inventory (MMPI)-3 study designed to investigate the utility of the MMPI-3 Validity Scales for detecting overreporting and underreporting, as well as the impact of these response sets on substantive scale scores. Rolfsen et al. will then describe the findings from a research project aimed at developing and validating a Norwegian version of the Inventory of Problems-29 (IOP-29), a recently introduced symptom validity test designed to discriminate credible from noncredible clinical presentations. Lastly, Boskovic et al. will close the session by reporting on an ExpMAL study designed to evaluate the extent to which the Structured Inventory of Malingered Symptomatology (SIMS), Self-Report Symptom Inventory (SRSI), and IOP-29 are vulnerable to symptom coaching of depression, with or without additional Google-available information.
Chair Information: Luciano Giromini, PhD | University of Turin, Italy
Presentation 1 Title: Comparing Committed Forensic Inpatients to Non-Patients Instructed to Malinger Insanity or Not Using Scores from the Rorschach Task and Self-Report
Presentation 1 Abstract: Malingering is described as "the intentional production of false or grossly exaggerated physical or psychological symptoms, motivated by external incentives" (APA, 2013, p. 726). Assessing and distinguishing people attempting to malinger from patients and from healthy controls is particularly important in forensic contexts, where assessors are trying to determine competency to stand trial, criminal responsibility, or the course of treatment for adjudicated individuals. Failure to assess for or identify malingering may result in multiple downsides for justice, the assessor, the respondent, and, ultimately, society. Psychosis is one of the most commonly feigned conditions in criminal responsibility evaluations, and the Rorschach task is commonly included in the battery of tests used to assess psychosis. The present study investigated whether selected scores derived from the Rorschach task can distinguish individuals attempting to appear insane (i.e., malingering) from forensic inpatients who suffer from psychosis and from healthy individuals completing the task under standard conditions. Netter and Viglione (1994) introduced a variable that differentiated malingerers from patients with schizophrenia, namely the behavior of creating "the impression that they [participants] were perceiving distortions of reality by relating to the response as if it were alive, saying things such as 'this monster will get me'" (Netter & Viglione, 1994, p. 46). We built on their brief coding material to develop detailed coding guidelines with examples and named this variable 'Breaking the Card Boundary' (BCB; Meyer et al., 2021). We also investigated whether Rorschach variables provided incremental validity over self-reported symptoms of psychotic propensity.
From the Rorschach task, we specifically tested five variables from the Rorschach Performance Assessment System (R-PAS; CritCont%, TP-Comp, FQ-%, WSumCog, and SevCog), BCB, a revised CritCont% that incorporated aggressive contents (AGC) and destructive relational representations (MAP), and revised versions of WSumCog and SevCog after accounting for BCB behavior. For self-reports, we used the Magical Ideation Scale (MIS) and Perceptual Aberration Scale (PAS). This study consisted of 185 participants: 47 nonpatient controls (age M = 19.8, SD = 2.2) who completed standard R-PAS administration (S), 50 nonpatients asked to malinger (M; age M = 19.2, SD = 1.5), and 88 incarcerated forensic inpatients deemed not guilty by reason of insanity or incompetent to stand trial (P; age M = 40.8, SD = 13.5). Our main results showed that all variables contributed to distinguishing the three groups, and the new scores (CritContR%, WSumCogNoBCB, SevCogNoBCB) showed a significant improvement in distinguishing P vs. M when compared to their original forms. We also found that CritContR%, BCB, and WSumCogNoBCB presented incremental validity over the self-report variables in distinguishing the three groups. After discussing the response process behind each variable and commenting on the limitations of this study, we suggest practical implications for clinicians and researchers.
Ruam P. F. A. Pimentel, MA | University of Toledo
Andrea Kiss, PhD | University of Toledo
Joni L. Mihura, PhD, ABAP | University of Toledo
Gregory J. Meyer, PhD | University of Toledo
Nicole Kletzka, PhD | Center for Forensic Psychiatry
Joshua J. Eblin, PhD | Center for Forensic Psychiatry
Presentation 2 Title: Comparing Patients with Schizophrenia to Non-Patients Instructed to Malinger Insanity or Not Using Scores from the Rorschach Task and Self-Report
Presentation 2 Abstract: There are many instruments for assessing symptoms of schizophrenia, but few assess whether people are intentionally faking those symptoms, a behavior known as malingering. The Rorschach task is commonly used to assess psychosis symptoms. A previous study also submitted to this symposium (Pimentel et al., 2022) identified scales from the Rorschach Performance Assessment System (R-PAS; CritCont%, TP-Comp, WSumCog, FQ-%, and SevCog) and new composite variables (CritContR%, WSumCogNoBCB, SevCogNoBCB, and BCB) that could be used to distinguish Patients (P) from Malingerers (M). Their study used a variable proposed by Netter and Viglione (1994) and updated by Meyer et al. (2021), named 'Breaking the Card Boundary' (BCB). This variable is coded when individuals try to "create the impression that they were perceiving distortions of reality by relating to the response as if it were alive, saying things such as 'this monster will get me'" (Netter & Viglione, 1994, p. 46), and it has shown potential to distinguish malingerers from patients with schizophrenia. The present study aims to replicate the results of Pimentel et al. (2022, also submitted to this symposium) in a different sample from a different country and language. Specifically, we aim to test whether the effect sizes for the Patients vs. Malingerers comparisons are maintained and whether the new composites and variables (CritContR%, WSumCogNoBCB, SevCogNoBCB, and BCB) show a similar significant change in effect sizes when compared to their original forms. In addition, we will test whether the new composites present incremental validity over self-reported variables in distinguishing the groups. If we replicate these results, it would bring us one step closer to proposing a malingering or validity scale composite for R-PAS.
If we do not replicate Pimentel et al., we will compare the limitations and differences of the two studies in order to understand their implications. The sample is drawn from a study by Guimarães Neto et al. (in press) that contains 40 participants instructed to malinger symptoms of schizophrenia after being taught about them and 35 patients diagnosed with schizophrenia. R-PAS was administered to both groups, and participants also responded to the Magical Ideation Scale and the Inventory of Problems-29 (IOP-29).
Ruam P. F. A. Pimentel, MA | University of Toledo
Gregory J. Meyer, PhD | University of Toledo
Armante Guimarães-Neto, MA | Centro Universidade de Mineiros
Philipe Vieira, PhD | Instituto de Pós-Graduação e Graduação
Anna-Elisa de Villemor-Amaral, PhD | Universidade São Francisco
Presentation 3 Title: Utility of the MMPI-3 Validity Scales for Detecting Overreporting and Underreporting and Their Effects on Substantive Scale Validity: A Simulation Study
Presentation 3 Abstract: The current study utilized an experimental design to investigate the utility of the Minnesota Multiphasic Personality Inventory (MMPI)-3 Validity Scales for detecting overreporting and underreporting and the impact of these response sets on substantive scale scores. College students completed a battery of criterion measures before assignment to a Standard Instructions (SI) Group (n = 288), an Overreporting Group (n = 250), or an Underreporting Group (n = 215). t tests demonstrated that scores on MMPI-3 overreporting indicators and most substantive scales were higher among the Overreporting Group relative to the SI Group, with very large effect sizes, and that scores on MMPI-3 underreporting indicators were higher and most substantive scale scores were lower among the Underreporting Group relative to the SI Group, with moderate to large effects. Classification accuracy estimates documented the effectiveness of the MMPI-3 Validity Scales in detecting overreporting and underreporting. Bivariate correlations between MMPI-3 substantive scale scores and criterion measures (which were completed under SIs for all three groups) were substantially attenuated for both simulation groups relative to the SI Group. Bivariate correlations were also attenuated for groups identified as overreporting or underreporting using MMPI-3 Validity Scale scores relative to individuals with valid MMPI-3 protocols, highlighting the need for and importance of appraising threats to protocol validity when assessing personality and psychopathology by self-report.
Megan R. Whitman, MA | Kent State University
Jessica L. Tylicki, PhD | The Neurobehavioral Institute, Independent Practice
Yossef S. Ben-Porath, PhD, ABPP | Kent State University
Presentation 4 Title: Symptom Coaching and SRVTs: Does Googling Symptoms Enhance Overreporting Tendencies Among Feigners on the Structured Inventory of Malingered Symptomatology, Self-Report Symptom Inventory, and Inventory of Problems-29?
Presentation 4 Abstract: We tested whether three relatively popular self-report validity tests (SRVTs) are vulnerable to symptom coaching of depression, with or without additional Google-available information. Specifically, we divided our sample (N = 193) so that each participant received either the Structured Inventory of Malingered Symptomatology (SIMS; n = 64), the Self-Report Symptom Inventory (SRSI; n = 66), or the Inventory of Problems-29 (IOP-29; n = 63). Prior to responding to the test, some participants were told to respond honestly (truth tellers, nSIMS = 21; nSRSI = 24; nIOP-29 = 26), whereas others were told to feign depression. The feigning participants were given a vignette so as to increase their compliance with instructions. Besides the vignette, all feigners also received information about symptoms of depression (coached feigners, nSIMS = 25; nSRSI = 18; nIOP-29 = 21), and some of them also received popular Google links to check before filling out the test (Google-coached feigners, nSIMS = 18; nSRSI = 24; nIOP-29 = 16). Overall, the results indicated that truth tellers obtained the lowest total scores on all three measures, whereas the two feigning groups did not significantly differ from each other. Looking at the detection rates, the IOP-29 (AUC = .97) and SIMS (AUC = .85) outperformed the SRSI (AUC = .72), and the false positive outcomes were exceptionally high for the SIMS (28%) and SRSI (17% and 25%, depending on the employed cutoff). Despite the limitations of this study, our findings provide some preliminary data on the vulnerability to symptom coaching of three relatively popular SRVTs.
Irena Boskovic, PhD | Erasmus School of Social and Behavioral Sciences, Erasmus University Rotterdam, The Netherlands
Ali Y.E. Akca, MA | University of Turin, Torino, Italy
Luciano Giromini, PhD | University of Turin, Torino, Italy