Featured Article
Article Title
Forensic evaluators’ considerations of contextual information sources in competence to stand trial cases
Authors
Christian Stephens - Department of Psychology, The University of Alabama
Jennifer Cox - Department of Psychology, The University of Alabama
Abstract
Exposure to task-irrelevant contextual information may prevent forensic mental health evaluators from reaching appropriate decisions in competency to stand trial (CST) evaluations. Context management strategies like linear sequential unmasking–expanded (LSU-E) may help reduce exposure, but the scarcity of published data on evaluators’ considerations of information sources precludes their widespread implementation. Thus, the present study examined the extent to which evaluators agreed regarding information sources’ perceived usefulness, importance, and the LSU-E criteria of biasing power, objectivity, and relevance in a CST context. Utilizing a stimulus sampling design, licensed forensic mental health evaluators (n = 66) indicated which of 22 potential information sources they would select within the context of a CST case vignette. Participants then sequenced the information sources to indicate the order in which they would view the data and rated each information source on its degree of biasing power, objectivity, and relevance. Participants generally selected collateral and CST interview information sources more frequently than clinical assessment instruments or competency assessment tools. They shared some agreement on how to sequence the sources but less agreement in their perceptions of LSU-E criteria. These findings suggest forensic mental health evaluators’ personal judgments may significantly impact the reliability of CST opinions and the utility of context management strategies.
Keywords
forensic mental health assessment, competence to stand trial, linear sequential unmasking, task relevance, information sequencing, competency, restorability, objectivity, capacity, data, efficiency
Summary of Research
“Evaluation of a defendant’s competence to stand trial (CST) is the most common forensic mental health referral. In Dusky v. United States, the United States Supreme Court outlined the parameters for courts to consider defendants’ CST… To determine if a defendant lacks one or more of these abilities, an evaluator may analyze a wide range of information including data gathered from a direct interview with the defendant, collateral sources…” (p. 1).
“To improve the quality of CST evaluations, it is critical that forensic evaluators reach this consensus. Moreover, understanding evaluators’ information sequencing patterns will facilitate the construction of optimal sequences for decreasing evaluator inconsistency… This was the goal of the present study: to examine the extent to which evaluators agreed regarding their considerations of information sources in CST evaluations…
Eligible participants included licensed forensic mental health evaluators practicing in the United States. Evaluators were recruited via professional psychology-law listservs… A case vignette detailed a hypothetical defendant accused of arson who was referred to the participant for a CST evaluation by the defendant’s attorney. Three versions of the vignette were designed, each varying in its explanation of why the defendant was referred to the evaluator. These explanations related to a single prong of the Dusky standard. For example, the vignette representing the factual understanding prong of the Dusky standard depicted the defendant as having difficulty understanding and memorizing legal concepts, whereas the vignette representing the rational understanding prong depicted the defendant as making decisions based on their extreme spiritual beliefs” (pp. 3–4).
“...Participants selected information sources from a list of 22 potential sources to indicate the information they believed they would likely consider when assessing CST. Two open-response “other” options were also available for participants to list any unaccounted items… Participants answered professional experience questions regarding their highest degree acquired, the year of degree obtainment, years of licensed practice, years spent conducting CST evaluations, the setting in which they conducted the majority of their CST evaluations, and the party from which they received the most referrals… Data were collected over the course of 13 weeks” (p. 5).
“Previous research has examined the information sources used by evaluators in addition to their general relevance to certain forensic mental health evaluations, but how evaluators perceive those sources’ biasing power, objectivity, and relevance is relatively unexplored in the realm of CST evaluations. Therefore, the present study explored evaluators’ agreement regarding their information source usage, sequencing, and LSU-E criteria ratings. Given that only examples of each information source were provided, not explanations of how they may directly contribute to the evaluation, our findings demonstrated what evaluators expected each source to contribute to their assigned context.
As a result, personal judgments played a much larger role in how evaluators considered each contextual information source, which may have contributed to the variation found for each of our variables… CST evaluators may default to a “core” set of information sources similar to that found by Göranson et al. (2022) for mental state at the time of offense evaluations… Half of the evaluators in our study suggested they would use a competency assessment tool in a CST evaluation, which is also generally consistent with previous studies. Although these measures provide a structured approach to the assessment process and specifically address psycholegal capacities, evaluators may not have selected them due to difficulty in persuading past defendants to engage, feeling that such assessments underestimate defendants’ capacities, beliefs that these measures do not improve opinion accuracy, or external factors like time management. However, Anderson et al. (2022) note some competency assessment tools may be useful in cases in which the defendant’s competency-related abilities are ambiguous.
As such, evaluators may not consider these tools as important to the assessment process initially but ultimately rely on data from these measures when more readily available—and less resource intensive—data are unclear. The reverse may also be true in that evaluators rely on competency assessment tools only when other data hint at deficiencies in the defendant’s psycholegal capacities. Both rationales are plausible and, given the descriptive nature of analyses, speculative. Future research examining the circumstances under which an evaluator uses a competency tool is needed” (pp. 8–9).
“...While participants were excluded based on their completion time, we could not identify participants who only reorganized a few sources in the list or chose to ignore later items altogether. Thus, we advise similar studies employ ranked lists comprising all sources and selected-only sources to gain the most accurate insight on these aspects of evaluators’ considerations” (p. 12).
Translating Research into Practice
“Managing the presentation of contextual information to forensic mental health evaluators may yet prove effective in mitigating cognitive biases and evaluator unreliability. However, as implied by our data, evaluators’ expectations of what information sources will provide to their CST evaluations may detract from these strategies’ usefulness and may not lend themselves to reliable understandings of their effectiveness. Ergo, diverse perceptions stemming from equally diverse backgrounds mean that establishing what information is task relevant to specific referral concerns may not be viable by obtaining the opinions of evaluators alone. Rather, consensus may complement future research efforts between clinicians and researchers studying how the task relevance of information manifests throughout an evaluation and how LSU-E may be able to control its influence. In this regard, our findings may also be valuable in illustrating how evaluators’ source selections, sequence placements, and perceptions of task relevance compare to each other, as well as to those data collected via other experimental conditions, clinical settings, and CST reports…
Having a third party control the inflow of an evaluator’s sources may help sift out task-irrelevant information, although practicality depends on the clinical setting. Likewise, it must be kept in mind that differences in context—be it with respect to the referral question, defendant, or evaluator—will always change some aspect of the evaluation itself, which restricts the generalization of a source’s task relevance…
These findings suggest a field-wide advancement toward improving evaluator reliability in CST evaluations from a context management standpoint may require additional empirical approaches. Nevertheless, this study can be used as a framework for further explorations of evaluators’ considerations of contextual information sources within other evaluation types and referral concerns” (pp. 12–13).
Other Interesting Tidbits for Researchers and Clinicians
“...Evaluators who examine a previously incompetent defendant restored through remediation services may agree even less in their CST determinations than those examining a defendant for the first time. Different clinical settings also may influence evaluators’ decision making. It is also important to consider real-world influences on decision making: the large workload many forensic mental health evaluators sustain, as well as differences in their habits, conventions, workplace guidelines, and other personal factors, may affect how, or whether, they consider the task relevance of information. Therefore, in addition to restructuring our referral concern contexts, future studies should address the above and other defendant-specific factors” (p. 13).
Additional Resources/Programs
As always, please join the discussion below if you have thoughts or comments to add!