Assessing The Clinical Performance Of Learners
Introduction

Consistent, reliable evaluation of students' and new graduates' clinical performance has long been a challenge for educators. The expanding use of simulation, which gives learners opportunities to demonstrate clinical abilities, has intensified that challenge. In particular, educators in both schools of nursing and practice agencies recognize that new graduates often lack the clinical thinking required to meet the needs of acutely ill patients (del Bueno, 2005; Gillespie & Paterson, 2009; Newton & McKenna, 2007). Furthermore, safety initiatives are being implemented in response to reports of preventable deaths in acute care settings and to related quality improvement programs (Cronenwett et al., 2007; Institute of Medicine, 1999; Joint Commission, 2010). Although critical, these initiatives have added to the complexity of nurses' work, demanding superior clinical judgment (Ebright, 2004; Ebright, Patterson, Chalko, & Render, 2003). Given the need for students and graduate nurses to be competent in clinical judgment, methods for evaluating progress in this area are of great interest.

Validity in Practice-Based Assessments

Tanner (2006) defined clinical judgment as “an interpretation or conclusion about a patient's needs, concerns, or health problems, and/or the decision to take action (or not), use or modify standard approaches, or improvise new ones as deemed appropriate by the patient's response” (p. 204). Clinical judgment must be flexible rather than linear, drawing on a variety of ways of knowing, including theoretical knowledge and practical experience (Benner, Tanner, & Chesla, 2009). Validity theory has evolved significantly over the past 30 years in response to the increased use of assessments across scientific, social, and educational settings. The overall trajectory of this evolution reflects a shift from a purely quantitative, positivistic approach to a conception of validity that relies on interpreting multiple sources of evidence and integrating them into validity arguments. Within contemporary validity theory, interpretation is emphasized as a central process; despite this emphasis, however, few explicit interpretive methodologies applicable to the practice of validation have been articulated.

Reliability is a measure of consistency. Many factors enter into making clinical judgments that cannot be measured or represented in a rubric (Lasater, 2011; Tanner, 2006); therefore, evaluation data from the Lasater Clinical Judgment Rubric (LCJR) should be considered one component, or a snapshot in time, of a broader evaluation picture. Second, as the results of the studies described in this article show, reliability estimates are affected by characteristics of both the raters and the scenarios. In classical test theory, the observed score is the combination of the true score and any error of measurement. A limitation of classical test theory is that measurement error is treated as a single, undifferentiated entity. However, when the goal of a performance appraisal is to evaluate a learner's ability to respond to a clinical problem presented in the highly realistic setting of simulation, identifying extraneous sources of variability becomes important. In performance-based evaluations there are several sources of variability, including the raters, the simulation case, and the learner's ...
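As a brief illustration of the classical test theory model described above (a sketch added here for clarity, not part of the cited studies), the observed score X is modeled as the sum of a true score T and a single error term E, and reliability is then defined as the proportion of observed-score variance attributable to true scores:

```latex
X = T + E, \qquad
\rho_{XX'} \;=\; \frac{\sigma_T^{2}}{\sigma_X^{2}}
\;=\; \frac{\sigma_T^{2}}{\sigma_T^{2} + \sigma_E^{2}}
```

Because classical test theory lumps all error into the one variance term \(\sigma_E^{2}\), it cannot say how much inconsistency comes from raters versus scenarios versus occasions. Approaches that partition the error variance into separate components (e.g., \(\sigma_E^{2} = \sigma_{\text{rater}}^{2} + \sigma_{\text{case}}^{2} + \sigma_{\text{residual}}^{2}\), as in generalizability theory) address exactly the need identified above: isolating extraneous sources of variability in performance-based evaluations.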