Message from the Dean: University of Bridgeport, College of Naturopathic Medicine


Marcia Prenguber, ND, FABNO

The faculty and staff of the College of Naturopathic Medicine at the University of Bridgeport (UBCNM), like our colleagues in other schools, are continuously looking for strategies and methods to enhance our program and to create better tools and techniques to improve our outcomes and the success of our students.

From a physical environment standpoint, in the past 24 months we built a new 11,000 sq ft naturopathic medical clinic within the UB clinics. This clinic encompasses the entire 8th floor of the Health Sciences Center building on campus, with spacious exam and treatment rooms, a new hydrotherapy suite, and 2 additional clinic conference rooms for case discussions. We are currently designing a new research lab in conjunction with the other health sciences programs here at the University. On the instructional side, additional faculty with specialty training, such as Patrick Fratellone, MD, an integrative cardiologist, and Allison Buller, PhD, a full-time faculty member of the UB Counseling and Psychology program, have joined our team, enhancing both the academic and clinical programs. One of our most recent efforts, implemented in March and April of 2016, is a redesign of the procedures for assessing students’ clinical skills and competencies.

Improving Clinical Assessment

The clinical assessment process includes mid-term and end-of-term evaluations by the clinical supervising physicians, intended to provide feedback to students regarding their demonstration of clinical skills on each rotation. In addition, clinical examinations are administered near the end of the academic year to evaluate each student’s skills and competencies, and their readiness for the next step in training or for graduation. These practical clinical exams serve both formative and summative functions: students are evaluated for the progress they have made, and the evaluation is used to identify areas of need. At UBCNM, these exams assess professionalism, communication skills, and competencies in assessment, evaluation and interpretation of findings, medical reasoning, diagnosis, development of appropriate treatment recommendations and referrals, and charting. The exams are progressively challenging, demanding greater skill and broader competence at each level of the program. Clinic Entrance Exams are administered near the end of the second year, Clinic Promotion Exams near the conclusion of the third year, and Clinic Exit Exams about 10 weeks before graduation. This provides adequate time for remediation and reassessment of identified areas of need before the student moves to the next step in the program or on to graduation.

In years past, the Clinic Exams consisted of a student’s encounter with a standardized patient (a volunteer trained to portray a patient), with a faculty evaluator present in the exam room, observing and concurrently documenting findings as the patient encounter proceeded. The faculty evaluators would share the results with the student days later, following a clinic faculty meeting to discuss findings. The system worked, although we were largely unable to adequately evaluate inter-rater reliability. Redesigning the exam procedure allowed us to improve the reliability and objectivity of the assessment. Evaluation rubrics had been developed and successfully implemented in our clinical examinations in recent years, and we have maintained that approach.

The most evident change to the process was to replace the evaluator in the exam room with a camcorder or webcam, with evaluators viewing the exam recording at a later date. This removed the problem of an action or statement being missed if the evaluator glanced away or was unable to hear the discussion. It also eliminated any conscious or unconscious influence that an evaluator might have on the student or process during the exam. Each student’s exam, consisting of the recorded patient encounter, chart notes, and a recorded case presentation, was independently reviewed and scored by 2 or more evaluators. In turn, this has given the faculty and administrative team an opportunity to examine inter-rater reliability through the review and comparison of 2 or more scored rubrics for each test-taker. This evaluation will contribute to the ongoing redesign of the evaluator training process, toward a goal of developing more effective and more objective tools for assessing students’ clinical competencies.
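
To illustrate how 2 or more independently scored rubrics might be compared, here is a minimal sketch in Python, assuming a simple categorical rating for each rubric item; the item ratings, category labels, and function names are hypothetical and are not part of UBCNM’s actual process. It computes raw percent agreement and Cohen’s kappa, a common chance-corrected measure of inter-rater reliability.

```python
# Minimal sketch (hypothetical, not UBCNM's actual tooling): compare two
# evaluators' rubric ratings for one test-taker using percent agreement
# and Cohen's kappa. Item ratings and categories are illustrative only.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of rubric items on which the two evaluators gave the same rating."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for the agreement expected by chance alone."""
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same category at random,
    # given how often each rater used each category.
    expected = sum(
        (counts_a[cat] / n) * (counts_b[cat] / n)
        for cat in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    # Hypothetical ratings on a 3-level scale for ten rubric items.
    evaluator_1 = ["meets", "meets", "exceeds", "below", "meets",
                   "meets", "below", "meets", "exceeds", "meets"]
    evaluator_2 = ["meets", "below", "exceeds", "below", "meets",
                   "meets", "meets", "meets", "exceeds", "meets"]
    print(f"Percent agreement: {percent_agreement(evaluator_1, evaluator_2):.2f}")
    print(f"Cohen's kappa:     {cohens_kappa(evaluator_1, evaluator_2):.2f}")
```

In this illustrative example, the two evaluators agree on 8 of 10 items (80% agreement), which corresponds to a kappa of roughly 0.62 once chance agreement is accounted for.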

We have administered each of the 3 clinical exams (Entrance, Promotion, and Exit Exams) in this new format once thus far, and we have learned from each. Having a few weeks between the various exams allowed us to make revisions that improved the process. We learned what information was important for students to have in advance of the test day, altered components of the process in ways that improved the overall experience for the test-taker, and refined the rubrics to improve inter-rater reliability.

Improving Clinical Exam Recordings

Recording clinical examinations raises considerations that differ from those of the more traditional approach, in which an evaluator is present in the room. Here are a few recommendations:

  • Edit the rubrics carefully for clarity and to express objective measures of competence. Varying interpretations by the evaluators of the salient aspects of the case, and of the level of competency demonstrated by a test-taker, are signs of an unclear rubric. Look for wording that is unclear, inconsistent, confusing, or simply too broad, or for competencies that require subjective analysis by the evaluator. Work with a series of readers, including evaluators, to create objective measures that are clearly written. Discuss the language used in each rubric, assessing the potential for multiple interpretations, and edit until it is clear and consistently interpreted.
  • Discuss accepted variations in the performance of physical exams as part of evaluator training. Style variation in the performance of physical exams may result in variable scoring. This training also reduces an evaluator’s inclination to assign extra points for demonstrated skills that are not appropriate to the case. Variation among evaluators’ scores in other areas may indicate a similar need for additional discussion of how to assess the more subtle aspects of each competency.
  • Coach evaluators to make notes regarding the demonstration of each competency as the recording proceeds. These notes are indispensable when reviewing the outcomes with students, to clarify concerns identified while viewing the recording.
  • Evaluate cases for “playability,” assessing whether the symptoms can be convincingly communicated by the actor. Can the recording equipment capture the interviewing process as well as all aspects of the physical exam(s)? Playing out the case in advance can also determine the best camera angle for the exam.
  • Identify each objective before creating the cases and associated rubrics. Overlap among the skills being scored can cause confusion about which competency is being evaluated. Creating the cases and rubrics around previously identified objectives reduces that risk.
  • Keep scoring to a traditional method, using a standard scale, such as 0-100, and building the weight of each category into the initial process; a brief scoring sketch follows this list. Using other approaches (eg, a scale of 0-30) or weighting the scores after the evaluator has completed their process leads to confusion.
  • Anticipate students’ concerns, and know that some anxiety is inevitable. While the students expressed anxiety about the recording process in advance of the examination day, the recording component was not a significant concern during the patient encounters. Use of a familiar charting outline for documenting findings reduces another potential source of concern for students.
  • Collaborate with faculty members who teach the aspects of the curriculum that are most clearly evaluated in these exams, such as physical exam and orthopedic assessment. Their involvement with the case and rubric development is instrumental in creating a cohesive process.
  • Provide students a low-stakes encounter with the test format. Recording skills in the classroom as they are learned can reduce students’ anxiety in the test-taking environment, as well as support the learning process. Students can film one another using smartphones, webcams, or any of a number of electronic devices. When they review and evaluate their own and/or their classmates’ performance(s), they gain a measure of objectivity about what really took place, as compared to what they remember doing or saying during the encounter.
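
As a companion to the scoring recommendation above, the following minimal sketch, with hypothetical categories and point values, shows one way to build category weights directly into a 0-100 rubric so that an evaluator’s raw marks sum to the final score, with no rescaling after scoring is complete.

```python
# Minimal sketch (hypothetical categories and weights): build category weights
# into a 0-100 rubric up front, so the evaluator's raw marks sum directly to a
# final score and no post-hoc weighting or rescaling is needed.

# Maximum points per rubric category; the maxima sum to 100 and encode the weights.
CATEGORY_MAX = {
    "communication": 20,
    "physical_exam": 25,
    "assessment_and_diagnosis": 25,
    "treatment_plan": 20,
    "charting": 10,
}

def total_score(awarded):
    """Sum the evaluator's marks after checking each stays within its category maximum."""
    for category, points in awarded.items():
        maximum = CATEGORY_MAX[category]
        if not 0 <= points <= maximum:
            raise ValueError(f"{category}: {points} is outside 0-{maximum}")
    return sum(awarded.values())

if __name__ == "__main__":
    # Hypothetical marks from one evaluator for one test-taker.
    marks = {
        "communication": 17,
        "physical_exam": 21,
        "assessment_and_diagnosis": 20,
        "treatment_plan": 16,
        "charting": 9,
    }
    print(f"Final score: {total_score(marks)} / 100")  # Final score: 83 / 100
```

Because the weights live in the rubric itself rather than in a separate calculation, every evaluator works from the same scale and the scores can be compared directly across raters and test-takers.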

Using the simple technology of a webcam and laptop to record clinical exams has made a significant difference in our ability to assess students’ progress in the clinical realm and to review it with them. This detailed feedback, not only for the student, but also for the faculty member, the evaluator, and the administrator, takes us to a new level, creating a feed-forward loop that allows us to evaluate and improve our own skills in instruction and assessment.
