EyeWorld is the official news magazine of the American Society of Cataract & Refractive Surgery.
Issue link: https://digital.eyeworld.org/i/376249
EW RESIDENTS 86 September 2014

Cataract tips from the teachers

The secret is in the sauce: don't blink!

Objectively evaluating residents' cataract surgical progress is not an easy task. Developing meaningful metrics in surgical training to assess skill proficiency and consistency is an area of active interest. In this column, three of our cataract teachers share their views on a recent publication regarding an evaluation tool for quantitatively measuring performance of the hydrodissection and phacoemulsification steps of cataract surgery in resident surgical training. Developing an assessment tool in cataract surgery is not new; previous tools include the GRASIS and the ICO-OSCAR. Validating the use of videography to assess performance by non-supervising surgeons may become an important component of resident surgical training (the secret sauce) and serve as an additional method to evaluate resident cataract surgical training progress.

Sherleen Chen, MD, and Roberto Pineda, MD

Sherleen Chen, MD
Assistant professor of ophthalmology, Harvard Medical School
Director of Cataract and Comprehensive Ophthalmology, Massachusetts Eye and Ear Infirmary

Roberto Pineda, MD
Assistant professor of ophthalmology, Harvard Medical School
Director of Refractive Surgery, Massachusetts Eye and Ear Infirmary

Shameema Sikder, MD
Assistant professor of ophthalmology
Johns Hopkins University School of Medicine, Baltimore

Dr. Smith and colleagues attempt to address two serious considerations in surgical education, beyond the steps of hydrodissection and phacoemulsification. One, can an assessment tool be modified to be more focused and achieve greater inter-rater reliability? It is not feasible for attending surgeons to fill out long assessment tools at the end of each case in a busy operating day with a trainee. The reality is that operating with a trainee is often time consuming and may reduce the attending's surgical volume. However, without consistent evaluation, a trainee may not receive adequate feedback to improve surgical skills. Furthermore, it is of utmost importance that the evaluation tool can be used by multiple mentors and still provide meaningful feedback.

Two, can this assessment tool be effective in evaluating video instead of live surgery? Video review allows additional faculty members who were not present at the time of surgery to provide meaningful feedback, and it adds the flexibility of reviewing a case at a later time. While this paper reviewed a limited number of videos, it lays the framework for further studies. It would be great to see this protocol implemented across different institutions within the U.S. and abroad to see if the assessment trends hold.

Often there is a focus on how to evaluate junior surgeons, but there is minimal discussion of how senior surgeons are taught to evaluate. A potential extension of this paper is the publication of video examples that correlate with performance assessment. Much as the lens opacity classification system provided a tangible example for grading cataracts, a validated video library may help standardize different levels of surgical performance.

Ultimately, computer-aided objective skill assessment may provide the most consistent feedback to the novice surgeon. This type of feedback would mirror some of the instant metrics available to trainees who use surgical simulators. Nevertheless, in order to guide a helpful face-to-face discussion between a senior and junior surgeon on improving surgical skill, validated assessment tools have tremendous value.

Shobha Boghani, MD
Associate clinical professor
Flaum Eye Institute, University of Rochester, N.Y.

The study by Smith et al provides an objective evaluation tool for two steps of cataract surgery. The surgical videos were graded by 7 to 8 observers (not the attending surgeon), which makes the evaluation objective. The authors found less interobserver variability with some surgical steps than with steps that required three-dimensional viewing and hence were more difficult to assess on a video. Some of these steps are easier to assess in real time by the attending surgeon, who has a three-dimensional view. One of the challenges of grading a surgical video is that the microscope may not be centered on the scene of action, which makes it difficult to get a good view. As the authors suggest, an additional side view may be helpful for grading some of the steps.

It would have been helpful to know the density of the cataract (soft vs. brunescent) as well as the presence of pseudoexfoliation. Did the 39th case, the one with the complication, have any of these features? Was there any intervention after the case with the complication? This may also account for the improved performance on the following case. Comparing resident surgeries (viewing time 23.9 minutes) to a very experienced surgeon (viewing time 5.6 minutes) introduces an automatic bias based on time; perhaps this was done to use the experienced surgeon as a benchmark for the shortest surgical time as well as the highest scores.

This study provides a useful method for objective evaluation of two parts of cataract surgery. At our residency program, we have been trying to introduce some steps of cataract surgery at an earlier stage of residency, during the second year. This helps residents feel comfortable handling and using instruments inside the eye. The residents start with insertion of the intraocular lens and irrigation and aspiration of cortical material. Prior to doing live surgery, they practice in the wet lab, where they perform most surgical steps on pig eyes as well as the Kitaro eye model. They also attend courses conducted by various companies in their second year of residency.