(Tech Xplore)—A team of researchers at the University of Georgia has used a machine-learning algorithm to build a system that accurately rates a surgeon's suturing skill. In their paper uploaded to the preprint server arXiv, the team describes how the algorithm was developed and trained, and how its accuracy compared with that of human assessors.
To make it all the way through medical school, internships and other training to become a surgeon, a student must demonstrate a variety of skills, both mental and physical. One of the most basic is suturing a wound and tying off the stitches. But as it turns out, despite showing sufficient proficiency to become certified, not all surgeons are created equal when it comes to closing up a patient's wounds.
As part of an effort to apply the kind of rigorous testing methodologies used in drug development to physical procedures, the researchers sought to test the feasibility of systems that watch a surgeon perform a procedure and rate the surgeon's skill. To that end, they created a system meant to judge just one skill: applying sutures and tying them off.
To create such a system, the researchers filmed 41 surgeons and nurses suturing test boards made of foam—each wore accelerometers on their hands to capture all of the action. The team then showed the videos to a clinician, who rated the skill level of the subjects shown in them. Next, the videos were fed to a computer running a machine-learning algorithm along with the clinician's scores, giving the system a basis for rating the work under review. Finally, the clinician's scores were withheld and the system was asked to rate suturing skill on its own. In looking at the results, the team found the new system to be 93.2 percent accurate in matching the original clinician's ratings.
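The pipeline described above—reduce each video to features, pair them with a clinician's ratings, then score held-out videos and measure agreement—can be sketched in miniature. The code below is a hypothetical illustration, not the authors' method: the motion features are simulated, and a simple 1-nearest-neighbour model stands in for their actual algorithm.

```python
import random

random.seed(0)

def fake_video_features(skill):
    """Simulate motion features whose statistics vary with skill (1-5).

    Assumption for illustration only: smoother, faster suturing means
    lower hand jitter and shorter task duration.
    """
    jitter = 5.0 - skill + random.gauss(0, 0.3)
    duration = 60.0 - 8.0 * skill + random.gauss(0, 2.0)
    return (jitter, duration)

# Build a labelled dataset: feature vectors paired with clinician ratings,
# then split into training and held-out videos.
data = [(fake_video_features(s), s) for _ in range(8) for s in range(1, 6)]
random.shuffle(data)
train, test = data[:30], data[30:]

def predict(features):
    """1-NN: return the clinician rating of the closest training video."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], features))[1]

# Agreement with the clinician on held-out videos, analogous in spirit to
# the 93.2 percent figure reported for the real system.
agreement = sum(predict(f) == label for f, label in test) / len(test)
print(f"agreement with clinician: {agreement:.1%}")
```

The key design point the article implies is the last step: the clinician's scores are used only for training, and accuracy is measured by how often the model's independent ratings match theirs on videos it has not scored before.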
The team also found that including data from the accelerometers did not improve accuracy; it actually reduced it. The researchers suggest their system, or one like it, might one day let surgeons in training get feedback on their skill level before undergoing critique by other surgeons.