Analysis of Computer Vision Methods for Counting Surgical Instruments

Gustavo Chavez, David Y. Zhao, +2 authors, D. Amanatullah

2020 · DOI: 10.1177/1553350620956425
Surgical Innovation · Citations: 2

TLDR

A literature search of computer vision studies on the detection or localization of surgical instruments outside of the surgical field highlights 4 studies that provide insight into both the feasibility and challenges of utilizing existing computer vision techniques to build a system that can perform the surgical count.

Abstract

Dear Editor,

The surgical count is the primary method to account for and manage surgical instruments, needles, and sponges during operative procedures. However, a miscount, a discrepancy between counted and deployed instruments, is estimated to occur in roughly 1 in 140 cases. These events often result in significant costs to hospitals and patients each year due to lost operating room (OR) time and secondary imaging procedures. An estimated 1 in 70 miscounts results in a retained surgical instrument, causing patient harm as well as costly reoperation and litigation. Given the high cost of OR time and the current burden of counting procedures, a more accurate and less labor-intensive counting system is clearly needed.

In recent years, there has been tremendous growth in machine learning technology applied to healthcare. Computer vision techniques are now applied across many medical domains and are most visible in the context of minimally invasive and endoscopic surgery. We conducted a literature search of computer vision studies on the detection or localization of surgical instruments outside of the surgical field. We highlight 4 studies that provide insight into both the feasibility and the challenges of utilizing existing computer vision techniques to build a system that can perform the surgical count. The 4 studies, summarized in Table 1, implemented a wide range of computer vision techniques to localize different types of surgical items, with relatively high detection accuracies ranging from 89% to 95%. Various algorithms were tested, including instrument barcoding with template matching, random forests, and convolutional neural networks. Three studies presented their object detection models in the context of a robot manipulator that could pick up the detected instrument.
If computer vision is to see widespread adoption as a modality for performing the surgical count, the underlying object detection and tracking algorithms must be robust to the large number and variety of surgical objects. The categories of objects present in an operative setting include soft disposables, such as laparotomy sponges; hard disposables, such as surgical needles; and instruments, such as hemostats. However, all 4 studies considered object detection only on a limited set of instruments and hard disposables. In particular, none attempted detection of surgical sponges or needles, 2 of the most commonly miscounted items in the OR.

Data standardization and algorithmic benchmarking are another concern, as only 2 studies released their datasets publicly. Both datasets are limited in size (3200 and 3009 images, respectively), number of object categories, and types of objects: Zhou and Wachs considered 5 instrument categories (scalpel, retractor, hemostat, scissors, and Babcock forceps), while Lavado considered 4 (scalpel, straight dissection clamp, straight Mayo scissors, and curved Mayo scissors). Furthermore, neither dataset contains annotations with object identifiers for tracking objects across a sequence of images. A collaborative effort between research groups should be established to create and publicly release a sufficiently large and varied dataset for detection and tracking. This would allow a standardized process for evaluating the performance of computer vision algorithms at the surgical counting task.
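One building block of the standardized evaluation proposed above would be a shared scoring rule for detections. A common convention, assumed here rather than taken from the cited studies, is to count a prediction as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5 and the category matches. A minimal sketch:

```python
# Hedged sketch of IoU-based detection scoring; the (x_min, y_min,
# x_max, y_max) box format and 0.5 threshold are conventional
# assumptions, not specifications from the surveyed studies.

def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical boxes: a predicted hemostat vs its annotation.
pred = (10, 10, 50, 50)
truth = (12, 8, 48, 52)
print(iou(pred, truth) >= 0.5)  # -> True
```

Extending such annotations with per-object identifiers, as noted above, would additionally allow tracking-oriented metrics that score identity consistency across a sequence of images.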