
Deep learning for anatomical interpretation of video bronchoscopy images

Authors
Yoo, JY  | Kang, SY  | Park, JS | Cho, YJ | Park, SY  | Yoon, HI | Park, SJ | Jeong, HG | Kim, T
Citation
Scientific Reports, 11(1):23765, 2021
Journal Title
Scientific Reports
ISSN
2045-2322
Abstract
Anesthesiologists commonly use video bronchoscopy to facilitate intubation or confirm the location of the endotracheal tube; however, depth and orientation in the bronchial tree can often be confused because anesthesiologists cannot trace the airway from the oropharynx when the bronchoscope is advanced through an endotracheal tube. Moreover, the decubitus position is often used in certain surgeries. Although rare, misinterpretation of the tube location can cause accidental extubation or endobronchial intubation, which can lead to hyperinflation. Thus, video bronchoscopy with an artificial-intelligence-based decision support system would be useful in the anesthesiologic process. In this study, we aimed to develop an artificial intelligence model, robust to rotation and covering, using video bronchoscopy images. We collected video bronchoscopic images from an institutional database. The collected images were automatically labeled as the carina or the left/right main bronchus by an optical character recognition engine. Excluding 180 images reserved for the evaluation dataset, 80% of the images were randomly allocated to the training dataset; the remaining images were assigned to the validation and test datasets in a 7:3 ratio. Random image rotation and circular cropping were applied. Ten pretrained models, each with fewer than 25 million parameters, were trained on the training and validation datasets, and the model with the best prediction accuracy on the test dataset was selected as the final model. Six human experts reviewed the evaluation dataset and inferred the anatomical locations so that their performance could be compared with that of the final model. In the experiments, 8688 images were prepared and assigned to the evaluation (180), training (6806), validation (1191), and test (511) datasets. The EfficientNetB1 model showed the highest accuracy (0.86) and was selected as the final model. On the evaluation dataset, the final model performed better (accuracy, 0.84) than almost all human experts (0.38, 0.44, 0.51, 0.68, and 0.63), and only the most experienced pulmonologist showed comparable performance (0.82). The performance of the human experts was generally proportional to their experience. The performance difference between anesthesiologists and pulmonologists was most marked in discrimination of the right main bronchus. Using bronchoscopic images, our model could distinguish anatomical locations among the carina and both main bronchi under random rotation and covering. Its performance was comparable to that of the most experienced human expert. This model can serve as a basis for designing a clinical decision support system with video bronchoscopy.
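To make the described training pipeline concrete, the sketch below shows one way to combine random rotation and circular-cropping augmentation with a pretrained EfficientNetB1 (about 7.8 million parameters, under the 25-million-parameter limit mentioned in the abstract) for three-class classification of bronchoscopy frames. This is a minimal illustration under assumed settings (Keras/TensorFlow, 240x240 inputs, ImageNet weights, Adam optimizer), not the authors' published code; dataset objects and hyperparameters are placeholders.

```python
# Minimal sketch, assuming a Keras/TensorFlow setup; NOT the authors' code.
# Illustrates: random rotation + circular cropping ("covering") augmentation,
# then fine-tuning a pretrained EfficientNetB1 to classify frames as
# carina, left main bronchus, or right main bronchus.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 240      # default EfficientNetB1 input resolution (assumption)
NUM_CLASSES = 3     # carina / left main bronchus / right main bronchus

def circular_crop(images):
    """Zero out pixels outside the inscribed circle, mimicking the
    circular field of view of a bronchoscope."""
    yy, xx = np.ogrid[:IMG_SIZE, :IMG_SIZE]
    radius = IMG_SIZE / 2
    mask = ((yy - radius) ** 2 + (xx - radius) ** 2 <= radius ** 2)
    # Broadcast the (H, W, 1) mask over the batch and channel dimensions.
    return images * mask.astype("float32")[..., None]

augmentation = tf.keras.Sequential([
    layers.RandomRotation(factor=0.5),   # random rotation, up to +/-180 degrees
    layers.Lambda(circular_crop),        # circular cropping
])

backbone = tf.keras.applications.EfficientNetB1(
    include_top=False, weights="imagenet",
    input_shape=(IMG_SIZE, IMG_SIZE, 3), pooling="avg")

model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
    augmentation,
    backbone,
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)  # datasets are placeholders
```

In the study, ten candidate backbones were trained in this fashion and the one with the best test-set accuracy (EfficientNetB1) was kept; the 180/80%/7:3 dataset split described above is performed before training and is not shown in this sketch.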
MeSH

DOI
10.1038/s41598-021-03219-6
PMID
34887497
Appears in Collections:
Journal Papers > School of Medicine / Graduate School of Medicine > Anesthesiology & Pain Medicine
Ajou Authors
강, 세윤  |  박, 성용  |  유, 지영
Full Text Link
Files in This Item:
34887497.pdf