Abstract: Currently, IVOCT is the only imaging technique with the resolution necessary to identify vulnerable thin-cap fibroatheromas (TCFAs). IVOCT also has greater penetration depth in calcified plaques than Intravascular Ultrasound (IVUS). Despite these advantages, IVOCT image interpretation is challenging and time-consuming, with over 500 images generated in a single pullback. In this poster, we propose a method to automatically classify A-lines in IVOCT images using a convolutional neural network. Conditional random fields were used to clean the network's predictions across frames. The network was trained on a dataset of nearly 4,500 image frames across 48 IVOCT pullbacks. Ten-fold cross-validation with held-out pullbacks resulted in a classification accuracy of roughly 76% for fibrocalcific, 84% for fibrolipidic, and 85% for other. Classification results across frames, displayed in en face view, closely matched their annotated counterparts.
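The cross-frame cleanup step described above can be illustrated with a minimal sketch: for one A-line position, per-frame class scores from a classifier are decoded with Viterbi under a linear-chain model whose pairwise term penalizes label switches between adjacent frames. This is only an illustration of the idea, not the poster's implementation; the function name, the Potts-style `switch_penalty`, and the toy probabilities are all assumptions.

```python
import numpy as np

def viterbi_smooth(log_probs, switch_penalty=1.0):
    """Most likely label sequence across frames for one A-line position.

    log_probs: (n_frames, n_classes) per-frame class log-probabilities
        (e.g., from a CNN; here fibrocalcific / fibrolipidic / other).
    switch_penalty: cost applied whenever the label changes between
        adjacent frames (a simple Potts-style pairwise term).
    """
    n_frames, n_classes = log_probs.shape
    score = log_probs[0].copy()                 # best score ending in each class
    back = np.zeros((n_frames, n_classes), dtype=int)
    for t in range(1, n_frames):
        # trans[p, c]: score of arriving in class c from class p
        trans = score[:, None] - switch_penalty * (1 - np.eye(n_classes))
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0) + log_probs[t]
    # Backtrack the best path from the final frame.
    path = np.empty(n_frames, dtype=int)
    path[-1] = score.argmax()
    for t in range(n_frames - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

With a positive penalty, an isolated single-frame disagreement with its neighbors is flipped to match them; with `switch_penalty=0.0` each frame's noisy label survives, which mimics the raw, uncleaned network output.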
Best Poster Finalist (BP): no
Poster summary: PDF
Reproducibility Description Appendix: PDF