
Original scientific paper

https://doi.org/10.32985/ijeces.13.8.3

Transfer Learning Based Deep Neural Network for Detecting Artefacts in Endoscopic Images

Kirthika Natarajan ; School of Engineering, Avinashilingam Institute for Home Science and Higher Education for Women, Varapalayam, Coimbatore, Tamilnadu 641 108, India.
Sargunam Balusamy ; School of Engineering, Avinashilingam Institute for Home Science and Higher Education for Women, Varapalayam, Coimbatore, Tamilnadu 641 108, India.


Full text: English PDF, 1.030 Kb

Pages: 633-641



Abstract

Endoscopy is typically used to visualize various parts of the digestive tract. The technique is well suited to detecting abnormalities such as cancers and polyps, taking a tissue sample (biopsy), or cauterizing a bleeding vessel. Video and images are generated during the procedure, and these can be affected by eight different artefacts: saturation, specularity, blood, blur, bubbles, contrast, instrument, and miscellaneous artefacts such as floating debris and chromatic aberration. Frames affected by artefacts are mostly discarded, as the clinician can extract no valuable information from them; this also hampers subsequent post-processing steps. Based on the transfer learning approach, three state-of-the-art deep learning models, namely YOLOv3, YOLOv4 and Faster R-CNN, were trained with images from the EAD public datasets and a custom dataset of endoscopic images of Indian patients, annotated for the artefacts mentioned above. The training images were data-augmented and used to train all three artefact detectors. The predictions of the artefact detectors are combined to form an ensemble model whose results outperform existing works in the literature, achieving a mAP score of 0.561 and an IoU score of 0.682. An inference time of 80.4 ms was recorded, the best reported in the literature.
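The abstract states that the predictions of the three detectors are combined into an ensemble, but does not specify the fusion rule. A common scheme for pooling detections from multiple models is greedy, score-ordered suppression of same-class overlapping boxes (cross-model NMS). The sketch below illustrates that idea; the box format, threshold, and function names are assumptions for illustration, not the paper's implementation.

```python
def iou(a, b):
    # Intersection-over-Union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def ensemble(detections, iou_thr=0.5):
    # detections: (box, score, label) tuples pooled from all three models.
    # Keep the highest-scoring box first; suppress later same-label boxes
    # that overlap a kept box above iou_thr (one plausible fusion rule).
    kept = []
    for box, score, label in sorted(detections, key=lambda d: -d[1]):
        if all(l != label or iou(box, b) < iou_thr for b, s, l in kept):
            kept.append((box, score, label))
    return kept
```

For example, two near-identical "blur" boxes from different detectors collapse to the higher-scoring one, while a distant "blood" box is retained unchanged.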

Keywords

Deep Learning; Artefacts; Endoscopy; Transfer Learning

Hrčak ID:

285427

URI

https://hrcak.srce.hr/285427

Publication date:

10.11.2022.
