Deep Learning Architecture for Skin Cancer Segmentation

In a recent skin cancer research study, Dr. Bülent Bayram and his team at the Yildiz Technical University Department of Geomatic Engineering in Istanbul, Turkey, used our vision AI platform to perform image segmentation for the early detection and analysis of skin cancer. As experts in photogrammetry, image processing, and machine learning, the Yildiz team is leveraging Plainsight vision AI to enable efficient and accurate data curation for their research. Read about their use of Plainsight for AI-based cancer research with medical imaging below.

 ———-

Part 1 in a blog series


LABELLING FOR AUTOMATIC SKIN LESION SEGMENTATION

 

Bülent Bayram, Tolga Bakirman, and Esra Sunker, Yildiz Technical University Department of Geomatic Engineering, Istanbul/Turkey; and Buket Bayram, M.D., Dermatology Clinic, Istanbul/Turkey

Cancer is one of the leading diseases that threaten human life today. Cancer’s rapid progression and challenges in diagnosis make the treatment of the disease a complex process. However, with early diagnosis and correct treatment, patients can regain their former health. Today, artificial intelligence (AI) technologies are widely utilized to facilitate the early diagnosis of cancer.

In order to develop AI applications, huge amounts of annotated data are required for efficient training. Generating the dataset is the most time-consuming part of most AI applications. It is crucial to have a large volume of data and highly accurate labels for the training of a state-of-the-art AI application. To ease this process, we benefit from Plainsight Data Annotation, which provides efficient labelling tools and a flexible user experience.

 

“Although the TrackForward function was originally developed for labelling video frames, it has also accelerated the labelling of many images using AI, as it automatically extracts the labels of similar dermoscopy images.”

 

In this use-case, we aim to train a deep learning architecture for skin cancer segmentation. For this project, we used the International Skin Imaging Collaboration (ISIC) 2019 – Skin Lesion Analysis Towards Melanoma Detection dataset. The ISIC 2019 dataset consists of approximately 25,000 dermoscopic images from various sources such as the BCN_20000, HAM10000, and MSK datasets. The goal of ISIC 2019 is the classification of dermoscopic images into 8 categories, namely:

1. Melanoma
2. Melanocytic nevus
3. Basal cell carcinoma
4. Actinic keratosis
5. Benign keratosis (solar lentigo / seborrheic keratosis / lichen planus-like keratosis)
6. Dermatofibroma
7. Vascular lesion
8. Squamous cell carcinoma

As you can see above, the ISIC 2019 dataset only provides class names for each case. However, the boundaries of the lesions are also crucially important for diagnosis. Therefore, considering the number of images and the challenges of manual labelling, we chose Plainsight's platform to annotate each lesion's boundary. The Plainsight platform provides vision AI-based tools such as SmartPoly and TrackForward.
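For reference, the ground truth distributed with ISIC 2019 is a per-image class table rather than a set of lesion masks. A minimal sketch of inspecting it, assuming the challenge's ISIC_2019_Training_GroundTruth.csv file with an image column followed by one-hot class columns, might look like this:

```python
# Sketch: inspecting the ISIC 2019 ground truth, which holds only per-image
# diagnostic classes (one-hot columns), not lesion boundary masks.
# File name and column layout are assumed to match the challenge download.
import pandas as pd

gt = pd.read_csv("ISIC_2019_Training_GroundTruth.csv")
class_columns = [c for c in gt.columns if c != "image"]

# Number of images per diagnostic class
print(gt[class_columns].sum().sort_values(ascending=False))
```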

Plainsight’s AI-powered annotation feature, AutoLabel, has been used in a variety of Yildiz projects. Given the custom nature of dermatological image classes, a custom model would be used here. In the next installment of this blog series, we’ll highlight how annotation of custom medical datasets can be accelerated 20x with AutoLabel.

After the appropriate label classes were defined, the SmartPoly feature was used predominantly on images with high contrast, and very good results were obtained.

 

“…the SmartPoly function was found to be very useful, as it reduces the time spent on labelling by almost half compared to the manual labelling process.”

 

With this feature, once a minimum bounding box is drawn around the lesion, the function automatically extracts the boundary in accordance with the lesion size. Efficient and accurate results were obtained with this feature, as can be seen below. All labelled data have been evaluated by a dermatology expert (M.D.).
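Plainsight does not publish SmartPoly's internals, but the general idea of turning a drawn bounding box into a lesion boundary can be illustrated with a classical box-initialised method such as OpenCV's GrabCut. The sketch below is only an analogy, not the actual SmartPoly implementation; the file name and box coordinates are placeholders:

```python
# Illustrative box-prompted boundary extraction with OpenCV GrabCut.
# This is NOT Plainsight's SmartPoly; it only demonstrates the idea of
# turning a bounding box around a lesion into a boundary polygon.
import cv2
import numpy as np

image = cv2.imread("dermoscopy_sample.jpg")   # hypothetical file name
rect = (60, 40, 300, 260)                     # (x, y, w, h) box drawn around the lesion

mask = np.zeros(image.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Run a few GrabCut iterations initialised from the bounding box
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as (probable) foreground form the lesion mask
lesion = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

# Convert the mask into a boundary polygon, the format used for segmentation labels
contours, _ = cv2.findContours(lesion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygon = max(contours, key=cv2.contourArea).squeeze()  # (N, 2) boundary points
```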

Although the TrackForward function was originally developed for labelling video frames, it has also accelerated the labelling of many images using AI, as it automatically extracts the labels of similar dermoscopy images. Only complex images with very low contrast, hard-to-detect lesions, or unusual colour tones were labelled manually. In the example below, the lesion is covered by hair, which makes it almost impossible to detect automatically. In such cases, automatic hair removal algorithms can be applied prior to the labelling process.
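As an aside, one widely used classical hair-removal approach (in the spirit of the DullRazor algorithm) detects dark hair strands with a morphological black-hat filter and then inpaints the affected pixels. The sketch below uses a hypothetical file name and hand-tuned kernel size and threshold:

```python
# DullRazor-style hair removal sketch: black-hat filtering + inpainting.
import cv2

image = cv2.imread("hairy_lesion.jpg")        # hypothetical file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Black-hat highlights thin dark structures (hairs) against the brighter skin
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

# Threshold the hair response into a binary mask (threshold chosen by hand)
_, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)

# Fill the masked hair pixels from their neighbourhood
clean = cv2.inpaint(image, hair_mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("hairless_lesion.jpg", clean)
```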

From left to right: Original, SmartPoly Function, SmartPoly Result

From left to right: Manual Labelling and Final Label

In particular, the SmartPoly function was found to be very useful, as it reduces the time spent on labelling by almost half compared to the manual labelling process.

Timely skin cancer diagnosis is critical to avoiding preventable mortality. The diagnostic procedure is usually performed by visual interpretation of dermoscopy images. Due to heavy workloads and lapses in concentration or experience, physicians may misdiagnose cases, which can have fatal consequences. This study aims to address these challenges and, through the use of artificial intelligence, to provide physicians with a second opinion that increases the accuracy of their diagnoses.

One of the main outcomes of this study is the labelled dataset generated for skin lesion segmentation. This dataset will also be used to establish a deep learning-based skin cancer identification and segmentation system for dermoscopy images. We expect the output of the study to provide a backbone for future studies that focus on different types of dermoscopy images.
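To give a sense of how such labels feed into segmentation training, the sketch below rasterises an exported lesion boundary polygon into a binary mask image. The polygon coordinates, image size, and file name are placeholders, since the exact export format depends on how the labels are downloaded from the platform:

```python
# Sketch: rasterising a lesion boundary polygon into a binary training mask.
import cv2
import numpy as np

height, width = 450, 600                      # hypothetical image size
polygon = np.array([[120, 80], [340, 95], [380, 260],
                    [230, 330], [110, 240]], dtype=np.int32)  # placeholder boundary

mask = np.zeros((height, width), dtype=np.uint8)
cv2.fillPoly(mask, [polygon], 255)            # lesion = 255, background = 0
cv2.imwrite("lesion_mask.png", mask)          # used as the segmentation target
```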

——————-

In the next edition of this blog series, Yildiz will summarize their model creation, training, and deployment using the Plainsight vision AI platform.
