Image segmentation assigns a class to every pixel of an image. Classification is the concept most familiar to machine learning engineers, and semantic segmentation is typically framed as classification at the pixel level.
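To make "classification of pixels" concrete, here is a minimal sketch of what a semantic segmentation output looks like: a 2D array where each pixel holds a class index instead of a colour. The class names and indices below are illustrative, not from any particular dataset.

```python
import numpy as np

# Hypothetical 4x4 segmentation mask: each pixel stores a class index.
# 0 = background, 1 = road, 2 = car (labels are illustrative).
mask = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 1],
    [0, 0, 1, 1],
])

# Because the mask is just an integer array, per-class pixel counts
# reduce to a histogram over the mask values.
classes, counts = np.unique(mask, return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))  # {0: 6, 1: 6, 2: 4}
```

This per-pixel-index representation is what segmentation models are trained against, which is why boundary accuracy in the annotation directly bounds model accuracy.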
Challenges and solutions:
* Pixel-level accuracy. Real-world applications demand masks that trace object boundaries exactly (the Carvana challenge, with its car masks, is a well-known example). TaQadam's mobile annotation tool includes precise drawing tools so that object borders, and the areas belonging to each class, are annotated accurately.
* Overlapping objects. Scenes for semantic segmentation – in autonomous driving, for example – contain cars parked next to each other and pedestrians standing close to or in front of vehicles. During annotation we solve this with class indexing: background classes receive lower indices, so overlapping masks can be interpreted correctly.
* Instance segmentation. Most scenarios call for a multi-level tagging system that distinguishes each individual instance of a class (e.g. each car or pedestrian). The TaQadam platform offers the flexibility to build attributes and to attach metadata or descriptive text to each instance.
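The class-indexing idea above can be sketched as an ordered rendering step: masks are painted lowest class index first, so foreground classes overwrite background classes wherever they overlap. The class names, dictionary schema, and `attributes` field below are illustrative assumptions, not TaQadam's actual data model.

```python
import numpy as np

# Illustrative class indices: lower index = further in the background.
CLASS_INDEX = {"background": 0, "road": 1, "car": 2, "pedestrian": 3}

def render(objects, shape):
    """Paint binary object masks in ascending class-index order, so that
    higher-index (foreground) classes win where masks overlap."""
    canvas = np.zeros(shape, dtype=np.uint8)
    for obj in sorted(objects, key=lambda o: CLASS_INDEX[o["label"]]):
        canvas[obj["mask"]] = CLASS_INDEX[obj["label"]]
    return canvas

h, w = 4, 4
road = np.zeros((h, w), dtype=bool); road[2:, :] = True    # bottom half
car = np.zeros((h, w), dtype=bool); car[1:3, 1:3] = True   # overlaps the road
objects = [
    # Per-instance attributes/metadata ride along with each mask.
    {"label": "car", "mask": car, "attributes": {"parked": True}},
    {"label": "road", "mask": road},
]
canvas = render(objects, (h, w))
# Where car and road overlap, the pixel keeps the car's index (2),
# because "road" (index 1) was painted first.
```

Keeping each object as its own record, with a label, a mask, and free-form attributes, is also what makes the step up from semantic to instance segmentation cheap: the instances are already separate before the flat mask is rendered.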