COCO format for semantic segmentation and object detection

Introducing semantic segmentation and object detection using the COCO format.

As you may know, COCO started with a single research paper and a small open source community focused on semantic segmentation and object detection. Recent releases of the growing COCO dataset list familiar faces among the collaborators: Cornell computer vision researcher Serge Belongie, Mapillary, and others.
 
COCO lets you annotate images with polygons and record per-pixel masks for semantic segmentation. It also stores a bounding box for each object, so the same annotation file can drive object detection as well.
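
To make the format concrete, here is a minimal sketch of a COCO-style annotation file written from Python. The file name, ids, and coordinates are illustrative assumptions, not values from any real dataset; the field names (`segmentation`, `bbox`, `area`, `iscrowd`) are the standard COCO keys.

```python
import json

# A minimal sketch of one COCO-style image/annotation pair.
# Ids, file name, and coordinates are made-up illustrative values.
coco = {
    "images": [
        {"id": 1, "file_name": "street_scene.jpg", "width": 640, "height": 480}
    ],
    "categories": [
        {"id": 1, "name": "car", "supercategory": "vehicle"}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # Polygon as a flat [x1, y1, x2, y2, ...] list; one list per polygon.
            "segmentation": [[120.0, 200.0, 260.0, 200.0, 260.0, 310.0, 120.0, 310.0]],
            # Bounding box in [x, y, width, height] pixel coordinates.
            "bbox": [120.0, 200.0, 140.0, 110.0],
            "area": 15400.0,
            "iscrowd": 0,
        }
    ],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```

Because polygons and boxes live side by side in the same annotation, you can defer the choice between segmentation and detection until training time.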

[Figure: image annotation with polygons]

Where the COCO format worked best for us on client projects:

  1. Scene segmentation for robotics (industrial settings) and street-view cameras for autonomous driving or contextual use cases such as traffic management.
  2. Complex ML models where some objects can be handled with off-the-shelf solutions while others need precise labels, and it is not clear up front whether full segmentation or bounding boxes will be enough. COCO records both, as the sketch after this list shows.
  3. Building instance segmentation datasets.
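
A short sketch of how both label types come out of one file, assuming the `annotations.json` written above and the widely used `pycocotools` library (an assumption; the source does not name a specific loader):

```python
from pycocotools.coco import COCO

# Load the annotation file from the earlier sketch (path is an assumption).
coco = COCO("annotations.json")

img_id = coco.getImgIds()[0]
ann_ids = coco.getAnnIds(imgIds=img_id)
for ann in coco.loadAnns(ann_ids):
    # Every annotation carries a bbox; use it when boxes are enough.
    x, y, w, h = ann["bbox"]
    # annToMask rasterizes the polygon into a binary per-pixel mask
    # for the cases that need full segmentation.
    binary_mask = coco.annToMask(ann)
    print(ann["category_id"], (x, y, w, h), binary_mask.sum())
```

The same loop also covers instance segmentation: each annotation is one object instance, so masks never merge objects of the same class.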

Semantic segmentation and object detection

At Taqadam, we embrace the open source community, especially for multi-format geo-tagging, machine learning on geospatial data, and imagery projects. COCO-based annotation, together with support for other formats, has allowed us to serve our clients better.
Whether you use YOLO or open source datasets from COCO or Kaggle to optimize your machine learning model, you can bring pre-trained weights into annotation in the Taqadam portal.
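
If you move COCO labels into a YOLO pipeline, note that the two encode boxes differently: COCO uses absolute [x, y, width, height] pixels, while YOLO expects normalized [x_center, y_center, width, height]. A minimal conversion sketch (function name and example values are our own):

```python
def coco_bbox_to_yolo(bbox, img_w, img_h):
    """Convert a COCO [x, y, width, height] box in absolute pixels
    to YOLO's normalized [x_center, y_center, width, height]."""
    x, y, w, h = bbox
    return [
        (x + w / 2) / img_w,   # normalized center x
        (y + h / 2) / img_h,   # normalized center y
        w / img_w,             # normalized width
        h / img_h,             # normalized height
    ]

# Example: the car box from the first sketch on a 640x480 image.
print(coco_bbox_to_yolo([120.0, 200.0, 140.0, 110.0], 640, 480))
# -> [0.296875, 0.53125, 0.21875, 0.22916666666666666]
```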
