Press Release

Scale: Sensor Fusion

Scale Launches Sensor Fusion Annotation API for LIDAR and RADAR to Accelerate the Development of Autonomous Vehicles

[email protected]

By combining human intelligence with machine learning, Scale delivers
pixel-perfect annotations and highly accurate training data for self-driving cars

SAN FRANCISCO, CA – Scale API (Scale.ai) launched its Sensor Fusion Annotation API for LIDAR and RADAR point cloud data, which accelerates the development of perception algorithms for autonomous vehicles. Dozens of leading automobile OEMs and self-driving car companies (such as GM Cruise and Voyage) already use Scale’s comprehensive Image Annotation APIs to produce premium training datasets for their computer vision algorithms.

Scale leverages machine learning, statistical models and human-generated data to deliver best-in-class object recognition, capable of accurately analyzing millions of camera images, LIDAR frames, and RADAR data each month. With Scale’s robust QA processes, humans and machines work in perfect harmony to keep costs low and quality high.

The combination of human and artificial intelligence results in rigorously tested training data that helps autonomous vehicles learn more quickly to navigate independently while accurately identifying road markers, vehicles, and other objects in real time.

Developers can leverage the Sensor Fusion and Image Annotation APIs to tap into several types of annotations:

  • LIDAR/RADAR Annotation: Identifies objects in a 3D point cloud and draws bounding cuboids around the specified objects, returning the positions and sizes of these boxes.
  • Semantic Segmentation: Classifies every pixel of an image according to the labels provided to return a full semantic, pixel-wise, and dense segmentation of the image.
  • Polygon Annotation: Identifies objects (such as vehicles, pedestrians, cyclists, and more) and draws bounding polygons around the specified objects, returning the vertices of these polygons.
  • Bounding Box Annotation: Identifies objects and draws 2D bounding boxes around the specified objects, returning the vertices of these boxes.
  • Line Annotation: Identifies the different features of a road, such as lane lines, and draws segmented lines along each object, returning the vertices of these segmented lines.
  • Point Annotation: Identifies the location of objects and draws points at specified locations, returning the locations of these points.
  • Cuboid Annotation: Identifies objects and draws perspective 3D cuboids around the specified objects in camera images, returning the positions and sizes of these boxes.
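In practice, a developer submits annotation tasks of these types through a REST API by specifying the sensor data to label, the object classes of interest, and a callback for results. The sketch below shows what assembling such a request might look like; the endpoint name, field names, and label set are illustrative assumptions for this example, not Scale's documented API.

```python
import json

def build_lidar_task(attachment_urls, labels, callback_url):
    """Assemble the JSON body for a hypothetical point-cloud annotation task.

    Field names here ("task_type", "attachments", etc.) are assumptions
    made for illustration; consult the actual API reference for the
    real request schema.
    """
    return {
        "task_type": "lidarannotation",   # assumed task-type identifier
        "attachments": attachment_urls,   # URLs of LIDAR point-cloud frames
        "attachment_type": "json",
        "labels": labels,                 # object classes to annotate
        "callback_url": callback_url,     # where completed cuboids are POSTed
    }

payload = build_lidar_task(
    ["https://example.com/frames/0001.json"],
    ["car", "pedestrian", "cyclist"],
    "https://example.com/scale-callback",
)
print(json.dumps(payload, indent=2))
```

Once the task completes, the annotation service would POST the resulting cuboid positions and sizes back to the supplied callback URL.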

“The need for training data for self-driving cars is rapidly growing," said Alexandr Wang, CEO of Scale. “We strive to help our customers scale up their training data needs while maintaining quality, thereby becoming a core part of their AI infrastructure. I’m truly excited to see how Scale will enable the future of the autonomous vehicle space, and AI applications more broadly.”

Scale’s comprehensive training data and advanced annotation tools will give manufacturers what they need to meet the demands of the 380 million self-driving cars projected to be on the road by 2030.

Video demo of Scale Sensor Fusion API: https://www.youtube.com/watch?v=43lxAFK9l3A&feature=youtu.be

To learn more about Scale’s Sensor Fusion Annotation API for autonomous vehicles, please visit
https://scale.ai/sensor-fusion-annotation

About Scale

Scale combines the efficiency of machine learning with the versatility of human intelligence, providing a simple and easy-to-use platform for training data generation. By providing impeccable quality, high throughput, and cost efficiency, Scale is becoming a core part of the AI infrastructure of customers including GM Cruise and Voyage. Scale is headquartered in San Francisco and backed by Accel and Y Combinator. For more information, please visit Scale.ai.