
nuScenes by Aptiv

Large-scale open source dataset for autonomous driving.

Overview

Support for computer vision and autonomous driving research

nuScenes is an initiative intended to support research that further advances the mobility industry.

With this goal in mind, the dataset comprises 1000 scenes collected in Boston and Singapore, making it the largest multi-sensor dataset for autonomous vehicles.

It features:

  • Full sensor suite: 1x LiDAR, 5x RADAR, 6x camera, IMU, GPS
  • 1000 scenes of 20s each
  • 1,440,000 camera images
  • 400,000 LiDAR sweeps
  • Two diverse cities: Boston and Singapore
  • Left-hand versus right-hand traffic
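
These headline counts follow directly from the scene length and the capture rates listed under Car Setup below; a quick back-of-the-envelope check in Python:

    # Back-of-the-envelope check of the totals above, using the capture
    # rates given under Car Setup (6 cameras at 12 Hz, 1 LiDAR at 20 Hz).
    scenes, seconds = 1000, 20
    images = scenes * seconds * 12 * 6   # 1,440,000 camera images
    sweeps = scenes * seconds * 20       # 400,000 LiDAR sweeps
    print(images, sweeps)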
Spatial distribution of keyframes
nuTonomy car
Data Collection

Careful scene planning by Aptiv

For the dataset, Aptiv managed the collection of data, carefully choosing to capture challenging scenarios and a diversity of locations, times and weather conditions.

Collected in Boston's Seaport and Singapore's One North, Queenstown and Holland Village districts, each of the 1000 scenes in the dataset was manually selected.

Car Setup

Two identical cars with identical sensor layouts were used to drive in Boston and Singapore. Refer to the image below for the placement of the sensors:

    6x Camera
    • 12Hz capture frequency
    • 1/1.8'' CMOS sensor with 1600x1200 resolution
    • Bayer8 format for 1 byte per pixel encoding
    • A 1600x900 ROI is cropped from the original resolution to reduce processing and transmission bandwidth
    • Auto exposure, with exposure time limited to a maximum of 20 ms
    • Images are unpacked to BGR format and compressed to JPEG
    • See camera orientation and overlap in the figure below

    1x Spinning LiDAR
    • 20Hz capture frequency
    • 32 channels
    • 360° horizontal FOV, +10° to -30° vertical FOV
    • 80m-100m range, usable returns up to 70 m, ±2 cm accuracy
    • Up to ~1.39 million points per second

    5x Long-range RADAR
    • 77GHz
    • 13Hz capture frequency
    • Independently measures distance and velocity in one cycle using Frequency Modulated Continuous Wave (FMCW)
    • Up to 250m range
    • Velocity accuracy of ±0.1 km/h
Sensor extrinsic coordinates

Sensor calibration

Careful calibration of the extrinsics and intrinsics of every sensor is critical to achieving a high-quality dataset. For nuScenes, extrinsic coordinates are expressed relative to the ego frame (i.e., the midpoint of the rear vehicle axle).

LiDAR extrinsics
We use a laser liner to accurately measure the relative location of the LiDAR with respect to the ego frame.

Camera extrinsics
We place a cube-shaped calibration target in front of the camera and LiDAR sensors. The calibration target consists of three orthogonal planes with known patterns. After detecting the patterns, we compute the transformation matrix from camera to LiDAR by aligning the planes of the calibration target. Given the LiDAR-to-ego transformation computed above, we can then derive the camera-to-ego transformation and the resulting extrinsic parameters, as sketched below.
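
A minimal sketch of that chaining step, with hypothetical matrices standing in for the real calibration results:

    # Compose camera-to-LiDAR and LiDAR-to-ego into camera-to-ego.
    # All numbers below are illustrative, not the actual calibration values.
    import numpy as np

    def make_transform(rotation, translation):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    T_lidar_to_ego = make_transform(np.eye(3), np.array([0.94, 0.0, 1.84]))
    T_cam_to_lidar = make_transform(
        np.array([[0, 0, 1], [-1, 0, 0], [0, -1, 0]]),  # optical axes -> LiDAR axes
        np.array([0.0, 0.3, -0.4]),
    )

    T_cam_to_ego = T_lidar_to_ego @ T_cam_to_lidar

    p_cam = np.array([1.0, 0.0, 2.0, 1.0])  # homogeneous point in the camera frame
    p_ego = T_cam_to_ego @ p_cam            # the same point in the ego frame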
RADAR extrinsics
We mount the radar in a horizontal position, then collect radar measurements by driving in an urban environment. After filtering out returns from moving objects, we calibrate the yaw angle with a brute-force search that minimizes the compensated range rates of static objects (see the sketch below).
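
A hedged sketch of that brute-force search on synthetic detections (the detection model, grid resolution, and constant ego speed are assumptions, not the production pipeline):

    # For static objects, the ego-motion-compensated range rate should be
    # zero when the radar yaw is correct; scan candidate yaws and keep the
    # one with the smallest residual. All data below is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    true_yaw = np.deg2rad(1.5)   # unknown mounting offset to recover
    ego_speed = 10.0             # m/s, assumed constant forward motion

    az_vehicle = rng.uniform(-np.pi / 3, np.pi / 3, size=500)
    range_rate = -ego_speed * np.cos(az_vehicle)  # true range rate of static objects
    az_radar = az_vehicle - true_yaw              # azimuth the mis-yawed radar reports

    def residual(yaw):
        """Sum of squared compensated range rates for a candidate yaw."""
        compensated = range_rate + ego_speed * np.cos(az_radar + yaw)
        return float(np.sum(compensated ** 2))

    candidates = np.deg2rad(np.linspace(-5, 5, 2001))  # 0.005 degree grid
    best = min(candidates, key=residual)
    print(f"estimated yaw offset: {np.rad2deg(best):.3f} deg")  # ~1.500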
Camera intrinsic calibration
We use a calibration target board with a known set of patterns to infer the intrinsic and distortion parameters of the camera; a representative workflow is sketched below.
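
As an illustration of what such target-based intrinsic calibration looks like in practice, here is the standard OpenCV checkerboard workflow; the nuScenes target pattern is not specified here, so the board geometry and image folder are placeholders:

    # Standard checkerboard intrinsic calibration with OpenCV (a sketch,
    # not the actual nuScenes procedure).
    import glob

    import cv2
    import numpy as np

    BOARD = (9, 6)  # inner-corner count of an assumed checkerboard target
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

    obj_points, img_points, img_size = [], [], None
    for path in glob.glob('calib_images/*.jpg'):  # placeholder image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            img_size = gray.shape[::-1]

    # Recovers the 3x3 camera matrix and the lens distortion coefficients.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, img_size, None, None)
    print("camera matrix:\n", K, "\ndistortion:", dist.ravel())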

Sensor Synchronization

To achieve cross-modality data alignment between the LiDAR and the cameras, the exposure of a camera is triggered when the top LiDAR sweeps across the center of the camera's FOV. This method was selected because it generally yields good data alignment.

Note that the cameras run at 12Hz while the LiDAR runs at 20Hz. The 12 camera exposures are spread as evenly as possible across the 20 LiDAR scans, so not all LiDAR scans have a corresponding camera frame.

Reducing the frame rate of the cameras to 12Hz helps to reduce the compute, bandwidth and storage requirements of the perception system.
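
The timestamp bookkeeping this implies is straightforward; a minimal sketch of pairing each camera frame with its nearest LiDAR sweep (the timestamps below are synthetic; in the released dataset they are stored on the sample_data records):

    # Pair each 12 Hz camera frame with the closest 20 Hz LiDAR sweep.
    import bisect

    lidar_ts = [i / 20.0 for i in range(400)]  # 20 Hz LiDAR over a 20 s scene
    cam_ts = [i / 12.0 for i in range(240)]    # 12 Hz camera over the same scene

    def nearest_sweep(t, sweeps):
        """Index of the LiDAR sweep closest in time to t (sweeps sorted)."""
        i = bisect.bisect_left(sweeps, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sweeps)]
        return min(candidates, key=lambda j: abs(sweeps[j] - t))

    matched = {nearest_sweep(t, lidar_ts) for t in cam_ts}
    print(f"{len(matched)} of {len(lidar_ts)} sweeps have a camera frame")
    # With 12 exposures per 20 sweeps, 240 of the 400 sweeps get a frame.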

Flythrough of the nuScenes teaser
Data Annotation

Scale and Aptiv Partnership

As Aptiv's partner in developing nuScenes, Scale contributed in two areas. The first is data annotation: deciding on the taxonomy, developing instructions for labelers, and managing QA.

The second is Scale's web-based visualizer for LiDAR and camera data, used to explore the dataset. The visualizer allows point cloud data to be easily embedded into any webpage and shared.

Tutorial

Get started with nuScenes

Ready to get started with nuScenes? This tutorial will give you an overview of the dataset without the need to download it. Please note that this page is a rendered version of a Jupyter Notebook.
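
When you are ready to work with the data locally, the first steps with the official nuscenes-devkit look roughly like this (a sketch assuming pip install nuscenes-devkit and the v1.0-mini split; the dataroot path is a placeholder):

    # Load the mini split and render the first sample of the first scene.
    from nuscenes.nuscenes import NuScenes

    nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes',
                    verbose=True)

    nusc.list_scenes()                    # one-line summary of every scene
    my_scene = nusc.scene[0]              # scene records are plain dicts
    first_sample = nusc.get('sample', my_scene['first_sample_token'])
    nusc.render_sample(first_sample['token'])  # draw all sensor data at once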