Create Image Annotation Task

This is the recommended task type for annotating images with vector geometric shapes. The available geometries are box, polygon, line, point, cuboid, and ellipse. This endpoint creates an imageannotation task: given an image, Scale will annotate it with the geometries you specify. The required parameters for this task are attachment and geometries.

Body Params

project (string)

The name of the project to associate this task with.

batch (string)

The name of the batch to associate this task with. Note that if a batch is specified, you need not specify the project, as the task will automatically be associated with the batch's project. For Scale Rapid projects, specifying a batch is required. See the Batches section for more details.

instruction (string)

A markdown-enabled string or iframe-embedded Google Doc explaining how to do the task. You can use markdown to show example images, give structure to your instructions, and more. See our instruction best practices for more details. For Scale Rapid projects, DO NOT set this field unless you specifically want to override the project-level instructions.

callback_url (string)

The full URL (including the scheme http:// or https://) or email address of the callback that will be used when the task is completed.

attachment (string, required)

A URL to the image you'd like to be annotated.

context_attachments (array of objects)

An array of objects in the form {"attachment": "<link to actual attachment>"} shown to taskers as a reference. Context images themselves cannot be labeled; they appear alongside the task attachment in the UI. You cannot use the task's attachment URL as a context attachment's URL.
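The rule above (a context attachment may not reuse the task's own attachment URL) can be checked client-side before submission. A minimal sketch; the helper name is ours, not part of the API:

```python
def build_context_attachments(task_attachment, context_urls):
    """Build the context_attachments array, rejecting any URL that
    duplicates the task's own attachment (the API disallows this)."""
    if task_attachment in context_urls:
        raise ValueError("context attachment may not reuse the task attachment URL")
    return [{"attachment": url} for url in context_urls]

payload_fragment = build_context_attachments(
    "http://i.imgur.com/v4cBreD.jpg",
    ["https://example.com/ref1.jpg", "https://example.com/ref2.jpg"],
)
# payload_fragment == [{"attachment": "https://example.com/ref1.jpg"},
#                      {"attachment": "https://example.com/ref2.jpg"}]
```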

geometries (object, required)

This object defines which objects need to be annotated and which annotation geometries (box, polygon, line, point, cuboid, or ellipse) should be used for each annotation. Further description of each geometry can be found in its respective section below.

annotation_attributes (object)

This field is used to add additional attributes that you would like to capture per annotation. See Annotation Attributes for more details about annotation attributes.

links (object)

Use this field to define links between annotations. See Links for more details about links.

hypothesis (object)

Editable annotations that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Must contain the annotations field, which has the same format as the annotations field in the response.

layer (object)

Read-only annotations to be pre-drawn on the task. See the Layers section for more details.

base_annotations (object)

Editable annotations, with the option to be "locked", that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Must contain the annotations field, which has the same format as the annotations field in the response.

can_add_base_annotations (boolean)

Whether new annotations can be added to the task when base_annotations are used. If set to true, new annotations can be added in addition to base_annotations. If set to false, new annotations cannot be added to the task.

can_edit_base_annotations (boolean)

Whether base_annotations can be edited in the task. If set to true, taskers can edit base_annotations (annotation position, attributes, etc.). If set to false, all aspects of base_annotations are locked.

can_edit_base_annotation_labels (boolean)

Whether base_annotations labels can be edited in the task. If set to true, taskers can edit the label of base_annotations. If set to false, the label is locked.

can_delete_base_annotations (boolean)

Whether base_annotations can be removed from the task. If set to true, base_annotations can be deleted from the task. If set to false, base_annotations cannot be deleted.
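Taken together, the four can_* flags control how "locked" the prelabels are. A hedged sketch of a payload fragment: the annotation fields shown mirror the label/left/top/width/height keys of the box response format, but treat the exact shape as an assumption to verify against your project's responses:

```python
# Prelabels that taskers may adjust but not relabel or delete.
base_annotation_fields = {
    "base_annotations": {
        "annotations": [
            {
                "label": "Big Cow",      # assumed response-format keys
                "type": "box",
                "left": 120, "top": 80,
                "width": 60, "height": 40,
            }
        ]
    },
    "can_add_base_annotations": True,          # taskers may draw new boxes
    "can_edit_base_annotations": True,         # may move/resize the prelabel
    "can_edit_base_annotation_labels": False,  # label stays "Big Cow"
    "can_delete_base_annotations": False,      # prelabel cannot be removed
}
```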

image_metadata (object)

This field accepts specified image metadata. Supported fields include: date_time (displays the date and time the image was taken), resolution (configures the units of the ruler tools; resolution_ratio holds the number of resolution_units corresponding to one pixel, so a ratio of 3 with units of meters means one pixel in the image corresponds to three meters in the real world), and location (the real-world location where this image was captured, in the standard geographic coordinate system).
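As a concrete illustration of the fields above; the values are hypothetical, and the exact nesting of the resolution keys is an assumption based on the field names in the description:

```python
image_metadata = {
    "date_time": "2021-06-01T14:30:00Z",  # when the image was taken
    "resolution": {
        # one pixel corresponds to three meters in the real world
        "resolution_ratio": 3,
        "resolution_units": "meters",
    },
    # standard geographic coordinates of the capture location
    "location": {"latitude": 37.7749, "longitude": -122.4194},
}
```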

metadata (object)

A set of key/value pairs that you can attach to a task object. It can be useful for storing additional information about the task in a structured format. Max 10KB. See the Metadata section for more detail.

padding (integer)

The amount of padding in pixels added to the top, bottom, left, and right of the image. This allows labelers to extend annotations outside of the image. When padding is used, annotation coordinates can be negative or greater than the width/height of the image. See the visual example.

paddingX (integer)

The amount of padding in pixels added to the left and right of the image. Overrides padding if set.

paddingY (integer)

The amount of padding in pixels added to the top and bottom of the image. Overrides padding if set.
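Padding enlarges the drawable canvas, so annotation coordinates may fall outside the [0, width) x [0, height) range of the original image. A small sketch of the arithmetic (the helper names are ours):

```python
def canvas_size(width, height, padding=0, paddingX=None, paddingY=None):
    """Drawable canvas size after padding; paddingX/paddingY override padding."""
    px = padding if paddingX is None else paddingX
    py = padding if paddingY is None else paddingY
    return width + 2 * px, height + 2 * py

def outside_image(x, y, width, height):
    """True if an annotation coordinate lies in the padded margin."""
    return x < 0 or y < 0 or x >= width or y >= height

print(canvas_size(1920, 1080, padding=50))  # (2020, 1180)
print(outside_image(-10, 40, 1920, 1080))   # True: in the left margin
```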

priority (integer)

A value of 10, 20, or 30 that defines the priority of a task within a project. The higher the number, the higher the priority.

unique_id (string)

An arbitrary ID that you can assign to a task and then query for later. This ID must be unique across all projects under your account; otherwise the task submission will be rejected. See Avoiding Duplicate Tasks for more details.

clear_unique_id_on_error (boolean)

If set to true and a task errors out after being submitted, the unique_id on the task will be unset. This parameter allows workflows in which you re-submit the same unique_id to recover from errors automatically.
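clear_unique_id_on_error enables a simple "submit, and on task error resubmit with the same ID" loop. A stdlib-only sketch with the submission function injected so no real API call is made; all names here are ours, not part of the Scale SDK:

```python
def submit_with_retry(create_task, payload, max_attempts=3):
    """Retry task creation with the same unique_id. Relies on
    clear_unique_id_on_error=True so a failed task frees its unique_id."""
    payload = dict(payload, clear_unique_id_on_error=True)
    last_err = None
    for _ in range(max_attempts):
        try:
            return create_task(**payload)
        except RuntimeError as err:  # stand-in for an API/task error
            last_err = err
    raise last_err

attempts = []
def flaky_create(**payload):
    """Fake submitter: fails once, then succeeds."""
    attempts.append(payload["unique_id"])
    if len(attempts) < 2:
        raise RuntimeError("transient task error")
    return {"task_id": "abc123"}

result = submit_with_retry(flaky_create, {"unique_id": "c235d023af73"})
# result == {"task_id": "abc123"}; the same unique_id was sent twice
```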

tags (array of strings)

Arbitrary labels that you can assign to a task. At most 5 tags are allowed per task. You can query tasks with specific tags through the task retrieval API.
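Tags can later be used to filter the task list. The sketch below only builds the query URL (no network call); we assume the /v1/tasks retrieval endpoint accepts a tags query parameter, as described in the task retrieval docs:

```python
from urllib.parse import urlencode

base = "https://api.scale.com/v1/tasks"
query = urlencode({"tags": "training-set", "project": "test_project"})
url = f"{base}?{query}"
print(url)
# https://api.scale.com/v1/tasks?tags=training-set&project=test_project
```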

Request

POST /v1/task/imageannotation

import requests

url = "https://api.scale.com/v1/task/imageannotation"

payload = {
    "project": "test_project",
    "callback_url": "http://www.example.com/callback",
    "instruction": "Draw a box around each baby cow and big cow.",
    "attachment_type": "image",
    "attachment": "http://i.imgur.com/v4cBreD.jpg",
    "unique_id": "c235d023af73",
    "geometries": {
        "box": {
            "objects_to_annotate": ["Baby Cow", "Big Cow"],
            "min_height": 10,
            "min_width": 10,
        }
    },
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
}

# Scale authenticates with HTTP Basic auth: API key as username, empty password.
response = requests.post(url, json=payload, headers=headers, auth=("YOUR_API_KEY", ""))

print(response.text)
POST /v1/task/imageannotation

import scaleapi
from scaleapi.tasks import TaskType
from scaleapi.exceptions import ScaleDuplicateResource

client = scaleapi.ScaleClient("YOUR_API_KEY")

payload = dict(
    project="test_project",
    callback_url="http://www.example.com/callback",
    instruction="Draw a box around each baby cow and big cow.",
    attachment_type="image",
    attachment="http://i.imgur.com/v4cBreD.jpg",
    unique_id="c235d023af73",
    geometries={
        "box": {
            "objects_to_annotate": ["Baby Cow", "Big Cow"],
            "min_height": 10,
            "min_width": 10,
        }
    },
)

try:
    client.create_task(TaskType.ImageAnnotation, **payload)
except ScaleDuplicateResource as err:
    print(err.message)  # If unique_id is already used for a different task

Response

{
  "task_id": "string",
  "created_at": "string",
  "type": "imageannotation",
  "status": "pending",
  "instruction": "string",
  "is_test": false,
  "urgency": "standard",
  "metadata": {},
  "project": "string",
  "callback_url": "string",
  "updated_at": "string",
  "work_started": false,
  "params": {
    "attachment": "582bfe0ee5d51cda4e903f4a",
    "geometries": {
      "box": {
        "objects_to_annotate": [
          null
        ]
      }
    },
    "is_test": true,
    "metadata": "string"
  }
}

Create Semantic Segmentation Annotation Task

This endpoint creates a segmentannotation task. In this task, one of our labelers will view the given image and classify pixels in the image according to the labels provided, producing a semantic, pixel-wise, dense segmentation of the image. We also support instance-aware semantic segmentation, also called panoptic segmentation, via LabelDescription objects.

The required parameters for this task are attachment and labels. The attachment is a URL to the image you'd like to be segmented. labels is an array of strings or LabelDescription objects describing the different types of objects you'd like to segment the image with.

You can optionally provide additional markdown-enabled or Google Doc-based instructions via the instruction parameter. You can also optionally set allow_unlabeled to true, which allows unlabeled pixels in the task response; otherwise, all pixels in the image will be classified, in which case it's important that there are labels for everything in the image, to avoid misclassification.

The response will be a series of images in which each pixel's value corresponds to a label, either via a numerical index or a color mapping. You will also get separate masks for each label for convenience.

If the request is successful, Scale will return the generated task object, at which point you should store the task_id to have a permanent reference to the task.
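The combined indexed response can be post-processed by mapping pixel values back to labels. A pure-Python sketch using a tiny 2x3 grid in place of the real mask; in practice you would read pixel values from the returned PNG, and the index-to-label mapping comes from the task response (the mapping shown here is illustrative):

```python
# Index 0 = unlabeled, 1 = "road", 2 = "car" (illustrative mapping).
label_for_index = {0: "unlabeled", 1: "road", 2: "car"}

indexed_mask = [
    [1, 1, 2],
    [0, 1, 2],
]

# Count pixels per label, as you might when auditing a segmentation.
counts = {}
for row in indexed_mask:
    for idx in row:
        label = label_for_index[idx]
        counts[label] = counts.get(label, 0) + 1

print(counts)  # {'road': 3, 'car': 2, 'unlabeled': 1}
```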

Body Params

project (string)

The name of the project to associate this task with.

batch (string)

The name of the batch to associate this task with. Note that if a batch is specified, you need not specify the project, as the task will automatically be associated with the batch's project. For Scale Rapid projects, specifying a batch is required. See the Batches section for more details.

instruction (string)

A markdown-enabled string or iframe-embedded Google Doc explaining how to do the segmentation. You can use markdown to show example images, give structure to your instructions, and more. See our instruction best practices for more details. For Scale Rapid projects, DO NOT set this field unless you specifically want to override the project-level instructions.

callback_url (string)

The full URL (including the scheme http:// or https://) or email address of the callback that will be used when the task is completed.

attachment (string, required)

A URL to the image you'd like to be segmented.

attachment_type (string)

Describes what type of file the attachment is. We currently only support image for segmentannotation tasks.

labels (array of strings, required)

An array of strings or LabelDescription objects describing the different types of objects used to segment the image. You may include at most 50 labels.

annotation_attributes (object)

This field is used to add additional attributes that you would like to capture per annotation. This only applies to instance annotations. See Annotation Attributes for more details about annotation attributes.

allow_unlabeled (boolean)

Whether this image can be completed without every pixel being labeled.

hypothesis (object)

Editable annotations that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Review the Segmentation Hypothesis Format for more details.

metadata (object)

A set of key/value pairs that you can attach to a task object. It can be useful for storing additional information about the task in a structured format. Max 10KB. See the Metadata section for more detail.

context_attachments (array of objects)

An array of objects in the form {"attachment": "<link to actual attachment>"} shown to taskers as a reference. Context images themselves cannot be labeled; they appear alongside the task attachment in the UI. You cannot use the task's attachment URL as a context attachment's URL.

unique_id (string)

An arbitrary ID that you can assign to a task and then query for later. This ID must be unique across all projects under your account; otherwise the task submission will be rejected. See Avoiding Duplicate Tasks for more details.

clear_unique_id_on_error (boolean)

If set to true and a task errors out after being submitted, the unique_id on the task will be unset. This parameter allows workflows in which you re-submit the same unique_id to recover from errors automatically.

tags (array of strings)

Arbitrary labels that you can assign to a task. At most 5 tags are allowed per task. You can query tasks with specific tags through the task retrieval API.

Request

POST /v1/task/segmentannotation

import requests

url = "https://api.scale.com/v1/task/segmentannotation"

payload = {
    "instruction": "**Instructions:** Please label all the things",
    "attachment": "https://i.imgur.com/iDZcXfS.png",
    "attachment_type": "image",
    "annotation_attributes": { "newKey": {
            "type": "type",
            "description": "description",
            "choices": ["choice 1", "choice 2"],
            "conditions": {
                "label_condition": ["car", "car2"],
                "attribute_conditions": {
                    "newKey": "New Value",
                    "newKey-1": "New Value"
                }
            }
        } },
    "allow_unlabeled": False,
    "metadata": {
        "newKey": "New Value",
        "newKey-1": "New Value"
    },
    "project": "Project Name",
    "batch": "Batch Name",
    "callback_url": "http://www.example.com/callback",
    "labels": ["vehicle", "vehicle 2", "vehicle 3"],
    "context_attachments": [{ "attachment": "attachment" }, { "attachment": "attachment2" }],
    "unique_id": "unique_id",
    "clear_unique_id_on_error": True,
    "tags": ["tag", "tag2"]
}
headers = {
    "accept": "application/json",
    "content-type": "application/json"
}

# Scale authenticates with HTTP Basic auth: API key as username, empty password.
response = requests.post(url, json=payload, headers=headers, auth=("YOUR_API_KEY", ""))

print(response.text)
POST /v1/task/segmentannotation

import scaleapi
from scaleapi.tasks import TaskType
from scaleapi.exceptions import ScaleDuplicateResource

client = scaleapi.ScaleClient("YOUR_API_KEY")

payload = {
    "instruction": "**Instructions:** Please label all the things",
    "attachment": "https://i.imgur.com/iDZcXfS.png",
    "attachment_type": "image",
    "annotation_attributes": { "newKey": {
            "type": "type",
            "description": "description",
            "choices": ["choice 1", "choice 2"],
            "conditions": {
                "label_condition": ["car", "car2"],
                "attribute_conditions": {
                    "newKey": "New Value",
                    "newKey-1": "New Value"
                }
            }
        } },
    "allow_unlabeled": False,
    "metadata": {
        "newKey": "New Value",
        "newKey-1": "New Value"
    },
    "project": "Project Name",
    "batch": "Batch Name",
    "callback_url": "http://www.example.com/callback",
    "labels": ["vehicle", "vehicle 2", "vehicle 3"],
    "context_attachments": [{ "attachment": "attachment" }, { "attachment": "attachment2" }],
    "unique_id": "unique_id",
    "clear_unique_id_on_error": True,
    "tags": ["tag", "tag2"]
}

try:
    client.create_task(TaskType.SegmentAnnotation, **payload)
except ScaleDuplicateResource as err:
    print(err.message)  # If unique_id is already used for a different task

Response

{
  "task_id": "string",
  "created_at": "string",
  "type": "segmentannotation",
  "status": "pending",
  "instruction": "string",
  "is_test": false,
  "urgency": "standard",
  "metadata": {},
  "project": "string",
  "callback_url": "string",
  "updated_at": "string",
  "work_started": false,
  "params": {
    "allow_unlabeled": false,
    "labels": [
      null
    ],
    "instance_labels": [
      null
    ],
    "attachment_type": "image",
    "attachment": "https://i.imgur.com/SudOKhq.jpg"
  }
}

Create General Video Annotation Task

This endpoint creates a videoannotation task. Given a series of images sampled from a video (which we will refer to as "frames"), Scale will annotate each frame with the Geometries (box, polygon, line, point, cuboid, and ellipse) you specify.

The required parameter for this task is geometries.

You can optionally provide additional markdown-enabled or Google Doc-based instructions via the instruction parameter.

You may also optionally specify events_to_annotate, a list of strings describing the events to annotate in the video.

If the request is successful, Scale will return the generated task object, at which point you should store the task_id to have a permanent reference to the task.
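Storing the task_id from the create-task response might look like the following; the raw string below stands in for response.text from a real API reply:

```python
import json

# Stand-in for response.text from the create-task call (illustrative values).
raw = '{"task_id": "5f4d3c2b1a", "type": "videoannotation", "status": "pending"}'

task = json.loads(raw)
task_id = task["task_id"]  # keep this as your permanent reference to the task
print(task_id)  # 5f4d3c2b1a
```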

Body Params

project (string)

The name of the project to associate this task with.

batch (string)

The name of the batch to associate this task with. Note that if a batch is specified, you need not specify the project, as the task will automatically be associated with the batch's project. For Scale Rapid projects, specifying a batch is required. See the Batches section for more details.

instruction (string)

A markdown-enabled string or iframe-embedded Google Doc explaining how to do the task. You can use markdown to show example images, give structure to your instructions, and more. See our instruction best practices for more details. For Scale Rapid projects, DO NOT set this field unless you specifically want to override the project-level instructions.

callback_url (string)

The full URL (including the scheme http:// or https://) or email address of the callback that will be used when the task is completed.

attachments (array of strings)

An array of URLs for the frames you'd like to be annotated. These image frames are stitched together to create a video. This is required if attachment_type is image and must be omitted if attachment_type is video.

attachment (string)

A URL pointing to the video file attachment. Only the mp4, webm, and ogg formats are supported.

attachment_type (string)

Describes what type of file the attachment(s) are. The only options are image and video.

geometries (object, required)

An object mapping box, polygon, line, point, cuboid, or ellipse to Geometry objects.

annotation_attributes (object)

See the Annotation Attributes section for more details about annotation attributes.

events_to_annotate (array of strings)

The list of events to annotate.

links (object)

Use this field to define links between annotations. See Links for more details about links.

frame_rate (int32)

The number of frames per second to annotate.

padding (int32)

The amount of padding in pixels added to the top, bottom, left, and right of each video frame. This allows labelers to extend annotations outside of the frames.

paddingX (int32)

The amount of padding in pixels added to the left and right of each video frame. Overrides padding if set.

paddingY (int32)

The amount of padding in pixels added to the top and bottom of each video frame. Overrides padding if set.

hypothesis (object)

Editable annotations that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Review the Segmentation Hypothesis Format for more details.

base_annotations (object)

Editable annotations, with the option to be "locked", that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Must contain the annotations field, which has the same format as the annotations field in the response.

can_add_base_annotations (boolean)

Whether new annotations can be added to the task when base_annotations are used. If set to true, new annotations can be added in addition to base_annotations. If set to false, new annotations cannot be added to the task.

can_edit_base_annotations (boolean)

Whether base_annotations can be edited in the task. If set to true, taskers can edit base_annotations (annotation position, attributes, etc.). If set to false, all aspects of base_annotations are locked.

can_edit_base_annotation_labels (boolean)

Whether base_annotations labels can be edited in the task. If set to true, taskers can edit the label of base_annotations. If set to false, the label is locked.

can_delete_base_annotations (boolean)

Whether base_annotations can be removed from the task. If set to true, base_annotations can be deleted from the task. If set to false, base_annotations cannot be deleted.

metadata (object)

A set of key/value pairs that you can attach to a task object. It can be useful for storing additional information about the task in a structured format. Max 10KB.

priority (int32)

A value of 10, 20, or 30 that defines the priority of a task within a project. The higher the number, the higher the priority.

unique_id (string)

An arbitrary ID that you can assign to a task and then query for later. This ID must be unique across all projects under your account; otherwise the task submission will be rejected. See Avoiding Duplicate Tasks for more details.

clear_unique_id_on_error (boolean)

If set to true and a task errors out after being submitted, the unique_id on the task will be unset. This parameter allows workflows in which you re-submit the same unique_id to recover from errors automatically.

tags (array of strings)

Arbitrary labels that you can assign to a task. At most 5 tags are allowed per task. You can query tasks with specific tags through the task retrieval API.

Request

POST /v1/task/videoannotation

import requests

url = "https://api.scale.com/v1/task/videoannotation"

payload = {
    "instruction": "**Instructions:** Please label all the things",
    "attachments": ["https://static.scale.com/scaleapi-lidar-images/2011_09_29_drive_0071_sync/image_02/data/0000000005.png", "https://static.scale.com/scaleapi-lidar-images/2011_09_29_drive_0071_sync/image_02/data/0000000008.png"],
    "attachment_type": "image",
    "geometries": {
        "box": {
            "min_height": 10,
            "min_width": 10,
            "can_rotate": True,
            "integer_pixels": False
        },
        "polygon": {
            "min_vertices": 10,
            "max_vertices": 20,
            "objects_to_annotate": ["large vehicle"]
        },
        "line": {
            "min_vertices": 10,
            "max_vertices": 20,
            "objects_to_annotate": ["large vehicle"]
        },
        "point": { "objects_to_annotate": ["large vehicle"] },
        "cuboid": {
            "min_height": 10,
            "min_width": 10,
            "camera_intrinsics": {
                "fx": 10,
                "fy": 10,
                "cx": 10,
                "cy": 10,
                "skew": 10,
                "scalefactor": 10
            },
            "camera_rotation_quaternion": {
                "w": 10,
                "x": 10,
                "y": 10,
                "z": 10
            },
            "camera_height": 10
        },
        "ellipse": { "objects_to_annotate": ["large vehicle"] }
    },
    "events_to_annotate": ["event_1_name", "event_2_name"],
    "frame_rate": 1,
    "padding": 10,
    "paddingX": 10,
    "paddingY": 10,
    "metadata": {
        "newKey": "New Value",
        "newKey-1": "New Value"
    },
    "priority": 30,
    "project": "Project Name",
    "batch": "Batch Name",
    "callback_url": "http://www.example.com/callback",
    "unique_id": "unique_id",
    "clear_unique_id_on_error": True,
    "tags": ["tag1", "tag2"]
}
headers = {
    "accept": "application/json",
    "content-type": "application/json"
}

# Scale authenticates with HTTP Basic auth: API key as username, empty password.
response = requests.post(url, json=payload, headers=headers, auth=("YOUR_API_KEY", ""))

print(response.text)
POST /v1/task/videoannotation

import scaleapi
from scaleapi.tasks import TaskType
from scaleapi.exceptions import ScaleDuplicateResource

client = scaleapi.ScaleClient("YOUR_API_KEY")

payload = {
    "instruction": "**Instructions:** Please label all the things",
    "attachments": ["https://static.scale.com/scaleapi-lidar-images/2011_09_29_drive_0071_sync/image_02/data/0000000005.png", "https://static.scale.com/scaleapi-lidar-images/2011_09_29_drive_0071_sync/image_02/data/0000000008.png"],
    "attachment_type": "image",
    "geometries": {
        "box": {
            "min_height": 10,
            "min_width": 10,
            "can_rotate": True,
            "integer_pixels": False
        },
        "polygon": {
            "min_vertices": 10,
            "max_vertices": 20,
            "objects_to_annotate": ["large vehicle"]
        },
        "line": {
            "min_vertices": 10,
            "max_vertices": 20,
            "objects_to_annotate": ["large vehicle"]
        },
        "point": { "objects_to_annotate": ["large vehicle"] },
        "cuboid": {
            "min_height": 10,
            "min_width": 10,
            "camera_intrinsics": {
                "fx": 10,
                "fy": 10,
                "cx": 10,
                "cy": 10,
                "skew": 10,
                "scalefactor": 10
            },
            "camera_rotation_quaternion": {
                "w": 10,
                "x": 10,
                "y": 10,
                "z": 10
            },
            "camera_height": 10
        },
        "ellipse": { "objects_to_annotate": ["large vehicle"] }
    },
    "events_to_annotate": ["event_1_name", "event_2_name"],
    "frame_rate": 1,
    "padding": 10,
    "paddingX": 10,
    "paddingY": 10,
    "metadata": {
        "newKey": "New Value",
        "newKey-1": "New Value"
    },
    "priority": 30,
    "project": "Project Name",
    "batch": "Batch Name",
    "callback_url": "http://www.example.com/callback",
    "unique_id": "unique_id",
    "clear_unique_id_on_error": True,
    "tags": ["tag1", "tag2"]
}

try:
    client.create_task(TaskType.VideoAnnotation, **payload)
except ScaleDuplicateResource as err:
    print(err.message)  # If unique_id is already used for a different task

Response

{
  "task_id": "string",
  "created_at": "string",
  "type": "videoannotation",
  "status": "pending",
  "instruction": "string",
  "is_test": false,
  "urgency": "standard",
  "metadata": {},
  "project": "string",
  "callback_url": "string",
  "updated_at": "string",
  "work_started": false,
  "params": {
    "attachment_type": "website",
    "attachment": [
      null
    ],
    "geometries": {
      "box": {
        "objects_to_annotate": [
          null
        ],
        "min_height": 10,
        "min_width": 10
      },
      "polygon": {
        "objects_to_annotate": [
          null
        ]
      },
      "point": {
        "objects_to_annotate": [
          null
        ]
      }
    },
    "annotation_attributes": {
      "additionalProp": {
        "description": "string",
        "choice": "string"
      }
    },
    "events_to_annotate": [
      null
    ],
    "with_labels": true
  }
}

Create Video Playback Annotation Task

This endpoint creates a videoplaybackannotation task. In this task, we will view the given video file and draw annotations around the specified objects.

You are required to provide a URL to the video file as the attachment. It can be in mp4, webm, or ogg format.

You can optionally provide additional markdown-enabled or Google Doc-based instructions via the instruction parameter.

You may optionally specify a frame_rate, which will determine how many frames per second will be used to annotate the given video. The default value is 1.

You may also optionally specify events_to_annotate, a list of strings describing the events to annotate in the video.

If the request is successful, Scale will return the generated task object, at which point you should store the task_id to have a permanent reference to the task.

Body Params

project (string)

The name of the project to associate this task with.

batch (string)

The name of the batch to associate this task with. Note that if a batch is specified, you need not specify the project, as the task will automatically be associated with the batch's project. For Scale Rapid projects, specifying a batch is required. See the Batches section for more details.

instruction (string)

A markdown-enabled string or iframe-embedded Google Doc explaining how to do the task. You can use markdown to show example images, give structure to your instructions, and more. See our instruction best practices for more details. For Scale Rapid projects, DO NOT set this field unless you specifically want to override the project-level instructions.

callback_url (string)

The full URL (including the scheme http:// or https://) or email address of the callback that will be used when the task is completed.

attachments (array of strings)

An array of URLs for the frames you'd like to be annotated. These image frames are stitched together to create a video. This is required if attachment_type is image and must be omitted if attachment_type is video.

attachment (string)

A URL pointing to the video file attachment. Only the mp4, webm, and ogg formats are supported.

attachment_type (string)

Describes what type of file the attachment(s) are. The only options are image and video.

geometries (object, required)

An object mapping box, polygon, line, point, cuboid, or ellipse to Geometry objects.

annotation_attributes (object)

See the Annotation Attributes section for more details about annotation attributes.

events_to_annotate (array of strings)

The list of events to annotate.

duration_time (int32)

The duration of the video in seconds. This is ignored if attachment_type is image. Default is the full video length.

frame_rate (int32)

The number of frames to capture in one second. The default value is 1. This is ignored if attachment_type is image.

start_time (int32)

The start time in seconds. This is ignored if attachment_type is image.
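start_time, duration_time, and frame_rate together determine roughly how many frames get annotated: about duration_time * frame_rate frames, starting at start_time. A small sketch of that arithmetic (the helper name is ours):

```python
def frames_to_annotate(duration_time, frame_rate):
    """Approximate number of annotated frames for a video attachment."""
    return int(duration_time * frame_rate)

# 10 seconds of video sampled at 1 frame per second -> 10 frames
print(frames_to_annotate(10, 1))  # 10
```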

padding (int32)

The amount of padding in pixels added to the top, bottom, left, and right of each video frame. This allows labelers to extend annotations outside of the frames.

paddingX (int32)

The amount of padding in pixels added to the left and right of each video frame. Overrides padding if set.

paddingY (int32)

The amount of padding in pixels added to the top and bottom of each video frame. Overrides padding if set.

base_annotations (object)

Editable annotations, with the option to be "locked", that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Must contain the annotations field, which has the same format as the annotations field in the response.

can_add_base_annotations (boolean)

Whether new annotations can be added to the task when base_annotations are used. If set to true, new annotations can be added in addition to base_annotations. If set to false, new annotations cannot be added to the task.

can_edit_base_annotations (boolean)

Whether base_annotations can be edited in the task. If set to true, taskers can edit base_annotations (annotation position, attributes, etc.). If set to false, all aspects of base_annotations are locked.

can_edit_base_annotation_labels (boolean)

Whether base_annotations labels can be edited in the task. If set to true, taskers can edit the label of base_annotations. If set to false, the label is locked.

can_delete_base_annotations (boolean)

Whether base_annotations can be removed from the task. If set to true, base_annotations can be deleted from the task. If set to false, base_annotations cannot be deleted.

metadataobject

A set of key/value pairs that you can attach to a task object. It can be useful for storing additional information about the task in a structured format. Max 10KB.

priorityint32

A value of 10, 20, or 30 that defines the priority of a task within a project. The higher the number, the higher the priority.

unique_idstring

A arbitrary ID that you can assign to a task and then query for later. This ID must be unique across all projects under your account, otherwise the task submission will be rejected. See Avoiding Duplicate Tasks for more details.

clear_unique_id_on_errorboolean

If set to be true, if a task errors out after being submitted, the unique id on the task will be unset. This param allows workflows where you can re-submit the same unique id to recover from errors automatically

tagsarray of strings

Arbitrary labels that you can assign to a task. At most 5 tags are allowed per task. You can query tasks with specific tags through the task retrieval API.
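Since the API rejects tasks carrying more than 5 tags, it can be worth enforcing the limit client-side. A hedged sketch (`with_tags` and `MAX_TAGS` are hypothetical names; only the 5-tag limit comes from the docs):

```python
MAX_TAGS = 5  # documented per-task limit

def with_tags(payload, tags):
    """Return a copy of the payload with tags attached, enforcing the
    5-tag limit locally so the submission isn't rejected server-side."""
    if len(tags) > MAX_TAGS:
        raise ValueError(f"at most {MAX_TAGS} tags per task, got {len(tags)}")
    return {**payload, "tags": list(tags)}
```

For example, `with_tags(payload, ["night", "urban"])` succeeds, while passing six tags raises a `ValueError` before any request is made.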

Request

POST/task/imageannotation
import requests

url = "https://api.scale.com/v1/task/imageannotation"

payload = {
    "instruction": "**Instructions:** Please label all the things",
    "attachment": "https://static.scale.com/scaleapi-lidar-images/2011_09_26_drive_0051_sync/image_02/data/0000000000.png",
    "attachment_type": "image",
    "geometries": {
        "box": {
            "min_height": 10,
            "min_width": 10
        },
        "polygon": {
            "min_vertices": 1,
            "max_vertices": 10
        },
        "line": {
            "min_vertices": 1,
            "max_vertices": 10
        },
        "point": {},
        "cuboid": {
            "min_height": 0,
            "min_width": 0,
            "camera_intrinsics": {
                "fx": 10,
                "fy": 10,
                "cx": 10,
                "cy": 10,
                "skew": 0,
                "scalefactor": 1
            },
            "camera_rotation_quaternion": {
                "w": 10,
                "x": 10,
                "y": 10,
                "z": 10
            },
            "camera_height": 10
        }
    },
    "padding": 0,
    "paddingX": 0,
    "paddingY": 0,
    "priority": 30
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": "<YOUR_API_KEY>"
}

response = requests.post(url, json=payload, headers=headers)

print(response.text)
POST/task/imageannotation
import scaleapi
from scaleapi.tasks import TaskType
from scaleapi.exceptions import ScaleDuplicateResource

client = scaleapi.ScaleClient("<YOUR_API_KEY>")

payload = {
    "instruction": "**Instructions:** Please label all the things",
    "attachment": "https://static.scale.com/scaleapi-lidar-images/2011_09_26_drive_0051_sync/image_02/data/0000000000.png",
    "attachment_type": "image",
    "geometries": {
        "box": {
            "min_height": 10,
            "min_width": 10
        },
        "polygon": {
            "min_vertices": 1,
            "max_vertices": 10
        },
        "line": {
            "min_vertices": 1,
            "max_vertices": 10
        },
        "point": {},
        "cuboid": {
            "min_height": 0,
            "min_width": 0,
            "camera_intrinsics": {
                "fx": 10,
                "fy": 10,
                "cx": 10,
                "cy": 10,
                "skew": 0,
                "scalefactor": 1
            },
            "camera_rotation_quaternion": {
                "w": 10,
                "x": 10,
                "y": 10,
                "z": 10
            },
            "camera_height": 10
        }
    },
    "padding": 0,
    "paddingX": 0,
    "paddingY": 0,
    "priority": 30
}

try:
    client.create_task(TaskType.ImageAnnotation, **payload)
except ScaleDuplicateResource as err:
    print(err.message)  # If unique_id is already used for a different task

Response

{
  "task_id": "string",
  "created_at": "string",
  "type": "imageannotation",
  "status": "pending",
  "instruction": "string",
  "is_test": false,
  "urgency": "standard",
  "metadata": {},
  "project": "string",
  "callback_url": "string",
  "updated_at": "string",
  "work_started": false,
  "params": {
    "attachment_type": "image",
    "attachment": "http://i.imgur.com/3Cpje3l.jpg",
    "geometries": {
      "box": {
        "objects_to_annotate": [
          null
        ],
        "min_height": 5,
        "min_width": 5
      },
      "polygon": {
        "objects_to_annotate": [
          null
        ]
      },
      "point": {
        "objects_to_annotate": [
          null
        ]
      }
    },
    "annotation_attributes": {
      "additionalProp": {
        "type": "category",
        "description": "string",
        "choice": "string"
      }
    }
  }
}