POST /datasets/v3/trigger
cURL
curl --request POST \
  --url https://api.brightdata.com/datasets/v3/trigger \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '[
  {
    "url": "https://www.airbnb.com/rooms/50122531"
  },
  {
    "url": "https://www.airbnb.com/rooms/50127677"
  }
]'
Example response:

{
  "snapshot_id": "s_m4x7enmven8djfqak"
}

Related guide: Crawl API Introduction
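For reference, the cURL request above can be assembled in Python with only the standard library. This is a minimal sketch, not an official client; the token and dataset ID are the placeholder values from this page, so substitute your own. The request is built but not sent:

```python
import json
import urllib.parse
import urllib.request

API_TOKEN = "<token>"                 # your Bright Data API key
DATASET_ID = "gd_m6gjtfmeh43we6cqc"   # example dataset ID from this page

# Build the trigger URL with the required dataset_id query parameter.
params = urllib.parse.urlencode({"dataset_id": DATASET_ID})
url = f"https://api.brightdata.com/datasets/v3/trigger?{params}"

# JSON body: an array of input objects for the scraper.
body = json.dumps([
    {"url": "https://www.airbnb.com/rooms/50122531"},
    {"url": "https://www.airbnb.com/rooms/50127677"},
]).encode()

request = urllib.request.Request(
    url,
    data=body,
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(request)        # uncomment to send
# snapshot_id = json.load(response)["snapshot_id"]
```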

Authorizations

Authorization
string
header
required

Use your Bright Data API Key as a Bearer token in the Authorization header.

Get API Key from: https://brightdata.com/cp/setting/users.

Example: Authorization: Bearer b5648e1096c6442f60a6c4bbbe73f8d2234d3d8324554bd6a7ec8f3f251f07df

Query Parameters

dataset_id
string
required

Your dataset ID

Example:

"gd_m6gjtfmeh43we6cqc"

include_errors
boolean

Include an error report with the results. When "include_errors" is set to true, the response includes a detailed report of any errors that occurred during data collection.

Example:

true

custom_output_fields
string

Filters the response to include only the specified fields. List the output columns you want, separated by a pipe (|).

For example, to have the response include only the URL and the date it was last updated, set the parameter to "url|about.updated_on". This lets you limit the output to the fields relevant to your needs.

Example:

"url|about.updated_on"
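A small Python sketch of building the pipe-separated value from a list of column names (the field names are the examples above). Note that standard URL encoding turns "|" into "%7C", which is the expected wire form:

```python
import urllib.parse

# Join the desired output columns with "|" to form custom_output_fields.
fields = ["url", "about.updated_on"]
custom_output_fields = "|".join(fields)

# When placed in the query string, the pipe is percent-encoded as %7C.
params = urllib.parse.urlencode({
    "dataset_id": "gd_m6gjtfmeh43we6cqc",
    "include_errors": "true",
    "custom_output_fields": custom_output_fields,
})
```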

Body

The input data can be provided in either JSON or CSV format: an array of objects containing the URLs or other parameters required by the scraper. The exact fields needed depend on the specific dataset being used.
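A sketch of producing the same two-row input in both accepted formats, using the URLs from the example request on this page (for CSV, the header row carries the field names and each subsequent row is one input):

```python
import csv
import io
import json

rows = [
    {"url": "https://www.airbnb.com/rooms/50122531"},
    {"url": "https://www.airbnb.com/rooms/50127677"},
]

# JSON body: an array of input objects.
json_body = json.dumps(rows)

# CSV body: a header row of field names followed by one row per input.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["url"])
writer.writeheader()
writer.writerows(rows)
csv_body = buf.getvalue()
```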

Response

200
application/json

Collection job successfully started

The response is of type object.
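On success, the response object carries the snapshot ID that identifies the collection job. A minimal sketch of extracting it from the example 200 body shown above:

```python
import json

# The 200 response body from the example above.
response_text = '{"snapshot_id": "s_m4x7enmven8djfqak"}'
snapshot_id = json.loads(response_text)["snapshot_id"]
# snapshot_id identifies the collection job for later retrieval.
```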