# Create Crawl

Specify the website root URL and the collection that will store the website content. The service crawls the site automatically. This API returns a `crawl_id` that you can use to track the crawl status.

{% tabs %}
{% tab title="HTTP" %}

```http
POST /docs/create_crawl HTTP/1.1
Api-Key: my_api_key
Content-Type: application/json
Host: my_account_id.us-west-2.aws.chatbees.ai

{
  "namespace_name": "string",
  "collection_name": "string",
  "root_url": "string"
}

Response:
{
  "crawl_id": "string"
}
```

{% endtab %}

{% tab title="Python" %}

```python
import chatbees as cb

# Configure the API key
cb.init(api_key="my_api_key", account_id="my_account_id")

col = cb.collection('llm_research')

# Create a crawl; currently up to 200 pages are crawled.
# A crawl_id is returned that you can use to check the crawl status.
root_url = "https://www.example.com"
crawl_id = col.create_crawl(root_url)
```

{% endtab %}
{% endtabs %}
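
The HTTP request above can also be sent from the command line. This curl sketch assumes placeholder values for the account ID, API key, namespace, collection, and root URL; substitute your own:

```shell
# POST to the documented create_crawl endpoint (placeholder host and credentials)
curl -X POST "https://my_account_id.us-west-2.aws.chatbees.ai/docs/create_crawl" \
  -H "Api-Key: my_api_key" \
  -H "Content-Type: application/json" \
  -d '{
        "namespace_name": "public",
        "collection_name": "llm_research",
        "root_url": "https://www.example.com"
      }'
# The response body contains the crawl_id, e.g. {"crawl_id": "..."}
```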
