Quickstart

This guide will get you all set up and ready to use the Supacrawler API. We'll cover how to get your API key and make your first API request. We'll also look at where to go next to find all the information you need to take full advantage of our powerful web scraping API.

Note

Before you can make requests to the Supacrawler API, you will need to grab your API key from your dashboard at supacrawler.com/dashboard/api-keys.

Get Your API Key

Before making your first API request, you need to get your API key:

  1. Sign up at supacrawler.com if you haven't already
  2. Navigate to your dashboard at supacrawler.com/dashboard/api-keys
  3. Create or copy your API key, and keep it secure (one way to do that is sketched below)
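
A common way to keep the key out of your source code is to load it from an environment variable. The sketch below assumes the key has been exported as SUPACRAWLER_API_KEY; that variable name is just a convention for this example, not something the API requires.

Store the key in an environment variable (Python)
import os

# Read the key from the environment rather than hard-coding it.
# SUPACRAWLER_API_KEY is an arbitrary name chosen for this example.
API_KEY = os.environ["SUPACRAWLER_API_KEY"]

# Authorization header used by every request in this guide.
HEADERS = {"Authorization": f"Bearer {API_KEY}"}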

Making your first API request

After getting your API key, you're ready to make your first call to the Supacrawler API. The examples below (cURL, JavaScript, Python, and PHP) send a GET request to the Scrape endpoint to extract content from any webpage.

cURL
curl -G https://api.supacrawler.com/api/v1/scrape \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d url="https://example.com" \
  -d format="markdown"
JavaScript
const response = await fetch(
  'https://api.supacrawler.com/api/v1/scrape?url=https://example.com&format=markdown',
  {
    headers: {
      Authorization: 'Bearer YOUR_API_KEY',
    },
  },
)

const data = await response.json()
console.log(data.content)
Python
import requests

response = requests.get(
    'https://api.supacrawler.com/api/v1/scrape',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    params={
        'url': 'https://example.com',
        'format': 'markdown'
    }
)

data = response.json()
print(data['content'])
PHP
<?php
$curl = curl_init();

curl_setopt_array($curl, [
    CURLOPT_URL => 'https://api.supacrawler.com/api/v1/scrape?url=https://example.com&format=markdown',
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => [
        'Authorization: Bearer YOUR_API_KEY'
    ],
]);

$response = curl_exec($curl);
curl_close($curl);

$data = json_decode($response, true);
echo $data['content'];
?>
Response
{
  "success": true,
  "url": "https://example.com",
  "content": "# Example Domain\n\nThis domain is for use in illustrative examples in documents...",
  "title": "Example Domain",
  "metadata": {
    "status_code": 200
  }
}
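
In Python, you might guard on the success flag before using the content. This is a minimal sketch based on the response shape shown above; it doesn't assume any fields beyond those in the example.

Handle the response (Python)
import requests

response = requests.get(
    'https://api.supacrawler.com/api/v1/scrape',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    params={'url': 'https://example.com', 'format': 'markdown'},
)
data = response.json()

# Check the success flag before trusting the content, mirroring
# the response shape shown above.
if data.get('success'):
    print(data['title'])
    print(data['content'])
else:
    print('Scrape failed:', data)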

Read the docs for the Scrape endpoint →

Try Different Features

Scrape with Different Formats

Extract content in different formats:

HTML format
curl -G https://api.supacrawler.com/api/v1/scrape \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d url="https://example.com" \
  -d format="html"
Discover links
curl -G https://api.supacrawler.com/api/v1/scrape \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d url="https://example.com" \
  -d format="links" \
  -d depth=2
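
As a rough sketch, here is the same links request from Python. The shape of the links response isn't shown above, so the code assumes the discovered URLs come back under a links key; adjust to whatever the Scrape endpoint actually returns.

Discover links (Python)
import requests

response = requests.get(
    'https://api.supacrawler.com/api/v1/scrape',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    params={'url': 'https://example.com', 'format': 'links', 'depth': 2},
)
data = response.json()

# Assumption: discovered URLs are returned under a 'links' key.
for link in data.get('links', []):
    print(link)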

Create Crawl Jobs

For larger sites, use asynchronous crawl jobs:

Create crawl job
curl https://api.supacrawler.com/api/v1/crawl \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "type": "crawl",
    "depth": 2,
    "maxPages": 10
  }'
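
Because crawl jobs run asynchronously, the response to the request above is a job reference rather than the crawled pages. The polling loop below is a sketch: it assumes the creation response includes a job identifier (called job_id here) and that the job can be checked with a GET to /api/v1/crawl/{job_id}. Consult the Crawl endpoint docs for the actual field names and status values.

Create and poll a crawl job (Python)
import time
import requests

HEADERS = {'Authorization': 'Bearer YOUR_API_KEY'}
BASE = 'https://api.supacrawler.com/api/v1'

# Create the crawl job (mirrors the curl request above).
job = requests.post(f'{BASE}/crawl', headers=HEADERS, json={
    'url': 'https://example.com',
    'type': 'crawl',
    'depth': 2,
    'maxPages': 10,
}).json()

# Assumption: the response carries a job identifier and a status
# endpoint exists at /crawl/{job_id}. Check the Crawl docs for the
# real field names and terminal status values.
job_id = job.get('job_id') or job.get('id')
while True:
    status = requests.get(f'{BASE}/crawl/{job_id}', headers=HEADERS).json()
    if status.get('status') in ('completed', 'failed'):
        break
    time.sleep(5)  # avoid hammering the API while the crawl runs
print(status)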

Take Screenshots

Capture screenshots of webpages:

Create screenshot job
curl https://api.supacrawler.com/api/v1/screenshots \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "device": "desktop",
    "fullPage": true
  }'
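
The same request from Python might look like the sketch below. Whether the response returns the image URL directly or a job to poll isn't shown here, so the screenshot field name is an assumption; see the Screenshots docs for the actual response shape.

Create screenshot job (Python)
import requests

response = requests.post(
    'https://api.supacrawler.com/api/v1/screenshots',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    json={'url': 'https://example.com', 'device': 'desktop', 'fullPage': True},
)
result = response.json()

# Assumption: the response includes a URL to the rendered screenshot.
print(result.get('screenshot', result))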

What's next?

Great, you're now set up with an API key and have made your first request to the API. The use cases below should give you a sense of what to build next with the Supacrawler API:

Common Use Cases

Content Analysis

Extract clean content from articles, blogs, and documentation for analysis or processing.

Site Monitoring

Set up automated scraping to monitor changes on important pages.
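
For example, a scheduled script could scrape a page on each run and compare a hash of the content against the previous run. This is a sketch of the general idea, not a built-in Supacrawler feature; scheduling and persistence of the previous fingerprint are left to your own tooling.

Monitor a page for changes (Python)
import hashlib
import requests

def page_fingerprint(url: str) -> str:
    # Scrape the page as markdown and hash the content so a change
    # anywhere on the page produces a different fingerprint.
    data = requests.get(
        'https://api.supacrawler.com/api/v1/scrape',
        headers={'Authorization': 'Bearer YOUR_API_KEY'},
        params={'url': url, 'format': 'markdown'},
    ).json()
    return hashlib.sha256(data.get('content', '').encode()).hexdigest()

# Compare against the fingerprint stored by the previous run.
current = page_fingerprint('https://example.com')
print('fingerprint:', current)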

Data Collection

Gather structured data from multiple pages using crawl jobs.

Visual Testing

Capture screenshots across different devices for visual regression testing.

SEO Auditing

Extract metadata, titles, and content structure for SEO analysis.
