Quickstart
This guide will get you all set up and ready to use the Supacrawler API. We'll cover how to get your API key and make your first API request. We'll also look at where to go next to find all the information you need to take full advantage of our powerful web scraping API.
Get Your API Key
Before making your first API request, you need to get your API key:
- Sign up at supacrawler.com if you haven't already
- Navigate to your dashboard at supacrawler.com/dashboard/api-keys
- Create or copy your API key - keep this secure!
Making your first API request
After getting your API key, you're ready to make your first call to the Supacrawler API. Below, you can see how to send a GET request to the Scrape endpoint to extract content from any webpage.
curl -G https://api.supacrawler.com/api/v1/scrape \
-H "Authorization: Bearer YOUR_API_KEY" \
--data-urlencode url="https://example.com" \
-d format="markdown"
Response
{
"success": true,
"url": "https://example.com",
"content": "# Example Domain\n\nThis domain is for use in illustrative examples in documents...",
"title": "Example Domain",
"metadata": {
"status_code": 200
}
}
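If you work in Python, the request above can be sketched without any third-party dependencies. This is a minimal sketch, not official client code: the helper name build_scrape_url is illustrative, and only the endpoint, parameters, and response shape shown above are taken from the API.

```python
import json
from urllib.parse import urlencode

API_BASE = "https://api.supacrawler.com/api/v1"

def build_scrape_url(target_url, fmt="markdown"):
    # urlencode percent-encodes the target URL, which is what
    # curl's --data-urlencode does for query parameters
    query = urlencode({"url": target_url, "format": fmt})
    return f"{API_BASE}/scrape?{query}"

request_url = build_scrape_url("https://example.com")
print(request_url)
# https://api.supacrawler.com/api/v1/scrape?url=https%3A%2F%2Fexample.com&format=markdown

# Reading the fields of a response shaped like the example above:
response_body = """{"success": true, "url": "https://example.com",
 "title": "Example Domain", "metadata": {"status_code": 200}}"""
data = json.loads(response_body)
if data["success"]:
    print(data["title"])  # Example Domain
```

Fetch request_url with your HTTP client of choice, sending the same Authorization: Bearer header as the curl example.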
Try Different Features
Scrape with Different Formats
Extract content in different formats:
HTML format
curl -G https://api.supacrawler.com/api/v1/scrape \
-H "Authorization: Bearer YOUR_API_KEY" \
--data-urlencode url="https://example.com" \
-d format="html"
Discover links
curl -G https://api.supacrawler.com/api/v1/scrape \
-H "Authorization: Bearer YOUR_API_KEY" \
--data-urlencode url="https://example.com" \
-d format="links" \
-d depth=2
Create Crawl Jobs
For larger sites, use asynchronous crawl jobs:
Create crawl job
curl https://api.supacrawler.com/api/v1/crawl \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com",
"type": "crawl",
"depth": 2,
"maxPages": 10
}'
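When crawl jobs are created from code rather than the command line, the JSON body is usually built from a dictionary. A small Python sketch, using only the field names shown in the curl example above (note the camelCase maxPages):

```python
import json

# Same request body as the curl example above
crawl_job = {
    "url": "https://example.com",
    "type": "crawl",
    "depth": 2,      # follow links up to two levels deep
    "maxPages": 10,  # stop after ten pages
}
body = json.dumps(crawl_job)
print(body)
```

POST this body to the /crawl endpoint with Content-Type: application/json and your bearer token, exactly as the curl call does.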
Take Screenshots
Capture screenshots of webpages:
Create screenshot job
curl https://api.supacrawler.com/api/v1/screenshots \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com",
"device": "desktop",
"fullPage": true
}'
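The screenshot body follows the same pattern. A hedged Python sketch (the helper name screenshot_job is illustrative; only the url, device, and fullPage fields come from the example above):

```python
import json

def screenshot_job(url, device="desktop", full_page=True):
    # Mirrors the curl body above; note the camelCase "fullPage" key
    return json.dumps({"url": url, "device": device, "fullPage": full_page})

print(screenshot_job("https://example.com"))
```

As with crawl jobs, POST the resulting JSON to the /screenshots endpoint with Content-Type: application/json and your bearer token.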
What's next?
Great, you're now set up with an API key and have made your first request to the API. Here are a few links that might be handy as you venture further into the Supacrawler API:
- Grab your API key from the Supacrawler dashboard
- Check out the Scrape endpoint for single-page content extraction
- Learn about Jobs for crawling multiple pages
- Explore Screenshots for visual webpage capture
- Learn about authentication and security best practices
- Understand error handling for robust integrations
Common Use Cases
Content Analysis
Extract clean content from articles, blogs, and documentation for analysis or processing.
Site Monitoring
Set up automated scraping to monitor changes on important pages.
Data Collection
Gather structured data from multiple pages using crawl jobs.
Visual Testing
Capture screenshots across different devices for visual regression testing.
SEO Auditing
Extract metadata, titles, and content structure for SEO analysis.