Frequently Asked Questions

Find answers to common questions about Supacrawler

General

What is Supacrawler?

Supacrawler is a powerful API platform for web scraping, crawling, and screenshot capture. It's built for reliability at scale and offers official SDKs for Python and TypeScript. You can scrape websites, extract structured data with AI, capture screenshots, and monitor pages for changes.

How do I get an API key?

Sign up for free at supacrawler.com and visit your Dashboard → API Keys. Your API key will be visible there and can be rotated at any time for security purposes.
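
As a minimal sketch of using the key, assuming it is passed as a standard bearer token (the environment variable name, endpoint URL, and header shown here are illustrative, not confirmed by the API reference):

```python
import os
import requests

# Assumption: the key is sent as a bearer token; check the API reference
# for the exact header and base URL.
API_KEY = os.environ["SUPACRAWLER_API_KEY"]  # hypothetical env var name

response = requests.get(
    "https://api.supacrawler.com/v1/scrape",  # illustrative endpoint
    params={"url": "https://example.com"},
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.status_code)
```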

What are the main features?

Supacrawler offers five core features: Scrape (fetch and parse single pages), Crawl (follow links across a site), Parse (AI-powered data extraction), Screenshots (high-quality captures), and Watch (monitor pages for changes).

Is there a free plan?

Yes! The Hobby plan is free and includes 500 credits per month. It's perfect for testing the API and small personal projects. No credit card required to get started.

Billing & Pricing

How does billing work?

We offer monthly and annual subscriptions. Each plan includes a set number of credits per month. Credits never expire as long as you maintain an active subscription. You can manage or cancel your subscription anytime from your account settings.

How are credits consumed?

Credits are consumed per operation: Scrape costs 1 credit per page, Crawl costs 1 credit per successful page, Screenshots cost 3 credits (base) or 5 credits (full-page), Watch costs 10 credits per job, and Parse costs vary based on content complexity.
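
As a rough illustration of how these rates add up, here is a small sketch that estimates the credit cost of a job mix. Parse is omitted because its cost varies with content complexity:

```python
# Per-operation credit costs as listed above; Parse is excluded because
# its cost depends on content complexity.
CREDIT_COSTS = {
    "scrape": 1,           # per page
    "crawl": 1,            # per successful page
    "screenshot": 3,       # base viewport capture
    "screenshot_full": 5,  # full-page capture
    "watch": 10,           # per job
}

def estimate_credits(usage: dict[str, int]) -> int:
    """Estimate total credits for a mix of operations."""
    return sum(CREDIT_COSTS[op] * count for op, count in usage.items())

# Example: 100 scrapes, a 50-page crawl, 10 full-page screenshots, 2 watch jobs
print(estimate_credits({"scrape": 100, "crawl": 50, "screenshot_full": 10, "watch": 2}))
# -> 100 + 50 + 50 + 20 = 220 credits
```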

Can I cancel my subscription at any time?

Yes, you can cancel at any time. Your plan remains active until the end of the current billing period, and you keep all accumulated credits during that time.

How much do I save with annual billing?

Annual billing gives you roughly two months free: Starter saves $30/year ($15 → $12.50/month), and Pro saves $132/year ($65 → $54/month). Credits are still distributed monthly.

What happens if I run out of credits?

When you run out of credits, API requests will return an error indicating insufficient credits. You can purchase additional credits or upgrade to a higher plan to continue using the service.

Technical

Do you render JavaScript?

Yes! All pages are rendered with a headless browser to capture the final DOM state. This ensures JavaScript-heavy sites, SPAs, and dynamically loaded content are fully captured for scraping, crawling, and screenshots.

Do you support full-page and element screenshots?

Yes. The Screenshots API supports viewport screenshots, full-page screenshots, and targeting specific elements using CSS selectors. You can also customize device type, format (PNG/JPEG), quality, and more.
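
As a sketch, a full-page capture or an element capture might look like the request below; the endpoint and parameter names are assumptions, so check the Screenshots API reference for the exact fields:

```python
import os
import requests

API_KEY = os.environ["SUPACRAWLER_API_KEY"]

# Hypothetical payload: field names are illustrative, not confirmed.
payload = {
    "url": "https://example.com",
    "full_page": True,         # capture the entire scrollable page
    # "selector": "#pricing",  # or target a single element by CSS selector
    "format": "png",
}

requests.post(
    "https://api.supacrawler.com/v1/screenshots",  # illustrative endpoint
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
```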

Is rate limiting applied?

Yes, rate limits vary by plan to ensure fair usage and optimal performance. When limits are exceeded, the API returns a 429 status code. We recommend implementing exponential backoff for retries.
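
A minimal client-side backoff sketch; the retry budget and delays below are arbitrary choices, not documented limits:

```python
import time
import requests

def request_with_backoff(url: str, headers: dict, max_retries: int = 5) -> requests.Response:
    """Retry on 429 with exponential backoff: 1s, 2s, 4s, ..."""
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # Honor Retry-After if present (assumed to be in seconds),
        # otherwise back off exponentially.
        wait = float(response.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    return response  # last response if all attempts were rate-limited
```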

Can I use SDKs?

Yes! We provide official SDKs for TypeScript/JavaScript and Python. Both SDKs are fully typed, include comprehensive examples, and are actively maintained. They make integration much easier than using the REST API directly.

Do you support authentication (cookies/headers)?

Yes, you can pass custom headers and cookies to access authenticated pages. This is useful for scraping content behind login walls or pages that require specific session data.
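
A sketch of passing session data with a scrape request; the `headers` and `cookies` field names are assumptions based on the description above:

```python
import os
import requests

API_KEY = os.environ["SUPACRAWLER_API_KEY"]

# Hypothetical payload shape: consult the API reference for the exact fields.
payload = {
    "url": "https://example.com/account",
    "headers": {"X-Requested-With": "XMLHttpRequest"},
    "cookies": "session_id=abc123; theme=dark",  # a captured session cookie
}

requests.post(
    "https://api.supacrawler.com/v1/scrape",  # illustrative endpoint
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
```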

What output formats are supported?

Scrape supports Markdown, HTML, and Links. Parse supports JSON, CSV, and Markdown. Screenshots support PNG and JPEG formats. All responses include metadata like status codes, titles, and timestamps.

Usage & Monitoring

Where can I see usage and logs?

Visit your Dashboard to monitor usage, view job statuses, check credit consumption, and access detailed logs for each API endpoint. You can filter by date, status, and endpoint type.

What happens if a page fails to load?

Failed requests return structured error details including the error type, message, and HTTP status code. Failed pages in crawl jobs don't consume credits, ensuring you only pay for successful extractions.

Does the API retry failed requests?

The API includes automatic retries for transient failures (network issues, timeouts). For permanent failures (404, 403), no retries are attempted. You can implement custom retry logic in your application for additional control.
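
If you add your own retry layer on top, one reasonable sketch is to retry only transient failures and give up immediately on permanent ones, mirroring the API's own behavior:

```python
import time
import requests

TRANSIENT_STATUSES = {429, 500, 502, 503, 504}  # worth retrying
# 4xx errors like 404 and 403 are permanent: retrying won't help.

def fetch_with_retries(url: str, headers: dict, attempts: int = 3) -> requests.Response:
    for attempt in range(attempts):
        try:
            response = requests.get(url, headers=headers, timeout=30)
        except (requests.ConnectionError, requests.Timeout):
            time.sleep(2 ** attempt)  # network hiccup: back off and retry
            continue
        if response.status_code in TRANSIENT_STATUSES and attempt < attempts - 1:
            time.sleep(2 ** attempt)
            continue
        return response  # success or a permanent failure: stop here
    raise RuntimeError(f"{url}: gave up after {attempts} attempts")
```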

Can I receive webhook notifications?

Yes, the Watch API supports webhook notifications when page changes are detected. You can configure notification preferences to receive alerts only on changes or for all checks.
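
A minimal receiver sketch using Flask; the payload fields shown are assumptions about what a change notification might contain, not a documented schema:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/supacrawler", methods=["POST"])
def handle_watch_event():
    event = request.get_json(force=True)
    # Hypothetical fields: the actual notification schema may differ.
    url = event.get("url")
    if event.get("changed"):
        print(f"Change detected on {url}")
    return "", 204  # acknowledge quickly; do heavy work asynchronously

if __name__ == "__main__":
    app.run(port=8000)
```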

Advanced Features

How does AI parsing work?

The Parse API uses advanced language models to understand your natural language prompts and extract structured data from web pages. You can provide JSON schemas for precise extraction or use freeform prompts for flexibility.
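
A sketch of a schema-guided parse request; the endpoint and field names (`prompt`, `schema`) are assumptions based on the description above:

```python
import os
import requests

API_KEY = os.environ["SUPACRAWLER_API_KEY"]

# Hypothetical payload: a natural-language prompt plus an optional JSON
# schema that pins down the output shape. Field names are illustrative.
payload = {
    "url": "https://example.com/products",
    "prompt": "Extract each product's name and price",
    "schema": {
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "price": {"type": "number"},
            },
        },
    },
}

requests.post(
    "https://api.supacrawler.com/v1/parse",  # illustrative endpoint
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
```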

What is crawl depth and how does it work?

Crawl depth controls how many levels deep the crawler follows links. Depth 1 crawls only the starting page, depth 2 includes links from the starting page, depth 3 includes links from those pages, and so on. Higher plans allow greater depth.
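
The sketch below is not how the crawler is implemented; it just illustrates what the depth number counts, using a breadth-first walk where `links_of` stands in for whatever fetches a page's links:

```python
from collections import deque

def pages_by_depth(start: str, links_of, max_depth: int) -> dict[int, list[str]]:
    """Group URLs by crawl level: level 1 is the start page,
    level 2 its links, level 3 their links, and so on."""
    seen, frontier, levels = {start}, deque([(start, 1)]), {}
    while frontier:
        url, depth = frontier.popleft()
        levels.setdefault(depth, []).append(url)
        if depth == max_depth:
            continue  # depth limit reached: don't expand further
        for link in links_of(url):
            if link not in seen:
                seen.add(link)
                frontier.append((link, depth + 1))
    return levels
```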

How often does Watch check for changes?

Watch supports hourly, daily, and weekly check frequencies. You can also trigger manual checks at any time. Changes are detected by comparing content hashes, and you receive notifications based on your preferences.
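
Conceptually, hash-based change detection looks like the sketch below; Supacrawler's actual hashing details aren't documented here, this only shows the idea:

```python
import hashlib

def content_hash(content: str) -> str:
    """Fingerprint page content; any edit produces a different digest."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

previous = content_hash("<html>old content</html>")
current = content_hash("<html>new content</html>")

if current != previous:
    print("Change detected: notify subscribers")
```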

What screenshot customization options are available?

Screenshots support device presets (desktop, mobile, tablet, custom), dark mode, content blocking (ads, trackers, cookies), element hiding, custom headers/cookies, accessibility features, and wait strategies for dynamic content.
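
A sketch combining several of these options in one payload, sent the same way as the earlier screenshot example; every field name below is an assumption, so verify against the Screenshots API reference:

```python
# Hypothetical option names: illustrative only, not confirmed.
payload = {
    "url": "https://example.com",
    "device": "mobile",                  # preset viewport
    "dark_mode": True,
    "block_ads": True,
    "block_cookie_banners": True,
    "hide_selectors": ["#chat-widget"],  # hide distracting elements
    "wait_for": "#hero-image",           # wait until dynamic content renders
    "format": "jpeg",
    "quality": 80,
}
```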

Compliance & Security

Do you have content guidelines?

Yes. Users must respect target website terms of service and applicable laws (GDPR, CCPA, etc.). Abuse, illegal scraping, or violations may result in account suspension. Always check robots.txt and respect rate limits.
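
Python's standard library can check robots.txt before you queue a URL; a quick sketch:

```python
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # fetch and parse the file

if robots.can_fetch("*", "https://example.com/private/page"):
    print("Allowed: safe to queue this URL")
else:
    print("Disallowed: skip it")
```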

How is my data handled?

We don't store scraped content unless explicitly requested for caching purposes. API keys are encrypted, and all data transmission uses HTTPS. We comply with GDPR and CCPA regulations for data privacy.

What's your uptime guarantee?

Pro plans include a 99.9% uptime SLA. We monitor our infrastructure 24/7 and provide status updates at status.supacrawler.com. Planned maintenance is announced in advance via email and dashboard notifications.

What support do you offer?

Hobby users have access to community forums and documentation. Starter users get email support with best-effort response times. Pro users receive priority email support with 24-hour response times.

Still Have Questions?

Can't find what you're looking for? Reach out to our support team.
