# Visual Regression Testing
Automate visual regression testing by capturing screenshots of your application and comparing them against stored baselines to detect unintended UI changes.
## APIs Used

This use case relies on the Screenshots API to capture the visual state of your application; the comparison against baselines happens locally in your test script.
## Quick Example
```python
from supacrawler import SupacrawlerClient
import hashlib
import os
import requests

client = SupacrawlerClient(api_key=os.environ['SUPACRAWLER_API_KEY'])

def capture_baseline(url, test_name):
    """Capture a screenshot and store it (plus its hash) as the baseline."""
    job = client.create_screenshot_job(
        url=url,
        device="desktop",
        full_page=True
    )
    result = client.wait_for_job(job.job_id)

    # Download the screenshot and hash its bytes
    img_data = requests.get(result.data.screenshot).content
    img_hash = hashlib.md5(img_data).hexdigest()

    # Save the baseline image and its hash
    os.makedirs('baselines', exist_ok=True)
    with open(f'baselines/{test_name}.png', 'wb') as f:
        f.write(img_data)
    with open(f'baselines/{test_name}.hash', 'w') as f:
        f.write(img_hash)

    return img_hash

def check_regression(url, test_name):
    """Capture the current state and compare it against the stored baseline."""
    job = client.create_screenshot_job(url=url, device="desktop", full_page=True)
    result = client.wait_for_job(job.job_id)

    current_img = requests.get(result.data.screenshot).content
    current_hash = hashlib.md5(current_img).hexdigest()

    # Load the baseline hash
    with open(f'baselines/{test_name}.hash', 'r') as f:
        baseline_hash = f.read()

    if current_hash != baseline_hash:
        print(f"⚠️ Visual regression detected in {test_name}!")
        # Save the current screenshot for inspection
        os.makedirs('diffs', exist_ok=True)
        with open(f'diffs/{test_name}-current.png', 'wb') as f:
            f.write(current_img)
        return False
    else:
        print(f"✅ {test_name} passed")
        return True
```
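The hash comparison above treats any pixel change as a failure. If minor rendering noise causes false positives, a pixel-level comparison with a tolerance can help, and it also produces the visual diff mentioned in Best Practices below. The following is a minimal sketch assuming the Pillow library is installed (`pip install Pillow`); the `images_differ` helper and its `threshold` value are illustrative, not part of the Supacrawler SDK.

```python
from PIL import Image, ImageChops

def images_differ(baseline_path, current_path, diff_path, threshold=0.001):
    """Return True if more than `threshold` (fraction) of pixels differ."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return True  # different dimensions always count as a regression

    # Pixel-wise absolute difference between the two screenshots
    diff = ImageChops.difference(baseline, current)

    # Count pixels where any channel changed
    changed = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
    ratio = changed / (diff.size[0] * diff.size[1])

    if ratio > threshold:
        diff.save(diff_path)  # save the pixel-level diff for inspection
        return True
    return False
```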
## CI/CD Integration
```yaml
name: Visual Regression Tests

on: [push, pull_request]

jobs:
  visual-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        # SDK package name is assumed; adjust to match your requirements file
        run: pip install supacrawler requests

      - name: Run visual regression tests
        env:
          SUPACRAWLER_API_KEY: ${{ secrets.SUPACRAWLER_API_KEY }}
        run: |
          python visual_regression_tests.py

      - name: Upload diffs
        if: failure()
        uses: actions/upload-artifact@v3
        with:
          name: visual-diffs
          path: diffs/
```
## Multi-Page Testing
```python
pages = [
    {"name": "homepage", "url": "https://myapp.com/"},
    {"name": "pricing", "url": "https://myapp.com/pricing"},
    {"name": "dashboard", "url": "https://myapp.com/dashboard"}
]

results = []
for page in pages:
    passed = check_regression(page["url"], page["name"])
    results.append({"page": page["name"], "passed": passed})

failed = [r for r in results if not r["passed"]]
if failed:
    print(f"\n❌ {len(failed)} test(s) failed:")
    for test in failed:
        print(f"  - {test['page']}")
    exit(1)
else:
    print(f"\n✅ All {len(results)} tests passed!")
```
## Best Practices
- Capture full-page screenshots for comprehensive testing
- Use consistent viewport sizes across tests
- Wait for `networkidle` to ensure the page is fully loaded
- Hide dynamic content (dates, live counters) before capturing
- Store baselines in version control
- Generate visual diffs for failed tests
- Run tests in your CI/CD pipeline
- Test across multiple devices and browsers (see the sketch below)
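As referenced in the last item, here is a minimal sketch of capturing one baseline per device. The `"desktop"` value comes from the Quick Example; `"mobile"` and `"tablet"` are assumed identifiers, so check the Screenshots API reference for the device values it actually accepts.

```python
# Capture one screenshot per device so each page/device pair gets its own
# baseline (e.g. baselines/homepage-mobile.png).
# "mobile" and "tablet" are assumed device values; "desktop" comes from the
# Quick Example above.
devices = ["desktop", "mobile", "tablet"]

for device in devices:
    job = client.create_screenshot_job(
        url="https://myapp.com/",
        device=device,
        full_page=True
    )
    result = client.wait_for_job(job.job_id)
    print(f"{device}: {result.data.screenshot}")
```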