ScraperAPI Crawler v2.0 & dashboard UI launch
We’re excited to introduce ScraperAPI Crawler v2.0 and the new Crawler UI workspace in your dashboard: a complete, end-to-end solution for crawling the sites where the data you need lives.
All-New Crawler UI
Streamlined Crawler Workspace
Centralized Management: View all active and inactive crawlers in one place
Quick Status Overview: Monitor job health, costs, and performance at a glance
Intuitive Navigation: Easy access to create, configure, and manage crawlers
Visual Job Monitoring
Real-time Progress Tracking: Watch your crawlers work with live updates
Interactive Job Tables: Search and filter through the crawler job list
Detailed Insights: Cost tracking and success monitoring
Simplified Crawler Setup
Step-by-Step Configuration: Guided process for creating new crawlers
Result Downloads: Download job results directly from the dashboard
What's New in Crawler v2.0
Key Improvements
Smarter, job-based crawling engine (sketched conceptually after this list)
Starts from a single URL and automatically discovers and scrapes new pages based on your rules.
Skips duplicate URLs to avoid loops and wasted credits.
Handles failed requests gracefully (failures don’t cost credits, and full reasons are included in job summaries).
Automatically stops at your credit budget or depth limit, so you stay in control of cost and scope.
Streams page results in real time to your webhook, plus delivers a full job summary when the crawl is done.
Flexible scheduling: You can run a crawl once or schedule it hourly, daily, weekly, or monthly, all while keeping budget and depth controls in place.
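To make the behavior above concrete, here is a conceptual sketch of a job-based crawl loop with duplicate skipping and budget/depth stop conditions. It is an illustration only, not ScraperAPI’s engine: the function names (`fetch_page`, `extract_links`, `matches_rules`) and the flat one-credit-per-page cost are assumptions.

```python
from collections import deque
from urllib.parse import urljoin

def crawl_job(start_url, matches_rules, fetch_page, extract_links,
              max_depth=3, credit_budget=1000, credits_per_page=1):
    """Conceptual sketch of a job-based crawl: breadth-first discovery
    with duplicate skipping and budget/depth stop conditions.
    fetch_page and extract_links stand in for the real scraping engine."""
    seen = {start_url}                  # duplicate URLs are skipped, so loops cost nothing
    frontier = deque([(start_url, 0)])  # (url, depth) pairs still to visit
    credits_used = 0
    results, failures = [], []

    while frontier:
        url, depth = frontier.popleft()
        if credits_used + credits_per_page > credit_budget:
            break                       # stop at the credit budget
        try:
            page = fetch_page(url)      # assumed: one successful page costs one credit
            credits_used += credits_per_page
            results.append(page)
        except Exception as err:
            failures.append({"url": url, "reason": str(err)})  # failures are not charged
            continue
        if depth < max_depth:           # stop discovering past the depth limit
            for link in extract_links(page):
                absolute = urljoin(url, link)
                if matches_rules(absolute) and absolute not in seen:
                    seen.add(absolute)
                    frontier.append((absolute, depth + 1))

    return {"pages": results, "failures": failures, "credits_used": credits_used}
```

The hosted engine layers scheduling, retries, and webhook delivery on top of a loop like this, but the stop conditions work the same way: duplicates are skipped, failures are recorded without being charged, and the job ends at the budget or depth limit.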
Whether you’re crawling 10 pages or 10,000, Crawler v2.0 runs each job from start to finish and sends every page result to your webhook as it’s processed.
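Because results are pushed to your webhook as they are processed, a small HTTP receiver is all you need on your side. The sketch below uses Flask, and the payload field names (`type`, `url`, `job_id`, `stats`) are assumptions for illustration; rely on the payloads your jobs actually send for the exact schema.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/crawler-webhook", methods=["POST"])
def crawler_webhook():
    # Field names below are illustrative assumptions; the real payload
    # schema comes from your Crawler job configuration and summaries.
    event = request.get_json(force=True)

    if event.get("type") == "page_result":
        # One page finished scraping; handle it as it arrives.
        print(f"page {event.get('url')} from job {event.get('job_id')}")
    elif event.get("type") == "job_summary":
        # The crawl is done: totals, costs, and failure reasons arrive here.
        print(f"job {event.get('job_id')} finished: {event.get('stats')}")

    return "", 200

if __name__ == "__main__":
    app.run(port=8000)
```

In practice you would write each payload to storage or a queue instead of printing it.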