Last Updated: March 2026

What Is Crawl Budget?

Direct Answer
Crawl budget is the practical limit on how many URLs a search engine crawler will fetch from your site. It is determined by two factors: the crawl capacity limit (how much crawling Google can do without overwhelming your servers) and crawl demand (how much crawling Google wants to do based on content value, freshness, and popularity).

How Googlebot Allocates Crawl Budget

Google defines crawl budget at the hostname level — meaning www.example.com and blog.example.com have separate budgets. The allocation comes down to two independent factors (a minimal model follows the list):

  • 1. Crawl Capacity Limit: How hard can Google crawl your host without breaking it? This is computed from sustained server responsiveness and error rates. Fast responses increase capacity; slow responses or 5xx errors reduce it
  • 2. Crawl Demand: How much does Google want to crawl your host? This is driven by perceived URL inventory, page popularity, content staleness, and site events like migrations
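
Taken together, the two factors behave roughly like a minimum: Googlebot fetches no more than the host can sustain and no more than it wants to. A minimal model of that interaction in Python, where the function name and the example numbers are illustrative only and not any Google API:

    def effective_crawl_rate(capacity_limit: float, crawl_demand: float) -> float:
        """Crawling is bounded by the lower of what the host can sustain
        (capacity) and what Google wants to fetch (demand)."""
        return min(capacity_limit, crawl_demand)

    # Fast, healthy host but little content worth refreshing:
    print(effective_crawl_rate(capacity_limit=500, crawl_demand=120))   # 120 URLs/day
    # Popular, frequently updated site on a slow or error-prone host:
    print(effective_crawl_rate(capacity_limit=80, crawl_demand=2000))   # 80 URLs/day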

An important distinction: crawling is not indexing. Being crawled does not guarantee being indexed. In Search Console, "Discovered – currently not indexed" usually means Google knows about the URL but has postponed fetching it for crawl scheduling reasons, while "Crawled – currently not indexed" reflects quality assessments or canonicalization decisions made downstream of the fetch.

When Does Crawl Budget Actually Matter?

Google's own guidance is clear — crawl budget is an advanced concern for specific site profiles:

  • Large sites (~1M+ unique pages) with content that changes moderately often (about weekly)
  • Medium+ sites (~10K+ unique pages) with very rapidly changing content (daily updates)
  • Sites with a large share of URLs classified as "Discovered – currently not indexed"

If your site has fewer than 1,000 pages, crawl budget is rarely a constraint. Google notes that its Crawl Stats report is "aimed at advanced users" with large, dynamic sites.
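
As a rough way to apply these thresholds, a minimal sketch that simply restates Google's published guidance as a heuristic; the function name and the update-frequency categories are illustrative, not an official cutoff:

    def crawl_budget_likely_matters(unique_pages: int, update_frequency: str) -> bool:
        """Rough heuristic restating Google's published thresholds.
        update_frequency: "daily", "weekly", or "rarely" (illustrative categories)."""
        if unique_pages >= 1_000_000 and update_frequency in ("daily", "weekly"):
            return True
        if unique_pages >= 10_000 and update_frequency == "daily":
            return True
        return False

    print(crawl_budget_likely_matters(50_000, "daily"))   # True
    print(crawl_budget_likely_matters(800, "daily"))      # False: too small to worry about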

How to Optimize Crawl Budget

The highest-leverage strategy is not requesting more crawling — it is reducing crawl waste. Focus on eliminating low-value URL variants and fixing technical traps that consume crawl resources without producing indexing value.

Technique | Impact | Effort
Remove thin/duplicate pages | High — reduces wasted inventory | Medium
Fix redirect chains | High — each hop counts as a request | Medium
Clean XML sitemaps | Medium — improves discovery signals | Low
Block low-value URLs via robots.txt | High — prevents unnecessary crawling | Low
Improve internal linking | Medium-High — faster discovery | Medium
Improve server response times | High — increases crawl capacity | High
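
As an illustration of the robots.txt row in the table above, a sketch of the kind of directives that keep crawlers out of infinite URL spaces. The parameter names and paths are hypothetical; audit your own logs before blocking anything:

    User-agent: *
    # Faceted navigation and sort parameters that multiply URL variants
    Disallow: /*?sort=
    Disallow: /*?filter=
    Disallow: /*&sort=
    # Internal site-search results
    Disallow: /search

    Sitemap: https://www.example.com/sitemap.xml

Blocking prevents waste; it does not, by itself, make Google crawl your important pages more often.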

Common Crawl Budget Mistakes

  • Using noindex to "save" budget: Google must crawl the page to see the noindex tag — it doesn't prevent crawling
  • Faceted navigation without controls: Filter/sort parameters create infinite URL spaces that overwhelm crawlers
  • Stale sitemaps with non-indexable URLs: Including redirected, blocked, or noindex URLs degrades sitemap trust (see the audit sketch after this list)
  • Relying on crawl-delay: Googlebot does not process the non-standard crawl-delay directive in robots.txt
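
To catch the stale-sitemap mistake, a minimal audit sketch in Python using the requests library. The sitemap URL is a placeholder, and this only checks status codes and redirects; a fuller audit would also verify noindex tags and robots.txt rules:

    import xml.etree.ElementTree as ET
    import requests

    SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def audit_sitemap(sitemap_url: str) -> None:
        """Flag sitemap URLs that redirect or error, since both degrade sitemap trust."""
        root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
        for loc in root.findall(".//sm:loc", NS):
            url = loc.text.strip()
            resp = requests.head(url, allow_redirects=False, timeout=10)
            if resp.status_code != 200:
                print(resp.status_code, url)  # e.g. 301: swap in the final URL

    audit_sitemap(SITEMAP_URL)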

Measuring Crawl Budget Health

Track these KPIs using Google Search Console's Crawl Stats report and server logs:

  • Crawl requests/day by template — ensure high-value pages get crawled frequently
  • "Discovered – not indexed" trend — reductions signal improved capacity
  • Average response time — lower TTFB directly increases crawl capacity
  • Crawl waste ratio — percentage of requests spent on redirects, 4xx, and soft-404s
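
A minimal sketch for the crawl waste ratio, assuming access logs in the common combined format and identifying Googlebot by user-agent string only (a production analysis should verify Googlebot via reverse DNS and classify soft-404s separately). The log path and the regular expression are assumptions about your setup:

    import re

    LOG_PATH = "/var/log/nginx/access.log"  # assumed location and combined log format
    # Pull the request path and status code out of a combined-format line
    LINE_RE = re.compile(r'"[A-Z]+ (?P<url>\S+) HTTP/[^"]*" (?P<status>\d{3})')

    total = waste = 0
    with open(LOG_PATH) as fh:
        for line in fh:
            if "Googlebot" not in line:
                continue
            match = LINE_RE.search(line)
            if not match:
                continue
            total += 1
            status = int(match.group("status"))
            # Redirects and 4xx responses are crawl requests with no indexing value;
            # 304 (Not Modified) is efficient revalidation, not waste.
            if 300 <= status < 500 and status != 304:
                waste += 1

    if total:
        print(f"Googlebot requests: {total}  waste ratio: {waste / total:.1%}")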

What This Means for You

Clickcentric's technical SEO checklist includes crawl budget diagnostics — identifying redirect chains, thin pages, and sitemap issues that waste crawl resources. Combined with our schema markup feature, every page is structured for efficient crawling and rich indexing. Start free.

Frequently Asked Questions

Does crawl budget matter for small sites?
Generally no. Google's own guidance states that crawl budget is an advanced concern primarily for sites with 10,000+ pages that change daily, or 1M+ pages that change weekly. If your site has fewer than 1,000 pages, crawl budget is rarely a limiting factor.

Does blocking URLs in robots.txt save crawl budget?
Yes — blocked URLs are not crawled, which frees up capacity. However, Google notes that this freed capacity isn't automatically spent on other pages unless Google is already hitting your server's serving limit. It prevents waste rather than "reallocating" budget.

Does noindex save crawl budget?
No. Google must crawl a page to see the noindex directive, so the page still consumes crawl resources. Use robots.txt to block pages you never want crawled. Use noindex only when a page must be accessible to users but should not appear in search results.

How do I see how Googlebot is crawling my site?
Use Google Search Console's Crawl Stats report to see total crawl requests, response times, and host availability. Combine this with server log analysis to get ground truth on which URLs are being fetched most frequently and which templates are consuming the most crawl resources.

Do redirect chains waste crawl budget?
Yes. Google counts each hop in a redirect chain as a separate crawl request. A chain like A→B→C uses three requests instead of one. Collapsing chains into single-hop redirects (A→C) is one of the highest-impact crawl budget optimizations you can make.
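
A quick way to find multi-hop chains like the one described in that answer, sketched with the requests library; the starting URL is a placeholder:

    import requests

    def redirect_chain(url: str, max_hops: int = 10) -> list[str]:
        """Follow redirects one hop at a time and return every URL visited."""
        chain = [url]
        for _ in range(max_hops):
            resp = requests.head(url, allow_redirects=False, timeout=10)
            location = resp.headers.get("Location")
            if resp.status_code not in (301, 302, 307, 308) or not location:
                break
            url = requests.compat.urljoin(url, location)
            chain.append(url)
        return chain

    chain = redirect_chain("https://www.example.com/old-page")  # placeholder URL
    if len(chain) > 2:
        print(" -> ".join(chain), f"(collapse to a single hop: {chain[0]} -> {chain[-1]})")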

Ready to Scale Your SEO?

Generate optimized content and publish to WordPress in minutes. 3-day free trial — no credit card required.

Start 3-Day Free Trial