One of the first things we check in every audit is how many pages a business has indexed in Google. The number is almost always lower than the business owner expects — and sometimes dramatically so.
We recently audited a Vancouver HVAC company that had 27 pages indexed. Their top competitor had over 40. That gap is partly a content volume problem, which is fixable. But when we dug further, we found that the HVAC company also had pages that should have been indexed but weren’t — service pages, a contact page, neighbourhood-specific pages. Pages that existed on the site, but that Google had never processed.
That’s an indexation problem. And it’s the most foundational issue we find in local service business audits, because a page Google can’t find cannot rank. Content quality doesn’t matter. Schema doesn’t matter. Nothing else matters until Google can access and process the page.
What indexation actually means
When Google indexes a page, it means a Googlebot crawler visited the page, read its content, and added it to Google’s search index — the database of pages Google can serve as results. An indexed page is eligible to appear in search results. An unindexed page is invisible to search, no matter what it says or how well it’s optimised.
Indexation is not automatic or guaranteed. Crawling and indexing are also separate steps: Google can crawl a page and still decline to index it. Google decides which pages to crawl and index based on a range of signals: the quality of your site’s internal links, the clarity of your sitemap, how quickly and reliably your server responds to crawl requests, and whether Google trusts your site enough to invest crawl budget in its pages.
The most common causes we find in service business audits
Missing or broken sitemap. A sitemap is an XML file that lists every page on your site and tells Google when each was last updated. Without one, Google has to discover your pages by following links — a slower, less reliable process. Many service business sites either have no sitemap or have one that’s outdated, listing pages that no longer exist or missing pages that do.
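If you’ve never looked inside one, here’s a minimal sketch of what a sitemap file looks like (the URLs and dates below are placeholders; most CMS platforms and SEO plugins will generate the real file for you):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want Google to find -->
  <url>
    <loc>https://yourdomain.com/services/furnace-repair/</loc>
    <lastmod>2024-06-01</lastmod>
  </url>
  <url>
    <loc>https://yourdomain.com/contact/</loc>
    <lastmod>2024-06-01</lastmod>
  </url>
</urlset>
```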
Accidental noindex tags. The noindex meta tag instructs Google not to index a specific page. It’s a legitimate tool — you might use it on a thank-you page or a duplicate version of a page. But we regularly find noindex tags on pages that should absolutely be indexed: service pages, location pages, the homepage itself. This often happens when a developer sets noindex during a site build and forgets to remove it before launch.
Blocked by robots.txt. The robots.txt file tells search engine crawlers which parts of your site they’re allowed to access. A misconfigured robots.txt can block Googlebot from crawling entire sections of your site. Like noindex tags, robots.txt blocks are sometimes set intentionally during development and never cleaned up.
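For reference, these two snippets sketch the difference (yourdomain.com is a placeholder): the first is the kind of leftover development rule that hides an entire site, the second is a minimal healthy file with nothing blocked and the sitemap declared.

```text
# What you don't want: a leftover development rule that blocks the whole site
User-agent: *
Disallow: /

# A minimal healthy robots.txt: nothing blocked, sitemap declared
User-agent: *
Disallow:
Sitemap: https://yourdomain.com/sitemap.xml
```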
Orphan pages. An orphan page is a page with no internal links pointing to it. Google discovers pages by following links. If a page exists but nothing links to it — not your homepage, not your nav, not any other page — Google is unlikely to find it. Service businesses often create neighbourhood pages or specific service pages and then fail to link to them from the main navigation or from related pages.
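If someone on your team is comfortable running a short script, the rough sketch below (Python, standard library only) compares the URLs in your sitemap against the pages reachable by following internal links from the homepage; anything in the sitemap that the crawl never reaches is a likely orphan. Treat it as an approximation rather than a production crawler, and swap the placeholder domain for your own.

```python
# Rough sketch (not a production crawler): follow internal links from the
# homepage and flag sitemap URLs that nothing links to. Results are
# approximate -- trailing slashes, redirects, and JavaScript-rendered menus
# can all produce false positives. "yourdomain.com" is a placeholder.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

SITE = "https://yourdomain.com"
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def fetch(url):
    req = urllib.request.Request(url, headers={"User-Agent": "indexation-check"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="ignore")

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags on a single page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

# Every URL the sitemap claims exists.
sitemap_urls = {loc.text.strip()
                for loc in ET.fromstring(fetch(SITE + "/sitemap.xml")).iter(NS + "loc")}

# Everything reachable by following internal links from the homepage.
seen, queue = set(), [SITE + "/"]
while queue and len(seen) < 500:          # safety cap for the sketch
    url = queue.pop()
    if url in seen or not url.startswith(SITE):
        continue
    seen.add(url)
    parser = LinkCollector()
    try:
        parser.feed(fetch(url))
    except Exception:
        continue                          # skip pages that fail to load
    for href in parser.hrefs:
        queue.append(urllib.parse.urljoin(url, href).split("#")[0])

# Sitemap URLs that the crawl never reached are likely orphans.
for url in sorted(sitemap_urls - seen):
    print("possible orphan:", url)
```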
Slow or broken server responses. If your server responds slowly or returns errors (5xx status codes) when Googlebot visits, Google will reduce how often it crawls your site. Persistent server issues can cause pages to be dropped from the index entirely.
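A quick way to spot-check this yourself is to fetch a few key pages and look at the status codes they return. The sketch below uses Python’s standard library, and the URLs are placeholders for your own pages.

```python
# Quick status-code spot check -- repeated 5xx responses are the kind of
# thing that makes Google back off crawling. URLs below are placeholders.
import urllib.request
import urllib.error

pages = [
    "https://yourdomain.com/",
    "https://yourdomain.com/services/",
    "https://yourdomain.com/contact/",
]

for url in pages:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(resp.status, url)
    except urllib.error.HTTPError as e:
        print(e.code, url)                # 4xx/5xx responses land here
    except urllib.error.URLError as e:
        print("unreachable:", url, e.reason)
```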
How to check your indexation in 10 minutes
Step 1: Google Search Console — Coverage report
If you have Google Search Console set up (if you don’t, set it up today — it’s free), go to the Coverage or Pages report. This shows you exactly which pages are indexed, which were excluded and why, and which have errors. The “Excluded” category is where problems hide — look specifically for pages marked as “Excluded by noindex tag,” “Crawled — currently not indexed,” or “Discovered — currently not indexed.”
Step 2: Site search operator
In Google, search: site:yourdomain.com
This returns an approximate count of your indexed pages. Compare this number to how many pages you know your site has. If you have 30 pages and only 12 are appearing, you have an indexation gap.
Step 3: Check your robots.txt
Visit yourdomain.com/robots.txt in your browser. You should see a file. Look for any Disallow rules that block important sections of your site. A line like Disallow: /services/ would prevent Googlebot from crawling your services pages entirely.
Step 4: Check for noindex tags
On any important page, right-click and view the page source. Search (Ctrl+F or Cmd+F) for noindex. If you find <meta name="robots" content="noindex"> on a page you want indexed, that’s your problem.
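If you prefer to script the check, the sketch below also looks at the X-Robots-Tag HTTP header, which can carry a noindex directive that never appears in the page source; the URL is a placeholder for a page you want indexed.

```python
# Checks one page for noindex -- both the robots meta tag in the HTML and
# the X-Robots-Tag response header (which never shows up in view-source).
# The URL is a placeholder; swap in a page you care about.
import re
import urllib.request

url = "https://yourdomain.com/services/furnace-repair/"

with urllib.request.urlopen(url, timeout=10) as resp:
    header = resp.headers.get("X-Robots-Tag", "")
    html = resp.read().decode("utf-8", errors="ignore")

meta_tags = re.findall(r'''<meta[^>]+name=["']robots["'][^>]*>''', html, re.I)
meta_noindex = any("noindex" in tag.lower() for tag in meta_tags)

print("X-Robots-Tag header:", header or "(none)")
print("robots meta tag says noindex:", meta_noindex)
```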
Step 5: Check your sitemap
Visit yourdomain.com/sitemap.xml. You should see an XML list of your pages. If nothing loads there, check the Sitemaps report in Google Search Console; your sitemap may live at a different path, or you may not have one at all. If it exists, check whether it lists all your important pages and whether the URLs match your actual page URLs exactly.
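If reading raw XML isn’t your thing, a short script can list every URL the sitemap declares and the total count, which you can then compare against your site: result from Step 2 and the pages you know exist. The sitemap path below is a placeholder, and some sites use a sitemap index file, in which case the entries point to child sitemaps rather than pages.

```python
# Lists every URL your sitemap declares, so you can compare the count with
# the site: operator result and with the pages you know your site has.
# "yourdomain.com" is a placeholder. If your sitemap is an index file, the
# <loc> entries will be child sitemaps rather than pages.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP = "https://yourdomain.com/sitemap.xml"
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

with urllib.request.urlopen(SITEMAP, timeout=10) as resp:
    root = ET.fromstring(resp.read())

urls = [loc.text.strip() for loc in root.iter(NS + "loc")]
for u in urls:
    print(u)
print(len(urls), "URLs declared in the sitemap")
```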
What good indexation looks like
A well-indexed local service business site has every revenue-driving page discoverable and processed by Google. That means:
- All service pages indexed (each service you offer deserves its own page)
- All location/neighbourhood pages indexed
- The homepage indexed with the correct canonical URL
- A clean sitemap submitted to Google Search Console
- No accidental noindex or robots.txt blocks on important pages
- Internal links connecting pages to each other so Google can navigate the site structure
For most local service businesses, getting indexation right is a one-time fix followed by ongoing vigilance as new pages are added. It doesn’t require ongoing budget. But without it, every other SEO effort — content, schema, link building — is wasted on pages Google can’t find.
Indexation issues are one of the six gaps we find consistently in Greater Vancouver service business audits. The fix is usually faster than business owners expect. The audit is free.