
Five Checks No Testing Tool Runs on Every Page of Your App


Your tests validate behaviour. They do not validate completeness. A page can pass every test in your suite and still have no title, an exposed API key in a data attribute, and a staging URL in the footer. These are not edge cases. They are the baseline of page correctness, and no testing tool checks them across your entire application.

Page-level quality is the set of deterministic, per-page assertions that verify a page is complete, correct, and safe — independent of whether its interactive features function as designed.

Automated testing tools are built around user flows: “click this, fill that, assert the result.” Lighthouse scores performance and accessibility at the page level, but it inspects DOM attributes and render metrics, not content. Neither asks the simpler question: is this page finished?

Check 1: page completeness

A route that renders with <title>React App</title> or an empty <meta name="description"> is an incomplete deployment, not a working page.

Page titles are not an SEO concern. They are a completeness concern. A page without a meaningful title is a page that was generated by a framework scaffold and never finished. The WebAIM Million 2026 report found that while 93.2% of the top million home pages have a valid HTML5 doctype, title quality remains inconsistent across secondary pages — the pages deeper in the application that receive less manual attention.

The check is deterministic: does the <title> element contain a non-empty string that is not a framework default (“React App,” “Vite App,” “Next.js,” “Untitled”) and is not duplicated across other pages? Does <meta name="description"> contain a non-empty, non-duplicate value?
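The title check can be sketched in a few lines of Python. This is an illustrative implementation of the rules above, not Glia Quest’s actual code; the default-title list and the duplicate test follow the description directly.

```python
import re

# Framework-default titles that indicate an unfinished page (from the check above).
FRAMEWORK_DEFAULTS = {"react app", "vite app", "next.js", "untitled", ""}

def title_of(html: str) -> str:
    """Extract the <title> text from raw HTML, or return an empty string."""
    m = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else ""

def check_titles(pages: dict[str, str]) -> list[str]:
    """Return URLs whose <title> is missing, a framework default,
    or duplicated across multiple pages."""
    titles = {url: title_of(html) for url, html in pages.items()}
    failures = [url for url, t in titles.items()
                if t.lower() in FRAMEWORK_DEFAULTS]
    # Flag duplicates: the same non-default title shared by more than one page.
    seen: dict[str, list[str]] = {}
    for url, t in titles.items():
        seen.setdefault(t.lower(), []).append(url)
    for t, urls in seen.items():
        if t not in FRAMEWORK_DEFAULTS and len(urls) > 1:
            failures.extend(u for u in urls if u not in failures)
    return failures
```

Because every rule is a string comparison, the result is a stable pass/fail per page with no scoring heuristics involved.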

During an audit of a production SaaS application, we found 7 pages with missing or generic titles. All 7 were functional pages that passed every test in the suite. The titles had simply never been set because no developer had been assigned that task after the route was scaffolded.

Check 2: sensitive data exposure

JWT tokens rendered in text nodes, API keys in data-* attributes, AWS access keys in inline scripts. The DOM is a public surface. Anything in it is readable by anyone.

Secret exposure in rendered HTML is a well-documented attack vector. The rendered DOM is not a private space — browser DevTools, page source, and automated scrapers can read everything in it. Published HackerOne disclosures regularly surface JWT tokens, Stripe publishable keys, and internal API endpoints embedded in page source.

The detection method uses pattern-matching rules from projects like Secretlint: regex patterns for common credential formats (JWT structure, AWS key prefixes, API key patterns) combined with context filtering to exclude false positives. React component keys, CSS-in-JS hashes, and data table IDs match some credential patterns but can be filtered by their surrounding DOM context.

In a production audit, we found 3 instances of credential-adjacent strings in the rendered DOM. Two were development API keys that had been hardcoded during prototyping and never removed. One was a JWT fragment in a data attribute used for client-side session management that should have been in an HTTP-only cookie.

Check 3: non-production domain references

Localhost URLs, staging domain names, and development environment strings in production content are deployment artefacts that should not ship.

This is the simplest check in the set and the one most likely to find something. A production page that contains localhost:3000, staging.example.com, or dev.internal.example.com in any href, src, action, or text node has a configuration leak. These references are harmless in isolation but indicate that environment-specific values are not being properly substituted at build time.

The detection method is a customer-provided blocklist: a set of domain strings and URL patterns that should never appear in production HTML. The check scans every attribute and text node on the page against the blocklist.
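As a sketch, the blocklist scan reduces to substring search over the raw page source; the blocklist entries here are the example domains from this article, standing in for a customer-supplied list.

```python
import re

# Example blocklist; in practice this is supplied per customer.
BLOCKLIST = ["localhost:3000", "staging.example.com", "dev.internal.example.com"]

def find_domain_leaks(html: str) -> list[tuple[str, str]]:
    """Return (blocked_string, surrounding_snippet) for every occurrence of a
    blocklisted domain anywhere in the page source — attributes, text nodes,
    and inline scripts alike."""
    leaks = []
    for needle in BLOCKLIST:
        for m in re.finditer(re.escape(needle), html):
            start = max(0, m.start() - 20)
            leaks.append((needle, html[start:m.end() + 20]))
    return leaks
```

Scanning the serialised HTML rather than parsed nodes is deliberate: a leak in an inline script or a comment is just as much a configuration leak as one in an href.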

We found 1 instance in a production audit: a profile settings page that displayed a URL hint showing https://staging.example.com/profile/username instead of the production domain. The hint text had been hardcoded during development and never parameterised.

Check 4: navigation component integrity

A page that renders with no sidebar, no header navigation, and no breadcrumb is reachable but functionally broken for onward discovery.

Navigation components are not decorative. They are the mechanism by which users move between features. A page that renders without them is a dead end — the user can arrive but cannot leave without using the browser’s back button. The Web Almanac 2025 reports that the <main> landmark element appears on only 47% of pages, and skip links are present on approximately 25% of pages. Navigation component integrity extends this concern beyond accessibility landmarks to the functional navigation elements that enable user traversal.

The check verifies that each page contains the expected navigation components for its page type. For most applications, this means a header with navigation links, a sidebar (if applicable to the page type), and breadcrumbs (if the application uses them). A page that renders without these components is flagged as having broken navigation integrity.
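A sketch of the integrity check follows. The selector names and page-type table are hypothetical; in practice the expected components per page type come from the application owner’s configuration.

```python
# Expected navigation components per page type (assumed example configuration).
EXPECTED = {
    "default": ["header nav", "aside.sidebar"],
    "detail":  ["header nav", "aside.sidebar", "nav.breadcrumb"],
}

def check_navigation(page_type: str, present_selectors: set[str]) -> list[str]:
    """Return the expected navigation selectors missing from a rendered page.
    An empty list means the page passes navigation integrity."""
    return [sel for sel in EXPECTED.get(page_type, EXPECTED["default"])
            if sel not in present_selectors]
```

The check is a set difference, which is why it stays deterministic: either the rendered DOM contains the configured components or it does not.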

In a production audit, we found 2 pages where the navigation component had failed to render entirely. Both were deep-linked pages (reachable by URL but not through standard navigation) that had been built with a different layout component than the rest of the application. The pages loaded, the content rendered, and every feature on the page worked. But the user could not navigate away from them without typing a new URL.

Check 5: cross-locale script mismatch

CJK characters on an English page, Latin text on a Japanese page. The wrong script on the wrong locale is immediately visible to a human reader and trivially detectable by a machine.

Multilingual applications are particularly vulnerable to rendering bugs where content from one locale appears on a page served under a different locale prefix. The most common cause is a translation key fallback that silently serves the wrong language version, or a content database query that returns results without filtering by locale.

The detection method is Unicode range analysis per locale URL segment. If a page is served at /ja/features/, the check verifies that the majority of text content falls within CJK Unicode ranges. If the page is served at /en/features/, the check verifies that the majority of text content falls within Latin script ranges. Mixed-script text that exceeds a configurable threshold is flagged.
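The Unicode range analysis can be sketched as a character-counting pass. The ranges below are simplified (CJK Unified Ideographs plus Hiragana and Katakana for “CJK”, the Basic Latin and Latin Extended blocks for “Latin”); a production check would cover more blocks and locales.

```python
def script_ratio(text: str) -> dict[str, float]:
    """Fraction of alphabetic characters falling in CJK vs Latin ranges."""
    cjk = latin = letters = 0
    for ch in text:
        if not ch.isalpha():
            continue
        letters += 1
        cp = ord(ch)
        if 0x4E00 <= cp <= 0x9FFF or 0x3040 <= cp <= 0x30FF:
            cjk += 1   # CJK Unified Ideographs, Hiragana, Katakana
        elif cp < 0x0250:
            latin += 1  # Basic Latin through Latin Extended
    if not letters:
        return {"cjk": 0.0, "latin": 0.0}
    return {"cjk": cjk / letters, "latin": latin / letters}

def locale_mismatch(url_locale: str, text: str, threshold: float = 0.5) -> bool:
    """Flag a page whose majority script disagrees with its URL locale segment."""
    ratios = script_ratio(text)
    expected = "cjk" if url_locale in ("ja", "zh") else "latin"
    return ratios[expected] < threshold
```

The threshold is configurable because legitimate mixed-script content exists: an English page quoting a Japanese product name should not be flagged, but a page that is majority CJK under an /en/ prefix should.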

In a production audit, we found the single most critical bug in the entire scan: a Chinese-language job listing rendering on the English-language version of a recruitment site. The listing was fully functional — it displayed correctly, the apply button worked, and the page passed every automated test. But an English-speaking user landing on the page saw Chinese text with no indication that they were viewing content in the wrong language.

The compound metric: Quality Rating

When all five checks run on every reachable page, the ratio of passing pages to total reachable pages produces a letter grade — the Quality Rating.

Navigation Coverage answers “can users get there?” Quality Rating answers “is it correct when they arrive?” The two metrics are orthogonal. A page can be reachable but broken (high coverage, low quality). A page can be correct but undiscoverable (low coverage, high quality). The combination of both metrics gives a complete picture of application health.

In the production audit referenced throughout this article, the application scored 88% Navigation Coverage — most pages were reachable through normal navigation. But the Quality Rating was 71% (C+) before remediation. The 17-point gap between the two metrics is the difference between “users can get there” and “the page is correct when they arrive.”

The five checks are deterministic. They require no AI inference, no visual comparison, no machine learning model. Each check is a boolean pass/fail per page, and the aggregate produces a letter grade. The methodology is transparent: the checks are documented, the thresholds are published, and the results are reproducible.
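The aggregation step can be sketched directly from the published thresholds (listed in the FAQ: A at 95–100%, B at 85–94%, C at 70–84%, D at 50–69%, F below 50%). This is an illustrative mapping, not Glia Quest’s actual implementation.

```python
def quality_rating(passing: int, reachable: int) -> tuple[float, str]:
    """Map the ratio of pages passing all five checks to a letter grade,
    using the published thresholds."""
    pct = 100.0 * passing / reachable
    for floor, grade in ((95, "A"), (85, "B"), (70, "C"), (50, "D")):
        if pct >= floor:
            return pct, grade
    return pct, "F"
```

Because each per-page check is boolean, the grade is fully reproducible: two scans of the same deployment produce the same rating.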

These five checks run on every Glia Quest scan at no additional cost. The Quality Rating is the result. Run a scan at glia.quest.

Frequently asked questions

Why are these checks not already part of Lighthouse? Lighthouse focuses on performance, accessibility, best practices, and SEO — categories defined by web standards. Page completeness, credential exposure, domain leaks, navigation integrity, and locale mismatches are application-level concerns that fall outside Lighthouse’s scope. They require knowledge of the application’s intended behaviour (which domains are non-production, which navigation components are expected, which locales are served) that Lighthouse does not have.

How do you avoid false positives on credential detection? Context filtering. React component keys (key="item-abc123"), CSS-in-JS class hashes, and UUID-formatted data table IDs match some credential regex patterns. The detection rules examine the surrounding DOM context — attribute name, parent element type, and adjacent text — to distinguish credentials from application-generated identifiers. The false positive rate after context filtering is below 2% in our testing.

Can I run these checks with existing tools? Partially. You could combine Secretlint for credential scanning, a custom script for domain blocklisting, and manual inspection for the remaining checks. The challenge is running all five checks across every page of your application on every deployment. Individual tools solve individual checks; the value is in the compound metric across the full surface.

What letter grades correspond to what percentages? A: 95–100% of pages pass all checks. B: 85–94%. C: 70–84%. D: 50–69%. F: below 50%. The grade reflects the proportion of reachable pages that are complete, correct, and safe across all five check categories.

Does the Quality Rating replace Navigation Coverage? No. The two metrics measure different things. Navigation Coverage measures whether users can reach your pages. Quality Rating measures whether those pages are correct when users arrive. An application needs both: high coverage with low quality means users find broken pages, and high quality with low coverage means correct pages that nobody can discover.

Run a coverage scan on your app.

Point Glia Quest at a staging or production URL. The first run is free and the report shows up in two minutes.