
Automated accessibility scanners catch structural issues. Screen reader testing catches the real user experience failures that automated tools cannot see: the modal that announces incorrectly, the live region that fires when it should not, the button that reads its icon description instead of its label.
If you need manual screen reader testing, this article explains which screen reader and browser combinations matter, what a thorough audit covers, and what separates professional testing from someone who just has a screen reader installed.
Which screen reader and browser combinations matter
The WebAIM Screen Reader User Survey and GOV.UK usage data consistently show a core set of screen reader and browser combinations used by disabled people on the web. A professional manual audit tests across:
Windows desktop:
- NVDA with Chrome (most widely used combination)
- NVDA with Firefox
- JAWS with Chrome or Edge (dominant in enterprise and corporate environments)
macOS:
- VoiceOver with Safari (the primary macOS combination)
- VoiceOver with Chrome (secondary)
iOS:
- VoiceOver with Safari on iPhone and iPad
Android:
- TalkBack with Chrome
Not every engagement requires all of these. For most UK web products, NVDA with Chrome, JAWS with Chrome or Edge, and VoiceOver on iOS cover the majority of real-world usage. Mobile testing is increasingly important as mobile usage has overtaken desktop for many consumer products.
What manual screen reader testing covers
Navigation and structure: Can the user navigate the page using headings, landmarks, and skip links? Are landmark regions labelled where needed? Is the heading hierarchy logical?
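As an illustrative sketch (the element names and IDs here are hypothetical, not taken from any particular product), a page that passes these checks typically combines a skip link, labelled landmarks, and a logical heading hierarchy:

```html
<a href="#main">Skip to main content</a>
<header>
  <!-- aria-label distinguishes this nav from any footer nav -->
  <nav aria-label="Primary">
    <a href="/products">Products</a>
  </nav>
</header>
<main id="main">
  <h1>Search results</h1>
  <!-- Heading levels descend without skipping: h1 then h2 -->
  <section aria-labelledby="filters-heading">
    <h2 id="filters-heading">Filters</h2>
  </section>
</main>
```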
Interactive elements: Are all buttons, links, and form controls reachable by keyboard and announced with the correct name, role, and state? Does Tab order match visual order?
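The icon-only button mentioned earlier is a common failure here. A hedged before-and-after sketch:

```html
<!-- Fails: no accessible name, so the screen reader may announce
     nothing useful or fall back to reading the SVG contents -->
<button>
  <svg viewBox="0 0 16 16"><path d="M2 2l12 12M14 2L2 14"/></svg>
</button>

<!-- Passes: aria-label supplies the name; the icon is hidden
     from assistive technology with aria-hidden -->
<button aria-label="Close dialog">
  <svg aria-hidden="true" focusable="false" viewBox="0 0 16 16">
    <path d="M2 2l12 12M14 2L2 14"/>
  </svg>
</button>
```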
Forms and error handling: Are form labels associated with their inputs? Are error messages announced when they appear? Is the relationship between a field and its error text clear to a screen reader user?
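One way to make that field-to-error relationship explicit (a minimal sketch; the IDs and wording are illustrative):

```html
<label for="email">Email address</label>
<!-- aria-describedby ties the error text to the field;
     aria-invalid flags the current state to the screen reader -->
<input id="email" name="email" type="email"
       aria-describedby="email-error" aria-invalid="true">
<p id="email-error">
  Enter an email address in the correct format, like name@example.com
</p>
```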
Dynamic content: When content changes without a page reload (notifications, search results, filtered lists, modal dialogs), is the change announced to the user? Are aria-live regions used correctly?
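A polite live region is the usual mechanism for this. A minimal sketch, assuming a hypothetical search results page:

```html
<!-- The live region exists in the DOM from page load;
     "polite" waits for the screen reader to finish speaking -->
<div id="search-status" aria-live="polite"></div>

<script>
  // Updating the text content triggers an announcement
  // without moving the user's focus
  document.getElementById('search-status').textContent =
    '12 results found';
</script>
```

A common bug is injecting the live region into the DOM at the same moment as the message, which many screen readers will not announce.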
Images and media: Are meaningful images given accurate alternative text? Are decorative images hidden from screen readers? Do videos have captions and audio descriptions where required?
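The alt attribute handles both cases (filenames and text here are illustrative):

```html
<!-- Meaningful image: alt conveys the information, not the file -->
<img src="q3-revenue.png"
     alt="Bar chart: Q3 revenue up 12% year on year">

<!-- Decorative image: empty alt removes it from the
     screen reader's reading order entirely -->
<img src="divider.png" alt="">
```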
Custom components: Dropdowns, accordions, tabs, carousels, date pickers, and autocomplete components are all tested against the expected ARIA patterns for their role.
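As one example of what "tested against the expected ARIA pattern" means in practice, a tab interface is checked for roles, selection state, and focus handling roughly like this (a sketch of the standard tabs pattern; IDs and labels are hypothetical):

```html
<div role="tablist" aria-label="Account settings">
  <!-- Only the selected tab is in the Tab order; arrow keys
       (wired up in script) move between the others -->
  <button role="tab" id="tab-profile" aria-selected="true"
          aria-controls="panel-profile">Profile</button>
  <button role="tab" id="tab-security" aria-selected="false"
          aria-controls="panel-security" tabindex="-1">Security</button>
</div>

<div role="tabpanel" id="panel-profile" aria-labelledby="tab-profile">
  Profile settings
</div>
<div role="tabpanel" id="panel-security" aria-labelledby="tab-security" hidden>
  Security settings
</div>
```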
What distinguishes good screen reader testing from poor testing
Poor screen reader testing uses a screen reader to navigate and notes what sounds odd. Professional screen reader testing:
- Maps each finding to the specific WCAG success criterion it fails
- Notes the specific browser and screen reader combination where the failure occurs
- Distinguishes between widespread failures and browser-specific edge cases
- Identifies the root cause (missing ARIA attribute, incorrect role, broken focus management) not just the symptom
- Produces findings that a developer can reproduce and fix without needing to run the screen reader themselves
Get in touch to discuss screen reader testing for your product.