Engineering

screen-reader-testing

Test web applications with screen readers like VoiceOver, NVDA, and JAWS. Validate accessibility, debug assistive technology issues, and ensure compliance with screen reader support standards.

Introduction

The screen-reader-testing skill provides a structured framework for auditing and validating the accessibility of web interfaces for users who rely on assistive technology. It is aimed at developers, QA engineers, and accessibility specialists who need to ensure that web applications are fully navigable and comprehensible via screen readers. The skill covers the entire spectrum of screen reader interaction, from initial page load and semantic structure to complex dynamic content and form handling.

  • Full support for major screen readers including VoiceOver (macOS/iOS), NVDA (Windows), JAWS (Windows), TalkBack (Android), and Narrator (Windows).
  • Detailed testing methodology including minimum coverage requirements versus comprehensive audit paths for production-level accessibility.
  • Expert guidance on screen reader operational modes such as Browse/Virtual mode, Focus/Forms mode, and Application mode for handling specialized ARIA widgets.
  • Comprehensive testing checklists covering page structure, landmark navigation, heading hierarchy, link purpose, form label association, and dynamic alert communication.
  • Practical, code-based solutions for common accessibility pitfalls like non-descriptive buttons, invisible dynamic updates, and missing ARIA roles.
  • Step-by-step setup and keyboard command references for efficient navigation, rotor control, and element discovery.
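As a concrete illustration of the code-based checks mentioned above, a non-descriptive button (for example, an icon-only button with no label) can be caught with a static scan of the markup. The sketch below uses only Python's standard-library html.parser; the class and function names are illustrative, not part of the skill itself:

```python
from html.parser import HTMLParser

class ButtonNameAuditor(HTMLParser):
    """Flags <button> elements that expose no accessible name:
    no text content, no aria-label, and no aria-labelledby."""

    def __init__(self):
        super().__init__()
        self._in_button = False
        self._has_name = False
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "button":
            self._in_button = True
            # aria-label / aria-labelledby provide a name directly.
            self._has_name = bool(
                attrs.get("aria-label") or attrs.get("aria-labelledby")
            )
        elif self._in_button and tag == "img" and attrs.get("alt"):
            # An <img alt="..."> inside the button contributes its alt text.
            self._has_name = True

    def handle_data(self, data):
        if self._in_button and data.strip():
            self._has_name = True  # visible text content names the button

    def handle_endtag(self, tag):
        if tag == "button" and self._in_button:
            if not self._has_name:
                self.violations += 1
            self._in_button = False

def count_unnamed_buttons(markup: str) -> int:
    """Return how many buttons in the markup lack an accessible name."""
    auditor = ButtonNameAuditor()
    auditor.feed(markup)
    return auditor.violations
```

This is a rough approximation of the browser's accessible-name computation (it ignores `title` attributes and CSS-hidden text, for instance), but it is enough to flag the classic icon-only button during development.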

When using this skill, developers should identify the primary screen reader and browser combinations relevant to their user base, such as NVDA with Firefox or VoiceOver with Safari. The skill aids debugging by providing specific techniques to verify that dynamic content changes are announced via ARIA live regions and that input errors are programmatically associated with their fields. It is intended for use during development, to catch non-compliant UI patterns early, and during formal accessibility reviews, to verify that semantic HTML, ARIA labels, and focus management meet WCAG conformance requirements. Users should be prepared to toggle between interaction modes to simulate the experience of someone navigating the DOM exclusively through auditory cues.
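The error-association check described above can likewise be approximated statically: every field flagged `aria-invalid="true"` should reference an existing error message via `aria-describedby`. A minimal sketch, again using only Python's stdlib and hypothetical names:

```python
from html.parser import HTMLParser

class ErrorAssociationAuditor(HTMLParser):
    """Collects aria-invalid fields and the ids they reference,
    plus every id actually present in the document."""

    def __init__(self):
        super().__init__()
        self.ids = set()        # all id attributes seen anywhere
        self.described_by = {}  # invalid field -> list of referenced ids

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids.add(attrs["id"])
        if attrs.get("aria-invalid") == "true":
            refs = attrs.get("aria-describedby", "").split()
            self.described_by[attrs.get("name", tag)] = refs

def unassociated_error_fields(markup: str) -> list:
    """Return fields whose error message is not programmatically linked:
    either no aria-describedby at all, or a reference to a missing id."""
    auditor = ErrorAssociationAuditor()
    auditor.feed(markup)
    return [field for field, refs in auditor.described_by.items()
            if not refs or not all(r in auditor.ids for r in refs)]
```

Because the id set is checked only after the whole document is parsed, error messages may appear before or after the fields that reference them.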

Repository Stats

Stars: 34,512
Forks: 3,740
Open Issues: 4
Language: Python
Default Branch: main
Sync Status: Idle
Last Synced: Apr 29, 2026, 01:12 PM