feat(core): add accessibility API with cross-platform TUI speech #436
Conversation
- Add AccessibilityManager with node tracking and event handling
- Add accessibility properties to Renderable (role, label, value, hint)
- Add AccessibilityRole and AccessibilityLive types
- Add unit tests for AccessibilityManager
- Add speakForPlatform() for TTS on all platforms
  - Linux: spd-say with priority support
  - Windows: PowerShell SAPI (System.Speech)
  - macOS: say command
- Add accessibility demo with focus announcements
- Add accessibility documentation
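For reference, a minimal sketch of how these pieces could compose. The `role`/`label`/`value`/`hint`/`hidden`/`live` property names come from this PR, but the concrete role strings and the `announcement` helper are illustrative assumptions, not the PR's actual code:

```ts
// Shapes mirroring the accessibility props this PR adds to Renderable.
// The specific role/live string unions here are assumptions.
type AccessibilityRole = "button" | "checkbox" | "text" | "list" | "listitem"
type AccessibilityLive = "off" | "polite" | "assertive"

interface AccessibleProps {
  role?: AccessibilityRole
  label?: string
  value?: string
  hint?: string
  hidden?: boolean
  live?: AccessibilityLive
}

// A focus announcement could be composed from the props like this
// (hypothetical helper, not part of the PR):
function announcement(node: AccessibleProps): string {
  return [node.label, node.role, node.value, node.hint]
    .filter(Boolean)
    .join(", ")
}

console.log(announcement({ role: "button", label: "Save", hint: "Ctrl+S" }))
// -> "Save, button, Ctrl+S"
```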
I am not sure if adding built-in speech to the TUI would benefit blind people. Now that LLMs exist, they could just ask an LLM to describe the terminal output instead. If there were a terminal protocol for exposing semantic content as a tree, like ARIA does for the browser, it would make sense. But building the speech inside the framework will not be useful.
@remorses I think that would actually help, as that is what screen readers do. There is no terminal protocol, but most platforms have accessibility APIs that can be used, hence the added props on the Renderables to control semantics. I think we should get some feedback from actual visually impaired people on whether this would help as an intermediate step towards using the platform accessibility APIs.
Converted to draft in the meantime.
Hi. I'm visually impaired and am willing to test this out. The best way to get in contact with me is via direct message on Discord or email, as GitHub is not the most straightforward with screen readers sometimes. Discord is @perezl2047
I sent you a friend request with my
Such a solution would be an interesting compromise, but unfortunately it wouldn't be ideal. I'm speaking here from the perspective of a blind user. Sadly, I don't have a good idea myself of how to perfectly solve the problem of operating terminal applications with NVDA or other GUI screen readers. Speech generated by an external application is something we try to avoid, because the user can, and should, primarily rely on their screen reader, which they can tailor to their own needs.
Here are my thoughts from Discord: I think supporting UIA on Windows and the other native interfaces would be the way to go. I don't mind having the TUI process create and own a native window to be able to access these native APIs.

I could also see an additional accessibility server working: a separate process that exposes the native APIs as RPC, so multiple TUI processes could use the same accessibility server, and the server itself can normalize the native APIs across platforms. TUI processes can then run discovery to see if an accessibility server is available and connect if so. This could also allow controlling remote TUI processes via an SSH tunnel, for example.

I would build on Bun with FFI for this, though I wouldn't mind doing NAPI here to make it more widely available for other runtimes like Node/Deno. I wonder if there is already something that provides RPC for native APIs; I could imagine there is. Need to do some research.
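To make the discovery idea concrete, here is a hedged sketch of how a TUI process might probe for such a server over a Unix socket using Bun's socket API; the socket path, wire format, and `announce` op are invented for illustration:

```ts
// Hypothetical accessibility-server discovery. Everything here
// (socket path, JSON protocol) is an assumption sketching the idea above.
const SOCKET_PATH = "/tmp/tui-a11y.sock" // assumed well-known location

async function connectAccessibilityServer() {
  try {
    const socket = await Bun.connect({
      unix: SOCKET_PATH,
      socket: {
        data(_socket, data) {
          // Server responses (capabilities, platform info, ...) arrive here.
          console.log("a11y server:", data.toString())
        },
      },
    })
    // Forward semantics over RPC; newline-delimited JSON is an assumption.
    socket.write(JSON.stringify({ op: "announce", text: "Save, button" }) + "\n")
    return socket
  } catch {
    return null // no server running; fall back to local TTS or a no-op
  }
}

const server = await connectAccessibilityServer()
if (!server) console.log("no accessibility server found")
```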
I think I should close this PR and we should start from scratch. |
Fixes #423
Summary
Adds `AccessibilityManager` with cross-platform text-to-speech for TUI accessibility.

Changes

- `AccessibilityManager.ts`: Add node tracking, event handling, cross-platform TTS
- `Renderable.ts`: Add accessibility properties (role, label, value, hint, hidden, live)
- `lib/index.ts`: Export `AccessibilityManager` and `getAccessibilityManager`
- `examples/accessibility-demo.ts`: Interactive demo with focus announcements
- `docs/accessibility.md`: API documentation with platform requirements
- `tests/accessibility.test.ts`: Unit tests for `AccessibilityManager`

Platform Support

- Linux: `spd-say` (speech-dispatcher), with priority support
- Windows: PowerShell SAPI (`System.Speech`)
- macOS: `say` command
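For context, a minimal sketch of the per-platform dispatch the list above describes, using `Bun.spawn`; the actual `speakForPlatform()` in this PR may differ in flags and escaping:

```ts
// Hedged sketch of per-platform TTS dispatch (not the PR's exact code).
function speakForPlatform(text: string) {
  switch (process.platform) {
    case "linux":
      // speech-dispatcher CLI; -P sets the message priority
      return Bun.spawn(["spd-say", "-P", "message", text])
    case "darwin":
      return Bun.spawn(["say", text])
    case "win32":
      // System.Speech via PowerShell; escaping is simplified here
      return Bun.spawn([
        "powershell",
        "-NoProfile",
        "-Command",
        "Add-Type -AssemblyName System.Speech; " +
          `(New-Object System.Speech.Synthesis.SpeechSynthesizer).Speak(${JSON.stringify(text)})`,
      ])
  }
}

speakForPlatform("Save, button")
```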
Testing
- `bun test packages/core/src/tests/accessibility.test.ts`
- `bun run packages/core/src/examples/accessibility-demo.ts`

Notes
My first approach was based on @kommander's RFC in the issue: implementing native accessibility infrastructure (AT-SPI2 D-Bus for Linux, UI Automation COM for Windows, NSAccessibility for macOS) to directly integrate with screen readers like Orca/NVDA/VoiceOver.
However, this approach is challenging for TUI applications: without owning a native GUI window, registering with the platform accessibility APIs (creating an HWND for UIA, connecting to the AT-SPI2 registry, etc.) isn't straightforward.

The implemented solution uses direct TTS calls instead, which works immediately for TUI apps and provides accessibility announcements without needing native window ownership.
Tested on Linux (`spd-say`) and Windows (SAPI). macOS (`say` command) is untested but should work, as the implementation is straightforward.

Open to suggestions for improving the accessibility implementation or adding additional features!