
Email: Comment: WCAG 3.0 accessibility guidelines: Accessing a "virtual" meeting using a teleconferencing application such as Zoom - receipt of streamed text content #476

Open
jspellman opened this issue Mar 12, 2021 · 2 comments
Labels
a11y-tracker Group bringing to attention of a11y, or tracked by the a11y Group but not needing response. status: enhancement New feature or request for new guideline

Comments

jspellman commented Mar 12, 2021

Comment from Email:

From: Rod Macdonald (rjmacdonald@hawaiiantel.net)

Re: W3C Accessibility Guidelines (WCAG) 3.0 - W3C First Public Working Draft 21 January 2021

Issue: Accessing a "virtual" meeting using a teleconferencing application such as Zoom - receipt of streamed text content

Background: For purposes of this discussion, individuals who are Deaf-Blind can be grouped into four sub-groups:

(1) The Deaf-Blind individual retains sufficient residual hearing, with amplification and/or other enhancements, to access web content in the manner of a hearing person.

(2) The Deaf-Blind individual retains sufficient residual vision, with screen magnification and/or other enhancements, to access web content in the manner of a sighted person.

(3) The Deaf-Blind individual cannot access web content via speech or hearing, but can do so using braille.

(4) The Deaf-Blind individual cannot access web content using vision, hearing or braille, and thus cannot access web content at all. (There may be extremely rare cases when the use of unusual technology may circumvent this.)

This discussion refers exclusively to Deaf-Blind individuals in the third group - braille users.

Problem: Spoken language is typically delivered at about 200 words per minute. Adding supplementary information, such as identifying the speaker and adding punctuation, increases the volume of text being transmitted. The text is never enhanced for braille display.

The average adult braille reader reads at about 15 words per minute. This speed assumes error-free content somewhat shortened into "contracted" (or "Grade 2") literary braille, correctly formatted on paper.
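
To make the scale of this mismatch concrete, here is a rough back-of-the-envelope calculation using the figures above (the one-hour meeting length is an assumed example, not a figure from the comment):

```typescript
// Rough arithmetic only, using the rates quoted above.
const speechWpm = 200;      // typical speaking rate
const brailleWpm = 15;      // average adult braille reading rate
const meetingMinutes = 60;  // assumed example meeting length

const wordsSpoken = speechWpm * meetingMinutes;     // 12,000 words
const hoursToRead = wordsSpoken / brailleWpm / 60;  // ~13.3 hours
console.log(`${wordsSpoken} words spoken; roughly ${hoursToRead.toFixed(1)} hours to read in braille`);
// => "12000 words spoken; roughly 13.3 hours to read in braille"
```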

The speed at which streamed text is rendered makes it impossible for 99% of the braille-reading population to read it. The problem is compounded by the fact that the braille reader is typically using a braille display that can only hold 40 characters at a time; once the 40-character limit has been reached, the display jumps to a new line to continue with incoming text. No existing braille device I am aware of allows the user to control this streaming process. When reading a document, one can "pan" text forward one display at a time; but when the input is a continuous stream, the braille device will always follow the dynamic cursor - one cannot go back and read prior text while the input stream is active.
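
One way to picture the missing capability is a buffer that lets the reader, rather than the incoming stream, drive the display. The sketch below is purely hypothetical (per the comment above, no current braille device or driver exposes such an interface); the class and method names are invented for illustration:

```typescript
// Hypothetical reader-paced buffer: incoming transcript text is queued and
// released one fixed-width "page" at a time, only when the reader pans.
const CELLS = 40; // display width assumed from the comment above

class PacedBrailleBuffer {
  private queue = "";             // streamed text not yet shown
  private history: string[] = []; // pages already shown, for panning back

  // Called by the transcript stream; never moves the display on its own.
  append(text: string): void {
    this.queue += text;
  }

  // Called only when the reader pans forward.
  nextPage(): string {
    const page = this.queue.slice(0, CELLS);
    this.queue = this.queue.slice(CELLS);
    if (page) this.history.push(page);
    return page;
  }

  // Re-read earlier pages while the stream keeps filling the queue.
  previousPage(pagesBack = 1): string | undefined {
    return this.history[this.history.length - 1 - pagesBack];
  }
}
```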

For all practical purposes a braille reader can NOT access streamed text during a Zoom meeting. There is no known solution to this accessibility issue.

The only known workaround would be for the meeting to employ a designated CAN (Computer Assisted Notetaker) interpreter to provide real-time communication for the Deaf-Blind consumer. However, a CAN interpreter typically types at about 50 words per minute and must therefore condense the communication, especially when speaker identification is required. The physical separation of the meeting and the Deaf-Blind consumer creates additional logistical issues: is the interpreter placed at the meeting site or at the consumer's end? How can the consumer communicate with the interpreter if they are separated? Can the funding source pay for an interpreter located off-site? In any case, this workaround does not address the problem with the teleconferencing application itself.

(Additional note: YouTube transcripts usually do not identify the speaker.)

RealJoshue108 commented Mar 18, 2021

@jspellman This will be discussed in RQTF, but my own thought on how this may relate to the RTC Accessibility User Requirements document is that it (the need to control and manage various outputs) is already referred to as a broad user need under Routing and communication channel control.

This is where we have: User Need 5: A blind user of both screen reader and braille output devices simultaneously may need to manage audio and text output differently.

We have a requirement (REQ 5b: Allow controlled routing of alerts and other browser output to a braille device or other hardware.) that parallels what the OP is suggesting. However, there is a broader issue of semantics and user agent support to be aware of.

There is a known lack of ARIA support among existing braille devices. We have been discussing this in RQTF and will be flagging it to related groups. We have also discussed the possibility that future versions of ARIA may need specific semantics to support the user need outlined here by the OP (and in the RAUR), as well as other aspects of ARIA such as Live Regions, which could be mapped to particular zones of the braille output so the user can distinguish when alerts have been fired.

So part of this is a user agent issue, in that the user agent should be able to monitor/manage the rate of output in a suitable way - in tandem with the necessary semantics.
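
As a very rough illustration of the kind of mapping being discussed: aria-live and role="alert" are existing ARIA, while the data-braille-zone attribute below is purely hypothetical, a placeholder for the braille-routing semantics that do not yet exist:

```typescript
// Sketch only: aria-live / role="alert" are real ARIA; data-braille-zone is an
// invented placeholder for semantics ARIA does not currently define.
const captions = document.createElement("div");
captions.setAttribute("aria-live", "polite");        // transcript text, low urgency
captions.setAttribute("data-braille-zone", "main");  // hypothetical: main reading zone

const alerts = document.createElement("div");
alerts.setAttribute("role", "alert");                // implicitly aria-live="assertive"
alerts.setAttribute("data-braille-zone", "status");  // hypothetical: reserved status zone

document.body.append(captions, alerts);

// A user agent aware of such zones could pace the "main" zone to the reader's
// panning (as in the buffer sketch above) while surfacing "status" updates
// immediately in their own region of the display.
captions.textContent = "New caption text arrives here.";
alerts.textContent = "Recording started";
```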

@jasonjgw

Upon checking the current RTC Accessibility User Requirements draft, I notice:

  1. I can't find a clear statement of the need that users who are deaf-blind have for text-only communication. It's implicit in some places, though.
  2. I can't find any mention of the issue of reading rate. The user need here would be for summaries of the dialogue rather than full word-for-word transcripts. Note that a similar issue arises in relation to captions (the choice between captions as full transcripts and captions as simplified or summarized forms of the dialogue that can be easier to understand or which can be read more slowly).

Of course, I may have overlooked something important in reviewing the draft, as it's been a little while since I've read it in full.
