The results of this questionnaire are available to anybody.
This questionnaire was open from 2020-01-21 to 2020-02-05.
15 answers have been received.
The purpose of this questionnaire is to get input from AGWG and other asynchronous participants of Silver on the structure of Silver. We want to know where to focus our efforts in the last weeks before we publish the First Public Working Draft (FPWD).
For now, the content is for illustrative purposes; it is being rapidly updated and will continue to be updated. Feedback on the general direction of the content is welcome.
Please keep in mind:
If you say "no" to any questions, please give ideas of how we can improve the section.
Responder | Survey Introduction |
---|---|
Martin Jameson | Result of a dynamic page worries me if you're looking for multiple things as do the separate URLs, finding information might be hard. It will be a delicate balance to get this right |
Wayne Dick | Clear enough |
Stefan Schnabel | |
Aidan Tierney | I'm unclear which parts of the document are illustrative and don't need review yet as they are placeholders and which are substantive or structural and need review and comments. I'll try to figure that out. |
David MacDonald | |
Charles Hall | It may be worth mapping and/or describing the relationship between the described dynamic structure and the standard W3C Technical Report. |
Gundula Niemann | Question 3 "Is the Introduction clear and give sufficient background for a reviewer?" is not clear on who the reviewer is: Is it the person reviewing the Draft and answering the questionnaire (me), or is it the person who plans to review his/her documents/sites/web applications? |
Alastair Campbell | |
Laura Carlson | |
John Foliot | |
Brooks Newton | |
Peter Korn | |
Jonathan Avila | |
Wilco Fiers | I have left a lot of feedback in the past already, little of which seems to have been addressed. Instead of repeating this here I just want to refer to those issues I created in 2018: https://github.com/w3c/silver/issues/created_by/WilcoFiers Additionally: - The document does not indicate which parts are normative. This seems like the most fundamental thing in writing a standard. I do not think we can publish a standard without a clear indication of which parts are normative and which ones are not. - I do not agree with removing accessibility support from Silver. Firstly because, as the FPWD says, UAAG has not been picked up. Secondly because more often than not there are no standards for assistive technologies. You can't build to standard if there isn't one. - The ACT Task Force has in the past rejected the idea of creating a clear distinction between automated and manual testing. This is not a binary yes or no question. There are degrees of automation, depending on how much accuracy you need, and how much you know of the system that is being tested. The automation industry is continuously pushing the boundaries on this. - I do not think this is fleshed out enough for a wide review. Especially the lack of any sort of conformance model. This project stands or falls on being able to come up with a new conformance model. Everything else is secondary to that. Without at least a basic version of a conformance model, this is not a document that can be properly reviewed. Having just a list of potential requirements is a good start, but I do not think it is enough. Especially because some of these requirements seem contradictory. - The status of this document is confusing. This has been called a FPWD, but the document is in a community group report template. If this is a CG report, the status should not say it's a collaboration between AG and Silver CG. That may cause people to conclude this has a more formal status than it actually does. 
If this actually is a FPWD, the template is wrong, and this should not be a collaboration between the Silver TF and the Silver CG. Task forces do not have a mandate to publish rec track documents. Only working groups do. In either case this sentence is incorrect: "The Silver Editor's Draft is published as a joint effort of the Silver Task Force of the Accessibility Guidelines Working Group and of the W3C Silver Community Group." - In "status", "Github" should be "GitHub" (capital H) - From the conversation in AG yesterday I learnt that the testable statements for silver are intended to be informative. This is problematic for a number of reasons. Most importantly it creates a situation where a website could conform to Silver today, but not conform to it tomorrow because the test procedures were updated. This is quite problematic for organisations that need to ensure their content complies with regulations. - ... there was not enough time for a proper review on this. |
Detlev Fischer | Like Wilco, I have also left comments in Github (as a comment to his comment). Not sure what impact these have had yet. https://github.com/w3c/silver/issues/41#issuecomment-456839780 |
Does the Abstract give a clear summary of what the reader should expect from this document?
Choice | All responders |
---|---|
Results | |
yes | 5 |
no | 6 |
(4 responses didn't contain an answer to this question)
Responder | Abstract | Abstract Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | no | Right now there is too much about what is not there and not enough about what is. Just describe what the document is about, and leave what is not developed to a "TODO" appendix. |
Stefan Schnabel | yes | |
Aidan Tierney | no | Not sure if this is placeholder text but as-is it's not as clear, compared to say the 2.1 abstract (https://www.w3.org/TR/WCAG21/#abstract) which explains more about the aims, the technology it applies to, and how conformance relates to older versions of WCAG. I'd like the abstract to explain up front whether there are any new requirements or whether this is only a restructuring of WCAG 2.1 |
David MacDonald | ||
Charles Hall | yes | While the expectation for this document is clear, it spawns several additional questions, like: “what other documents will follow, and when?” and “if the TR format is difficult to use, will there be evidence that this proposed format has solved it?” |
Gundula Niemann | no | It remains unclear whether the document is a first version of the guideline or whether it forms a guideline template. In addition, the term guideline is used with different meanings (the guideline contains guidelines). |
Alastair Campbell | yes | I'd probably go with something like "The Silver Accessibility Guidelines are the next major version update that is @@intended to supercede@@ the Web Content Accessibility Guidelines 2.x." The W3C doesn't "replace" standards as such; it adds & supersedes. |
Laura Carlson | yes | Typo: "The content is only as an example of the structure" should likely be "The content is only an example of the structure" |
John Foliot | no | The ongoing concerns around a conformance model have not been addressed in any substantive way. The architecture speaks of points assigned to guidelines (and not methods), but there is no indication of the value of points or scoring in the current framework. In the simplified view "Headings" example, under the Tab "Test and Audit", there is little to no information on how to do either: instead it has placeholder links to legacy content on how to achieve 'success', but no indication of how to measure or audit for success. |
Brooks Newton | ||
Peter Korn | no | Agree with Aidan Tierney’s comments – the Silver Abstract could look at the WCAG 2.1 Abstract (https://www.w3.org/TR/WCAG21/#abstract) and be more clear about what it covers. |
Jonathan Avila | ||
Wilco Fiers | ||
Detlev Fischer | no | Having read just the abstract, it is unclear to me whether "the Editor's Draft for the Silver Accessibility Guidelines project" is a meta resource about the new version update or an early draft / subset of that actual version (as in "it illustrates structural changes and examples of that future version"). This should be clearer. I would suggest having one document that is a draft of the future version, and one document explaining the concept. |
Is the Introduction clear and give sufficient background for a reviewer?
Choice | All responders |
---|---|
Results | |
yes | 5 |
no | 7 |
(3 responses didn't contain an answer to this question)
Responder | Introduction | Introduction Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | yes | Accessibility guidelines are based on two factors: the state of technology and the state of understanding of disability. Both change. I see this theme throughout the introduction, but not explicitly. We also learn how to implement guidelines. |
Stefan Schnabel | no | Make it clearer that Silver REPLACES WCAG 2.x, ATAG 2.0, and UAAG 2.0 and should be the ONE AND ONLY reference for now. |
Aidan Tierney | no | Most people using the guidelines don't need to know their history, I would move this to an appendix and focus the intro on basic info someone who is new to the topic, such as a product manager, needs to know to make sense of the document. |
David MacDonald | ||
Charles Hall | yes | Perhaps a little less history would simplify it as an introduction, and if important, move to a history section or appendix. |
Gundula Niemann | no | It remains unclear whether the guideline addresses documents / electronic pages or also software. It should be clearly and centrally stated that Silver applies also to applications, including business applications, "Complex Web applications". Currently, it seems to address only web sites with simple pages. Similarly, it should be explicitly stated whether Silver also applies to ICTs or not. In the latter case it should be described what should be done. For the draft reviewer it remains unclear how ATAG 2.0 and UAAG 2.0 are connected to the current Silver draft and whether Silver shall complement or replace these. For the document/page/software reviewer (the target reader of the final guideline) it remains unclear what to find where, how to work with the document and which kind of pages is addressed. In later parts of the documents, in some places web applications are named, in others explicitly web documents. In general, the motivation to create Silver does not become clear. It is not sufficient to state that WCAG 2.x has not been fully adopted. |
Alastair Campbell | yes | Nit-picky, but generally the guidelines don't "identify barriers", that would take too long. The guidelines provide requirements that avoid or overcome barriers. It feels like it needs to end with a statement about what Silver is tackling, perhaps linking to the explainer... |
Laura Carlson | yes | |
John Foliot | no | The introduction states "We need guidelines to: [bullet] identify these barriers encountered by people with disabilities. [bullet] explain how to solve the problems they pose." As tool vendors, Deque (and our clients) also needs guidelines on how to measure and score conformance or compliance, and expects to see that bullet point addressed in the Silver documentation. We as a consumer of this new guideline do not need explanations per-se, we need measurable benchmarks. |
Brooks Newton | no | Personally, I think that what's in the Silver introduction at this point should directly address the issues that were at the heart of why the Silver Task Force was first impaneled. Why did Silver get started? What about the Web Content Accessibility Guidelines version 2.x (WCAG 2.x) was so off that significant segments of the accessibility community demanded a change? Whatever the answers to that question are, they need to be the focus of what the Silver First Public Working Draft (FPWD) needs to address. Don't be afraid to identify weaknesses in WCAG 2.x directly, instead of burying those issues in linked resources referenced from the FPWD. The fine-grained details around conformance, scoring, structure, etc. can come later, as long as the original purpose for forming a Silver Task Force is directly addressed in the First Public Working Draft (FPWD). Use the FPWD process as a way to crowdsource ideas that will help fill in some of the missing pieces that we know have to be built out. Get the community's buy-in about what needs to change in the next generation of the standard before trying to build a proof of concept for that change. |
Peter Korn | no | It would be helpful to include (even as placeholder) a section similar to WCAG 2.1’s “0.5 Comparison with WCAG 2.0”, which helps the reader understand what and how this is different than what came before. |
Jonathan Avila | ||
Wilco Fiers | ||
Detlev Fischer | no | Reading the Introduction, I went back to the Silver Research Summary slides which feature so prominently. I notice that for Usability Problem Statement #3, Ambiguity in interpreting the success criteria, the next slide has only one answer that may address this problem somewhat: "Silver needs more updated examples across newer technologies". The intentions (expanded scope, more technologies, more disabilities etc) and the aim to have a graded conformance rating approach, include user testing etc. indicate strongly that the result will be *more complex* even if we manage to provide a better user interface (beginners ramp, searching, filtering, organisational role). I fear that the ambiguity and, in turn, the difficulty of a consistent conformance rating is going to increase strongly due to that widening of scope and the more complex rating approach. I have no solution, but I just think we probably can't have it both ways (encompassing many more dimensions AND at the same time making it easy to understand and use). The introduction says that "...opportunities identified in the research were improvements in the structure and presentation accessibility guidance to improve usability, to support more disability needs, and to improve maintenance." The thorny issue of conformance and the new relationship between Guidelines and Methods is not covered in overview in the introduction. |
Is it clear that this draft is a framework and not about the content?
Choice | All responders |
---|---|
Results | |
yes | 6 |
no | 4 |
(5 responses didn't contain an answer to this question)
Responder | Guideline Introduction | Guideline Introduction Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | yes | I really liked this section. Please do more. |
Stefan Schnabel | yes | You should include ALL the planned areas not yet investigated with respective comments when and how. Let the people see your PLANS in PLACE. |
Aidan Tierney | no | I'm answering no because it's not clear to me what parts are "content" (i.e. placeholder not needing feedback) and what is "framework" (i.e. "structure") |
David MacDonald | ||
Charles Hall | yes | I think there is the opportunity to define “guideline” and how that maps to the normative success criteria and non-normative supporting information. |
Gundula Niemann | no | It does not describe any framework nor talk about it. It says the document contains samples for the structure and how it would work. That is not the same as defining a framework. The way content is presented and the fact that comments are asked for in this questionnaire suggests the content is relevant, not only the structure. Does "guidelines" replace the term "success criteria"? Which guidelines are planned exactly? It would be better to have the COMPLETE structure with all section headers right away. It says "numbering of individual guidelines will be removed to improve overall usability of the guidelines in response to public requests." How should people refer to specific guidelines independently of language? Each guideline needs a unique ID. What shall the "Tagging Engine" look like? Does "merging the AA and AAA levels" mean that former level AA criteria are merged with (corresponding) level AAA criteria? This would pose a problem since many companies including SAP did commit to AA but not AAA. In fact, the method of compliance might be invalidated. For example, for visual contrast in US Section 508 it was requested to provide several themes for different needs, like one theme for good contrast (minimal contrast) and one theme for strong contrast (high contrast). Mixing the levels means giving up this flexibility. Level A requirements are not referred to at all. |
Alastair Campbell | yes | It seems odd to include such a big caveat about the Headings guideline, and then it is included anyway. |
Laura Carlson | | Even with the disclaimers, some may find it confusing because new content is in the new structure. Expect comments on the content. |
John Foliot | no | It appears to be a combination of both, with a focus on adding content while still not addressing some architectural concerns. Half the questions in this survey are on existing content, and not architecture or structure. (It very much feels like the work is leaving "scoring and conformance" to the end, just like so many orgs leave "accessibility" to the end.) |
Brooks Newton | ||
Peter Korn | yes | |
Jonathan Avila | ||
Wilco Fiers | ||
Detlev Fischer | no | See comment above for "Abstract". I would prefer a draft of the Guidelines clearly separate from the doc that discusses the approach. Both seem to get mixed up right now, so it is not clear to me. |
Are the Abstract, Introduction, and Guideline Introduction ready for wider review? By "ready for wider review" we are referring to the First Public Working Draft.
Choice | All responders |
---|---|
Results | |
yes | 4 |
no | 8 |
(3 responses didn't contain an answer to this question)
Responder | Abstract, Introduction and Guideline Introduction Ready for Review? | Introduction Sections Ready for Review Comments |
---|---|---|
Martin Jameson | no | I'm not sure I understand the purpose of 2.1, 2.2 and 2.3. These sections don't seem to be explained in the document in the Guidelines section and if they are just links to examples these could be structured as such "2.1 Examples.... *unordered list* Headings - Link to heading example, Clear language - Link, Visual Contrast - Link *end of unordered list* |
Wayne Dick | yes | I think this sets the stage for comment. |
Stefan Schnabel | no | Need some tweaking based on the input of other reviewers |
Aidan Tierney | yes | It could be more clear whether we need to review the pages "Explanation of Visual Contrast" ""Explanation of Headings" or if these just illustrate there will be some links to more info. The link name "Explanation of Headings" doesn't adequately describe the wealth of information at this resource. I wouldn't likely click on a link that said this and would miss all the Design, Build, Test guidance. |
David MacDonald | ||
Charles Hall | no | It seems there will be useful insights from the survey that will inform some editorial considerations. |
Gundula Niemann | no | Too much remains unclear, e.g. the terms used ("guideline", "framework", ...); see also 2. through 4. |
Alastair Campbell | yes | |
Laura Carlson | yes | |
John Foliot | no | There are a number of Editor and Author comments sprinkled throughout the document that should be removed or annotated in a different fashion (one example is particularly jarring: "(This had been a flaw that was bothering me that could allow the system to be gamed by using multiple methods.)") Additionally, there are a number of spelling mistakes throughout the document and "simplified view" examples. A finer editorial review should be undertaken before we open this to public scrutiny for the first time. |
Brooks Newton | ||
Peter Korn | no | See comments above for Abstract & Introduction |
Jonathan Avila | ||
Wilco Fiers | no | |
Detlev Fischer | no | Probably No, but this could also be a Yes. I am uncertain whether it is productive to have a review of this document at this point. The intro to guidelines makes it clear that the structure has not been settled yet, so feedback to something so loose and wobbly may be less helpful than feedback to a more solid structure (even if it is to be modified completely). |
Does this guideline and its explanation (tab design) give a clear wireframe example of the structure of the Silver guideline?
Choice | All responders |
---|---|
Results | |
yes | 6 |
no | 5 |
(4 responses didn't contain an answer to this question)
Responder | Headings | Headings Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | yes | I really like this section. I think you are expecting people to read the explanation of headings. It is necessary. Regarding the explanation: I think the word "paragraphs" should be replaced by something like "one or more connected paragraphs" |
Stefan Schnabel | no | Link above is broken. The "Explanation of Headings" link term is misleading since there is much more in content |
Aidan Tierney | yes | It gives a good sense of it, yes. I'd prefer to see at least one fully fleshed-out example to understand the level of detail we can expect. Some sections like 'getting started' have this level of detail but others do not. Not looking for final content but just a sense of how detailed the content will be. Eg: Auto-Testing Tips: TBD Manual Testing Tips: TBD Testing with Users: TBD - different from user research or testing, this is tips for QA testing by PwD. |
David MacDonald | ||
Charles Hall | yes | Since there is no current semantic equivalent for subtitle or sub-heading, it seems relevant to affirm that while the purpose is sections and topics within sections, the element would still be a heading. |
Gundula Niemann | no | The structure is not clear. The main document does not make clear what exactly is the requirement. The linked pages do not have a common and consistent structure for the various requirements. The information given under the structure elements does not have the same granularity nor a similar nature in 'Headings', 'Clear Language', and 'Visual Contrast'. The link name "Explanation of Headings" suggests it links to a glossary to explain what a heading is. The page behind that link gives a process, where some information is given multiple times, including the requirement text (without any indication this is the requirement). It lacks clear guidance - what exactly is the requirement - when it is applicable (the wording suggests it applies only to pure text documents) - when it is fulfilled and when it isn't. Methodology and requirement are mixed up. |
Alastair Campbell | no | See next comment, and also the tabs are quite different from the next guideline which is odd. |
Laura Carlson | yes | |
John Foliot | no | The current framework does not address the goal of "how to score" any of the Guidelines. One of the early examples is Headings (which was one of the first "new content" activities of the Task Force). Fortunately, as we collectively know a fair bit already about headings, it also serves as a good 'test-case' for the scoring component. We know anecdotally that so many websites today get the hierarchy of heading structure wrong (skipped levels, etc.) to the point that one class of user today - screen reader users - will tell you that having headings is critical for in-page navigation, but most of those users are unsurprised by hierarchy mismatches; they consider getting the hierarchy correct as a "nice to have". (I note that while HTML has one element that serves that goal, in ARIA we've separated the two pieces of information: role="heading" AND aria-level="(1,2,3,etc.)" ) However, for users with cognitive issues, failing to get the hierarchy correct not only is not helpful, it may in fact introduce additional confusion and stress, and have a negative impact. So, from the perspective of scoring and conformance, do we only accept "both" pieces of information as mandated correct (conformant, compliant, acceptable?), or will we nonetheless give "partial scores" when they get the role correct, but the aria-level wrong? I don't know, and more importantly, the 'framework' they are seeking to advance has no 'placeholder' to address a granular concern like that: no means or mechanism for actually determining (measuring) whether they are 100% correct, "good enough", or failing miserably - nor how that 'final score' would be integrated into a larger scoring and conformance reporting mechanism. |
Brooks Newton | ||
Peter Korn | yes | Would prefer “should” instead of “must” in places – e.g. “you must ensure that people can quickly find…”. How do you measure “can quickly find” (if you make it a requirement, it is important to have a measure). Fundamentally the entire “Plan” tab is very good guidance from expert practitioners, but isn’t the only way to achieve the ends described. |
Jonathan Avila | ||
Wilco Fiers | ||
Detlev Fischer | no | The wireframe with tabs could work in principle. Conformance testing is still completely unclear, though I think the content is still fairly immature. Some examples: Tab "Planning responsibilities": I would have expected something outlining a good process for arriving at a good heading structure, or more generally a sound information architecture - things like check whether terms used resonate with intended end users, avoid deep nesting structures, pros and cons of flat vs. deep structure, when to use which, etc. Methods could also include dedicated planning methods like card sorting. It may also be worth pointing out that heading structures extend beyond the page, so an important part of planning is the need to consider headings holistically so people make sense of content across the site (and that headings also appear in some (shortened) form in site navigation structures). Having said that, all this is IA and UX territory, and I am not sure where WCAG 3.0 should place the 'cut-off' point between those and accessibility concerns... it could become huge if it fully takes IA and usability on board. Tab "Design": You write "The visual styling must also be semantic, that is, the code must communicate the level of the headings to non-visual users" - presumably the code is whether it has a good heading hierarchy h1-h6, not the visual design - using 'code' here is confusing. Tab "Test and Audit": I find "Testing responsibilities" unclear. For a start, "This must structure the content so users know what to expect in each section." is not a testing but rather an author responsibility (and I think it is far too prescriptive - headings could be a quote, a hint, something playful, i.e. not describe content underneath - do we really deem it possible to mandate descriptiveness for ALL content headings? This is overreach, IMO.) |
Is the Headings guideline ready for wider review?
Choice | All responders |
---|---|
Results | |
yes | 5 |
no | 7 |
(3 responses didn't contain an answer to this question)
Responder | Headings: Ready for Review? | Headings Ready for Review Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | yes | |
Stefan Schnabel | no | Need some tweaking based on the input of other reviewers |
Aidan Tierney | yes | It could be clearer what is meant to be reviewed vs placeholder text. If the substance of any guideline is to be reviewed it will be helpful to know if the intention of the guideline is to keep the WCAG 2.1. requirement and/or to make it more clear, or to extend or create a new requirement. |
David MacDonald | no | I don't know what the normative requirements are from reading it. I don't know what content is proposed as normative and what isn't. I think language like "You must ensure ..." in the explanation content (tabbed interface) feels like a finger pushing into the chest of the reader. It may be perceived as demeaning and overly focused on the personal space of the person reading it... I think the guideline explanation should point at the content rather than the person reading it. We don't know if the person reading this document is responsible for the content, yet we are saying "you must ensure ...". I don't think there should be any mention of "you" or "you must ensure" in any of the documents. It is fine to say the content must have such and such a characteristic... but we should never order the person reading the document around. It might be perceived as bossy. I understand that plain language is generally in the active voice and there then needs to be an object, but I don't think it's working yet. Would it make sense for normative content to focus first on accuracy and precision and then provide a supplementary non-normative plain language version which has a disclaimer that in the case of discrepancy of interpretation, the normative version prevails? |
Charles Hall | yes | I think it is clear that this is a draft and feedback from a larger audience is useful. |
Gundula Niemann | no | See 6. |
Alastair Campbell | no | It raises a lot of questions, such as: - The how-to content doesn't cover all the different connotations, I'd anticipate questions around hidden headings. (If content is written to be very directive, it needs to cover the different situations.) - The first exception talks about who is responsible for 3rd party content, which is a much wider decision than this guideline. |
Laura Carlson | no | I agree with David MacDonald regarding the "You must ensure" language. Taking the "you" out of the Testing Responsibilities may help. Consider changing "You must ensure that headings exist and are logical" to something such as "Headings must exist and be logical." In addition, don't know how we are going to score this yet. |
John Foliot | no | Please see previous comments. |
Brooks Newton | ||
Peter Korn | yes | Getting Started provides an exception for documents uploaded from another author, but Test & Audit doesn’t mention how to address such an exception. Ideally the first public working draft would at least speak to this, but not absolutely critical for FPWD. |
Jonathan Avila | ||
Wilco Fiers | ||
Detlev Fischer | no | The sketchiness noted above makes me think this is maybe not ready for review. |
Clear Language is a new guideline proposed by the Cognitive Accessibility Task Force (COGA), and includes research, documents and comments from COGA.
The explanation for Clear Language is Clear Language explanation. Look in the tabs. Is the information helpful to people performing each activity?
Choice | All responders |
---|---|
Results | |
yes | 7 |
no | 3 |
(5 responses didn't contain an answer to this question)
Responder | Clear Language: Explanation tabs | Clear Language Explanation Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | yes | I liked this. It also allows for gradations in complexity and specificity of content. If one has an audience of mathematicians, then the term non-separable is clear. |
Stefan Schnabel | yes | |
Aidan Tierney | yes | It's not clear to me how the section "Clear Language Guidance" relates to "Edit Text for Clear Language". The Evaluate section links to tests for the "Edit text for clear language" section. |
David MacDonald | no | Its helpful advice... I wish there was an "I don't know" radio button for this. |
Charles Hall | yes | |
Gundula Niemann | no | General: There is no clear guidance what exactly is the requirement, its applicability, when the requirement is fulfilled, and there is no distinction between requirement and methodology. Important information in this regard is hidden in tabs and subpages. Are there any exceptions, and under which conditions? For example, does it apply to a corporate policy or other legally binding documents? What about business applications and documents, expert applications, technical applications and documents, to name a few? The distinction into Design, Write, Develop is artificial. In addition, information needed when performing the process steps is merely hidden in Evaluate and Methods. "Get Started": A definition of clear language is missing, and the term is mixed with 'plain language' and 'simple language'. Guidance on applicability is missing. It is not feasible for content that addresses specific user groups, like mathematicians or financial experts. It is not feasible for software, specifically business software. The 'why' section suggests this is a pure usability task, not an accessibility task. Special needs are not reflected. Examples are missing. "Plan": The text (in "Get Started" as well as in "Plan") suggests a professional editor knows how to write in clear language. "Design": The wording suggests these are the requirements, while the process is not described. "Write": This section makes clear statements on specific points to observe. Nevertheless this kind of information should be given in a defining section (it might be named "Requirement"), not hidden in a process. "Develop": Is this part of the requirement, methodology, or a summary of reminders? Several points overlap with other existing requirements / success criteria. "Evaluate": It merely states that the instruction in "Write" forms a mandatory part of fulfillment. Yet this information is hidden for readers who focus on basic knowledge of the requirement, or on planning. 
It does not become clear, whether the linked "Edit text for Clear Language" forms part of the requirement. In its subsection 'Platfrom' it refers to itself as 'method'. It also states it applies only to text documents. |
Alastair Campbell | Generally yes but there are some confusing things. There is a lot of (conceptual) overlap between tabs, probably because the content for writing plain language and evaluating it are very similar. If I'm looking for 'how do I write in clear language', it could be in 'Get started', 'Write', 'Evaluate' and 'Methods'. The design tab has a lot of overlap with the headings guideline, it really needs to be focused on the current guideline. The design and develop tabs appear to require a glossary, but without mentioning that elsewhere. | |
Laura Carlson | ||
John Foliot | no | Additionally, we've been previously told that this is a review of architecture, not content, so this question seems a bit out of place. Reviewing the 'simplified' version that the wbs survey points to, the tab structure on the Clear Language Guideline is different than the tab outline for Headings Guideline (contravening WCAG Success Criterion 3.2.3 Consistent Navigation). Additionally, the 'Edit Text for Clear Language' screen has a third version of the top navigation bar, which is also non-compliant to WCAG 2.x today. Based on existing content however, it is unclear how to evaluate this in a measurable, repeatable way. The "guidance" is spot on as a teaching document, but knowing if (or how) you have succeeded or failed is not addressed, and anyone unfamiliar with the larger topic will be confused when they ask "what do I have to do?" (As an aside, the sentence: "The explanation for Clear Language is Clear Language explanation" feels quite circular, and is less than clear...) |
Brooks Newton | ||
Peter Korn | yes | Info is helpful, but all Silver SCs in FPWD should use the same style sheet; tabs aren’t the same as in Headings. Prefer the top level title/heading of this guideline vs. Headings (“Clear Language Guidance<p>User clear language that readers easily understand.</p>” atop the light pink background). |
Jonathan Avila | ||
Wilco Fiers | ||
Detlev Fischer | yes | Clear language generally: I would appreciate the same overall tab structure as introduced for headings - at least not a different structure for each Guideline! Tab design: The headings levels point overlaps with a prior draft SC and should be removed. This should only be about the 'language' of headings, not the headings structure? I find the sections "You must ensure that..." and "This must..." conceptually quite similar - it is unclear what differentiates them. There are no methods yet; Methods have nothing on testing (but I don't know if they are designed to be testable as Techniques are in WCAG 2.X). That this element is absent reduces the value of any review. |
The method for testing Clear Language is Edit text for clear language. Look in the Test tab. There are three options for conforming to the method:
Does having options give flexibility to organizations and authors while improving accessibility for people with disabilities?
Choice | All responders |
---|---|
Results | |
yes | 6 |
no | 2 |
(7 responses didn't contain an answer to this question)
Responder | Clear Language: Options for testing | Clear Language Testing Options Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | yes | That looks sufficient. |
Stefan Schnabel | ||
Aidan Tierney | I don't understand this topic well enough to comment | |
David MacDonald | Certainly there needs to be options, but how would a 3rd party auditor know this was done? It's common practice in RFP strategy for vendors to simply check off the boxes. I find this with my corporate clients. Their service providers build web sites for them and check off the boxes that they met WCAG. Luckily it's fairly easy to objectively review the content and document failures. Perhaps an ISO model where organizations follow a process (i.e., develop a style guide) and document what was done... I think we should invite experts from ISO and LEADS Framework who are responsible for certification to speak with us about how to keep organizations accountable for this kind of documentation. I don't know how the process works. | |
Charles Hall | yes | Of course it provides flexibility. What I question is how that flexibility will be used, and how domain expertise in editing translates to the needs of people with disabilities. |
Gundula Niemann | no | It gives flexibility in the sense that everything can be declared as fulfilling, which does not improve accessibility. |
Alastair Campbell | There is a lot of overlap (confusion) between evaluate and method tabs, and between evaluate and the 'tests' tab within the only method. The three options are quite different, but each is 'sufficient', which seems odd. Of the three options, the easiest appears to be creating a style guide of whatever quality, and use that. When using the rubric (or is it 'rubrics' plural?), at this stage I'm now wondering what I do with the score, and how this will relate to other scores in other guidelines. I'll try to remember that question! I think it would be useful to link through to the explanation in the spec, or a shared bit of content that provides an overview. | |
Laura Carlson | ||
John Foliot | yes | On the surface, providing options gives flexibility. However the current options again lack any real measurable metric, and the 3rd option (Use the Rubric) appears to only allow "Substantially: A significant effort has been made and most exceptions improve clarity; Partially: An effort has been made, but there is room for substantial improvement; Limited: Inconsistent effort or no effort". For third-party evaluators it is unclear how to determine any of those 3 options: what is the difference between Substantially and Partially? |
Brooks Newton | ||
Peter Korn | yes | Options for testing: Tests are good, but it is important to note explicitly here in Silver that this is not automatable with today’s technology. In fact, every test tab should have a discussion of automation. Also, what is the research behind the point scoring system given here? We risk giving a false sense of “goodness” if there isn’t research behind this system. |
Jonathan Avila | ||
Wilco Fiers | no | There is no way to do external verification of this. There are no guarantees that a "professional editor" (whatever that means) can produce consistent results, or that following a style guide would. There is an implicit assumption here that every editor and style guide is reasonably skilled and qualified to not just recommend improvements, but to assess a minimal acceptable level for PWD. This also doesn't seem repeatable, two plain language experts are unlikely to agree without giving them greater context of what methodology to follow. Expecting every editor to agree with every style guide, and every style guide with the rubric that is proposed here seems like quite a stretch. This seems likely to create significant inconsistencies, which is counter to one of the goals for Silver: > Improve tests so that repeated tests get more consistent results. |
Detlev Fischer | yes | I have no idea how this Method could be tested in any way. Is there any more detail on the use of the style guide for testing and the use of the rubric? I think draft approaches for testing and how they translate into an overall score would need to be spelled out, at least at a rough draft level. It would be vital to have this in place to make this useful to review. The way you have phrased the question fishes for a Yes so here you will get one... |
Clear Language showcases a new rubric test that could not be included in WCAG.
The method for testing Clear Language is Edit text for clear language. Look in the Test tab for the rubric.
Is the rubric testing approach acceptable to testing professionals? Keep in mind that this is a prototype and can be more refined with help from people with greater expertise in testing.
Choice | All responders |
---|---|
Results | |
yes | 2 |
no | 7 |
(6 responses didn't contain an answer to this question)
Responder | Clear Language: Rubric | Clear Language Rubric Comments |
---|---|---|
Martin Jameson | no | Editors and QAs have different skill sets. While it is ideal that an editor tests this content, in traditional software build cycles this will not happen, as test environments require very strict conditions in most setups. |
Wayne Dick | yes | The yes is conditional on more guidance later. |
Stefan Schnabel | ||
Aidan Tierney | no | I can see this guidance as useful for a content creator but not for QA/tester. E.g. the Action "Use correct spelling" makes sense as guidance but it is not a test action, which might be: copy text to a word document and run spellcheck (or run a bookmarklet test in a browser); evaluate whether any flagged words are spelled correctly, and count the number of errors. |
David MacDonald | I honestly don't know. I think we should invite experts from ISO and LEADS Framework who are responsible for certification to speak with us about how to keep organizations accountable for this kind of documentation. I don't know how the process works. | |
Charles Hall | yes | I think use of a rubric is fine. It may even result in a greater consideration for more people. However, it certainly also increases the complexity of and time required for testing, which has been a concern for many in the community. |
Gundula Niemann | no | The rubric indeed gives guidance and assistance for testing. Yet it does not match the instructions given in other places (like Clear Language > Write). When the rubric is done, how is fulfillment determined? If the properties described in the rubric form the criteria to determine clear language, why are they hidden in some test method? |
Alastair Campbell | It is difficult to answer whether it would be acceptable for testing purposes at the moment, I'll have to get further to see how the results are used, and come back to this aspect. | |
Laura Carlson | ||
John Foliot | no | There is no measurable line between Substantially, Partially and Limited, making it *harder* to determine a compliance score, not easier. Additionally, it is unclear whether the rubric is used on passages of text, an entire page of text, or a site's worth of text - there is no indication of scale in the current draft. Additionally, establishing a "style guide" in no way ensures that page content will be written in clear language. As many organizations will confirm, policing that kind of internal guideline is an ongoing concern, and short of expert review there is no way of knowing that an existing style guide meets the needs of impacted users. |
Brooks Newton | ||
Peter Korn | no | See previous comments – unless this rubric is clearly supported by research, it risks feeling arbitrary as well as risks giving a false sense of how good something is if it has a given score. Why is spelling the same weight as grammar, the same weight as voice (active vs. passive)? Are they equal in how understandable something is for someone with some sort of cognitive disability? |
Jonathan Avila | ||
Wilco Fiers | no | This is not repeatable. It relies heavily on choices left up to a tester, such as: - what style guide - which spell checker - which verbs are "simple" verbs - how many instances of something makes it "occasional" - how do you assess that "an effort has been made"? Is two efforts enough effort? |
Detlev Fischer | no | The structure of the rubric as a table with long text content does not make it easy to parse. I would prefer step-by-step procedures for each test. It is unclear how the many actions listed in the rubric will contribute to the score, so there should be a draft approach for that that can then be commented / improved upon. |
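Several responders above ask how the rubric's three levels would translate into a score. As a purely illustrative sketch: the level names and the 1, .5, 0 weights are the ones John Foliot cites from the Headings example elsewhere in this survey, but the averaging step and all function names here are assumptions of this write-up, not part of the Silver draft.

```python
# Hypothetical sketch only. Level names and the 1 / 0.5 / 0 weights are
# taken from the survey discussion of the Headings example; averaging
# the per-criterion ratings into one score is an assumption, not the
# documented Silver scoring model.

RUBRIC_WEIGHTS = {
    "Substantially": 1.0,  # significant effort; most exceptions improve clarity
    "Partially": 0.5,      # an effort made, but room for substantial improvement
    "Limited": 0.0,        # inconsistent effort or no effort
}

def rubric_score(ratings):
    """Average the weights of per-criterion ratings into a 0-1 score."""
    if not ratings:
        raise ValueError("at least one rating is required")
    return sum(RUBRIC_WEIGHTS[r] for r in ratings) / len(ratings)

# e.g. three criteria rated by a tester:
score = rubric_score(["Substantially", "Partially", "Substantially"])
```

Even this trivial sketch surfaces the open questions the responders raise: which criteria are rated, what the boundary between "Substantially" and "Partially" is, and what threshold the resulting number would have to meet.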
Is the Clear Language guideline ready for wider review?
Choice | All responders |
---|---|
Results | |
yes | 5 |
no | 6 |
(4 responses didn't contain an answer to this question)
Responder | Clear Language Ready for Review? | Clear Language Ready for Review Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | yes | Yes, with the explanation. I like the one given. I like the special considerations for complex content. |
Stefan Schnabel | no | |
Aidan Tierney | yes | |
David MacDonald | ||
Charles Hall | yes | I think it is clear that this is a draft and feedback from a larger audience is useful. |
Gundula Niemann | no | The structure is clumsy and hides information in many points. See 8. through 10. |
Alastair Campbell | ||
Laura Carlson | ||
John Foliot | no | It is perhaps ready for more 'expert review', but not necessarily "wider" review. |
Brooks Newton | ||
Peter Korn | yes | There is enough here to get useful feedback, SO LONG as we make clear the rationale for the rubric. |
Jonathan Avila | no | I only see stubs in the testing and no finished text. |
Wilco Fiers | no | See comments above |
Detlev Fischer | no | See above comments. Indications of how the rubric would be used (what parts to pick) and how results contribute to the score and ultimately conformance would be vital to have in place in a useful draft. |
Visual Contrast is a migration from WCAG 2.1 with significant updates:
Is the new algorithm credible?
Select font characteristics and background colors to provide enough contrast. See the Detailed Description tab.
Choice | All responders |
---|---|
Results | |
yes | 2 |
no | 5 |
(8 responses didn't contain an answer to this question)
Responder | Visual Contrast: New algorithm | Visual Contrast: New algorithm Comments |
---|---|---|
Martin Jameson | yes | The variation for font weights and font sizes is much appreciated. A common question. |
Wayne Dick | no | Needs to be fleshed out if it is to be accepted as a replacement for the old easily tested criterion. |
Stefan Schnabel | no | Expert rating required |
Aidan Tierney | This is not my area of expertise. I'm confused by this question: I thought this survey was about the structure, and this question is asking about a very specific change to an algorithmic calculation for contrast guidance. | |
David MacDonald | no | I wish there was an "I don't know" selection... when I clicked through to the examples I got a weird 404 page with *terrible* looking neon flashing contrast. https://www.myndex.com/APCA It doesn't appear that there is any tool at this point and the examples page is 404. |
Charles Hall | yes | Blown away. The world needs this. |
Gundula Niemann | no | Basically it is a good idea to incorporate further characteristics for legibility into the contrast definition besides luminosity. Nevertheless the algorithm does not incorporate font size, font stroke width, and nearby colors, nor does it explain why it doesn't and where and how these are taken into account. In this sense, the algorithm is not complete, but crucial parts are hidden in other tabs/sections. The requirement distinguishes between good and strong contrast. This distinction is dropped in the sequel. Specifically it does not state when good contrast and when strong contrast is achieved. WCAG 2.1 includes contrast requirements for information that is not given by text. This is missing here. |
Alastair Campbell | The title is 'visual contrast', but it doesn't include non-text contrast. I'm not clear if this is going to be a general visual contrast that includes both/all, or whether it is text-only? | |
Laura Carlson | ||
John Foliot | Unknown. We have no implementation experience with the newly proposed algorithm. However, from a testability perspective it is the most mature example we're seeing to date. That said, scoring seems to be completely left out of the document and structure, and it appears that the net result is only one of pass or fail (i.e. #5 is true), and does not indicate any possibility of partial conformance (Bronze, Silver, Gold). For example, in the table, if a font size is 36, and font-weight is 200, then the table demands(?) an APCA value of 80. (Q: what is APCA?) What score would content receive if the actual score was 78? The current structure of the presentation does not answer that significant and important question. |
Brooks Newton | ||
Peter Korn | ||
Jonathan Avila | ||
Wilco Fiers | no | I don't think the added granularity is all that beneficial. The font-weight and font-size properties are rough estimates of what the thickness and height of letters will be. An Arial Black at 400 still has much thicker letters than a Courier New at 600, for example. The granularity gives the impression of accuracy that I don't think is there. At least with WCAG a tester can say "yeah, it's a bold font, even if the weight is 400". There's no allowance for that here. |
Detlev Fischer | Sorry, not enough time to comment... |
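Foliot's comment above raises the scoring gap concretely: a size/weight lookup table with a binary threshold cannot express a near miss. The sketch below is purely hypothetical - only the one entry (font size 36, weight 200 → required APCA 80) comes from the survey comment; the table structure, function names, and the binary pass/fail behavior are illustrative assumptions, not the actual APCA algorithm or its real lookup values.

```python
# Hypothetical sketch of a size/weight -> minimum-contrast lookup.
# Only the (36, 200) -> 80 entry is cited in the survey comments;
# everything else here is an assumption for illustration and is NOT
# taken from the actual APCA proposal.

MIN_APCA = {
    (36, 200): 80,  # the one data point cited in the survey discussion
}

def required_contrast(font_size, font_weight):
    """Return the minimum APCA value for a size/weight pair,
    or None if the (hypothetical) table has no entry."""
    return MIN_APCA.get((font_size, font_weight))

def passes(font_size, font_weight, measured_apca):
    """Binary pass/fail check. A measured value of 78 against a
    required 80 simply fails - exactly the near-miss scoring gap
    Foliot's comment points out."""
    required = required_contrast(font_size, font_weight)
    if required is None:
        return None  # no guidance for this combination
    return measured_apca >= required
```

As the comments note, a model like this answers "did it pass?" but not "what score does a 78 receive?", which is the open question for the draft.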
Does “Scoring and Conformance” provide a clear summary and help the user understand what to expect from this section?
Will this more flexible approach to identifying conformance help content creators/owners convey how accessible they are?
Will this more flexible approach to identifying conformance help people with disabilities in general?
Choice | All responders |
---|---|
Results | |
yes | 4 |
no | 9 |
(2 responses didn't contain an answer to this question)
Responder | Scoring and Conformance | Scoring & Conformance Comments |
---|---|---|
Martin Jameson | no | Scoring is still very uncertain and unknown, so it is very hard to understand what to expect. I think rating accessibility on a scale is slightly counter productive given the wide variety of disabilities, however it will help people understand their position with more clarity. I feel that scoring will result in businesses reaching minimum score standards and staying there (whatever this score may be) and looking for the fastest way to hit the minimum score standards, rather than striving for a better product. |
Wayne Dick | yes | |
Stefan Schnabel | no | How scoring should work is too vague. Points & Levels explains it better, but its relation to this chapter is not clear. |
Aidan Tierney | yes | I'm glad to see the emphasis on scoring and conformance in Silver. I'm assuming the text here is just a placeholder and will be reviewed in due course. |
David MacDonald | no | "Conformance is a complex topic with many parts that work together. Scoring is more easily understood. .... " I would be surprised if a percentage-based conformance model will be easier to understand than the pass/fail model we have now. We may still want to go with percentages for other reasons but I don't think making it "easier" to evaluate will be one of those reasons. If we go with a percentage model then the justification should be something like: "there are usually artefact errors which don't affect the use of the page, but prevent the making of a conformance claim under 2.x; therefore the next major version will introduce scoring .... " or something like that. |
Charles Hall | yes | There are 3 questions here and only 1 set of yes / no. I think yes to all. |
Gundula Niemann | no | A percentage approach makes it hard to determine conformance, so it does not help app creators, but increases uncertainty and confusion. Nor does it help people with disabilities, as a number does not reflect whether specific user groups are assisted or whether everything is kind of halfway done. Just to give a few examples: - A stair with 20 steps, where a ramp is available for 18 steps. Does that make a 90% compliance? The upper floor can be reached with a wheelchair or baby chariot only if the ramp reaches the top of the stair (or if a nearby elevator exists, is fully reachable and working). - A form with 20 fields. One of the fields cannot be reached via keyboard. Does that make for a 95% compliance? The keyboard user cannot complete the task. Incorporating the severity of issues makes things even more complicated. Also, counting pages, tasks, apps in a product or whatever, makes things more complicated, harder to calculate as well as hard to understand. Incorporating the "importance" of a disability as well as defining "less critical" workflows easily yields discrimination or makes people feel discriminated against. Instead, the granularity and nature of the requirements/success criteria should yield clear conformity information. It is not clear from the "Scoring and Conformance" introduction whether the percentage approach applies to each success criterion or to the whole of all requirements. |
Alastair Campbell | ||
Laura Carlson | no | I am apprehensive that "Allowing the organization (this includes company, business, non-profit, government) to prioritize what it important for their product for accessibility assessment" could be a way to game the system. |
John Foliot | no | No/Unknown. We have no examples of scoring or conformance to review or evaluate. Returning to Headings, the current example suggests "Substantially, Partially and Limited" conformance levels with attendant scores of 1, .5, 0, but with no indication of whether it's scoring at the element level, the content block level, the page level or site level. (is it each heading, or all headings?) Additionally, under Scoring and Conformance it states: "The individual organization determines the scope of the claim." While fundamentally this is not incorrect, explicitly stating this in our specification will possibly limit adoption, certainly "as is", by legislators, and perhaps our architecture and approach should offer a limited range of possible conformance approaches that entities can choose from. (I continue to return to "How would this work in a VPAT?") |
Brooks Newton | ||
Peter Korn | no | The summary should also recognize and reference the work going into the working group note Challenges with Accessibility Guidelines Conformance and Testing which should reach FPWD by the time Silver’s FPWD is out. It should also make clear that WCAG 2.x only defined “conformance” with respect to individual web pages, not websites, and that Silver will define the new thing “Website conformance”. More detailed suggested edits are being provided within the Silver Task Force. |
Jonathan Avila | no | I am concerned with the following phrase "Allowing the organization (this includes company, business, non-profit, government) to prioritize what it important for their product for accessibility assessment." While their input is a valuable piece -- it cannot be the sole decision in making a determination. We need to find a compromise that takes into account users, important tasks, steps in a process, legally mandated pages, etc. I'm not sure why clear language would be tested on more pages than other manual issues if the goal is to make sure all disabilities are treated equally. If everything is additive and methods are sufficient techniques -- then how do you know if you pass if there isn't an existing method for the technique you are using? How can you show you pass if you don't fail something but you haven't implemented it in a way where there are any documented methods? This current model requires methods exist for everything. |
Wilco Fiers | no | I don't think there's enough here for someone to understand what the conformance model for Silver is going to look like. |
Detlev Fischer | yes | Difficult to answer if the net result is better than what we have now. Again it feels a bit as if you are asking in a way that will get you the maximum number of yes answers even from people who have deep concerns about the scoring approach :). This could be a yes, but scoring itself will become much harder, and I guess results will be a lot less reliable. Quote: "Allowing the organization (this includes company, business, non-profit, government) to prioritize what it important for their product for accessibility assessment." As I have said before repeatedly, I think a conformance rating must reflect the user perspective and relate to the object in front of the user, ideally be replicable by others. An accessibility assessment must not be at the whim of what a provider deems important or preferable. We must avoid a situation where providers do a number of non-verifiable things ("we have done user testing" etc.) to crank up some conformance point score. I do not share your optimism that "Scoring is more easily understood." - at least not if it is a complex approach mixing a variety of measurements like currently intended in WCAG 3.0. |
Are the goals clear?
Goals for Scoring and Conformance
Choice | All responders |
---|---|
Results | |
yes | 5 |
no | 5 |
(5 responses didn't contain an answer to this question)
Responder | Goals for Scoring and Conformance | Goals Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | ||
Stefan Schnabel | yes | |
Aidan Tierney | yes | The info here is very useful but I'd like to highlight where the goals may be a departure from WCAG 2.1. I think for most content creators this will be top of mind. For example: Remove “accessibility supported” as an author responsibility and provide guidance to authoring tools, browsers and assistive technology developers of the expected behaviors of their products. This would be a huge change. Possibly welcomed by content creators in the sense it will reduce test effort, but not welcomed by AT users who may encounter problems that creators don't find before publishing. |
David MacDonald | > Support the ability for organizations to define the a logical subset of a site for conformance. Not sure about this. Seems weird to let the organization tell auditors where they should go to check on a site. I honestly don't know how the proposed point system will work... do you get a point for every right thing you do, so - if there are 500 images do you get 500 points. Or -If there are 10 images on your sample page do you get 10 points. | |
Charles Hall | yes | While the goal is clear, it is still unclear on all the mechanics of how the goal will be met. |
Gundula Niemann | no | Some goals contradict, like "Don't elevate the needs of one disability over another disability." and "particular attention to the needs of low vision and cognitive accessibility". Some goals have nothing to do with conformity and scoring, like test improvement, automation of tests, giving responsibility to authoring tools, browsers and assistive technology providers. |
Alastair Campbell | ||
Laura Carlson | no | I agree with Jon Avila that the conformance requirements seem slanted toward large corporations. I fear that a "more flexible conformance model" and "more flexible method of measuring conformance" for large corporations may mean less accessibility for PWD. In addition it may be seen as inequitable treatment for small organizations. |
John Foliot | yes | The goals appear to be well articulated. However the other materials under review do not appear at this time to be addressing all of the goals. One goal in particular, "Improve tests so that repeated tests get more consistent results" does not seem to be addressed in the current draft or architecture. There was also a declared desire earlier to more fully integrate ACT Test methods into the next-gen Guidelines, yet there is no mention of ACT anywhere in the current draft. |
Brooks Newton | ||
Peter Korn | no | As with the previous question, the Goals should make clear reference to “Website Conformance”. |
Jonathan Avila | no | There is no mention of conformance with steps in a process. If one step in the process is not conformant then the whole process should not conform as the user will be blocked. Removing the page as a measure of conformance might be ok - but that means that parts of a page might not conform and that would prevent users from accessing the rest of the page. So this is problematic -- it's great if the header and footer are accessible -- and the component in the middle is accessible -- but if you can't get to the middle of the page then you can't use that component. The conformance requirements seem slanted toward those who are from large corporations and not end users. |
Wilco Fiers | ||
Detlev Fischer | no | I think this promises so much - but the result of the interaction of all these good intentions could be very messy. The mix of tests is not likely to "get more consistent results", but if the net effect is better accessibility for users, repeatable conformance measures may be an acceptable sacrifice? |
Is the new structure explained clearly?
How Conformance Fits Into the Information Architecture
Choice | All responders |
---|---|
Results | |
yes | 1 |
no | 6 |
(8 responses didn't contain an answer to this question)
Responder | Information Architecture | Information Architecture Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | ||
Stefan Schnabel | ||
Aidan Tierney | no | I think it needs to be more clear what happens to the WCAG Success Criteria (i.e the hard requirements) in this shift and to explain which parts of the guidance in the new version will be required (currently in WCAG) vs informative. |
David MacDonald | > Guidelines are general information and intent written in plain language > Methods are specific examples, instructions, and tests with more technical information It sounds like we have no normative statements. Just a general statement and a test. Is that right. When I follow that through for headings we find this. > You must ensure that headings exist and are logical > This must structure the content so users know what to expect in each section. Here is a testable requirement saying that the author MUST structure the content in such a way as it is understandable by a user. Historically, we've been shy to assign attributes to a user ... because what is required for one person may be different from what is required by someone else... and it leads to an untestable SC. There may be another way to make it testable... but historically this has been our concern with SC proposals that reference the abilities of the end user. | |
Charles Hall | no | I think the description “flattening to: Guidelines and Methods” is perhaps misaligned with the multiple tab architecture of each example. It doesn’t seem flat. |
Gundula Niemann | no | The structure seen in the prototype does not resemble what is explained in this section. The Success Criteria are lost, clear definitions are lost, clean criteria to distinguish compliance from non-compliance are lost. The intent is not clear in the prototypes. The structure as seen in the prototypes is very process-oriented. It is not consistent, neither in structure nor in content granularity or nature of content. Test instructions are hidden somewhere in the process (guideline) or in the methods, even if technology independent. The term Information Structure appears in this section for the first time, and it is not explained. |
Alastair Campbell | A large issue for me is that the guidelines in the spec are far from sufficient to be useful. The published spec would be just a skeleton. I think key bits from the underlying explanation and/or methods should be pulled into the spec (ideally automatically). | |
Laura Carlson | no | Need to explain how the concept of normative in WCAG does or doesn't apply. And that methods can provide guidance for authors. |
John Foliot | no | The conformance section is incomplete, and leaves too many unanswered questions at this time. |
Brooks Newton | ||
Peter Korn | ||
Jonathan Avila | no | Are methods sufficient techniques? If they aren't how would we measure passes? How do we know when something fails -- the test in a method? So is the method a failure technique? If you fail a method but pass another do you pass? The document indicates that methods are for AT, user agents, etc. -- but nowhere do I see what is for authors. It seems like everything is aimed at everyone else but authors. Where will guidance for authors be? This document seems slanted toward things being addressed in user agents and assistive technology -- and we all know that user agents haven't adopted the UAAG -- so removing any requirements for authors while moving them somewhere else does not address the issue for users with disabilities and will only make things worse. |
Wilco Fiers | ||
Detlev Fischer |
Does this cover the variety of projects that could be evaluated for conformance?
Is it explained clearly?
Choice | All responders |
---|---|
Results | |
yes | 4 |
no | 5 |
(6 responses didn't contain an answer to this question)
Responder | Scope of Conformance Claim | Scope Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | ||
Stefan Schnabel | yes | |
Aidan Tierney | yes | The example of the newspaper helps make this clear. |
David MacDonald | The law in many countries is that public content conforms. Are we suggesting that organizations can meet their requirements by defining a scope and then just making that accessible, without any outside influence? | |
Charles Hall | yes | While it is clear, it begs the question “will organizations leverage narrow scopes and ultimately have fewer projects (products) that conform?” |
Gundula Niemann | no | It does not become clear whether only projects and sites can form the scope of a conformance claim, or whether the organization fully determines the scope. A conformance claim is usually made per product. A product might have subproducts; this is specifically true for software products. The text is not consistent in its usage of the terms 'project' and 'product'. The example does not match the explanation: the crossword section in a newspaper is a product rather than a project or site. |
Alastair Campbell | ||
Laura Carlson | no | The statement the "The individual organization determines the scope of the claim" is concerning. Allowing an organization to determine the scope of the claim with no oversight is akin to the fox guarding the hen house - the organization is in charge of policing itself. What is stopping them from exploiting the situation? |
John Foliot | no | Please see earlier comments. |
Brooks Newton | ||
Peter Korn | no | As with the questions 13 and 14, the scope should make clear reference to “Website Conformance”. |
Jonathan Avila | ||
Wilco Fiers | ||
Detlev Fischer | no | Many issues and questions here. What organisations can scope out is the most important one. |
Does “Points & Levels” provide a clear summary and help the user understand how points result in levels of conformance?
Choice | All responders |
---|---|
Results | |
yes | 3 |
no | 5 |
(7 responses didn't contain an answer to this question)
Responder | Points & Levels | Points & Levels Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | ||
Stefan Schnabel | ||
Aidan Tierney | yes | I assume this is placeholder/draft content and we're not reviewing it in detail here. |
David MacDonald | ||
Charles Hall | yes | |
Gundula Niemann | no | The points haven't been introduced before, and it is not clear how they can be gained and how they are calculated. The categories are neither defined nor explained. The structure (Information Architecture) does not show where and how points and categories are supposed to be defined. It does not become clear how the Levels A, AA and AAA map onto points, or how they are reflected in Bronze, Silver, and Gold. A purely numeric approach (no weighting for severity, for example) yields a meaningless number. Imagine a page with 20 input fields, one of which cannot be reached by keyboard: this yields 95% compliance, yet the user's task on this page cannot be completed. Is it possible to score/view compliance with respect to specific kinds of disability? It says: "For automated tests, the total instances of the condition are divided by the total passes for that condition. That gives a percentage response." A percentage is calculated as the number of passes divided by the number of instances, times 100. Take a page with 20 buttons, one of which can be handled with the keyboard. The calculation as given in the editor's draft yields 20 divided by 1, which results in 20, which you can't really call a percent. In fact the percentage is 1 divided by 20, which results in 0.05, that is 5%. What does "a guideline isn't used" mean? How can a guideline not be used? Is it not applicable? Not tested for conformity (and thus rated as non-conformant)? As there are percentages for single guidelines (requirements) as well as their sum (which people will call a percentage) as well as an overall percentage, confusion and misunderstanding are foreseeable. |
Alastair Campbell | ||
Laura Carlson | ||
John Foliot | no | It is unclear how points are applied, accrued, or lost. It is also unclear how points are determined against the 7+ functional needs articulated in EN 301 549. How would points be applied in the previous example of headings and heading levels (in the context of the 2 user groups I mentioned: non-sighted users versus users with cognitive disabilities)? For another example, look at "textual alternatives". If a web page has one image, and it has an appropriate text alternative, does it get one point? What about a page that has 50 images, all with appropriate text alternatives - does it get 50 points or 1 point? (Why? Why not?) If the page with 50 images has 49 images with appropriate text alternatives, does it get 1 point, .5 points, or something else? If option 3, what is the 'something else'? (49/50 = 98% - does that mean they get .98 points?) Are some points more valuable than others? (For example, should points for audio description be more valuable than ensuring the language of the document - to use two current Success Criteria? The current draft is silent on this.) |
Brooks Newton | ||
Peter Korn | no | The text should offer up Points & Levels as one approach to rating website accessibility – a thought through suggestion for strong consideration. But it isn’t the only way, nor will it be the best way for all sites. |
Jonathan Avila | no | Throughout the document it refers to Mandate 376. Mandate 376 is done - past - gone. The resulting document is EN 301 549; the text should refer to EN 301 549. How will the minimum for each FPC be calculated? User testing or mapping of the guidelines to the FPC? This needs clarification. The document says automated tests will be scored based on the number of conditions divided by the number of passed items. How will we know the number of passed items? For example, we would have to have sufficient techniques to know that something passes in order to say it passes. It's often hard to know whether something passes because it isn't present vs. it passes because you have done something correctly. The phrase "(If a guideline isn't used, then you don't divide by that one.)" is problematic. For example, if I don't have flashing content, do I pass the guideline or do I say it isn't used? Basically, if you say I don't get credit for passing it because I didn't use flashing content, then you are penalizing me compared to people who use flashing content but keep it under the threshold. They would get credit for that guideline passing in the overall score but I would not - yet my site with no flashing would actually be better for the user. |
Wilco Fiers | ||
Detlev Fischer | no | Profoundly unclear how this points thing will work. How does it all add up? "For automated tests, the total instances of the condition are divided by the total passes for that condition. That gives a percentage response." This does not take criticality into account (probably cannot), and is therefore deeply flawed. One important image-based control with no alt, and 100 teaser images with alt, result in a great score. The rubric scoring seems flawed too - can factors be skipped? Not all may be applicable? |
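Several responders above (Gundula Niemann, Jonathan Avila, Detlev Fischer) point out that the draft's wording - "total instances of the condition are divided by the total passes" - inverts the usual pass-rate formula. A minimal sketch of the calculation the commenters describe (the function name and the handling of zero applicable instances are the editor's assumptions, not part of the draft):

```python
def pass_rate(passes: int, instances: int) -> float:
    """Percentage of tested instances that pass: passes / instances * 100.

    Note: the draft's phrasing (instances / passes) inverts this ratio.
    How zero applicable instances should score is an open question in
    the survey comments; returning 100.0 here is only an assumption.
    """
    if instances == 0:
        return 100.0
    return passes / instances * 100


# Gundula Niemann's example: 20 buttons, only 1 operable by keyboard.
print(pass_rate(1, 20))  # 5.0, whereas the draft's wording would give 20
```

As the comments note, an unweighted rate like this ignores severity: one inoperable control on a critical workflow scores the same as a cosmetic miss on a decorative element.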
Do the breakpoints for sizes make sense?
Is there sufficient flexibility for different organizations of different sizes to meet their needs?
Choice | All responders |
---|---|
Results | |
yes | 4 |
no | 4 |
(7 responses didn't contain an answer to this question)
Responder | Sampling | Sampling Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | ||
Stefan Schnabel | yes | |
Aidan Tierney | yes | I like the idea of defining samples. It does open up the possibility of gaming the system or just missing issues on unsampled sections. But it is helping to solve a real-world problem in determining conformance on large sites. Sampling may need more research to see whether any recommended approach is reliable, or prone to missing issues or misuse. |
David MacDonald | ||
Charles Hall | yes | Although I would use a different term than “breakpoints” as this has another common meaning. |
Gundula Niemann | no | It is not suitable for a guideline to define how a creator should test. Indeed, recommendations and best practices are welcome, yet an obligation neglects experience and intrinsic knowledge about one's own product, structure, and processes. Continuous creation is also ignored - new parts are more likely to need tests than mature parts of a product. Focusing on "primary workflows plus high-traffic pages" still might leave disabled users behind, specifically expert users who work on specialized workflows. How is a 'page' defined? How are partly changed pages counted? |
Alastair Campbell | ||
Laura Carlson | ||
John Foliot | While most of the proposed breakpoints are pretty good - for example, for sites with 11-100 pages it currently suggests "10% of the pages need in-depth testing including manual testing" - this would equate to an 11-page site testing one page. Since Deque has some experience in this realm, this is likely not enough of a representative sample, and we would propose that for in-depth testing it would be more like "no fewer than 20 pages or screens". This is (should be) open for discussion. | |
Brooks Newton | ||
Peter Korn | no | While there is certainly flexibility, there is no reference to research or statistical models to support the particular breakpoints. They feel arbitrary. |
Jonathan Avila | no | I'd like to know how the 10% was calculated. I've heard from others who performed research that the magic number is more like a fixed number of x pages. So testing 10 pages from a site with 100 pages will likely miss many of the issues. The sampling should ideally pick pages with different types of components, media, etc. There is not much discussion of responsive variations of pages and different states of a page. WCAG 2.1 conformance requires all variations of a page to be tested, to make sure that when a low-vision user zooms in and sees a responsive version, that version is also accessible. |
Wilco Fiers | no | In practice, samples are definitely the right way to go about testing, but what matters is how confident you want to be in your result. A bigger sample means greater confidence. I think providing a model which lets site owners determine how much confidence they want in their conformance claim is probably better than picking one confidence level as appropriate everywhere globally. Sampling methods are highly complex. Are the numbers used here based on anything? If not, I think they should be. |
Detlev Fischer | No time to review tonight - sorry. |
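Peter Korn, Jonathan Avila, and Wilco Fiers all ask whether the sampling breakpoints rest on any statistical basis. As a point of comparison only (this formula appears nowhere in the Silver draft), a standard finite-population sample-size calculation - Cochran's formula with finite-population correction - gives quite different numbers from a flat 10%:

```python
import math


def sample_size(population: int, z: float = 1.96,
                margin: float = 0.10, p: float = 0.5) -> int:
    """Pages to sample for a given confidence level (z=1.96 ~ 95%) and
    margin of error, using Cochran's formula with finite-population
    correction. Illustrative only; the Silver draft specifies no such
    model, and page-level accessibility defects are not the simple
    random proportions this formula assumes.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)


print(sample_size(100))     # 50 - far more than the draft's 10% (10 pages)
print(sample_size(10_000))  # 96 - far less than 10% (1,000 pages)
```

This is consistent with Jonathan Avila's observation that research tends toward a roughly fixed number of pages rather than a percentage: the required sample grows quickly for small sites and plateaus for large ones.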
Does “Accessibility Supported” clearly convey the inclusive nature of Silver?
Is the expectation that browsers, user agents, and AT working together with content owners will provide the most accessible environment/outcome/support clearly conveyed?
Choice | All responders |
---|---|
Results | |
yes | 2 |
no | 6 |
(7 responses didn't contain an answer to this question)
Responder | Accessibility Supported | Accessibility Supported Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | ||
Stefan Schnabel | no | |
Aidan Tierney | no | This seems like a long-term goal. Removing AT testing could degrade quality of organizational content as issues will be missed. Perhaps something "should" work but if in fact it "does not" work that's what matters to users. Saying "Authors are not responsible for the browser or assistive technology" sounds like authors don't need to test with any of these. Perhaps an interim state could be: "Authors are not responsible for bugs or lack of support for WCAG techniques in browsers or assistive technology". This would mean issues users experience that are caused by bugs in AT or browser don't impact conformance. |
David MacDonald | ||
Charles Hall | yes | |
Gundula Niemann | no | The section does not talk about inclusion; it talks about responsibilities. It does not convey that working together will provide the most accessible outcome, nor does it clearly place responsibility on the various parties: author, user agent, and screen reader. How can the advice to user agents and screen readers be non-normative? Understanding the "Information Architecture prototype" to be this Editor's Draft, I understand that guidance on how their products could help authors meet Silver is planned to be added for browsers, user agents, and assistive technology. |
Alastair Campbell | ||
Laura Carlson | ||
John Foliot | no | |
Brooks Newton | ||
Peter Korn | ||
Jonathan Avila | no | By saying that accessibility support will be removed we are saying it's ok for an author to put an accessible name in the jon-acc-name property and that user agents and assistive technology will just have to be updated to figure out the jon-acc-name property is what I use on my web page to expose an accessible name -- but no one else uses that attribute. The point of accessibility supported is to make sure standard ways are used -- not that it has to be supported by all assistive technology. The comments in the document lead me to believe that there is a misunderstanding of the term. All current WCAG requires is 1 supporting assistive technology -- it's a pretty low bar and now we are lowering the bar even further. |
Wilco Fiers | no | I think dropping accessibility support will make Silver less inclusive than WCAG 2. Stating that authors aren't responsible for browsers is obviously true, but is also misleading. Authors are responsible for testing that what they authored actually works for their users. Accessibility support by no means says that what an author builds must work in every browser on the planet, with every possible assistive technology. If, as an author, you're happy for it to only work in one of them, that's fine; nothing in WCAG says that wouldn't be enough. Taking accessibility support out of Silver means content authors don't have to worry about whether their content works at all, anywhere, for anyone. |
Detlev Fischer |
Is Scoring and Conformance ready for wide review?
Choice | All responders |
---|---|
Results | |
yes | 3 |
no | 7 |
(5 responses didn't contain an answer to this question)
Responder | Scoring and Conformance Ready for Review | Scoring & Conformance Ready for Review Comments |
---|---|---|
Martin Jameson | yes | |
Wayne Dick | ||
Stefan Schnabel | no | |
Aidan Tierney | yes | Yes to review, but I have reservations about what it is saying. The authors must not be able to decide what sections to test or which sections need to be accessible and which don't. Conformance statements need to provide info so that users (or purchasers) get an accurate understanding of how and where the content meets or does not meet the requirements and needs of users. As written this section seems to open a way for authors to hide problems. |
David MacDonald | I'm really sorry, but I just don't know. It's such a weighty subject with such significant ramifications that I feel this is too speculative, and has a lot of hand waving... I like that it is simpler than previous iterations, but I wouldn't say it's ready for wide review until the conformance and scoring have reached a certain level of maturity. The world is watching. Silver has been under development for a couple of years, and the world wants to know what we are planning next. I'm perfectly fine if a new standard will change our conformance model, but I don't even see the basics here of how the planned point system will work. This just seems too speculative to me. If we are going to take a radical departure from WCAG like this, we have to convince the world that we are going in the right direction. My concern is that jurisdictions will lose confidence in our AG group and stop citing WCAG in policy and law if we put it out with this level of maturity. It may make their confidence in where we are going shaky. | |
Charles Hall | yes | I think feedback from a larger audience is useful. |
Gundula Niemann | no | see 13. through 19. Too many points remain open or unclear. |
Alastair Campbell | ||
Laura Carlson | ||
John Foliot | no | |
Brooks Newton | ||
Peter Korn | no | |
Jonathan Avila | no | As a person with a disability and a 20-year expert in this field, I find the fact that organizations can decide what is tested highly problematic. Probably every week I see customers skip testing something that they know is not accessible but is needed by the user. People also make assumptions that blind people won't use a feature, so they won't test it. Most people with disabilities aren't blind, and deciding to skip something because one group may not use a feature is problematic. People will always pick pages that are simple, or that they know will pass, in order to game the system. I hear repeatedly that organizations have no accessibility budget this year because they haven't received a lawsuit; when they receive a suit they will allocate money. Owner-cherry-picked pages are also not how things work in the legal system. Documents and pages outside of the owner's focus will almost certainly be brought up in a US court case, so following this methodology won't protect folks in the current climate. I propose that the evaluator work with the site owners to agree upon content for testing. |
Wilco Fiers | no | Not sure where to put this, but there is nothing in here about ACT. The ACT TF has learned a lot about how to write requirements that are repeatable and unambiguous, and it concerns me that there doesn't seem to be a place for this work in Silver (yet?). I also think it is worth pointing out that the ACT task force explicitly chose not to differentiate between manual and automated testing, while in Silver there seems to be the idea that those are two separate things with no intersection. The line between them has been blurred over the past few years, and is likely to continue to blur. |
Detlev Fischer | no | For reasons given above. |