
Personalization Semantics Explainer and Module 1 #476

Closed
lseeman opened this issue Feb 19, 2020 · 12 comments
Assignees
Labels
Resolution: unsatisfied The TAG does not feel the design meets required quality standards Topic: accessibility Venue: APA WG Accessible Platform Architecture

Comments

@lseeman

lseeman commented Feb 19, 2020

Hello TAG!

We are requesting a TAG review of Personalization Semantics Explainer and Module 1 - Adaptable Content. It contains supporting syntax that provides extra information, help, and context. We aim to make web content adaptable for users who function more effectively when content is presented to them in alternative modalities, including people with cognitive and learning disabilities.

Our wiki and getting-started page.

Security and Privacy self-review: PING review

GitHub repo: Repo and issues

Primary contacts (and their relationship to the specification):

- Charles LaPierre charlesl@benetech.org (Personalization Taskforce co-facilitator)
- Lisa Seeman lisa.seeman@zoho.com (Personalization Taskforce co-facilitator)
- Janina Sajka janina@rednote.net (APA chair)

Organization(s)/project(s) driving the specification: Accessible Platform Architectures (APA) Working Group - Personalization Taskforce

Key pieces of existing multi-stakeholder review or discussion of this specification: Taskforce discussions and stakeholder discussion (such as wiki vocabulary-in-content discussion and wiki symbol discussion.)

External status/issue trackers for this specification (publicly visible, e.g. Chrome Status): Beyond the discussion in the wiki, there are also Easyreading and a Firefox alpha from our demo at TPAC 2019 in Japan.

Special considerations: We would like advice on porting from the "data-" prefix to a more permanent syntax in CR. Should we suggest a prefix or just plain attributes? Note as more modules are produced we could have a sizable number of attributes.
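To make the two options concrete, here is a hypothetical sketch. The `purpose` semantic is taken from the current draft; the two post-`data-` attribute forms are illustrations of the question, not proposals:

```html
<!-- Current experimental syntax, using the data- prefix -->
<button data-purpose="email">Send</button>

<!-- Option A (hypothetical): a reserved prefix, as ARIA does with aria-* -->
<button adapt-purpose="email">Send</button>

<!-- Option B (hypothetical): plain attributes, one per semantic -->
<button purpose="email">Send</button>
```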

Further details:

We'd prefer the TAG provide feedback as:

🐛 open issues in our GitHub repo for each point of feedback

@alice alice self-assigned this Feb 24, 2020
@torgo torgo changed the title Requesting a TAG review of Personalization Semantics Explainer and  Module 1 Personalization Semantics Explainer and  Module 1 Feb 26, 2020
@hadleybeeman hadleybeeman self-assigned this Feb 26, 2020
@torgo torgo added the Venue: APA WG Accessible Platform Architecture label Feb 26, 2020
@torgo torgo added this to the 2020-03-03-f2f-wellington milestone Feb 26, 2020
@alice

alice commented Mar 4, 2020

Hi Lisa et al,

Thank you for bringing this work to us.

@torgo, @atanassov and I have had a chance to go through the explainer in our face to face meeting in Wellington. Unfortunately, we were unable to understand from the explainer what the proposed technology/API is.

Would it be possible to have a document more closely based on our Explainer template?

Specifically:

  • The explainer document should contain no boilerplate - it is an informal document; we don't need to know its formal status
  • The explainer should start with a brief (2-3 short sentences) summary of the proposed technology
  • It would be nice to have some concrete, specific examples of user problems - the Use Cases section does cover some user problems, but it's difficult to understand where one example ends and another begins.
    • Adding sub-headings to briefly describe each user problem/need would be helpful
    • The "send email" button example is interesting, but we couldn't really follow it - some mock-ups of different presentations would be helpful
    • The section on Symbols was similarly tricky to follow for someone who is unfamiliar with the topic. It would be helpful if it was more succinct, and potentially included some screenshots or mock-ups
  • The "Technology Summary" should be framed in terms of "Considered alternatives", and included later in the document (and we weren't sure what things like "Authoring" were doing in that list)
  • The proposed solution/solutions should be described clearly and with examples (including example code) framed around the user needs described.
  • The Vocabulary Structure section is unnecessary in this context, as is Stakeholders

It seems like perhaps this needs to be three smaller explainers, one for each module, possibly with one meta-explainer framing the larger picture?

We were left unclear on what the actual proposed technology was - the Explainer or Explainers should clearly explain the proposal, without requiring the reader to click through to supplementary material. Supplementary material may go into extra detail or more technical depth.

We are excited for this work, and would like to help review the proposal, but we were left scratching our heads a bit here - we hope the above feedback helps explain what would be more useful for us.

@alice

alice commented May 27, 2020

We got a draft explainer restructure via email, so we'll be looking at that at our virtual face-to-face.

@ruoxiran

> We got a draft explainer restructure via email, so we'll be looking at that at our virtual face-to-face.

Hi Alice,
Thank you for bringing this up, the draft explainer you got is prepared for review.
With regard to the Content module (AKA Module 1) document, which is mentioned in the Explainer: we want to publish it as a CR this time, but it is not fully ready for review yet, and we may have some updates to it in the next few days. I will leave a comment here when the Content document is ready. Thank you.

@alice

alice commented Sep 24, 2020

Our apologies, again, for taking so long to look at this.

@atanassov and I just looked at this at our virtual face to face. Here are our thoughts:

  • The research here is very thorough, and there are very real user needs addressed.
  • It would be good to see some examples of how the proposed vocabulary would be consumed, particularly for some of the less obvious values.
    • Specifically, what types of tools would consume these values, and how? Extensions? AAC tools? Specialised user agents?
  • It seems like this proposal is aimed at overlapping but distinct sets of users - it might be helpful to separate these out a bit, and explain how each set of users would be served by what subset of these vocabularies, and where the intersections are. It might make sense to have a few different technologies instead of one single standard.
  • We wonder if you've researched microformats as a potential avenue for providing the hints described in the vocabulary of the content module. That seems to be a potentially good fit for the types of annotations we're seeing here, and would avoid needing to reserve an attribute prefix in HTML.
    • EDIT: @tantek suggested this wouldn't be a good idea; he is going to give us more information next week.
  • We note that purpose seems to have a near 100% overlap with the autocomplete attribute - could whatever is intended to consume purpose use autocomplete instead?
  • distraction unfortunately seems largely geared at allowing users to remove content which is intended to generate revenue for the site - why would a site voluntarily make it easy for users to hide that content?
    • distraction=sensory is a slight exception to this, but has strong overlap with the prefers-reduced-motion media query - could its intended uses be covered by that existing technology?
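For comparison, the existing media query mentioned above already lets authors tone down motion without any new markup; a minimal sketch:

```css
/* The banner is static by default; animate it only for users who
   have not requested reduced motion at the OS or browser level. */
@media (prefers-reduced-motion: no-preference) {
  .banner {
    animation: pulse 2s ease-in-out infinite;
  }
}
```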

@snidersd

snidersd commented Nov 13, 2020

Edited to remove user needs not applicable to Module 1.

Thank you very much for your detailed review and feedback. We have restructured the explainer according to your comments (see the Explainer for Personalization Semantics: https://github.com/w3c/personalization-semantics/wiki/Explainer-for-Personalization-Semantics).

Regarding the tools that will be used, we do have a JavaScript solution as a proof of concept implementation that can be seen in this video prepared for TPAC: https://www.w3.org/2020/10/TPAC/apa-personalization.html
We also expect that this technology will be important for education publishers who use EPUB. With an HTML attribute, this information can be embedded within EPUB documents, where reader software or assistive technologies can use it to assist with learning. We also expect that custom or general browser extensions will be developed by third parties to assist the various disability groups, including updates to AAC software tools to take advantage of the additional semantic information. The addition of personalization information can also enhance machine learning, for example by providing alternatives to idioms ("it's raining cats and dogs") or other ambiguous terms. Tools would parse the HTML for the personalization attributes and make the necessary substitutions into the DOM or assistive tool based on the identified user group or individualized need.
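As a rough illustration of the substitution step described above (not the task force's actual implementation): a tool might rewrite annotated spans into a user's preferred symbol set. The attribute name `data-symbol` follows the current draft, while the concept IDs and image URLs here are made up:

```javascript
// Hypothetical mapping from symbol concept IDs to the images in one
// user's preferred symbol set (IDs and URLs are illustrative).
const userSymbolSet = new Map([
  ["13621", "https://example.org/symbols/cup.svg"],
  ["17511", "https://example.org/symbols/water.svg"],
]);

// Replace annotated spans in an HTML string with symbol images.
// A real tool would walk the live DOM rather than use a regex.
function substituteSymbols(html) {
  return html.replace(
    /<span data-symbol="(\d+)">([^<]*)<\/span>/g,
    (match, id, text) => {
      const src = userSymbolSet.get(id);
      return src ? `<img src="${src}" alt="${text}">` : match;
    }
  );
}
```

Annotations without a matching entry in the user's symbol set are left untouched, so unannotated or unknown content degrades gracefully.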

This module can also support a number of needs exposed by COGA's Content Usable (https://www.w3.org/TR/coga-usable/) such as:

  • I need (a version of) the interface to be familiar to me so that I recognize and know what will happen.

  • I need alternatives to spoken and written language such as icons, symbols, or pictures.

  • I need to sometimes avoid types of content, such as social media, distractions, noises or triggers.

  • I need personalized symbols or pictures that I can recognize immediately because learning new ones takes a long time.

  • I need the symbols and pictures that I know and recognize when I do not know a word.

  • I often need less content without extra options and features because at times I cannot function at all when there is too much cognitive overload.

  • I need symbols to help understand essential content, such as controls and section headings.

  • I need symbols that I understand and are familiar to me; recognizable, commonly used symbols; or personalizable.

  • I do not want distractions from my task.

To be honest we did not thoroughly investigate microformats. We are wary of relying on a specification that does not fall under the auspices of the W3C. While personalization may be a reasonable use case for this technology, it would slow down the development of the Personalization specification while working to advance microformats to meet our additional, diverse needs. We would also be very interested in hearing @tantek‘s input on this.

The I18N group raised the same question about the similarities between autocomplete and purpose. While autocomplete can only be used on form fields, the purpose values can be used on other element types. Where there is overlap with the autocomplete values, we have included the definition from the WCAG 2.1 Input Purposes for User Interface Components reference: https://www.w3.org/TR/WCAG21/#input-purposes. We can update the purpose values section of the content module to specify that.
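A hypothetical sketch of that distinction, using the current draft's `data-` prefix (the values follow the WCAG 2.1 input purposes; the phone number is a placeholder):

```html
<!-- autocomplete: valid only on form controls -->
<input type="text" name="phone" autocomplete="tel">

<!-- data-purpose (current draft syntax): can also annotate non-form content -->
<p>Call us: <span data-purpose="tel">+1 555 0100</span></p>
```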

With regard to your question about distractions: we do understand that advertising constitutes a critical revenue stream for many content providers. However, not all distractions are third-party advertisements, and some may be within the site's ability to allow the user agent to remove.
Further, the purpose of allowing users to hide (or systematically show and sequentially review) on-page advertising is simply to give these users the control other users already have over such content. A user without a disability can ignore the ad and complete the task. A user who cannot ignore it, or conveniently TAB past it, is forced to grapple with a stumbling block that prevents them from completing the task.
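As a hypothetical sketch of the kind of first-party content this covers (the attribute value shown is illustrative, not a fixed token from the draft):

```html
<!-- A first-party promotional banner the site permits user agents
     to hide, defer, or review on demand -->
<aside data-distraction="offer">
  Limited-time discount on our premium plan!
</aside>
```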

We believe users will choose to look at advertising because it's informative; it's an important mechanism for learning about options in life. By allowing users to control when and how they see ads, we let them avoid becoming frustrated by processes that prevent task completion, and let them see advertising as potentially useful information rather than a source of frustration. Surely a frustrated user will not follow up on the ad that caused the frustration?

With regard to the overlap with ARIA: aria-live covers distractions such as a ticking clock for screen reader users, making the interruptions less invasive, but it does not address the COGA use cases where the constant change distracts a person with ADHD (for example) who does not use a screen reader. Whether this content is essential, or whether it can be removed, is not addressed in ARIA.

Please let us know how we can further assist.

Thanking you in advance

The personalization task force

@alice

alice commented Nov 17, 2020

Thank you for following up!

I'm still mulling over much of your response, but I wanted to make a clarification to my comments on advertising and the distraction attribute:

I completely agree that there is a real user need around removing distractions, and that advertising which doesn't provide useful information to a user in their context might have little value to the advertiser, and certainly has no value to the user.

However, if an advertiser is creating a distracting ad, and a site is prepared to show it, it's because the distracting design of the ad serves some purpose which in turn serves the purpose of both the advertiser and the site. In that case, the user isn't so much choosing to look at it because it's informative, but because the design makes it difficult or impossible not to.

I can't imagine why the advertiser or the site would be prepared to make it easier for all users to evade such an ad by annotating it, without some way to make up the implied loss of revenue to the site, or some stronger incentive to outweigh the revenue concern.

In other words, unfortunately I think the distraction attribute is at risk of not being used on distracting ads in practice.

Also, I mentioned the prefers-reduced-motion media query, rather than ARIA - did you have a chance to evaluate the overlap in user needs there?

@plinss plinss added the Progress: pending external feedback The TAG is waiting on response to comments/questions asked by the TAG during the review label Dec 7, 2020
@plinss plinss removed this from the 2020-11-23-week milestone Dec 7, 2020
@snidersd

Dear Alice:

Thank you for raising Issue #476 asking about duplication between Distraction in our Module 1 specification draft and CSS' Media Queries 5 (MQ5).

We have spent a good deal of time considering this in our past several teleconferences and have come to the following conclusions:

While there is some overlap, distraction is much more comprehensive than MQ5. However, our examples have inadvertently focussed on the overlap, not the broader scope supported in our specification. We are creating new examples to remedy this.

We have decided to let the overlap stand for now rather than try to split out the items covered in MQ5. As this is all new technology, we feel this is more appropriate at this stage. Let's see how developers and content authors apply both our spec and MQ5.

Thanks again for bringing this overlap to our attention. We had not previously addressed it in our conversations, and we do need to be aware of the overlap and prepared to respond substantively when questions like yours come up.

Best,

Lisa and Sharon
Personalization TF Facilitators

@alice

alice commented Jan 27, 2021

Thank you for the update.

At this point, we're not sure how much help we can be here.

We would like to reiterate that the work you have collectively done identifying the problems to solve is extremely good, and that the problems are real and should be solved.

However, we think we need to say more clearly that the shape of the solution as an ARIA-like vocabulary doesn't seem like a good fit.

ARIA had the advantage of being based on an existing vocabulary (MSAA), and of being intended for consumption by a single class of technologies (screen readers, later expanded to other types of assistive technologies).

In contrast, this vocabulary seems like it would have a wide range of (programmatic) consumers, and it's not clear that those consumers have been brought in as stakeholders to this design process.

We would like to more strongly suggest that instead of trying to solve all of these problems with one solution, that you could take the excellent work already done identifying problems, and look at solving them individually, working with the relevant stakeholders. The relevant stakeholders may include users, authors, publishers, assistive technology creators, and potentially others as well.

This aligns with our general design principles: we advise all API authors to prefer simple solutions.

Some of these problems may be "stickier" than others; for example, trying to come up with a system to allow users to avoid unwelcome distraction from revenue-generating ads is going to involve buy-in from the publishers who need that revenue in order to keep operating as a business.

However, other problems, like annotating words with the relevant Bliss or other symbol, seem very well scoped in terms of user need, authoring responsibility, and assistive technology implementation requirements.

Not all of these solutions may even need to go through a standardisation process immediately, but may be better suited to incubation to allow prototyping and rapid iteration as a collaborative process with the various stakeholders, before settling on a standard.

We know this is not the feedback you were likely hoping for, but we would like to emphasise how rare it is that we get a proposal with the level of work put in to user needs as we have seen here, and that this is one of the most critical parts of the design process.

We would welcome the opportunity to continue working with you on better scoped proposals to address subsets of the user needs you have identified, including very early stage ideas in incubation.

@atanassov

One more follow-up to your comment about overlapping with MQ5. As you rightfully pointed out, there are already a number of media features intended to support user preferences. I would encourage you to open a new issue with the CSSWG in their repo.

Explaining the user problem, your proposal, and the overlap with MQ5 will get their attention and help resolve the set of overlapping features between Personalization and MQ5.

I would be happy to guide you through the process if you @-mention me in the issues.

@atanassov atanassov added Progress: propose closing we think it should be closed but are waiting on some feedback or consensus and removed Progress: pending external feedback The TAG is waiting on response to comments/questions asked by the TAG during the review labels Jan 27, 2021
@atanassov

After reviewing the overall issue during our "Kronos" VF2F, we resolved to close it. Thank you for working with us.

@torgo torgo added Resolution: unsatisfied The TAG does not feel the design meets required quality standards and removed Progress: propose closing we think it should be closed but are waiting on some feedback or consensus labels Sep 18, 2022
@plehegar

See APA/TAG @TPAC2022: Unblocking the WAI-Adapt Content Module 1.0 CR

@ruoxiran

ruoxiran commented Dec 1, 2022
