IRC log of apa on 2019-09-18

Timestamps are in UTC.

23:36:14 [RRSAgent]
RRSAgent has joined #apa
23:36:14 [RRSAgent]
logging to https://www.w3.org/2019/09/18-apa-irc
23:36:19 [MichaelC]
meeting: APA at TPAC
23:36:25 [MichaelC]
rrsagent, this meeting spans midnight
23:36:30 [MichaelC]
rrsagent, make log world
23:36:37 [MichaelC]
agenda: https://www.w3.org/WAI/APA/wiki/Meetings/TPAC_2019
23:36:41 [MichaelC]
chair: Janina
23:41:34 [jamesn]
jamesn has joined #apa
23:57:55 [Roy]
Roy has joined #apa
23:59:09 [Roy]
Meeting: APA WG Meeting at TPAC 2019
00:01:00 [Irfan]
Irfan has joined #apa
00:01:04 [Irfan]
present+
00:01:22 [Irfan]
Chair: Janina
00:01:23 [CharlesL]
CharlesL has joined #apa
00:01:49 [CharlesL]
present+
00:02:11 [Roy]
present+
00:02:55 [Irfan]
Agenda: APA Task Forces & Next Steps Today and Tomorrow
00:03:15 [Irfan]
Scribe: Irfan
00:04:00 [CharlesL]
chair: Janina
00:04:19 [CharlesL]
Meeting: APA TPAC Meeting
00:04:49 [Roy]
RRSAgent, make logs public
00:04:55 [Manishearth]
Manishearth has joined #apa
00:04:58 [Roy]
RRSAgent, make minutes
00:04:58 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html Roy
00:05:09 [Joshue108]
Joshue108 has joined #apa
00:05:23 [Joshue108_]
present+
00:05:34 [Joshue108_]
Joshue108_ has joined #apa
00:05:43 [Irfan]
Janina: a question that can help us focus on a11y: to hear from all of us what kind of applications, what kind of immersive environments you are thinking of, in a working group.
00:05:46 [Joshue108_]
present+
00:05:52 [CharlesL]
Agenda: XR
00:05:53 [ada]
ada has joined #apa
00:06:15 [Irfan]
where are we going? sensible question?
00:06:16 [Roy]
Topic: APA Task Forces & Next Steps Today and Tomorrow
00:06:25 [kip]
kip has joined #apa
00:06:33 [Manishearth]
present+
00:06:36 [Irfan]
*Introduction*
00:06:36 [kip]
present+
00:07:21 [klausw_]
klausw_ has joined #apa
00:07:49 [ada]
q+
00:08:05 [klausw]
klausw has joined #apa
00:08:22 [Irfan]
ack ada
00:08:55 [ada]
ack ada
00:09:18 [bajones]
bajones has joined #apa
00:09:28 [bajones]
q+
00:10:05 [Irfan]
ada: if you are looking at the steady improvement of current hardware.. for the VR side.. a massive technology shift
00:11:01 [Irfan]
stuff like ML.. it didn't happen overnight..
00:11:08 [cabanier]
cabanier has joined #apa
00:11:12 [Joshue108_]
+q to ask if people understand some of the challenges around XR A11y
00:11:17 [cabanier]
present+
00:11:28 [Irfan]
software wise.. standards wise.. people are interested in WebXR..
00:11:30 [Matt_King]
Matt_King has joined #apa
00:11:39 [Matt_King]
present+
00:11:42 [Irfan]
we are building the foundation at the moment
00:11:59 [NellWaliczek]
NellWaliczek has joined #apa
00:12:11 [NellWaliczek]
present+
00:12:13 [Irfan]
for the work that's been done today.. hopefully we will see a lot more capability towards voice interfaces...
00:12:41 [jwer]
jwer has joined #apa
00:12:43 [Irfan]
speech synthesis and recognition.. long way to go
00:12:46 [NellWaliczek]
q?
00:12:50 [NellWaliczek]
q+
00:13:02 [Manishearth]
q?
00:13:03 [Irfan]
some of the thoughts
00:13:15 [CharlesL]
ack bajones
00:13:15 [Irfan]
ack bajones
00:13:27 [Manishearth]
RRSAgent, please draft the minutes
00:13:27 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html Manishearth
00:13:38 [Irfan]
bajones: going for a11y.. couple of paths that are clear..
00:13:43 [Irfan]
there are some that not very clear
00:14:06 [Irfan]
one area which is clear, mobility concerns that impact aria computing
00:14:16 [Irfan]
job simulator game example.
00:15:25 [Irfan]
kind of adjustments where you are making the user bigger and the environment smaller.. where you allow the user to manipulate the space..
00:15:38 [Irfan]
those are things that can be done in a way that is hands-off from the application point of view
00:15:54 [Irfan]
you could have all sorts of a11y gestures
00:15:56 [kip]
q+ To mention that at the UA level, we can implement low hanging fruit quickly that don't require spec changes. Eg, leanback mode. Maybe later add things such as mono audio modes.
00:16:31 [Irfan]
these are things you could have within the browser. that kind of a11y is going to emerge very well.
00:16:40 [Irfan]
it is going to have huge impact
00:16:52 [kip]
q+ To say that at the UA level, we can implement low hanging fruit quickly that don't require spec changes. Eg, leanback mode. Maybe later add things such as mono audio modes.
00:16:52 [Irfan]
it relates to other forms of a11y.. e.g. visual
00:17:47 [Irfan]
A-Frame, which is declarative by nature... where the base-level API doesn't have any hooks. lots of possibilities there
00:17:53 [Matt_King]
q+ what is aFrame?
00:18:19 [Irfan]
going further.. having something like descriptive audio..
00:18:35 [Matt_King]
q+
00:18:42 [Irfan]
I don't personally have a clear idea.. and I don't use any a11y tools. this may be an area where a lot of research is required
00:19:11 [cabanier]
q+
00:19:12 [Irfan]
if you want to tab through to determine the content.. or you want to navigate through the objects...
00:19:25 [Irfan]
janina: any one on the phone?
00:19:31 [Irfan]
* no one*
00:20:26 [Irfan]
Joshua: term XR covers many things
00:20:42 [Irfan]
broadly speaking, it makes a lot of sense..
00:21:13 [Irfan]
essentially.. it is visual rendering in 2D, which maps to semantic information architecture in the DOM.
00:21:31 [Irfan]
this gets a little fuzzy..
00:22:46 [Joshue108_]
ack me
00:22:46 [Zakim]
Joshue108_, you wanted to ask if people understand some of the challenges around XR A11y
00:22:46 [CharlesL]
q?
00:22:47 [Irfan]
current model when a SR interacts with the web page.. the forms mode thing, the user bypasses the a11y layer.. if they are interacting with or navigating something..
00:22:53 [CharlesL]
ack Joshue108
00:23:11 [Joshue108_]
What do we understand are the issues for existing AT users, where the AT is essentially outside the simulation?
00:23:18 [Irfan]
what are the issues in immersive environments?
00:23:36 [cabanier]
q-
00:23:38 [Irfan]
what are the issues with AT outside of the simulation?
00:23:47 [Irfan]
in the future AT could be inside the simulation
00:24:01 [Irfan]
AT could be embedded in the environment.
00:24:16 [Joshue108_]
What does the architecture of tomorrow look like?
00:24:21 [Joshue108_]
# Could AT be embedded in these future environments?
00:24:32 [cabanier]
q+
00:24:39 [Irfan]
another question: what does the architecture of tomorrow look like?
00:25:16 [Irfan]
bajones: are there any a11y tools that apply to the similar situations that we discussed?
00:25:29 [Irfan]
I don't think there are many parallels to the environment that we are talking about
00:25:33 [ada]
q+ to ask about developers generating something akin to the AOM from the scenegraph?
00:25:34 [CharlesL]
q?
00:25:39 [CharlesL]
q+ Janina
00:25:45 [Irfan]
janina: history about it.. would like to explain
00:25:48 [CharlesL]
ack NellWaliczek
00:27:08 [Irfan]
nell: we start with short term options, starting with the entry point.. we could encourage UAs.. example.. job simulator.. a browser level setting can make it easier
00:27:44 [Irfan]
input device.. target ray.. you have to reach the thing you are trying to reach. It turns out that it is not a pleasant experience
00:28:00 [Irfan]
there may be opportunity in lower level apis
00:28:29 [Irfan]
to enable alternate input devices that could accomplish similar features.
00:28:57 [Irfan]
there were two different discussions.. we need to split them out.
00:29:44 [Irfan]
perhaps the bigger benefit might not be the web specific way.. user agent can do something like job simulator.
00:29:49 [Joshue108_]
q?
00:30:25 [Irfan]
existing user interfaces.. often the user experience tells you that the thing you are trying to access is behind you. not sure how to think about making that easier.
00:30:33 [Irfan]
localizing sound is not useful to me.
00:30:40 [Irfan]
it seems some opportunity to dig in there
00:31:16 [Irfan]
things to consider.. take action on today.. that isn't necessarily web specific.
00:31:55 [Irfan]
how can we propose changes to the glTF (GL Transmission Format) file format?
00:32:22 [Irfan]
that's a declarative format and relies on extensions; an extension could integrate a11y
00:33:04 [Irfan]
you are asking.. what's the future?
00:33:07 [Irfan]
two interesting things
00:33:12 [Joshue108_]
q+ to mention our current draft XR user needs
00:33:40 [Irfan]
one is a fair amount of interest in eye tracking APIs at the platform level
00:33:56 [Irfan]
this could either be super helpful or super problematic as far as a11y concerns go
00:34:19 [Irfan]
if you can't see the content and can't detect objects.. you get false positives
00:34:44 [klausw]
q+
00:34:44 [Joshue108_]
q?
00:34:51 [CharlesL]
q+
00:34:55 [Irfan]
it's related to input sources.. having a target ray.. under the hood, changing the targeting ray.. there is an opportunity.. that's again in a few years.
00:35:15 [Irfan]
when we look out 5-10 years into the future... we are likely to see more declarative hybrid interfaces.
00:35:41 [Irfan]
immersive shell UI has the ability to place 3D objects all around the world.
00:36:37 [Irfan]
That would allow users to provide a more semantic approach.
00:37:15 [Irfan]
walking down the street.. you can query the menu that is digitally advertised...
00:37:42 [Irfan]
joshua: we need to start what people actually need.
00:37:56 [Irfan]
nell: I am talking about 5-10 years of a11y work.
00:38:16 [Irfan]
nell: people are going to wear those gadget 24x7 like they have phone
00:39:09 [Irfan]
as those things start to be more widely accepted and available.. there is interesting potential to get at the information.
00:39:21 [CharlesL]
q?
00:39:22 [Irfan]
that could be helpful in context of a11y
00:39:36 [Irfan]
joshua: its great as long as it is not vendor related
00:40:03 [Joshue108_]
q?
00:40:13 [Irfan]
nell: its very different than our imperative approach
00:40:20 [CharlesL]
ack kip
00:40:20 [Zakim]
kip, you wanted to mention that at the UA level, we can implement low hanging fruit quickly that don't require spec changes. Eg, leanback mode. Maybe later add things such as
00:40:23 [Zakim]
... mono audio modes. and to say that at the UA level, we can implement low hanging fruit quickly that don't require spec changes. Eg, leanback mode. Maybe later add things such as
00:40:23 [Zakim]
... mono audio modes.
00:40:37 [Irfan]
kip: I can speak to what happened in Firefox Reality.
00:40:53 [Irfan]
spent time with users to understand what they need.
00:41:13 [Irfan]
we discovered things.. what are the things that are actionable quickly?
00:43:15 [Irfan]
some people are sensitive about some behaviors of browsers. watching videos.. we don't project the video in the proper way.. if you are producing the video and add the subtitles in the video.. its
00:43:37 [Irfan]
when we show 360 degree video.. different presentation to left and right eye..
00:43:47 [Irfan]
nell: its like map projection
00:43:57 [Irfan]
kip: it is unreadable
00:44:06 [Irfan]
you may want to have that text around you..
00:44:30 [Irfan]
it needs to be sensitive to where you are looking at one particular point.. mid-term work that we want to look at
00:44:46 [Irfan]
reviewed document.. actionable thing that can be done quickly..
00:45:01 [Irfan]
allowing mixed audio to both ears..
00:45:08 [Joshue108_]
XAUR draft https://www.w3.org/WAI/APA/wiki/Xaur_draft
00:45:26 [Irfan]
we can discover what can be handled quickly
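The mono audio mode kip mentions is conceptually simple: average the two channels before output. A minimal sketch, assuming plain arrays of samples; the function name and representation are illustrative, not any real browser API:

```javascript
// Downmix a stereo pair of sample arrays to mono by averaging
// corresponding left/right samples. Illustrative only.
function downmixToMono(left, right) {
  const out = new Array(left.length);
  for (let i = 0; i < left.length; i++) {
    out[i] = (left[i] + right[i]) / 2;
  }
  return out;
}
```

A UA could apply something like this as an output option without any spec change, which is the "low hanging fruit" point above.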
00:45:33 [CharlesL]
ack Matt_King
00:46:14 [Irfan]
nell: A frame
00:46:21 [Joshue108_]
https://aframe.io/
00:46:39 [Irfan]
* Thanks Joshua*
00:47:47 [Irfan]
nell: super medium browser is based upon A frame.
00:48:37 [NellWaliczek]
to clarify, it's not built on aframe
00:48:42 [NellWaliczek]
just the same people working on both
00:49:08 [Joshue108_]
q?
00:50:00 [Irfan]
matt: inside or outside the AT experience; 30 years of context on how AT is going to work.
00:50:32 [Irfan]
AT would live inside the app. it would load the program with all the functions
00:51:00 [Irfan]
you can imagine the problem for the end users to find the new ways to experience.
00:51:08 [bajones]
q+ to ask further about A-frame/accessibility integration
00:52:05 [kip]
q+ To say that as aframe is based on custom elements, perhaps authors could start adding aria attributes
00:52:09 [Irfan]
applications that rely on a third party to try to build an a11y tree.. I wonder if XR space is an opportunity to get the best of both worlds
00:52:28 [Irfan]
where you can have AT built inside the app but a standardized API
00:53:17 [NellWaliczek]
q+ to give context about how 3D tech works
00:53:33 [Irfan]
building a SR that tries to read the world around you.. we do not have that concept today in any SR tech.
00:53:44 [Irfan]
it's a linear world and not 2D
00:54:08 [bajones]
q-
00:54:26 [CharlesL]
q-
00:54:36 [Irfan]
I would love for us, when we think about that API, to consider what is possible in a linear world that could be more ideal as a general purpose feature
00:55:17 [CharlesL]
ack cabanier
00:56:15 [Irfan]
cabanier: everything is declarative.. there is no reason that the a11y DOM cannot be used..
00:56:54 [Irfan]
in the short term, we do have a set of strict recommendations for applications to use...
00:57:04 [CharlesL]
ack ada
00:57:04 [Zakim]
ada, you wanted to ask about developers generating something akin to the AOM from the scenegraph?
00:57:41 [Irfan]
ada: one of the interesting things about aframe is the concept of a scene graph
00:58:53 [Irfan]
there was some kind of scene graph format that could be easily generated using a JS library.
00:59:05 [klausw]
(FYI, assume AOM = https://github.com/WICG/aom/blob/gh-pages/explainer.md )
00:59:31 [Judy]
Judy has joined #apa
00:59:34 [Irfan]
cabanier: we have 3D declarative frameworks. they can become part of a11y
00:59:52 [Irfan]
ada: there was an API that would let you submit JS object in a particular format.
01:00:10 [Joshue108_]
+1 to Ada
01:00:24 [CharlesL]
q?
01:00:30 [Irfan]
that might go a long way to providing a method.
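ada's idea of developers generating something akin to the AOM from the scene graph could be sketched like this. The node shape (`label`, `role`, `children`) is invented for illustration; no such format has been standardized:

```javascript
// Walk a hypothetical scene graph and emit an AOM-like tree of
// { role, label, children } entries that AT could consume.
// All property names here are illustrative assumptions.
function buildA11yTree(node) {
  return {
    role: node.role || "group",   // default role when the author gave none
    label: node.label || "",
    children: (node.children || []).map(buildA11yTree),
  };
}
```

The interesting standardization question is not the walk itself but agreeing on the vocabulary of roles and properties for 3D content, as discussed later in the session.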
01:00:42 [Irfan]
joshua: great idea
01:00:50 [Irfan]
would like to explore more with you folks
01:01:00 [CharlesL]
ack janina
01:01:22 [Irfan]
janina: one of the things in a11y is archeological digging.
01:02:02 [Irfan]
stuff can change live.. we put more booting, more inconvenience. making a calculation..
01:02:26 [Irfan]
my mentor was a guy who would track his eyes on keyboard...
01:02:32 [Judy]
q+
01:02:38 [Irfan]
there is a background there.. we need to dig into that...
01:02:49 [Judy]
q+ to comment on "single switch access"
01:02:52 [NellWaliczek]
Anyone who is interested, I'd be happy to schedule time this week to explain a bit about how the underlying 3D and XR platforms work
01:02:55 [NellWaliczek]
q-
01:03:04 [Irfan]
history of attempts to use early implementations that became more robust
01:03:37 [Irfan]
one of the most compelling presentation at CSUN in 1994..
01:04:18 [Irfan]
example of the wheelchair in presentation.
01:04:55 [Irfan]
skills we would rather learn in good controlled environment.
01:05:12 [CharlesL]
ack Joshue108
01:05:12 [Zakim]
Joshue108_, you wanted to mention our current draft XR user needs
01:05:18 [Irfan]
we need to dig something in archive
01:05:37 [Joshue108_]
ack me
01:05:42 [Irfan]
joshua: in history, many initiatives.. we can learn from the things that didn't work well and determine why
01:06:12 [Irfan]
exploring what we can do in the authoring environment is a brilliant idea.
01:06:37 [Irfan]
semantic scene graph, AOM are part of equation.. what can we do for user needs...
01:06:38 [Joshue108_]
https://www.w3.org/WAI/APA/wiki/Xaur_draft
01:06:52 [Irfan]
a11y means different things to different people
01:06:58 [CharlesL]
ack klausw
01:07:01 [klausw]
WebXR flexibility could be used for near-term a11y improvements.
01:07:11 [klausw]
Input uses abstractions, and a custom controller should be usable in existing applications by supplying "select" events and targeting rays which are decoupled from physical movement.
01:07:16 [klausw]
tuning outputs: fully disabling rendering may confuse applications, but a user agent could reduce framerate, set monocular mode, and/or set an extremely low resolution to decrease rendering cost.
01:07:21 [klausw]
Reference spaces and poses are UA controlled, the UA could have mobility options such as adjusting floor height, teleportation, or arm extension, even without specific application support.
01:07:27 [klausw]
There are ongoing discussions about DOM layers in WebXR, important to ensure that existing a11y mechanisms can remain functional. For example, avoid a "DOM to texture" approach where this information may get lost.
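klausw's point about decoupled input can be sketched as follows. The event shape and function names are illustrative only, not the real WebXR API (an actual UA would dispatch an `XRInputSourceEvent`); the point is that the application only sees abstract "select" events plus a targeting ray:

```javascript
// Sketch: an alternative input (e.g. a single switch) drives an existing
// application by supplying "select" events whose targeting ray is
// decoupled from physical controller movement. Shapes are illustrative.
function makeSelectEvent(targetRay) {
  return { type: "select", inputSource: { targetRayMode: "gaze", ray: targetRay } };
}

function onSwitchPress(currentGazeRay, dispatch) {
  // the ray can come from gaze, head pose, or any assistive pointer
  dispatch(makeSelectEvent(currentGazeRay));
}
```

An application listening only for select events never needs to know how the ray was produced, which is what makes this usable "in existing applications" as noted above.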
01:07:35 [CharlesL]
ack kip
01:07:35 [Zakim]
kip, you wanted to say that as aframe is based on custom elements, perhaps authors could start adding aria attributes
01:07:36 [Irfan]
klaus: ask me if you have any questions about what I have added here.
01:07:56 [Irfan]
kip: aframe is based upon custom elements
01:08:01 [CharlesL]
ack judy
01:08:01 [Zakim]
Judy, you wanted to comment on "single switch access"
01:08:05 [Irfan]
that could be one potential avenue to action
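kip's suggestion can be illustrated with a small markup sketch. Since A-Frame entities are custom elements living in the DOM, authors could experiment with ARIA attributes on them today; whether and how AT surfaces these in immersive mode is exactly the open question discussed in this session:

```html
<!-- Illustrative sketch only: A-Frame entities are custom elements,
     so ARIA attributes can be authored on them like any other element. -->
<a-scene>
  <a-box role="button" aria-label="Red box" tabindex="0" color="red"></a-box>
  <a-sphere role="img" aria-label="Blue sphere" color="blue"></a-sphere>
</a-scene>
```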
01:08:21 [Joshue108_]
q?
01:08:26 [Irfan]
judy: want to reflect back.. with regards to single switch access..
01:08:37 [joanie]
present+ Joanmarie_Diggs
01:09:01 [Irfan]
second: use case for mobility training..
01:09:37 [Irfan]
there is very interesting development in virtual environment.
01:10:24 [Irfan]
nell: I can make myself available to give you more information if you need.
01:11:34 [Irfan]
rrsagent, make minutes
01:11:34 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html Irfan
01:13:47 [Irfan]
joshua: we have a set of declarative semantics. we say how to mark up stuff... now we are in a situation in XR where we need declarative semantics.
01:14:34 [Irfan]
the only other thing that I saw recently is AOM, which could be used as a bridge between document semantics and application semantics.
01:14:49 [bajones]
q+ to discuss AOM
01:15:03 [Irfan]
what I am hearing from the feedback: it could be possible to populate the a11y tree.. we need to agree on what is needed.
01:15:57 [Irfan]
object oriented case.. where you have properties inherited... or encapsulated.. having the ability to understand
01:16:04 [CharlesL]
q+
01:16:12 [Irfan]
in terms of AOM, it seems an interim solution.
01:17:11 [CharlesL]
ack bajones
01:17:11 [Zakim]
bajones, you wanted to discuss AOM
01:17:42 [Irfan]
bajones: to talk about AOM.. this recently came up.. it is something like canvas
01:18:15 [Joshue108_]
q+
01:18:16 [Irfan]
linear stream of data.. what is most logical ordering of data?
01:18:20 [Joshue108_]
q-
01:18:30 [CharlesL]
q-
01:18:38 [Joshue108_]
q+ to say its not really about linearisation
01:19:15 [Irfan]
it is relatively trivial for us to produce some markup that has some volume and description in it.. you need one intelligent way to mark it up. <div> <div>..<div>
01:20:14 [Irfan]
AOM seems reasonable example..
01:20:28 [Joshue108_]
ack me
01:20:28 [Zakim]
Joshue108_, you wanted to say its not really about linearisation
01:20:42 [Irfan]
joshua: brilliant topic
01:21:31 [Irfan]
if you take regular HTML.. the example of a data table where you interrogate the data table.. users go where they want to go...
01:22:09 [Irfan]
that's a little bit matching your understanding... what you need is a description where you can read that a particular heading belongs to a particular field.
01:22:32 [Irfan]
we need to work on what kind of architecture looks like...
01:22:53 [Irfan]
matt: as a SR user, you still have a linear view even if you are in 3D
01:22:59 [NellWaliczek]
q+ to talk about input profiles
01:23:21 [Joshue108_]
q?
01:23:25 [Irfan]
if you move item by item on the webpage.. you do need an order that makes sense
01:23:43 [Joshue108_]
q+ to ask Ada more about her view of standardisation of semantic scene graphs
01:25:03 [Irfan]
*example of discovering objects in room*
01:25:22 [Irfan]
you don't have easy ways to control scanning in different ways..
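One way to picture the linearization Matt describes: sort the objects in a room into a sequential scanning order, e.g. by angular position around the user. Entirely illustrative; the object shape and sort key are assumptions, and choosing the "right" ordering is exactly the unsolved question here:

```javascript
// Produce one possible item-by-item traversal order for objects in a room:
// sweep around the user by angle in the horizontal plane.
// Object shape { label, x, z } and user shape { x, z } are illustrative.
function linearize(objects, user) {
  return [...objects]
    .sort((a, b) => {
      const angA = Math.atan2(a.z - user.z, a.x - user.x);
      const angB = Math.atan2(b.z - user.z, b.x - user.x);
      return angA - angB;
    })
    .map(o => o.label);
}
```

Other orderings (by distance, by salience, by author-declared reading order) would serve different scanning strategies, which is why users would need ways to control the scan.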
01:26:22 [CharlesL]
ack NellWaliczek
01:26:22 [Zakim]
NellWaliczek, you wanted to talk about input profiles
01:26:47 [Irfan]
nell: there is one other emerging API area that will be available in short term.. eye-tracking.
01:28:08 [Irfan]
you will see that within a couple of years at the platform level.
01:28:54 [Irfan]
there is an open source library that I am working on, "input profiles"
01:31:23 [NellWaliczek]
https://github.com/immersive-web/webxr-input-profiles is the library's github
01:31:30 [Irfan]
bajones: it is part of our input story called select events..
01:32:05 [Irfan]
users are doing primary input..
01:33:00 [Irfan]
* APA room*
01:33:07 [NellWaliczek]
This is the link to the test page I've been using to ensure the motion controllers behave consistently. Apologies that it is very barebones (and probably very poorly built because i'm not really a webdev...), but i'd be happy to take guidance on how to make it more usable
01:33:09 [NellWaliczek]
https://immersive-web.github.io/webxr-input-profiles/packages/viewer/dist/index.html
01:33:19 [Irfan]
* Thanks Nell
01:35:04 [Judy]
q+
01:35:30 [CharlesL]
q?
01:35:57 [CharlesL]
ack Joshue108
01:35:57 [Zakim]
Joshue108_, you wanted to ask Ada more about her view of standardisation of semantic scene graphs
01:37:40 [CharlesL]
ack Judy
01:41:28 [Roy]
RRSAgent, make minutes
01:41:28 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html Roy
01:42:10 [CharlesL]
CharlesL has joined #apa
01:43:01 [CharlesL]
rrsagent, draft minutes
01:43:01 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html CharlesL
01:56:09 [Roy]
Topic: AOM and XR
02:00:07 [CharlesL]
CharlesL has joined #apa
02:02:26 [ZoeBijl]
present+
02:02:33 [Irfan]
Irfan has joined #apa
02:02:37 [Irfan]
present+
02:05:06 [aboxhall_]
aboxhall_ has joined #apa
02:05:23 [tink]
present+ Léonie (tink)
02:05:29 [Judy]
Judy has joined #apa
02:06:48 [Irfan]
rrsagent, make minutes
02:06:48 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html Irfan
02:07:08 [sangwhan]
sangwhan has joined #apa
02:07:58 [Joshue108_]
scribe: Joshue108
02:08:01 [mhakkinen]
mhakkinen has joined #apa
02:08:17 [Joshue108_]
TOPIC: Pronunciation Approach
02:08:21 [chrishall]
chrishall has joined #apa
02:08:25 [Joshue108_]
MH: I've described the issue.
02:08:35 [Irfan]
https://github.com/w3c/pronunciation/wiki
02:08:42 [CharlesL]
scribe+ Joshue108
02:08:43 [Joshue108_]
In the education space, we have requirements for students to be exposed to text to speech.
02:09:04 [Joshue108_]
There are issues with things not being pronounced correctly, based on teaching style..
02:09:15 [Joshue108_]
e.g. certain pauses and emphasis etc.
02:09:23 [Irfan]
https://w3c.github.io/pronunciation/gap-analysis/
02:09:29 [Joshue108_]
There are no real solutions for this and we have done a gap analysis.
02:09:57 [Joshue108_]
There are various hacks, use of speech cues such as misuse of aria-label; rather fragile hacks.
02:09:59 [Irfan]
User scenarios document https://w3c.github.io/pronunciation/user-scenarios/
02:10:21 [Joshue108_]
We have done a gap analysis.
02:10:23 [Irfan]
Use case document https://w3c.github.io/pronunciation/use-cases/
02:10:27 [dbaron]
dbaron has joined #apa
02:10:40 [Joshue108_]
SSML is a W3C Recommendation; we don't have a way for authors to bring it into HTML.
02:10:41 [jcraig]
jcraig has joined #apa
02:11:14 [Joshue108_]
There were solutions such as inlining into HTML, or an attribute model which may work well for AT vendors.
02:11:29 [Joshue108_]
We also have other attribute based model.
02:11:43 [Joshue108_]
The question for TAG is which of these could be the most successful?
02:11:53 [zcorpan]
zcorpan has joined #apa
02:12:00 [zcorpan]
present+
02:12:10 [Joshue108_]
Talking to AT vendors, inlining is not so attractive etc; standardising the attribute model could work.
02:12:19 [Joshue108_]
Also scraping content could work.
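The attribute model under discussion could be sketched roughly as follows. The `data-ssml-*` attribute names are invented for illustration (the WG has not standardized any names); the output elements `phoneme` and `say-as` are real SSML elements:

```javascript
// Sketch: scrape hypothetical data-* pronunciation attributes into an
// SSML fragment that an AT or read-aloud tool could pass to a TTS engine.
// Attribute names are illustrative assumptions, not a proposed standard.
function toSSML(text, attrs) {
  let out = text;
  if (attrs["data-ssml-ph"]) {
    out = `<phoneme alphabet="ipa" ph="${attrs["data-ssml-ph"]}">${out}</phoneme>`;
  }
  if (attrs["data-ssml-say-as"]) {
    out = `<say-as interpret-as="${attrs["data-ssml-say-as"]}">${out}</say-as>`;
  }
  return out;
}
```

This is the "scraping" direction: the HTML stays plain, and consumers reconstruct SSML from attributes, avoiding the namespace issues of inlining SSML directly.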
02:12:32 [jcraig]
Q+ to get a refresher on IPA attr?
02:12:32 [CharlesL]
q?
02:12:34 [Joshue108_]
Irf: We have also provided this use case document, see URI.
02:12:39 [jcraig]
Ack me
02:12:39 [Zakim]
jcraig, you wanted to get a refresher on IPA attr?
02:12:46 [CharlesL]
ack jcraig
02:13:04 [Joshue108_]
JC: When we talked about an aria attribute for IPA pronunciation, does this do enough?
02:13:33 [Joshue108_]
MH: Pronunciation is a key aspect but there are issues of handling numeric values and other peculiar lexical values.
02:13:42 [Joshue108_]
Not handled by IPA pronunciation.
02:14:11 [Joshue108_]
LW: There are issues with classical English iambic pentameter, prosody etc.
02:14:23 [Joshue108_]
JC: IPA allows this.
02:14:38 [Joshue108_]
JS: The problem with loading this into ARIA means we don't get the uptake we want.
02:14:57 [Joshue108_]
JC: Could we do a combo of IPA and parts of CSS speech, speak as digits etc.
02:15:16 [Joshue108_]
MH: Right, a combination. Not everything is supported.
02:15:36 [Irfan]
https://w3c.github.io/pronunciation/gap-analysis/#gap-analysis
02:15:53 [Joshue108_]
Janina's point is that with the range of voice assistants, SSML type content could be beneficial to a growing number of users.
02:15:57 [Joshue108_]
This is not just an AT issue.
02:16:10 [Joshue108_]
There are other potential use cases.
02:16:21 [Joshue108_]
JS: We want to eventually make this a part of HTML.
02:16:24 [Joshue108_]
q?
02:16:49 [Joshue108_]
AB: Looking thru the use case doc, it does seem like a problem that goes beyond AT.
02:17:07 [Joshue108_]
Seems like a good problem to solve.
02:17:16 [Joshue108_]
What was the feedback you needed?
02:17:35 [Joshue108_]
MH: We have surveys etc out to the AT vendor community.
02:17:41 [Joshue108_]
Irf: Posts survey.
02:18:04 [Joshue108_]
JS: We want to finish the gap analysis, etc then lock it into HTML, as the way to solve these issues.
02:18:05 [Irfan]
https://www.w3.org/2002/09/wbs/110437/SurveyforAT/
02:18:25 [Joshue108_]
JS: HTML is now not just W3C, we've talked with WHATWG etc.
02:18:31 [Joshue108_]
Happy Leonie is here.
02:19:40 [Joshue108_]
<discussion on namespace solutions>
02:20:30 [Joshue108_]
MH: For some background ETS, Pearson and others are offering solutions where things are captured..
02:21:16 [Joshue108_]
MH: We know we have to author pronunciation to get content to students..
02:21:26 [Joshue108_]
We are missing mechanism to bring it into HTML.
02:21:32 [Joshue108_]
There is a legal imperative.
02:21:46 [CharlesL]
q?
02:21:49 [Joshue108_]
MH: This is a real problem.
02:22:10 [Joshue108_]
Language users with the read aloud tool for example, if pronunciation is inconsistent with general usage.
02:22:18 [Joshue108_]
Totally confusing for language learners.
02:22:35 [Joshue108_]
SP: Simon Pieters from Bocoup.. editor of HTML.
02:22:48 [Joshue108_]
If you want to propose, please file issue to HTML repo.
02:22:59 [Joshue108_]
JS: Yup.
02:23:14 [Joshue108_]
SP: You can start by presenting the problem, good way to get discussion going.
02:23:25 [Joshue108_]
I can talk about issues with the namespace.
02:23:38 [Joshue108_]
MH: No-one we have talked to really wants to go there.
02:23:57 [Joshue108_]
SP: They are technical implementation issues, the problem statement is the crucial bit.
02:24:06 [Joshue108_]
AB: Was going to suggest filing a review.
02:24:25 [aboxhall_]
https://github.com/w3ctag/design-reviews/issues/new/choose
02:24:28 [dbaron]
https://w3ctag.github.io/explainers
02:24:29 [Joshue108_]
SW: We require an explainer.
02:24:46 [aboxhall_]
Choose "Specification Review"
02:25:12 [Joshue108_]
AB: If there is an issue filed on HTML we can bring those issues together, any preference?
02:25:26 [Joshue108_]
SP: No, I need to know more about the space first.
02:25:45 [Joshue108_]
SP: Process wise, file an issue, explain use case- point to existing work.
02:25:56 [Joshue108_]
Will send a link.
02:26:04 [Joshue108_]
MH: We are vetting approaches.
02:26:14 [sangwhan]
SW: https://github.com/w3ctag/design-reviews/issues is where we take reviews. We require an explainer - effectively an elevator pitch in markdown. Here is a explainer for what an explainer is: https://w3ctag.github.io/explainers
02:26:21 [Joshue108_]
JS: The use case doc is past FPWD.
02:26:32 [Joshue108_]
There are directions that are apparent.
02:26:34 [zcorpan]
https://whatwg.org/faq#how-does-the-whatwg-work - whatwg process
02:26:43 [jihye]
jihye has joined #apa
02:26:46 [Joshue108_]
AB: We have a different definition of an explainer for TAG review.
02:26:50 [Joshue108_]
<gives overview>
02:27:21 [Joshue108_]
We like to understand the problem space and options you have considered.
02:27:24 [Joshue108_]
And discuss.
02:27:31 [Joshue108_]
JS: Sounds good?
02:27:32 [CharlesL]
q?
02:27:34 [Joshue108_]
<yup>
02:27:50 [Irfan]
rrsagent, make minutes
02:27:50 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html Irfan
02:27:53 [Joshue108_]
TOPIC: XR and AOM
02:27:58 [mhakkinen]
IMS QTI Spec (Question Test interoperability) https://www.imsglobal.org/question/index.html
02:28:47 [ZoeBijl]
scribe: ZoeBijl
02:28:47 [CharlesL]
q?
02:29:11 [mhakkinen]
IMS QTI usage of SSML defined here: https://www.imsglobal.org/apip/apipv1p0/APIP_QTI_v1p0.html
02:29:27 [ZoeBijl]
Josh: we had a very useful meeting with some folks from immersive web
02:29:38 [ZoeBijl]
there was a general need to give this some attention
02:29:46 [ZoeBijl]
general need to understand user needs
02:29:56 [ZoeBijl]
?? semantics
02:30:00 [ZoeBijl]
DOM generation
02:30:05 [ZoeBijl]
accessibility tree
02:30:12 [ZoeBijl]
and getting that to AT
02:30:33 [ZoeBijl]
There was also an acknowledgement of ??
02:30:43 [ZoeBijl]
things could be described declaratively
02:30:55 [ZoeBijl]
it’s not moved(?) into an accessibility API
02:31:16 [ZoeBijl]
there was an interesting discussion around making accessible ???
02:32:09 [CharlesL]
scribe+
02:32:25 [zcorpan]
scribenick: CharlesL
02:32:45 [CharlesL]
JC: AOM is not yet ready to be used as a temp solution today/tomorrow
02:32:51 [CharlesL]
virtual tree may be a while
02:33:00 [Irfan]
scribe: CharlesL
02:33:06 [CharlesL]
aria reflected attributes
02:33:28 [CharlesL]
Josh: we are making assumptions; if we took an agile approach, what does good look like?
02:33:38 [CharlesL]
Janina: what is practical.
02:33:44 [Judy]
Judy has joined #apa
02:33:45 [Joshue108_]
q?
02:33:56 [CharlesL]
Josh: if thats a blocker
02:33:56 [aboxhall_]
https://github.com/WICG/aom/blob/gh-pages/caniuse.md
02:34:38 [CharlesL]
… semantics for XR … can we?
02:35:45 [CharlesL]
Alice: what might be possible. what AT would be consuming this? really cool if they were developing AT
02:36:06 [CharlesL]
Josh: AT could be embedded in the environment.
02:36:21 [CharlesL]
… AT would be looking at an abstraction.
02:36:43 [CharlesL]
… core things the user needs to know (role / state / property)
02:37:12 [aboxhall_]
https://github.com/WICG/aom/blob/gh-pages/explainer.md#virtual-accessibility-nodes
02:37:23 [CharlesL]
James: on the roadmap, virtual trees.. for canvas, a JS API to expose the virtual tree under it
02:38:14 [CharlesL]
Alice: we could create the accessibility tree like DOM nodes; could you create a DOM tree that represents the XR env?
02:38:32 [CharlesL]
how would existing ATs interact with it, with different user interfaces?
02:39:02 [CharlesL]
Josh: JS calls on DOM window object and env. object could have children objects being separate nodes.
02:40:00 [CharlesL]
… I would see visiting them sequentially. linearization. in web page markup there are semantics; if marked up correctly, users can navigate it
02:40:21 [CharlesL]
Leonie: suggested an API to expose those things.
02:40:42 [CharlesL]
Josh: create a blob semantics that user can interact with it.
02:40:51 [tink]
Proposal for an API for immersive web https://github.com/immersive-web/proposals/issues/54#issuecomment-522341968
02:40:55 [CharlesL]
Alison: new AT based on the APIs
02:41:36 [CharlesL]
Jaimes: 3d space in the accessibility tree is a new VR/AR is a primitive. and use cases for that is not settled yet.
02:41:48 [CharlesL]
AR - utilitarian, VR - games etc.
02:42:03 [CharlesL]
some primitives we can put together but a solution is very early
02:42:50 [CharlesL]
Leonie: aria has Web Components UI controls, but in VR we can have anything, a labo to a dragon, so how can we figure out what we are dealing with.
02:44:16 [CharlesL]
Josh: DOM tree that emulates this room. a tree can be generated; the issues we have with a document rendered if there were async calls, the ajax call; but an immersive env. is a function of time, backwards/forwards depending on where the user is. A node will change dependent on the user, via API calls, as a function of time.
02:44:43 [CharlesL]
… moving beyond document object models: states as a function of time.
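Josh's point about node state as a function of time could be sketched roughly as follows; the class and method names here are hypothetical illustrations, not a real or proposed API:

```javascript
// Rough sketch: in an immersive scene, a node's exposed state is a
// function of time and of where the user is. All names here are
// hypothetical assumptions, not part of any real or proposed API.
class XRAccessibleNode {
  constructor(role, describe) {
    this.role = role;          // e.g. "landmark"
    this.describe = describe;  // (user, time) => string
    this.children = [];
  }
  // Unlike a static DOM description, this is re-evaluated per query.
  description(user, time) {
    return this.describe(user, time);
  }
}

// A door whose description depends on the user's position.
const door = new XRAccessibleNode("landmark", (user, time) =>
  user.x < 5 ? "door, ahead of you" : "door, behind you");
```

The same query returns different descriptions as the user moves, which is the "function of time" property being discussed.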
02:45:40 [CharlesL]
Alison: scope to a new vocabulary: fundamentally a tree and nodes; one would interact with that sequentially or in 2D space. How do you pick which node in 3D space?
02:45:52 [CharlesL]
Josh: we need a vocabulary in AOM lexicon of terms
02:46:03 [CharlesL]
Matt: agrees with josh
02:46:15 [CharlesL]
interaction: how do we read this tree? we don't know yet.
02:46:31 [CharlesL]
surfacing the info so AT could interact with it.
02:46:44 [Irfan]
s/Alison/Alice
02:46:56 [CharlesL]
Alison: that there is a tree, but how does the tree map to the immersive env.
02:47:22 [CharlesL]
Matt: where you are standing in that tree is something we would need to know.
02:47:31 [ZoeBijl]
q+ to say that I don’t see how flattening the 3D space would give the same experience
02:47:33 [CharlesL]
Josh: no
02:47:47 [CharlesL]
Alice: we need to know how the interaction would work.
02:48:10 [Irfan]
q?
02:48:29 [jcraig]
s/Jaimes: 3d space in the accessibility tree is a new VR/AR is a primitive. and use cases for that is not settled yet./jcraig: 3d space VR/AR could be a new accessibility primitive. and use cases for AR/VR are not yet settled./
02:48:33 [jcraig]
Q+
02:48:36 [CharlesL]
Josh: I don't think we need to worry about that. the interaction could be mediated by the AT
02:49:12 [Irfan]
q?
02:49:12 [jcraig]
Q+ to say I am not sure a “tree” is the right solution for 3D space
02:49:28 [CharlesL]
… some things that are AT responsibilities. Matt: different env.; updating that tree sequentially would give you that concept of movement.
02:49:53 [zcorpan]
q?
02:49:53 [CharlesL]
… various different nodes within that env. could be different sound.
02:49:58 [CharlesL]
effects.
02:50:08 [Irfan]
ack ZoeBijl
02:50:08 [Zakim]
ZoeBijl, you wanted to say that I don’t see how flattening the 3D space would give the same experience
02:50:10 [jcraig]
Q+ to say some of the vocab may be solved in an XR ARIA module (similar to DPUB’s)
02:50:24 [zcorpan]
q+ to discuss analogy with scrolling content into view
02:50:44 [CharlesL]
zoe: I am not sure flattening a 3D space would give AT user the same thing
02:51:04 [jcraig]
Ack me
02:51:05 [Zakim]
jcraig, you wanted to say I am not sure a “tree” is the right solution for 3D space and to say some of the vocab may be solved in an XR ARIA module (similar to DPUB’s)
02:51:06 [CharlesL]
… you can move in 2D space how are you going to do this with a linear tree
02:51:07 [Judy]
Judy has joined #apa
02:51:26 [CharlesL]
Jaimes: not sure a tree is good for 3D space as zoe points out
02:51:42 [CharlesL]
… obscuring moving behind objects etc.
02:51:56 [Irfan]
q?
02:52:14 [CharlesL]
… not convinced. Josh's vocabulary idea is something you can work on, like the DPUB module in ARIA; not sure how far that would get you.
02:52:58 [Joshue108_]
q?
02:53:08 [Irfan]
ack zcorpan
02:53:08 [Zakim]
zcorpan, you wanted to discuss analogy with scrolling content into view
02:53:10 [CharlesL]
in different environments it could be that just saying "boardroom" is enough, but these ideas are not worked out yet.
02:54:01 [CharlesL]
Simon: you can scroll in 2 dimensions; similar if you see in one direction vs. moving your head, like a scroll bar potentially.
02:54:10 [Matt_King]
q?
02:54:14 [Matt_King]
q+
02:54:49 [CharlesL]
Josh: Google is working on a large JSON model populated as needed; a nascent thing, virtual scrolling
02:55:09 [ZoeBijl]
s/you can move in 2D space how are you going to do this with a linear tree/A website is essentially a linear document. It might have branches in 2D which you can move about in. But all of the branches are connected. This doesn’t work the same way in 3D space. Things aren’t connected to each other—they’re not linear./
02:55:31 [CharlesL]
Josh: modal muting is the idea of cutting out the stuff you don't use, i.e. visual rendering; it would be much more responsive etc.
02:56:27 [chrishall]
https://en.wikipedia.org/wiki/Octree
02:56:29 [CharlesL]
Matt: prev. meeting: in every 3D library there is a concept like a tree, an octree
02:56:34 [chrishall]
https://en.wikipedia.org/wiki/Binary_space_partitioning
02:57:17 [Irfan]
ack matt
02:58:59 [CharlesL]
Josh: a user within an immersive space views from within that space. A scene graph is the representation used for expressing relationships; octrees are an optimization reducing the load on the output device
02:59:22 [CharlesL]
logic is captured in the form of a graph; a spanning tree can be deduced from that graph.
02:59:41 [CharlesL]
I don't believe the octree is the right representation; it has no semantic value.
03:00:00 [CharlesL]
octree only subdivides space.
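For reference, a minimal octree sketch illustrating the point just made: it recursively splits a cube into eight octants and carries no semantics, only spatial subdivision. Illustrative code only, not taken from any library:

```javascript
// Minimal octree sketch (illustrative only): partitions 3D space by
// recursively splitting a cube into 8 octants. Note there are no
// roles, names, or relationships -- it only subdivides space.
class Octree {
  constructor(cx, cy, cz, half, capacity = 4) {
    this.cx = cx; this.cy = cy; this.cz = cz; // cube center
    this.half = half;                          // half edge length
    this.capacity = capacity;                  // points before splitting
    this.points = [];
    this.children = null;                      // 8 sub-octants once split
  }
  contains(p) {
    return Math.abs(p.x - this.cx) <= this.half &&
           Math.abs(p.y - this.cy) <= this.half &&
           Math.abs(p.z - this.cz) <= this.half;
  }
  insert(p) {
    if (!this.contains(p)) return false;
    if (!this.children && this.points.length < this.capacity) {
      this.points.push(p);
      return true;
    }
    if (!this.children) this.split();
    return this.children.some((c) => c.insert(p));
  }
  split() {
    const h = this.half / 2;
    this.children = [];
    for (const dx of [-h, h])
      for (const dy of [-h, h])
        for (const dz of [-h, h])
          this.children.push(
            new Octree(this.cx + dx, this.cy + dy, this.cz + dz, h, this.capacity));
  }
}
```

Nothing in the structure says what any point *is*, which is why a semantic layer (scene graph or otherwise) would be needed on top.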
03:00:18 [Irfan]
rrsagent, make minutes
03:00:18 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html Irfan
03:00:58 [CharlesL]
Rossen - made previous comments
03:01:14 [CharlesL]
Rossen: an octree reduces down to a quadtree
03:01:32 [CharlesL]
Matt: strictly spatial is an octree
03:02:34 [CharlesL]
Simon: what do we want to represent to the user; is a tree or graph the best way to do this?
03:03:18 [CharlesL]
Janina: Nell mentioned that as you pass restaurants you may get the entire menu, or way to enter in that virtual env. to eat there.
03:04:25 [CharlesL]
Rossen: current AT observes 1 element at a time, which is fair on a web page, but in a 3D space you are observing a multitude of things happening, which doesn't fit the current single observability model.
03:05:16 [CharlesL]
simplest thing: how do you convey multiple things to the user at the same time?
03:05:54 [Joshue108_]
q?
03:06:34 [CharlesL]
Matt: if a person is coming down the street in the real world I hear the footsteps; if there are cars in the street I hear that. But if it is Janina walking towards me, the AT could say who is coming towards me, or that the vehicle on the street is bus #102; that is the information we could expose via AT
03:07:34 [CharlesL]
Josh: cherry-picking certain portions; a scrolled window pane we could map and time-sync with sound effects, iterated over time
03:09:01 [CharlesL]
Simon: describing virtual reality is similar to an actual person helping a blind person on the street: you would talk about one thing at a time, and a screen reader would similarly do the same.
03:09:43 [Joshue108_]
CharlesLeP: I used to work on a GPS system for blind users; when walking through a street you will hear announcements..
03:09:59 [Joshue108_]
These could be personalised and narrowed down to what was needed.
03:10:13 [Joshue108_]
You can also ping to find out what is around you.
03:10:58 [CharlesL]
Leonie: Microsoft Soundscape does this, with different pings and distance to where those objects are in reality.
03:11:47 [CharlesL]
Josh: a semantic scene graph and a tree representation could be beneficial
03:12:42 [tink]
W3C workshop on inclusive design for immersive web standards https://w3c.github.io/inclusive-xr-workshop/
03:12:57 [CharlesL]
Leonie: there is a w3c workshop on Nov 5/6 in Seattle
03:13:21 [CharlesL]
rrsagent, draft minutes
03:13:21 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html CharlesL
03:49:15 [MichaelC]
MichaelC has joined #apa
03:53:52 [Avneesh]
Avneesh has joined #apa
03:58:40 [CharlesL]
CharlesL has joined #apa
03:59:01 [romain]
romain has joined #apa
04:02:53 [zcorpan]
zcorpan has joined #apa
04:03:46 [CharlesL]
topic: Digital Publishing / APA
04:04:17 [stevelee]
stevelee has joined #apa
04:04:32 [Roy]
Roy has joined #apa
04:06:33 [Avneesh]
present+
04:08:47 [LisaSeemanKest_]
LisaSeemanKest_ has joined #apa
04:09:44 [marisa]
marisa has joined #apa
04:10:29 [CharlesL]
%s/sceen/scene/g
04:10:39 [CharlesL]
rrsagent, draft minutes
04:10:39 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html CharlesL
04:11:06 [Avneesh]
audio books: https://www.w3.org/TR/2019/WD-audiobooks-20190911/
04:11:29 [CharlesL]
Janina: I did not get this review done
04:11:35 [Avneesh]
publication manifest: https://www.w3.org/TR/2019/WD-audiobooks-20190911/#audio-accessibility
04:12:53 [CharlesL]
Avneesh: basic dpub manifest; audiobooks is a JSON structure with a default playlist and TOC in HTML, with page #'s, and uses media fragments: file name, chapter2.mp3, and the time sync. For a11y, but not accessible for the hearing impaired.
04:12:56 [LisaSeemanKest_]
trying to join the webex
04:13:27 [LisaSeemanKest_]
i can join after the host joins
04:13:34 [CharlesL]
… pointer to media file and sync with text representation
04:14:22 [CharlesL]
Marisa: we are exploring and prototyping it for video. we restricted to sync media text/audio, but there is room to grow sign language / braille etc.
04:15:59 [marisa]
s/we are exploring and prototyping it for video./we are exploring and prototyping sign language video sync
04:16:27 [marisa]
s/we restricted to sync media text/we restricted sync media to text
04:16:50 [marisa]
s/braille etc.//
04:17:17 [LisaSeemanKest_]
we are on the webex, but the host needs to join
04:17:42 [CharlesL]
Janina: the APA review should take us a week.
04:19:58 [CharlesL]
Avneesh: end of September would be fine; we want to go to CR by early October. i18n already done; privacy is going on right now and looks good.
04:20:01 [Irfan]
Irfan has joined #apa
04:20:11 [CharlesL]
Janina: I will make sure APA review is done by the end of Sept.
04:20:15 [Irfan]
present+
04:20:42 [romain]
present+
04:22:23 [marisa]
present+
04:31:44 [LisaSeemanKest_]
roy, no audio
04:32:06 [LisaSeemanKest_]
taking a break
04:33:07 [LisaSeemanKest_]
no
04:33:17 [LisaSeemanKest_]
hanging up. will try audio again
04:34:08 [LisaSeemanKest_]
ok, will call back after the brake
04:34:37 [LisaSeemanKest_]
will try a diffrent audio
04:39:42 [Judy]
Judy has joined #apa
04:47:46 [atai]
atai has joined #apa
04:48:54 [dbaron]
dbaron has left #apa
04:54:32 [LisaSeemanKest_]
present+
04:55:25 [LisaSeemanKest_]
i am tring to join
04:57:49 [LisaSeemanKest_]
i can not join without michael joining
04:57:54 [LisaSeemanKest_]
it needs a host
04:58:02 [LisaSeemanKest_]
q?
04:59:34 [MichaelC]
MichaelC has joined #apa
05:05:48 [Judy]
Judy has joined #apa
05:51:38 [sangwhan]
sangwhan has left #apa
05:52:01 [CharlesL]
CharlesL has joined #apa
05:58:00 [marisa]
marisa has joined #apa
05:59:15 [CharlesL]
CharlesL has left #apa
06:00:29 [atai]
atai has joined #apa
06:03:57 [Judy]
Judy has joined #apa
06:04:00 [marisa]
marisa has joined #apa
06:14:39 [zcorpan]
zcorpan has joined #apa
06:32:58 [LisaSeemanKest_]
waiting for michael to join webex
06:33:05 [LisaSeemanKest_]
agenda?
06:38:33 [zcorpan]
zcorpan has joined #apa
06:40:28 [zcorpan]
zcorpan has joined #apa
06:41:19 [Matt_King]
Matt_King has joined #apa
06:42:01 [Joshue108]
Joshue108 has joined #apa
06:42:14 [Roy]
Topic: FAST
06:42:31 [Joshue108]
present+
06:42:48 [LisaSeemanKest_]
LisaSeemanKest_ has joined #apa
06:42:50 [LisaSeemanKest_]
waiting for michale to join the webe
06:43:20 [Matt_King]
present+
06:45:16 [Joshue108]
JOC: I would like to understand how the FAST architecture relates to other specs and work that we have going on.
06:46:07 [Joshue108]
So what does good look like for the FAST, how do we need to change it?
06:46:12 [zcorpan]
zcorpan has joined #apa
06:46:13 [Joshue108]
JSON-LD used the FAST.
06:46:38 [Joshue108]
JSON-LD horizontal review request used it
06:46:46 [zcorpan]
zcorpan has joined #apa
06:46:53 [Joshue108]
FAST is a big list of user needs and a description of how they could be met.
06:46:57 [Joshue108]
scribe: Joshue108
06:47:09 [Joshue108]
MC: That was a bigger issue than I thought.
06:47:17 [Joshue108]
MC: Its there as a POC.
06:47:42 [Joshue108]
Around this time, checklists started to get traction, so all groups were starting to get requests for checklist.
06:47:51 [LisaSeemanKest_]
You can't start the meeting right now because we're having problems connecting to the WebEx service. Try again later.
06:47:51 [LisaSeemanKest_]
Error code: 0xa0010003
06:47:51 [LisaSeemanKest_]
Help us improve Cisco Webex Meetings by sending a problem report. Your report will be confidential.
06:47:56 [LisaSeemanKest_]
error message
06:47:57 [Joshue108]
It does have some good ideas: filtering relating to the tech you are developing.
06:48:06 [Joshue108]
There is a short and long form version of the checklist.
06:48:18 [Joshue108]
There are placeholders for links to relevant specs.
06:48:55 [Joshue108]
MC: Its a CSS styled thing but not really a functioning spec.
06:49:05 [Joshue108]
It is hard to tell how this is applicable tbh.
06:49:11 [Joshue108]
We did try the WASM thing.
06:49:20 [Joshue108]
We should regroup, recode it.
06:49:30 [Joshue108]
Should be Yes/No/NA for example.
06:49:41 [Joshue108]
And can be used as a list of relevant questions.
06:50:07 [Joshue108]
This would be easy to do with a DB, and output the checkboxes.
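The approach floated here (checklist items stored as data, output as Yes/No/N/A checkboxes) might look roughly like this; the item texts and function names are placeholder assumptions, not the real FAST checklist:

```javascript
// Sketch of the idea discussed: keep checklist items as structured data
// and generate a Yes/No/N/A form from it. Item texts below are
// placeholder assumptions, not actual FAST checklist content.
const items = [
  { id: "text-alternatives", text: "Non-text content has text alternatives" },
  { id: "media-navigation", text: "Time-based media can be navigated" },
];

const answers = ["Yes", "No", "N/A"];

function renderChecklist(items) {
  return items
    .map((item) => `${item.text}: ${answers.map((a) => `[ ] ${a}`).join("  ")}`)
    .join("\n");
}
```

Keeping the items as data is what would make the short-form and long-form versions, filtering by technology, and alternate outputs (HTML, GitHub issue templates) cheap to generate.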
06:50:26 [Joshue108]
MattK: Why do you need to do that, are there not other groups doing this?
06:50:35 [Joshue108]
MC: Because other groups do this differently.
06:50:47 [LisaSeemanKest_]
joined on my ohone. Thanks all
06:50:51 [LisaSeemanKest_]
q+
06:51:02 [Joshue108]
A better way to edit it, and output it etc would be good.
06:51:22 [Joshue108]
There is talk about a common infrastructure; not going to happen quickly.
06:51:31 [Joshue108]
MK: What happens to the output?
06:51:42 [Joshue108]
MC: There is a GH feature where you can store some data etc.
06:52:27 [Joshue108]
MK: Why not make an issue template and put them in there? GH has this out of the box.
06:52:41 [Joshue108]
MC: i18n does this.
06:53:19 [Joshue108]
<discussion on GH pros and cons>
06:53:27 [Joshue108]
ack Lis
06:53:48 [Joshue108]
https://w3c.github.io/apa/fast/
06:54:14 [Joshue108]
LS: There is another issue, may not be the right time.
06:54:33 [Joshue108]
My concern is that it is difficult to get things from COGA into WCAG.
06:54:56 [Joshue108]
This is possibly more important than WCAG, so this could have the hooks to make stuff happen.
06:55:22 [Joshue108]
So rather than focussing on WCAG etc for user needs, and with other specs.
06:55:38 [Joshue108]
They could be moved here, and could include more COGA issues.
06:55:59 [Joshue108]
This could help to not perpetuate a catch 22 situation.
06:56:19 [Joshue108]
There could also be more flexible technologies etc outside of COGA as well.
06:56:55 [Joshue108]
So it could be a way of addressing accessibility use cases.
06:57:22 [Joshue108]
As speech interaction is more prominent this will be more relevant.
06:57:42 [Joshue108]
So instead of UAAG 2.0 etc they could be moved here.
06:57:53 [Joshue108]
MC: On user needs we should be migrating towards that.
06:58:00 [Joshue108]
Longer term vision for sure..
06:58:10 [zcorpan]
zcorpan has joined #apa
06:58:13 [zcorpan]
zcorpan has joined #apa
06:58:17 [Joshue108]
FAST could be the repo of user needs with other specs in parallel.
06:58:26 [Joshue108]
We won't get there quickly or easily.
06:58:35 [Joshue108]
Silver is also moving in that direction.
06:58:54 [Joshue108]
Will take time to do something meaningful, our focus now is on the checklist for self review.
06:59:02 [Joshue108]
We need to do the checklist first.
06:59:02 [LisaSeemanKest_]
Q?
06:59:37 [Joshue108]
LS: There are problems from my perspective, I'm not seeing that the COGA patterns are being included here.
06:59:53 [Joshue108]
MC: Yes, it is incomplete. We also need to make it manageable.
07:00:21 [Joshue108]
MC: I'm not so clear on self review checklists etc, but we need to help groups get meaningful review on their spec.
07:00:30 [Joshue108]
The idea is that it should raise questions also.
07:00:38 [Joshue108]
With the relevant group, here APA.
07:00:54 [LisaSeemanKest_]
https://w3c.github.io/coga/content-usable/#appendix1
07:01:06 [LisaSeemanKest_]
Q+
07:01:22 [marisa]
marisa has joined #apa
07:01:22 [Joshue108]
JS: I'd rather we help other groups raise issues here rather than muddle things.
07:01:49 [Joshue108]
It seems we should help them build a correct UI, and then help them with specifics as they relate to COGA etc.
07:02:06 [Joshue108]
JS: Not asking them in very deep level of detail at this point.
07:02:49 [zcorpan]
zcorpan has joined #apa
07:02:52 [Joshue108]
MC: So yes, I was poking around the i18n checklist - Michael reads..
07:03:15 [Joshue108]
These are checklists but they are not easily maintained.
07:03:27 [Joshue108]
MK: There is an API for it.
07:03:33 [Joshue108]
<discussion of GH again>
07:03:46 [Joshue108]
MC: I'm not sure how robust this is.
07:04:16 [Joshue108]
They are rather detailed with many links
07:04:55 [LisaSeemanKest_]
q?
07:05:14 [Joshue108]
The question is how much focus do we want, how detailed it should be etc.
07:05:36 [Joshue108]
ack lisa
07:05:44 [LisaSeemanKest_]
https://w3c.github.io/coga/content-usable/#appendix1
07:06:02 [Joshue108]
LS: I've linked to the COGA patterns.
07:06:03 [Joshue108]
ttps://w3c.github.io/coga/content-usable/#appendix1
07:06:19 [Joshue108]
s/ttps://w3c.github.io/coga/content-usable/#appendix1/https://w3c.github.io/coga/content-usable/#appendix1
07:06:33 [Joshue108]
We can move this up to our things to do, can go on checklist.
07:06:37 [Joshue108]
Good for self review.
07:07:01 [Joshue108]
There needs to be a way for the things that are not in WCAG to still be supported.
07:07:04 [Joshue108]
q+
07:07:47 [Joshue108]
LS: User testing could also help, for SR users, low vision etc.
07:08:46 [Joshue108]
MC: This is for technology spec developers, your link relates to authors etc.
07:09:00 [Joshue108]
Some may be relevant but this is mostly relevant for spec people.
07:09:32 [Joshue108]
JS: What would you expect from JSON-LD.
07:09:37 [Joshue108]
LS: Don't know really.
07:09:48 [Joshue108]
JS: They are the ones who filled out the survey.
07:10:04 [Joshue108]
What about Immersive Web etc? We need to know what they are doing.
07:10:45 [Joshue108]
MC: JSON-LD is an abstract framework. We need to know what they are doing, we are being asked to produce generic user requirements.
07:11:00 [Joshue108]
It can be difficult to know how to provide checklist for some specs.
07:11:10 [Joshue108]
LS: How does this relate to WCAG?
07:11:18 [LisaSeemanKest_]
https://w3c.github.io/coga/content-usable/#objective-adapt-and-personalize
07:11:19 [Joshue108]
JS: It doesn't..
07:11:33 [Joshue108]
LS: I've looked at these slides.
07:11:54 [Joshue108]
MC: If you have looked at this from FAST, it should be possible to create stuff that relates to WCAG.
07:11:55 [Joshue108]
ack me
07:13:30 [Joshue108]
JOC: So how do these FAST requirements bubble into and impact on a spec? Thats something I'd like to know.
07:13:45 [Joshue108]
LS: These questions will need to be revised from a COGA perspective.
07:13:59 [Joshue108]
JS: We hear you, but don't see how that analysis fits in here.
07:14:29 [Joshue108]
JS: We can come back later to this.
07:14:43 [Joshue108]
MC: Something that would fit in is for users to indicate personalisation preferences.
07:14:52 [Joshue108]
We could reasonably add that.
07:15:03 [Joshue108]
Some of the other things could relate to the FAST checklist.
07:15:06 [Joshue108]
q+
07:15:08 [Joshue108]
ack me
07:15:12 [LisaSeemanKest_]
- i was looing at the intro of the doscument. my mistake
07:15:39 [Joshue108]
So what parts of the COGA requirements could be fixed by FAST, at the spec level?
07:15:41 [Joshue108]
MC: Right.
07:15:53 [Joshue108]
JS: So that's not user testing etc.
07:16:52 [Joshue108]
MC: I went to a meet that were looking at user testing etc, so these suggestions could be added.
07:17:04 [Joshue108]
JS: Asking these questions does make sense, but not diving into details.
07:17:19 [Joshue108]
JS: This is a semaphore
07:17:36 [Joshue108]
s/MC: I went to a meet that were looking at user testing etc, so these suggestions could be added./
07:18:03 [Joshue108]
MC: Checklist for best practices could point to resources and what to do, outline impacts etc.
07:18:29 [Joshue108]
The full framework could cover these things.
07:18:49 [Joshue108]
The full framework is the user needs, and a breakdown, best practices etc.
07:18:52 [LisaSeemanKest_]
fading in and out - want to hear this...
07:18:58 [Joshue108]
JS: Could be a lot like Silver.
07:19:41 [Joshue108]
MC: we are distilling a framework which we will undistil in the full framework.
07:19:51 [Joshue108]
JS: <riffs on how spec review may work>
07:19:58 [Joshue108]
q?
07:20:39 [Joshue108]
JS: We need something for a group, say second screen, who is writing an API, keeping devices in sync.
07:22:26 [Joshue108]
JOC: So these are like my approach to XAUR, separating technical use cases from user needs and requirements.
07:22:35 [Joshue108]
MC: I need to think about that.
07:22:47 [Joshue108]
q?
07:23:14 [LisaSeemanKest_]
i realy can not hear well. just mumbles
07:23:37 [Joshue108]
So if we can capture these at a high level, then this would make the authors' job easier.
07:24:01 [Joshue108]
MC: So I struggle with capturing it at a high level. It's an ok start.
07:24:03 [Joshue108]
q+
07:24:06 [Joshue108]
ack me
07:25:27 [Joshue108]
q?
07:25:37 [MichaelC]
-> http://w3c.github.io/apa/fast/checklist Draft FAST checklist
07:26:24 [Joshue108]
Ahh..
07:27:17 [Joshue108]
http://w3c.github.io/apa/fast/checklis
07:27:20 [Joshue108]
http://w3c.github.io/apa/fast/checklist
07:27:41 [Joshue108]
s/http://w3c.github.io/apa/fast/checklis/
07:29:19 [LisaSeemanKest_]
Q+
07:29:38 [Joshue108]
ack Lisa
07:29:46 [Joshue108]
JOC: This checklist is really good.
07:30:15 [Joshue108]
Very useful for specs doing technical stuff, use cases, that can fix things at the spec level.
07:30:59 [Joshue108]
LS: Some of this from our patterns could be supported by this.
07:31:11 [Joshue108]
MC: If we can break them down to technology features then yes.
07:31:36 [Joshue108]
LS: Some thing that provides direct navigable access to specific points in a media file.
07:31:41 [Joshue108]
MC: Right.
07:31:51 [Joshue108]
JS: I'd like to see a hierarchical list.
07:31:54 [Joshue108]
LS: Yes.
07:32:00 [Joshue108]
MC: One bit at a time.
07:32:29 [Joshue108]
q+ to ask if Lisa could review the checklist against the COGA patterns she is suggesting.
07:33:15 [Joshue108]
LS: time based media etc.
07:33:30 [Joshue108]
LS: I need to read it.
07:33:41 [Joshue108]
ack me
07:33:41 [Zakim]
Joshue, you wanted to ask if Lisa could review the checklist against the COGA patterns she is suggesting.
07:33:55 [Joshue108]
MC: Please do!
07:34:11 [Joshue108]
MC: It would be great if you could come up with some.
07:34:33 [Joshue108]
I want to identify checklist items that are missing etc, and want to identify categories that are missing.
07:34:47 [Joshue108]
<Michael review categories>
07:35:00 [Joshue108]
They feel a little weird but I find things I could group under them.
07:35:28 [Joshue108]
I'd like input on how useful they are, and what are missing, especially as we are including emerging tech.
07:35:32 [Joshue108]
JOC: I'll also review.
07:37:37 [Joshue108]
JS: Is Media XR a time based medium?
07:37:43 [Joshue108]
JOC: Interesting!
07:37:52 [Joshue108]
MC: I'd like to look at WoT also.
07:38:30 [Joshue108]
JOC: Content is aggregrated in WoT via sensors etc.
07:39:25 [Joshue108]
JOC: The stuff Lisa could feed into this would be really useful.
07:39:40 [Joshue108]
MC: We could do a bigger call for review.
07:40:00 [Joshue108]
Needs an explainer!
07:40:24 [Joshue108]
MC: Shall we take the checklist to the note track?
07:40:37 [LisaSeemanKest_]
i cant here well.
07:40:40 [Joshue108]
I say no to either the framework or checklist.
07:40:54 [LisaSeemanKest_]
but i think i get it if i focus on the checklist
07:40:59 [Joshue108]
The Framework is on hold, and the checklist needs attention.
07:41:15 [Joshue108]
MC: Implications of XR and other related tech.
07:41:27 [Joshue108]
We need accessibility people to have a look at this.
07:41:44 [Joshue108]
Nell will be in tomorrow to demo how 3D is authored etc today.
07:42:55 [Joshue108]
MC: There was a demo I saw with 3D type captions etc that was interesting.
07:44:02 [Joshue108]
MC: Next step is to request review, of what is missing, from accessibility people we know.
07:44:26 [Joshue108]
What it is and is not could be written up quickly - after TPAC, for two weeks say?
07:44:30 [Joshue108]
Could we ask?
07:44:35 [Joshue108]
MC: Yes.
07:44:55 [Joshue108]
JS: We can ask for it on the call Weds, and say we'd like feedback.
07:45:18 [Joshue108]
Then we can look at i18n thing borrow their code, either me or Josh.
07:45:36 [Joshue108]
MC: They have a generator - static doc, generator GH, scraping etc.
07:45:46 [Joshue108]
We could have a checklist by the end of the year.
07:45:51 [Joshue108]
JOC: Yes.
07:47:30 [Joshue108]
JS: I think we have a plan.
07:47:48 [Joshue108]
present+ Janina
07:47:58 [Joshue108]
rrsagent, make minutes
07:47:58 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html Joshue108
07:56:40 [CharlesL]
CharlesL has joined #apa
07:58:37 [CharlesL]
CharlesL has left #apa
08:00:06 [LisaSeemanKest_]
im back
08:02:47 [Judy]
Judy has joined #apa
08:06:48 [achraf]
achraf has joined #apa
08:07:59 [atai]
atai has joined #apa
08:08:08 [addison]
addison has joined #apa
08:08:30 [MichaelC]
topic: Correct identification of signed and symbolic (AAC)
08:08:34 [MichaelC]
scribe: MichaelC
08:11:09 [achraf]
present+
08:11:36 [addison]
present+
08:12:24 [MichaelC]
Bliss symbols being referenced from Personalization spec
08:12:36 [MichaelC]
raised if we should be referencing unicode
08:12:46 [MichaelC]
means getting the Bliss symbols into Unicode
08:12:58 [MichaelC]
that's apparently been explored before; unsure of outcome
08:13:21 [MichaelC]
Bliss people ok with the usage in Personalization, want to discuss with them the unicode thing
08:13:31 [MichaelC]
they were invited to this meeting but nobody seems present
08:13:40 [MichaelC]
Lisa was at AAC conference
08:14:03 [MichaelC]
people with certain kinds of brain damage benefit from symbols
08:14:06 [MichaelC]
there are libraries
08:14:30 [MichaelC]
js: would somebody use symbols to express?
08:15:05 [MichaelC]
lsk: they could
08:15:39 [MichaelC]
js: in media work we worked on supporting multiple alternate representations of media
08:17:04 [MichaelC]
lsk: challenge with sign languages
08:17:13 [MichaelC]
ag: sign languages are regional
08:17:21 [MichaelC]
used to fudge a region code
08:17:39 [MichaelC]
ISO 639-3 has 3-letter codes that cover many sign languages
08:18:04 [MichaelC]
lsk: you could have both a symbol set and a language
08:19:11 [MichaelC]
there was need to be able to identify both spoken regional language and sign regional language
08:19:18 [MichaelC]
ag: sounds like two separate things to tag
08:20:57 [MichaelC]
js: appear to be supporting that in media formats
08:21:43 [MichaelC]
ag: sounds like we might need to register additional subtags
08:22:24 [MichaelC]
where there are modalities beyond text
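A minimal sketch of the subtag point above, assuming the ISO 639-3 codes for individual sign languages ("ase" for American Sign Language, "bfi" for British Sign Language) and the collective "sgn" subtag; the mapping table is a tiny illustrative excerpt, not the IANA registry:

```javascript
// Illustrative sketch: ISO 639-3 gives individual sign languages their
// own codes, so BCP 47 can tag them directly ("ase", "bfi") rather than
// fudging a region onto the collective "sgn" subtag (e.g. "sgn-US").
// This table is a tiny excerpt for illustration, not the full registry.
const preferred = {
  "sgn-ase": "ase", // American Sign Language
  "sgn-bfi": "bfi", // British Sign Language
};

function canonicalize(tag) {
  const t = tag.toLowerCase();
  return preferred[t] || t;
}
```

Tagging the sign language and the spoken regional language as two separate things, as suggested here, falls out naturally once each has its own subtag.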
08:25:06 [MichaelC]
lsk: symbols sometimes work for a given language
08:25:48 [stevelee]
what is the meeting URL please? the one on the usual page says the meeting has ended
08:26:07 [stevelee]
present+
08:26:17 [MichaelC]
and cultural representation of symbols
08:26:40 [Roy]
/me https://www.w3.org/2017/08/telecon-info_apa-tpac
08:26:52 [stevelee]
i don't have that either - just a verbal that it was happening. looking
08:27:41 [achraf]
q+
08:28:16 [MichaelC]
in some languages there can be symbol overlap
08:28:30 [MichaelC]
or other cases different symbol sets within same language based on AT use
08:29:31 [MichaelC]
there can be copyrights on symbol sets, which is actually copyrighting someone's language
08:29:46 [MichaelC]
so we're using a more neutral set
08:31:23 [stevelee]
http://www.arasaac.org/
08:31:34 [LisaSeemanKest_]
i can not hear
08:32:12 [MichaelC]
amrai: localized for Qatar
08:33:04 [MichaelC]
use case of eye tracker user using symbols to construct phrase
08:33:32 [MichaelC]
cultural issues mean we can't use all symbols from other regions
08:33:38 [MichaelC]
need local versions
08:33:51 [MichaelC]
exploring whether there could be abstract ones suitable for all cultures
08:33:56 [MichaelC]
js: there´s a demo
08:34:05 [MichaelC]
using Bliss IDs to translate among set
08:34:18 [MichaelC]
ag: these are glyph variations, not semantic variations?
08:34:23 [MichaelC]
amrai: yes
08:34:43 [LisaSeemanKest_]
https://github.com/w3c/personalization-semantics/wiki/TPAC2019-WebApps-Symbols-Overview
08:35:24 [MichaelC]
deaf community says sign language is its own language with grammar etc
08:36:14 [MichaelC]
looking at finding mappings between sign languages
08:37:12 [LisaSeemanKest_]
q+
08:37:57 [Roy]
ac ac
08:38:09 [Roy]
ack ac
08:38:22 [MichaelC]
ag: sign language codes not related to spoken language of region
08:39:42 [stevelee]
please speak louder or closer to the mic - thanks
08:39:47 [LisaSeemanKest_]
https://github.com/w3c/personalization-semantics/wiki/TPAC2019-WebApps-Symbols-Overview
08:40:05 [LisaSeemanKest_]
https://mycult-5c18a.firebaseapp.com/
08:40:19 [r12a]
r12a has joined #apa
08:49:22 [achraf]
q+
08:49:47 [LisaSeemanKest_]
https://github.com/w3c/personalization-semantics/wiki/TPAC2019-WebApps-Symbols-Overview
08:54:08 [achraf]
https://youtu.be/68TbCVNQ3Z8?t=25
08:54:42 [achraf]
Library: http://madaportal.org/tawasol/en/symbols/
08:57:35 [MichaelC]
<demos of customizing symbol sets>
08:58:16 [hawkinsw]
hawkinsw has joined #apa
08:58:34 [LisaSeemanKest_]
lisa.seeman@zoho.com lisa seeman
08:58:42 [hawkinsw]
If you could point me to the source of the plugin, that would be great
08:58:50 [MichaelC]
s/lisa.seeman@zoho.com lisa seeman//
08:58:59 [hawkinsw]
I was able to see the zip file, but I was hoping that I could actually see the source code.
09:03:19 [MichaelC]
rrsagent, make minutes
09:03:19 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/18-apa-minutes.html MichaelC
09:03:25 [MichaelC]
rrsagent, bye
09:03:25 [RRSAgent]
I see no action items