23:36:37 Agenda: https://www.w3.org/WAI/APA/wiki/Meetings/TPAC_2019
23:59:09 Meeting: APA WG Meeting at TPAC 2019
00:01:22 Chair: Janina
00:03:15 Scribe: Irfan
00:05:43 janina: A question that can help us focus on a11y is to hear from all of us: what kinds of applications, what kinds of immersive environments are you thinking of in your working group?
00:06:15 ... Where are we going? Is that a sensible question?
00:06:16 Topic: APA Task Forces & Next Steps Today and Tomorrow
00:06:36 *Introduction*
00:10:05 ada: If you are looking at the steady improvement of current hardware on the VR side, it is a massive technology shift.
00:11:01 ... Stuff like ML didn't happen overnight.
00:11:28 ... Software-wise and standards-wise, people are interested in WebXR.
00:11:42 ... We are building the foundation at the moment.
00:12:13 ... For the work being done today, hopefully we will see a lot more capability around voice interfaces.
00:12:43 ... Speech synthesis and recognition still have a long way to go.
00:13:03 ... Those are some of my thoughts.
00:13:38 bajones: Going for a11y, there are a couple of paths that are clear,
00:13:43 ... and some that are not very clear.
00:14:06 ... One area which is clear: mobility concerns that affect interaction.
00:14:16 ... The game Job Simulator is an example:
00:15:25 ... the kind of adjustment where you make the user bigger and the environment smaller, where you allow the user to manipulate the space.
00:15:38 ... Those are things that can be done in a way that is hands-off from the application's point of view.
00:15:54 ... You could have all sorts of a11y gestures.
00:16:31 ... These are things you could have within the browser; that kind of a11y is going to emerge very well.
00:16:40 ... It is going to have a huge impact.
00:16:52 ... It relates to other forms of a11y, e.g. visual.
00:17:47 ... A-Frame is declarative by nature, but the base-level API doesn't have any hooks. There are a lot of possibilities there.
00:18:19 ... Going further: having things like descriptive audio.
00:18:42 ... I don't personally have a clear idea, and I don't use any a11y tool. This may be an area where a lot of research is required:
00:19:12 ... if you want to tab through to determine the content, or you want to navigate through the objects...
00:19:25 janina: Anyone on the phone? *no one*
00:20:26 joshua: The term XR covers many things.
00:20:42 ... Broadly speaking, it makes a lot of sense.
00:21:13 ... Essentially it is visual rendering in 2D, which makes for a semantic information architecture in the DOM.
00:21:31 ... This gets a little fuzzy.
00:22:47 ... In the current model, when a SR interacts with a web page (the forms-mode thing), the user bypasses the a11y layer if they are interacting with or navigating something.
00:23:11 ... What do we understand are the issues for existing AT users, where the AT is essentially outside the simulation?
00:23:18 ... What are the issues in immersive environments?
00:23:47 ... In the future, AT could be inside the simulation.
00:24:01 ... AT could be embedded in the environment.
00:24:16 ... What does the architecture of tomorrow look like?
00:25:16 bajones: Are there any a11y tools that apply to a situation similar to the one we discussed?
00:25:29 ... I don't think there are many parallels to the environment we are talking about.
00:25:45 janina: There is history about this; I would like to explain.
00:27:08 nell: We start with short-term options, beginning at the entry point. We could encourage UAs; with the Job Simulator example, a browser-level setting can make things easier.
00:27:44 ... With an input device and a target ray, you have to reach the thing you are trying to reach. It turns out that is not a pleasant experience.
00:28:00 ... There may be opportunity in lower-level APIs
00:28:29 ... to enable alternate input devices that could accomplish a similar feature.
00:28:57 ... There were two different discussions; we need to split them out.
00:29:44 ... Perhaps the bigger benefit might not be the web-specific route; the user agent can do something like Job Simulator does.
00:30:25 ... In existing user interfaces, often the experience tells you that the thing you are trying to access is behind you. I am not sure how to think about making that easier;
00:30:33 ... localizing sound is not useful to me.
00:30:40 ... There seems to be some opportunity to dig in there.
00:31:16 ... Things to consider and take action on today aren't necessarily web-specific.
00:31:55 ... How can we propose changes to the glTF (GL Transmission Format) file format?
00:32:22 ... That's a declarative format, and it relies on extensions; a11y metadata could be integrated via an extension.
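[Ed. note: to make nell's glTF suggestion concrete, here is a minimal sketch of what an a11y extension on a glTF node might look like. The glTF 2.0 extensions mechanism (extensionsUsed, per-node extensions) is real; the extension name XYZ_accessibility and its fields are hypothetical.]

    {
      "asset": { "version": "2.0" },
      "extensionsUsed": ["XYZ_accessibility"],
      "nodes": [
        {
          "name": "exit_door",
          "mesh": 0,
          "extensions": {
            "XYZ_accessibility": {
              "role": "door",
              "label": "Exit to the hallway"
            }
          }
        }
      ]
    }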
00:33:04 nell: You are asking: what's the future?
00:33:07 ... Two interesting things.
00:33:40 ... One: there is a fair amount of interest in eye-tracking APIs at the platform level.
00:33:56 ... This could either be super helpful or super problematic as far as a11y is concerned:
00:34:19 ... if you can't see the content and can't detect objects, you get false positives.
00:34:55 ... It's related to input sources: having a target ray, and changing the targeting ray under the hood. There is an opportunity there, but again in a few years.
00:35:15 ... When we look 5-10 years into the future, we are likely to see more declarative hybrid interfaces.
00:35:41 ... An immersive shell UI has the ability to place 3D objects all around the world.
00:36:37 ... That would allow a more semantic approach.
00:37:15 ... Walking down the street, you could query the menu that is digitally advertised.
00:37:42 joshua: We need to start with what people actually need.
00:37:56 nell: I am talking about 5-10 years of a11y work.
00:38:16 nell: People are going to wear those gadgets 24x7, like they have phones now.
00:39:09 ... As those things become more widely accepted and available, there is interesting potential to get at the information.
00:39:22 ... That could be helpful in the context of a11y.
00:39:36 joshua: It's great as long as it is not vendor-specific.
00:40:13 nell: It's very different from our imperative approach.
00:40:37 kip: I can speak to what happened in Firefox Reality. At the UA level we can implement low-hanging fruit quickly that doesn't require spec changes, e.g. a leanback mode; maybe later add things such as mono audio modes.
00:40:53 ... We spent time with users to understand what they need.
00:41:13 ... We discovered which things are actionable more quickly.
00:43:15 ... Some people are sensitive about certain browser behaviors. Watching videos, we don't project the video in the proper way; if you produce the video and bake the subtitles into it,
00:43:37 ... then when we show 360-degree video, with a different presentation to the left and right eye...
00:43:47 nell: It's like map projection.
00:43:57 kip: ...it is unreadable.
00:44:06 ... You may want to have that text around you.
00:44:30 ... It needs to be sensitive to where you are looking at one particular point. That is mid-term work we want to look at.
00:44:46 ... We reviewed the document for actionable things that can be done quickly,
00:45:01 ... such as allowing audio to be mixed as mono to both ears.
00:45:08 XAUR draft: https://www.w3.org/WAI/APA/wiki/Xaur_draft
00:45:26 ... We can discover what can be handled quickly.
00:46:14 nell: A-Frame.
00:46:21 https://aframe.io/
00:47:47 nell: The Supermedium browser comes from the same people as A-Frame.
00:48:37 To clarify, it's not built on A-Frame;
00:48:42 ... just the same people working on both.
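[Ed. note: since A-Frame scenes are ordinary custom elements in the DOM, one low-tech experiment (kip suggests it later in this session) is simply to put standard ARIA attributes on the entities. A minimal sketch; whether any AT actually surfaces this inside an immersive session is exactly the open question being discussed.]

    <a-scene>
      <!-- role, tabindex and aria-label are standard HTML/ARIA;
           immersive-mode AT support for them is untested -->
      <a-box position="0 1 -3" color="#4CC3D9"
             tabindex="0" role="button"
             aria-label="Exit door: press to leave the room"></a-box>
    </a-scene>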
00:50:00 matt: Inside or outside the AT experience: 30 years of context for how AT technology works.
00:50:32 ... AT used to live inside the app; it would load the program with all its functions.
00:51:00 ... You can imagine the problem for end users of finding their way into new experiences.
00:52:09 ... Applications rely on a third party trying to build an a11y tree. I wonder if the XR space is an opportunity to get the best of both worlds,
00:52:28 ... where you have AT built inside the app but a standardized API.
00:53:33 ... Building a SR that tries to read the world around you: we do not have that concept today in any SR technology.
00:53:44 ... It's a linear world, not 2D.
00:54:36 ... I would love us, when we think about that API, to consider what is possible in a linear world and what could be more ideal as a general-purpose feature.
00:56:15 cabanier: Everything is declarative; there is no reason the a11y DOM cannot be used.
00:56:54 ... In the short term, we do have a set of strict recommendations for applications.
00:57:41 ada: One of the interesting things about A-Frame is the concept of a scene graph.
00:58:53 ... There are scene graph formats that can be easily generated using a JS library.
00:59:05 (FYI, assume AOM = https://github.com/WICG/aom/blob/gh-pages/explainer.md )
00:59:34 cabanier: We have 3D declarative frameworks; they can become part of a11y.
00:59:52 ada: There could be an API that would let you submit a JS object in a particular format, something akin to the AOM generated from the scene graph.
01:00:10 +1 to Ada
01:00:30 ... That might go a long way towards providing a method.
01:00:42 joshua: Great idea.
01:00:50 ... I would like to explore that more with you folks.
01:01:22 janina: One of the things in a11y is archaeological digging.
01:02:02 ... Stuff can change live; we add more barriers, more inconvenience.
01:02:26 ... My mentor was a guy who would track his eyes on a keyboard.
01:02:38 ... There is a background there; we need to dig into it.
01:02:52 Anyone who is interested, I'd be happy to schedule time this week to explain a bit about how the underlying 3D and XR platforms work.
01:03:04 janina: There is a history of attempts to use early implementations that later became more robust.
01:03:37 ... One of the most compelling presentations at CSUN in 1994
01:04:18 ... was an example with a wheelchair:
01:04:55 ... skills we would rather learn in a good, controlled environment.
01:05:18 ... We need to dig things out of the archive.
01:05:42 joshua: In that history there are many initiatives; we can learn from the things that didn't work well and determine why.
01:06:12 ... Exploring what we can do in the authoring environment is a brilliant idea.
01:06:37 ... A semantic scene graph and the AOM are part of the equation. What can we do for user needs? See our current draft XR user needs:
01:06:38 https://www.w3.org/WAI/APA/wiki/Xaur_draft
01:06:52 ... A11y means different things to different people.
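[Ed. note: a rough sketch of ada's suggestion, generating something AOM-like from the scene graph. The AccessibleNode API below follows the shape of the virtual accessibility nodes in the AOM explainer, which was an experimental proposal that never shipped in browsers; the scene-graph shape (children, name, userData) and the mirrorSceneGraph helper are invented for illustration.]

    // Walk a (hypothetical) scene graph and mirror it into virtual
    // accessibility nodes, roughly in the style of the AOM explainer.
    function mirrorSceneGraph(sceneNode, accessibleParent) {
      for (const object of sceneNode.children) {
        // AccessibleNode is the experimental AOM proposal, not a shipped API.
        const node = new AccessibleNode();
        node.role = object.userData.role || "group"; // vocabulary TBD
        node.label = object.userData.label || object.name;
        accessibleParent.appendChild(node);
        mirrorSceneGraph(object, node);
      }
    }

    // e.g. mirrorSceneGraph(scene, document.body.accessibleNode);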
01:07:01 klausw: WebXR flexibility could be used for near-term a11y improvements.
01:07:11 ... Input uses abstractions, and a custom controller should be usable in existing applications by supplying "select" events and targeting rays which are decoupled from physical movement.
01:07:16 ... Tuning outputs: fully disabling rendering may confuse applications, but a user agent could reduce framerate, set monocular mode, and/or set an extremely low resolution to decrease rendering cost.
01:07:21 ... Reference spaces and poses are UA-controlled; the UA could offer mobility options such as adjusting floor height, teleportation, or arm extension, even without specific application support.
01:07:27 ... There are ongoing discussions about DOM layers in WebXR; it is important to ensure that existing a11y mechanisms can remain functional. For example, avoid a "DOM to texture" approach where this information may get lost.
01:07:36 klausw: Ask me if you have any questions about what I have added here.
01:07:56 kip: A-Frame is based upon custom elements, so perhaps authors could start adding ARIA attributes.
01:08:05 ... That could be one potential avenue for action.
01:08:26 judy: I want to reflect back with regard to single-switch access.
01:09:01 ... Second, a use case for mobility training:
01:09:37 ... there are very interesting developments in virtual environments.
01:10:24 nell: I can make myself available to give you more information if you need it.
01:13:47 joshua: We have a set of declarative semantics; we tell people how to mark up stuff. Now we are in a situation in XR where we need declarative semantics.
01:14:34 ... The only other thing I saw recently is the AOM, which could be used as a bridge between document semantics and application semantics.
01:15:03 ... What I am hearing from the feedback is that it could be possible to populate the a11y tree; we need to agree on what is needed.
01:15:57 ... An object-oriented case, where you have properties inherited or encapsulated, with the ability to understand them.
01:16:12 ... In terms of the AOM, it seems an interim solution.
01:17:42 bajones: On the AOM: this recently came up; it is something like canvas:
01:18:16 ... a linear stream of data. What is the most logical ordering of the data?
01:19:15 ... It is relatively trivial for us to produce some markup that has some volume and description in it; you need an intelligent way to mark it up.
01:20:14 ... The AOM seems a reasonable example of this.
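[Ed. note: bajones's canvas analogy points at a real HTML pattern: a canvas element exposes its fallback DOM content to AT, so an app that only paints pixels can still publish a structured, linearized description. A minimal sketch of that existing technique; the scene described is invented.]

    <canvas id="xr-preview" width="800" height="600">
      <!-- Fallback content: not rendered on screen, but exposed to AT
           as the accessible version of what the canvas is drawing. -->
      <ul>
        <li tabindex="0">Workbench, one meter ahead</li>
        <li tabindex="0">Red toolbox on the workbench</li>
        <li tabindex="0">Exit door, behind you to the left</li>
      </ul>
    </canvas>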
01:20:42 joshua: Brilliant topic.
01:21:31 ... If you take regular HTML, there is the example of the data table, where you interrogate the data table and users go where they want to go.
01:22:09 ... That is a bit like matching your understanding: what you need is a description where you can read that a particular heading belongs to a particular field.
01:22:32 ... We need to work out what that kind of architecture looks like.
01:22:53 matt: As a SR user, you still have a linear view, even if you are thinking in 3D.
01:23:25 ... If you move item by item on a web page, you do need an order that makes sense.
01:25:03 *example of discovering objects in a room*
01:25:22 ... You don't have easy ways to control scanning in different ways.
01:26:47 nell: There is one other emerging API area that will be available in the short term: eye-tracking.
01:28:08 ... You will see that within a couple of years at the platform level.
01:28:54 ... There is an open-source library I am working on, "input profiles".
01:31:23 https://github.com/immersive-web/webxr-input-profiles is the library's GitHub.
01:31:30 bajones: It is part of our input story, called select events;
01:32:05 ... users doing primary input.
01:33:07 nell: This is the link to the test page I've been using to ensure the motion controllers behave consistently. Apologies that it is very barebones (and probably very poorly built, because I'm not really a webdev), but I'd be happy to take guidance on how to make it more usable:
01:33:09 https://immersive-web.github.io/webxr-input-profiles/packages/viewer/dist/index.html
02:07:58 Scribe: Joshue108
02:08:17 Topic: Pronunciation Approach
02:08:25 MH: I've described the issue.
02:08:35 https://github.com/w3c/pronunciation/wiki
02:08:43 ... In the education space, we have requirements for students to be exposed to text-to-speech.
02:09:04 ... There are issues with things not being pronounced correctly, based on teaching style,
02:09:15 ... e.g. certain pauses and emphasis etc.
02:09:23 https://w3c.github.io/pronunciation/gap-analysis/
02:09:29 ... There are no real solutions for this, and we have done a gap analysis.
02:09:57 ... There are various hacks in use, speech cues such as misuse of aria-label; rather fragile hacks.
02:09:59 User scenarios document: https://w3c.github.io/pronunciation/user-scenarios/
02:10:23 Use case document: https://w3c.github.io/pronunciation/use-cases/
02:10:40 MH: SSML is a W3C Recommendation, but we don't have a way for authors to bring it into HTML.
02:11:14 ... There were solutions such as inlining SSML into HTML, or an attribute model, which may work well for AT vendors.
02:11:29 ... We also have another attribute-based model.
02:11:43 ... The question for the TAG is: which of these could be the most successful?
02:12:10 ... Talking to AT vendors, inlining is not so attractive; standardising the attribute model could work.
02:12:19 ... Also, scraping content could work.
02:12:34 Irf: We have also provided these use case documents; see the URIs above.
02:13:04 JC: When we talked about an ARIA attribute for IPA pronunciation, does that do enough?
02:13:33 MH: Pronunciation is a key aspect, but there are issues with handling numeric values and other peculiar lexical values,
02:13:42 ... not handled by IPA pronunciation.
02:14:11 LW: There are issues with classical English iambic pentameter, prosody etc.
02:14:23 JC: IPA allows this.
02:14:38 JS: The problem with loading this into ARIA is that we don't get the uptake we want.
02:14:57 JC: Could we do a combination of IPA and parts of CSS Speech, speak-as digits etc.?
02:15:16 MH: Right, a combination. Not everything is supported.
02:15:36 https://w3c.github.io/pronunciation/gap-analysis/#gap-analysis
02:15:53 MH: Janina's point is that with the range of voice assistants, SSML-type content could be beneficial to a growing number of users.
02:15:57 ... This is not just an AT issue;
02:16:10 ... there are other potential use cases.
02:16:21 JS: We want to eventually make this a part of HTML.
02:16:49 AB: Looking through the use case doc, it does seem like a problem that goes beyond AT.
02:17:07 ... Seems like a good problem to solve.
02:17:16 ... What was the feedback you needed?
02:17:35 MH: We have surveys out to the AT vendor community.
02:17:41 Irf: Posts survey.
02:18:04 JS: We want to finish the gap analysis etc., then lock it into HTML as the way to solve these issues.
02:18:05 https://www.w3.org/2002/09/wbs/110437/SurveyforAT/
02:18:25 JS: HTML is now not just W3C; we've talked with the WHATWG etc.
02:18:31 ... Happy Leonie is here.
02:20:30 MH: For some background, ETS, Pearson and others are offering solutions where things are captured.
02:21:16 MH: We know we have to author pronunciation to get content to students.
02:21:26 ... We are missing a mechanism to bring it into HTML.
02:21:32 ... There is a legal imperative.
02:21:49 MH: This is a real problem.
02:22:10 ... For language learners using a read-aloud tool, for example, if pronunciation is inconsistent with general usage,
02:22:18 ... it is totally confusing.
02:22:35 SP: Simon Pieters from Bocoup, editor of HTML.
02:22:48 ... If you want to propose something, please file an issue on the HTML repo.
02:22:59 JS: Yup.
02:23:14 SP: You can start by presenting the problem; that is a good way to get discussion going.
02:23:25 ... I can talk about issues with the namespace.
02:23:38 MH: No-one we have talked to really wants to go there.
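[Ed. note: a sketch of the two authoring approaches under discussion. The inline form uses the real SSML namespace, and illustrates the namespace problem SP mentions; the data-ssml attribute shown for the attribute model is one shape it could take, not a settled syntax.]

    <!-- Inline SSML in HTML: runs into the namespace issues noted above -->
    <speak xmlns="http://www.w3.org/2001/10/synthesis">
      Read <phoneme alphabet="ipa" ph="ˈbæs">bass</phoneme> aloud.
    </speak>

    <!-- Attribute model: SSML semantics carried on the HTML element -->
    <span data-ssml='{"phoneme": {"alphabet": "ipa", "ph": "ˈbæs"}}'>bass</span>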
02:23:57 SP: Those are technical implementation issues; the problem statement is the crucial bit.
02:24:06 AB: I was going to suggest filing a review.
02:24:25 https://github.com/w3ctag/design-reviews/issues/new/choose
02:24:28 https://w3ctag.github.io/explainers
02:24:29 SW: We require an explainer.
02:24:46 ... Choose "Specification Review".
02:25:12 AB: If there is an issue filed on HTML, we can bring those issues together. Any preference?
02:25:26 SP: No, I need to know more about the space first.
02:25:45 SP: Process-wise: file an issue, explain the use case, and point to existing work.
02:25:56 ... I will send a link.
02:26:04 MH: We are vetting approaches.
02:26:14 SW: https://github.com/w3ctag/design-reviews/issues is where we take reviews. We require an explainer, effectively an elevator pitch in markdown. Here is an explainer for what an explainer is: https://w3ctag.github.io/explainers
02:26:21 JS: The use case doc is past FPWD.
02:26:32 ... There are directions that are apparent.
02:26:34 https://whatwg.org/faq#how-does-the-whatwg-work (the WHATWG process)
02:26:46 AB: We have a different definition of an explainer for TAG review.
02:27:21 ... We like to understand the problem space and the options you have considered,
02:27:24 ... and discuss.
02:27:31 JS: Sounds good?
02:27:53 Topic: XR and AOM
02:27:58 IMS QTI Spec (Question and Test Interoperability): https://www.imsglobal.org/question/index.html
02:28:47 Scribe: ZoeBijl
02:29:11 IMS QTI usage of SSML is defined here: https://www.imsglobal.org/apip/apipv1p0/APIP_QTI_v1p0.html
02:29:27 Josh: We had a very useful meeting with some folks from the Immersive Web groups.
02:29:38 ... There was a general need to give this some attention:
02:29:46 ... a general need to understand user needs,
02:29:56 ... semantics,
02:30:00 ... DOM generation,
02:30:05 ... the accessibility tree,
02:30:12 ... and getting that to AT.
02:30:33 ... There was also an acknowledgement that
02:30:43 ... things could be described declaratively,
02:30:55 ... but it's not moved(?) into an accessibility API.
02:31:16 ... There was an interesting discussion around making ??? accessible.
02:32:45 Scribe: CharlesL
02:32:45 jcraig: The AOM is not yet ready to be used as a temp solution today/tomorrow;
02:32:51 ... a virtual tree may be a while off:
02:33:06 ... ARIA reflected attributes.
02:33:28 Josh: We are making assumptions; if we took an agile approach, what does good look like?
02:33:38 Janina: What is practical.
02:33:56 Josh: If that's a blocker...
02:33:56 https://github.com/WICG/aom/blob/gh-pages/caniuse.md
02:34:38 ... Semantics for XR: can we?
02:35:45 Alice: What might be possible? What AT would be consuming this? It would be really cool if they were developing AT.
02:36:06 Josh: AT could be embedded in the environment.
02:36:21 ... AT would be looking at an abstraction:
02:36:43 ... the core things the user needs to know (role / state / property).
02:37:12 https://github.com/WICG/aom/blob/gh-pages/explainer.md#virtual-accessibility-nodes
02:37:23 jcraig: On the roadmap: virtual trees; for canvas, a JS API to expose the virtual tree under it.
02:38:14 Alice: We could create the accessibility tree like DOM nodes. Could you create a DOM tree that is represented in the XR environment?
02:38:32 ... How would existing ATs interact with it, with different user interfaces?
02:39:02 Josh: JS calls on the DOM window object; an environment object could have child objects as separate nodes.
02:40:00 ... I would see AT visiting them sequentially: linearization. In web page markup there is semantic markup; if marked up correctly, users can navigate it.
02:40:21 Leonie: suggested an API to expose those things.
02:40:42 Josh: Create a blob of semantics that the user can interact with.
02:40:51 Proposal for an API for the immersive web: https://github.com/immersive-web/proposals/issues/54#issuecomment-522341968
02:40:55 Alice: New AT based on the APIs.
02:41:36 jcraig: 3D space in VR/AR could be a new accessibility primitive, and the use cases for AR/VR are not yet settled.
02:41:48 ... AR is utilitarian, VR is games etc.
02:42:03 ... There are some primitives we can put together, but a solution is very early.
02:42:50 Leonie: ARIA has Web Components UI controls, but in VR we can have anything, from a Lamborghini to a dragon, so how can we figure out what we are dealing with?
02:44:16 Josh: Imagine a DOM tree that emulates this room. A tree can be generated. We already have issues with how a document is rendered when there are async (AJAX) calls, but an immersive environment changes as a function of time, backwards and forwards depending on where the user is. A node will change depending on the user, via API calls, as a function of time.
02:44:43 ... We are moving beyond the document object model: states as a function of time.
02:45:40 Alice: Scope it to a new vocabulary. It is fundamentally a tree and nodes; AT would interact with that sequentially or in 2D space. How do you pick a node in 3D space?
02:45:52 Josh: We need a vocabulary, an AOM lexicon of terms.
02:46:03 Matt: agrees with Josh.
02:46:15 ... How we read this tree for interaction, we don't know yet.
02:46:31 ... Surfacing the info so AT could interact with it.
02:46:56 Alice: Granted there is a tree, but how does the tree map to the immersive environment?
02:47:22 Matt: Where you are standing in that tree is something we would need to know.
02:47:33 Josh: No.
02:47:47 Alice: We need to know how the interaction would work.
02:48:36 Josh: I don't think we need to worry about that; the interaction could be mediated by the AT.
02:49:28 ... Some things are AT responsibilities. Matt, in different environments, updating that tree sequentially would give you the concept of movement.
02:49:53 ... Various nodes within that environment could have different sound
02:49:58 ... effects.
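[Ed. note: a thought experiment in TypeScript for the tree-plus-vocabulary idea being debated here. Every name below is invented; the objections raised next (flattening, occlusion, how the tree maps to space) are about exactly what this sketch glosses over.]

    // Hypothetical node in a semantic scene graph: an a11y-oriented
    // abstraction over the rendering scene graph, not a real API.
    interface SemanticNode {
      role: string;                       // vocabulary TBD: "door", "person", "menu"...
      label?: string;
      position: [number, number, number]; // world coordinates, metres
      children: SemanticNode[];
    }

    // One possible linearization: depth-first order, the kind of
    // sequential "reading order" a screen reader could step through.
    function* linearize(node: SemanticNode): Generator<SemanticNode> {
      yield node;
      for (const child of node.children) {
        yield* linearize(child);
      }
    }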
02:50:44 Zoe: I am not sure flattening a 3D space would give an AT user the same thing.
02:51:06 ... A website is essentially a linear document. It might have branches in 2D which you can move about in, but all of the branches are connected. This doesn't work the same way in 3D space: things aren't connected to each other; they're not linear.
02:51:26 jcraig: I am not sure a tree is good for 3D space, as Zoe points out:
02:51:42 ... obscuring, moving behind objects, etc.
02:52:14 ... I am not convinced. Josh's idea of a vocabulary you could work on, like the DPUB module in ARIA; I am not sure how far that would get you.
02:53:10 ... In different environments it could be enough to just say "boardroom", but these ideas are not worked out yet.
02:54:01 Simon: You can scroll in two dimensions; similarly, looking in one direction vs. moving your head could act like a scroll bar, potentially.
02:54:49 Josh: Google is working on a large JSON model populated as needed, reminiscent of virtual scrolling.
02:55:31 Josh: Modal muting is the idea of cutting out the modalities you don't use; i.e. without visual rendering, it would be much more responsive etc.
02:56:27 https://en.wikipedia.org/wiki/Octree
02:56:29 Matt: As raised in a previous meeting, every 3D library has a concept like an octree.
02:56:34 https://en.wikipedia.org/wiki/Binary_space_partitioning
02:58:59 Josh: The user is within an immersive space, with a view from within that space. A scene graph is the representation used for expressing relationships; octrees are an optimization for reducing the load on the output device.
02:59:22 ... The logic is captured in the form of a graph; a spanning tree can be deduced from that graph.
02:59:41 ... I don't believe the octree is the right representation; it has no semantic value.
03:00:00 ... An octree only subdivides space.
03:01:14 Rossen: An octree reduces down to a quadtree.
03:01:32 Matt: An octree is strictly spatial.
03:02:34 Simon: What do we want to represent to the user? Is a tree or a graph the best way to do this?
03:03:18 Janina: Nell mentioned that as you pass restaurants you might get the entire menu, or a way to enter that virtual environment to eat there.
03:04:25 Rossen: Current AT observes one element at a time, which is fair on a web page, but in a 3D space you are observing a multitude of things happening, which doesn't fit the current single-observability model.
03:05:16 ... The simplest question: how do you convey multiple things to the user at the same time?
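[Ed. note: to ground the octree point above, a minimal sketch. It stores values purely by position; the subdivision carries no roles or labels, which is Josh's objection that "an octree only subdivides space". All names are invented.]

    type Vec3 = [number, number, number];

    // Purely spatial subdivision: no semantics attached to any cell.
    class Octree<T> {
      private items: { pos: Vec3; value: T }[] = [];
      private children: Octree<T>[] | null = null;

      constructor(private center: Vec3, private halfSize: number, private capacity = 4) {}

      insert(pos: Vec3, value: T): void {
        if (this.children) {
          this.children[this.octantOf(pos)].insert(pos, value);
          return;
        }
        this.items.push({ pos, value });
        if (this.items.length > this.capacity && this.halfSize > 0.5) {
          this.subdivide();
        }
      }

      private subdivide(): void {
        const h = this.halfSize / 2;
        const kids = [...Array(8)].map((_, i) => new Octree<T>(
          [
            this.center[0] + ((i & 1) ? h : -h),
            this.center[1] + ((i & 2) ? h : -h),
            this.center[2] + ((i & 4) ? h : -h),
          ] as Vec3,
          h,
          this.capacity,
        ));
        for (const it of this.items) {
          kids[this.octantOf(it.pos)].insert(it.pos, it.value);
        }
        this.children = kids;
        this.items = [];
      }

      private octantOf(p: Vec3): number {
        return ((p[0] >= this.center[0] ? 1 : 0)
              | (p[1] >= this.center[1] ? 2 : 0)
              | (p[2] >= this.center[2] ? 4 : 0));
      }
    }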
03:06:34 Matt: If a person is coming down the street in the real world, I hear the footsteps; if there are cars in the street, I hear that. But if it is Janina walking towards me, the AT could say who is coming towards me, or that the vehicle in the street is bus #102. That is the information we could expose via AT.
03:07:34 Josh: Cherry-picking certain portions, like a scrolled window pane: we could map and time-sync with sound effects, iterated over time.
03:09:01 Simon: Describing virtual reality is similar to an actual person helping a blind person on the street: you would talk about one thing at a time, and a screen reader would do the same.
03:09:43 CharlesL: I used to work on a GPS system for blind users; when walking through the street you would hear announcements.
03:09:59 ... These could be personalised and narrowed down to what was needed.
03:10:13 ... You can also ping to find out what is around you.
03:10:58 Leonie: Microsoft Soundscape does this, with different pings and distances to where those objects are in reality.
03:11:47 Josh: A semantic scene graph and tree representations could be beneficial.
03:12:42 W3C workshop on inclusive design for immersive web standards: https://w3c.github.io/inclusive-xr-workshop/
03:12:57 Leonie: There is a W3C workshop on Nov 5-6 in Seattle.
04:03:46 Topic: Digital Publishing / APA
04:11:06 Audiobooks: https://www.w3.org/TR/2019/WD-audiobooks-20190911/
04:11:29 Janina: I did not get this review done.
04:11:35 Publication manifest: https://www.w3.org/TR/2019/WD-audiobooks-20190911/#audio-accessibility
04:12:53 Avneesh: Built on the basic dpub manifest, an audiobook is a JSON structure with a default playlist, a TOC in HTML, and page numbers. It uses media fragments (a file name such as chapter2.mp3 plus the time) for sync. That helps a11y, but it is not accessible for the hearing impaired.
04:13:34 ... A pointer to a media file, synced with a text representation.
04:14:22 Marisa: We are exploring and prototyping sign language video sync. We restricted sync media to text/audio, but there is room to grow into sign language.
04:17:42 Janina: The APA review should take us a week.
04:19:58 Avneesh: End of September would be fine; we want to go to CR by early October. i18n is already done; the privacy review is going on right now and looks good.
04:20:11 Janina: I will make sure the APA review is done by the end of September.
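[Ed. note: a minimal sketch of the kind of manifest Avneesh describes, loosely following the Audiobooks draft linked above. Titles, file names and durations are invented; the media fragment syntax (#t=start,end) is the standard one.]

    {
      "@context": ["https://schema.org", "https://www.w3.org/ns/pub-context"],
      "conformsTo": "https://www.w3.org/TR/audiobooks/",
      "type": "Audiobook",
      "name": "Example Title",
      "readingOrder": [
        { "url": "chapter1.mp3", "duration": "PT420S" },
        { "url": "chapter2.mp3#t=0,300", "duration": "PT300S" }
      ],
      "resources": [
        { "rel": "contents", "url": "toc.html" }
      ]
    }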
06:42:14 Topic: FAST
06:46:57 Scribe: Joshue108
06:45:16 JOC: I would like to understand how the FAST architecture relates to the other specs and work that we have going on.
06:46:07 ... So what does good look like for the FAST? How do we need to change it?
06:46:38 ... The JSON-LD horizontal review request used the FAST.
06:46:53 ... The FAST is a big list of user needs and a description of how they could be met.
06:47:09 MC: That was a bigger issue than I thought.
06:47:17 MC: It's there as a proof of concept.
06:47:42 ... Around this time, checklists started to get traction, so all groups were starting to get requests for checklists.
06:47:57 ... It does have some good ideas: filtering related to the tech you are developing.
06:48:06 ... There is a short-form and a long-form version of the checklist.
06:48:18 ... There are placeholders for links to relevant specs.
06:48:55 MC: It's a CSS-styled thing, but not really a functioning spec.
06:49:05 ... It is hard to tell how it is applicable, to be honest.
06:49:11 ... We did try the WASM thing.
06:49:20 ... We should regroup and recode it.
06:49:30 ... It should be Yes/No/N/A, for example,
06:49:41 ... and can be used as a list of relevant questions.
06:50:07 ... This would be easy to do with a DB, and output the checkboxes.
06:50:26 MattK: Why do you need to do that? Are there not other groups doing this?
06:50:35 MC: Because other groups do this differently.
06:51:02 ... A better way to edit it, output it, etc. would be good.
06:51:22 ... There is talk about a common infrastructure; that is not going to happen quickly.
06:51:31 MK: What happens to the output?
06:51:42 MC: There is a GH feature where you can store some data etc.
06:52:27 MK: Why not make an issue template and put them in there? GH has this out of the box.
06:52:41 MC: i18n does this.
06:53:48 https://w3c.github.io/apa/fast/
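[Ed. note: what MK's suggestion could look like, sketched as a stock GitHub issue template. The file path, template name and checklist items are illustrative only.]

    <!-- .github/ISSUE_TEMPLATE/fast-self-review.md (hypothetical) -->
    ---
    name: FAST self-review
    about: Accessibility self-review checklist for a specification
    labels: a11y-review
    ---

    - [ ] Does the technology produce visual output? (Yes/No/N/A)
    - [ ] Does it produce audio output? (Yes/No/N/A)
    - [ ] Can time-based interactions be paused or extended? (Yes/No/N/A)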
06:54:14 LS: There is another issue; this may not be the right time.
06:54:33 ... My concern is that it is difficult to get things from COGA into WCAG.
06:54:56 ... This is possibly more important than WCAG, so it could have the hooks to make stuff happen.
06:55:22 ... So rather than focusing on WCAG etc. for user needs, and with other specs,
06:55:38 ... they could be moved here, and could include more COGA issues.
06:55:59 ... This could help to not perpetuate a catch-22 situation.
06:56:19 ... There could also be more flexible technologies etc. outside of COGA as well.
06:56:55 ... So it could be a way of addressing accessibility use cases.
06:57:22 ... As speech interaction becomes more prominent, this will be more relevant.
06:57:42 ... So instead of UAAG 2.0 etc., they could be moved here.
06:57:53 MC: On user needs, we should be migrating towards that.
06:58:00 ... A longer-term vision, for sure.
06:58:17 ... The FAST could be the repository of user needs, with other specs in parallel.
06:58:26 ... We won't get there quickly or easily.
06:58:35 ... Silver is also moving in that direction.
06:58:54 ... It will take time to do something meaningful; our focus now is on the checklist for self-review.
06:59:02 ... We need to do the checklist first.
06:59:37 LS: There are problems from my perspective; I'm not seeing that the COGA patterns are being included here.
06:59:53 MC: Yes, it is incomplete. We also need to make it manageable.
07:00:21 MC: I'm not so clear on self-review checklists etc., but we need to help groups get meaningful review of their specs.
07:00:30 ... The idea is that it should also raise questions
07:00:38 ... with the relevant group, here APA.
07:00:54 https://w3c.github.io/coga/content-usable/#appendix1
07:01:22 JS: I'd rather we help other groups raise issues here than muddle things.
07:01:49 ... It seems we should help them build a correct UI, and then help them with specifics as they relate to COGA etc.
07:02:06 JS: We are not asking them for a very deep level of detail at this point.
07:02:52 MC: So yes, I was poking around the i18n checklist. [Michael reads.]
07:03:15 ... These are checklists, but they are not easily maintained.
07:03:27 MK: There is an API for it.
07:03:46 MC: I'm not sure how robust this is.
07:04:16 ... They are rather detailed, with many links.
07:05:14 ... The question is how much focus do we want, how detailed should it be, etc.
07:06:02 LS: I've linked to the COGA patterns: https://w3c.github.io/coga/content-usable/#appendix1
07:06:33 ... We can move this up our list of things to do; it can go on the checklist.
07:06:37 ... Good for self-review.
07:07:01 ... There needs to be a way for the things that are not in WCAG to still be supported.
07:07:47 LS: User testing could also help, for SR users, low vision etc.
07:08:46 MC: This is for technology spec developers; your link relates to authors etc.
07:09:00 ... Some of it may be relevant, but this is mostly for spec people.
07:09:32 JS: What would you expect from JSON-LD?
07:09:37 LS: Don't know really.
07:09:48 JS: They are the ones who filled out the survey.
07:10:04 ... What about Immersive Web etc.? We need to know what they are doing.
07:10:45 MC: JSON-LD is an abstract framework. We need to know what they are doing; we are being asked to produce generic user requirements.
07:11:00 ... It can be difficult to know how to provide a checklist for some specs.
07:11:10 LS: How does this relate to WCAG?
07:11:18 https://w3c.github.io/coga/content-usable/#objective-adapt-and-personalize
07:11:19 JS: It doesn't.
07:11:33 LS: I've looked at these slides.
07:11:54 MC: If you have looked at this from the FAST side, it should be possible to create stuff that relates to WCAG.
07:13:30 JOC: So how do these FAST requirements bubble into and impact a spec? That's something I'd like to know.
07:13:45 LS: These questions will need to be revised from a COGA perspective.
07:13:59 JS: We hear you, but don't see how that analysis fits in here.
07:14:29 JS: We can come back to this later.
07:14:43 MC: Something that would fit in is for users to indicate personalisation preferences.
07:14:52 ... We could reasonably add that.
07:15:03 ... Some of the other things could relate to the FAST checklist.
07:15:12 LS: I was looking at the intro of the document; my mistake.
07:15:39 JS: So what parts of the COGA requirements could be fixed by the FAST, at the spec level?
07:15:41 MC: Right.
07:15:53 JS: So that's not user testing etc.
07:17:04 JS: Asking these questions does make sense, but not diving into details.
07:17:19 JS: This is a semaphore.
07:18:03 MC: A checklist of best practices could point to resources and what to do, outline impacts, etc.
07:18:29 ... The full framework could cover these things.
07:18:49 ... The full framework is the user needs, plus a breakdown, best practices etc.
07:18:58 JS: Could be a lot like Silver.
07:19:41 MC: We are distilling a framework, which we will un-distil in the full framework.
07:20:39 JS: We need something for a group, say Second Screen, who is writing an API keeping devices in sync.
07:22:26 JOC: So these are like my approach to the XAUR, separating technical use cases from user needs and requirements.
07:22:35 MC: I need to think about that.
07:23:37 JOC: So if we can capture these at a high level, that would make the author's job easier.
07:24:01 MC: I struggle with capturing it at a high level. It's an OK start.
07:25:37 -> http://w3c.github.io/apa/fast/checklist Draft FAST checklist
07:29:46 JOC: This checklist is really good.
07:30:15 ... Very useful for specs doing technical stuff, with use cases, that can fix things at the spec level.
07:30:59 LS: Some of this from our patterns could be supported by this.
07:31:11 MC: If we can break them down to technology features, then yes.
07:31:36 LS: Something that provides direct navigable access to specific points in a media file.
07:31:41 MC: Right.
07:31:51 JS: I'd like to see a hierarchical list.
07:31:54 LS: Yes.
07:32:00 MC: One bit at a time.
07:33:15 LS: Time-based media etc.
07:33:30 LS: I need to read it.
07:33:55 MC: Please do!
07:34:11 MC: It would be great if you could come up with some,
07:34:33 ... and review the checklist against the COGA patterns. I want to identify checklist items that are missing, and categories that are missing.
07:35:00 MC: They feel a little weird, but I find things I can group under them.
07:35:28 ... I'd like input on how useful they are and what is missing, especially as we are including emerging tech.
07:35:32 JOC: I'll also review.
07:37:37 JS: Is media XR a time-based medium?
07:37:43 JOC: Interesting!
07:37:52 MC: I'd like to look at WoT also.
07:38:30 JOC: Content is aggregated in WoT via sensors etc.
07:39:25 JOC: The stuff Lisa could feed into this would be really useful.
07:39:40 MC: We could do a bigger call for review.
07:40:00 ... It needs an explainer!
07:40:24 MC: Shall we take the checklist to the Note track?
07:40:40 I say no to either the framework or the checklist.
07:40:54 LS: I think I get it if I focus on the checklist.
07:40:59 The framework is on hold, and the checklist needs attention.
07:41:15 MC: Implications of XR and other related tech.
07:41:27 ... We need accessibility people to have a look at this.
07:41:44 ... Nell will be in tomorrow to demo how 3D is authored today.
07:42:55 MC: There was a demo I saw with 3D-style captions that was interesting.
07:44:02 MC: The next step is to ask the accessibility people we know to review what is missing.
07:44:26 ... What it is and is not could be written up quickly: after TPAC, for two weeks, say? Could we ask?
07:44:35 MC: Yes.
07:44:55 JS: We can ask for it on the Wednesday call, and say we'd like feedback.
07:45:18 ... Then we can look at the i18n thing and borrow their code, either me or Josh.
07:45:36 MC: They have a generator: static doc, generator on GH, scraping etc.
07:45:46 ... We could have a checklist by the end of the year.
07:45:51 JOC: Yes.
07:47:30 JS: I think we have a plan.
08:08:30 Topic: Correct identification of signed and symbolic (AAC)
08:08:34 Scribe: MichaelC
08:12:24 Bliss symbols are being referenced from the Personalization spec.
08:12:36 The question was raised whether we should be referencing Unicode;
08:12:46 that means getting the Bliss symbols into Unicode.
08:12:58 That's apparently been explored before; unsure of the outcome.
08:13:21 The Bliss people are OK with the usage in Personalization; we want to discuss the Unicode question with them.
08:13:31 They were invited to this meeting, but nobody seems present.
08:13:40 Lisa was at an AAC conference.
08:14:03 People with certain kinds of brain damage benefit from symbols.
08:14:06 There are libraries of them.
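[Ed. note: for context, the Personalization Semantics drafts reference symbols roughly like this, with numeric Bliss (BCI-AV) identifiers carried on a data attribute. The attribute name and IDs below follow the drafts of this period but are illustrative, not checked against the registry.]

    <p>
      I want a
      <span data-symbol="13621">cup</span> of
      <span data-symbol="17511">tea</span>.
    </p>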
08:14:30 js: Would somebody use symbols to express themselves?
08:15:05 lsk: They could.
08:15:39 js: In the media work, we worked on supporting multiple alternate representations of media.
08:17:04 lsk: There is a challenge with sign languages.
08:17:13 ag: Sign languages are regional.
08:17:21 ... We used to fudge a region code.
08:17:39 ... ISO 639-3 has 3-letter codes that cover many sign languages.
08:18:04 lsk: You could have both a symbol set and a language.
08:19:11 ... There was a need to be able to identify both the spoken regional language and the signed regional language.
08:19:18 ag: Sounds like two separate things to tag.
08:20:57 js: We appear to be supporting that in media formats.
08:21:43 ag: Sounds like we might need to register additional subtags
08:22:24 ... where there are modalities beyond text.
08:25:06 lsk: Symbols sometimes work for a given language,
08:26:17 ... and there is cultural representation of symbols.
08:28:16 ... In some languages there can be symbol overlap,
08:28:30 ... or in other cases different symbol sets within the same language, based on AT use.
08:29:31 ... There can be copyrights on symbol sets, which is actually copyrighting someone's language,
08:29:46 ... so we're using a more neutral set.
08:31:23 http://www.arasaac.org/
08:32:12 amrai: Localized for Qatar.
08:33:04 ... A use case: an eye-tracker user using symbols to construct a phrase.
08:33:32 ... Cultural issues mean you can't use all symbols from other regions;
08:33:38 ... you need local versions.
08:33:51 ... We are exploring whether there could be abstract ones suitable for all cultures.
08:33:56 js: There's a demo
08:34:05 ... using Bliss IDs to translate among sets.
08:34:18 ag: These are glyph variations, not semantic variations?
08:34:23 amrai: Yes.
08:34:43 https://github.com/w3c/personalization-semantics/wiki/TPAC2019-WebApps-Symbols-Overview
08:35:24 The deaf community says sign language is its own language, with grammar etc.
08:36:14 We are looking at finding mappings between sign languages.
08:38:22 ag: Sign language codes are not related to the spoken language of the region.
08:40:05 https://mycult-5c18a.firebaseapp.com/
08:54:08 https://youtu.be/68TbCVNQ3Z8?t=25
08:54:42 Library: http://madaportal.org/tawasol/en/symbols/
08:58:42 hawkinsw: If you could point me to the source of the plugin, that would be great.
08:58:59 ... I was able to see the zip file, but I was hoping that I could actually see the source code.
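[Ed. note: a sketch of what ag's tagging point looks like in practice. "ase" (American Sign Language) and "bfi" (British Sign Language) are real ISO 639-3 codes usable in BCP 47 language tags; the file names are invented.]

    <!-- Spoken English description track vs. British Sign Language video -->
    <video src="lecture.mp4">
      <track kind="descriptions" srclang="en" src="descriptions.vtt">
    </video>
    <video src="lecture-bsl.mp4" lang="bfi"></video>
    <!-- BCP 47 also permits the sgn- prefix form, e.g. sgn-ase -->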