W3C

- DRAFT -

Audio Working Group Teleconference

16 Sep 2019

Attendees

Present
rtoyg_m2
Regrets
Chair
Scribe
rtoyg_m2

Contents


<chris> trackbot, start telcon

<trackbot> Meeting: Audio Working Group Teleconference

<trackbot> Date: 16 September 2019

<mdjp> hoch: not much to report from Google on implementation; some optimisations and clarifications. Still missing output latency and media stream track.

<mdjp> padenot: Missing AudioWorklet, implementation in progress. CancelAndHold being implemented, waiting on a clear spec for the algorithm.

<mdjp> padenot: AudioWorklet missing MessagePort; some questions around this.

<chris> jer: our implementation has not been updated for at least a year, we have very few resources but don't oppose this

<chris> https://github.com/w3cping/tracking-issues/issues/13

<chris> https://github.com/WebAudio/web-audio-api/issues/2061

<mdjp> mdjp: aim to republish updated CR during TPAC

<mdjp> chris: blocker is issue #2061, privacy review

<mdjp> issue #2069: is this required for V1? padenot: it is late for the spec but potentially a problem

<mdjp> karlt joins meeting remotely

<mdjp> karlt: there was a chance to make this easier. Might be hard to spec this around MessagePort.

<mdjp> padenot: this is a big change. karlt: what are the implications? padenot: Chrome is already shipping; concerns about compatibility.

<mdjp> padenot: something like: when we would not call process() again, we could automatically close the MessagePort. karlt: so we define when close is called. I considered this, but nodes should be reusable, so we do not know when this would happen in order to close automatically.

<mdjp> mdjp: if we defer this, what are the issues in coming back to it in the future? karlt: we would have compatibility issues.

<mdjp> padenot: this is a serious issue - multiple audio nodes, and no way to dispose of them.

<mdjp> hoch: Worker does not have this problem; it does not expose the MessagePort object itself?

<mdjp> padenot: how do you send between processes? hoch: Worker inherits from MessagePort.

<mdjp> hoch: aim to talk to the Chrome Worker team.

<mdjp> karlt: we are working on it - network issue...

<mdjp> padenot: in a Worker you can reinstantiate a MessageChannel - so the same situation exists and people need to be aware of this; objects must be disposed of manually.

<mdjp> karlt: good point, we would only be fixing one use case but the problem still exists elsewhere. Outlying cases would still be an issue.

<mdjp> padenot: anyone who wants to do something bad will still be able to; this is a fundamental issue.

<mdjp> padenot: might want to take input from people who have been dealing with MessageChannel and GC.

<mdjp> padenot: will write a summary and request more input.
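
For context, a minimal sketch of the manual disposal being discussed (not from the meeting; the processor name is illustrative): the MessagePort pair entangling an AudioWorkletNode and its processor has to be closed explicitly, because garbage collection cannot observe that process() will never run again.

    // Assumes 'noop-processor' was registered via ctx.audioWorklet.addModule().
    const ctx = new AudioContext();
    const node = new AudioWorkletNode(ctx, 'noop-processor');
    node.port.postMessage({ cmd: 'configure' });

    // When the node is done, both ends of the entangled port pair must be
    // closed by hand; nothing closes them automatically when processing stops.
    node.port.close();                  // main-thread end
    // ...and inside the processor: this.port.close();
    node.disconnect();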

<hoch> https://www.chromestatus.com/metrics/feature/popularity

<hoch> To sum up: V8MessageChannel constructor = ~40%, V8AudioContext constructor = ~4%, V8GainNode constructor = ~1.2%

<hoch> https://github.com/WebAudio/web-audio-api/issues/2051

<hoch> https://github.com/WebAudio/web-audio-api/issues/2047

<mdjp> https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+no%3Amilestone

<padenot> 15min break

<mdjp> Agenda https://www.w3.org/2011/audio/wiki/F2F_Sep_2019

<padenot> sangwhan, we'll discuss this in a minute, thanks

<sangwhan> padenot: thanks!

<mdjp> https://github.com/WebAudio/web-audio-api/issues/1967

<mdjp> https://github.com/WebAudio/web-audio-api/issues/2008

<mdjp> hoch: fundamental issue touching the event loop spec

<padenot> sangwhan, we don't think we _need_ TAG presence at this stage for V2, but TAG presence could be useful when talking about the `AudioDeviceClient` proposal, https://github.com/WebAudio/web-audio-cg/tree/master/audio-device-client

<padenot> karlt, audio issues

<mdjp> https://github.com/WebAudio/web-audio-api/issues/1933

<sangwhan> padenot: roger that, will show up sometime during your allocated meeting time in that case

<padenot> sangwhan, do you have the agenda handy? https://www.w3.org/2011/audio/wiki/F2F_Sep_2019 is the link, basically 11am tomorrow

<sangwhan> padenot: https://www.w3.org/wiki/Media_WG/TPAC/2019 seems to be what I have

<padenot> that's the media wg, which is thursday/friday, in #mediawg

<padenot> breaking for lunch, 1h

<scribe> scribenick: rtoyg_m2

<scribe> scribe: rtoyg_m2

V2: Decide what V2 means and what the goals are.

mdjp: What is V2? First off, incremental changes to V1.

chris: V1 has worklets. What counts as V2 if it can be added even though worklets can do it?
... Where is that boundary? Some things naturally make sense even if they're easy to do with worklets. Items that only benefit one person, maybe not so much.

hoch: Should get feedback from developers too.

mdjp: We're pretty clear what's in V1. When does V2 start?

chris: Two ways to do this. Fork the spec. Add a separate spec.

rtoyg_m2: Likes 2 specs. Will we keep them separate?

chris: Yes, but eventually merge into one bigger spec.

hoch: What about community input?

chris: Yes, but primarily driven by working group.

mdjp: Have just one WG call and more CG calls.
... Group is small, so nice to have external viewpoints into the spec. This is what CG is for.
... Clarify: Incremental update of V1, handled by WG. But CG to help incubate new ideas.

Consensus: Create a new repository for V2. Move issues over to V2.

hoch: Use project board; minimize use of labels except when necessary.

mdjp: Creates a new web-audio-api-v2 repository for new work.

rtoyg_m2: What about milestones?

Consensus: No milestones; new stuff goes to the vnext project board.

(In the web-audio-api repo, not the v2 repo.)

hoch: No. vnext board should be in v2 repo.

Consensus: Yes.

That works. We can do that.

chris and others discuss how to do a hard sync of two oscillators.

Needs design work. Different approaches suggested, tending towards using an AudioParam to control the sync.

mdjp: In v2?

chris: Yes.
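
For reference, one way to express hard sync today is an AudioWorkletProcessor that resets a slave oscillator's phase whenever the master wraps. This is only an illustration of the behaviour, not the AudioParam-based design under discussion; the processor name and parameters are illustrative.

    class HardSyncProcessor extends AudioWorkletProcessor {
      static get parameterDescriptors() {
        return [
          { name: 'masterFrequency', defaultValue: 110 },
          { name: 'slaveFrequency', defaultValue: 310 },
        ];
      }
      constructor() {
        super();
        this.masterPhase = 0;
        this.slavePhase = 0;
      }
      process(inputs, outputs, parameters) {
        const out = outputs[0][0];
        const masterF = parameters.masterFrequency[0];
        const slaveF = parameters.slaveFrequency[0];
        for (let i = 0; i < out.length; i++) {
          this.masterPhase += masterF / sampleRate;
          if (this.masterPhase >= 1) {
            this.masterPhase -= 1;
            this.slavePhase = 0;          // hard sync: reset slave when master wraps
          }
          this.slavePhase += slaveF / sampleRate;
          if (this.slavePhase >= 1) this.slavePhase -= 1;
          out[i] = Math.sin(2 * Math.PI * this.slavePhase);
        }
        return true;
      }
    }
    registerProcessor('hard-sync', HardSyncProcessor);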

<mdjp> https://github.com/WebAudio/web-audio-api/issues/1803

High priority V2 issues: https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+label%3A"High+Priority+V2"

rtoyg_m2: We don't need to decide how to do it (issue 1803). Just need to decide if we want to consider it for v2.

mdjp: Can it be done with existing stuff?

hoch: TAG feedback said being able to compose nodes is useful.

mdjp: In favor of moving to V2? Being in V2 doesn't require us to do it in V2.

chris: Needs to be "under consideration" label.

https://github.com/WebAudio/web-audio-api/issues/1791

Move to V2.

https://github.com/WebAudio/web-audio-api/issues/1443

hoch: Depends on some external dependencies for WASM.

padenot: Luke says it's possible to register things for the WASM heap/views to handle things.

Moved to V2 for further discussion.

padenot: One way is C/C++ style where functions don't allocate their own memory. Pass in a pointer to memory.

hoch: Common case is a big pile of WASM code.

padenot: Should work.

karlt: Suggests a new registerProcessor method geared more to WASM by having a callback instead of a full class
... Will add comments about this in the issue.
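
As a point of reference, the class-based shape that a callback-style registration would simplify might look roughly like this. This is a sketch only: it assumes a module compiled to import env.memory and export alloc() and render(ptr, frames), all of which are illustrative names.

    class WasmProcessor extends AudioWorkletProcessor {
      constructor(options) {
        super();
        // The compiled WebAssembly.Module arrives via processorOptions.
        this.memory = new WebAssembly.Memory({ initial: 1 });
        this.instance = new WebAssembly.Instance(
          options.processorOptions.module, { env: { memory: this.memory } });
        // C/C++ style: a scratch area is allocated once, not per render call.
        this.outPtr = this.instance.exports.alloc(128 * 4);
        this.heap = new Float32Array(this.memory.buffer);
      }
      process(inputs, outputs) {
        // render() writes 128 samples at outPtr into memory it already owns.
        this.instance.exports.render(this.outPtr, 128);
        const start = this.outPtr >> 2;
        outputs[0][0].set(this.heap.subarray(start, start + 128));
        return true;
      }
    }
    registerProcessor('wasm-processor', WasmProcessor);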

https://github.com/WebAudio/web-audio-api/issues/1279

hoch: Like Jer's idea.

rtoyg_m2: But startRendering() continues from last time.

hoch: Yes, we'll need a new method.

https://github.com/WebAudio/web-audio-api/issues/783

chris: Proposed IDL looks fine.

https://github.com/WebAudio/web-audio-api/issues/705

<chris> https://www.audiocheck.net/testtones_pinknoise.php vs https://www.audiocheck.net/testtones_greynoise.php with Fletcher-Munson equalization :)

General discussion on why this is useful and can't be done in a worklet.

jer: Suggested doing this in a library.

rtoyg_m2: Agreed, but wanted to control exactly what the output is, to guarantee identical output.

jer: Agrees that's a good reason to specify this as a builtin.
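
To illustrate the trade-off: a library version is easy to write, but guaranteeing identical output means the library must pin its own noise algorithm rather than rely on Math.random(). A minimal sketch (the xorshift generator and processor name are illustrative):

    class WhiteNoiseProcessor extends AudioWorkletProcessor {
      constructor() {
        super();
        this.state = 0x9e3779b9;        // fixed seed => reproducible stream
      }
      next() {
        // xorshift32: deterministic, unlike Math.random()
        let x = this.state;
        x ^= x << 13;
        x ^= x >>> 17;
        x ^= x << 5;
        this.state = x >>> 0;
        return (this.state / 0xffffffff) * 2 - 1;   // map to roughly [-1, 1]
      }
      process(inputs, outputs) {
        const out = outputs[0][0];
        for (let i = 0; i < out.length; i++) out[i] = this.next();
        return true;
      }
    }
    registerProcessor('white-noise', WhiteNoiseProcessor);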

https://github.com/WebAudio/web-audio-api/issues/541

General agreement we want this because other systems have this important feature.

https://github.com/WebAudio/web-audio-api/issues/445

<hoch> Reference: https://www.w3.org/TR/audio-output/

Consensus: Can be handled by ADC (AudioDeviceClient), but needs more work since we might want to extend AudioContext instead of, or in addition to, that.

https://github.com/WebAudio/web-audio-api/issues/283

https://github.com/WebAudio/web-audio-api/issues/13

<padenot> break, 15min

<padenot> well apparently it was 30 in the schedule

WebMIDI topic

cwilso: Very little update since last time; just a few small issues.
... A few details need to be cleaned up.
... Big issue is back pressure, but waiting for other vendors to contribute to this.
... Many issues are Ready For Editing, so external people can contribute.

mdjp: Will resolving issues help?

cwilso: Best thing is if another vendor decides to implement.

mdjp: Is there another vendor?

padenot: Pretty much ready to go, except for the backends.

cwilso: Main issue is security concerns preventing other UAs from implementing.

padenot: Mostly a resource problem, not security (if shipping without SysEx).

<padenot> https://irc.paul.cx/uploads/96bea352472551f5/MVIMG_20190912_113339.jpg

https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+label%3A"Feature+Request%2FMissing+Feature"

<mdjp> V2 feature requests https://github.com/WebAudio/web-audio-api/issues?q=is%3Aopen+is%3Aissue+label%3A%22Feature+Request%2FMissing+Feature%22

https://github.com/WebAudio/web-audio-api/issues/2006

Move to V2/under consideration

https://github.com/WebAudio/web-audio-api/issues/1850

rtoyg_m2: Do we want to pile more stuff onto decodeAudioData?

padenot: Probably not; we're really moving to WebCodecs.

chris: Close this issue in favor of WebCodecs.

<padenot> https://github.com/WICG/web-codecs

padenot: Yes, as long as WebCodecs does the things we need.

<chris> https://discourse.wicg.io/t/webcodecs-proposal/3662 and https://github.com/WICG/web-codecs/blob/master/explainer.md

mdjp: Closing issue, referencing WebCodecs.
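
For context, decoding through WebCodecs would look roughly like the following, based on the WICG proposal; the exact shapes were still being designed at the time, so treat the names as illustrative.

    // Roughly per the WICG explainer: decoded AudioData frames come back
    // through a callback instead of one resolved AudioBuffer.
    const decoder = new AudioDecoder({
      output: (audioData) => {
        // Copy the PCM into an AudioBuffer (or straight into a worklet) here.
        audioData.close();
      },
      error: (e) => console.error(e),
    });
    decoder.configure({ codec: 'opus', sampleRate: 48000, numberOfChannels: 2 });
    decoder.decode(new EncodedAudioChunk({
      type: 'key', timestamp: 0, data: encodedPacket,   // encodedPacket: your demuxed bytes
    }));
    await decoder.flush();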

https://github.com/WebAudio/web-audio-api/issues/1764

hoch: Idea is to send text to a speech synthesizer, get the output, and feed it into a Web Audio graph.

chris: So you can add a reverb to the voice.
... Doesn't appear possible to join this together with the current speech API.

mdjp: Closing

https://github.com/WebAudio/web-audio-api/issues/1757

mdjp: Close

https://github.com/WebAudio/web-audio-api/issues/1756

mdjp: Close

https://github.com/WebAudio/web-audio-api/issues/1540

<chris> open-source pitch shifter in JS https://github.com/cristiano-belloni/KievII/blob/master/dsp/pitchshift.js

jer: Podcasts are a common use case where people listen at a faster (or slower) speed.

Consensus: Move to V2/under consideration

https://github.com/WebAudio/web-audio-api/issues/1480

Basically handled by the active processing concept.

Summary of Action Items

Summary of Resolutions

    [End of minutes]

    Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
    $Date: 2019/09/18 07:25:09 $
