See also: IRC log
<trackbot> Date: 27 February 2012
<olivier> Scribe: CWilson
<Alistair> hmm
<joe> having trouble w bridge
<olivier> joe, code is 28346
<joe> I can't even get to the prompt for the code, my call gets dropped
logging in now, BRT
<tmichel> what is the pass code ? audio does not seem to work.
<olivier> tmichel, code should be 28346
<chris> signal is more noisy now
that was probably me
<scribe> scribe: cwilso
<scribe> ScribeNick: cwilso
Olivier: we have these 4 areas to look at; we're thinking of rechartering, as we didn't incorporate events and MIDI into the original charter
…would like to hear from the group where their priorities are, and which areas are connected to each other.
ChrisRogers: synthesis and processing are intertwined. MIDI is enough of its own area that it could be spun off.
Olivier: Chris, do you see capture and sources as a whole with processing? Or can capture be made into something else?
ChrisRogers: WebRTC has an API to get user input; but there comes a point where that signal needs to be input into processing. To me, those are just sources - live input is just another type of source node.
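Chris's point that live input is just another type of source node can be sketched in a few lines. A minimal, hedged illustration — the names `getUserMedia` and `createMediaStreamSource` are the shapes these hooks eventually stabilized on, not text from either proposal under discussion; browser-only, so the function is defined but not invoked here:

```javascript
// Sketch: live microphone input treated as an ordinary source node in the
// processing graph. Assumes the getUserMedia / MediaStreamAudioSourceNode
// shapes as later standardized; runs only in a browser.
async function processLiveInput() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  // The live stream becomes just another source node...
  const source = ctx.createMediaStreamSource(stream);
  // ...feeding any downstream processing, e.g. a gain stage.
  const gain = ctx.createGain();
  gain.gain.value = 0.5;
  source.connect(gain);
  gain.connect(ctx.destination);
  return ctx;
}
```

The same pattern covers both of Chris's use cases below: a guitar through live effects, or recording inputs for a DAW.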
shepazu: I note that it's not just WebRTC TF looking at Media Capture; it's also part of Device APIs WG.
<olivier> Related -> http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/FPWD.html
shepazu: do we want that TF to include us? Or are we satisfied with just providing use cases and requirements? If just the latter, we should do that soon.
olivier: related link is the TF's work in progress, published today.
shepazu: joined their call a couple of weeks ago, they're interested in knowing our use cases/req.
olivier: as editor of UC doc, I can coordinate.
AI: olivier to coordinate use cases with WebRTC TF.
shepazu: do we capture things like line in (from crogers) in our UCs?
olivier: beyond just gathering UC/reqs, we need to go through the WebRTC work and understand how it interfaces with our work. Looking for a volunteer.
crogers: I can give 2 use cases off the top of my head: 1) line input to process a live instrument, like guitar. 2) recording inputs for a DAW.
... you may also want to do more WebRTC-style UCs with processing, e.g. spatializing WebRTC participants [or do dynamics/clarity processing on them - cwilso]
shepazu: if our scenarios and requirements match theirs, we need to coordinate with TF more closely; if not, we need to consider how we would cover it
crogers: if we can get the input, e.g. from a media stream, we should be able to manipulate it effectively.
... [suggests WebRTC needs to be able to select streams from hardware inputs?]
olivier: pretty sure WebRTC is looking at more complex scenarios including multiple input selection; but we should make it obvious that this is something we're interested in.
... think we're agreed 1) MIDI can be a separate item, and 2) we would expect to rely on WebRTC for getting inputs, although...
crogers: we need to do work to connect to our APIs then.
olivier: how fast do we want to be moving WRT MIDI?
<Zakim> cwilso, you wanted to respond
<olivier> Cwilson: it would be good to recharter to get MIDI on it. Would be good to have a draft soon. May not be as controversial as the rest of our work because MIDI APIs are fairly standard already.
shepazu: I think it would be good to go ahead and let the MIDI work go at its own pace; if editors can put focus on it, great. I don't think it needs as much coordination as the other audio actions.
... should only need to dedicate a small amount of telecon time to it.
cwilso +1s what doug said. :)
<joe> q
alistair: seconds what Doug said about timing. also, MIDI is also relevant to lighting (running DMX systems, etc.) - so it makes some sense to run somewhat separately.
olivier: cwilso, you said you'd like to see a draft charter on MIDI; pointing out the obvious, but we need leadership. Doug, how would you like that draft charter to happen?
<tmichel> I would like to have a clearer idea of the impact of MIDI on our specs.
doug: we should agree as a group on what the MIDI API should cover in terms of scope, and then put the charter up for review. It doesn't need to be really rigorous, just cover scope.
<tmichel> It will clearly change the scope but what about the deliverables
joe: seconds overall sense of starting MIDI track, but not investing loads of the collective group time.
<olivier> CWilson: I drafted a couple of sentences a few months ago
<olivier> … thought it would go into web events
<olivier> … can dig that and share with the group
<olivier> … and suggested that Jussi and I co-edit
<olivier> … we can take it from there,
<olivier> … there is a sense that it won't be a big time sink for the group, agree with that
doug: chris, if you can send the charter changes suggestion to the group, we can go from there.
olivier: sounds like we have a couple of prospective editors to this.
AI: cwilso to send draft charter suggestion
<olivier> RESOLUTION: the group will draft a new charter including MIDI work, we already have a couple of prospective editors
<olivier> ACTION: CWilson to propose MIDI charter scope to the list [recorded in http://www.w3.org/2012/02/27-audio-minutes.html#action01]
<trackbot> Sorry, couldn't find user - CWilson
<scribe> ACTION: ChrisWilson to propose MIDI charter scope to the list [recorded in http://www.w3.org/2012/02/27-audio-minutes.html#action02]
<trackbot> Created ACTION-33 - Propose MIDI charter scope to the list [on Chris Wilson - due 2012-03-05].
olivier: seems clear that we would be splitting the charter into two main areas of work
Alistair: have been going through the specs to catalogue their differences; hard because both specs are changing (e.g. worker threads - ROC's proposal is built around worker threads, while CRogers's proposal is still looking at worker threads)
CRogers: it looked like ROC's worker threads portion could move into the WA proposal directly, but Dmitry Lomov (worker expert at Google) had some commentary on the list.
... I'd expect longer term either proposal will have workers.
<Alistair> http://www.w3.org/2011/audio/wiki/Spec_Differences
Alistair: <missed statement>
... reaching out to game devs.
... game devs very interested, but people are a bit daunted by reviewing/commenting on proposals.
CRogers: a few have commented - Rovio (Angry Birds), etc.
Alistair: maybe need to approach differently to get more directed feedback/commentary on the specs?
... also would like help on capturing spec differences and reaching out to game devs.
shepazu: more feedback from game developers would be great. Really want the feedback of where the spec is NOT meeting their needs, rather than where it is - on both proposals.
olivier: just wanted to say, BBC is starting a small project to try to implement a prototype using both APIs. Will have one set of requirements; the same people will try to implement it using both APIs to give feedback in a real use case.
... would suggest similar feedback from others would be very helpful
alistair: what is the application?
olivier: not entirely certain, but will include synthesis and processing.
Jussi: know some details, but not sure if I can disclose
olivier: around the table - are there others actively using both APIs?
... cwilso/crogers, you don't count. :)
Alistair: we've been working on something. Initially developers are coming at this from scratch.
(Is that Alistair?)
thx
alistair: initial feedback for the Mozilla API is that there was a bit of confusion when switching to media streams from the audio data API, although the Mozilla API seemed quicker to get at the audio data.
... on the other side of things, the Web Audio API was confusing to people who haven't used the Flash audio API or some such API, but once it was up and running it was a little easier to use.
crogers: were you just using the JS processing side?
alistair: mainly, but there was some processing too.
joe: we are working with the Web Audio API, and we're using it in conjunction with sheet music playback - also looking at using a simple sequencer/synth to synthesize audio. So far going very well. Don't have the bandwidth to double up and do both APIs.
shepazu: can we capture this feedback in one place, to gather feedback for the group?
shepazu: could we have one place where we present both APIs, with examples for both, and ask for feedback? We should be more systematic about collecting feedback on the two.
alistair: certain allure to having something flashy that builds on both to compare.
<olivier> +1 on clarification on audio data api
shepazu: people still talk about the Mozilla API, and are talking about the audio data API. We should explicitly remove this as work that would go forward. We're not interested in feedback on this, other than how it might relate to the other proposals.
... maybe the differences document should explicitly mention this, and say these are the reasons the audio data API is a dead end.
<kinetik> i'll arrange for that
alistair: the audio data spec should mention this as well.
<olivier> ACTION: MGregan to make sure the Moz Audio Data API pages mentions the new work [recorded in http://www.w3.org/2012/02/27-audio-minutes.html#action03]
<trackbot> Created ACTION-34 - Make sure the Moz Audio Data API pages mentions the new work [on Matthew Gregan - due 2012-03-05].
shepazu: maybe we should push people to evaluate the proposals via use cases.
olivier: a lot of people who have trouble with graph-based tools also have trouble grokking the Web Audio API.
alistair: issue with graph-based processing in general - it's difficult for non-engineers to grasp.
crogers: our developer relations team has been trying to write FAQs/tutorials for the WA API. We've made some progress in this area. Boris Smus is writing another proposal, around games, that covers this.
<chris> http://www.html5rocks.com/en/tutorials/webaudio/intro/
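The graph model those tutorials teach boils down to wiring sources through processing nodes to a destination. A minimal sketch of the idea — method names follow the Web Audio API as later standardized (`noteOn` was the 2012-era name for what became `start`, and `createGainNode` the older name for `createGain`); browser-only, so the function is defined but not invoked here:

```javascript
// Minimal Web Audio routing graph: source -> gain -> destination.
// Browser-only API; node/method names per the Web Audio API proposal.
function buildSimpleGraph(ctx) {
  const osc = ctx.createOscillator();  // source node (sine wave by default)
  const gain = ctx.createGain();       // processing node
  gain.gain.value = 0.25;              // attenuate to avoid clipping
  osc.connect(gain);                   // wire source into the gain stage
  gain.connect(ctx.destination);       // wire gain stage to the speakers
  (osc.start || osc.noteOn).call(osc, 0); // start now (noteOn in older drafts)
  return { osc, gain };
}
```

The stumbling block the group identifies is exactly this wiring step: nothing sounds until every path from a source reaches `ctx.destination`.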
crogers: one other group that we might be able to leverage is the upcoming games conference - GDC - there will be a lot of discussion on audio APIs for the web, I expect. I think we'll hear a lot of developer feedback in the coming weeks.
olivier: is anyone from Google going to GDC?
crogers: yes, of course.
... I'll reinforce to those going to ask devs to forward feedback to the group.
olivier: if we can get devs to go a bit further in each of the specs, I think that would be very valuable. We haven't had much detailed feedback yet.
<olivier> nice
<Alistair> :)
jussi: we're working on JavaScript codecs - e.g. MP3 - which work on all three APIs. Will try to gather some feedback from my team.
olivier: thanks all.
AAAAAAAAAAHHHHH!!!!
<Alistair> lol
<jussi> :D
<Alistair> lmao
bye all
<jernoble> thanks and bye!