See also: IRC log
Harald: Welcome
... this is an interim meeting of the WebRTC WG, with bits from the Media Capture Task Force
Harald: we're here to make decisions; a decision at the meeting gets announced to the list, with a delay before it's final
(Harald going through the agenda)
Harald: next point is "view of status document"
... getUserMedia is still in LC
... a few comments are still being addressed, but it's mainly stable and shipped.
Harald: webrtc 1.0 is still not done
... there will be more sessions those two days about points still in flux.
Harald: we have also new things
... media recorder: in need of an editor
... screen capture: we have a spec but maybe some
lack of interest
... the other specs (audio output, image capture)
are in an unknown state.
Justin: audio output control has an initial implementation in Chrome 46, behind a flag
... waiting for feedback
... implementation feedback will be pushed to the spec
... then we could move to LC
Keith: on screen capture, the spec is out there
... a bunch of open issues need to be updated
... there is no implementation in a browser yet (AFAIK), so we need to probe interest from browser vendors
... it seems that what is in the browser and what is in the spec is different, and we just need to double check
... things like app share vs window share
... Mozilla is in favor of having it; it's just a matter of process
Ekr: we need to either
... find more people to come in and help
... or focus on 1.0 first
... because of constrained resources
... Mozilla implements screen capture, but against an older version of the spec
what about capturing a MediaStream from canvas and video elements?
should be implemented in 33, behind a pref
Harald: what about the depth channel?
Dom: there are still a few
changes to be made on the specs.
... this document is a good candidate to be transferred to the
new timed media group
what about media capture in workers
Dom: corresponding work has just started, really.
Stefan: note on the media output document, the Web Audio group mentioned they will have some comments to contribute
Dom: I think the stats are missing from the list
Harald: I'll update the document
list, but I think it's on track
... let's go to the "non-document" part
... about timed data tracks
... there was an objection at Last Call because we
would not support subtitles e.g.
Stefan: yes, we decided to not
address that in the current spec
... as the spec can be updated / extended anyway
Justin: what about if we want to use DataChannel instead?
<hta> From chair: if remote people can't hear what's said, please complain here.
Dan: I was in that call actually
Dan: that information (subtitle)
is considered part of data
... as in part of the audio/video
... we were able to explain the use case that is driving our
specs today, and that there was no intention to restrict the
specs to that
... eventually we will explain how to extend the specs
Justin: how does it work with <video> HTML DOM element today?
Dan: that was interesting, he said that would work out of the box today
Dom: yes, there is a track element and a separate spec for that
Harald: last piece of undocumented wanted feature: use of data channel in web workers
Jesup: I'm writing the documents. In progress. The interest is real, from gaming industry and more.
<mreavy> just to support what jesup said (re: data channels in workers): there is LOTS of interest in data channels in workers. I'm being asked to get it on the Firefox roadmap.
Harald: ok, this is what I can remember; is there anything else that I forgot that should be in the list?
(silence)
Harald: ok, summary is now
complete.
... ok, now we need to sort the list, and decide which items we
want in webrtc 1.0
... logistics: we should work on this until lunch.
Harald: toward the end of the day, we have reserved a slot for Media Capture API
Harald: tomorrow, we might want to exchange about the next version
... not really decide anything
... just exchange, make a plan to make a plan for TPAC
... give the chairs some feedback to work with
Harald: we might also have some other TPAC-related items to discuss.
Harald: that's for tomorrow.
Harald: does anyone want to propose a change to the agenda?
Justin: One thing we talked about i don't see here: generic error feedback?
Harald and Peter: tomorrow morning has a slot for that.
Harald: alright, let's start then
[Peter presenting "WebRTC 1.0 objects at 2015 f2f" slides]
Stefan: peter, you want to speak about RTPSender/Receiver?
Peter: ok, there are a lot of objects
... so I'm going to start
... by those already in Chrome
... then some that have consensus but are not there yet
... and finish with the hard ones
Peter: DtlsTransport is DONE
Peter: RTPSender/Receivers parameters are also DONE
Peter: pending ones now
Peter: RTPSender/Receivers capabilities
Cullen: that might be too little.
Peter/Justin: you can always add things in the dictionary
Cullen: I want to do this, but I am not convinced that it is needed for any of the use cases we have today
... should it make it into 1.0?
Peter and others: this is needed for codec selection, which many users do by modifying the SDP today
Alexandre: yes, needed. Codec selection is a real use case today and requires that kind of change.
Peter: pending: SctpTransport (PR270)
Peter: do we merge PR 270?
Harald: no opposition, considered OK
Peter: needs discussion: IceTransport more (readonly) info
... - role
... - component
... - state
... - ...
Peter: all of this would be per ICE candidate, and not an aggregate at the peer connection level
Ekr: I'm concerned
... that there is a lot of new stuff coming in
... how to draw the line, conceptually?
Harald: stats have to be able to do counting on identifiable objects
Ekr: what is the line between
what goes in stats and what goes in there?
... or one should reuse the other, and not be a complete parallel / duplicate of the info
Harald: when we started with statistics, these things did not exist
... and stats might have been trying to stretch out to do things for which this new API is better.
Cullen: this API might be more suitable for a one-time pull, while stats might be used for recurring polls.
Ekr: we see so many people today with flaky connections that want to access this level of information when things don't work
Bernard: yes, it's duplicated in the implementation, but in any case, it's needed by the apps. And by the way, we need more stats in getStats(), like *real* statistics, that are not there today.
Justin: we could decide to have counters and stats (dynamic objects) in getStats(), and access static values from this API
Bernard: an example of one thing we might need: ICE consent stats. It's dynamic
Justin: RTT? would that be stats, or here?
Bernard: stats.
Varun: many examples, and a discussion with Justin about separating different kinds of values
... dynamic objects (bytes sent, RTT, ICE consent, ...) should be in getStats(); others which are more discrete could be coupled with an event and/or accessed through the capabilities/parameters
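The split Varun describes could be sketched as follows. The RTCPeerConnection here is a hypothetical stub standing in for a real browser object; the method names and stat shapes are illustrative, not from the spec.

```javascript
// Stub standing in for a real RTCPeerConnection (assumption for illustration).
const pc = {
  // Dynamic values: change between polls, belong in getStats().
  async getStats() {
    return new Map([
      ['outbound-rtp', { bytesSent: 1024, roundTripTime: 0.045 }],
    ]);
  },
  // Discrete, negotiated values: read once from parameters, no polling.
  getSenderParameters() {
    return { codec: 'opus', clockRate: 48000 };
  },
};

// An app-side snapshot separating the two kinds of values.
async function snapshot(pc) {
  const stats = await pc.getStats();
  const rtp = stats.get('outbound-rtp');
  return {
    dynamic: { bytesSent: rtp.bytesSent, rtt: rtp.roundTripTime },
    discrete: pc.getSenderParameters(),
  };
}
```

In a real app, the `dynamic` half would be polled on a timer while the `discrete` half would only be re-read after renegotiation.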
Cullen: we need more to debug ICE, if only.
Peter: need more discussion: RtpSender more info (PR273)
<vivien> https://github.com/w3c/webrtc-pc/pull/273
Cullen: could we have one FEC for several SSRCs?
Justin: I don't think we will have such a use case.
Harald: comment: the last 3 lines (rtx/fec/rtcp) are not yet decided features for 1.0. We might want to hold off before we put it in, at least until we decide to put it in the main spec?
Justin: I would disagree. We might not have control over it, but we still want to surface the info, if it happens. Today one has to parse the SDP.
... whether we will use FEC or not will be discussed later, but it's happening, and we should surface the info.
(typo correction: that was about RTCP and not RTP)
Peter: apart from typos and style, is everyone ok with that?
Peter: ok, next slide
Peter: "codec selection"
Peter: this mechanism is supposed to be used AFTER an offer / answer, when all the usable codecs for the call are known.
concerns about the cases where this could blow up in the user's face
like, if the JS can pin down a given codec for a given call
then for the next call, or during a renegotiation, it could be stuck with a codec the remote peer does not support anymore
shall we send an error/event?
shall we fall back to the normal mechanism (and ignore the JS setting)?
Peter: requiring the JS to reset every time there is a renegotiationneeded event
... would really be too heavy on JS
Harald: the amount of discussion we're having right now seems to indicate that more thinking is needed.
Peter: questions in my mind: when does the JS app need to interact with this API? Does it have to remember the values? Is it a hard constraint, or a preference? Say you prefer Opus, but Opus is not proposed.
Cullen: we had agreement some time ago: have a priority associated with each codec.
Peter: we have consensus that we want to have codec selection. The how is still unclear.
Harald: I think we agree that we
are speaking about two different things :)
... 1. how to influence the codec selection
... 2. which codec do we use right now, after O/A
Peter: one missing semantic is "how do you restore the browser's proposed semantics"
Ekr: use case: you negotiate Opus, then Opus is gone after a renegotiation. What happens?
Justin: you could call again and say "browser, you choose", or you could decide to manually choose again (bound to an error event)
Cullen: did you consider an API where, instead of setting a codec, you would be setting parameters of the payload type (as you could have the same payload for different params)?
... could you use something different than the payload type for the selector?
discussion around the corresponding use case.
clarifying question: if you have two lines with, basically, the same MIME type, would you be able to differentiate using setParameters? So you wouldn't have to add params in the selector.
Cullen: possibly ............
Cullen: Another way to ask the question is whether there is anything missing from the .setParameters(params) that would occur in the m-line.
Cullen: we need to hold on to it until we can support H.264 and VP8 (and possibly VP9)
Peter/Ekr: what is missing today?
Cullen: profile id, resolution
Peter/Ekr: based on their location in the SDP, you could have them using the current API
Cullen: I want to avoid SDP munging. I want to have everything in the API for anything we do today.
Bernard: can we go back to Harald's question: are we trying to achieve codec preference setting, or codec selection?
Ekr: codec selection.
Harald: post negotiation selection
Cullen: we heard from users that they want both. They also want pre-negotiation selection (a.k.a. codec preference).
Harald: I think this approach is good for post-negotiation selection, and we need something different for pre-negotiation selection.
Alexandre: in the post selection case, you have all info needed to make a final decision
Mathieu: well, I'm a grown-up; if I want to select before I know what the remote peer's capacity is, so be it, and if it doesn't work, it's my choice.
Cullen: use case.
Jesup: I agree with Cullen; I want to add a use case, where a 3rd user joins a call that already has a codec nailed down. Moreover, there is more demand for pre-negotiation selection.
Peter: even if we go for pre-negotiation selection separately, shall we merge this ?
Dom: what happens in case of renegotiation? I would prefer that the parameters are erased every time, as that is closer to what happens today, where the user would re-munge the SDP every time
Peter: can someone put a note on the PR with the two notes? And if I address it, can I go ahead with the merge?
Cullen: who reviewed it?
Martin: I did
Peter: I'm happy to make the codec list re-orderable
... so people can shuffle the codec list
Martin: that would work for the pre-selection as well.
... I'm writing up something right now, but .....
Stefan: martin do you want to volunteer?
Harald: shall we schedule some time tomorrow to look at what you will have surely written by then?
Martin: well, ... hum, .... there is a meeting happening ......
Martin: I would need the post-negotiation part to be in before i can work on it
Harald: ok, Peter and Martin to work on that, and if we have something by tomorrow morning, we can add a line to the agenda during the agenda bashing.
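The re-orderable codec list Peter offers could be sketched as a pure helper: the app moves its preferred codec first, and passing null restores the browser-proposed order. The function name and codec shape are illustrative, not from the spec.

```javascript
// Reorder a negotiated codec list to prefer a given MIME type.
// Passing null means "restore the browser-proposed order".
function preferCodec(codecs, mimeType) {
  if (mimeType === null) return codecs.slice(); // browser-proposed order
  const preferred = codecs.filter(c => c.mimeType === mimeType);
  const rest = codecs.filter(c => c.mimeType !== mimeType);
  return preferred.concat(rest);
}
```

This keeps the preference a re-orderable list rather than a hard pin, so a codec that disappears after renegotiation simply falls out of the list instead of wedging the call.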
[10 min break]
Peter: needs discussion: RtpSender more parameters (for simulcast and svc)
Peter: there will be a full discussion tomorrow on simulcast, so we could save that for then.
Justin: in this micro-change, you propose a mechanism to automate the scaling down of resolutions (spatial and temporal)
... and combined with what you will propose later, one might then handle simulcast?
Peter: yes
Ekr: question: how do those priorities interact? They don't seem orthogonal.
Cullen: just a detail maybe
... i would rather have the scales to be an array. For example
if you have thumbnail, it might be very small, and not a power
of two
Peter: it can be any integer
Cullen: oh, ok, fair enough
... shall it be a flag to decide if it should prefer temporal
or spatial scaling?
Peter: what should the default be?
Cullen: well, for talking head, spatial, but for screensharing, the opposite ....
Justin: this is fine tuning, we can deal with it later.
Cullen: ok
Jesup: i can see three cases: spatial, temporal, browser decides.
Ekr/Jesup: we should use a multiplicative factor instead.
<jesup> Otherwise, change name to "resolutionDivideBy" etc
Varun: and bias is what I want?
Justin: yes, it's what you want to prioritize.
Bernard: the reason why we chose an integer scale is because I want to use one pixel out of n. It gives fewer artifacts this way.
can we still do 0.5 (e.g.)?
Jesup: in the PR, I left it at "DivideBy", mentioned it should be more than 1, and specified we should round down.
Justin: should we return a
NegotiationNeededError or really refrain from upscaling (i.e.
having a value less than 1, for temporal resolution)
... use case: people might want 720p even though the content
itself is not 720p.
Jesup: if there is a written use case for that, I can live with that.
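The semantics Jesup describes for the PR (divide-by of at least 1, round down) can be sketched as a pure function. The name follows Jesup's "resolutionDivideBy" suggestion; treating a scale below 1 as an error reflects the "no upscaling" position, which is still under discussion.

```javascript
// Apply a "divide-by" resolution scale: scale must be >= 1 (no
// upscaling), and the scaled dimensions round down, per the PR.
function scaleResolutionDownBy(width, height, scale) {
  if (!(scale >= 1)) {
    throw new RangeError('scale must be >= 1 (upscaling not allowed)');
  }
  return {
    width: Math.floor(width / scale),
    height: Math.floor(height / scale),
  };
}
```

E.g. a 1280x720 track with scale 2 yields 640x360, and odd dimensions round down rather than up.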
Justin: say we set a parameter that tells the browser to scale down when bandwidth goes down. How do I know what resolution is eventually used?
Peter: that's a good point; in the simulcast case, what happens is not specified.
Justin: exactly, in case of simulcast / svc, one might want to drop the higher resolution, instead of having each stream adapt.
Jesup: we could have the spatial resolution scale adapt immediately, while the frame rate would be adapted later
Justin: that s sensible, but in the use case of simulcast / svc there are other problems. We might want to postpone this discussion until tomorrow during the corresponding session.
Martin: clone => constraint
Justin: what martin said: make sure that the input stream has the right resolution.
Martin: moreover, these parameters do not make sense in the case of simulcast; otherwise, it's good as it is.
Cullen: yeah <bla bla bla> it's fine.
Varun: and IETF is expecting some info, so that could be it.
Peter: needs discussion: PC.connectionState (PR 291)
[See Peter's additional slides on "PeerConnection errors"]
Peter: use case: discrepancies between DTLS and ICE states
Justin: we should be gentle with
existing applications.
... iceconnectionstate (in chrome) is already that.
Peter: well, there are some differences: checking / connecting
Justin: well, that's where I'm getting concerned.
Martin: I think we should keep ice connection state, and make this a new API.
Varun: what problem are we solving?
Peter: today, if DTLS failed, ICE state is still connected, and the application does not know.
... so the new enum is closer to what the application expects: if anything (whether ICE or DTLS) failed, then the status is failed.
Justin: what i would like is for
us to come up with a drop in equivalent (not to break existing
code) but which would aggregate dtls and ice states together.
Now, in case of failure, you can then use DtlsTransport and
other transport objects should be used to know if ALL failed,
or a few failed.
... I want not only the migration path to be documented, but
the path to be minimal.
Peter: shall we put it in 1.0?
Justin: well, every time you make a mistake, you face the same problem. In 1.0, connectionState should keep the same states, but return the combined states. We could reuse the algorithm that Peter proposed; that is what applications care about today.
... we should revisit this in 1.1
Ekr: yes, we should keep the current semantic (like checking), and make it cover DTLS and ICE.
Martin: why?
Justin: the modification is so minor, while the impact on existing software is bigger. I don't care.
Dan: if we face a mistake, let's fix it.
Varun: what happens if, after being connected, you lose connectivity?
<dom> we're using different terms than for the data channel readState too
Cullen: we need to keep API simple and intuitive. We need to write this thing for people who are using it.
Harald: interesting point on
completed
... I wrote the test, and completed is never reached
Justin: yeah, well, no big surprise here; that's why I advocated we should just remove it.
Dom: when there is a matching state, we should reuse the existing ones
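The aggregation Justin and Ekr converge on (keep the existing state names, but let a DTLS failure surface as "failed") could be sketched as a pure function. The state names follow the current iceConnectionState enum; the exact combination rules here are an assumption, not spec text.

```javascript
// Combine ICE and DTLS transport states into one application-facing
// connection state, keeping the existing iceConnectionState vocabulary.
function combinedConnectionState(iceState, dtlsState) {
  // A failure on either transport means the connection has failed.
  if (iceState === 'failed' || dtlsState === 'failed') return 'failed';
  // Still establishing either transport: report "checking".
  if (iceState === 'checking' || dtlsState === 'connecting') return 'checking';
  // Fully up only when both transports are connected.
  if (iceState === 'connected' && dtlsState === 'connected') return 'connected';
  return iceState;
}
```

This is the drop-in-equivalent shape Justin asks for: existing code that only checks for 'connected' or 'failed' keeps working, but now sees DTLS failures too.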
Peter: next slide
Peter: needs discussion: PC.onwarning/onfatalerror
Peter: the original motivation was to handle the DTLS/ICE problem we just spoke about, so I'm not motivated about this one anymore
Justin: are we going to revisit this this afternoon?
Justin, let's keep it for later.
Peter: PeerConnection warmup
Cullen: ok, can we revisit the last IETF discussion about the active flag?
Peter: depending on whether you have a track or not, it will be different.
... you could have a null track, and set things up first, even before (while waiting for) getUserMedia.
what about consent?
Justin: there is a presentation on that this afternoon.
Martin: we have to deal with that anyway, as audio does not always come from gUM.
Peter: there are more slides coming.
Harald: so are we merging this or not?
Martin: there are other pull requests there that we would need to address first
Peter: there are some questions about which IDs to use; it's all in the PR
Martin: I'm concerned that if we do not answer those questions we cannot conclude
Martin: in any case, we need to note that if you use replaceTrack with an incompatible encoding, it will not work.
Martin: question: firefox has a way to provide a "fake track". would we still need this with the warmup setting?
Peter: I think you would NOT need it anymore. warmup would replace it indeed for a "no stream" / black stream.
Harald: I think we have a rough consensus
"create a nothing track" is being put on the todo list, to be revisited later.
ErikL: who wants this, and what would we need to change?
Dom: can you have a full-fledged PR ready, with the use case?
Martin: it's ok, except suddenly one property is nullable, and the corresponding logic (how to handle the case where the stream is null) needs to be documented.
Justin: I'm just concerned about wasting bandwidth
Harald: final decision here
<tedh> Send a timed-text track with the phrase "This text purposefully left blank"?
Harald: yes we want to support PC
warmup here, seems to be the general consensus
... the question about how to set a null track should be left
for later
Peter: if we return to it, can someone stand up to write a full proposal, so we return to it in better condition?
Peter: reviewing PR 279
... the PR is to add a createRtpReceiver method
... it also brings other issues, documented in the slide deck
... a proposal is to indicate if you want to send or receive or both
... in case of a warmup, when you call addMedia you can specify send/receive
Justin: createOffer is from an older age; this seems a much nicer way to me
... this is a nice way to connect with an m-line
Cullen: it looks great compared to what we need to do today
Harald: the abstraction of SDP is bidirectional
Martin: I am not seeing any value to this except hiding @1
Peter: here, by calling stop, it makes it clear you are closing the line
Justin: there was no way on the sender side to know the other side had stopped its track
... the problem was what happened during the next offer, as the sender did not know it was shut down by the receiver
Martin: all of that can be done with sender and receivers
Justin: since SDP has no notion of only send we should not add that in our model
Martin: you know from the signaling channel that the port has been set to 0
Harald: if I had this abstraction, I could explain some of our magic things with it. addTrack becomes easy to explain
Martin: I am concerned about the value we are getting in exchange for this. What feature are we going to add with that that we don't already have?
Harald: if we have the over-the-line object, then I can stop having stuff I do on senders that affects receivers (and vice versa); having objects that don't affect each other seems a good feature
Martin: we can already do that, this is adding nothing new to the API
Adam: this seems like an improvement but not a necessary one
Harald: I like it because it stops us from adding more cruft; I see this as a simplification for stuff we are currently adding
Dom: are you talking about createRTPSender?
Harald: yes
Ekr: so what happens if you create rtcpreceiver @3
Justin: it only adds an m-line or reuses one
... previously we had no way to set the port to 0; we could set it to receive-only and the other side could switch to send-only. In SDP, the inactive state is different from having the port set to 0
Martin: you can stop a track on the receive side and @4
Jan-Ivar: removeTrack removes the sender, yes
Martin: not sure if it is implemented in Firefox; it is not in Chrome
(Harald reading the removeTrack definition in the spec)
Peter: the JSEP has a big TODO on this
Martin: I thought we had agreement on track stop
Justin: we did but as indicated
in the bug tracker there are some issues
... so here we are trying to have a clearer modeling
... there really should be an object a pair that represents
send/receive
ErikL: are more people interested? it seems to be something to agree between google and mozilla here
Justin: it will be less of a "do what I mean" API; it can be made more explicit.
Cullen: this would make our life easier
Justin: in JSEP we always had the issue of having bi- and unidirectional m-lines
ErikL: do you think it should be in 1.0?
Justin: I think it is worth doing
Ekr: 3 options:
... 1) leave things as is
... 2) go with createRtpSender/Receiver
... 3) use this addMedia proposal
Justin: it currently requires application to have deep knowledge of SDP
ErikL: Bernard, what about Edge?
Bernard: no SDP in Edge, so it doesn't concern us
Dan: it is interesting because it might make some of the things we discussed easier to do. So if it simplifies things, then it is worth doing
Adam: I see this as cleaner. My concern is that we are going to find bad corners
Martin: I want to decouple
senders and receivers even further
... we made a lot of effort to make this abstraction
strong
... to make sure those things are as separate as possible
Justin: before, I was fine hiding things, but I now think it is better to give people what they want
Martin: I don't agree with the principle but if accepted in the spec I'll deal with it and implement it
Dan: would it be useful to discuss the other related things to see how they are affected by this proposal and come back to this
Jan-Ivar: it seems like a choice between a leaky abstraction and a low abstraction
Ekr: maybe what we need is a work example from peter
Peter: I have more use cases examples not shown in the slides I can write down
Ekr: it would motivate us if we had more cases to make the comparison
Several people: good idea
Harald: so we'll return to this on Thursday
Peter: so there are 4 approaches: api as is today, addMedia, createRTPSender/Receiver, EmptyMediastreamTrack
[Jan-Ivar presenting slides for replaceTrack]
Jan-Ivar: this is about replaceTrack, which we discussed earlier
Jan-Ivar: we agree to have
replaceTrack
... no renegotiation needed, it changes things on the wire
Jan-Ivar: an edge case: when you have different audio and video channels
... should we fire a negotiationneeded event?
Jan-Ivar: a pro would be that it is less complex; a con is that it can fail when it had the info to succeed
Jan-Ivar: negotiationneeded event: pro: potential to just work; con: more complex and need to figure out negotiation failure
Justin: when renegotiation is needed the promise can never resolve
Jan-Ivar: with the event, it would not resolve until the negotiation succeeds
Adam: @6
Adam: you only do replaceTrack instead of removeTrack/addTrack if you want to avoid renegotiation
Stefan: you could replace an audio track by a video track
Martin: I think we should fail on that one
Harald: an odd reason to like this: I can easily explain what addTrack does, so it adds abstraction
Peter: we don't have proposal for anything like that
Justin: what is the PR?
Stefan: 237
<jesup> talk to the room mike please...
<adamR> Jesup: the chairs have instructed us not to use the microphone, as it was hindering conversation.
<sherwinsim> something happened in the last 10 minutes where we can no longer hear people very well
<jesup> right, they asked people to speak up so it's picked up by the room mic
<adamR> Erik: I think setting the mic down in front of the speakerphone thingy may have blocked off the microphone for our remote participants. You may want to remove or move it.
<Erik> ok, thx will do
Dom: the fact that the error gives you more power over how to handle renegotiation looks to be a good reason
Ekr: we are trading off people getting slightly different behaviour vs people who don't know what they are doing
Dom: try without renegotiation, and if it fails then renegotiate
Harald: which state does the application see when negotiationneeded fires?
Harald: it sounds to me we have rough consensus that we fail
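The rough consensus (replacing with a track of a different kind fails rather than renegotiating) could be sketched with a stub sender; the sender object and track shape here are hypothetical, standing in for the real RTCRtpSender.

```javascript
// Stub sender illustrating the agreed replaceTrack behavior:
// same-kind replacement succeeds without renegotiation,
// cross-kind replacement (audio <-> video) rejects.
function makeSender(track) {
  return {
    track,
    replaceTrack(newTrack) {
      if (newTrack.kind !== this.track.kind) {
        return Promise.reject(new TypeError('track kinds do not match'));
      }
      this.track = newTrack; // no renegotiation: change happens on the wire
      return Promise.resolve();
    },
  };
}
```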
[Bernard presenting "CSRCs and AudioLevels in WebRTC 1.0" slides]
Bernard: on #276, do we agree on Harald's proposal?
Martin: maybe we should not send this if it is not encrypted
Bernard: ok we have a resolution
on this
... #4 usage scenario conversation with an MCU which is mixing
instead of simply forwarding
Bernard: this proposal was discussed in the ORTC CG
Bernard: concerns about load for
people polling all the time
... polling at 50ms is good
... when does it cache out? could keep the last 5 minutes
Cullen: something useful in webex is to know who is talking, critical for conference systems
Martin: I am concerned about the timestamp
Bernard: we did not want to have it on every packet
Adam: what I hear is that it has low cost, is easy to implement and is needed for teleconference folks
(harald proposing a quick vote)
Harald: so most of us are in favor
Stefan: so who creates a pull request
Bernard: issue #6
... audio level info
... it is a slight refinement of what we just talked about with
CSRCs
... it is mixer to clients
Justin: people are polling stats and getting a multistream containing also energy info, so if they have an easy way to get only the audio level it simplifies things
... could also be used to decide which video to present
Bernard: for selecting dominant speaker you might also want to analyze the audio
Cullen: this proposed UI use
case looks quite nice
... I'll do a PR for #4, and another that tackles #6
Bernard: this API is not for selecting the dominant speaker
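The mixer-to-client shape discussed for #4/#6 could be sketched as a receiver that caches the last-seen audio level per CSRC, so the app can poll (e.g. every 50 ms) without going through getStats(). The receiver object and `_onPacket` hook are hypothetical stubs; `getContributingSources` mimics the shape discussed in the ORTC CG.

```javascript
// Stub receiver caching the most recent audio level per contributing
// source (CSRC), as a mixer would report it.
function makeReceiver() {
  const levels = new Map(); // csrc -> { source, audioLevel, timestamp }
  return {
    // Stub hook: called by the (imaginary) RTP stack per packet.
    _onPacket(csrc, audioLevel, timestamp) {
      levels.set(csrc, { source: csrc, audioLevel, timestamp });
    },
    // What the app polls: last-seen level for each contributing source.
    getContributingSources() {
      return Array.from(levels.values());
    },
  };
}
```

The cache keeps only the latest value per CSRC, which matches the "low cost" point Adam raises: polling reads a small snapshot instead of per-packet data.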
(20 minutes break)
[Ekr presenting Slides on WebRTC IP Address Privacy]
Ekr: ICE gathers all the host's IP addresses
... so the webserver knows your public address, but also all your other addresses, even if you are behind a VPN
... brings privacy issues
... first is fingerprinting
... enables you to distinguish multiple people that are behind
the same NAT
Cullen: the ports would have told you that
Ekr: it is not useful to track
you across multiple networks
... second issue: identifying IP addresses hidden by VPNs
... only happens with "split" VPNs, but those are widely used
... some solutions to counter this
... same issue when people are behind proxies
... also security impact as you discover addresses on local
network
... also possible using XHR
... we and Google went over several options
... indicator when webrtc is in use, ask consent for any webrtc
access, more restricted ICE gathering, extension to
disable/restrict WebRTC
... developing on restricted ICE gathering
... only show publicly visible addresses. Chrome and Firefox
are deploying this.
<jesup> https://wiki.mozilla.org/Media/WebRTC/Privacy may be useful (about what we've done in Firefox)
<jesup> so far
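The "restricted ICE gathering" option (only expose publicly visible addresses) amounts to filtering out RFC 1918 host candidates. A minimal IPv4-only sketch, with illustrative function and candidate shapes:

```javascript
// True for RFC 1918 private IPv4 addresses:
// 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
function isRfc1918(ip) {
  const [a, b] = ip.split('.').map(Number);
  return a === 10 ||
         (a === 172 && b >= 16 && b <= 31) ||
         (a === 192 && b === 168);
}

// Restricted gathering: drop host candidates with private addresses,
// keeping only publicly visible ones.
function restrictCandidates(candidates) {
  return candidates.filter(c => !isRfc1918(c.ip));
}
```

TedH's middle path would refine this filter: keep the private address when it is the primary interface, and suppress only secondary ones.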
Cullen: could you explain the consequences of those changes?
Ekr: if you are a relay it would
not work
... a proposal discussed with Justin: if camera and mic are granted, then you do normal ICE gathering, otherwise the restricted one
... could have an option to control this
<jesup> My personal NAT (standard FiOS router) doesn't hairpin, BTW
Justin: we are gathering metrics in canary
Adam: people might request camera prompt just to get access to full ICE gathering
Ekr: BTW it is a functionality flash and java already have
Mathieu: what happens when you request camera/mic access and start a connection? is there an upgrade path then?
Adam: if you are using warmup we are only going warmup the public addresses
Ekr: if you have TURN it will start with low quality, and if you don't the call would fail
Mathieu: but will the browser upgrade the connection after ?
Ekr: yes that is something we should do
TedH: the current prompt does not make clear to users that by granting access to their camera they will also reveal private network information
Mathieu: would the proposed indicator be on all the time ?
Ekr: probably on during gathering, and kept on for some time after
TedH: a middle path: we don't suppress the local address if it is a primary address, but we suppress it if it is a secondary address
Ekr: we could certainly do that
Cullen: for internal company use, people will not be happy that traffic will first go through an outside TURN server
Ekr: I like Ted's suggestion, but there should be some condition to get me out of the default box behavior
... perhaps in the same dialog: "is it ok if I view your private IP address?"
... we should be able to detect the simple case
... having two user prompts would not be good
TedH: I think you can say camera, microphone, and network location
Martin: this also gives geolocation
TedH: with public ipv6 addresses and split VPN, people will be surprised
Cullen: for those using a split VPN we are offering them a solution by using a browser extension (or a browser preference)
TedH: I disagree with Cullen that if you are running with a VPN you have to install something else
Cullen: split VPNs don't bring you privacy, and the VPN vendors tell you that explicitly
TedH: if you are doing a data channel you will need to give a user prompt
Jesup: even if you strip the 1918 candidates, that information can already be found (XHR, ...), and from that you can determine the entire map
Ekr: what we do in Firefox is take the 0.0.0.0 address, connect to Google DNS, and then find the public address
... I feel we could make the change proposed by TedH and ship it in beta to test it
Justin: everything that involves reading the routing table I am usually concerned about
... what if the route to 8.8.8.8 is different from the route to the TURN server?
Ekr: if you are multihomed you have multiple routes; in Chrome we show all of them, whereas Firefox will show only one
Jesup: @8
Justin: people are using webrtc detection to detect abuse for people behind proxies
Mathieu: what about when you use SOCKS proxy
Justin: when you use a web proxy it is not clear to us if you also want your webrtc traffic to go through your proxy
Ekr: if you are exposing 1918 addresses it will almost always work
TedH: if you combine what I suggested and what adam suggested I think it will answer most issues
(Adam's proposal was to resolve the STUN server's IP address instead of a random address (here the Google DNS IP address))
Ekr: you bind a 0.0.0.0 and all your 1918 addresses
Justin: the thing we did not talk about is the permission grant for enumerating networks
... there is work in W3C on a Permissions API
Martin: It is asking the browser about the state of a given permission, then requesting that a permission move from one state to another
Justin: should this be part of our matrix decision here
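The Permissions API pattern Martin describes (query the current state, then request a transition) could be sketched with a stub. The permissions object is a hypothetical stand-in for navigator.permissions; the permission names and the `_grant` hook are illustrative, not standardized.

```javascript
// Stub of a permissions store: query() reads the current state of a
// named permission; _grant() models a state transition (e.g. after a
// user prompt).
function makePermissions(initial) {
  const states = new Map(Object.entries(initial));
  return {
    query({ name }) {
      return Promise.resolve({ state: states.get(name) || 'prompt' });
    },
    // Hypothetical hook: move a permission to "granted".
    _grant(name) { states.set(name, 'granted'); },
  };
}
```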
Summary written by Ekr on the whiteboard:
... 1. Don't fix proxies
... 2. Don't need to hide 1918 addresses
... 3. "Ask for network" gUM argument is bad
... 4. Need a way to get out of the box
... 0. Affordances for extensions are good
[Martin presenting Screen Share slides]
Martin: extension to gUM to share
screen content
... considerations: security, UI concerns, user awareness, API
extensions
... issue #28 no persistent permission for screen sharing
... we might want to say never share it, or default always
prompt user
... no persistent allow state
... if blocked then the site will immediately refuse the
request
Justin: do you call the gUM prompt, or do you use the Permissions API?
Martin: we would use something
similar to gUM, as users will need to select what part of
screen will be shared
... sharing your terminal is benign, whereas sharing your
browser is risky
Keith: a common use case is to share a document within a browser, so you share a tab
Shijun: also problem of navigation
Martin: no different from a security perspective; as soon as you are sharing your screen you open Pandora's box
... there will be an indicator to indicate that a particular tab is being shared
Shijun: navigating between tabs might actually mean switching between processes
Martin: yes, that could be a problem; that is something to deal with in each implementation
Dom: you are saying this will remain unspecified in the spec
Martin: each browser would have to decide what is under which class
Bernard: certain apps might also require elevated permissions
Ekr: the user is smart enough to know what he is sharing
Martin: to answer Bernard: within the browser there are a lot of control options
Dom: for elevated permissions, do we intend to have something interoperable?
... I don't want Firefox to choose what websites are allowed to use screensharing
Martin: we actually discussed that in our legal team
Dom: my point is that, as a Web app developer, I don't want to implement 10 extensions to deal with the permissions of each browser
... is that something we want to tackle?
Alexandre: in November from the app dev perspective we said it would be nice to have a common way
Ekr: Chrome only allows you to do it through an extension, and in Firefox you need to be on a whitelist
Martin: if you want to know the current status, you will have to query the Permissions API
Dom: your initial plan was that there would be no need for individual addons
Martin: still the plan; I don't want to have to install a different add-on for each individual site
... showing other smaller issues
Shijun: if multiple windows do you have multiple video tracks?
Martin: that is one of the questions; currently it's only one track
Cullen: how do you choose the application or the window?
Martin: you have to choose whether you want to share applications or specific windows, so you don't get a combined list
... we need window sharing to be the easiest thing, but handle other modes users might need