Meeting minutes
EncodedChunk metadata WebCodecs#245
Bernard: Guido's been working on proposals for adding metadata to video and potentially audio
… This is an old issue, is it still relevant given Guido's more recent issues?
Eugene: The metadata being added is for VideoFrameMetadata, whereas this issue is about encoded chunks. Not clear that the metadata Guido is adding is helpful here
… It's not something attached to the EncodedVideoChunk itself, it's in the output callback from the VideoEncoder
… There's lots of data there, e.g., SVC data, possibly alpha (not in current implementations)
… But if the encoded chunk is from other sources, e.g., over the network, then no metadata is attached to the EncodedVideoChunk, so not so useful
Bernard: Makes sense. The issue says we'll harmonise, but I don't think that's the current direction. We just want constructors
Guido: That's the proposal Harald is working on, to construct one from the other, in both directions
Bernard: Do we need EncodedVideoChunk to have metadata to support WebRTC? The RTC encoded chunk doesn't have to be the same
Eugene: I agree, constructing EncodedVideoChunk from RTCEncodedVideoFrame can and should be done. I don't think we need yet another EncodedVideoMetadataChunk to encode everything in RTCEncodedVideoFrame; those are different things
Bernard: Wanted to confirm the direction we're going
Eugene: There'll be constructors, and nothing further for video chunk metadata
Bernard: Makes sense, and I can link to Harald's proposal
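For illustration, a minimal sketch of the direction discussed, assuming hand-construction inside an encoded-transform worker; the dedicated constructor Harald is proposing does not exist yet, and the timestamp shown is a placeholder supplied by the caller rather than a defined mapping:

    // frame comes from an RTCRtpScriptTransformer's readable stream.
    // frame.getMetadata() (SSRC, payload type, dependencies, ...) stays on the
    // RTC side; per the discussion it does not need to be mirrored on the chunk.
    function chunkFromEncodedFrame(frame: RTCEncodedVideoFrame,
                                   timestampUs: number): EncodedVideoChunk {
      return new EncodedVideoChunk({
        type: frame.type === "key" ? "key" : "delta",
        timestamp: timestampUs,  // presentation time in microseconds, supplied by the caller
        data: frame.data,        // payload bytes are copied by the constructor
      });
    }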
w3c/webcodecs#813 Add captureTime, receiveTime and rtpMetadata to VideoFrameMetadata - guidou
RESOLUTION: Merge webcodecs#813 to close issue webcodecs#601
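By way of illustration, a small sketch of reading the new entries on the consumption side, assuming a frame from a camera or WebRTC source whose UA populates them; the field names follow the issue title, and presence depends on the frame's source:

    function logTimingMetadata(frame: VideoFrame): void {
      const meta = frame.metadata() as Record<string, unknown>;
      // entries are only present when the source can provide them
      console.log("captureTime:", meta["captureTime"]);
      console.log("receiveTime:", meta["receiveTime"]);
      console.log("rtpMetadata:", meta["rtpMetadata"]);
    }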
webcodecs#855, Audio Metadata
guidou: Adding similar fields for audio metadata. Some of them already have them.
… Also have them for the raw version. No RTC equivalents.
adoba: Comes up with lip sync, if you only have it for video
Eugene: Arguments for video apply to audio as well
ACTION: guidou@ to prepare a PR
chrisn: Take the same approach as video frame metadata, use a registry
… Some slight duplication between audio and video fields.
Guidance for user-defined VideoFrameMetadata entries webcodecs#849
GitHub: w3c/
chrisn: Should there be guidance for users adding their own fields?
… e.g., namespacing, and look at how they are returned. They are always copied.
adoba: Should implementations only deal with the dictionary entries, or take everything?
Eugene: No way for developers to know which fields are supported and which are not, if we only copy the known entries
… Would prefer to keep only the metadata entries that are known and supported by the UA, but can see both sides
… I do see the argument from the developer's point of view
… People can already add whatever they want to VideoFrame
adoba: If there is a custom field, some browsers may not return it because it's not in the registry
chrisn: Does it clone the metadata?
Eugene: Yes, whatever is there is copied and returned by metadata().
… Currently, we need to copy everything and return it.
Eugene: Is there a precedent in other standards?
ACTION: Eugene to research if other specs do something like this with registries.
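To make the question above concrete, a hypothetical sketch, assuming VideoFrameInit's metadata member can carry an app-defined, namespaced entry ("com.example.sceneId" is made up); whether a UA preserves the unknown entry when metadata() is copied back is exactly what webcodecs#849 asks:

    // clone an existing frame, attaching a custom entry next to the registry ones
    const customMeta = { ...frame.metadata(), "com.example.sceneId": 42 };
    const tagged = new VideoFrame(frame, { metadata: customMeta });

    // metadata() returns a copy; registry entries round-trip, but a custom entry
    // may be dropped if the implementation only handles known dictionary members
    const sceneId = (tagged.metadata() as Record<string, unknown>)["com.example.sceneId"];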
Opus Packet Loss Concealment webcodecs#558
See TPAC discussion
adoba: Opus is adding advanced PLC ("Deep Red")
… Make sure that WebCodecs supports it
… ffmpeg does not have a way to signal the need for concealment; you pass an empty frame instead
… Other codecs don't have concealment (not generalizable)
Eugene: Having gaps in timestamps seems reasonable. If we lose a network packet, then we have the next packet with a future timestamp.
… Keep feeding packets with timestamps; the Opus decoder should be able to see the gaps.
… Up to decoder to decide behavior, might skip, or emit silence, or guess missing audio
adoba: Specific to Opus. For other codecs, do concealment/recovery externally.
Eugene: May want to spec what happens for chunks with gaps.
adoba: Different types of concealment, original vs deep concealment
Eugene: Wait for Paul's opinion on this
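A minimal sketch of the gap-based approach being discussed, assuming packets arrive from the network with their own timestamps (receivedPackets and its fields are hypothetical); note that WebCodecs does not currently define what a decoder must do when it sees a timestamp gap, which is the open spec question:

    // hypothetical source of depacketized Opus payloads and their timestamps
    declare const receivedPackets: { timestampUs: number; payload: Uint8Array }[];

    const decoder = new AudioDecoder({
      output: (audioData) => { /* render or buffer the decoded audio */ audioData.close(); },
      error: (e) => console.error(e),
    });
    decoder.configure({ codec: "opus", sampleRate: 48000, numberOfChannels: 2 });

    // a lost network packet simply leaves a gap in the timestamp sequence,
    // rather than being replaced by an empty chunk
    for (const pkt of receivedPackets) {
      decoder.decode(new EncodedAudioChunk({
        type: "key",                 // Opus packets are independently decodable
        timestamp: pkt.timestampUs,  // microseconds; gaps indicate lost packets
        data: pkt.payload,
      }));
    }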
AOB
Chris: Marcos and I have been doing the self-reviews for the W3C horizontal review, e.g., accessibility, internationalisation, privacy, etc.