See also: IRC log
<trackbot> Date: 28 October 2014
<scribe> Meeting: WebFonts f2f meeting, California
<scribe> scribenick: ChrisL
zoltan: excited to see woff
progress. http compression is helped by a command line tool
separate from font use. compiled and ready to go
... ideas to improve the faster codec
... slower one is the open source one, has better
compression
vlad: will the faster one be shared also?
zoltan: yes
vlad: compression is good but slow, people complain
zoltan: faster codec uses a lot of ram
raph: same wire format?
zoltan: yes, but does not use all compression features
raph: any spec changes for this?
zoltan: no
vlad: I have a gift. it's a big font. please compress it; it crashes on compression
ChrisL: good to get the brotli results for monotype, adobe, and the two font providers
zoltan: paged mark adler and
asked if he would review it, no response yet
... he reviewed webp lossless, is a great reviewer
ChrisL: asked j-l gailly?
zoltan: yes, no response
... he is not working on compression since 2006
... mark adler is still actively involved
kuettel: show the new top-level brotli site
<raph> https://github.com/google/brotli
<raph> (soft launch for now)
(discussion on open sourcing the fast codec as well)
<kuettel> Kenji found the following JavaScript wrapper implementation via Google Alerts: https://www.npmjs.org/package/brotli
kbx: there is a javascript one
(discussion on whether this is a wrapper, a port or a new impl)
links to http://www.ietf.org/id/draft-alakuijala-brotli-02.txt which is the third version
behdad: want to make sure the chrome deployed code matches the repo
Vlad: binary mode should not be used for metadata, myfonts is adding woff metadata
zoltan: decompression is the same either way
raph: tools should be updated to use the best mode
behdad: any plan to address compression startup time?
zoltan: not planned currently but we could do
kbx: open bug on chrome to switch to new github brotli, do it now?
zoltan: yes
behdad: chrome uses the copy in ots
... which gets used in firefox and chrome
... what is the testing plan to avoid regressions over time
kuettel: current reference is good at doing round trip sanity checks
behdad: hard to see if patches break anything
(discussion on open sourcing the brotli test data)
kuettel: there is a new project for woff2 which is unpopulated yet
behdad: reference should be updated and ots updates from that
raph: and right now there is some divergence
behdad: firefox uses the github fork, chrome has not updated yet
zoltan: mostly encoder changes, minor improvements
behdad: we need to get brotli to users for adoption
Vlad: best for brotli to have the fast compression released asap, otherwise they think it's always slow
behdad: startup takes two seconds, for the text mode
raph: static content can use the
slower compression for best compression
... startup hit is for text mode only
zoltan: can remove dict
transforms to speed it up
... 100 times faster
... lose 0.5% on compression
Vlad: for dynamic compression of a font subset, that balances some of the compression loss
kbx: is there an advantage to
using text encoding?
... for html, css, javascript
zoltan: much better for a few k of text
Vlad: between text mode and binary mode brotli?
zoltan: uses utf-8, using static dictionary or not
... static very good for vocabs like css, html etc
... can give 30% in binary size
... decompress is faster if a dict was used. so slow encode
gives fast decode
... memcpy of dict entries
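(Editor's sketch of the quality/speed trade-off zoltan describes. brotli is not in the Python standard library, so this uses zlib purely as a stand-in: higher compression levels give smaller output but run slower, the same shape of trade-off as brotli's fast vs slow codecs, and both settings share one wire format on decode.)

```python
# Illustration only: zlib stands in for brotli, which is not in the
# Python standard library. Higher levels = better ratio, slower encode.
import zlib

data = b"font tables and css vocab like font-family, font-weight " * 200

fast = zlib.compress(data, 1)   # fast codec analogue: quicker, larger
best = zlib.compress(data, 9)   # slow codec analogue: slower, smaller

assert len(best) <= len(fast)
# same wire format either way: both decompress to the identical input
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```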
Vlad: at some point can brotli be an ISO standard via MPEG, lots of compression experts
zoltan: could do but don't want to spend a lot of time there
Vlad: ground is prepared, takes about 18 months elapsed, process wise
behdad: not clear what the value of ISO standardization is
Vlad: iptv for example uses only ISO standards
(discussion on value of iso standards)
(chris updates woff spec to point to brotli -02)
<raph> (shouldn't you point to http://www.ietf.org/id/draft-alakuijala-brotli which is always the latest draft?)
(reminiscences on PFR)
(hmm maybe)
(except the reference needs a date)
<raph> (ah)
raph: size of reconstructed tt
data is tricky as many decompressions with same semantic
meaning on point data
... encoding of flags bytes, alignment and padding
Vlad: and coordinate point representation
raph: more about the flags than the coordinates
behdad: we already avoid unnecessary padding
raph: goal on nominal size is to allocate a chunk of memory and know how much is required for an unsophisticated decoder and not reallocate
... and also to decide not to decompress
ChrisL: no guarantee its big enough
Vlad: incoming size is known,
efficiency is not known
... nominal size may be enough but is not a guarantee
raph: can be a guarantee
... it's modelling a simple but not brain-dead decoder: always the most compact point representation, greedy flags
... recommended alignment (which in OT is obsolete)
... ultimate goal is a real impl will get inside the nominal size without much work
it's therefore a nominal, not a minimal, size
Vlad: create an open ended
environment, impl makes its own choices on how efficient to
be
... security aspect of review means buffer overrun is an
issue
... concern was a false sense of security about overrun
raph: level of guarantee: nominal
size is deterministic. algo produces the nominal size. its not
the original size.
... and that size is easy to get smaller than
... impl can reallocate, or can ignore and do dynamically. so
no harsh requirement
... can alloc to nominal size and will use some size less than
that
... if so its guaranteed not to get larger
... if you do, and exceed it, the font can be rejected as
invalid
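(Editor's sketch of the allocation policy raph describes above. The names `nominal_size`, `reconstructed_chunks` and `InvalidFontError` are illustrative, not from the spec: the decoder allocates the nominal size once up front, and any reconstruction that would exceed it marks the font invalid.)

```python
# Hypothetical decoder step: allocate the nominal size once, never
# reallocate, and reject any font whose reconstruction overruns it.

class InvalidFontError(Exception):
    pass

def decode_with_nominal_size(nominal_size, reconstructed_chunks):
    buf = bytearray(nominal_size)       # one up-front allocation
    pos = 0
    for chunk in reconstructed_chunks:
        if pos + len(chunk) > nominal_size:
            # per the discussion above: exceeding the nominal size
            # means the font can be rejected as invalid
            raise InvalidFontError("reconstruction exceeds nominal size")
        buf[pos:pos + len(chunk)] = chunk
        pos += len(chunk)
    return bytes(buf[:pos])             # actual size may be smaller

# usage: fits exactly within the nominal size
out = decode_with_nominal_size(8, [b"glyf", b"loca"])
```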
ChrisL: the "invalid if over" rule makes me a lot more comfortable
behdad: can argue the problem outweighs the benefits
Vlad: if you can modify the text
to make it clear that it helps implementors
... that bit is informational
... then define the nominal size and give the algorithm, which has no smarts
so if you ever go over that the font is invalid
raph: ok that makes sense
... first impl just used the original size, as it is
known
... decoder used 4 byte alignment, and so we tended to run over and fail
... not so much on flags, but the alignment
... many fonts have tighter alignment
... chrome decoder switched to 2 byte alignment
... (long and short loca tables) a hyper optimizer may choose a 1 byte alignment on larger tables
... so around november 2013 there was a normalizer which computed the nominal size in the encoder
... except currently specced to 4 byte alignment, should be
2
... released encoder computes true nominal size, so guaranteed
not to reject
... deployed decoders use a 2 byte alignment
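(Editor's sketch of the table alignment raph is comparing. `pad_to` is an illustrative helper, not spec text: the same offset pads differently under 4 byte, 2 byte, and 1 byte alignment, which is why the choice changes the computed nominal size.)

```python
def pad_to(offset, align):
    """Round offset up to the next multiple of align (a power of two)."""
    return (offset + align - 1) & ~(align - 1)

# the same 5-byte table end lands differently per alignment choice:
assert pad_to(5, 4) == 8   # 4 byte alignment: 3 bytes of padding
assert pad_to(5, 2) == 6   # 2 byte alignment: 1 byte of padding
assert pad_to(5, 1) == 5   # 1 byte alignment: no padding
assert pad_to(8, 4) == 8   # already aligned: unchanged
```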
"original" length is actually nominal length
raph: original length should be discarded, and encode only nominal length
Vlad: only if it's a 100% guarantee
raph: it is
RSheeter: so we add MUST reject if larger
(yes)
ACTION raph to work with vlad to clarify all uses of original length
<trackbot> Created ACTION-153 - Work with vlad to clarify all uses of original length [on Raph Levien - due 2014-11-04].
raph: better to rename rather than clarify. so change it to decoded length
behdad: impl may have to go from 2 byte to 4 byte and then will need to override that value if it's 130k or larger
raph: in that case need to rewrite the loca table
behdad: should ignore what the head table says depending on what you are decoding to
... set to what they are using in the decoder
Vlad: what if you are right on the boundary?
behdad: must not reject the font in that case
raph: agree this is what we should do
or make the loca table agree with the nominal
Vlad: its decoder specific
raph: do want to check the chrome impl decodes when the loca table format is wrong at the 130k boundary
Vlad: proper for the spec to flag that
raph: easy for an impl to miss
RSheeter: seemed to work, but should we or not
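(Editor's sketch of the long/short loca boundary behind the "130k" figure above. A short loca table stores each glyf offset divided by two in a uint16, so the largest representable offset is 0xFFFF * 2 = 131070 bytes; `index_to_loc_format` is an illustrative helper, and a real encoder must also check that every short offset is even, not just the total length.)

```python
# Short loca stores offset/2 as uint16, so the largest representable
# glyf offset is 0xFFFF * 2 = 131070 bytes -- the boundary discussed.
SHORT_LOCA_MAX = 0xFFFF * 2  # 131070

def index_to_loc_format(glyf_length):
    """Value for head.indexToLocFormat: 0 = short loca, 1 = long loca."""
    return 0 if glyf_length <= SHORT_LOCA_MAX else 1

# right at the boundary short loca still works; just past it, it cannot
assert index_to_loc_format(131070) == 0
assert index_to_loc_format(131072) == 1
```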
(explains how woff1 worked)
<scribe> ACTION: chris to make a w3c woff2 test repo [recorded in http://www.w3.org/2014/10/28-webfonts-minutes.html#action01]
<trackbot> Created ACTION-154 - Make a w3c woff2 test repo [on Chris Lilley - due 2014-11-04].
raph: no language for ttc is in
the current spec. nice to have, covers all of OT standard
... no browsers do ttc
Vlad: concluded earlier it's nice to have, font collections not ttc
raph: actually OTC
Vlad: iso v3 now just calls them
collections
... in woff1 were not supported and no one cared
... expect collections become more widely used, so should not
be impossible
raph: expands woff2 over woff1
Vlad: any wire format changes?
raph: yes
RSheeter: same for things that
are not collections
... flavor in woff2 header, look for collection
(we check about flavor)
raph: ttc flavor is not normative
Vlad: flavor is sfnt version
behdad: first 4 bytes defines collection
raph: sfnt version is not a normative way to distinguish ttc
... can use for all fonts we know. ttc tag does not go into
woff2 header
RSheeter: flavor of the parts of the collection goes elsewhere
raph: extended table format with duplicated blocks
RSheeter: not clear if only one of loca and glyf is shared
behdad: what does otc use as a tag?
RSheeter: some smaller sample files would help
ChrisL: we are ahead of the browsers here. there is no requirement to support ttc in browsers
Vlad: ttcf is used for ot collections as well
behdad: is it right to do this now, it's less mature. argued for this in woff1. why not do it separately
kbx: how do you use it in css
(a fragment identifier)
(discussion on ttc use cases, mostly on ideographs)
Vlad: collections not much in use
but increasing, eg for traditional and simplified chinese
... could recharter to add as a new work item.
(rechartering risk analysis)
Vlad: we can reserve on flavor
kuettel: agree with ChrisL it's better to do now from a marketing/adoption perspective
resolved: we will support collections in WOFF 2.0
raph: will do code review on
RSheeter prototype
... about a page of spec added
... one para defining what to do on flavor
... there is a proposal in email, uses indices not byte offsets
so no overlap
behdad: can you load one face without loading the other faces?
<RSheeter> http://lists.w3.org/Archives/Public/public-webfonts-wg/2014Feb/0018.html
action-154?
<trackbot> action-154 -- Chris Lilley to Make a w3c woff2 test repo -- due 2014-11-04 -- OPEN
<trackbot> http://www.w3.org/Fonts/WG/track/actions/154
https://github.com/w3c/woff2-tests
close action-154
<trackbot> Closed action-154.
kuettel: brotli decompress is all in one, but after that you can select
raph: 90% is common anyway so no gain there
resolved: even for collections, one brotli stream for the whole collection
raph: should not require a decoder to reject the font if it's over nominal size
ChrisL: thought the main issue was a malicious font trying to overrun memory
raph: that is a concern
Vlad: we decided it's a guaranteed largest size. easier to see when we have a firm spec
behdad: may be able to see how far from ideal the nominal is
raph: reluctant, might want to align on weird boundaries
Vlad: lets start with the spec part and then decide on rejecting oversized fonts
raph: decoder MAY reject the font
<RSheeter> github is rsheeter
<RSheeter> (would also want khaledhosny, or permission to add addtl users [preferred])
Vlad: so in short term not going
to last call.
... for cts, draft test descriptions are there
(adjourned)