<manu> scribe: manu
Dan provides introductions, sponsors, space, etc.
burn: How many people are coming to dinner?
25-30 people raised their hands
burn: We do our communications on
IRC... irc.w3.org, put in a nickname, no spaces - please
join.
... We will need scribes for each session.
... We need scribes, please add your name to the agenda.
<brent> scribe: brent
burn: be nice
... introductions - quick around the room
burn: Is there an animal you find
intriguing, and why?
... I am Dan Burnett, I am at Consensys, nick is burn
<gannan> link to the slides if anyone missed it https://docs.google.com/presentation/d/1_AKEYKWqaiMIUb6tlo3yVONTl9Z-71H4XfdTtgUF88U/edit
burn: giraffes just look wrong to me. how can they survive?
stonematt: Matt Stone with
Brightlink in Boulder CO
... always fascinated by sea anemones. They look so soft, but
are brutal predators. It is awesome.
kaz: I am Kaz, team contact from W3C, this is my first time attending.
<kaz> kaz: W3C Team Contact for the VCWG
drummond: I am Drummond Reed of Evernym. I picked drummond as my handle before anyone else could pick it. I am in love with hummingbirds, they are the most magical creatures on earth
me: thought that was a liger
Yancy: I am Yancy Ribbens. Turtles fascinate me. Perhaps they gain wisdom being around so long
manu: I really love lemurs. When you look at monkeys you kind of know what they do. Lemurs are kind of hard to read.
Grant_Noble: don't have a nick. Australian marsupials are weird.
DavidC: From university of Kent. Weirdest animals are humans.
gannan: Ganesh Annan. Digital Bazaar. Kangaroos are the strangest animals. They are very aggressive.
dmitriz: With digital bazaar. Dogs fascinate me. All Canids can interbreed. The shape after a few generations converges on Jackals. Converging on standards is really good.
<manu> brent: Hi I'm Brent, from Evernym, I think that squids and octopi are really neat, they have very different brains. Story I like, aliens visit, use octopuses as biological computers because humans are so lame.
oliver: With uPort. I also picked the octopus, they have three hearts.
deiu: Andre. I am an invited expert. The mammal I really like is the orca. A mix between whales and dolphins. Very aggressive and hunt in packs. Cool to watch.
rhiaro: Amy Guy. Digital bazaar. I love parrots, esp when wet. They look like dinosaurs.
agropper: Adrian Gropper. Slugs, the material they excrete, wow I like slime.
JoeAndrieu: Legendary requirements. Water Bear, only animal we know that can survive in space. Maybe it is responsible for panspermia.
<JoeAndrieu> Tardigrade
ned: From intel. Cuttlefish eyes are shaped like "w" they stared me down while diving.
ken: With Sovrin. I choose a cat. They choose us as pets and allow us to interact with their world. They sometimes try to teach us.
Moses: Here as an observer. big believer in digital ID. Favorite animal is the watu.
jonnycrunch: TranSendX, invited expert. Sharks are misunderstood.
Dolphins and ants added to the list of strange animals by two attendees.
eric - favorite animal is the cockroach. They have some amazing properties.
snorre: likes dolphins
alex preuschett from evernym
Miguel: likes SSI, here to learn.
johannes: ouroboros is the favorite animal.
pmcb55: here as observer. favorite animal is the snow leopard.
Jack: observer, birds of paradise. Evolutionary biologists can't figure them out because they seem to choose mates based on beauty.
achughes: Andrew Hughes. Independent consultant. Identity management world. Most interesting animal is the tardigrade (water bear)
rgrant: Ryan Grant - Kopi Frog
kaz: I like frogs.
burn: welcome everyone. thank you.
<kaz> (group photo before or after the lunch today!)
<rgrant> brent: coqui ;)
burn: most of us know this. Our
goal is to make expressing and exchanging claims possible on
the web.
... the educational related use cases are our primary
focus.
... data model is in scope.
<kaz> fyi, VCWG Charter
burn: browser based API,
protocol, Identity on the web = out of scope
... <reading Slide about what verifiable credentials are and
are not.>
conversation about why the group is misnamed.
burn: If you need more info, there are links
stonematt: we have two pretty
full days, and a big group. will need to manage time.
... we are at the end of our charter, time is of the essence.
Our goal is to finish the work.
... We came to a feature freeze last year. we will be rigorous
about changes to functionality. the agenda reflects that.
... Spend some time on the steps we need to take for CR
... also we will talk about what comes next, but keep that
time-boxed.
... we also want a test vote on going to CR.
... as we move toward the next phase, implementation becomes
critical. we have time for discussion and implementor guidance
and feedback.
<scribe> ... <continues reading through agenda slide.>
UNKNOWN_SPEAKER: tomorrow, more
of the same <next slide>
... Please be here on time tomorrow.
... goal is to reduce issues down to zero.
... some open blocks so we can be flexible with the topics we
cover.
... DIDWG is coming (we hope). there are many who are
interested in it.
burn: placeholder in slides for potential topics
kaz: just a clarification question. charter extension today and next phase and future on Tuesday is rechartering. right?
burn: yes
... will try to briefly cover process.
... <Very busy slide> trying to start CR stage
me: CR = "Candidate Recommendation"
burn: our work is the CR to PR
burn: handful of docs in the
charter. Names now are different
... that doesn't matter
stonematt: One of several
documents is up for CR.
... we also need group notes: use cases and requirements;
privacy; implementation guidance.
... these have been begun, but we need to keep working on
them.
... data model status: feature complete. some editorial issues,
and one or two sticking points we need to resolve.
<kaz> deliverable documents defined in the VCWG Charter
burn: we have a privacy
considerations section in the data model, that may be
enough.
... diagram with dates.
... quit adding features in November; that still stands.
... as long as we vote in mid march, publish late march.
... earliest PR is May.
... earliest Recommendation is June.
... we are in process of an administrative extension. that does
not give us more time, it only gives us enough time.
... additional CR is two more months. That takes us to the very
end of our time.
... this is not more time.
stonematt: set a big goal at TPAC
to resolve all issues and hit CR in November, didn't
happen.
... open issue count has stayed steady
... March in Barcelona. Big audacious goal is now to actually
close the issues and wrap this up.
burn: we might get mean.
... it is easy to think "this thing I want must go in" but that
might kill the whole thing.
... what needs to actually happen?
... <reading slide "What are we missing">
... still waiting on review from some groups.
... not sure when the review comments come. This time we also
want review from TAG.
... we will need to turn those issues around in a week when
they come.
stonematt: There are a couple of
discussions that seem to not be ending.
... one is ToU.
... other is syntax and proof trade-off doc.
... chairs may need to decide.
burn: goal is maximum adoption and getting through w3c process.
stonematt: no more to add.
burn: now is discussion time
stonematt: any questions?
oliver: one question - Do we want to tackle the outstanding issues after lunch?
burn: any issues we have
outstanding today are the implementation issues. we will go
through those today.
... talked about administrative extension. this will happen if
w3c believes we are on track.
... to do this we need to publish a CR
... doesn't require a vote to extend administratively.
... recharter would require a vote from all members.
... There are those who would argue we don't know what we are
doing. That is not the case.
... the ZKP and JWT elements were added late, but that was done
because they are so vital for adoption.
... assuming we stay on track, what then?
stonematt: we have until noon.
Maybe we can keep this discussion time-boxed to 15 minutes.
then start the JWT discussion.
... quick goals or ideas for future work (WG level work)
jonnycrunch: what is demand from implementers?
burn: that is important for future groups. if there are implementers who will join a future group especially.
jonnycrunch: is there a flavor of the demand from implementers for something beyond what this group is going?
ken: what is the distinction between what we're talking about now and tomorrow. And the future work stuff, is that this group or a future group?
burn: maybe we should have done
future work first.
... recharter is really a new charter, may be widely
divergent
... a charter is to constrain scope. so companies can handle
patent issues according to that scope.
ken: so first item on discussion tomorrow is protocol. is that on the table for recharter?
burn: if there is wide
support
... the group wants it.
... there is no requirement that a recharter happen
immediately.
... we think a DIDWG is coming. There would be a lot of overlap
between that group and a rechartered VCWG.
<Zakim> manu, you wanted to note that we have buckets for stuff, but have not defined what's in those buckets.
burn: recharter doesn't need to happen right away. so we could run those in parallel or sequentially.
manu: to give the group an idea
of what these things could be. we identified a bunch of buckets
in the spec. e.g. terms of use. we haven't filled that bucket.
a rechartered group might define that.
... same with evidence. we could recharter to add specifics to
each of the elements in the data model.
... most likely our best bet is to move forward with DIDWG. but
in the meantime organizations will be self-defining the
buckets.
... but maybe that's what we want to see before starting a
working group to define them.
... maybe a couple of years for people to experiment would be
good.
... when JSON-LD became a spec, we waited 4 years for the
screams of demand from implementers.
... I dont think we're ready to define those buckets. too
theoretical an exercise.
burn: interrupt - charters are for 2 years (typically)
kaz: several possible options. I
would like to suggest: another group has a similar
situation
... their charter actually ended last year, but they extended
for 6 months. parallel-wise they're finalizing the CR and
working on future items. I recommend this for this group.
<Zakim> deiu, you wanted to say that future work can also be incubated within a CG, to offset waiting time for a new recharter
deiu: we can still work on future items while a recharter is underway
DavidC: for some things like evidence there isn't much experience, but some buckets are almost full.
burn: opinions differ on that
stonematt: my intuition is to put this work into hibernation while we work in DIDs. VCs help showcase DIDs while we work on them
<Zakim> manu, you wanted to focus on community group
burn: yes, work on protocol can
start in community groups before we go to a charter.
... we do work first, so we have a WG to transition to when we
are ready
manu: I agree. I think we can do everything in the CCG.
<JoeAndrieu> +1 from the CCG chair for advancing next gen VC work in the CCG
manu: we are building products on this stuff. the quickest way to discuss interoperability is the CCG until we are ready for a WG.
<JoeAndrieu> (co-chair)
<jonnycrunch> +1 to Manu to push in the CCG
manu: let's get a bunch of people working toward interop, then form a group to standardize
drummond: I agree. Is there anyone sitting here who actually has a concrete proposal for a recharter?
ned: I'm interested in constrained environments. that points to CBOR.
<Zakim> deiu, you wanted to say that it's better to wait and collect compelling arguments for a new WG instead of trying to come up with work items right now
<stonematt> +1 on having a clear charter demand
deiu: please do not recharter while there is no clear definition of what the work is.
<drummond> +1
stonematt: 30 min topic -
oliver: potential trade-off
section still has issues. I don't think we have general
consensus on the contents of that table.
... wanted to have the conversation.
burn: I'm gonna throw out a crazy idea.
<manu> The section we're talking about is here: https://pr-preview.s3.amazonaws.com/w3c/vc-data-model/pull/384.html#syntax-and-proof-format-trade-offs
burn: i believe the original
attempt for this was to guide implementors. that doesn't belong
in the spec.
... this is a matter of opinion, so in the implementation guide
we could have proponents for each of the formats give pros,
then write rebuttals.
... that could go in the implementation guidance
<drummond> I would actually suggest to include JUST the advantages of each.
ken: the current table doesn't mention zkps, so it doesn't seem appropriate to cram it into the spec.
<Zakim> manu, you wanted to note where the table came from
ken: there should be some discussion of the trade-offs, but didn't feel like it fits in the spec
<kaz> rendered version of PR 384
manu: I put a link for the PR.
Background - a number of reviewers wanted guidance. The group
decided it should go in an appendix. That's where we are
today.
... TPAC started on a use case basis, not sure how to write
that.
... instead, there are 3 primary combinations that implementors
are implementing.
... uPort is using JSON in JWTs.
<jonnycrunch> we are using IPLD
manu: my understanding is that
the zkp stuff is using ld-proofs, so there are three options
for syntactic representation.
... there are now multiple syntaxes.
<jonnycrunch> Which is based on dag-cbor
dmitriz: to add to the confusion: the json-ld signature contains a JWT as a JWS.
manu: we hoped for convergence,
but didn't get it.
... the greater problem is that implementation complexity has
grown.
... just want to outline that it isn't easy to read the spec
and make a clear decision.
... we can put a link to best practices or put it in the
appendix. At least the appendix is the same doc.
... if we're going to get this done before CR, somebody's going
to need to write it.
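As a hedged illustration of the combinations manu describes (not from the minutes: the example DIDs, dates, and the signature/claim values below are invented; the field names follow the draft data model's examples):

```python
# Illustrative only: the DIDs and proof values are made up; field names
# follow the draft VC data model and its JWT-mapping examples.

# JSON-LD credential with an embedded LD-proof.
vc_ld = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "issuanceDate": "2019-01-01T00:00:00Z",
    "credentialSubject": {"id": "did:example:subject"},
    "proof": {"type": "Ed25519Signature2018", "jws": "..."},
}

# JWT-based shapes: the credential body rides in a "vc" claim, duplicated
# fields move to registered JWT claims, and the outer JWS is the proof.
jwt_payload = {
    "iss": vc_ld["issuer"],
    "sub": vc_ld["credentialSubject"]["id"],
    "vc": {k: v for k, v in vc_ld.items() if k != "proof"},
}
```

The sketch shows why the spec's appendix matters: the same logical credential looks quite different on the wire depending on the combination chosen.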
drummond: I agree that the fact
that there are three options makes it important for people to
understand.
... it is going to be a challenge for eventual convergence, but
we should be direct.
... each of the advocates should write a rationale for their
method.
... a neutral party can write an intro.
manu: how many have read the section? <raised hands> not everybody
+1 to Drummond's suggestions
manu: oliver seems to be saying that this table shouldn't exist
stonematt: is there consensus among implementors that this is a binary interpretation?
oliver: I like drummond's idea of
proposals written by the advocates.
... depending on who writes the section, there would be many
items included or not.
... the table seems to be biased toward json-ld
<burn> hey, that was my idea! :)
oliver: I will write the JWT section. I don't think we'll get consensus on the PR
stonematt: often times there are gray areas
rgrant: what i'm hearing is that in this spec, implementors can create compliant credentials that other compliant implementors can't read?
manu: that is true. that is how standards work. There is a market free-for-all.
drummond: also evolutionary biology
rgrant: but not random winnowing.
burn: this is how it works when people don't want to be told how to live their lives.
rgrant: are we aware of some implementers who are not going to implement some options?
group: yes
<Zakim> dmitriz, you wanted to point out, it's not really a decision for implementers. it's just "oh now I have to implement 2/3 ways"
dmitriz: I want to say - these are choices implementers must make. We phrase it as options, but really it is all of them or we lose interop
DavidC: this is partly tied to protocol. how the parties communicate is not defined.
<Zakim> deiu, you wanted to mention that while choice is a nice feature to have, it makes interop a nightmare
deiu: it is great to have choice, but that is an implementation nightmare. We want this technology to be used by the end users. But if the required formats are huge, that will limit implementation options. they need guidance.
manu: concrete proposal -
everybody write a section they are proponents of.
... looking at the existing table, the gray areas are
expressed. I thought uPort was doing the middle column. Is
nobody?
oliver: I don't know all of the
proof formats and implementations they use. left hand column is
uPort and probably microsoft.
... middle column is potentially used
manu: right column is everyone else
dmitriz: what about BTCR?
rgrant: we might use both
manu: you can't, I don't
think
... also the zkp thing - this is about syntax and proof
formats.
<jonnycrunch> Can I volunteer to write a section on IPLD?
drummond: i think manu captured it. There is clearly a proponent for JSON + JWTs, is there a proponent for the second column?
manu: the comparison is there
drummond: who would write the middle column?
<burn> jonnycrunch, I don't believe ipld is a vc syntax we have defined in the spec
oliver: I know some implementers.
I volunteer to write that section as well.
... JSON-LD + JWT
<scribe> ACTION: oliver volunteers to write JSON+JWT and JSON-LD+JWT sections
<scribe> ACTION: manu volunteers to write JSON-LD proofs section
deiu: since we don't have a clear medium where discussion can be fostered for these particular items, where should they go?
burn: time for a break. be gentle with what you take. There's not a lot of food.
<kaz> [break till 12:30]
<ken> scribenick: ken
stonematt: Let's get started
stonematt: Implementor feedback
from non-members is welcome.
... Observer questions about how things are implemented are
safe. Suggestions on how to fix problems are out of bounds.
dmitriz: I'm excited to hear from
other implementors.
... I would like to have the discussion about which format
implementors should choose to continue.
stonematt: How can we have a
fruitful discussion on this topic?
... A free-for-all might get us started.
... What are other aspects we should cover?
dmitriz: The current test-suite
has two forms for basic and advanced. No current implementation
of the signature verifications.
... That is outside the charter.
... I recommend that developers that plan to support the data
model, that they review the test suite.
... They should notice that there is a one-to-one
correspondence with the spec.
stonematt: There is an existing
test suite that focuses on the conformance to the data
model.
... Is the document conformant with the data model?
dmitriz: The test suite has the awkward job of testing conformance to the data model without testing protocol, exchange, or verification of signatures.
<jonnycrunch> Link for vc.js test suite?
<Zakim> burn, you wanted to talk to multiple syntaxes
dmitriz: Just test the data model and validate the data model but not verification.
<dmitriz> https://github.com/w3c/vc-test-suite
<manu> Link to test suite: https://github.com/w3c/vc-test-suite
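To make dmitriz's point concrete, a purely structural conformance check might look like the hypothetical sketch below. This is not the actual w3c/vc-test-suite code; the function name and the exact rule set are assumptions for illustration, and nothing cryptographic is verified:

```python
# Hypothetical sketch: data-model conformance checked structurally, with no
# signature verification -- mirroring what the current test suite covers.

REQUIRED_CONTEXT = "https://www.w3.org/2018/credentials/v1"

def check_basic_conformance(doc: dict) -> list:
    """Return a list of structural problems; an empty list means it passes."""
    problems = []
    ctx = doc.get("@context", [])
    if isinstance(ctx, str):
        ctx = [ctx]
    if not ctx or ctx[0] != REQUIRED_CONTEXT:
        problems.append("@context missing or does not start with the v1 context")
    if "VerifiableCredential" not in doc.get("type", []):
        problems.append("type must include VerifiableCredential")
    for field in ("issuer", "issuanceDate", "credentialSubject"):
        if field not in doc:
            problems.append(f"missing required property: {field}")
    # Note: any "proof" present is NOT cryptographically verified here.
    return problems
```

The one-to-one correspondence dmitriz mentions would mean each such check traces back to a normative statement in the spec.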
burn: There is data model, then
syntax. Any number of syntaxes could exist. In the beginning
there were three.
... XML was there but no one would write it.
... There should not be the implication that only
JSON-LD is supported.
... We wanted to show something so that it could be done.
manu: I don't think it is in the
spec.
... I agree. If it doesn't say that we need to change it.
... Is there one thing we could discuss, it might be to only
say "@context" in a VC.
... The complexity could be simplified by 33%.
oliver: There are some developers strongly opposed.
drummond: In terms of recognizing
this as the prime interop issue. We need to anticipate what
will happen.
... Is it ok to point from the appendix to an on-going
community effort for feedback in a living document.
... I think it will help convergence.
stonematt: Maybe the CCG can consider this.
drummond: if the CCG would agree to that it could smooth things over.
<Zakim> deiu, you wanted to mention that discovery of implementer resources can be improved
joe: Give us a work item.
deiu: This type of resource would
help implementors. Links would help.
... The spec and home page don't help.
burn: We appreciate your willingness to help.
<Zakim> dmitriz, you wanted to mention - the test suite does not test the JWT serialization
deiu: I can help with that.
dmitriz: With re: to the test
suite, only the JSON-LD form is supported. The other two
columns will need other test suites.
... One topic of contention between syntaxes is library
support. JOSE has broader library support.
... The crypto libraries are weakly supported.
yancy: As implementors should we
be concerned about DID resolvers?
... If we don't know if a particular format will support a
particular type of DID, should we not support it?
stonematt: This is a document format.
<dmitriz> just to clarify the transcription, with regards to syntaxes: JOSE has broader library support in theory. But the actual keys and crypto methods many implementers are using, /are not/ supported by those JOSE libraries
yancy: Is there a timeline after which we cannot make changes?
burn: We need at least one producer and one consumer for each feature.
yancy: If we are the only implementor we need to make you as chairs aware.
stonematt: If you are planning to
be an implementor, we would like to know who, what features,
etc.
... As an issuer, consumer, or both?
... We also need an implementation report.
drummond: Why these terms.
burn: They are W3C terms. As far as the spec goes, can someone write one and someone read one.
<stonematt> Implementers commitments: https://docs.google.com/spreadsheets/d/e/2PACX-1vRTbKsROuQzBWyuYvMBwSrjuv5X3jO-ObXpLEPpWxuPqXi6sigqfIfQwpJNHwK70cBHwV1torftKW0u/pubhtml?gid=285762982&single=true
rgrant: The only syntaxes are json-ld, json w/jwt, and json-ld with jwt.
manu: json and json-ld; json-ld and jwt
<Zakim> burn, you wanted to comment on test suite architecture
burn: Test suite architecture:
this is a doc model and syntax. No computation is
required.
... Manu encouraged an actual test suite because it shows
maturity.
<Zakim> manu, you wanted to note that not adopting @context is a philosophical stance... and to note not a separate test suite, please.
burn: If we continue down this road, we will have interop questions.
manu: json+jwt; There is an
organization that refuses to put a context in the VC field.
They are not participating actively.
... I'm struggling to understand why the WG is bending over
backward to support a format that could be easily fixed.
dmitriz: I meant to say that we need data in the other formats.
manu: If organizations want to show conformance using these optional formats they are there.
burn: I don't want to go down this road.
manu: Is there any implementor who won't put a context in a VC?
jonnycrunch: IPLD is a format we are using that is slightly different, mostly for security.
manu: Anyone? Anyone?
<dmitriz> question to jonnycrunch: are your concerns about @context / content-addressable issue addressed by the hashlink spec?
oliver: Refusing is a strong word.
manu: If you don't include it the test will fail.
oliver: What is needed to support json in the test suite?
manu: who adds the additional
context?
... The spec says you have to include a context.
... Are you going to follow the spec?
oliver: If you are going to
support json, it's about who has to accommodate.
... Then you are going to insert something in the VC.
dmitriz: Test for type, there are other things?
ken: Who can contribute tests?
burn: Anyone especially those who didn't help write the spec.
<Zakim> burn, you wanted to ask Manu about "treating plain JSON as JSON-LD with a default @context" and why this doesn't address the problem
burn: Why can't we treat json with a context?
manu: Because you can just pass the test, but it won't actually work.
burn: Or interpret it as the default context.
manu: You will pass the bare
minimum, but you will fail some subsequent tests.
... There are deep reasons for this but it's not a good use of our
time.
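The "treat plain JSON as JSON-LD with a default @context" idea being debated could be sketched as a normalization step. This is one hypothetical interpretation, not an agreed mechanism; as manu notes, documents normalized this way may still fail later tests:

```python
# Hedged sketch: inject a default @context into a plain-JSON credential
# before running data-model checks. The mechanism is hypothetical.

DEFAULT_CONTEXT = ["https://www.w3.org/2018/credentials/v1"]

def normalize(doc: dict) -> dict:
    """If a plain-JSON credential lacks @context, assume the default one."""
    if "@context" not in doc:
        return {"@context": DEFAULT_CONTEXT, **doc}
    return doc
```

As rgrant observes below, how such an "implicit @context" would actually be implemented and interoperate is exactly what remains unclear.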
davidc: It doesn't say that
context must be present.
... You have to say the context MUST be present.
burn: There are people who read the lack of MUST as significant.
rgrant: We are collecting VCs from
as many sources as possible. It is not just the test
suite.
... We are looking for VCs that are JWT, json-base, etc.
... This is not the W3C test suite. It is external.
deiu: The way it is presented in all the examples, @context is used
everywhere.
oliver: The examples are all informative and not normative
deiu: We need to be more explicit.
<rgrant> rgrant: and the "implicit @context" idea in the air is unclear on how to implement
oliver: should the test suite fail?
dmitriz: Can we hear from the chairs re: test suite vs. none?
burn: This is a document format.
It is very difficult to separate the rules of production and
consumption from the rules of the document.
... Various groups have encountered this. In the end they have
assertions about whether the vendor supports a given
feature.
... This allows the test to occur without the fighting over
processing.
... Not all implementations can or will process all fields in
every VC. I could ignore the @context.
... It doesn't matter to my implementation.
... That said, it is always better if you can show real
implementations than just looking at the format.
... If we can't resolve this, yank it and publish a table.
brentz: There are other fields in the data model, that have optional characteristics. Some we are including; some not.
kaz: The basic process
requirement for CR transition is just write the spec and
assertions, like the table.
... What we are doing is an extended effort.
<rgrant> stonematt: thx
kaz: The test suite helps
implementors. On the other hand, other groups have just
generated a big table of features for manual
verification.
... We need to provide a table of experience in the end.
... The basic expectation is that two organizations interact on
the data. Could be manual.
manu: We could manually or automatically prove interop. Let's please not kick the can down the road.
<Zakim> burn, you wanted to respond to kaz
rgrant: I don't want a table to checklist.
burn: Absolutely love to see
implementations. Most test suites don't demonstrate full
interop.
... Get the spec out is the primary goal before the charter
expires.
... The test suite should not give the illusion of full
interop.
... I agree with Kaz that we don't need a test suite to enter
CR.
... I am concerned that we can make Pollyanna statements about
interop.
... Even with a few missing features.
... What does it mean to have an implementation?
kaz: One of the most important things is does the test suite correspond to the assertions in the spec.
<inserted> burn: the current test suite corresponds to the spec nicely
jonnycrunch: In medicine there were connectathons. A test suite could be oriented that way.
manu: That is what the test suite does. It takes a json struct as input and outputs a VC.
jonnycrunch: Between two testers,
an output test result that describes the pairwise result could
be published.
... The badges of pairwise result could be a dashboard.
rgrant: Implementors could publish their results.
jonnycrunch: It becomes more iterative and pairwise.
pmcb55: I strongly support a multiple language test suite.
<Zakim> deiu, you wanted to mention that we should clearly mention the test suit only covers the model, and not a complete workflow informed by the use cases document
Jesus: We are getting too complex to create an interoperability suite.
deiu: As part of the document that I am writing, I could include some explanation.
oliver: with regard to JWT and ZKPs the test suite is incomplete.
dmitriz: yes.
oliver: I will add tests for JWTs and I expect Sovrin/Evernym to add ZKPs.
<Zakim> burn, you wanted to mention industry fora and their implementation conformance certification programs and to remind people that we are required to test the implementability of the
burn: I want to remind people
that we are required to test the implementability of the spec;
NOT the validity of the implementation
... Not interoperability either.
... An industry forum such as rgrant mentioned could be created
later.
<Zakim> manu, you wanted to note how test suites usually progress
burn: There are ways to test implementation conformance that are beyond this group.
manu: I would like to get feedback
from each of the implementors.
... I also discovered today that @context is being considered
by some implementors as not mandatory.
... There is a new issue. It is being tracked.
stonematt: Let's review the status of implementors.
dmitriz: Basic, advanced,
???
... There is a PR pending on advanced, NO ZKP or JWT
oliver: on track. We used the
table on google drive to fill out our plans.
... I have some questions on credential status.
... Is there anyone who is going to present a credential that
is revoked?
... I see that there must be a revocation mechanism in
place.
... We might consume it.
jonnycrunch: Medical credentials are often revoked.
oliver: We will consume but likely not produce.
<rgrant> What does column H on the implementor's table mean?
drummond: In our experience, some credentials don't need revocation.
oliver: the table is up to date.
manu: The table means that you are going to generate a credential that leaves it in. Or I can consume it.
oliver: If you just process it, what's the point if you are not actually going to use it.
manu: A new column @context has been added.
burn: The group can decide if the list is public.
joe: I was confused re: credential status. RE: revocation?
brentz: It doesn't mean it is revoked. It means here is how to check status.
jonnycrunch: Agreed.
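A sketch of the distinction brentz draws: credentialStatus is a pointer to how to check status, not a revoked flag. The id/type values below are hypothetical, not from the spec's registry:

```python
# Hypothetical example: credentialStatus tells a verifier WHERE and HOW to
# check status; its presence alone does not mean the credential is revoked.

credential_status = {
    "id": "https://example.com/status/24",   # where to check (made up)
    "type": "CredentialStatusList2017",      # how to interpret the check (made up)
}

def has_status_mechanism(vc: dict) -> bool:
    """A verifier can only check revocation if the issuer declared a mechanism."""
    status = vc.get("credentialStatus")
    return bool(status and "id" in status and "type" in status)
```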
burn: if the table is up to date, we're done.
<brent> drummond: evernym and sovrin are different
<brent> ken: the table is up to date
<brent> ... two columns we don't plan on passing that through or using it
<brent> ... refreshService and termsOfUse
<brent> ... tou in its current form is dangerous and unenforceable. in the future perhaps, but not now.
<brent> ... other fields like evidence, we will pass them through, but don't have a current use case.
<brent> ... no philosophical disagreement.
<brent> ... don't have a mechanism to verify JWT and JSON-LD proofs, but no opposition to others using them.
scribenick: ken
<brent> drummond: speaking from evernym and sovrin. the larger question of test suites. in the context of testing a data model this is great.
<brent> ... in the market, interoperability will matter within governance frameworks.
drummond: Testing within a
governance community is what matters in the market. Shared
incentive for interoperability will drive this.
... We have communities within the Sovrin/Evernym
community.
ken: We have multiple implementations. We are seeking more.
Drummond: as Evernym we are only doing one.
burn: this spreadsheet lists
"features at risk". The table is not comprehensive.
... 6 or 7 could remove the item from the table. The table is
to help us identify features at risk.
stonematt: We can look for feedback on other implementors from Sovrin and Evernym.
<inserted> kaz: for CR transition, W3C WGs are encouraged to identify some of the features within specs as "features at risk" (which might be difficult to get implemented), and I thought this spreadsheet lists those "features at risk" within the Verifiable Credentials Data Model spec.
burn: be back at 3:00 to start
working.
... presentation during lunch on use cases.
... We encourage to be back at 2:30.
<kaz> [lunch till 3pm]
<rhiaro> scribenick: rhiaro
brent: we want to move quickly.
We're not here to hash out the issues, the goal is to decide
what next steps are, who is responsible for it
... if you can help, accept responsibility
... And learn to let go. If there is an issue that we could
close, let's close it
... if there's some tender feelings then maybe we could defer
somethings we don't want to quite act on but can't let go
of
... Maybe an option.
... The issues that ended up in this deck either somebody told
me to put it there or it is an issue in github that does not
say CR blocker, does not say defer, and didn't seem to be
realted exactly to the spec
... if there's something missing, feel free to add a slide,
insert it at the end
... Slide with links to all the issues in gh
... The rough break up is implementation guidance, registries,
and then all the other stuff
... if looking over the list you see something that's an issue
and we're not talking about it, make a slide, we'll talk about
it
... Under implementation guidance, there are 4 things, with 6
associated issues (some related)
... First one is default namespace URL #206
... Summary is maybe we should change the context URL to
something else. Should it be w3id or w3.org. There is a task
list from tpac and the two remaining things on the list are to
request the proposed URLs from the web team. Has that
happened?
manu: yes
brent: have we documented the integrity proofs as a note?
manu: no
brent: so should we close
it?
... As we are moving quickly, if someone wants to look at an
issue, go look at it, come back to this deck and then make a
comment on the slide
... or go to the issue and make a comment on the issue
manu: should we give everyone an update on all the things that happened or is there too much?
brent: there's not a ton of stuff, but..
manu: I'd like to skip that but I'm afraid that people care and don't know what happened and will come back later and reopen
brent: before it's closed we should make a comment that we determined at the f2f the tasks were done and it was closed
jonnycrunch: before we consider which URL, should it even be a URL as there are other representations of resources like IPLD
brent: that was a different
issue
... I believe it was closed
... There was a question, should we explore using webauthn with
VCs. Implementation guidance tag was added
... Does this belong in implementation guidance? Should we
publish a guide for people who want to implement VCs in
webauthn?
... That seems to be what is implied with the label
manu: no
brent: I have my own opinion but I'm presenting
manu: There were successful experiments, we notified the webauthn groups what they needed to change to make it a core feature. I think it's too early to write it up in the implementation guidance. The CCG can update the docs when it's more secure
brent: I think we state that and
close the issue?
... That it's done by the CCG and this issue doesn't have to
track it
burn: exploratory items can be done by the CCG, but not just anything to do with implementation guidance
manu: there's a question around who maintains the implementation guidance after this group closes. The guess is it's going to be the CCG
burn: That's a nice suggestion
brent: the decision about implementation guidance
maintenance is not this issue
... So we close this issue.
... There were a couple regarding attachments in credentials.
One regarding outside data, the other regarding credentials
inside of a credential. Labelled implementation guidance
... does it belong there? Is it possible to do this according
to the data model?
manu: yes and yes
... if specs like hashlink or named information or..
brent: is the idea that the whole implementation guide is written by the CCG?
ken: we're going to have at least a first version
brent: this issue should be addressed in it?
manu: yes
ken: does the implementation guide correspond to this version of the data model?
burn: yes
brent: the suggestion here is
implementers might want a json schema they can use to validate
the credentials against, an official schema, and should it be
referenced from the spec
... is this an implementation guidance thing?
... it feels like this is being accomplished by the test
suite
dmitriz: this is not being accomplished by the test suite. This is whether we recommend another property that points to a schema
brent: addressed by credential schema?
manu: yes
... There were two issues raised that sounded like the same
thing but were different
... one was what can I use to validate the default VC
thing
... and what can I have that does specific types of
credentials, eg. drivers license, what schema goes along with
that. that's what credentialSchema is for
... The other is whether there's a generic json schema that
checks the core fields in the spec
brent: how does it differ from using the test suite?
dmitriz: test suite is for libraries, schema is for individual credentials
brent: 128 is the need for a schema
to test against. 129 is addressed by credentialSchema
... This has been addressed and should be closed. 128, do we
need this?
dmitriz: would be helpful
brent: I agree, do we need to do
it now?
... if it is helpful, who is going to do it?
... Volunteers requested. if it's not important enough for
anyone to do it right now but we don't want to let it go, we
should label it defer
manu: +1
brent: unless someone has a valid
reason to not defer it or claim ownership
... or is the valid schema against which to validate the
verifiable credential something that would be included in
implementation guidance
burn: if we're gonna provide it
it doesn't belong in implementation guidance
... either it has normative weight or it doesn't
dmitriz: schemas are a really
easy version of the test suite. they are a machine readable way
to test individual credentials for conformance
... I guess I'm volunteering.
jonnycrunch: json schema implements MUSTs
??: something about SHACL
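The distinction drawn above (a generic check of the core data-model fields, which issue 128 asks for, versus the credentialSchema property pointing at a credential-type-specific schema) can be sketched in a few lines. This is illustrative only: no official schema existed at the time of this discussion, and the field list here is an assumption.

```python
def missing_core_fields(credential):
    """Return the core VC data-model fields absent from a credential.

    Illustrative sketch: issue 128 asks for an *official* schema;
    this hand-picked field list is an assumption, not that schema.
    """
    required = ["@context", "type", "credentialSubject", "issuer", "proof"]
    return [field for field in required if field not in credential]

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "https://example.edu/issuers/14",
    "credentialSubject": {"id": "did:example:ebfeb1f712ebc6f1c276e12ec21"},
    "proof": {"type": "RsaSignature2018"},
}

print(missing_core_fields(credential))  # []
print(missing_core_fields({"type": ["VerifiableCredential"]}))
```

A machine-readable schema (JSON Schema, SHACL, or similar) would replace the hand-written check, which is exactly what dmitriz volunteered to produce.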
brent: I took notes about registries, but I don't know what it means
burn: we have a timeslot for registry discussion
brent: context URL doesn't point to anything
manu: cannot close, needs to be done
brent: this is new, it's
ongoing
... manu owns it
manu: *makes a noise*
... I can't make that happen
... it's in kaz's hands
kaz: I checked with webmaster about the namespace files, it's okay to have actual content on the github site, but concerned about the sustainability for the github site, so he suggests we set some mechanism to copy it from github to the w3c side with a webhook. I think that's fine.
brent: we would regularly move the github content?
kaz: automatically yeah
brent: but the context that gets pointed to by the URL shouldn't that be more immutable..?
manu: I think that's a security issue and we can't auto copy it. We can until we hit PR, then stop the script
ken: it's supposed to be immutable. So auto copying it does violate that
manu: so the latest copy is up there through PR that's fine but once we hit PR close it
brent: should we track the need to turn off autocopy at PR?
kaz: we don't have to use auto
copy every day
... we can negotiate with the webmaster how to deal with
that
brent: should we track the need to make sure the context that is referred to by this url is somewhat immutable here, or should that be a separate issue?
kaz: maybe another issue
burn: it would be clearer to open a separate issue about that that remains through PR. So we can pull it up and say we know about it with the director
ken: what happens when the immutable data changes?
manu: in the spec we say implementors are urged to hardcode it and use local copies and not go out to the network
ken: when we found a small problem..
manu: we will never change
it
... even if it's wrong. In general we should never touch the
file
... we will release a v 1.1 if we have to
... we can release a.. there is a plan. if significant changes
need to be made we have to release a new version of the
context
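The hardcoding advice manu refers to can be sketched as a local context cache that never touches the network. The context body shown is a placeholder, not the real published file.

```python
# Local, hardcoded copies of known context documents, following the
# advice that implementations use local copies and never fetch
# contexts from the network. The body here is a placeholder; a real
# implementation would embed the published file verbatim and never
# change it for a given version.
LOCAL_CONTEXTS = {
    "https://www.w3.org/2018/credentials/v1": {
        "@context": {"VerifiableCredential": "placeholder"}
    },
}

def resolve_context(url):
    """Resolve a context URL from the local cache only."""
    if url not in LOCAL_CONTEXTS:
        raise ValueError("unknown context URL: " + url)
    return LOCAL_CONTEXTS[url]
```

Under this scheme a changed file on the server is irrelevant to verification, which is why any significant change requires publishing a new versioned context URL rather than editing the old one.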
brent: verifiable credentials
should have a unique mime type?
... The purpose is not to discuss *whether*, but what are the
next steps to take in making that decision?
manu: this would be a normative
change to the spec so if we go into CR without it we'll have to
go through CR again if we decide we want it
... daniel opened the issue. It raises a really important
question
burn: this is an open topic
deiu: is there suggestion that this is a real issue?
manu: an implementor brought it
up as a real issue
... they want to at the minimum ask for a specific mime
type
brent: there's discussion in the
issue that clarifies what this is asking for
... so we keep talking about it?
burn: yes
brent: because changing this would affect CR is it a CR blocker?
manu: yes
brent: should we put an end of life date on the spec?
manu: this was christopher's issue, he said he wished they did it for TLS. I don't know how you could enforce it
drummond: what's an end of life date on the spec?
manu: it means we say the spec will no longer be valid after a certain date
drummond: an example of a spec that has that?
rgrant: if I see a spec that's end of life, the result is that I go to the next google entry and look for another one. I think it's slightly helpful
deiu: seeing as we don't have a recharter date in mind, we shouldn't put an end of life date either
<dmitriz> +1
manu: also w3c has a process where they rip out old specs. It took 15 years for some of them, but..
brent: defer means not this version possibly the next. Other options are act or close. Act means somebody volunteers to take action to move forward on this issue
JoeA: I volunteer to have Chris talk on a weekly call about why he wants this
brent: images in documents produced by this wg should have descriptions as recommended by the WAI
burn: we will get those comments already from the accessibility group after asking them to review it
manu: tzviya volunteered to do it
brent: she's not volunteering to do it in the issue
stonematt: we were working together on it, I'll take that
brent: if a credential may be delegated, what prevents a verifier from taking a presentation and pretending they are a holder? It was supposed to be discussed by JoeA and DavidC after tpac
DavidC: It's really important but out of scope
manu: there is a protocol answer to it
burn: should go into implementation guidance?
stonematt: is it a defer issue? it's out of scope for now but we might have to solve it eventually
manu: it's probably implementation guidance
stonematt: they're gonna get stuff wrong
dmitriz: this isn't a protocol
level issue. It could be a data model issue; for example, the way
the question is solved in oauth2 is it requires an audience
property
... that way each presentation, each credential, is only good
for one verifier. that's what prevents it. that's a data model
issue. It requires that property to filter it.
... Our solution could be something else, but it could be a
data model solution
DavidC: first of all we're
different to oidc. we're putting the user in charge. In oidc
the issuer is in charge. We have another solution which stops
it being passed altogether. Subject-holder not transferrable is
a solution to it being passed at all. It's an even more
constrained solution
... The solution for it not being allowed to be passed on at
all exists. Not allowed to be passed on *again*
... You can have a relationship credential.. there are
solutions already
manu: the way that it's solved right now with JWT is through the audience field, and same with LD-Proofs. outside of the data model
brent: and with zkps all the verifier can do is know that they got it. it has been solved on the protocol level by the people who are using it
manu: all of the solutions are
outside of the data model
... We could mention in the implementation guidance that that
is what people have chosen to do and that would be better than
not saying anything
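The audience-field pattern manu describes (the `aud` claim in JWT, a comparable field in LD-Proofs) can be sketched as a verifier-side check. The field name and flat shape here follow JWT's `aud` and are illustrative; as noted above, the real check lives outside the data model.

```python
def verifier_accepts(presentation, verifier_id):
    """Reject a presentation not addressed to this verifier.

    Mirrors the JWT 'aud' pattern: a presentation bound to one
    verifier cannot usefully be replayed by that verifier elsewhere,
    because the next verifier's own id will not match.
    """
    return presentation.get("aud") == verifier_id

# Hypothetical presentation bound to one verifier
presentation = {"aud": "https://verifier.example/", "vp": {"type": ["VerifiablePresentation"]}}
print(verifier_accepts(presentation, "https://verifier.example/"))  # True
print(verifier_accepts(presentation, "https://attacker.example/"))  # False
```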
brent: who's gonna act?
burn: there is a very beginning outline of the implementation guide
dmitriz: can we open issues on
that repo?
... I'll create an issue.
burn: there may be a timing issue. We need to be clear about if this is a CR blocker. i'd say no.
<dmitriz> what's the link to the implementation guide repo?
burn: We were restricted from
doing protocol work by the charter. But we are continually
asked how people use it. We say it's a data model, you're not
allowed to ask that. The conclusion was to have an
implementation guidance document that includes our best
thinking now. It's not normative. It may not even be our best
thinking.. We may disagree
... the reason for it is that people will see the document and
people will begin doing things with it before we get to
defining the protocol. If we give them nothing they'll go
crazy. If we at least say this is what we're thinking about
they're more likely to move in that direction. That's what's
going on.
drummond: is that an official decision? Will it be referenced in the spec?
burn: there are some issues where
we must put something in the implementation guide to address
the issue
... we can't normatively reference it
manu: we should have a non-normative link to the implementation guide that says it's maintained by the CCG
deiu: I'm supposed to be tasked with creating that document that lists all the other resources, do you want to put it in that document? That's outside of the spec
manu: good idea
deiu: I suggested earlier we have a home document to discover useful resources for implementers to get started
brent: Anything more for this?
manu: kaz can you move it to vcwg first?
kaz: copy or transfer?
manu: transfer
<kaz> implementation guide repo
brent: how JSON/JWT and credentials will fit together. There were 4 work items. I believe it was owned by Kim. I don't know who can speak to the status of the work items, but it feels like this may have been resolved because we have an example of using JWTs inside a credential
burn: Publish rebooting guidelines doc, did that go anywhere?
manu: it was never completed. Nobody agreed to it. The appendix is that. That's what we were supposed to produce
brent: it feels like this is
close as everything was addressed elsewhere
... create tooling for vc creation?
burn: and schema creation
stonematt: tooling has to come outside of this wg
oliver: what is meant by create tooling for vc creation?
burn: I'm looking for anything about tooling in there...
brent: i remember it coming up at tpac
gannan: it was around trying to create a site like jwt.io so developers could create a credential
brent: like a playground. Definitely outside of the scope of this group. Already being done. Awesome.
manu: I'd suggest close. Overtaken by events
brent: on this we said let's leave it open with the implementation guidance tag on it, but we should change it to an issue in the implementation guidance repo
manu: +1
everyone: applauds brent's issue handling.
manu: The spec talks about a
number of different registries, kind of in a handwavey
fashion
... the status registry is for different status schemes, like
revocation. are you going to have a method that does revocation
through a list published to the web, or through a blockchain
method? Kind of like a credential status method registry...
does that make sense?
... It's for the things that plug in to the credential status
field
... that's what the registry is supposed to hold
... just like there is a DID method registry
drummond: Can I clarify? that's totally different than what ?? calls revocation registry which is an implementation of a cryptographic accumulator
manu: that would be one method in the status registry
drummond: and you would describe other types. I see
ken: and there are other types such as a list
burn: any other questions about that one?
oliver: examples?
manu: one of them is .. we have
one that is basically a list that you publish to a website. It
has a bunch of things that have been revoked in it. That's the
simplest form. You publish a file on the web of all the things
that have been revoked. Not the best for privacy
preserving
... you don't know who someone is checking the list for
... The next would be a blockchain based one. Waving my hands
there because there may be multiple blockchain based status
registries. I expect they'd be the two general types of methods
that would go in there
... We do need the registry because people are going to start
working on this stuff. more than likely the credentials CG
should be the maintainer since they maintain other registries
that the community uses
... The general question at this point is do people think we
need it?
... Suggest yes. Who is going to maintain it? Suggest CCG
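The simplest status method manu describes, a published list of revoked credentials, can be sketched like this. The credentialStatus shape follows the data model's examples, but the type name, URLs, and list format here are assumptions.

```python
# A published revocation list, stood in for here by a local set. A
# real "list on the web" method would fetch this from the URL named
# in the credential's credentialStatus entry. As noted above, this
# is not privacy preserving: the list of revoked credentials is
# public, though the publisher cannot tell whose status is checked.
REVOKED = {"https://example.edu/credentials/1872"}

def is_revoked(credential):
    """Check a credential against the published revocation list."""
    status = credential.get("credentialStatus")
    if status is None:
        return False  # no status method declared: nothing to check
    return credential["id"] in REVOKED

credential = {
    "id": "https://example.edu/credentials/1872",
    "credentialStatus": {
        "id": "https://example.edu/status/24",  # hypothetical list URL
        "type": "SimpleRevocationList2017",     # assumed type name
    },
}
print(is_revoked(credential))  # True
```

A blockchain-based method would plug into the same credentialStatus field with a different type, which is what the registry would enumerate.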
<Zakim> burn, you wanted to discuss chair interrupt
drummond: My first suggestion is just.. I don't think it's a very good name. Registry of credential status types.. as in ways to check the status of a credential
<Zakim> JoeAndrieu, you wanted to ask if we have good, documented criteria for acceptance in these registries?
drummond: The second thing is if it's just that I'm not too worried about it being centralised, because the same situation is with the DID method thing, it's just a reference tool, you're not trying to create an iana style list. I agree that the CCG, if it agrees, would be good to take on the task
JoeAndrieu: are any of these well defined enough yet to understand the criteria for acceptance
manu: no. There's one example of the list type
JoeAndrieu: the lesson from the DID method spec is that we've had to refine the criteria for what you can put in a method spec because we got name squatting. There's work to be done around consensus on how you get in there. The CCG will do that. There is a longer term risk that the CCG is not the right place, but there doesn't seem to be a better place. Does w3c staff like this weird ambiguous architecture?
manu: we don't know yet
JoeAndrieu: it's kind of weird given how lightweight the governance is to become a chair and set up a CG that we're using them for infrastructure maintenance
kaz: another possibility is
creating a dedicated CG for that purpose. A registry CG or
something.
... my personal opinion, I need to talk with team. WGs used to
use a group note for registry publication, but that's for
after. Some other working group tried an IANA registry but
that's really complicated
... I need to talk with ?? about it
Carlos: makes sense to have a registry also for presentations?
manu: we haven't talked about this yet
<kaz> ACTION: kaz to talk within the W3C Team about having a CG to manage registries after WG
Carlos: what we are missing in the implementation is a presentation request. The verifier needs some format for asking the holder what is required to provide the service; if there isn't one, all the implementations will differ
manu: this is the query language for credentials. What's the query language for asking for a credential..
oliver: that's protocol specific right?
DavidC: you could standardise a data format, like a policy
manu: those are registries we haven't even contemplated in the group, we should make a note, but
oliver: we need it, but I don't think it's in scope of this document
manu: once we understand what all these registries are there will probably be a bulk decision on who manages them. I expect most folks to say the CCG
<Zakim> burn, you wanted to mention IANA model as good for understanding options
jonnycrunch: does it have to be done by committee in the CCG? In ?? I represent, each of the member boards have different ways of maintaining a certification. Each method in this registry.. is it a list on the web or a list on a DHT.. there's a whole bunch of complexity, but with the burden of something having to go through committee to get registered it's going to take forever. It should be self describing. it should stand on its own, eg.
with JSON-LD
burn: this has come up.. I've heard people talking about it wrt the CG.. what should we allow people to do with registries, who should be able to register. IANA has been mentioned; for people who don't know, it's what IETF uses. they're a good model to look at for requirements for how to register something. A lot of good thought went into that
<Zakim> manu, you wanted to clarify self-describing and to raise Evergreen Specs proposal raised by wseltzer.
manu: wendy is watching..
... wendy noted that there is an evergreen spec proposal to the
w3c process. I don't know that much about it. Does
anyone?
... I think in general it's the how do you do a living spec,
living document, what's the process behind it
... if it's official w3c process it might not be a CCG thing,
it might be implemented elsewhere. I don't know the current
proposal
... jonny, to clarify, self-describing, these registries are
not meant to be official, they're advisory. These are the
things we know about. All you need to get into them.. longest
is a couple of days
... The requirements are super low. Same as the did method
registry
jonnycrunch: but it took me going
to meetings to get in
... then the process changed
manu: it's the CCG. Anyone can join it. It's not W3C members
JoeAndrieu: the proposal of CCG
managing the governance, not approving what goes in there. What
we learned with the DID method spec is we have to have better
criteria and process. We don't want this, we have too much work
to do, but this seems like the lightest weight way to figure it
out
... if the w3c figures out how to do it that'd be great. Or if
there's another way. But it's about governance, not about
approval
kaz: the github issue on w3c
process. There is some discussion about evergreen approach
including registry as a possible use case. And another issue
about registry itself. There are two different threads
... we should watch the thread on registries defined by w3c
process
<kaz> discussion about registries on the W3C process thread
<kaz> ACTION: kaz to watch the discussion about https://github.com/w3c/w3process/issues/168
manu: the other registries are
the data schema method registry. JSON schema is the only
example we have right now
... I don't know where evernym and sovrin folks are with
reusing that?
ken: it has to be in the proof but could also be in the schema field. The proof points to something more complicated, a mapping and some keys, but eventually points to the schema. Could point to the schema if that's useful for interop
manu: maybe the sovrin/evernym
stuff could go in the data schema method registry
... the refresh service method registry. When your credential
expires you need to go to the.. the manual refresh service
gives you a URL so when the credential expires it gives you the
URL to request a new one from the issuer
... people have contemplated protocols about automatically
refreshing credentials
... and refreshing them before they expire (eg. like
letsencrypt does automatically behind the scenes). that would
be nice for things like drivers licenses
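The manual refresh flow manu describes can be sketched from the holder's side. The refreshService shape follows the spec's examples; the endpoint URL is hypothetical, and the exact type name is an assumption from spec drafts.

```python
from datetime import datetime, timezone

def refresh_url_if_expired(credential, now):
    """If the credential has expired and carries a refreshService,
    return the URL a holder's wallet would use to request a
    replacement from the issuer; otherwise return None."""
    expires = datetime.fromisoformat(credential["expirationDate"])
    service = credential.get("refreshService")
    if now > expires and service is not None:
        return service["id"]
    return None

credential = {
    "expirationDate": "2019-01-01T00:00:00+00:00",
    "refreshService": {
        "id": "https://example.edu/refresh/3732",  # hypothetical endpoint
        "type": "ManualRefreshService2018",        # type name from spec drafts
    },
}
print(refresh_url_if_expired(credential, datetime(2020, 1, 1, tzinfo=timezone.utc)))
```

An automatic method, refreshing before expiry in the letsencrypt style, would be a second entry in the same registry with a different type.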
JoeAndrieu: in the use case
conversation, it's a nice fit for the wallet to handle
that
... I don't want to have to go get it. We talked about whether
or not the verifier gets it, that's not what this is most
likely
manu: it's in the credential. The verifier would see it
JoeAndrieu: they'd see it but they can't use it without a capability
manu: and there's an argument to
move it to the presentation. The issuer puts a refreshService
and you're moving presentations around not credentials. It's
been proposed. Nobody has made a decision.
... Terms of use method registry. As people build different
types of terms of use it would be nice to have a registry so
you understood the different patterns
... The presentation type registry was just raised to
contemplate.
... We also have an LD-Proof cryptosuite registry, which
specifies all of the cryptosuite types, like sovrin zkp, rsa
would be another
... and so on
... and then there are other ones like anchoring proofs like
the bitcoin and ethereum folks might want
JoeAndrieu: is it really ld-proof? isn't it just a cryptosuite registry?
manu: ldproof is a very generalised format? JWTs have their own registry at IETF
JoeAndrieu: just wasn't clear if there are cryptosuites we care about that aren't ldproofs that should be included
manu: not that I'm aware of
... those are all the registries. who needs to manage it..
clearly this group isn't going to do anything except maybe say
they should exist and say who sets them up and manages them.
The suggestion is let's have the CCG take it over and create
all the registries and copypaste the process we use for the DID
method registry
ken: is this something that could be handled in a protocol type discussion?
manu: no I don't think so. Those
are all.. half of them are data model things. They're not
protocol related
... if you rechartered a group you could say the new group
should manage those registries. The easiest thing would be the
CCG doing it.
<kaz> [break till 5pm]
<manu> scribe: manu
Joe: I put the link into the current draft of the VC Use Cases into the slide deck
<burn> https://w3c.github.io/vc-use-cases/
Joe: We have put some work into
the document over the last couple of months, not under same
time pressures of Data Model.
... we made some changes in spec. There are issues with the PR,
we'll need to fix it.
... we wanted to map requirements to use cases in document. All
use cases have about a paragraph explaining things.
... don't know if this is best representation, but we wanted to
get an idea on where things are.
... We have recently added 2.5 use cases, we are going to spend
some time on instructor.
... citizen by parentage -
Joe explains Citizenship by Parentage use case.
Joe: Challenge with this use case
is because parent's name change... mother sends birth
certificate... he gets his passport.
... That's it, but we try to tease out details in these.
... International Travel with Minor and Upgrade... current US
Passports don't note parentage, letter to travel if traveling
with child.
... What we did for this use case was, what are the things that
could go wrong... what are things could ameliorate that
threat.
... There is a stolen key, -- kidnapping her own kid, but
Rajesh could store key w/ 3rd party... nothing to do with our
technology. Could use a hardware wallet/pin/passphrase on
key.
... For our third example, want to focus on threats to third
document, etc.
... Third focal use case is PADI Expert instructor...
<stonematt> https://docs.google.com/document/d/1O-PYcHZYvbjbhONRSdSwdCP2cwg77bxadAJQdHdI66A/edit?usp=sharing
Joe: Pat earned multiple diving
credentials in Fiji, Australia... later hired by NOAA -
requires they maintain certification, remain instructor...
instructor certifications are public, personal certifications
are not public.
... Part of certification is logging hours ... NOAA hiring
local divers... counter-signature of credential, important
because we haven't had that use case here yet.
... Certification in Fiji/Australia -- different schools issued
credentials... different expiration cycle -- validate future
credential status changes...
... Wanted to put in ocap stuff because we've talked about that
use case in this community... certified log of all
divers.
... NOAA makes sure PADI approved those schools... then those
schools issue those credentials.
... Upon accepting the job, get token to check all
certifications, not just single credential.
... when things expire, NOAA's check provides next valid
credential. Archives on laptop.
... Once Pat retires, he revokes token.
... VCs that exist, Advanced Open Water Instructor, Drysuit
Dive Certification, Night Diving Certification, etc...
... Just one VC when going for job application.
adrian: This overlaps
significantly with the prescription use case.
... I'm not sure there is a difference.
manu: Sounds like it's good that this is same as prescription?
stonematt: Yes, but we needed an education / cross-jurisdictional use case.
adrian: Prescription one may have a revocation requirement.
ken: Is purpose of focal use
cases to show off all use cases? Scare people of complexity of
credential management?
... What is the selection criteria?
Joe: This is an opportunity to
provide more depth. Some of the other ones felt cookie cutter,
no distinct characteristics of feature set.
... Let's get something accessible, with a sticky wicket.
... Something that the VC part of it has... not meant to be
exhaustive.
Ken: Do we have a beginner use case?
Matt: We have 20 of them.
... VC Data Model spec is based on university credential.
Ganesh wrote lifecycle of that, embedded in Data Model
spec.
Ganesh: It was pulled in a few weeks ago, into the data model spec.
Joe: Trust hierarchy -- PADI is
liable for correctly certifying dive schools, school is liable
for Pat's schooling, Pat's liable for their actions... this is
about who has liability.
... I don't think this is complete - want to flesh this out --
threat model. Pat revokes update permission, NOAA checks it
anyway.
... Pat revokes capability, in hopes that a revocation isn't
detected immediately,
... PADI revokes permission to issue, PADI invalidates issuing
credentials, PADI invalidates affected certifications...
... Pat could have cheated -- PADI revokes certification,
School revokes certification....
... Pat could lie about a diver
Ned: Malware could take control
Manu: Pat could be Phished.
Joe: Could give capability to wrong entity.
Ned: A lot of these assume that
these issuers are well known -- could make changes to
names.
... Is there a way for Pat to not be spoofed when he gets
certification from school - fake issuer provides
credential.
jonnycrunch: If diver needs credential, based on swimming exercise... charging the diver for proving that... requisite on diving... do swimming lessons...
Joe: You're talking about hacking
the VC in some way -- monetize as much as possible, overcharge
you by 3x... but no one notices it.
... No monetization architecture.
... Pat could hack their own credential.
... Boat could sink...
Group adds responses to threats...
Discussion around how dives are logged in the real world... how certificates are published... how one diver can log dives for another diver.
deiu: How are these issues handled today? Seems like we're trying to come up with solutions for things that may already have solutions.
joe: We are trying to capture all
of the threat models... and state that VC are not magic fairy
dust, there are things that they don't solve.
... These are a bunch of data trails that can be
authenticated.
deiu: Couldn't a lot of these threats be mitigated if divers present VCs before all of this happens?
Matt: These are in tough locales, a lot of this might happen in disconnected locales.
jonnycrunch: Movie "SWIM" where they leave divers behind... manifest could be captain of ship headcount before/after. Proof that everyone is on board, put name on manifest.
Joe: Some of the things said, vacation, not handled in this use case. In emergencies, sometimes you throw protocol out.
deiu: Attaching evidence of proof of identity, record driver's license and photo, could handle these threat models.
Ned: Is there an assumption that verification can't happen P2P.
Drummond: No.
JoeAndrieu: If you are offline, how do we deal w/ revocation?
manu: If you are offline, you can't check revocation.
Drummond: IBM has a whole offline driver's license use case and system they deployed a while ago.
Ned: There is a lot of assumptions/trust baked into current systems.
Joe: Thank you for the feedback, we'll polish that up.
Matt: This is the plan, is it the right plan?
Burn: Is this the right framing?
Dmitri: I heard no test suite?
Burn: I said that's a last resort if we can't figure out the test suite.
Matt: We have MIME Type discussion.
Ken: Let's bring up MIME Type in first open block
Matt: Could we shrink some of these items down?
Group agrees to have context discussion first, then open issues, then mime type, then test suite
<stonematt> http://www.bar-celoneta.es/
[End of Day 1]
<rhiaro> slides: https://docs.google.com/presentation/d/1_AKEYKWqaiMIUb6tlo3yVONTl9Z-71H4XfdTtgUF88U/edit
<rhiaro> scribenick: rhiaro
burn: welcome back survivors from
last night. It becomes clear who the committed people
are..
... We need one more scribe for today
... This morning we're starting with the context discussion, a
big one
dmitriz: Would it be possible to switch the context discussion and the MIME type discussion? I would like a bit more time for preparation
burn: possibly yes. We don't want
to move the context to too late
... We could do MIME type and then.. we need an hour for each.
If we can get started now with the MIME type then we might have
enough time to get it done before the break
gannan: I thought there was a dependency between the two?
burn: anything else we can
switch
... Let's talk about the DID working group and try to keep it
short
burn: The advanced notification
of working on the charter has been sent out [to the AC]
... We are expecting that there will be a DID WG sometime
within the next couple of months. There is definitely strong
support. It's a matter of process and working through
objections
... We think it's reasonable to expect that a DID WG will begin
before this group finishes
... There is very strong overlap expected between this group
and the DID WG participants. How can we minimise negative
impact of that?
... For some people it might be challenging to schedule
and attend multiple meetings like this one
... I was in a WG that had 6 specs at once and most of us were
interested in all of them and that was a challenge, that was
exhausting, and that was all in the same WG
... So I want to hear any thoughts on the logistics, how we
might make that work better
<Zakim> manu, you wanted to discuss some thoughts
manu: I really prefer that we..
we've already talked about rechartering, I'm -1 on rechartering
VCWG while DID WG is going on. If the DID WG is chartered in
the next 3 months, we should be mostly through CR at that
point, and then it's autopilot once you get to PR. Seems like
the timing is great.
... So just don't recharter this group with anything new.
Handing management off to CCG if possible, and then
... (missed a bit)
yancy: I have some concerns that
many of the decisions we make in this WG will be directly
impacted by the work done in the DID WG
... I do think there is a possible reason to recharter the work
done in this group because of that
... Some things being proposed in the implementations are
dependent on which resolver you choose, as in which vendor you
choose to help with your DID resolution
... I think that's potentially the conflict
... It's something I need to know about before being an
implementor for VCs
burn: could you be more specific?
yancy: Two of the new
propositions such as ZKP can directly depend on which DID
resolution model you choose
... Which of the two new features that are being proposed that
didn't exist for the VC version
JoeAndrieu: I want to concur with
the conflict of interest issue, which to me has been
productive. It's been a gift to have folks dealing with
different methods bring their concerns with how credentials
manifest. I think it's an inherent complexity. I want to
acknowledge the problem, I think it's constructive
... the zkp stuff has been an implementer driving some
features, that's largely from sovrin
... I want to be able to support zkp but that's also a conflict
of interest in doing DID work in isolation and VC work in
isolation
rgrant: it's a conflict of interest in the separation of WGs? it's not about how the WG gets to agreement? It's a poor structure if you're trying to use that feature?
DavidC: it's the wrong term, conflict of interest
manu: it's a coordination issue
stonematt: It's about
whether/when we recharter, vs how we manage the
transition
... Many people will leave this group and go to DID. We will
have to decide in the short term is there a logistical impact
of that
... and a tactical nature of that question
... The second topic is more strategic, about how do we
progress in both lanes?
... DIDs are gonna go and we don't want the whole VC community
to diverge without continuing to work together because we don't
have a group
... I think part of what I took away from yesterday's
discussion was that we will rely heavily on the CG to be the
vessel for the VC work that continues until we get to the point
that we have something that we can clearly claim as a standard
that needs to be standardised. Right now we have a set of
problems and a set of things that might be problems
... and we need to go incubate that for a bit and continue
working together while we determine the nature of that problem,
so we can express it as a thing that needs to be
standardised
... We're not ready for that. We can't stop, but the w3c
formality of a WG is not the right construct to have that
discussion
ken: I want to clarify what yancy
was saying first
... You feel like there's a dependency so that one needs to go
in front of the other so you can make decisions?
yancy: Yes I believe there's a dependency, and that DIDs are a main driver for the use of VC. A critical driver
ken: Joe, yours was about being torn in two different directions?
JoeAndrieu: Mostly I was trying
to acknowledge where yancy was coming from, but that I saw it as
constructive
... At TPAC, your collective concerns about privacy led me to
support removing the id from presentation. I got that put in in
the first place. But your concerns about privacy convinced me
to change that
... The method point is constructive to the data model
<Zakim> burn, you wanted to say manu may be right that the timing could work out now
ken: there are differences and approaches that need to be resolved, and that's constructive?
JoeAndrieu: yes
burn: I don't know how much I need to say on this. I agree with manu that the timing might work out wrt completion of the current charter of this group with the DID WG. As long as we wrap up on the time schedule shown with the CR, and we don't need a second one that requires a lot of work. The DID WG is not forming tomorrow, or in a month, it takes time. However within a couple of months it could be, and we could be largely done.
Practically it may not be much of an issue. Unless something goes terribly wrong we probably won't need another face to face of this group, that would be my hope
drummond: for the reasons you
just articulated, and a couple more, I don't think there's
gonna be a conflict
... I think it's going to be a fairly smooth transition
... I look at this as a layering issue. DIDs operate at a
lower layer. They're an enabling tech for VCs
... VCs set up a killer use case
... so we're going down a level for a group that's going to
focus on that. Seems like the phasing is pretty good
... The truth is it won't really wrap up.. that is contingent
on it not taking more than one extension. Seems like there's a
whole lot of things that want that to be the case
... I don't think it's going to be that big of a deal. Plus
everyone who wants VCs to be successful wants DIDs to be
successful too
... we need to keep in mind that DIDs seem to have attracted a
much larger interest base than I ever expected
... the whole area of SSI generally is really getting
exciting
... Seeing the whole decentralised PKI aspect of DIDs
... So that would also be a rising tide that helps float the
boats, and it will help float this boat
yancy: VCs have been around for a while and this is the second charter correct?
stonematt: it's the first charter as a WG, but we did work as a CG
yancy: the first implementation was based solely on CG?
burn: there was not a standards track WG. there was pre-standards track work. There was a task force. Just like the DID spec now
yancy: we have implemented
previous versions of VC and there hasn't been a lot of
traction. I think DIDs will help give VCs that traction.
There's going to be a lot of decisions that come out of the DID
WG that might affect the VCs
... that's the main argument I wanted to point out.
<Zakim> JoeAndrieu, you wanted to offer ccg as continuing forum
stonematt: I want to reinforce
both of what drummond and yancy just said. I think for us in
this community it's very important for us to stay engaged with
the DID work and to keep our voice clear and even if there's
dissent among us, unified outside of us
... Specifically because there's a broader community getting
interested in DIDs and we don't want to accidentally have an
evolution that has an outcome that makes VCs harder to
implement. I don't know what that means but we have to stay
engaged to make sure that doesn't happen
... To yancy's point, when we started this, I hoped this would
all be done in 2 years and there was a lot more. DIDs are the
secret sauce required to enable this technology
... I suspect there's another one after that
... It's going to be wallet management that's an ecosystem that
VCs eventually sit upon
... This is my education my learning experience is that we're
really really early movers and thinkers in this space and the
actual product in the market place will take some time
... There's a lifecycle that we're moving through, from cool
new tech that thinktanks are thinking about to early market
adoption to can't imagine life without it, and if you look at
GPS and mapping, when we all got our first smartphones we thought it
was kinda neat, and now we're looking for vegan ice cream in
Barcelona and it draws a line
... No-one thought twice about saying I'm gonna follow the map
and use all the technology that was there
... We're at the very beginning of having a blue dot on the
crummy map. We have to do a lot of steps to get to the point
where no one recognises what went into it
yancy: I'm trying to avoid the situation where if we want to implement VCs and we want to choose zkp or jwt support, we're gonna have to choose which DID method to go with. Whether it's sovrin, ibm or whatever. Before the DID WG is complete, that causes a conflict
brent: I don't see the DID method determining what signature types you're allowed to use
JoeAndrieu: speaking as one of
the cochairs of the ccg, I want to anchor that we welcome
bringing conversations there, whether it's proposing work items
or asking for time on the agenda to work on VC issues after the
WG wraps up. We're all welcome there
... You're also welcome at RWOT to write papers about these
issues, and IIW
... each of those are more and more open
<Zakim> burn, you wanted to warn group about dropping off early
JoeAndrieu: Those are places the community can continue the conversation after the WG wraps
burn: to contradict myself from
earlier and irritate all of you. I do have to warn all of you
very strenuously not to drop off this group because you're
excited about the DID work. It is a common problem. Everyone
wants the new shiny. It's easy to look at a spec and think
we're done, but often at that point you still have a huge
amount of work.
... The webrtc spec is an example, it's still not done, and it
was 2 or 3 years ago they did feature freeze. But then they
added more.. and more.. and people got interested in other work
and they started working on 2.0. But there weren't many people
left to finish 1.0
... It's like putting out the next release of your product
before you've advertised the first one.
<Zakim> deiu, you wanted to mention separation of concerns and future proofing
deiu: I want to reiterate again
the importance of separation of concerns wrt the DID work
and the signature work. In its current form this wg has
produced 1/3 of the work required to make VCs useful and
valuable to companies today
... VCs without identity and proofs have very low value to
anyone who is trying to use VCs. Being future proof for this
particular spec means we have to create the DID WG as its own
entity, but also a group that handles signatures and proofs.
Merging them and tightly coupling them with the current spec
will make this current spec less strong or much more difficult
to be upgraded later when a new technology that is more
resilient, interesting or
nicer comes along and we want to use that as well
<manu> +1, yes, absolutely.
drummond: it seems there's an era
of VCs before protocol, and an era of VCs after or with
protocol, it seems inevitable
... I am a proponent of doing this work before protocol, but I
think that era is coming
... it will inform a lot of what needs to be done
... And to respond to yancy's point about the layering.. the
capabilities for DIDs and DID docs to support things above it
is independent
... You need what's necessary to support the protocol layer,
but they're independent layers
... you should be able to get support for what the protocol
needs from any DID method. Some DID methods are very
constrained, so you may have cases, but in more cases what
you'll have is as long as you support extensibility of DID docs
you'll get what you need to support high level protocol
credential exchange. Doesn't mean you'll get the same trust
guarantees, that's a big difference eg. from DNS, but that's
the way i've been approaching DIDs for a
long time
oliver: I have a similar point, to emphasise that the VC spec as it reads now is DID agnostic, it's even possible to use plain URLs instead of DIDs, so there's no real dependency, so you should be able to use any DID method with any proof or data format. In uPort's case we have the ethr DID method and it should be no problem to use ld-proofs or jwts
brent: I agree with drummond and oliver
DavidC: In our implementation we
don't currently use DIDs, it's important to keep that
separation
... The point about signatures and proofs.. if the proof has a
type, does that not give the flexibility you need to be able to
future proof with new types of proofs?
deiu: I'm saying the flexibility is there and we should maintain it
brent: what it seems like we're talking about is that we agree the data model itself is not enough. In order for the data model to succeed we need to build protocols. In order to do that we need to keep working together and stay in touch to make them interoperable. Once we've done that we can maybe take all these things trying to interoperate and make a 2.0 that describes formally how they are doing that
deiu: brent, your point at this
stage I wonder whether this group needs a recharter if we could
have the work that's still missing, that's pieceing the puzzle
together based on DID and proofs work, could we have the CG be
in charge of maintaining and producting docs that show
implementers how to build VC systems using emerging technology
all the time
... so we have.. I don't want to say a living spec, it's not a
spec, but a guide to how we use those technologies together
with the spec that has the model definition to build the actual
system
... a VC 2.0 could be obsolete by the time it's out
<Zakim> burn, you wanted to start wrap up and to mention impl guide
<JoeAndrieu> +1000 for CCG to write/publish guides & best practices (proposals and leads welcome)
burn: We already have an implementation guide and we talked about that being something we need to start but that the CG will want to take that and go beyond whatever we write in this group. What you said fits into that. There's implementation, but it's not just implementation. When the privacy interest group came and joined us at tpac, they asked us all the hard questions about how we intended it to be used. They said just write what you're thinking.
There's an awful lot that can be written that's not ready for spec yet and may never be. The CG is a great place for that, just as it is a great place to incubate work that may go standards track
scribe: I want to encourage us to
wrap this up now
... If we get some conclusions out of this this is nice. This
is one of the few sessions where we don't need a decision,
this is a discussion about what are the factors.
yancy: I want to second the 2
comments made that VC 2.0 could be obsolete due to the work
coming out of the DID WG, and I understand you can use
URLs
... but I do think that the work coming out of the DID WG is
going to be the secret sauce that makes VCs worthwhile
drummond: I want to be clear what steps are left and what should any of us around this table be doing to help the DID WG happen?
<ken> I agree with Yancy's comments
drummond: I know manu you're close to the chartering process. what timeframe should we expect? People keep asking
<Zakim> burn, you wanted to remind that this is VC 1.0
burn: I heard yancy refer to our current work as VC 2.0 and I want to be very clear ,this is VC 1.0. Everything before that was VC 0.
<Zakim> manu, you wanted to respond to drummond
<kaz> DID WG draft charter
manu: We don't know, it's a black box. It's over the wall with w3m, we have no assigned staff contact. Anything I hear from wendy that is okay to relay, I relay. What went out was a heads-up to the W3C AC. Now the members know it's coming, they can look at the charter, we're gathering feedback. There was a workshop, the conclusion was let's do a DID WG. Or let's float a charter that can be worked on. I expect it's gonna take two more months. once
the actual charter goes out to vote, we'll all have to pitch in and get votes for that charter
scribe: We have access to all of
the AC reps, reach out to any you know. Go down every single
company and see if the technologies would be interesting and
write them directly, say you think it's going to be useful
technology because of xyz, if we can split the list up and
write personal direct emails to AC reps. We already have 65
commitments to vote in favour, but it doesn't hurt to remind
everyone the charter is out for review.
... When the charter goes out for official vote, all of us
should mobilise
drummond: if the vote is successful, how long?
manu: a month to two, it depends
on if a staff contact is ready
... I'm pretty sure wendy is lining that up
... the vote is a month and a half, and then maybe another 30
to 60 days before the first official meeting of the group.
Could be like May/June timeframe
burn: Any last comments?
dmitriz: for some of us this is
the main event.
... This is the source of much of the argument and CR blocking
issues behind the scenes of the spec
... Soooo what is a context? It is an attribute.. we have two
serialisations right now, the JSON and the JSON-LD
... in the form of a JSON web token
... The thing to note about them is that they are 1-1
mappable
... there is a nice set of steps provided by oliver and team,
pre and post processing, that shows how to translate from the
JSON-LD context attributes to the JWT attributes. There are
predefined ones that are one to one and everything else gets
stuffed into the VC attribute
... first of all I want to point out that even though there's
this major philosophical rift that we have two syntaxes in
implementation, it's not that bad, it's almost trivial. That's
outside of the question of the context
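[The 1-1 mapping just described can be sketched roughly as follows. The registered claim names (iss, sub, jti) and the catch-all vc claim follow the JWT encoding being discussed; the helper names and exact field handling here are illustrative assumptions, not the normative algorithm.]

```python
# Sketch of the lossless JSON-LD <-> JWT mapping discussed above.
# Claim names (iss, sub, jti, vc) follow the JWT encoding rules being
# discussed; the helper names are illustrative, not normative.

def vc_to_jwt_claims(credential: dict) -> dict:
    """Map well-known VC properties to registered JWT claims; stuff the
    rest into the 'vc' claim."""
    claims = {}
    vc = dict(credential)  # shallow copy so we can pop mapped fields
    if "issuer" in vc:
        claims["iss"] = vc.pop("issuer")
    if "id" in vc:
        claims["jti"] = vc.pop("id")
    subject_id = vc.get("credentialSubject", {}).get("id")
    if subject_id:
        claims["sub"] = subject_id
    claims["vc"] = vc  # @context, type, credentialSubject, etc.
    return claims

def jwt_claims_to_vc(claims: dict) -> dict:
    """Reverse the mapping: the round trip is 1-1, as noted above."""
    vc = dict(claims["vc"])
    if "iss" in claims:
        vc["issuer"] = claims["iss"]
    if "jti" in claims:
        vc["id"] = claims["jti"]
    return vc
```

A round trip through both helpers reproduces the original credential, which is the losslessness point made below.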
drummond: it's 100% lossless, exactly?
dmitriz: yes, that's the
difference between the syntax
... some signatures require canonicalisation some do not. The
difference between the syntaxes is orthogonal to the
cryptographic signature method. You can use zkps with either
one, you could use ld sigs with either one
... there you get into a canonicalisation problem, but that
does not touch on the difference between the two
syntaxes.
... The issue now is the context attribute
... It's an array
... When translating between two syntaxes, where does the
context go? It doesn't really belong in the JWT.. in the
standard attributes
... That's fine.
... The JWT format explicitly provisions for, because they
understand that all actual implementations will add
app-specific attributes, we're in the process of registering
the vc ?? with IANA
... In translating between the syntaxes, the context attribute
gets put into the VC claim with all the other credential
stuff like type, subject
brent: is this how it's happening now?
dmitriz: that's what's in the spec right now
manu: this is what is happening right now
dmitriz: Why are we doing this?
what is context for?
... it's to prevent global attribute collisions. We're going to
have interoperable verifiable CREDENTIALs, what my app means by
'name' will, if we just go with the typical JSON style claims,
there's a potential for name collision. What my app means by
'name' might be different by what your app means by 'name'. You
might mean first name, I mean something else
... There are ways to make json attributes in the credentials
globally unique and collision resistant without reducing
developer experience. No 'name-dmitrisapp'. It's an easy way,
just adding the attribute, to not step on each others' toes
brent: and to not step on our own toes
dmitriz: you also write docs for
yourself, YES
... That attribute there is for interop.
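[The collision-resistance point can be made concrete with a hypothetical sketch: two apps both use the short key "name", and the @context entry is what pins each use to an unambiguous global IRI. The context URLs and term mappings below are invented for illustration, and the lookup is a toy, not a real JSON-LD processor.]

```python
# Hypothetical sketch: two apps both use the short key "name"; the
# @context maps that key to a globally unique IRI. This is a toy
# term-to-IRI lookup, not a real JSON-LD processor, and the URLs are
# made up for the example.

CONTEXTS = {  # stand-ins for documents fetched from the context URLs
    "https://example.com/payroll/v1": {
        "name": "https://example.com/payroll#legalName"},
    "https://example.com/gamer/v1": {
        "name": "https://example.com/gamer#screenName"},
}

def expand_terms(doc: dict) -> dict:
    """Replace short JSON keys with the IRIs their context defines."""
    mapping = {}
    for ctx_url in doc.get("@context", []):
        mapping.update(CONTEXTS.get(ctx_url, {}))
    return {mapping.get(k, k): v for k, v in doc.items() if k != "@context"}

payroll_claim = {"@context": ["https://example.com/payroll/v1"],
                 "name": "Jane Q. Public"}
gamer_claim = {"@context": ["https://example.com/gamer/v1"],
               "name": "xXdragonXx"}
```

After expansion the two uses of "name" no longer collide, which is the interop property being asked for.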
... What's the action item?
... We're asking to clarify the spec. It was assumed in the
spec but it was not read that way by implementers, so we're
asking to explicitly mark the context attribute as required
... What we're not asking is that anybody has to use JSONLD
processors, or any sort of library stack, aka JWT developers
can continue what they're doing without changing tooling
... The technical reason? If we include this context, we have
machine processable interop between two syntaxes
... What are the downsides? The main reason put forth against
it is that it locks us into having to use exotic libraries.
Or it's a philosophical battle
... it doesn't have to be a battle. It's just an attribute, you
don't have to use other libraries
... The main argument is political. But are there actual valid
reasons against it as developers?
... The two things.. what's the dev experience going to be
like? What is it going to look like to develop with VCs? And
what about storage?
... There are ways around these issues
... How do developers know what to put in the context
array?
... Will developers have to create their own vocabs, their own
schemas, where will they publish them?
... We have massive experience in the field, e.g. from google
teaching all the devs in the world who profoundly don't care
how to use context. They will cut and
paste from examples. From stackoverflow, better yet from our
guide
... Usually from app-specific guides. The educational
credential developers will be copy-pasting from those specs and
articles
... It can be even easier. We can add some text about it in the
implementation docs. We can in our vast spare time make some
tooling and graphical wizards to make it even easier.
jonnycrunch: I've filed the issue about security concerns, the MITM attacks, and we've talked about the mitigation. I want to add... I have a demo to do DNS spoofing. I just want to voice that legitimate concern. Even if it's mitigated with caching, the first-time request is still an issue. Implementers will cache it, but the first time you're going out to get it, the guy in a boat who is the dive instructor can subvert that and redirect you to my
own context, and most crypto libraries validate the empty string as true; you should have smart code that prevents that, but all I would need to do in the context is redirect you to an override
DavidC: a practical question. In the context we've got these URIs for the types. in type we say VerifiableCredential, which isn't a URI because the URI is in the context. You said the devs will copy and paste, and will extend it to their own application use. so they need a new type. Now they're gonna know what to do, see the strings and add their own strings on the end, but of course it won't work because the context doesn't have their string in it
scribe: is it possible to cut and
paste the context, totally ignore it, then put URIs in the
type. So instead of having VerifiableCredential, put the
URI
... so even though the context says you can replace the URI
with the string to save you typing, you can just ignore it and
put the full URIs in, and then the implementors will see
they're all URIs and put their own URIs in there
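[DavidC's suggestion can be sketched like this. The W3C credentials context URL and the VerifiableCredential IRI are real; the university extension URL is a hypothetical stand-in.]

```python
# DavidC's point, sketched: the short string "VerifiableCredential" is
# shorthand that the context defines. An extender who spells types out
# as full URIs needs no context entry for their own type. The
# example.edu URL is hypothetical.

VC_TYPE_IRI = "https://www.w3.org/2018/credentials#VerifiableCredential"

# Relying on the context to abbreviate the type names:
compact = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
}

# Ignoring the abbreviation and using full URIs, so implementers see
# URIs and naturally add their own URIs for extension types:
explicit = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": [VC_TYPE_IRI,
             "https://example.edu/terms#UniversityDegreeCredential"],
}
```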
dmitriz: very good point, we
should consider opening an issue to tweak the examples
... Storage requirement. You may hear in the arguments against
context if I'm storing billions of VCs (a great problem to
have) what about the literal extra bytes that the context
attributes will have. By the time you have a sophisticated
application with billions of VCs, there are ways to not
actually have to store the context but to add it on
serialisation based on your own application logic. Storage is
cheap and in the very valid cases where you
just don't want to pay for those extra bytes there are ways around it. Storage should not be a dealbreaker for requiring the context of the output, of the serialisation
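[The storage workaround described above can be sketched as follows; the constant-context idea matches the discussion, but the helper names, stored shape, and the second context URL are illustrative assumptions.]

```python
# Sketch of the storage point above: persist credentials without the
# constant @context and re-attach it at serialisation time from
# application logic. The app context URL below is hypothetical.
import json

APP_CONTEXT = ["https://www.w3.org/2018/credentials/v1",
               "https://example.com/my-app/v1"]  # second URL is made up

def store(credential: dict) -> dict:
    """Strip the constant context before writing billions of rows."""
    return {k: v for k, v in credential.items() if k != "@context"}

def serialise(stored: dict) -> str:
    """Re-attach the context only when the credential leaves the app."""
    return json.dumps({"@context": APP_CONTEXT, **stored})
```

The serialised output still carries the required @context, so verifiers are unaffected by how the issuer chose to store the rows.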
scribe: What about MITM
attacks?
... We have this array of contexts, they are URLs, which we
fetch over HTTP
... what about the server hosting the context being
compromised?
... This is a solvable problem in ipfs, via hashlinks
... we have recommendations on how to link to contexts in
verifiable ways
... For even simpler experience, embedding and versioning the
context. You can npm install the context as a dependency and
refer to the context locally
... But please understand that for the very security conscious,
cryptographically locking it is possible
... We're asking to mark the attribute as required in the
spec.
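[The "cryptographically locking" option described above can be sketched by pinning each context URL to a known digest and rejecting a fetched copy that doesn't match. This is the idea behind hashlinks, shown here with a plain SHA-256 rather than the actual hashlink encoding; the URL and context body are hypothetical.]

```python
# Pin each context URL to the digest of the exact document we trust,
# and refuse to use a fetched copy that doesn't match. Sketch of the
# hashlink idea using plain SHA-256; URL and body are made up.
import hashlib

TRUSTED_BODY = b'{"@context":{"name":"https://example.com#name"}}'

PINNED = {
    # url -> hex sha256 of the context document we trust
    "https://example.com/my-context/v1":
        hashlib.sha256(TRUSTED_BODY).hexdigest(),
}

def check_context(url: str, fetched_bytes: bytes) -> bool:
    """True only if the fetched context matches the pinned digest."""
    expected = PINNED.get(url)
    return expected is not None and \
        hashlib.sha256(fetched_bytes).hexdigest() == expected
```

A MITM who redirects the first-time fetch to a different context body fails the digest check, which addresses the cache-miss case raised earlier.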
ken: The immutability issue,
we're trying to address that already with hashlink etc. There
are a couple of ways we can address that, I agree with jonny's
concern
... I'm gonna call turtles all the way down.
... That's one of the problems we run into with some of the
contexts, some include others, which include others.. it's not
an insurmountable problem but some people think oh I'm just
including the VC context, which includes other contexts, and
then I need to create my own context side by side with it.
There's a referential problem that makes this more complex than
just including the VC context. I'm not opposed to having it
there, but it's a bigger
camel than we look at on the surface
scribe: Other issue is tools. In
order to process all the turtles we need tooling.
... Some type of tooling or libraries support will be required
to properly process the context. I don't object to coming up
with a solution but we need to be aware of it; then chances of
adoption will go up
... If we require a context to be included as an outward
gesture of including the context, but then ignore everything
in it, then you don't really accomplish the purpose which is to
establish the meaning of the words. Just including it and then
violating everything that's in it is misleading and
dangerous
... If I'm going to include a context with all the definitions and
then deviate from it, you put up a false flag about what you claim
to have followed and haven't actually followed
dmitriz: if we teach the
developers to do this, we point out it makes attribute name
collision resistant.. to make it clear we should address how
you should take advantage of the tooling, but that is
optional
... We need to validate so that members of the ecosystem who
*do* want to perform jsonld processing it doesn't break their
stuff
<drummond> Ken just covered the point I was going to make, which is that if we require the context, but developers don't actually know why it's there, they are going to ignore it and thus create invalid JSON-LD.
oliver: I agree with jonny's
concern but it was addressed multiple times. I'm not sure what
the solution looks like, which spec it's in
... Another thing, do we assume that contexts can be hosted on
any webserver. Isn't there a problem that we have a dependency
on the availability of the servers? Do verifiers have a choice?
You can always cache them, that's true.
dmitriz: I want to make offline
capable apps. I want apps to use VCs and not have to fetch them
from the net
... it's possible today.
oliver: some people might not be able to connect to any type of external web service. There are closed loop systems.
<Zakim> manu, you wanted to note that we only need two implementers to support @context, and we have many more than that now. and to note some of the discussion we are having assumes the
manu: Some of the discussion we're having assumes that we've already said we're going to put context in there. And this discussion is about are we going to put it in there at all. The other discussions are secondary to whether we make context mandatory. The other point is that we only need two implementers to put context in the spec. if it says MUST we only need two to implement it. Right now are there going to be a whole bunch of implementers who
object to it being in the spec at all. Then it becomes who is going to object more vigorously to having it in vs not having it in. The proposal on the table is do we put context in, are there two implementers who are going to support it (I think we will)
scribe: Then the question is how
many objections are there gonna be to that
... are the objections to it being mandatory going to be more
than the objections for it not being mandatory
... If one person objects to the MUST and 8 object to it not
being MUST, it goes in the spec
... Consensus means the least number of objections
jonnycrunch: I love JSON-LD, it's
lightweight, it really helps with interop. Ultimately it's the
turtles all the way down issue
... Ultimately in ipld we implement this as a hashlink, in ipfs
it's a /
... it's a hash reference. It's this rooting problem.. it's
turtles all the way down until it's what Tim says
... I think it's a phenomenal way to get on the same page, but
I have some concerns about the way we get there. I don't want
to push an IPFS solution, but IPLD is taking JSON-LD and taking
it to the next level which is hash based linking
... I know there are concerns about IPFS or IPLD, it's all CC
license
burn: jonny is not currently an
IE or Member. Keep that in mind wrt to this suggestion
... Not in any way demeaning what he's saying, but members must
understand this
dmitriz: Personally I'm really grateful for IPLDs experience in this
brent: requiring that it goes in
does not mean anybody has to use it. The goal of the inclusion of
the context is to enable a shared vocab and enable
interoperability
... Perhaps opposition to including it is that as a solution to
interop it may be a solution that doesn't work for some
people
... Saying you have to put it in might be detrimental to
interop, the way they would go about being interoperable wouldn't
be through the context. Just a thought?
<Zakim> rgrant, you wanted to address specific overrides and to ask if contexts are hashed and to ask what consensus is
brent: There's this @context thing and everyone assumes this is how interop will happen but someone needs to do it a different way (hypothetical)
rgrant: I want to ask about specific overrides. I think it is useful if there is a context and it's okay if specific terms have overrides via URI because you can look it up, it's like importing a module and then specifying the name of the module and then fully specifying it in different programming languages. each call can be traced; in this case the meaning of each term can be traced to a parent context or a URI. I think it is useful and not a
problem.
scribe: I want to ask if contexts
are hashed? If they are then we know after 1 web hit whether
the rest need to be followed. This would mean a change in how
context is listed in the document. You would say here's my
context and here's the hash, the deterministic normalised thing
that I would sign, after getting it
... Is this happening? I'm not aware of it happening. Is it
part of the background?
manu: that's what the hashlink spec is about. It's experimental, we need to further it
brent: it could be part of the dereferencing but it's not necessarily
manu: that's an option open to people who want to use it
rgrant: does this data model say when you have it here's where to put it?
manu: that's a different spec
<Zakim> drummond, you wanted to make a suggestion (at or near the conclusion of this discussion)
rgrant: so there's a way to do it? it's not precluded by our inaction?
everyone: correct
rgrant: what is consensus? It's not about votes, it's about valid technical concerns
drummond: the discussion here in this room we need to capture in the implementation guide, ten thousand devs will ask this question. It's a critical point of understanding. All the considerations about security issues, hashlinks, etc, they need to be in there. If it's mandatory and developers cut and paste and ignore it, are we perpetuating a problem of a bunch of broken JSON-LD. Documents that don't actually follow it. if the implementation
guide says declare URIs in there, good solution not a problem
scribe: At RWOT we specifically
said there's going to be a specific syntax for addressing
hashlink in the context of a DID
... cool, huh
<Zakim> burn, you wanted to comment on manu's explanation of W3C process
DavidC: If someone votes against it and says we don't need it, we have to say: how do you solve the interoperability, what is your solution? If they have a good technical solution then we can say contexts are a MAY
burn: the discussion should have been framed the way manu said, whether we're going to do this or not.
<jonnycrunch> Would you please invite me to join the working group as an invited expert. I just went to sign the disclosure and says that I am "not authorized to join" and I have "bogus category" jonnycrunch@me.com
burn: Wrt consensus. manu is correct, he explained the end of the process. The goal at w3c is to get consensus. The question is what do you do when there is not consensus. Every organisation has this problem. Ultimately you need to understand what happens when you have a disagreement even when you have tried everything. Ultimately the director makes a decision. That is considered a failure of consensus. That is a black mark on an effort, to have that
happen
scribe: We try really hard not to
do that. It still happens sometimes
... manu is correct, he can absolutely encourage that, the
chairs can say we don't see any other way forward than to do
that, and let objections fall where they may. I was hopeful
from conversations last night at dinner that we were not going
to go down this road.
manu: I'm not saying let's put this to a vote. i misspoke
burn: we do try to get consensus; the reason we discuss it for so long is to get consensus. Sometimes you get pushed on time and energy and it falls to what manu said. In the end, that's what happens when you fail to get consensus. Every group has a definition in w3c; ultimately a failure results in what he said
yancy: about deeply nesting links. It seems like that's a classic parser/compiler problem, which isn't necessarily unsolvable, but fetching contexts iteratively could be a concern. Although if there's a way to cache it so it works offline then it's not a major issue
dmitriz: we have very easy precedent for exactly that: Gemfile.lock, Maven, npm lock files
<Zakim> ken, you wanted to discuss the bottom of the turtles is possible
dmitriz: this question of "I have a deeply nested structure, it could change underneath me": that's why we have package locks. We have a lot of experience with infra, including caching, to deal with this.
ken: the fact that I said it's
turtles all the way down.. there is actually ground at the
bottom. I have walked the turtle chain. I've been down the
whole way, there is an end to it. It's just more complex than
the simplest approach.
... it's not an infinite tree
<Zakim> manu, you wanted to note Lighthouse and to clarify that this is about technical issues
ken: the second thing is that the fact I brought up 4 concerns may be perceived as meaning I oppose the context. I do not oppose the context. I think we have reasonable solutions to 3 out of the 4; I think we should press forward, and making it mandatory is a good idea
manu: I was NOT suggesting we put
it to a vote
... It is about technical merit. I am not hearing the reason
why we shouldn't do it from a technical standpoint. What is the
technical argument against doing it? dmitriz went through a
number of them. What is the argument against it? If we don't
have it we can't argue against it. The objection has no
technical merit
... For those of you with Chromium.. we've been talking about
dev tooling. Ctrl+Shift+J, click on the audit tab. It's a
project called Lighthouse built into your web browsers. They've
built JSON-LD support in. It'll be in 2.6 million people's
hands
... the reason they're pulling in jsonld.js is because the
schema.org folks want to do full json-ld processing. They found
it useful.
... Developers were copy-pasting from schema.org to get it to
show up in the search rankings, people were getting it wrong,
and now this tooling helps people extend it in the right way.
We can build on top of this tooling
stonematt: The technical merit of
the objection to making context required. Asking the group to
respond: we have not heard a strong voice or advocate for not
making it required, other than for some perceived future
objection purpose
... Anyone in the room who would clearly object to making it
required?
... Who will object? We need to make a decision, we have very
strong advocacy for it should be required with technical
reasons. we don't want to shut down the discussion, but we need
the voice of dissent to be in the room
... Can we get that?
DavidC: the added complexity of
it, rather than just simply using URLs. We didn't use context.
If you go back through X.500 and LDAP, they had the exact same
problem of interop
... but I recognise the value in using it. But I didn't know
about the multiple levels of depth of fetching and
retrieving
ken: it's not insurmountable, and it's not as deep as one might think
manu: it's 2 deep right now
burn: are you making a formal objection?
DavidC: no!
burn: if you have dissent now is the time to own up to it
oliver: I raised some concerns earlier, but I'm really not opposed.. we are fine with adding the context as mandatory. But do you think it makes sense to provide a section in the spec to say it's fine if you don't really validate or use the context?
dmitriz: we're making it required at the spec level. In the implementation guidance: it's required to include it, but you don't have to take advantage of it
drummond: if there's a direct pointer to the implementation guide, because it's such a deep issue.. if someone chooses to ignore it they're going to ignore lots of other things; if that's there it takes a lot of that problem away
burn: remember this when we get to the test suite
Ned: I would like to understand what the security assumptions are.. we're making assumptions about the security of the cache
<Zakim> manu, you wanted to "you don't hae to process it"
manu: we're dealing with stuff that's pretty new experimentally. We know that we can do a cryptographic hash over the context. Implementations can hardcode the hash so they'll never go out to the network. The spec can say "this is the hash", but we can't do it now because it's too new. Eventually specs can say this is the hash of the context, and implementations can include the context by hash so your system never has to go out to the network even once.
It's not necessarily a caching thing. The idea here is that if your implementation includes the context file and knows the hash, it never has to go out to the network and there's no MITM attack
scribe: That is one way of solving it
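manu's hardcoding point can be sketched as a bundled, offline-only document loader. Everything below is a hypothetical illustration; the bundled value is a tiny stand-in for the real, much larger context document:

```python
# Contexts the implementation ships with, keyed by URL. Resolution
# never touches the network, so there is no MITM window on the URL.
BUNDLED_CONTEXTS = {
    "https://www.w3.org/2018/credentials/v1": {
        "@context": {"cred": "https://www.w3.org/2018/credentials#"}
    },
}

def load_context(url: str) -> dict:
    """Serve a context from the bundle; refuse anything unknown."""
    try:
        return BUNDLED_CONTEXTS[url]
    except KeyError:
        raise ValueError(f"refusing to fetch unbundled context: {url}")
```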
Ned: there are other scenarios, local attacks on caches
stonematt: this seems like.. we're assuming other standards get it right
dmitriz: we can note that in the implementation guidance security section
manu: to oliver, you don't have
to process it. JSON-LD 1.1 is being written such that JSON devs
never have to install JSON-LD tooling, never have to fetch
from the network
... the minor caveat is that you can't just extend it in
whatever way you want and expect to be compatible with the rest
of the ecosystem that is using JSON-LD. You can put it in and
ignore it, but you're not going to interop with other people
who do that. If you just hardcode the values in the context,
there's an implementation guide for an extended context, and you
hardcode the values, that's fine. You never have to use the
tooling or processing.
You fall back to what JSON devs fall back to, that there's documentation out there
oliver: it's more a problem of the clients. uPort is a generic platform, you can put any data you want in there. They might have to take care of that. We can't anticipate what schemas they will use.
burn: it would be good for oliver to come back to the group and say what is still not clear
ken: I want to second what manu said. And if you don't actually verify what's in the context and you just put it in there, it's still valid, but your root of trust, you're using out of band context to rely on what fields should be named what. You're weakening the credential, it's for the issuer to take that on
brent: the discussion of hashing the context - there are even more ways to guarantee the immutability of the context than a hashed file
<stonematt> a-
burn: the question still stands..
we can proceed with making it required. Unless and until we
receive a direct objection, at which point we will be expecting
along with the objection the reasons for it.
... And in particular the technical reasons for it
... any objections to *that*?
manu: can we do a formal proposal and approval?
burn: do +1 or -1 in irc, and if you are not in IRC raise your hand
<brent> +1 to zakim needing to keep up
manu: we haven't said anything about the value options yet, we haven't said anything about hashlinks in the spec, we can't yet
jonnycrunch: there is text I had objection to that I filed issues to
manu: different issue
jonnycrunch: I filed issues in JSON-LD and DID about the same issues.. those are my reservations because you're committing down this path to documentation that already says it must be a URL
manu: I understand what you're saying, I feel like it's a different issue
jonnycrunch: it's interrelated, it's already int he spec what the value is
manu: you're suggesting it should be a js object, they're not interrelated
jonnycrunch: my concerns are well documented
manu: noted, it's not this issue
burn: the complaint about saying
the current spec does not solve a problem is acceptable.
Proposing a specific solution is not [for a non-member]
... what I don't want to do is kick the can down the road
... any other questions?
<stonematt> PROPOSAL: Make the expression of the @context property mandatory in the data model
<brent> +1
<burn> +1
<drummond> +1
<JoeAndrieu> +1
<manu> +1
+1
<dmitriz> +1
<stonematt> +1
<DavidC> +1
<ken> +1
<deiu> +1
<kaz> +1
<grantnoble> +1
<oliver> +0
<Ned> +0
RESOLUTION: Make the expression of the @context property mandatory in the data model
<burn> Visitor/guest opinions:
<pmcb55> +1
<jonnycrunch> +1
<rgrant> +1
yancy: I'm neutral on this, because obviously you can include it but not use it. I do think it's a bit.. members saying they're okay to include it but they're not going to use it is tantamount to saying they don't necessarily agree with it being in the spec. It's a weak endorsement
<yancy> +0
burn: any other questions or comments?
*** applause ***
** httpRange-14 jokes **
<kaz> [break till 12:30]
<scribe> scribenick: yancy
<inserted> scribenick: Yancy_
brent: how to do progressive trust
<rhiaro> scribenick: Yancy_
brent: we know we need to open an
issue in the guidance repo
... still needs to be done
... PING has been pinged again
davidc: we've sent the
questions
... what has been verified is the new payments program
<Yancy__> brent: can we close this?
<Yancy__> burn: doesn't see a problem with closing
<Yancy__> brent: has this been done?
<Yancy__> manu: security context hasn't been locked down and it needs to be
<Yancy__> ... need to put a jsonld context and vocab in the context
<Yancy__> brent: what else needs to be done to move forward?
<Yancy__> johnnyc: how do you deal with future schemes
<Yancy__> manu: they will continue to publish new ones
<Yancy__> brent: doesn't disagree that it's not an issue but no time to raise a different issue
<rhiaro> scribenick: Yancy__
brent: wasn't marked as editorial
burn: please check through all your sections
ken: will volunteer to read and
look
... volunteers Amy
davidc: has gone through normative sections
burn: worried something big is coming
davidc: there are a number of issues that have been talked about
<inserted> (Charles joins)
burn: requests name and animal from random person
brent: ken is going to join Amy
to make sure normative text not in normative sections
... a PR associated has been raised and merged
... some language has been added to the verification
section
... doesn't know how normative the section is because of
protocol
manu: I forget if that has made
it in
... agrees that it is editorial
... Amy gets another issue assigned to her
Brent: next one, we're doing
great
... there were some changes to a PR that went in.
... that's my summary
manu: suggests strongly that
this PR is closed
... someone else should review it
... modified section
burn: to review and close
brent: this issue 414 is
mine
... if anyone else wants to take a look to see if i'm on the
right track
... only PR that has Brents name on it
... kind of owns this one
... should there be a holder ID in the presentation
<burn> ?
davidc: there should not be a holder ID in the presentation
davidc: the proof may be missing due to conflict in text
brent: doesn't say proof section itself is mandatory
manu: dave longley says we should do it but hasn't figured out why
brent: there is a different section where the verifier could hold the ID
manu: you link to a key which links to the holder
davidc: wants to go back to the
text
... reads the property must be present
stone: thought that was the definition, that this was verifiable
davidc: it was
brent: if that's the problem we need a new issue
davidc: the suggestion was we create one issue about the proof
brent: david l will look at 419
and look at it
... 422 there is no refresh service in the vocabulary
manu: will take that and add it
brent: hopefully before it's
immutable
... the urls that say example.com don't
resolve to an actual resource
... was pointed out they should actually resolve to
something
... we've already talked about the url doesn't point
anywhere
davidc: there's a hanging term in
the context
... it has to be in a context and it isn't
johnny: shows that in the example
section there is a context
... must be an array of strings
... this should be a way of extending to the future
Brent: this issue is about the example context
manu: thinks this should be fixed
<Zakim> manu, you wanted to discuss example contexts not being retrievable
Johnny: asks how to resolve
manu: we have an example context
checked into repo which is loaded by the document loader
... that is used in the test-suite which can't be put in
example.org
... anyway there is a website out there called example.org
brent: the rest of the issue is that if it's copied and pasted into a browser it goes no place
manu: we talked about using
schema.org
... doesn't feel like there is much that we can do
chaals: what is the term?
... which says everything can be an array of strings
<gannan> a- chaals
brent: will we close this or leave it as is
manu: will volunteer to say why it should be closed
brent: says we should close it
johnny: has reservations
<rhiaro> scribenick: rhiaro
chaals: I suggest we change the
examples to a thing we can control and resolve. Github
pages?
... it's editorial, since these are examples, but it's a useful
thing to do
... I can volunteer to do it
burn: quick check that the group agrees
<Zakim> manu, you wanted to say this is a multifaceted issue.
<Yancy_> rhiaro I can pick back up after manu
manu: this is a multifaceted
problem. We should do that but it does not address
jonnycrunch's problem, and there are 3 other issues we haven't
addressed that are related
... Let's do this, I think it should go at our w3c url. We have
a /vc/example/v1 that will always resolve
... We change the example url to that
... all of the examples in the spec are valid and the json-ld
processor will handle them
<scribe> scribenick: Yancy_
gannan: for the issue of moving this along is it to act
manu: lets capture the
issue
... the list of urls is wrong because we need to support
objects
... we need to allocate and publish jsonld
... thinks the others are resolved because of that, and I'd say
two minus-ones because of added complexity
burn: we only have two issues here
manu: minus one to using schema.org
<rhiaro> scribenick: rhiaro
brent: the list of URLs one is a
functional change, manu will do that
... 429, we're expecting 2 PRs
... 430, multiple proofs in the proofs section
... I say this is editorial
manu: Correct, I'll take it
<inserted> scribenick: Yancy__
brent: we have three more slides
<rhiaro> scribenick: Yancy__
brent: is it mandatory or not
manu: someone needs to write a pr to specify why ids are required or not
brent: davidc will create a pr to
address this issue
... one of the zkp section PRs will be announced
... there are a number of language or base direction that
requires metadata
... wants his comment first
... a VC is issued and the audience is not global
... the information in the credential may be distributed
further by the holder
chaals: all user facing text is
global
... needs to be able to handle two languages and needs to be
able to tag that information
... can't do graphics and not do accessibility
manu: working on direction and
language
... it may be good to having something in the spec for how to
do
... coordinate in how to address this in 1.1
burn: jsonld is not the only
syntax we have
... must be handled
ken: there are some international
things that can happen
... what is crypto-signed must be assured
... that needs to be communicated
brent: what I was hoping to get
it in my initial comment
... understand that data model must support this
<Zakim> manu, you wanted to note the problem
brent: what needs to change in the spec to support
manu: nothing
... does not support text direction
... in the period of time when that gets into jsonld 1.1
chaals: make your own random property
manu: we can
... now we do it for all the w3c
chaals thinks you can make a proposal
chaals: it's ugly
manu: we need an example in the
json section for how to do language in text direction
... also this opens a can of worms that are not supported in
json
... this is the whole reason we said to use jsonld in the
beginning
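For reference, this is the shape JSON-LD uses to tag a string with a language, and (only as of JSON-LD 1.1) a base direction; the example value is made up:

```python
# A JSON-LD value object carrying language and base direction.
# @language takes a BCP 47 tag; @direction ("ltr"/"rtl") is the
# JSON-LD 1.1 addition manu refers to.
claim_name = {
    "@value": "مرحبا",
    "@language": "ar",
    "@direction": "rtl",
}
assert set(claim_name) == {"@value", "@language", "@direction"}
```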
burn: this falls in the category
of things we must do
... could become a very long conversation
... is hopeful
brent: expects two prs
... one from manu and one from chaals
... dates defined in ISO 8601
... may not be sufficient
... points out practical use of the dates may not be
practical
burn: we need to be clear on this issue for what we must do and what we encourage
chaals: has a solution, it's not
ISO 8601
... everyone uses it and it works
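As a concrete anchor for the date discussion: the data model expresses issuanceDate as an ISO 8601 / RFC 3339 style combined date-time string. A sketch of parsing one (the timestamp value is made up):

```python
from datetime import datetime, timezone

issued = "2019-03-05T14:30:00Z"  # hypothetical issuanceDate value
# %z accepts the trailing "Z" on Python 3.7+.
dt = datetime.strptime(issued, "%Y-%m-%dT%H:%M:%S%z")
assert dt.tzinfo is not None
assert dt.astimezone(timezone.utc).hour == 14
```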
oliver: yesterday we discussed
the tradeoff sections
... do we want to record the decision we did yesterday
brent: just create an issue
oliver: wants manu to be the issue creator
manu: we need it in the appendix
for non-normative text
... separate appendix for jsonld vs jsonld proofs
oliver: either appendix or implementors guidance
manu: prefers to keep it in the
spec
... it has its benefits and drawbacks
burn: can do both
... give in implementation guides
... it really doesn't belong in the spec
manu: point readers to the
guidance
... implementation guide notes will be referenced in the syntax
section
jonnycrunch: the hash-based
linking (IPLD) approach solves many problems, but I keep on
seeing how I would solve it differently
... want it to be considered as future work
... just for you guys to consider
burn: does not have scribe for
the last session
... chaals to scribe after lunch
manu: we could pull up the issue
brent: this is my boss's
issue
... what it would take to try and get a specific MIME
type
... the first decision we need to make is: is this a good
idea
burn: this is an appropriate discussion
<kaz> issue 421 on MIME type
burn: the value of having a MIME type
is that if you do an HTTP request and you get a document back,
there is some hope you'll get the MIME type back
... servers often don't set their types properly
<Zakim> manu, you wanted to provide background.
manu: I was against this because
of CR
... it is something we can do later
... if you scroll down you can see the options
... two different signatures
... to answer the question it has to be all of those
things
... if json-ld as jwt you use the second from the bottom
... it balloons very quickly because of the options we have in the
spec
... when you get the json-ld you know what it is
... you could still determine it
... lets say you want to save and open
... if we just did a simple one-option application/vc
... maybe we do it for vcs or specific types of vcs
... and at that point it explodes to a number of mime
types
... probably a large discussion of where this lands
... the wrong application will open it on the desktop
... your VC will open in a code editor instead of a
wallet
... you then know it's a vc and your wallet can mess with
it
... at this point you can content sniff
... there is another discussion around this
... if we do that there are only two mime types
dmitriz: wants to second
that
... both serializations require that
... the json-ld requires json-ld type
... could determine content of credential just off of type
<Zakim> rgrant, you wanted to note that we have two different kinds of VCs
dmitriz: let's wait until this is a pain point
rgrant: we def have two different
types
... we have implementations that do not deal with json-ld
<Zakim> burn, you wanted to say that these can be done completely independently in IETF since managed by IANA. and to recommend separating from our spec work for timing reasons
rhiaro: can have a profile attribute on content type which point to a specific profile of the content type, eg. to the vc spec
burn: doesn't want to have the
spec fail because of this
... doesn't think we'll finish this
... defer is a strong statement
pmcb55: there was a great article about just using the standard
<Zakim> rgrant, you wanted to say content-transfer-encoding may handle jwt
manu: except we have json-ld, and it doesn't work with that
rgrant: content type and separate
field
... however jws is in jwt and so this is open
<kaz> Media Type registration guide
chaals: if it's ready in the spec add it, otherwise don't
kaz: w3c spec has different
procedure
... let me check within the W3C Team to see if this is still
the case
stone: so I find this discussion
valuable
... we could just defer this
... let some future group make the final MIME type
decision
... any opposition to defer
burn: our spec does not say
... wild west until it's defined
<Zakim> burn, you wanted to demand that we not tie this to the spec
deiu: so we can bring up some text in implementation type
<kaz> mime type section of the Web Annotation Data Model REC
kaz: we have web annotation model based on jsonld and this specification talks about mime type, so we should look into this as well
manu: do we have a higher priority we could talk about
davidc: has done the task and now raised it as an issue
<Zakim> rgrant, you wanted to say implementations will default to json unless they can lift to json-ld
davidc: the implementation won't know to demand that
<rhiaro> scribenick: rhiaro
manu: the first thing you sniff
for is @context, if it has the one you expect, you
automatically know that you can continue to process it
... if you want to make absolutely sure you run it through an
LD processor
rgrant: so run everything through an ld processor, if it succeeds you have jsonld, if it fails you have json?
manu: if it fails you don't know
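manu's sniffing heuristic can be sketched as below; the helper name is made up, and only the VC v1 context URL is real. As he notes, a positive sniff is a hint, not proof; strict checking still needs a JSON-LD processor:

```python
import json

VC_CONTEXT = "https://www.w3.org/2018/credentials/v1"

def sniff(document: str):
    """Return 'jsonld-vc' if @context starts with the expected VC
    context, 'json' if it parses but does not, None if not JSON."""
    try:
        doc = json.loads(document)
    except ValueError:
        return None
    if not isinstance(doc, dict):
        return "json"
    ctx = doc.get("@context")
    if isinstance(ctx, list):
        ctx = ctx[0] if ctx else None
    return "jsonld-vc" if ctx == VC_CONTEXT else "json"
```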
jonnycrunch: VCs will be used in did resolution. Each resolver will claim that it resolves this DID for you. This is gonna be where the rubber meets the road, how do the DID resolvers handle the claim for the DID document?
rgrant: that's one applicaton right?
jonnycrunch: it ties in with the crossover between the two groups
rgrant: on the internet.. some people do things right, many people do things wrong
jonnycrunch: the beauty is the cross pollination between the did resolution people, to get on the same page
rgrant: I heard you: if json-ld
parsing fails you don't know what you have
... but certainly I know more than nothing
manu: you can't determine
anything from it, you need to fall back to a process
... it depends why it failed
burn: you do not know that it is json-ld. It may be that you can sniff internally for the type attribute and conclude that it's the JSON VC format by that value
rgrant: certainly if it says it's
the JSON then we know that
... what if some random person thought they were making their
own VC and they used something outside of the data model? in
that case do we say it's valid JSON so we could treat it like
one, or since it was supposed to be JSON-LD you've failed
manu: it's up to the
implementation, we don't define it
... the right thing to do is to say it failed. the JSON-LD
processor is erroring for a very specific reason; could be you
overrode a core term so the semantics are different
... so they have made a mistake in publishing their credential,
you should not process this credential
<Yancy_> rhiaro I can take back over now
rgrant: that's a great opportunity for a conversation with the user
<scribe> scribenick: Yancy_
burn: any objections for another
topic before this
... now a discussion of which topic
<rhiaro> scribenick: rhiaro
<inserted> issue 438
oliver: I created a new issue, 438, about how the spec doesn't allow the VC to be verifiable without a proof property. But that's not the case, because JWTs have a JWS, which doesn't require a proof property. That's an issue DavidC found, but I created a separate issue because I think it's a longer conversation
manu: I think this is easy. We
have to allow the JWT stuff
... We need to change the language so that's allowed. That's
fairly easy change
burn: is it an either/or?
manu: it's not, you can do
both
... you can have an ldproof and secure it with a jwt
burn: but the requirement is you
must have at least one
... you must have either this, or this, instead of making a
proof section just optional.
... that fits with the spirit of what verifiable means
brent: I like the idea of a proof section regardless, would it be possible if the JWT is serving as the way to prove that we .. is there a way in the proof section to say this is being signed by a jwt therefore look at that signature for verification? once it's unwrapped and I get the credential I see where it's signed, not just a thing with no proof section at all
burn: almost a new feature..
manu: DavidC raised that
today
... I think JWTs got in trouble, the JOSE stack got
in trouble, by allowing the algorithm type of none
... there were a lot of implementations that misimplemented it
and raised security issues
... we want to try and avoid that. None of the ldproof stuff
has a none thing. If you're going to use a proof tell us the
crypto you're using. If we add a proof section that says none
or don't worry about it it's external, someone will write some
code that will check it and see if the proof says jwt
brent: what would stop someone do something similar for the other proof?
DavidC: you can always say type none
JoeAndrieu: we're talking about credentials that don't have a proof part?
brent: if I'm understanding, it's either we have a proof that specifies jsonld or zkp stuff, or it's signed by a jwt, or possibly both. The jwt doesn't have a proof part
JoeAndrieu: at TPAC we had talked about a detached verifiable credential that didn't have a proof, eg. for testing reasons. If it's not verifiable it's not a VC. At TPAC we said we should specify a different type that's not a verifiable credential
burn: we have that already. It's only a verifiable credential if there is a proof section
JoeAndrieu: that's not what we agreed at TPAC; "credential" was to be something we proposed
manu: a credential is a set of one or more claims; a VC is a tamper-evident credential that can be..
JoeAndrieu: the value of the type field in the credential has to be verifiablecredential.. I don't know how that plays into the desire for JWT that doesn't have a proof field
<kaz> terminology section
dmitriz: I'd like to propose a concrete spec change. We do either/or. We check the type field. If it is a JWT then we phrase it as: it must be validated as a JWT; that covers all the use cases of signature/no signature. If the type is not JWT then make proof mandatory
brent: each node can have a type
dmitriz: outer type
... if the outer type is jwt, it must be validated as jwt. If
it's not jwt, it means it's json-ld, then we make proof
mandatory
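dmitriz's either/or rule reduces to a small dispatch; a sketch of the proposal as discussed (not adopted spec text, and the return strings are illustrative):

```python
def required_check(outer_type: str, has_proof: bool) -> str:
    """If the outer type is JWT, validate it as a JWT (the JWS covers
    integrity); otherwise the proof property is mandatory."""
    if outer_type == "JWT":
        return "validate as JWT"
    if not has_proof:
        raise ValueError("non-JWT credential is missing the mandatory proof")
    return "verify embedded proof"
```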
<Zakim> manu, you wanted to expected developer behavior
JoeAndrieu: a detached credential means the jwt is handling it
manu: in our implementations, a couple, large companies may hand this packet off between various systems. When systems receive things they do or don't do things based on what's in the credential. It could be possible to.. I'm concerned about sloppy programming. It's not the developers that do the right thing; it's the ones who do the wrong thing even though the spec says to do something else, and what results from that. My concern is that if we say
proof: jwt, whatever section of the pipeline gets that is going to move it on: "oh, something else checked this, I'm going to keep moving".. I'm concerned about an attack where somebody injects proof: jwt and it shortcuts the process, vs it always having to have proof on it, or not at all. If it has proof and it's an rsasignature etc you know you have to check it and it's never removed. In the jwt case it's removed at the beginning and then sent
through the system with nothing inside
scribe: I don't think the proof: jwt thing buys us anything
brent: I was going to propose
what manu just spoke so elegantly against
... now I'm having an internal argument.
oliver: to dmitriz's point, was your suggestion to add an additional type? Or just have jwt as the only type?
dmitriz: jwt as the only type. in the jwt serialisation, it says the type must be the string jwt
DavidC: so inside the LD it would still be the verifiablecredential
dmitriz: correct
ken: I have a question for manu - instead of having a proof with a type that says jwt you would take that out, and now you're looking at the pipeline: if it has nothing there, how is that any different, what's the security model difference?
manu: thinking about this: in the processing pipeline you get to a stage where you have no proof. It feels like.. if you have a system with all these different types of inputs, and jwts over here and VCs over here, at some stage in your pipeline.. the issue is when you get both jwts and jsonld into your system. The jwts strip the outermost thing out and it doesn't have a proof. The jsonld stuff always keeps the proof in the system. The developer is just
going to let the things that don't have a proof go through
brent: so what are you proposing?
manu: I don't know.
brent: I misunderstood where dmitriz was talking about the type. If the credential type says verifiablecredential, and that's all I have, there has to be a way for it to be verifiable. Which would go in the proof section. If we're saying that for a jwt, once it's stripped off, the resulting thing doesn't have a proof, we're saying that with just the jwt it's not a verifiable credential. Once the outer envelope has been stripped there's no way of
verifying it
oliver: in an enterprise where the credential gets verified and then passed to the next thing in the pipeline, they won't just strip away everything that is jwt specific; they will pass the whole jwt and the next in the pipeline can decide whether they want to verify it. I'm not saying everyone is doing this
<Zakim> manu, you wanted to argue against himself and to say keep it the same
oliver: Once you verify the proof once at the gate you could then .. that might happen, there are multiple possible architectures
manu: the concrete proposal is to say one or the other, and I don't think that putting proof: jwt buys you anything
brent: except for consistency
manu: I don't think we need to be consistent in that way, it's not buying us anything and we have to define a whole bunch of stuff and it opens us up to misimplementations
<burn> DavidC
DavidC: because the proof is defined as a type and the type can be anything, you're never going to stop anyone defining type as external or x509 or jwt or anything. You can't stop that. You can't say we won't have it for jwt, because you won't stop anyone from doing it
brent: I agree with DavidC and we are kind of straying into protocol, how it's going to be used, but we have to have some understanding of potential protocol. The more I think about the jwt as an envelope, the more it's like something we might want to use in evernym and sovrin as an envelope for a zkp-signed VC, so we'd strip off the jwt and use the zkp to do the verification. What happens after the verification step I'm not
concerned about; if they want it to be verified further they need to provide the means. They probably don't need it to remain verifiable anyway
scribe: The credential is received. if it's wrapped in a jwt, is it recognised as a credential that needs to be verified in that way? At what point is the verification process done within the jwt case?
<Zakim> manu, you wanted to say but we're not defining it
manu: DavidC is right, we can't stop anyone from doing that, but we're not doing that, that's the point
<Zakim> dmitriz, you wanted to ask about the types VerifiableCredential vs Credential
manu: people can do that but the WG isn't saying this is the way to do it
dmitriz: what Joe was saying.. is this correct? In the terminology section we differentiate credential vs verifiable credential? but in the types we do not?
DavidC: correct
brent: but if we do that then we're saying a VC is verifiable without a proof section
drummond: because it's verifiable in a different way
brent: so if we're all okay with that
burn: we're saying it's verifiable without a proof property, not without a proof
manu: what Joe said earlier about
detached credential thing at tpac.. I thought we said we may
have credential and verifiablecredential, and credential may
exist in the context, but I thought we agreed to not put any
examples anywhere to not plant that idea
... I thought in the context we may have a credential type? but
we don't give any examples for it
JoeAndrieu: that was definitely
not my recollection. I was arguing we shouldn't have to have a
proof
... but the consensus seemed to go, and I support it, that
having another type that whether it's detached or
externallyverified, will allow a bunch of use cases. Kim said
yeah in testing we often pass data around without the proof
<Zakim> burn, you wanted to argue against detached credential type
JoeAndrieu: It sounded like what brent just proposed is a detached credential and we should specify that in a type
brent: I'm saying we need to have
an understanding. If we're saying verifiable credential doesn't
need proof because it could be verified externally.. or we need
to have another type that says verifiablecredential definitely
has proof, and we need another type
... JWT is in the data model
burn: arguing against the
detachedcredential type. I don't disagree with the use case.
It's that it's not visibly clear what the difference is between
a credential that is missing a proof and one that is a
detachedcredential and has no proof with it
... from a practical programming standpoint
... either it has a proof right here with it so the entire
document can be verified. Or you have something which has
credential information and you need something else
... or you don't. Either way this
thing is not internally verifiable
... I don't think it makes any difference to call it a
detachedcredential
JoeAndrieu: it's about redundancy
and understanding that it's supposed to be there or it's
not
... subject-bearer agreement
brent: my fear is that if someone
receives a VC without a proof they're going to think it's
already verified and they're good and nothing else is
required
... if the proof was jwt, you have some assurance it was
already verified
... you know it wasn't just stripped out
... that's my fear. If we allow it to not be present it allows
for potentially incorrect assumptions if someone just pulls it
out
<Zakim> manu, you wanted to note that in testing, we just turn sig checking off. and to note that empty proof could be an attack vector
DavidC: I would say the toss up is between having the type VC with a proof, where the type can refer to different things, one could be jwt. Or we have the credential type with no proof, and verifiablecredential type which does have a proof
manu: the whole
detachedcredential.. in testing we just turn signature checking
off. If we want to skip that whole thing, the dev comments a
line out and it's done
... I don't think testing is an argument for that
feature
... If having a proof is a MUST and you should put a jwt in
there, I would use that as an attack vector. That gives me
something I can send to a system, and if there's no error back I
can see they misimplemented it. It's immediate feedback that I
can attack that system
brent: that's also the attack vector of being able to strip proof out
manu: if the system is not even checking proof they're insecure
dmitriz: as far as the VC without verification, I would like to hear more use cases for it aside from testing, which is not valid. If there are valid use cases I'd like to propose a type for it in the proof section, intentionallynotverified or something, a tombstone object, a negativeproof object
burn: anything else we need to announce..?
<kaz> [lunch until 15:30]
<kaz> scribenick: kaz
dmitri: yesterday we mentioned some
concern
... 2 serializations
... JSON/JWT
... the latest consensus is the test suite needs no change
burn: two syntax
oliver: it's possible for now
... reconfirm it
... should work
patt: test still fails?
dmitri: the idea is hook up your
language
... will take a look
patt: can talk with you offline
dmitri: questions?
<Zakim> manu, you wanted to explain
manu: (explains the mechanism)
<gannan> manu what is "it"? the test suite? or vc lib?
dmitri: currently hard coded
... can take it offline
ken: looking at the test
suite
... one test credential should leak the data
... how to test it?
... not link but leak
dmitri: the test currently skips
it
... in the zkp test suite
... 38
<gannan> link https://github.com/w3c/vc-test-suite/blob/gh-pages/test/vc-data-model-1.0/60-zkp.js#L38
dmitri: probably should open an issue
manu: (explains the text in the
spec at the zkp section)
... may reword it
... the language in the spec
oliver: looking at the test
suite
... conformance statement
manu: need to update
... some kind of MUST/SHOULD
brent: need an issue?
manu: already one
... will raise an issue
oliver: already MUST in the test
suite
... is it mandatory?
burn: anything else?
oliver: created an issue on vc-data-model
@@
oliver: should update the test suite
accordingly
... proof property mandatory or not
manu: the spec is the authority
burn: anything else?
(none)
burn: action items
stonematt: review last 2 days
... feedback
... outstanding issues
... maybe for the next call
<dmitriz> btw, https://github.com/w3c/vc-imp-guide is transfered & ready for issues.
stonematt: getting CR is the clear
goal
... people to remember what to do
<Zakim> manu, you wanted to comment on getting to CR publication -- when do we vote, when do editors have to be done, when does test suite have to be done?
stonematt: what else?
manu: there is a question when we
vote
... anything outstanding
... my assumption is editors work aggressively to close
items
... the question is when to finish the test suite
... makes us nervous
... like the zkp things
... we need to figure out the language, e.g., MUST/SHOULD
burn: we do have one more review
issue
... small and editorial
... TAG is scheduling our review
... March 12
... scheduled for this call
... someone to be on the call
manu: volunteer for the call
stonematt: there are conformance related issue
david: quickly mention what I found
@@@ issue 440
david: (explains the issue)
brent: I volunteered
burn: how do you test?
... you must use the property
david: you may have issue on state
burn: you can't say "could
semantically mean", etc.
... we can provide guidance
david: the evidence didn't have an
id
... would say the id is optional
... make schema optional?
manu: optional
david: can create a big PR for this
issue
... if you go to the repo
... conformance pr
stonematt: anything else?
manu: evidence is pretty shaky
<gannan> Conformance branch is here https://github.com/w3c/vc-data-model/tree/Conformance
manu: we don't have
implementation
... still need to read through
david: change to nontransferable
burn: it's time to check everything
oliver: is this a cr blocker?
manu: yes
brent: can create a pr?
david: meant to do so
stonematt: already on a branch
... and you need to create a PR
brent: will make a PR
... just did
<gannan> here's the conformance PR https://github.com/w3c/vc-data-model/pull/442
burn: (shows CR blockers)
... CR blocker issues
stonematt: all the PRs are also CR blockers
@@ CR blocker issues
stonematt: how do we do that?
burn: can do this here
... nobody believes there is anything outstanding
<burn> Rough straw poll showed no one who would object to publishing as CR once all items are completed as agreed at this meeting.
youtube video!
real world crypto 2019 - day 2 - session 1 - morning
by Brent
brent: over 600 people there
https://youtu.be/pqev9r3rUJs?t=12278 without any explanation :)
<deiu> Here's the URL https://www.youtube.com/watch?v=pqev9r3rUJs
<rhiaro> Public service announcement: I happened to have lunch at BarCeloneta so while I was there I took the liberty of making us a reservation for tonight to be on the safe side (they seemed happy to not be surprised by a crowd)
[break till 17:00]
<chaals> scribe: chaals
stonematt: Join it.
<deiu> /me manu, by "cover" do you mean finish?
<Zakim> manu, you wanted to ask for volunteers to cover the event food.
stonematt: This WG is winding down at least for now, DID work will start up soon, and there is more stuff we want to do in this space so the Community Group will be the way to participate.
Manu: Also, we're looking for contributions to cover the costs here - thanks to Digital Bazaar, @@ for contributing.
Ken: I have gathered stuff from
the community - there might be more I missed, it isn't intended
as a slight.
... how do we prioritise? Broad adoption and interoperability
are attractive characteristics in any feature.
... What would improve security? That's generally a nice thing
to be doing.
... What would make verifiable Credentials easier to use?
Manu: Does security come last in terms of how we prioritise? I think we have done a decent job and in terms of explaining what we are doing, maybe we should focus on uptake and other stuff more.
stonematt: If we have good security there will be no work to do there.
Brent: Think security should be first priority as a principle
Burn: Let's please use the queue. Chaals isn't that good at figuring out what people say.
<inserted> scribenick: manu
chaals: Security, like i18n, is not something you put in the priority list, it's an interrupt. If there is a problem, you fix it as fast as you can.
Ken: There is distinction about fixing security problems, or taking it to the next level.
<inserted> scribenick: chaals
Ken: There is a distinction between fixing security, and looking at things that might actually be enhancements we can afford to spend time on because they aren't a problem.
Manu: User experience often gets neglected, and I think there are things around moving credentials from A to B that we haven't discussed. And of course protocol.
[slide: potential dispositions]
<inserted> scribenick: manu
chaals: These slides are available, yes?
dan: Yes, they are.
<inserted> scribenick: chaals
Ken: Changes to the current
version will be unpopular. But there is also stuff we might
want to put in a next version, there are things we might ask
the CG to noodle on until we know more about them, and there
may be things we want to send "somewhere else"
... when we are framing things for a spec, we should be looking
for stuff where there is already a reasonable base of
consensus, because that will get through the standards process
faster.
... A data model without a protocol is kinda missing
something.
... in Sovrin we had a connect-a-thon.
... we have people working with credentials, 11 different
agents with 6 codebases, 2 *completely* independent. 6 of them
got it all right. And we have seen people working on protocol
so they can achieve this. Maybe we should be incubating this for
now.
burn: I like people doing implementation work before we standardise. but it is good to encourage people to think we might standardise a protocol, so they should bear in mind there will be standards coming. It is the right thing to do if people know the direction.
yancy: Want to understand what the protocol work entails, and how that connects to DID
<Zakim> manu, you wanted to note CHAPI
ken: There is no protocol work planned and DID is trying to spin up a group. There are multiple communities exchanging credentials but no planned standards work so far.
Manu: there are other people looking at protocols and the goal is to standardise something. So we need to make sure we are talking to each other. Credential Handler API would be a good thing to look at and see if it's a good base since we have been building on it, and e.g. IBM and Digital Bazaar have implementation experience…
<Zakim> chaals, you wanted to noodle on protocols
Ned: protocol can mean lots of
things. In security it has another meaning ("security
protocol") so please be specific about what you are meaning -
RESTfulness, subscription messaging models, ...
... PubSub is useful in various places as a model, think about
IoT, ...
<Zakim> manu, you wanted to note the plan
Ned: Note that getting timely updates is important to security - PubSub can help that
manu: This is like the refresh
service. We want an automatic way to do this, if we can agree
on one it's less than trying to connect disparate
systems.
... Goal for credential handler API is to get more interop and
deployment, share it as open source project, we want to be able
to deploy in production.
DavidC: Alternative viewpoint, that we took, was to use FIDO - now WebAuthN - we would look at going in that direction, supplemented with IETF token binding. That is another possible pathway.
Kaz: I work on WoT and they have an API draft and a group note on protocol bindings, and that is another useful input.
<kaz> wot scripting api
<kaz> wot protocol binding template
Dimitri: Token Binding is only in Edge, not in any ongoing browser work.
stonematt: Not to completely derail, where do we introduce other things like crypto technologies for selective disclosure.
Ken: It's on the list for talking about here.
DavidC: We are looking at verifying through OIDC. On your own system you have an issuer. OIDC is widely supported, so I don't need Google to identify me I can say "it's me". Again, alternatives
Dimitri: I highly recommend my paper from rebooting (Oliver also contributed to it)
<DavidC> -> Dimitri. send us the URL of your paper please
Ken: I have one example of
cross-platform interoperability in the wild - we started with
LD signatures a year ago, now we have added some stuff for JWT
and Zero Knowledge Proofs.
... The ZKP are in turn looking at newer approaches to ZKP…
there is presumably more on the horizon, some of which we
should be looking at.
Ned: Clarify interop in terms of signatures please?
Ken: How do you take credentials
with one type of signature and work with them in a system based
on different kinds of signature? How do you make libraries that
deal with multiple signature types?
... so consume-only, produce multiple signatures, ...
<Zakim> manu, you wanted to discuss plan on LD-Proofs, LD-Signatures, RDF Dataset Normalization... also, hashlinks (eg cross-platform specs)
Ken: ZKP project in hyperledger is trying to bring security/crypto work together to help people share benefits (and bugs)
Manu: There is a plan to get LD proofs and ZK to a W3C spec. It's in early stage of exploration.
<dmitriz> DavidC: Here are the current snapshots of our drafts of the OIDC papers (by myself, Oliver, and others) - https://github.com/WebOfTrustInfo/rwot8-barcelona/blob/master/draft-documents/did-auth-oidc.md and https://github.com/WebOfTrustInfo/rwot8-barcelona/blob/master/draft-documents/did-auth-vc-exchange.md. There will be ongoing work in the next few weeks to finish them.
Manu: For this sort of thing you need decent formal analysis. This is related to other Linked Data work, so there might be a group that handles a few pieces. Or there could be a signature packaging group. With any luck there might be a group working formally on this in a year.
<Zakim> achughes, you wanted to say goodbye to Alex from Caelum Labs
Manu: The other bit of work is to look at cross-ledger compatibility for proof.
Alex: thank you - we want to make sure that people in Europe can continue to participate
Manu: Is there a plan for the CL and bullet proofs stuff?
Ken: CL has had academic review.
Brent: Want to get from RSA to elliptic curve but that means we can't do predicate proofs. Maybe we don't need them if we go around another circle
Ken: In more security, we discussed immutability of signed meaning. We talked about immutable storage options, or using hashlinks as a mechanism.
Manu: Think the next thing for hashlink is to see if it is broken, can be implemented, solves actual problems…
Ken: You can describe alternate locations for a resource with hashlinks, as well as immutability guarantee
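The hashlink idea Ken and Manu discuss boils down to binding a reference to the exact bytes it pointed at, plus a list of alternate places to fetch them. The sketch below shows that general integrity pattern only; it is not the actual `hl:` URL encoding from the hashlink spec, and the field names are made up.

```python
# Illustrative integrity check in the spirit of hashlinks: a link carries
# a digest of the referenced content plus alternate locations, so a later
# change to the resource (accidental or a deliberate attack) is detectable.
import hashlib

def make_link(content: bytes, locations: list) -> dict:
    return {
        "digest": hashlib.sha256(content).hexdigest(),
        "locations": locations,  # alternate places to fetch the resource
    }

def check_link(link: dict, fetched: bytes) -> bool:
    return hashlib.sha256(fetched).hexdigest() == link["digest"]

# Ken's example: a schema whose meaning could silently change.
schema_v1 = b'{"birthDate": "date of birth"}'
link = make_link(schema_v1, ["https://example.org/schema"])

print(check_link(link, schema_v1))                         # True
print(check_link(link, b'{"birthDate": "saints day"}'))    # False: meaning changed
```

A credential that embeds such a link to its schema gets the "immutable meaning" property without needing immutable storage.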
Ned: How does meaning mutate?
Ken: Someone updates a schema, which can change what a particular field is used for. e.g. drivers license changes birthday field to mean saint's day, so credentials mean something different.
Ned: Like tying versioning of schema to your credential.
<Zakim> kaz, you wanted to check who just joined webex
Ken: This can be by accident, or
done deliberately as an attack.
... this work is not new. Do we
do it here? Should we try to standardise it in the next
version, does it need more incubation, …
<Zakim> manu, you wanted to comment on multi-sig. and to also mention multi-proof
Ken: Another type of security is allowing multi-signature (e.g. require 3 of 5 possible signatories)
Manu: Multi-sig and multi-proof
has a lot of interest. LD enables sets of signatures, chain
signatures which adds the ordering of who signed.
... (like Notaries do with ink)
... Multiproofs are people proving that something existed -
e.g. is anchored on a blockchain somewhere
... Cuckoo cycle is being used to build ASIC-resistant proof.
You can mix and match these things. Some of this work might go
to wherever the linked data signature stuff happens.
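The two multi-proof shapes Manu contrasts can be shown as data. This is a hypothetical illustration, not LD-proof syntax: field names like `creator` and `previousProof` are stand-ins, and the real cryptography is omitted.

```python
# A proof *set*: independent signatures, any subset checkable, order
# irrelevant (e.g. 3-of-5 signatories).
vc_with_proof_set = {
    "type": ["VerifiableCredential"],
    "proof": [
        {"type": "Ed25519Signature2018", "creator": "did:ex:alice"},
        {"type": "Ed25519Signature2018", "creator": "did:ex:bob"},
    ],
}

# A proof *chain*: each signature also covers the previous one, fixing
# the order of who signed -- like notaries countersigning in ink.
vc_with_proof_chain = {
    "type": ["VerifiableCredential"],
    "proof": [
        {"type": "Ed25519Signature2018", "creator": "did:ex:alice"},
        {"type": "Ed25519Signature2018", "creator": "did:ex:bob",
         "previousProof": "alice"},  # hypothetical ordering link
    ],
}

def signers(vc: dict) -> list:
    return [p["creator"] for p in vc["proof"]]

print(signers(vc_with_proof_set))  # ['did:ex:alice', 'did:ex:bob']
```

The structural difference is just whether each proof references its predecessor; the verification semantics differ accordingly.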
Ken: In terms of making things
more useful...
... how to use multibase, ...
Manu: It's a way to declare
binary encodings. So you can describe what the rest of the data
uses. Multihash tells you what the following bytes are going to
be (SHA-1, Keccak-256, …) and how many of them there are.
... these are used in hashlink spec, and come from IPFS
community. They're handy tools.
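Manu's description of the layering can be made concrete in a few lines. Per the multiformats drafts, a multihash prefixes a digest with a code for the hash function (0x12 for sha2-256) and its byte length, and multibase prefixes an encoded string with a character naming the base ('f' for lowercase base16). A minimal sketch:

```python
# Sketch of the multihash/multibase layering: self-describing hashes and
# self-describing base encodings, as used in the hashlink spec and IPFS.
import hashlib

def multihash_sha256(data: bytes) -> bytes:
    digest = hashlib.sha256(data).digest()
    # 0x12 = sha2-256 function code, then digest length, then the digest.
    return bytes([0x12, len(digest)]) + digest

def multibase_base16(raw: bytes) -> str:
    # 'f' = multibase prefix for lowercase base16 (hex).
    return "f" + raw.hex()

mh = multihash_sha256(b"hello")
print(mh[0], mh[1])              # 18 32 -> sha2-256, 32-byte digest
print(multibase_base16(mh)[:5])  # 'f1220' -- both prefixes are readable
```

A consumer can decode the first bytes and know which hash function and encoding were used without any out-of-band agreement.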
Ken: These seem like low-hanging
fruit.
... There are lots of people around who want a compact form of
the data model. ZKP for example gives a lot of data, so e.g.
CBOR can help reduce the storage and transmission
requirements.
<Zakim> manu, you wanted to note another benefit of compact representation
Manu: Also compactness helps to hide data application developers shouldn't be messing with. E.g. key information you don't want to expose.
Ned: Does compact representation have similar goals to multibase, or are they orthogonal?
Manu/Ned: There is some overlap.
Dimitri: They are complementary. Compact representation describes encoding, multi* helps integrate that for use.
Ken: Are there other things we should put on a list for consideration?
Manu: SGML (or some specialised compact version of it)
Ned: CDDL, as an analogue to CBOR/JSON*
Ken: Some of the spec could use
deeper definition - e.g. terms of use, evidence, status and
revocation info, etc.
... We have MIME type work somewhere on our list of things we
could do…
Ned: I can see status being pretty deep information / lifecycle.
Manu: Sure.
stonematt: There is the language
of a status, and whether it is active. This is pretty
contextual to the issuer.
... e.g. how do you represent a post-expiry grace period where
you still apply the value of an expired credential.
Dmitri: Authorisation framework based on credentials? In the past we said don't do this! Should we suggest how you actually can?
<Zakim> manu, you wanted to note ocaps!
[nodding...]
Manu: Object Capabilities - using these as a transport mechanism for an auth framework.
<burn> DavidC said yes to auth framework :)
Manu: how long can this website control or access your driver's license credential …
Joe: A request framework for
requesting credentials. We know that is part of what we
need.
... how do we do that to allow privacy as well as choice of
which issuer I refer to in the presentation I give to a
verifier
stonematt: My brain just
broke.
... At rebooting workshop, claims manifest came up. Is it
related?
Joe: You are asking for the proof of a claim. That's how we get consistency with ZKP terminology
Oliver: It's claims manifest,
being explored in DIF, some implementors are looking at
it.
... the issuer provides information that can be used in an SSI
wallet to present to the user how to find their credential
information to answer a request.
DavidC: How do we manage this one hour a week.
Joe: We prioritise.
... (and do work. And apologise, when we didn't)
DavidC: can we increase the meeting rate?
Joe: Sure… but that relies on people having the bandwidth.
<Zakim> manu, you wanted to note Credential Handler API has an example of this.
manu: Credential Handler API has
a query by example thing. Here is what I want, here are the
issuers I trust, …
... this is agnostic to query/response formats.
... partly because we don't expect version 1 to be the exact
thing we really want, partly because we might want to be
running different formats over the same API
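Manu's "query by example" description can be sketched as data plus a matcher. The request shape and the `match` function below are hypothetical illustrations of the idea ("here is what I want, here are the issuers I trust"), not the exact Credential Handler API format.

```python
# Hypothetical query-by-example flow: a verifier states the credential
# type it wants and which issuers it trusts; the wallet filters its
# stored credentials against that example.

request = {
    "type": "QueryByExample",
    "credentialQuery": {
        "example": {"type": "DriversLicenseCredential"},
        "trustedIssuer": ["did:ex:dmv"],
    },
}

wallet = [
    {"type": "DriversLicenseCredential", "issuer": "did:ex:dmv"},
    {"type": "DriversLicenseCredential", "issuer": "did:ex:untrusted"},
]

def match(req: dict, credentials: list) -> list:
    q = req["credentialQuery"]
    return [
        c for c in credentials
        if c["type"] == q["example"]["type"]
        and c["issuer"] in q["trustedIssuer"]
    ]

print(len(match(request, wallet)))  # 1 -- only the trusted-issuer credential
```

Because the API layer is agnostic to the query format, a later version could run a richer query language over the same request/response channel, which is the design point Manu makes.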
Brent: There is an agent wallet project that might help here, hopefully going into HyperLedger.
<Zakim> burn, you wanted to talk about CCG scheduling
Burn: standards groups are volunteer work, There is more work than resources available, so the work that gets done is the work that people step up and do. You can't order it to happen.
<Zakim> achughes, you wanted to say this might tie into work at ISO SC27 ID proofing
Burn: And there are even fewer constraints than in a WG. So if you care, it shows because you did the work.
achughes: ISO SC27 are working on identity proofing standards - seems a bit like the request response is going to be a necessary part of this in future.
Oliver: Decentralised Identity
Foundation have started their interop project. Currently
looking at Educational use case.
... there is no membership required, so you can readily
join.
Ken: So there is a lot going on. The results will be seen from the work people do.
Burn: We need to be out of here
soon. I do want to say thank you to everyone here for a
productive meeting, staying focused. Especially Brent and Ken
for really good preparation that made a lot of that
possible.
... Thank you for being prepared to compromise.
Manu: Thank you to the chairs for doing the thankless work of making it go on.
Burn: Thank you to the sponsors
who helped make this possible. Caelum Labs, Digital Bazaar
(because food is more important than many people realise),
University of Kent and Brightlink for contributing to
that.
... Thank you to Payment Innovation Hub.
Silvia: We hope you felt comfortable. We were very happy to have you here.
[meeting adjourned]