W3C

- DRAFT -

Web Annotation Working Group Teleconference

15 Jul 2016

Agenda: https://lists.w3.org/Archives/Public/public-annotation/2016Jul/0058.html

See also: IRC log

Attendees

Present
Rob_Sanderson, ShaneM, Jacob_Jett, Randall_Leeds, Benjamin_Young, shepazu
Regrets
Tim_Cole, Ivan_Herman, Ben_De_Meester
Chair
Rob_Sanderson
Scribe
tilgovi

Contents



<azaroth> trackbot, start meeting

<azaroth> Scribenick: tilgovi


<azaroth> PROPOSED RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/07/08-annotation-minutes.html

RESOLUTION: Minutes of the previous call are approved: https://www.w3.org/2016/07/08-annotation-minutes.html

Announcements

azaroth: The TAG (Technical Architecture Group) are happy with what we had changed from the previous version of the protocol

We were anticipating it might run into August, so for it only to be one week delayed is very good.

Issue Review

<azaroth> github: https://github.com/w3c/web-annotation/issues/326

Questions on that one?

<azaroth> github: https://github.com/w3c/web-annotation/issues/327

Benjamin, do you want to describe the issue?

bigbluehat: Yeah, I also commented on it. I'm all good with your response.

It makes sense; the ETag value obviously reaches far beyond MVCC-style updating, to cache control and other things.

I was looking for clarity on why it didn't match, but I should have just taken it to the mailing list. Happy to close it.

It may be good to call out the use for cache control.

If you just expose a directory of annotations you won't get an ETag with most servers.

tilgovi: I think it's fine. I'm also surprised it's a MUST from LDP, but I think it's fine as is.

<bigbluehat> basically, it's out of our scope (as we depend on LDP), but the clarity would be a Good Thing
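The MVCC-style updating mentioned above is conventionally done with HTTP conditional requests (ETag plus If-Match, per RFC 7232). A minimal sketch of that flow, with an assumed dict-based store and an illustrative hashing choice (neither comes from the spec or this discussion):

```python
# Hedged sketch: how an annotation server might use ETags for MVCC-style
# updates via If-Match. The storage model (a dict) and md5-based ETag are
# illustrative assumptions, not part of the Web Annotation Protocol text.
import hashlib

store = {}  # IRI -> (etag, body)

def etag_for(body):
    """Compute a strong ETag for a representation (hash choice is arbitrary)."""
    return '"%s"' % hashlib.md5(body).hexdigest()

def put(iri, body, if_match):
    """Return an HTTP-like status code for a conditional update."""
    current = store.get(iri)
    if current is not None:
        if if_match is None:
            return 428  # Precondition Required: client must send If-Match
        if if_match != current[0]:
            return 412  # Precondition Failed: someone else updated first
    store[iri] = (etag_for(body), body)
    return 200

status = put("/annos/1", b'{"type": "Annotation"}', None)  # initial create
etag = store["/annos/1"][0]
```

A second writer holding a stale ETag would get 412 instead of silently clobbering the first writer's update.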

<azaroth> github: https://github.com/w3c/web-annotation/issues/329

azaroth: next one is from sarven

notes that the protocol spec says if the annotation contains a canonical link then it might be maintained without change

The "it" is intended to refer to the canonical link and not the annotation

The issue would be to clarify what the "it" refers to

tilgovi: +1

<Jacob> +1

<TimCole> +1

<bigbluehat> +1

<azaroth> github: https://github.com/w3c/web-annotation/issues/328

azaroth: Benjamin you also had a new issue yesterday, 328, Container Representations section should be more demanding

Would you want to explain that one?

<ShaneM> you've gone very quiet

<ShaneM> bigbluehat: ^^

<bigbluehat> https://rawgit.com/w3c/web-annotation/protocol-qs/protocol/wd/index.html#h-container-representations

bigbluehat: basically, container representations make no actual requirements on the server in response to the prefer headers

azaroth: we could be clearer, but I wouldn't want to repeat all of the information from the model document that describes collections and annotations

bigbluehat: If you want I can send you a pull request for what I have in mind

Mostly I'm looking for clarification on section 4.4.2.

For a long time I was just handling PreferContainedIRI, which is actually going to get you all the IRIs, not as a URL

If you're writing a server you really have to pay attention to the Client Preferences more than you might think you have to
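The client preferences in question arrive in a Prefer header of the form `return=representation;include="<uri> <uri>"`, using LDP preference URIs such as PreferContainedIRIs. A simplified sketch of the parsing a server would need (the header parsing here is deliberately naive, and the exact set of preference URIs a server honors is up to the protocol spec, not this snippet):

```python
# Hedged sketch: extracting the include= preference URIs from a Prefer
# header, so a server can decide which container representation to return.
# Real-world Prefer parsing is more involved (quoting, parameters, etc.).
LDP = "http://www.w3.org/ns/ldp#"

def included_preferences(prefer_header):
    """Return the set of URIs in the include= parameter, if any."""
    for part in prefer_header.split(";"):
        part = part.strip()
        if part.startswith("include="):
            # Value is a space-separated list of URIs inside double quotes
            return set(part[len("include="):].strip('"').split())
    return set()

prefs = included_preferences(
    'return=representation;include="%sPreferContainedIRIs"' % LDP)
embed_iris = LDP + "PreferContainedIRIs" in prefs
```

A server would branch on membership tests like `embed_iris` when building the container body.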

azaroth: the other two new issues ... registering the profile with IETF so we have an official media type which I have not done ... the other one to make a table of all the predicates so we can track usage which I have also not done

but both of them need to be done by the end of CR, so we have a little bit of time

<bigbluehat> what's our CR "deadline" date?

<bigbluehat> aka "end of CR"?

other open issues, most of which can be closed

<ShaneM> My goal is by TPAC

azaroth: we hope for mid-September, so then we have a brief extension for issues and feedback to get from end of CR to a full Technical Recommendation

that depends a lot on implementations and being able to demonstrate that we have implemented all of the features

We have aspirational end of CR dates.

It is somewhat in our control but there are other factors as well.

Testing

azaroth: two things, update on the status of the test suite and a discussion of feature testing vs validation and what our test suite will do

ShaneM: Progress recently... we've gotten the annotation model framework, that allows us to generate and execute tests, integrated into the environment now. That's good news.

That only has to do with model testing, not protocol.

That's progress on the framework for the model.

bigbluehat: Other than the issues we've been discussing, I've made it to the bottom of the spec for the MUST issues.

I'm dealing with a bug related to the tests I am writing. I was stuck on the prefer headers.

The next thing is abstracting it so that each test runs with its own temporary directory and a client can get its own space.

That gets messy. Works for one off tests and local testing, but not for hosting it.

Two parts... one is making it contained per test run and the other is reporting how it all happened

<Zakim> ShaneM, you wanted to talk about reporting

And I would love help with doing any of that sort of reporting.

ShaneM: Once we get the basic plumbing in place attaching it to the built in reporting structure should not be a problem.

I didn't actually realize you were writing the tests.

That's fantastic.

If you need help with the temporary files, etc, I can advise on that as well.

<bigbluehat> awesomeness ^_^

azaroth: My understanding is we have been waiting a little bit until this ended up in WPT, but now any thoughts on when we will be getting back to writing more tests?

<azaroth> TimCole, Jacob, Janina_ ?

<TimCole> Once we resolve validation versus showing feature implementation, we will resume putting in schemas

<TimCole> still having some difficulty with making runner work locally

<TimCole> stalls at creation of MANIFEST.json

azaroth: Last weekend I started playing around with the servers and test suite.

with the intention of integrating them into my server implementation

<TimCole> this is done by a py script that shells out

so that I could validate input to see if it's an annotation or random junk

It's not really, at the moment, intended for automated validation.

ShaneM: The W3C test suites are designed to test implementations and demonstrate that all the features in the spec are implemented in two implementations.

If they have other add-on benefits that's great, but not usually a design goal.

Examples of add-on benefits: tests can be run doing continuous integration.

I sent an email yesterday about "Feature Testing Philosophy". We want these tests to be as discrete as possible without being overly onerous on the tester.

If a feature is "body" we need a test for body. If a feature is "bodyValue" we need a test for bodyValue.

In some cases it makes sense to combine tests, say for specific properties that are all required, to make this easier on the tester.

azaroth: Is continuous integration sufficiently valuable that we should spend extra time ensuring that the schemas and so on are useful for that?

I don't even have an opinion on that. Mostly I'm not sure how difficult that is to do at the same time.

ShaneM: The add-on benefit is substantial, but we need to get out of CR.

My plan all along is that the tests would be structured identically, every test looks exactly the same, so that someone who wanted to automate the tests in WPT could do so.

Once we have tests in place, we could do that.

We could show how that would be done, and then put it in the wild so anyone can do it with their implementation.

I don't think anything prevents us from doing that, but the manual tests are the thing we need first.

ShaneM: Coming back to the complexity discussion. Rob, thank you for raising the issue. I think having an incredibly complex JSON schema that's built out of little schemas that allows you to do validation of any annotation is a great exercise.

There should be a tool out in the world that you can use to check that your annotation is valid.

But for testing, I hope that these things are relatively simple. We have finely grained components and then we bundle those together in ways that make sense to exercise a given requirement.

ShaneM: I put together a hasBody.json file, a schema that checks to see if there's a body property and, if so, that it matches the expectations for a body property.

<ShaneM> hasBody.json:

  {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "Has a body property",
    "description": "'http://www.w3.org/ns/anno.jsonld must be' an @context value (Section 3.1)",
    "assertionType": "must",
    "expectedResult": "valid",
    "errorMessage": "There is no body property",
    "type": "object",
    "required": [ "body" ],
    "properties": {
      "body": {
        "oneOf": [
          { "$ref": "stringUri.json" },
          { "type": "object",
            "oneOf": [
              { "$ref": "textualBody.json" },
              { "$ref": "specificResource.json" }
            ]
          }
        ]
      }
    }
  }

ShaneM: The important part is that there's a thing that says '"type": "object"', and within that object "body" is required and that that property is "one of" (using "$ref") a "textualBody" or a "specificResource" (using the definitions that are already provided).

<TimCole> keep in mind that value of body can be an array

ShaneM: If two implementations pass that test, then great, we're laughing, that feature is implemented in two implementations.

Maybe that needs to be more complicated because the value of body can be an array.

AJV does a great job of parsing these and evaluating them very quickly (AJV is the library being used for this)

It does a good job of telling you what's wrong if there's a problem with a schema.

<TimCole> See also: https://github.com/w3c/web-annotation-tests/blob/master/annotations/bodyResource.json

ShaneM: Does anyone remember how many features we think there are?

azaroth: It's hard to count. Probably on the order of forty or fifty.

<TimCole> But this then requires that you separately check that each type of body is correct.

<TimCole> There are close to 100 features, grouped into about a dozen sections (outline of the spec)

ShaneM: There are two ways to approach this problem. I'm going to talk about just _a_ way.

<ShaneM> stub-n.n.n-name.test

What people have done, is put things like "stub-n.n.n-name.test" in the tree.

Stub means it's not really a test yet, it's a reminder that we need a test.

As we write them, we rename them.

<azaroth> +1

<ShaneM> what do you think of the stub concept TimCole

<TimCole> okay with me, we started with a spreadsheet, but stub naming would help organize

<ShaneM> ACTION: ShaneM to create stub files... [recorded in http://www.w3.org/2016/07/15-annotation-minutes.html#action01]

<trackbot> Created ACTION-34 - Create stub files... [on Shane McCarron - due 2016-07-22].

<azaroth> TimCole: Can you send Shane the spreadsheet to use, just to make certain that we're singing from the same hymnsheet

<TimCole> yes, it's a google doc. I'll send pointer after today's call.

ShaneM: I'll also modify the test generator so that if a thing is a stub, the test that's generated is basically an automated test that says "not run" or "not ready" or whatever.

<azaroth> Thanks Tim!

<TimCole> it's still incomplete

We'll have a whole list that can say in a table that each test ran, passed, failed, etc or didn't run.

ShaneM: I know you want to talk about implementation progress, but we need test development and test development progress.

I would really like a time we can tell people that there are enough tests here that they can validate their implementation.

<TimCole> question: if an annotation has correctly implemented one type of body, but in same annotation incorrectly implemented another type of body, how do we report it?

<ShaneM> It would probably fail TimCole

<TimCole> thanks

ShaneM: I'm not going to impose a deadline, but I want someone to tell me when they can get the work done.

In my head, we were populated by the first of August. I don't know now.

<azaroth> TimCole: Do you have an estimate of when the schemas can be finished up?

That was my feeling a month ago.

<azaroth> At least to the point where it's useful to run existing annotations against them, even if the entire set isn't complete?

<TimCole> schemas can be done over the course of the next week, if we're clear about how test scripts are to be structured

<TimCole> at very least the granular schema definitions can be finished next week.

azaroth: This isn't an overly optimistic deadline.

ShaneM: Let's shoot for it.

How wrong can we be?

<azaroth> So we can try to get something set up to have implementations run beginning of august :)

<ShaneM> I will get stubs thrown in to the annotation-model tree this weekend.

<azaroth> TimCole: Do you have everything, or are confident you'll get everything, you need to work on the schemas?

<ShaneM> bigbluehat: are you there ben? it's me, margaret

<bigbluehat> blast...lost my mic...somehow...

<TimCole> I've been slowed down a bit by not being able to get runner going, but doesn't interfere with making schemas

<ShaneM> low tech linux crap

<bigbluehat> k. we're close. we just need to generate a temp space per-run

<azaroth> :D

<bigbluehat> ShaneM: close...Windows 10 ;)

<ShaneM> TimCole: if you need help getting your environment running - ping me. I can walk you through it.

<bigbluehat> yes. all testing of the test tester thing would be super :)

azaroth: Hopefully by next week we can walk through how some annotations can be tested.

ShaneM: I know there are example annotations in the spec for each data shape. I've been using those to test the tests. My question is "are those adequate"? Is there a way we can pull them out into independent files?

Could someone do that?

azaroth: That is already done, actually.

In the main spec repository.

<TimCole> but we need annotations that are not correct as well - there's space for this, but not yet populated

<azaroth> https://github.com/w3c/web-annotation/tree/gh-pages/model/wd2/examples/correct

ShaneM: that's beautiful. I want to use those to run against the tests.

That's why I like having the .json files separate from the .test files. We can cycle over them more easily.

Thank you.

azaroth: Thank you!
... Any more topics?

<TimCole> https://docs.google.com/spreadsheets/d/13LRf2-OCJlKplQE5MTV3breguuRhUyhQW8IZ_jQMBjw/edit#gid=595504397

shepazu: I just want to say thank you again to Shane for paving the way for us to be able to test JSON-LD at the W3C. This is very useful.

Adjourn

azaroth: Thank you everyone for coming. We'll discuss progress next week.


Summary of Action Items

[NEW] ACTION: ShaneM to create stub files... [recorded in http://www.w3.org/2016/07/15-annotation-minutes.html#action01]
 

Summary of Resolutions

  1. Minutes of the previous call are approved: https://www.w3.org/2016/07/08-annotation-minutes.html
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.144 (CVS log)
$Date: 2016/07/15 16:02:04 $
