15:10:58 RRSAgent has joined #htmlt
15:10:58 logging to http://www.w3.org/2010/07/27-htmlt-irc
15:11:03 OK
15:11:06 let's start then
15:11:14 Chair: Kris
15:11:18 scribeNick: plh
15:11:52 agenda: http://lists.w3.org/Archives/Public/public-html-testsuite/2010Jul/0009.html
15:12:02 Topic: bugs in approved tests
15:12:04 Kris: None
15:12:19 Topic: Review Current Tests Posted To List For Approval
15:12:22 Let's move on to #2
15:12:35 Philip Taylor's canvas tests
15:12:49 So we have Philip Taylor's first batch of tests (fallback, type, size)
15:13:18 I looked at the first set and they seem fine, except for the type.delete test
15:13:41 http://test.w3.org/html/tests/submission/PhilipTaylor/canvas/type.delete.html
15:14:34 So the spirit of WebIDL is to ensure that DOM objects behave like JS objects
15:15:14 kris: it seems that we're all wrong
15:15:18 ... and the test is wrong
15:15:19 Hmm? I'm not sure why that's the "spirit" of WebIDL
15:15:45 The test checks that you can't delete the HTMLCanvasElement object
15:15:58 If browsers interoperably implement one behaviour I would expect that to be retained
15:15:58 delete window.HTMLCanvasElement;
15:16:15 but delete is supported by all JS objects
15:16:44 Any JS object property can be [[DontDelete]]
15:16:50 or the ES5 equivalent
15:16:52 http://dev.w3.org/2006/webapi/WebIDL/#delete
15:17:14 I don't see a problem with this test
15:18:00 kris: do we want to target JS as it was in 2006, or align with ES5?
15:18:24 ... ES5 forces you to allow the object to be deleted
15:18:37 the ES5 equivalent is [[Configurable]]=false
15:18:37 ... there is no [[DontDelete]] option
15:19:14 See 8.12.7
15:19:17 Of ES5
15:19:20 pointer?
15:19:58 The delete operator calls that with Throws=false (in non-strict mode)
15:20:33 http://www.ecmascript.org/docs/tc39-2009-043.pdf
15:20:43 thanks
15:20:44 see page 40
15:22:01 Yes 8.6.2 explains what [[Configurable]] means
15:22:19 so, on delete window.HTMLCanvasElement
15:22:25 one needs to throw a TypeError exception
15:22:31 if [[Configurable]] isn't true
15:22:32 No
15:22:40 Only if Throw is true
15:22:47 oh yes
15:23:01 (which it isn't unless you are in strict mode)
15:23:10 (which the test case isn't)
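[A minimal sketch of the ES5 behaviour under discussion, assuming HTMLCanvasElement is exposed as a non-configurable property of window per WebIDL's { DontDelete }:]

    // Non-strict code: ES5 8.12.7 [[Delete]] is called with Throw=false, so
    // deleting a non-configurable property returns false without throwing.
    var deleted = delete window.HTMLCanvasElement;   // false
    var stillThere = "HTMLCanvasElement" in window;  // true

    // Strict code: [[Delete]] is called with Throw=true, so the same
    // delete throws a TypeError instead.
    (function () {
        "use strict";
        try {
            delete window.HTMLCanvasElement;
        } catch (e) {
            // e instanceof TypeError === true
        }
    }());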
15:23:47 so I think we should align with ES5
15:24:08 does the 2D spec need to say anything about [[Configurable]]?
15:24:20 WebIDL needs to be updated to ES5
15:24:42 But it is clear how [[DontDelete]] maps in this case
15:25:12 I agree
15:26:51 OK
15:28:26 so, I would expect the canvas 2d spec or html5 to say that the property cannot be deleted
15:28:37 so then the test should check that [[Configurable]] == false
15:29:41 there is probably a general statement in HTML5 indicating that you can't delete any of the properties
15:29:41 so then we just need to check that the HTML5 spec states this so that it's clear
15:29:54 I assume that WebIDL says that interface objects in general cannot be deleted so that every spec doesn't need to say it for every interface
15:30:29 oh yes it does
15:30:32 in section 4.5
15:30:40 If a host object implements an interface, then for each attribute defined on the interface, there MUST be a corresponding property on the host object:
15:30:48 # The name of the property is the identifier of the attribute.
15:30:48 # If the attribute is declared readonly, the property has attributes { DontDelete, ReadOnly }. Otherwise, the property has attributes { DontDelete }.
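[Since { DontDelete } maps to [[Configurable]] = false in ES5, a test could observe this directly; a sketch, assuming an ES5-capable implementation:]

    // WebIDL's { DontDelete } corresponds to [[Configurable]]: false in ES5,
    // which Object.getOwnPropertyDescriptor exposes directly.
    var desc = Object.getOwnPropertyDescriptor(window, "HTMLCanvasElement");
    var notDeletable = desc !== undefined && desc.configurable === false;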
15:31:19 so we're fine
15:31:51 ok then we should assume that at some point WebIDL will be updated to use [[Configurable]] rather than [[DontDelete]]
15:32:05 Any other feedback on the first set of tests?
15:33:11 Resolution: fallback, type and size canvas tests are approved
15:34:06 I can move them into the approved folder
15:34:07 Opera's and Microsoft's getElementsByClassName tests
15:34:51 I'll keep the same structure
15:35:20 Seems like these should be changed so they can fit into the harness
15:35:31 Anne wrote the Opera tests. I think he plans to convert them to the new test harness and submit them
15:35:45 I can do the same for the MSFT tests
15:36:24 great
15:36:44 (if Anne doesn't do it soon, I will do it; it doesn't look like much work)
15:37:03 Now in theory once this is done they will look like http://test.w3.org/html/tests/submission/Opera/resources/apisample.htm
15:38:01 So we just need to combine this with the test runner http://test.w3.org/html/tests/harness/harness.htm
15:38:24 Though since these don't need manual verification they should run automatically
15:38:52 It should be easy to combine the harness with a runner. It provides hooks to get callbacks when the tests are complete
15:39:19 It's the same way the output is implemented
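[A sketch of wiring the harness's completion hook to a runner page; the callback name follows testharness.js's add_completion_callback, but the exact shape of its argument here is an assumption:]

    // In the test page: register for a callback once all tests have
    // finished, then hand the results up to the runner that framed it.
    add_completion_callback(function (tests) {
        var results = tests.map(function (t) {
            return { name: t.name, status: t.status };
        });
        window.parent.postMessage(JSON.stringify(results), "*");
    });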
15:39:36 yep
15:40:02 we can approve them once they get moved into the proper harness
15:40:15 jgraham do you agree?
15:40:19 Yeah
15:40:34 Topic: Discuss test runner/harness
15:40:44 Discuss test input and test results xml formats
15:41:58 currently the test runner outputs plain text
15:42:21 I'd like to convert this to xml, or JSON (jgraham's feedback)
15:42:53 jgraham do you really want JSON or can you live with xml?
15:43:07 I can live with XML but I would prefer not to
15:43:20 I think for key-value pairs JSON is a better format
15:43:44 That's essentially what we have here
15:43:57 It is also easier to work with from code
15:45:02 I think either would work just fine, though XML would seem to make it easier to validate that the results don't have a typo/error
15:45:33 though let's just choose JSON and move on
15:45:35 I don't think this is going to be an issue
15:45:57 if you don't give me correct results, I'll simply complain :)
15:46:21 Yeah, if we need a "validator" it should be a few lines of custom code
15:46:36 in python or javascript or whatever
15:46:45 as long as the results go through my transformation steps, I'll be happy
15:47:14 OK let's talk about the data - to make sure we all agree
15:47:29 so, I need a clear result for each test
15:47:36 either pass, or fail
15:47:38 the first section of data should contain UA info
15:47:41 is there an other state?
15:47:46 like not applicable?
15:47:57 kris had "not implemented"
15:48:05 I don't understand the use case though
15:48:12 We don't really have optional features
15:48:17 for that section, I need a simple string to identify the agent
15:48:25 OS info might be good
15:48:43 That is typically included in the UA string, I think
15:48:44 userAgent, browsername, date, submitter, Testsuitename
15:49:27 so then we want userAgent, browsername, date, submitter, Testsuitename, and OS
15:49:34 Testsuitename?
15:49:35 why submitter?
15:49:36 TestSuiteName == HTML5
15:49:42 Why testsuitename?
15:49:56 If it is a constant
15:50:25 As Philip pointed out, we probably care about the version of the testsuite that was used
15:50:25 that is true, it should not change - so then userAgent, browsername, date
15:50:42 what's the difference between userAgent and browsername?
15:50:43 do we agree?
15:51:06 some browsers, for compat, place stuff in the UA string to make them look like other browsers
15:51:49 As long as there is a 1:1 mapping between the complete UA string and the browser, it is not a problem
15:52:48 We should keep both - since browser name makes more sense than a UA string
15:53:10 Well I don't mind if it is easy to extract using js
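[A sketch of the agreed UA-info section as JSON; field names and values are illustrative only, not a final format:]

    {
        "userAgent": "Mozilla/5.0 (Windows NT 6.1; rv:2.0) ...",
        "browserName": "ExampleBrowser 4.0",
        "date": "2010-07-27"
    }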
15:53:10 Let's move on to the next part - data for each test
15:54:52 so we have URI, Featurename, specref, result
15:55:08 do we have specref?
15:55:14 or even feature name?
15:55:18 I think we just need URI, Featurename, result
15:55:49 I think URI:result is fine
15:55:59 I think we should have feature name - since parts of HTML5 are much farther along and will be interoperable long before other parts
15:56:03 But maybe URI:[result, message] is better
15:56:15 We can get the feature names from the URI
15:56:18 agreed. if we have featurename available somewhere, I can find it based on the URI anyway
15:56:22 No need to submit it each time
15:57:13 That will work, we can pull the feature name from the URI
15:57:29 I was thinking that I might need to do an extra classification anyway to make the results more comprehensive
15:58:39 I think using the URI will work, e.g. http://test.w3.org/html/tests/approved/audio/audio_001.htm == audio test
15:59:01 so, we need pass, fail? do we want to differentiate between the fail states? (timeout, not implemented, crashed, come back later, etc.)?
15:59:03 using a simple split on '/'
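[A sketch of the split-on-'/' extraction, assuming the approved-tests layout in the example URI above:]

    // For http://test.w3.org/html/tests/approved/audio/audio_001.htm the
    // directory segment after "approved" names the feature.
    function featureFromURI(uri) {
        var parts = uri.split("/");
        return parts[parts.indexOf("approved") + 1];  // "audio"
    }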
16:00:02 I think timeout is like fail for conformance tests
16:00:23 the next part is the results part: pass, fail, not implemented
16:00:30 It might be different for browser vendors so it is worth distinguishing at the harness level
16:00:42 I don't want timeout
16:00:43 but not in the presentation of the results
16:01:59 The reason for not implemented is that we are going to have some features not implemented
16:02:08 for example SVG in HTML -
16:02:43 so, you'll expect the harness to recognize if the feature was implemented or not?
16:03:00 I don't really like the idea of not implemented
16:03:15 if a browser doesn't implement SVG in HTML that's different than one that does a very poor job at implementing SVG in HTML
16:03:22 It seems likely to be inconsistent between different parts of the testsuite
16:03:29 I disagree that that is different
16:03:55 From the user's point of view it is a feature that you cannot rely on in that browser
16:05:00 the boundary between not implemented and failure isn't clear to me
16:05:15 so then we should just stick to pass/fail?
16:05:18 if a function returns a wrong result, is it a failure or not implemented?
16:05:32 I think we stick to pass/fail
16:05:39 Anything else is too complex
16:05:47 Well the browser vendor that submits the result can just say it's not implemented
16:06:15 surely they would know if a feature is implemented or not
16:06:47 I was thinking of the case for css2.1 run-ins
16:06:56 yes, but one might say it's not implemented, and another one might say it's a failure
16:07:03 My imagined process here doesn't involve people editing the results files by hand
16:07:34 I agree with sticking to fail/pass; it seems a lot easier
16:07:46 even if you don't implement css2.1 run-ins you end up passing about 1/2 the cases
16:07:57 which is a little misleading
16:08:31 oh, that's different indeed. so you want to be able to say "not implemented" on tests that you actually pass?
16:08:44 If we end up with a data-presentation problem, we can compile a separate list of features that UAs claim to not implement
16:08:55 It doesn't need to be tied to the test harness
16:09:23 and indeed I don't see how it can be (without manually editing the results files) in the case you mentioned
16:09:34 OK, then let's just have the test results be pass/fail and then we can list features that are not implemented when the data doesn't make sense
16:09:52 like the css2.1 run-in case
16:09:53 I don't mind having a "blacklist" on the side
16:10:35 i.e., if you really insist, sure I'll pretend you don't pass the tests :)
16:10:51 heh
16:11:23 OK, then for the test results part we have pass, fail
16:11:55 and we can add a not implemented list to the report when it becomes a problem
16:12:02 ok
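[Putting the decisions together, a submitted results file might look something like this; field names are illustrative, results are pass/fail only, and any not-implemented list lives outside this format:]

    {
        "userAgent": "Mozilla/5.0 (Windows NT 6.1; rv:2.0) ...",
        "browserName": "ExampleBrowser 4.0",
        "date": "2010-07-27",
        "results": {
            "http://test.w3.org/html/tests/approved/audio/audio_001.htm":
                ["pass", ""],
            "http://test.w3.org/html/tests/approved/canvas/type.delete.html":
                ["fail", "expected delete to return false"]
        }
    }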
16:12:24 The test input part - I assume this should be easy
16:12:39 just uri and type
16:12:59 type?
16:13:04 type == manual verification || type == automatic verification
16:13:32 That seems to be a property of the testcase
16:13:44 So we don't need to submit it with the results
16:14:01 This is the input not the output
16:14:02 I think Kris is saying the same thing
16:14:11 OK
16:14:52 At first I didn't think this would be needed at all
16:15:02 and we could just use plain text
16:16:25 though when you run the tests you don't want to run a manual test, wait for an automatic test to run, run a few more manual tests, wait for another automatic test to run, etc...
16:16:45 it should just let you run the manual tests as one bunch
16:17:18 makes sense to me
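[A sketch of a test-input manifest carrying uri and type, plus runner-side grouping so the manual tests can run as one bunch; all names and URIs are illustrative:]

    // Illustrative manifest: each entry carries the test URI and whether
    // it needs manual verification.
    var manifest = {
        "tests": [
            { "uri": "http://test.w3.org/html/tests/approved/canvas/size.basic.html",
              "type": "automatic" },
            { "uri": "http://test.w3.org/html/tests/approved/canvas/fallback.basic.html",
              "type": "manual" }
        ]
    };

    // Group by type so all manual tests can be presented together.
    var manual = manifest.tests.filter(function (t) { return t.type === "manual"; });
    var automatic = manifest.tests.filter(function (t) { return t.type === "automatic"; });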
16:17:53 any other agenda items?
16:17:58 I have 2
16:19:04 I'll send out another set of canvas tests to review (philip's tests)
16:19:17 since we want to keep making progress on these tests
16:19:27 also I'll be submitting some tests for http://dev.w3.org/html5/spec/Overview.html#dom-document-getselection
16:20:02 so for the next meeting we should be able to approve some more tests and have further progress on the harness and test runner
16:20:17 any others?
16:20:24 ok, sounds good
16:20:36 I have made the request to generate http://www.w3.org/2010/07/27-htmlt-minutes.html plh
16:21:04 -Plh
16:21:05 -krisk
16:21:05 HTML_WG(HTMLT)11:00AM has ended
16:21:06 Attendees were Plh, krisk