See also: IRC log
<plh> http://www.w3.org/wiki/TestInfra/goals
plh: separated a little more but did not add examples yet
... SAZ said that most accessibility tests may be manual
http://www.w3.org/wiki/Testing/accessibility
<francois> shadi: I added a page on accessibility testing where I tried to expand a little more on the different types of accessibility testing. Some tests can be automated.
<francois> ... In particular, some ARIA test cases can be nicely automated, I think.
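As an illustration, an automated ARIA check of this kind might look as follows in testharness.js; the widget markup and its toggle script here are hypothetical:

    <div id="toggle" role="checkbox" aria-checked="false" tabindex="0"></div>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script>
    // Hypothetical widget behaviour: flip aria-checked on activation.
    var widget = document.getElementById("toggle");
    widget.addEventListener("click", function() {
      var next = this.getAttribute("aria-checked") === "true" ? "false" : "true";
      this.setAttribute("aria-checked", next);
    });

    test(function() {
      widget.click();
      assert_equals(widget.getAttribute("aria-checked"), "true",
                    "activation should update the ARIA state");
    }, "custom checkbox updates aria-checked on activation");
    </script>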
plh: semi-automatic seems to be a mix of self-describing and automatic
... may be similar to geo-location testing
... they would also require human intervention
... interesting case about how people would write their tests
... for some WAI tests the human provides the results whereas with the geo-location the result is provided automatically based on what a human does
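A sketch of how the geo-location case might be written with testharness.js: the tester's only action is answering the permission prompt, after which the result is recorded automatically:

    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script>
    // Semi-automated: a human must grant the permission prompt; once
    // granted, pass/fail is determined without further intervention.
    var t = async_test("getCurrentPosition yields a position after consent");
    navigator.geolocation.getCurrentPosition(
      t.step_func_done(function(pos) {
        assert_equals(typeof pos.coords.latitude, "number");
        assert_equals(typeof pos.coords.longitude, "number");
      }),
      t.step_func(function(err) {
        assert_unreached("geolocation failed: " + err.message);
      })
    );
    </script>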
mc: had thoughts but did not add them to the wiki yet
<MichaelC_> Test cases separate from test files
<MichaelC_> Test cases supported by 0 or more test files
<MichaelC_> A given test file can be used by 0 or more test cases
<MichaelC_> Test cases and test files can be shared among WGs (implication of above requirements is that WGs could share test files but have different test cases)
<MichaelC_> Metadata not stored in test files
<MichaelC_> Provision for passing and failing test cases
<MichaelC_> Support for automated execution of test cases
<MichaelC_> Support for manual execution of test cases
<MichaelC_> Easy way to store test results
<MichaelC_> (possibly future) way to store and compare test results from different testers, and declare an "authoritative" result
<MichaelC_> Way to associate test cases with specific spec requirement
<MichaelC_> Way to perform tests against multiple user agents (defined broadly as UA on a platform, potentially with a given AT, plugin, or other support tool running)
<MichaelC_> (possibly future) separate analysis against different UA, different OS, different AT, different AAPI, etc.
<MichaelC_> Different test cases allowed for different UAs
<MichaelC_> Way for not-too-technical users to add tests
<MichaelC_> Way to add large amounts of test cases and/or test files at once, e.g., because auto-generated from spec
<MichaelC_> Way to isolate each spec feature tested in order to associate test cases with them
<MichaelC_> Metadata requirements: see TSDTF format
<MichaelC_> (possibly future) way for members of public to submit tests for consideration (would need a vetting process)
<MichaelC_> Way to associate test cases with 1 or more spec features (preference for unit tests associated with one feature, but aggregated tests may be needed)
<MichaelC_> Types of tests: parsing, DOM (or other memory model) effect, presentation, API exposure, reaction
saz: the break-down of semi-automated testing is useful to WAI too, for instance for user input for script testing
... might be more relevant for test authoring but need to keep that in mind
plh: mc, didn't understand the separation of test files from test cases
mc: let's say testing for the display of a particular attribute in a browser vs testing how it is communicated to the API
... two different tests on the same test file
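mc's example could translate into metadata along these lines; the schema and field names are illustrative only, not an agreed format:

    // Illustrative only — two test cases point at the same test file;
    // the pass criterion lives in the metadata, not in the file.
    var testCases = [
      { id: "attr-display",
        testFile: "attr-sample.html",
        execution: "manual",
        assertion: "the attribute's effect is rendered visibly" },
      { id: "attr-api-exposure",
        testFile: "attr-sample.html",
        execution: "automated",
        assertion: "the attribute is exposed through the accessibility API" }
    ];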
plh: should make the test automatic, can't separate the automation
mc: by separating test files from test cases, just an attribute in the metadata
plh: adds a lot of complexity
mc: if just looking at one or two groups' needs for now, then may do differently
... but in the long term, thought we need to support a W3C-wide framework
http://www.openajax.org/member/wiki/Accessibility_Rules_Format_1.0
plh: what will the test logic look like?
mc: by adding the logic into the test files you are making each a small evaluation tool
... think it may be easier to create a single evaluation engine to process script logic
plh: just need to write the assertions for testharness.js and then you are done
saz: how did we end up with testharness.js?
plh: needed a simple approach to carry out a test
... DOM WG and others had something like this years ago
... problem is that other libraries are pretty heavy
<plh> test(function() {assert_true(true)}, "assert_true with true")
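That one-liner would normally sit in a small page that pulls in the harness; a complete minimal test file looks roughly like this:

    <!DOCTYPE html>
    <title>assert_true example</title>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <div id="log"></div>
    <script>
    test(function() { assert_true(true); }, "assert_true with true");
    </script>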
saz: are we starting with requirements then trying to find tools that match these needs or the other way around?
plh: difficult enough to get people to write tests
saz: isn't testharness.js a type of framework in itself, and don't people need to write logic anyway?
mc: also hard to port to other frameworks in the future
plh: not really
mc: haven't looked at testharness.js specifically
but from experience test metadata does interfere with test cases
... so at least these need to be separated
plh: have examples from web performance group and can tell you that separating the logic makes it extremely difficult
... on the other hand, sometimes logic within the test case can get in the way
... like checking the DOM where writing logic would actually change the DOM
... need to support different methods, do not want to constrain people
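A contrived sketch of the interference plh describes: the inline test logic is itself part of the DOM it inspects, so the assertion fails for reasons unrelated to the feature under test:

    <!DOCTYPE html>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <body>
    <p>one</p>
    <p>two</p>
    <script>
    test(function() {
      // Fails: this inline <script> element is itself a third child of
      // <body>, so embedding the logic in the page changes the very
      // structure being asserted on.
      assert_equals(document.body.children.length, 2);
    }, "counting body children from inline test logic");
    </script>
    </body>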
<Zakim> shadi, you wanted to ask about different methods
fd: could have different ways of authoring tests
... could provide a tool that will convert declarative tests into testharness.js format
... not easy to do the other way around
... testharness.js would be the smallest common denominator
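For illustration, assuming an invented declarative record format, such a converter could wrap each record in a generated testharness.js test:

    // The declarative format below is invented for illustration;
    // no particular format was agreed.
    function declarativeToTest(decl) {
      test(function() {
        var el = document.querySelector(decl.select);
        assert_not_equals(el, null, "selector matched an element");
        assert_equals(el.getAttribute(decl.attribute), decl.expected);
      }, "declarative check: " + decl.select + " " + decl.attribute);
    }
    // Example record: check that #widget carries role="checkbox".
    declarativeToTest({ select: "#widget", attribute: "role", expected: "checkbox" });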
mc: yes, it is possible but not sure what we are supposed to be doing
fd: even if you do it on your own, it would become part of the overall tool
<MichaelC> ARIA Draft test plan
mc: some ARIA tests may fall out of scope of testharness.js
fd: this would fall back into the scope of requirements
mc: would need to write some tests and bring back the requirements
plh: or at least one test
http://www.openajax.org/member/wiki/Accessibility_Rules_Format_1.0
<MichaelC> side note, the work done by OpenAjax on ARIA testing is something we expect to incorporate into the ARIA test plan; this work is done by PFWG members anyway
saz: so we have the requirement to support both the declarative and procedural approach, and a converter from declarative to procedural
plh: agree with requirement but have to be careful of scope
saz: was thinking of Unicorn or some other framework
plh: Unicorn is more for merging reports
<MichaelC> http://www.w3.org/WAI/GL/WCAG20-TECHS/PDF1#PDF1-tests
mc: example linked above, need to support many ways of carrying out a test
plh: this is what the requirements page needs to cover
... also need to specify what our requirements for the declarative approach would be
... can quickly get out of hand
saz: suggest we need to refine the requirements a little bit more, especially to add the differentiation between declarative and procedural approach
<MikeSmith> many of the requirements we have already added on the wiki page do not necessarily have any level of consensus either
mc: was not sure if should put requirements directly in the wiki or if we need more discussion
plh: need to put them in now
ms: have a draft for the IG, very sketchy at this point
<MikeSmith> http://www.w3.org/2011/05/testing-ig-charter.html
ms: appreciate comments and feedback
... need to have well-bounded group
saz: concern about the limited scope, more limited than the scope of the Vision TF intended
plh: WAI-ARIA is missing. But I wouldn't want to see XQuery listed
ms: often have the issue of scope-creep so need to keep the scope bounded
<francois> shadi: think mentioning types of tests might be more useful than listing relevant technologies.
fd: think good idea to have an IG, think the list of specifications is good for a start
<MikeSmith> I personally have never written up a charter like this without listing specific technologies that the group's work is intended to be related to; I think in terms of the technologies first
saz: agree with an IG as well, but currently the scope would exclude WCAG
<MikeSmith> we don't typically test server-side behavior; we test for what responses/information the server sends back to the client/browser
plh: it would be exponentially harder to expand the scope
saz: should look back at the requirements and see what overlap we have
... but need to address WCAG and WAI-ARIA
<MikeSmith> hmm, the Web Notifications API is another case where the behavior cannot be tested within a browser
<MikeSmith> because the behavior is that the browser causes some platform-level notification to be generated outside the browser
<plh> good point
<MikeSmith> the only thing we can test within the browser is whether it actually fires the "show" event
<MikeSmith> oh, and whether it actually creates a Notification object
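A sketch of that in-browser half in testharness.js, assuming notification permission has already been granted (granting it being the manual, untestable step):

    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script>
    // Assumes permission was granted beforehand. The browser-observable
    // behaviour is limited to object creation and the "show" event;
    // whether a platform notification really appeared is out of reach.
    var t = async_test("Notification is created and fires show");
    t.step(function() {
      var n = new Notification("hello");
      assert_true(n instanceof Notification, "constructor returns a Notification");
      n.onshow = t.step_func_done(function() {});
    });
    </script>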