See also: IRC log
allen: our Java ME test suites are used to make sure that the devices under test conform to a set of well-known specifications
... my interests are around Web development and Java programming
... in terms of expected outcome, I'm quite open and would like to hear other people's opinions
... We need to deliver some kind of test suite that can run on different user agents on devices
... whether it is a conformance test suite or something else, we need to make a decision
... I have some experience in conformance testing on devices,
some of which I can share
... in case it's useful for this group
dsilaev, can you hear us?
Carmelo: working for NIST (US National Institute of Standards and Technology)
... been doing testing for the past 20 years
... was involved with the DOM Test Suites
... the XSL FO Test Suite
... the XSLT Test Suite
... and most recently, been with the XML Query WG
... which has just released its Recommendation
... NIST wrote about 60% of the test suite
<scribe> ... new to mobile web groups
Carmelo: in terms of expectations, we may be doing both conformance and interoperability testing
... I would like to suggest that we look at the existing work on catalogs, infrastructure, ...
... so that we don't reinvent the wheel
... NIST is very happy to be part of this
Christophe Strobbe, from Katholieke Universiteit Leuven (Belgium): involved with accessibility groups
scribe: working especially on test suites for WCAG 2.0
... the WCAG and ERT (Evaluation and Repair Tools) WGs have set up a task force to develop test suites for WCAG 2
... we have a test case description language developed for our European project (BenToWeb)
... we have set up some infrastructure (CVS, wiki) for these tests
... I'm hoping we can share ideas and maybe infrastructure for
test suites in W3C
Joachim, Drutt: in the business of mobile delivery platforms, primarily for mobile operators
scribe: have been involved in the Device Description Working Group for about a year
... I'm interested to see what this WG is about
... and whether Drutt should be involved
... We have a test suite for device testing
... with a pragmatic approach: trying to see what a device can do with respect to our delivery platform
... Basic testing
scribe: Would be interesting to see whether this group can come up with a better/automated testing methodology
... one of the goals being to check that the content produced by our platform appears the way it is supposed to
[Allen presents slides on "TCK 101" (member-only at this time)]
TCK stands for Technology Compatibility Kit
it has test cases, tools, and a harness
scribe: plus a set of documentation (user's guide, release notes, rules for conformance testing)
... TCK testing is strictly based on specifications
... no assumptions on the OS or hardware; black-box testing
... we don't test the quality of an implementation, nor robustness, performance, etc.
... Java ME contains at least a dozen specs called JSRs, which are under the Java Community Process
... each JSR is targeted at a given platform
... targets a certain set of functionalities in the devices
... specs are layered in stacks, e.g. a given spec is based on a set of underlying specs
... each approved JSR must come with a TCK that enforces compatibility
... these TCKs are available under licenses
... (but can be obtained for free under certain conditions)
-> http://jcp.org/en/jsr/all List of Java Specification Requests (JSR)
scribe: We have a test harness
called JavaTest
... it provides quite a lot of functions
... test execution, requiring a plugin to communicate with the
device
... configuration also done through the harness
... test selection
-> http://java.sun.com/developer/technicalArticles/JCPtools2/ The Java Compatibility Test Tools: JavaTest Harness
scribe: JavaTest has only been used on Java technologies so far - don't know if that's an option
... if the user agent is written in Java, it shouldn't be a difficulty
... our tests are mostly positive, some are negative
... we test whatever is specified in JSR
... the tests give a clear result (pass, fail, error)
... [describing the test development process, incl. test
assertions]
-> http://java.sun.com/developer/technicalArticles/JCPtools/ The Java Compatibility Test Tools
scribe: JavaTest is open source
-> http://www.jcp.org/en/resources/tdk#java_ctt Java™ Compatibility Test Tools
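[Illustration, not from the slides: a minimal sketch of what a single TCK-style test might look like, assuming the com.sun.javatest Test interface and Status class; the test logic and class name below are hypothetical.]

  import java.io.PrintWriter;
  import com.sun.javatest.Status;
  import com.sun.javatest.Test;

  // Hypothetical TCK-style test: it checks one assertion derived from a
  // spec and reports exactly one of the three outcomes mentioned above
  // (pass, fail, error).
  public class StringLengthTest implements Test {
      public Status run(String[] args, PrintWriter log, PrintWriter ref) {
          try {
              String s = "abc";
              if (s.length() == 3) {
                  return Status.passed("String.length() behaves as specified");
              }
              return Status.failed("expected length 3, got " + s.length());
          } catch (Throwable t) {
              // unexpected condition: neither a pass nor a clean fail
              return Status.error(t.toString());
          }
      }
  }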
<Christophe> List of test suites (some possibly relevant to mobile web): http://www.bentoweb.org/html/BenToWeb_D4.1.html#heading28
Acid2 test
<Christophe> URL: http://www.webstandards.org/action/acid2/
<Christophe> DOM Test Suite Methodology Report: http://www.itl.nist.gov/div897/ctg/conformance/DOMTSmethod.pdf
[demo of the MIDP TCK]
<Christophe> Dom shows MWI Best Practices tests for Encoding Declaration Support: http://www.w3.org/2005/MWI/BPWG/techs/EncodingDeclarationSupport
Dom: conformance or interop testing?
carmelo: both would be
interesting
... maybe with a greater focus on conformance
... depending on the interest from the browser community on
interop
... would be interesting to know what user agent developers are looking for
... and see what their needs are
Allen: the kind of test suite we will produce depends on our charter
... our charter mentions conformance testing
... but Dom mentioned we could orient it one way or the other
... shouldn't we try to finalize the charter?
dom: would like to do so in this very discussion, indeed
allen: what's the most desired
outcome of this WG?
... is conformance the most important or interop?
carmelo: probably equally
important
... if you're conformant, it's likely that there is
interop
... but the reverse may not be true
dom: problem with conformance
testing is that there may not be so many requirements
established for user agents
... e.g. XHTML Basic defines conformance more for documents
than for user agents
carmelo: given the nature of
browsing, it may be more interesting to focus on interop
... e.g. on rendering
... looking at acid2 and how it appeared in IE was
interesting
dmitri: how do you define interop testing?
dom: testing that focuses on how user agents react to actual authoring practices (vs ill-defined requirements)
carmelo: don't think there has
been much effort in interop testing for browsers
... at least in W3C
<scribe> ACTION: carmelo to look if NIST has a test cases/results submission system [recorded in http://www.w3.org/2007/01/25-mwts-minutes.html#action01]
<scribe> ACTION: Allen to check if there is any test case submission system/process as part of the JCP [recorded in http://www.w3.org/2007/01/25-mwts-minutes.html#action02]
<scribe> ACTION: Dom to look into test cases submissions systems [recorded in http://www.w3.org/2007/01/25-mwts-minutes.html#action03]
<Christophe> WAI: Techniques for WCAG 2.0 submission form: http://www.w3.org/WAI/GL/WCAG20/TECHS-SUBMIT/
<Christophe> This form submits the data to a mailing list with public archive (http://lists.w3.org/Archives/Public/public-wcag2-techs/); Working Group reviews them later.
<scribe> ACTION: Carmelo to report on the XSLT/XQuery metadata for test cases [recorded in http://www.w3.org/2007/01/25-mwts-minutes.html#action04]
-> http://www.w3.org/TR/2005/NOTE-test-metadata-20050914/ Test Metadata
Carmelo: in the case of our interop tests, we would also need to categorize tests in terms of the technologies involved (CSS, markup, etc.)
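[Illustration only: one possible shape for per-test metadata, loosely following the Test Metadata Note linked above plus the technology categorization Carmelo mentions; the field names are assumptions, not the Note's exact vocabulary.]

  import java.util.List;

  // Hypothetical record of per-test metadata; the fields are illustrative,
  // roughly covering identifier, title, purpose, specification reference
  // and status, plus the technologies the test exercises (CSS, markup, ...).
  public class TestMetadata {
      public final String identifier;          // e.g. "xhtml-basic-table-001"
      public final String title;
      public final String purpose;             // which assertion is being tested
      public final String specRef;             // URL/section of the spec under test
      public final String status;              // e.g. "submitted", "accepted"
      public final List<String> technologies;  // e.g. ["XHTML Basic", "CSS"]

      public TestMetadata(String identifier, String title, String purpose,
                          String specRef, String status, List<String> technologies) {
          this.identifier = identifier;
          this.title = title;
          this.purpose = purpose;
          this.specRef = specRef;
          this.status = status;
          this.technologies = technologies;
      }
  }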
test suites licenses
public availability
conformance / interoperability
coverage / test assertions availability
mobile web relevance
test harness
automated or not
goals: creating a portal of
available test suites
... evaluate the possibility of packaging them together
stability / status
maintenance process
2 main deliverables:
* Survey of existing "conformance" test suites (based on criteria established above)
scribe: possibly leading to packaging them together
... and advertising them to user agent developers
* a contributions-based test suite
scribe: accepting contributions
both for test cases and test results
... with 2 outcomes: a packaged set of test cases
... a report of how current user agents do on these test
cases
... that would need to define a test process
... including defining what's in scope and what's not (e.g. is non-conformant content acceptable?)
... and a formal review process by the WG
<Christophe> WCAG TSD TF review process: http://www.w3.org/WAI/ER/2006/tests/process
scribe: to identify existing sources of test cases
<carmelo> http://www.w3.org/XML/Query/test-suite/Guidelines%20for%20Test%20Submission.html - URL for Query Test Submissions
scribe: tools to submit and
review the test cases
... tools to gather test results
... a preliminary set of test results based on testing internal to the WG
... a simple test harness to navigate through the tests
[note on properties for existing test suites review: how to navigate through the test cases?]
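[Illustration only: a minimal sketch of the kind of "simple test harness to navigate through the tests" listed above, assuming an ordered list of test case URLs loaded one by one in the user agent under test; all names here are hypothetical.]

  import java.util.ArrayList;
  import java.util.LinkedHashMap;
  import java.util.List;
  import java.util.Map;

  // Hypothetical harness: hands out test case URLs one at a time and
  // records an outcome for each, so that a report of how a given user
  // agent did on the test cases can be produced afterwards.
  public class SimpleHarness {
      public enum Result { PASS, FAIL, ERROR, SKIPPED }

      private final List<String> testUrls = new ArrayList<String>();
      private final Map<String, Result> results = new LinkedHashMap<String, Result>();
      private int current = -1;

      public void addTest(String url) { testUrls.add(url); }

      // next test case URL to load in the browser under test,
      // or null when all tests have been visited
      public String next() {
          current++;
          return current < testUrls.size() ? testUrls.get(current) : null;
      }

      // the tester (or an automated checker) records the outcome of the
      // test case currently displayed
      public void record(Result r) {
          if (current >= 0 && current < testUrls.size()) {
              results.put(testUrls.get(current), r);
          }
      }

      // very small plain-text report: one line per test case
      public String report() {
          StringBuilder sb = new StringBuilder();
          for (Map.Entry<String, Result> e : results.entrySet()) {
              sb.append(e.getValue()).append("  ").append(e.getKey()).append('\n');
          }
          return sb.toString();
      }
  }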
<scribe> ACTION: Dom to turn the list of work items into a roadmap with a schedule [recorded in http://www.w3.org/2007/01/25-mwts-minutes.html#action05]
Carmelo: flexible in frequency
Allen: may be difficult to go
outside of the US
... but if in Europe, Dmitri could attend
Dom: periods would be either April or May
Carmelo: prefers meetings starting in the middle or at the end of the week