Revision: 2.00
Date: May 18, 2000
By: Lofton Henderson
Rev | Date | Description of Change
---|---|---
1.00 | 2000-02-08 | Standing project document, 1st WG release. Based on Cupertino docs, later conformance decisions, additional research, and core info from path doc.
1.01 | 2000-04-27 | Releasable HTML version.
2.00 | 2000-05-18 | Incorporate cumulative experience from BE suite construction.
This document is intended as a permanent reference document and user manual for designers and contributors to the SVG Conformance Test Suite. As well as being the repository for currently agreed methods, templates, procedures, and techniques, it also contains the SVG Test Suite Issues Log.
Some parts of this document are still in progress. In particular, the synopses of technical content of other conformance projects, in the chapter "Related Conformance Work", have not yet been detailed.
The ultimate goal is a comprehensive and detailed conformance test suite for SVG 1.0.
Amongst the multiple purposes which such a suite can serve, we identify the most important as: a publicly available suite to help implementation builders achieve interoperability.
There are at least three areas in which conformance testing is applicable:
This project's scope is limited to the third -- a conformance test suite for interpreters and viewers.
At the Cupertino SVG-WG meeting (11/99), the question of what sort of suite we are building was discussed and resolved. The options included:
The SVG WG decided at Cupertino: we are building #3, a publicly available suite for such uses as informal conformance analysis and developer self-testing.
Presently, there are no plans for an SVG "certification" service; therefore there is no need for #4 -- W3C doesn't currently carry out certification testing, nor is any currently proposed by other entities.
What is the difference? In what ways would the suite differ depending on its purpose?
Some identified differences include:
While the formality and rigor of a certification suite might not be needed, the SVG conformance suite will (eventually) embody "traceability" (see below) -- what specification in the standard justifies a given test?
The SVG WG, as part of the extension of its charter, has agreed to two milestones:
A timetable for test suite construction, in the context of other WG activities, looks like:
For those interested in a quick user guide for test construction, you can skip directly to "How to Write Tests". The rest of this document provides background, explanation, and motivation for the methods used.
The material in Section 2, especially the brief synopsis of the nature and content of each existing suite, is incomplete.
Section 3 is substantially complete.
The material in section 4 is complete for Static Rendering, but Dynamic has not been addressed. A couple of topics like overall test-suite linking structure are still incomplete.
Section 5 -- How to Do It -- is substantially complete for Static Rendering "how-to", including incorporation of experience from several months' work on BE tests.
Section 6, Glossary, is mostly a placeholder so far.
Section 7, Issues Log, ditto.
There is now a substantial body of test suite experience and material, for several different standards:
These suites and the experiences of building them are useful to the SVG conformance effort, and to contributors of test materials, in a number of ways:
The "level of effort" data should be particularly interesting to the SVG group.
See [5] and [3].
The applicability of CGM test suite experience to at least the static rendering subset of SVG is obvious.
CGM and SVG differ in other ways:
See [9].
[Analysis of the potential applicability of the VRML suite and methods will be written for a future document release.]
See [7].
For application of visual properties (graphical attributes) SVG borrows heavily from CSS2. The syntax for all such properties (plus some others) is patterned on CSS, and a number of CSS2 properties (esp. font properties) are adopted directly by SVG. The full font selection and matching machinery of CSS2 is required in conforming SVG processors (interpreters and viewers).
W3C has made a test suite for CSS1. The methods of the CSS1 test suite should clearly be applicable to some aspects of the SVG suite. Some actual CSS test materials might be (almost) directly usable.
See [6].
We'll have to test the SVG DOM. NIST's XML DOM suite for Javascript binding is released, for both XML and HTML. The Java binding is in progress. Methods and techniques should be applicable, and maybe some materials can be borrowed with minimal modification.
See [8].
A conforming SVG interpreter (hence also a conforming SVG viewer) "must be able to parse and process any XML constructs defined in [XML10] and [XML-NS]." A conforming SVG viewer therefore incorporates XML-suite conformance, by reference.
Size: about 200 simple, atomic tests.
Level of effort: difficult to determine, but probably about 1 - 1.5 FTE, external contractor plus NIST staff.
Size: 270 tests (70 new, plus extensive redesign and revision of the existing 200+ V1 tests).
Level of effort: difficult to determine, but probably about 1 - 2 FTE, external contractor plus NIST staff.
Size: Estimated about 1,000 tests.
Level of effort: 3 people full time for about 2 years at NIST. There was a steep learning curve. NIST released the first tests after 3 months, and released further tests as soon as each node was completed.
Size of 1st release: 1,000 XML tests -- DTD+4000 lines of XML code; 400 lines of XSL.
Level of effort: 1.5 FTE -- 2 people for approximately 9 months. One person designed the test harness and some of the tests and the other designed some tests and spent lots of time validating and filling in the holes from what other people contributed.
Size of 1st release (Ecmascript with XML) -- 800 tests, 30,000 lines of code (this is only the Fundamental and Extended tests).
Level of Effort: 1.5 FTE -- One person half time for 9 months, who did the test harness (following much of what was done for VRML and XML); plus another full time for about 9 months; plus a third person about 4 months full time.
Unknown. To be researched for inclusion in future version of this document.
The CGM suite ([5]) consists of 269 test cases, each of which has three components:
There is no interactive harness or driver, and hence no navigation buttons to move through the suite. The operator has to invoke the viewer, access the Reference Picture, and access the Operator Script.
[Synopsis of content and structure of the VRML suite will be written for future document release.]
[Synopsis of content and structure of the XML suite will be written for future document release.]
[Synopsis of content and structure of the DOM suite will be written for future document release.]
[Javascript, XML and HTML finished. Java version being built now. NIST did the Javascript XML first, then the Javascript HTML, reusing some of the data file (a big HTML or XML document on which the DOM tests work). They then re-used the Test Assertions and XML document for the Java DOM-XML tests.]
[Synopsis of content and structure of the CSS1 suite will be written for future document release.]
The following basic process is applied for construction of most of the test suites referenced above -- CGM, VRML, XML, DOM, at least. In overview:
In practice, these steps need not be overly formal. In the case of a certification suite, formality is important. For a conformance suite, it is less so. In any case traceability (see below) is required.
Therefore, explicitly or implicitly, these steps are carried out -- the document is read exhaustively and decisions are made about what to test about each functionality, and how to realize these decisions in a set of test cases.
Section 4.2 of reference [9] contains an interesting discussion of TRs (which it calls SRs) and TCs -- the step of generating TPs is implicit in this reference, not explicitly treated as a formal step.
Some basic principles have been learned during the construction of previous test suites, applicable to both graphics suites and others:
The SVG specification divides fairly cleanly into semi-independent functional modules. Test materials will be developed and released progressively, subject to the constraint that we have agreed to make an entire breadth-first, basic effectivity (BE) test suite release first, and a drill-down (DT) release subsequent to that.
The major natural division in the specification is:
Static rendering has first priority for development and release, although work on dynamic can proceed in parallel, resources permitting.
Functionality will be ordered in the suite, for purposes of execution and navigation through the suite, from most basic to most complex -- implementations should encounter the simplest and most basic tests first, before being subjected to progressively more complex and advanced functionality.
Given our intent to make progressive releases of test suite modules, it makes sense to generally follow this ordering for the building of the materials, at least for the completion of the DT and ER tests.
The SVG WG agreed at Cupertino to divide up the functionality by chapter. For static rendering, the following chapters are candidates for testing (based on the document organization of the 3 March 2000 public version):
Building and executing tests in chapter order does not appear to always lead to a basic-to-complex ordering.
From most basic (or fundamental -- basic does not necessarily mean simple), to most advanced, a rough functional ordering might be:
The issue of final test suite ordering and organization is not yet completely resolved.
[TBD.]
Each Test Case in the static rendering module will contain three principal components:
#1 and #2 will be file instances. #3 could be a file instance, but now is being handled as the content of a tag in a simple XML grammar which generates the HTML navigation page.
Note. In the earliest test suites, for CGM, the Operator Script was a rigid and rote checklist for use by (non-expert) certification testing technicians, to score each test. In more recent conformance work it has evolved to be more informative about the test's purpose and what to look for. It also can (and should) function to improve the accessibility of the test suite.
Details and examples of writing an Operator Script are given in the next chapter.
Other supporting material will be generated for each test case:
Note. The traceability links may be postponed until the SVG spec stabilizes -- probably at least the PR version.
See below, sections [4.4.2] and [4.4.4], for further details about the test harness(es).
Most of the SR materials will be applicable to most dynamic tests. However, there may be cases (e.g., some DOM) which do not have graphical output, and there will be some which could (but need not necessarily) have animated graphical "reference images".
This material will be further refined as more of the dynamic functionalities' tests are developed.
Four generic test categories have been decided. These are equally applicable to static rendering and dynamic test modules:
For BE tests, an attentive reading of the applicable spec sections is required, but an exhaustive TR enumeration is not. The generic BE test purpose is: correct basic implementation, including major variations within the functional area.
For DT and ER tests, an exhaustive TR extraction from the SVG spec will be a part of the process, and test purposes will be derived from the TRs (see next chapter).
Following is a list of generic test purposes for DT tests (ER also?):
The Generic Test Purposes provide a high-level checklist for the sorts of test cases which should result from the analysis and test design of a functional area. If any major categories are not represented, it may indicate that some implicit or explicit requirements have been missed.
These requirements are agreed, at least for the static rendering module:
The VRML suite, as well as the CSS, XML, and DOM suites, employ an interactive test harness (HTML page), which:
Unlike the VRML suite, which presents the test rendering and the reference image side by side in one window, the SVG suite's standard release harnesses will use a two-window approach, per WG decision:
So in any case, a browser will have to be available for convenient viewing of all of the materials, but it is not necessary that a viewer-under-test be a browser plug-in.
Note. A single side-by-side (PNG plus rendered SVG) HTML harness is a simple variant, and can be generated using the harness-generation tools which have been developed.
A strong naming convention for the materials is useful, both for the management of the test suite repository and for requirement #2 above.
Test names will be brief but informative. The name design is: chapter-focus-type-num. 'Type-num' is a concatenation of the test type -- BE, DT, ER, or DM -- and its ordinal in the sequence of such tests -- 01, 02, ...
Examples: path-lines-BE-01, shapes-rect-DT-04, styling-fontProp-DM-02.
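As a purely illustrative sketch (this validator is not part of any agreed tooling; the regular expression encodes the convention as described above), a small script could check submitted file names against the chapter-focus-type-num design:

```python
import re

# Pattern for the naming convention: chapter-focus-TYPE-NN.
# Chapter and focus are camel-case words; TYPE is one of the four
# two-letter designators; NN is a two-digit ordinal.
NAME_RE = re.compile(r'^([a-z][a-zA-Z]*)-([a-z][a-zA-Z]*)-(BE|DT|ER|DM)-(\d{2})$')

def parse_test_name(name):
    """Split a test name into (chapter, focus, type, number), or raise."""
    m = NAME_RE.match(name)
    if not m:
        raise ValueError("not a valid test name: " + name)
    chapter, focus, ttype, num = m.groups()
    return chapter, focus, ttype, int(num)

print(parse_test_name("path-lines-BE-01"))     # ('path', 'lines', 'BE', 1)
print(parse_test_name("styling-fontProp-DM-02"))
```

A check like this could run when a contribution is accepted into the repository, catching misnamed files before they enter a release.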
The test harness (static rendering, at least), will be an HTML page which identifies the test, invokes the PNG reference image, and presents the operator script.
Navigation buttons will be provided to go back to a table of contents (and maybe an index), to navigate laterally through BE tests, and to drill down to "child" tests (from the BE level to the DT, ER, and DM tests), to go back up to "parent" tests from the lower levels.
Per a Cupertino decision and subsequent discussions, the principal HTML harness will present the operator script and the PNG reference image, but will not assume a browser-invokable SVG viewer -- the test administrator will have to get the SVG image into another window, or onto a printer, or whatever is appropriate. To make this easier, a second, all-SVG navigation harness is provided (with exactly parallel navigation capabilities to the PNG-plus-Operator-Script harness).
With the current method for producing harness(es) -- XSLT stylesheet applied to instances of a simple XML grammar which describes each test case -- it is not difficult to produce multiple harness versions, including other possible variants (e.g., PNG plus rendered SVG plus operator script, for browser-plugin SVG viewers).
The first generation design of simple XML grammar for describing tests, and the XSLT stylesheet for producing the HTML page, have been released. A "manual" SVG template has been released as well -- see next chapter for details.
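The real production path is an XSLT stylesheet applied to the XML test descriptions, but the essence of the transformation can be sketched in a few lines of Python. The field names below are invented for illustration and do not correspond to the grammar's actual element names:

```python
# Minimal sketch of harness-page generation: one test-description
# record in, one HTML page out. The real tool is an XSLT stylesheet;
# the dictionary keys here are hypothetical.
HARNESS_TEMPLATE = """<html><head><title>{name}</title></head>
<body>
<h1>{name}</h1>
<img src="{name}.png" alt="reference image for {name}"/>
<p>{operator_script}</p>
<p><a href="{previous}.html">previous</a> | <a href="{next}.html">next</a></p>
</body></html>"""

def make_harness_page(test):
    """Render one HTML harness page from a test-description record."""
    return HARNESS_TEMPLATE.format(**test)

page = make_harness_page({
    "name": "path-lines-BE-01",
    "operator_script": "Verify that all lines are rendered straight.",
    "previous": "paint-fill-BE-03",
    "next": "path-lines-BE-02",
})
print(page)
```

The point of the sketch is the separation it shows: test metadata lives in one place, and any number of harness variants (HTML, SVG, side-by-side) are just different templates over the same records.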
Work is underway on "second generation" harnesses and templates to:
Each test, at least for the static rendering module, will be put into a standard template. As just discussed, this is presently a manual process -- the test writer puts the test body content into the template.
The template incorporates these features:
The serial number is a method for ensuring that the PNG reference image, the SVG instance, and the SVG rendering are all current, all agree. To be useful, it must unfailingly increment whenever any change is made to the SVG file, in fact whenever the SVG file is saved. A way to automate this is being sought.
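Until such automation exists, a save hook along the following lines could do the increment. The comment syntax used to store the serial number is an assumption for illustration; adapt it to however the template actually records the serial:

```python
import re

# Bump a serial number of the assumed form <!--serial:NNN--> inside an
# SVG document string. How the template actually encodes the serial is
# a project decision; this pattern is only illustrative.
SERIAL_RE = re.compile(r'<!--serial:(\d+)-->')

def bump_serial(svg_text):
    """Return the SVG text with its serial number incremented by one."""
    def repl(m):
        return '<!--serial:%d-->' % (int(m.group(1)) + 1)
    new_text, count = SERIAL_RE.subn(repl, svg_text)
    if count != 1:
        raise ValueError("expected exactly one serial comment, found %d" % count)
    return new_text

print(bump_serial('<svg><!--serial:7--><rect/></svg>'))
# <svg><!--serial:8--><rect/></svg>
```

Wired into the editor's save action (or a pre-commit step), this would guarantee the "unfailing increment on every save" property the method requires.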
The <text> elements of the Legend are as simple as possible -- ideally, defaulting all attributes and properties except size and position. [Note. The current template does have font-family selection -- it is an issue whether this should be eliminated, or changed to a generic specification like "sans-serif", or changed to a different font.]
There may be some test purposes (e.g., if we wanted an "empty.svg" test) which require no graphical content, in which cases (only) the Legend may be omitted.
The overall linking structure is decided -- TOC (and possibly index), BE layer throughout the suite (next/previous), DT drill down from BE (child), BE pop up from DT (parent), etc. Some details are to be worked out. Does each BE point down to a different DT "stack"? Or do all BEs in a chapter or chapter-focus point down to the first DT in that area? The latter has been tentatively decided (the structure of the suite is not likely to be regular enough to make the former widely practical).
Processes and procedures are still being designed for:
For now, the test suite editor is the repository. Once a test case is submitted, it is "owned" by the repository. All WG and public releases are from the repository, and all maintenance changes to test cases are applied to the latest repository version.
In particular, the editor releases test cases for maintenance work, and ensures the integrity of the versioning information (serial number). With pending second generation tools, adherence of test cases to the exact formatting (as opposed to functional) details of the template conventions ought to be automatable, and not a significant concern to test contributors.
The examples which already exist in the SVG Specification have proved to be an excellent basis for material for some of the "Basic Effectivity" tests -- simple tests which minimally illustrate an SVG functionality.
Several of the SVG WG members have in-house development efforts. Materials ranging from basic path and shape graphics, to filter effects tests, to DOM and animation functionality are known to exist. Though some adaptation and integration into the SVG Test Suite framework is required, these existing QA materials have already proved to be a valuable resource.
To be done: inventory what is available within the WG.
For graphical output functionality, there is substantial commonality between CGM and SVG (see comparison table in [12]). The CGM Test Suite (for ATA, release 3.0, see [5]) has 269 tests, conforming to the test suite principles articulated above.
There are two interesting possibilities to leverage this CGM design and implementation work:
Note that NIST-certified CGM viewers exist, as well as certified printer drivers and certified rasterizers.
Contributions from outside of the WG will be solicited, once these documents and the template materials are stabilized. Minimal processing for contributions will include:
There is only one way to achieve comprehensive coverage of Test Requirements: build new tests, carefully designed and targeted at specific TR(s).
There are two ways to approach this:
Experience with the first has been: the output drivers usually don't have precise enough control of the individual pattern of elements, and manual touchup is almost always required.
From CGM experience, an interesting source of Test Purposes, and possibly even test materials, are instances which have:
These indicate trouble areas, where implementers are likely to misinterpret or incorrectly implement the specification.
Beyond the static rendering module, we should be able to leverage methodology, or tests, or both, from such resources as the DOM test suite [6]. This we intend to do for the DOM tests. CSS [7] should even be applicable in static rendering.
This is meant to be a cookbook for writing the test cases for a functional module. Functional modules will generally correspond to chapters in the SVG spec. These techniques were prototyped with the Path chapter, and have been applied in the generation of BE-level tests for the whole spec.
Note. When we did the Path chapter, an exhaustive TR extraction was one of the first actions. This is not necessary prior to BE test case specification, and the TR analysis is postponed until after the BE test case generation in what follows.
Implicitly or explicitly, formally or informally, in this order or another which you prefer, you will go through these steps for writing BE tests:
DT test generation involves more rigor and more systematic methodology than BE test generation. The basic steps for DT tests are similar, with some differences at the beginning:
The rest follows as for BE tests. The difference between the DT outline and the BE outline is in the rigor and thoroughness of the first steps, which are deciding "what to test".
The naming convention for SVG tests is: chapter-focus-{BE|DT|ER|DM}-NN (NN=01, 02, ...).
Decide how to subdivide your functional area ("chapter") into subsections -- "focus" sections.
Chapter is self explanatory -- a one-word, though possibly complex, name for your document chapter or functional area. Examples: path, coordSystem, clipMaskComposite.
Focus is simple to specify in some cases: shapes-rect-...; path-lines-...; filters-feColorMatrix-...
In some cases, focus might not seem obvious. However, the "focus" component of the name is always required.
"{BE|DT|ER|DM}" indicates that exactly one of the two-letter test type designators is to be used, BE or DT or ER or DM. The numbering runs consecutively throughout the chapter, it does not restart with each focus subsection.
Note. Use "camel case" for compound words for chapter and focus. The first letter is lower case, and the first letter of subsequent words is upper case. Examples: clipMaskComposite, radialGradient, textAnchor.
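A tiny helper can apply the camel-case rule mechanically (purely illustrative; no such tool is part of the suite):

```python
def camel_case(words):
    """Join words per the naming rule: first word lower case,
    first letter of each subsequent word upper case."""
    words = [w.lower() for w in words]
    return words[0] + ''.join(w.capitalize() for w in words[1:])

print(camel_case(["clip", "mask", "composite"]))  # clipMaskComposite
print(camel_case(["radial", "gradient"]))         # radialGradient
print(camel_case(["text", "anchor"]))             # textAnchor
```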
Remember, at the BE level, we are only trying to verify that the interpreter or viewer has implemented the given functional area. Therefore, we focus on the major functional pieces of the chapter.
There are no firm rules as to what comprises a BE test and what is DT -- sometimes it's a judgement call. However, here are some helpful guidelines:
Example. The 'path' element has a "d" attribute, which can contain a number of commands: Mm, Ll, Zz, Hh, Cc, ... It also has the "nominalLength" attribute, which is unique to Path. The BE tests for path give a basic exercise of these attributes and commands, including verification that the implementation understands the concept of subpath (holes and islands).
Note that there are also attributes like "style", "class", "transform", which are functionalities widely applicable to other parts of SVG. It is a judgement call, but in the path BE tests, we avoid these details -- they will in fact be attacked in their own modules, such as Styling, Transform, etc. (In other words, we don't deal with them extensively in the Path tests, not even in the DT tests.)
We adopted a principle for the Path tests: just enough styling -- basic colors, etc -- to make the tests visually less grim (than b/w, one-pixel wide lines, no fill). This principle is applicable for all tests -- do not unnecessarily clutter the test with functionalities which are unrelated to the functionality being tested.
When starting to design the BE tests, look at the existing very simple examples in the SVG spec for starters.
The guidelines in this section apply to DT, ER, and DM tests, as well as BE tests.
The test description is only for you, the test designer, unless you're dividing the labor into specification versus production -- different people doing each -- which we actually did on the Path prototype. Nevertheless, experience shows that the challenge of writing the description forces one to design the test in sufficient detail that it is then easy to write SVG content to implement it.
It is the Conceptual Description of the test (see below) which is useful at this early stage (and it might be done after you have sketched/drawn the Test Case).
In the Path prototype, the following format proved useful in describing Test Cases:
See later section for writing the Operator Script (#4).
The Associated Test Requirements (#5) and Document References (#6) are the crux of traceability. You will have generated this information (implicitly, at least) by the time you have designed and written your test case. If you are writing test cases, generate and preserve this information (see, for example, next section and [11]).
Note. Traceability data currently are not integrated into the almost-complete BE suite. This will have to be done (as links into the SVG spec) when both the spec and the test suite have stabilized significantly.
A comprehensive list of Test Requirements is the critical first step in writing a comprehensive set of DT test cases for your functional area.
There is no magic rule. See [9], section 4.2, for an interesting discussion of this. See [11] for a fairly thorough (as far as it went -- it's incomplete) example for the Path chapter.
It starts with an intensive reading of each sentence of your chapter. Weigh the question, phrase by phrase: is there a testable assertion here? If so, highlight it and add it to your list (however you want to manage it). You will also need to read some other chapters, at least:
and maybe others like Accessibility. Depending on your chapter, you might be led off into other sections of the document for requirements, or into other standards (e.g., CSS2).
Build up a list of TRs -- testable assertions -- which might be applicable to a viewer or interpreter. In fact, don't discriminate by "applicable to" at this stage. The SVG spec is written to describe a file format, and the semantic requirements associated with data elements are not always explicitly stated. For example, from the Path chapter:
"The command letter can be eliminated on subsequent commands if the same command is used multiple times in a row."
This is a statement about allowable data configurations within 'path' elements. But combined with the statement from Appendix G, "All SVG static rendering features ... must be supported and rendered ...", a testable assertion about viewers results (and a test purpose can be derived).
You'll be lucky to find any (or many) statements which jump out with a "shall" or "must" -- it doesn't occur once in my (incomplete) list of 59 TRs for Path, in [11].
So for a first pass, pick up anything and everything which looks like it might lead to a Test Requirement on an SVG viewer.
For the Path chapter, I assembled a list of Test Requirements after the first intensive reading, during which I did markup on paper. You can follow this, or do whatever is most agreeable for you.
Each entry in the list contained a document reference, and the text of the TR -- I did cut-and-paste against the HTML version of the document for the latter (note: there is some danger of volatility with this, at this stage of the document).
Simple example:
Reference: 10.3.2.Mmtable
Statement: (x,y)+ -- Mm must be followed by one or more x,y pairs.
Lengthier example:
Reference: 10.4.p1.b2
Statement: nominalLength= The distance measurement (A) for the given 'path' element computed at authoring time. The SVG user agent should compute its own distance measurement (B). The SVG user agent should then scale all distance-along-a-curve computations by A divided by B.
I used the following ad-hoc notation for referencing Test Requirements: pN.bN.sN, where pN = paragraph N, bN = bullet N, sN = sentence N. Plus unambiguous constructions like "Mmtable" to point at tables and table entries.
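To keep such references machine-checkable, one could parse them with a small routine. The grammar assumed below (leading numeric section pieces, then pN/bN/sN components, with anything else kept as an opaque label like "Mmtable") is my reading of the ad-hoc notation, not an agreed format:

```python
import re

# Parse references like "10.4.p1.b2" into (section, parts). Components
# matching pN/bN/sN are decoded; anything else (e.g. "Mmtable") is
# kept as an opaque label. The exact grammar is an assumption.
PART_RE = re.compile(r'^([pbs])(\d+)$')
KIND = {'p': 'paragraph', 'b': 'bullet', 's': 'sentence'}

def parse_tr_ref(ref):
    pieces = ref.split('.')
    # Leading numeric pieces form the spec section number.
    section = []
    while pieces and pieces[0].isdigit():
        section.append(pieces.pop(0))
    parts = []
    for piece in pieces:
        m = PART_RE.match(piece)
        if m:
            parts.append((KIND[m.group(1)], int(m.group(2))))
        else:
            parts.append(('label', piece))
    return '.'.join(section), parts

print(parse_tr_ref("10.4.p1.b2"))     # ('10.4', [('paragraph', 1), ('bullet', 2)])
print(parse_tr_ref("10.3.2.Mmtable")) # ('10.3.2', [('label', 'Mmtable')])
```

Having references in a parseable form would make it easier to generate the traceability links into the spec mechanically, once the spec stabilizes.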
See [11] for the example of the complete listing of those Test Requirements (TR) which pertain to geometry and syntax of Path (extracted from the 19990914 SVG spec.)
We want to turn the TR list into a number of Test Cases which exhaustively cover the requirements in the TR list.
The first step, implicitly or explicitly, is derivation of a set of Test Purposes associated with the TR list. Example:
TR: "Mm must be followed by one or more x,y pairs"
TP: "Verify that interpreters correctly handle Mm with one (x,y) pair, or several pairs, or many pairs."
Note that, because of the "Error Processing" requirement for Path, this suggests another TP: "Verify that interpreters respond correctly to invalid Mm data combinations" (such as "M x L x y", or "M L x y", or "M x y x y x Z").
This leads to another point. The general conformance requirements of Appendix G imply a list of "Generic Test Purposes" (see [4.3]):
Keep this list at hand while you are looking at your TR list and deciding what to test, i.e., deciding Test Purposes. (This list might be extended -- suggestions welcome.)
I have been a bit informal with my "TP list", but I still keep track of what I have covered on the TR list and the generic TP list, so that I know when I'm done. See, for example, [11], the section, "Detailed Drill-down Tests (DT) for Line Commands."
The final principle here, once you have an idea of what you're going to test and how, is to put a reasonable number of "atomic" tests together to make a Test Case. This will reduce the number of individual test cases and increase their content density. The guiding principles should be:
The first release has been made of a simple XML grammar for describing tests, and the XSLT stylesheet for producing the HTML page.
These are the "first generation" production tools. If you process the XML instances with CreateHTMLHarness.xslt, you will get HTML pages which pull together and present the PNG reference images, the operator scripts, and navigation buttons for the suite. Processing the XML instances with the parallel SVG-harness stylesheet yields a corresponding set of SVG pages with SVG elements for navigation buttons, and inclusion by reference of the test case SVG instances themselves.
Note. A simple modification of the CreateHTMLHarness will allow you to 'embed' (non-standard HTML tag) the SVG side-by-side with the PNG, if an SVG plugin is available for your browser.
I have been using the XT tool of James Clark (get it from his Web site). You can use whatever tool you prefer, but a caveat -- I have been warned that different XSLT processors may give inconsistent results. This is not to say that XT is correct, but for now it is my "reference tool".
A "manual" SVG template has been released as well.
The scheme is still being developed, and will eventually lead to automatic generation of the SVG skeleton file, with some of the details filled in (see next section).
Starting with the static-output-template:
Use good and thorough comments in the SVG content itself to describe what everything is doing.
There may be some test purposes (e.g., the structure-emptySVG-BE-01.svg test) which require no graphical content, in which (only) cases the Legend may be omitted.
See below about the Serial Number.
Note. As described earlier, work is underway on a "Second Generation" of tools, which should allow developers to ignore these template details, as long as the test case content body is submitted in a correct SVG with correct coordinate space, etc. But for now, write into the template.
The Operator Script comprises a few sentences and is written as one or more paragraphs of the XML instance (see earlier section) for the test case.
Once again, there are no firm rules. However, the Operator Script can address any or all of:
#2 could conceivably be: "picture should look like the PNG". However, some specifics could be pointed out, such as (for an accuracy test): "All lines should pass through the cross-hairs", or "Vertexes should be at the locations of the markers".
#3 and #4 go together. If there are allowable deviations of the rendered SVG from the PNG, it should be stated (e.g., maybe a style falls back to the default style sheet, which can vary).
About #6, optionality. If a test is exploring an optional or recommended feature, that should be clearly indicated right at the beginning of the operator script.
#7 refers to a brief description of SVG functionalities, other than the one under test, which are used in the test file instance.
In addition to the other purposes of the Operator Script, a well-detailed Operator Script can be useful as an aid to accessibility (#8).
This section is likely to develop and evolve further. It should ultimately be a repository of successful methods for getting PNG reference images, which have been discovered by you, the test developers.
When you develop a test case, you submit to the repository the PNG reference image, along with a description of how you generated the PNG.
By far, the most common method of generating the PNG reference image is:
The screen capture method relies on having an SVG viewer which can correctly display the picture. Often, in these early days of implementation development, this is not possible -- no SVG viewer can handle the test instance correctly.
However, it is often the case that there is another SVG file which is exactly equivalent (pictorially). These are called "patch" files, and are named, for example: structure-nestedSVG-BE-02-patch.svg (actual example).
Example: viewer doesn't correctly establish the origin of the user space for a simple test of nested SVG elements. Then compensate for the viewer error by changing the coordinates of the innermost graphical elements so that the viewer positions them (graphically) correctly.
Example: viewer defaults something wrong. Then compensate by explicitly setting that value (assuming that the viewer does this correctly).
Example: multi-stop gradients don't work, but two-stop are correct. Make a "-patch" file where the correct picture of a multi-stop gradient is built by stringing together multiple two-stop gradients.
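The gradient example might look like the following sketch. This is a hypothetical "-patch" fragment, not taken from an actual test case: a three-stop gradient (black to red to white) is approximated by two two-stop gradients, each painted over one half of the target rectangle.

```xml
<!-- Hypothetical "-patch" sketch: a 3-stop gradient approximated by two
     2-stop gradients over adjacent halves of the rectangle. Gradient
     coordinates use the default objectBoundingBox units, so each
     gradient spans its own rect. -->
<defs>
  <linearGradient id="leftHalf" x1="0" y1="0" x2="1" y2="0">
    <stop offset="0" stop-color="black"/>
    <stop offset="1" stop-color="red"/>
  </linearGradient>
  <linearGradient id="rightHalf" x1="0" y1="0" x2="1" y2="0">
    <stop offset="0" stop-color="red"/>
    <stop offset="1" stop-color="white"/>
  </linearGradient>
</defs>
<rect x="0"   y="100" width="225" height="100" fill="url(#leftHalf)"/>
<rect x="225" y="100" width="225" height="100" fill="url(#rightHalf)"/>
```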
Test contributors should submit any "-patch" files, along with the PNG files and "how to" description.
This technique has been used by some contributors, before the development of SVG viewers was very advanced:
With this method, you should be on guard against accuracy issues, as the SVG and the PNG result from independent and disjoint drawing pipelines.
A variant of this is to use a graphics program to draw just the incorrect piece of the SVG rendering, and then cut-paste with a raster editor to get a complete and correct reference PNG.
This method has been postulated, but (to my knowledge) not yet used by anyone. It would be equally applicable to formats other than CGM, when transcoders are available.
Whether or not this will work for your test case depends on whether the result of step #3 is close enough to the SVG configuration you need for the test (and correct!), and, more importantly, whether the hand-editing of step #4 preserves the graphical accuracy of the rendered picture.
A variant, for simple test cases, would be to hand-code the desired SVG, hand-code (graphically) equivalent clear text CGM, and not use the transcoder at all -- just use hand-coded CGM as a route to a correct picture.
This would only be useful, in place of the previous methods, if none of the existing SVG viewers could get a correct rendering of the desired SVG test case, and it was too difficult to reproduce the desired drawing in a graphics program.
Two aspects of the PNG file generation should have your attention:
The only exceptions for #1 are test cases which specifically deviate from the canonical 450x450 coordinate space, in order to test viewer handling of different SVG address spaces. In this case, match the SVG test instance.
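To make the canonical coordinate space concrete, a test-case root element would be sketched as below. The 450x450 size is from this document; everything inside the root is a placeholder.

```xml
<!-- Sketch of the canonical test-case coordinate space: the PNG
     reference image should match this 450x450 space exactly. -->
<svg width="450" height="450">
  <!-- test content body goes here -->
</svg>
```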
8-bit PNG should suffice for most tests -- 256 colors are possible. 24-bit PNG is likely to be required for tests such as:
While it might be possible in some of these cases to compute the number of colors required and optimize with PNG-8, it is strongly recommended to be conservative and use PNG-24 if there is any doubt.
For these same cases which require PNG-24, it has been discovered that attention must be given to the color mode of the monitor, if screen capture is being used. On PC Windows systems, for example, noticeable color banding has occurred on some tests when using "High Color" (16-bit) mode, and it disappears if "True Color" (24-bit) mode is used.
During CGM test suite development, a major annoyance and quality problem arose from the difficulty of keeping the reference image (for this SVG test suite, the PNG files) synchronized with the test case (for us, the SVG). Changes made to the latter often weren't reflected by updating the former. Worst of all, there was no way to detect the problem when it occurred.
A "serial number" in the SVG, which is encoded in graphical text, is the solution for this -- it is quick and easy to determine if the PNG corresponds to the SVG file and a given rendering of the SVG file (e.g., printout or screen image) ... assuming that the PNG was generated from the SVG!
The serial number is part of the Legend of the SVG file. At present it must be maintained and updated manually, which is something of a drawback. Nevertheless, the version-control benefits warrant the inconvenience.
Currently, the serial number is identical to a version number -- 1, 2, 3, ... Its maintenance is solely the responsibility of the test suite editor, which somewhat alleviates the error-prone manual aspects.
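A Legend carrying the serial number as graphical text might look like the sketch below. This is hypothetical: the coordinates, the `id`, and the exact layout of the real Legend are assumptions; the point is that the number is rendered text, so it appears in both the SVG rendering and the captured PNG.

```xml
<!-- Hypothetical Legend sketch: the serial number is graphical text,
     so it is visible in the rendering and in any screen-captured PNG. -->
<g id="legend">
  <text x="10" y="440" font-size="12">structure-nestedSVG-BE-02</text>
  <text x="390" y="440" font-size="12">Serial: 3</text>
</g>
```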
Ultimately, automating this is a better idea, e.g., the serial number changes whenever the test case is checked into the repository (and the PNG is then recreated from that version). Automation is being looked at for the second generation of test suite tools and methods.
There are two reasons that these guidelines are provided:
Considering the chapter and its set of test cases as a whole, assess:
Looking at the SVG test cases instances individually, evaluate:
Note. Regarding "Self-documenting (i.e., in rendered content)": the in-picture annotation should describe what is being tested, but should not describe the visual effect (the latter may be done in the Operator Script).
Specifically, this refers to the Operator Script, which is to be evaluated for:
Evaluate at least these criteria for the PNG reference image:
A test which lightly exercises one of the basic functionalities of the SVG specification. Collectively, the BE tests of an SVG functional area (chapter) give a complete but lightweight examination of the major functional aspects of the chapter, without dwelling on fine detail or probing exhaustively. BE tests are intended to simply establish that a viewer has implemented the functional capability.
A test which is intended to show the capabilities of the SVG specification, or a functional area thereof, but is otherwise not necessarily tied to any particular set of testable assertions from the SVG specification.
Also called drill-down tests. DT tests probe for exact, complete, and correct conformance to the most detailed specifications and requirements of SVG. Collectively, the set of DT tests is equivalent to the set of testable assertions about the SVG specification.
See Detailed Test.
An Error Test probes the error response of viewers, especially for those cases where the SVG specification describes particular error conditions and prescribes viewer error behavior.
See Test Requirement.
See Test Requirement.
A testable assertion which is extracted from a standard specification. Also called Semantic Requirement (SR) or Test Assertion (TA) in some literature. Example. "Non-positive radius is an error condition."
A reformulation of a Test Requirement (or, one or more TRs) as a testing directive. Example. "Verify that radius is positive" would be a Test Purpose for validating SVG file instances, and "Verify that interpreter treats non-positive radius as an error condition" would be a TP for interpreter or viewer testing.
As used in this project, an executable unit of the material in the test suite which implements one or more Test Purposes (hence verifies one or more Test Requirements). Example. An SVG test file which contains an elliptical arc element with a negative radius. In practice (and abstractly), the relationship of TRs to TCs is many-to-many.
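The Test Case example in the definition above could be sketched as follows. This is a hypothetical fragment; in particular, it assumes the arc is expressed with the path 'A' command, and the coordinate values are illustrative.

```xml
<!-- Hypothetical Test Case sketch for the Test Requirement
     "non-positive radius is an error condition": the arc's
     x-radius below is negative.
     'A' parameters: rx ry x-axis-rotation large-arc-flag sweep-flag x y -->
<svg width="450" height="450">
  <path d="M 100 200 A -50 50 0 0 1 200 200" stroke="black" fill="none"/>
</svg>
```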
The ability, in a test suite, to trace a Test Case back to the applicable Test Requirement(s) in the standard specification.