Copyright © 2013 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document collects best practices for implementers and users of the XML Signature specification [ XMLDSIG-CORE1 ]. Most of these best practices are aimed at improving security and mitigating attacks; others address the practical use of XML Signature, such as signing XML that doesn't use namespaces.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This document is expected to be further updated based on both Working Group input and public comments. The Working Group anticipates eventually publishing a stabilized version of this document as a W3C Working Group Note.
The practices in this document have been found generally useful and safe. However, they do not constitute a normative update to the XML Signature specification, and might not be applicable in certain situations.
The changes to this document since the last publication on 10 July 2012 are the following:
Added new best practice, see section 4.4 For Signers: When encrypting and signing, use distinct keys.
Restructured to put all Implementation/Application/Sign-Verify best practices in separate sections.
Indicate target (Implementer/Application/Signer/Verifier) in each practice statement.
Editorial updates.
Updated references.
A diff-marked version of this specification which highlights changes against the previous published version is available.
This document was published by the XML Security Working Group as a Working Group Note. If you wish to make comments regarding this document, please send them to public-xmlsec@w3.org (subscribe, archives).
All comments are welcome.
Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
The XML Signature specification [ XMLDSIG-CORE1 ] offers powerful and flexible mechanisms to support a variety of use cases. This flexibility has the downside of increasing the number of possible attacks. One countermeasure to the increased number of threats is to follow best practices, including a simplification of use of XML Signature where possible. This document outlines best practices noted by the XML Security Specifications Maintenance Working Group , the XML Security Working Group , as well as items brought to the attention of the community in a Workshop on Next Steps for XML Security [ XMLSEC-NEXTSTEPS-2007 ], [ XMLDSIG-SEMANTICS ], [ XMLDSIG-COMPLEXITY ]. While most of these best practices are related to improving security and mitigating attacks, others address the practical use of XML Signature, such as signing XML that doesn't use namespaces.
XML Signature may be used in application server systems, where multiple incoming messages are being processed simultaneously. In this situation incoming messages should be assumed to be possibly hostile with the concern that a single poison message could bring down an entire set of web applications and services.
Implementation of the XML Signature specification should not always be literal. For example, reference validation before signature validation is extremely susceptible to denial of service attacks in some scenarios. As will be seen below, certain kinds of transforms may require an enormous amount of processing time and certain external URI references can lead to possible security violations. One recommendation for implementing the XML Signature Recommendation is to first "authenticate" the signature, before running any of these dangerous operations.
Best Practice 1: Implementers: Mitigate denial of service attacks by executing potentially dangerous operations only after successfully authenticating the signature.
Validate the ds:Reference elements for a signature only after establishing trust, for example by verifying the key and validating ds:SignedInfo first.
XML Signature operations should follow this order of operations:
Step 1: fetch the verification key and establish trust in that key (see Best Practice 2 ).
Step 2: validate ds:SignedInfo with that key.
Step 3: validate the references.
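As a rough sketch, the recommended order of operations could be structured as follows in Python. The helper structure and names here (the signature dictionary, "signed-by" tuples) are invented for illustration and are not part of any real XML Signature API; the point is only that no reference dereferencing or transform execution happens before steps 1 and 2 succeed.

```python
class SignatureError(Exception):
    pass

def verify(signature):
    # Step 1: fetch the verification key and establish trust in it,
    # treating the message as hostile (no transforms, no dereferencing yet).
    key = signature["key_info"]
    if not key.get("trusted"):
        raise SignatureError("untrusted key")

    # Step 2: validate ds:SignedInfo with that key, still without touching
    # any ds:Reference URIs or transforms.
    if signature["signed_info_signature"] != ("signed-by", key["name"]):
        raise SignatureError("ds:SignedInfo does not verify")

    # Step 3: only now dereference URIs and run transforms; at this point
    # they can be attributed to the authenticated signer.
    return all(ref["digest_ok"] for ref in signature["references"])
```
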
In step 1 and step 2 the message should be assumed to be untrusted, so no dangerous operations should be carried out. But by step 3, the entire ds:SignedInfo has been authenticated, and so all the URIs and transforms in the ds:SignedInfo can be attributed to a responsible party. However, an implementation may still choose to disallow these operations even in step 3, if the party is not trusted to perform them.
In step 1, if the verification key is not known beforehand and needs to be fetched from ds:KeyInfo , care should be taken in its processing. The ds:KeyInfo can contain a ds:RetrievalMethod child element, and this could contain dangerous transforms, insecure external references and infinite loops (see Best Practice 7 and examples below for more information).
Another potential security issue in step 1 is the handling of untrusted public keys in ds:KeyInfo . Just because an XML Signature validates mathematically with a public key in the ds:KeyInfo does not mean that the signature should be trusted. The public key should be verified before validating the signature value. For example, keys may be exchanged out of band, allowing the use of a ds:KeyValue or X509Certificate element directly. Alternatively, certificate and path validation as described by RFC 5280 or some other specification can be applied to information in an X509Data element to validate the key bound to a certificate. This usually includes verifying information in the certificate such as the expiration date, the purpose of the certificate, checking that it is not revoked, etc.
Key Validation is typically more than a library implementation issue, and often involves the incorporation of application specific information. While there are no specific processing rules required by the XML Signature specification, it is critical that applications include key validation processing that is appropriate to their domain of use.
Best Practice 2: Implementers: Establish trust in the verification/validation key.
Establish appropriate trust in a key, validating X.509 certificates, certificate chains and revocation status, for example.
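To make the kinds of checks mentioned above concrete, the sketch below models application-level key validation in Python. The certificate is represented as a plain dictionary and the revocation list as a set; both are hypothetical stand-ins for real X.509 parsing, CRL/OCSP lookup, and full RFC 5280 path validation, which any production implementation would need.

```python
from datetime import datetime, timezone

# Hypothetical revocation data, e.g. populated from a CRL or OCSP responder.
REVOKED_SERIALS = {"1002"}

def validate_certificate(cert, now=None):
    """Illustrative expiry / purpose / revocation checks on a cert record."""
    now = now or datetime.now(timezone.utc)
    if now > cert["not_after"]:
        return False, "certificate expired"
    if "digitalSignature" not in cert["key_usage"]:
        return False, "wrong certificate purpose"
    if cert["serial"] in REVOKED_SERIALS:
        return False, "certificate revoked"
    return True, "ok"
```

A real verifier would additionally validate the full certificate chain to a trusted anchor before accepting the key, but the domain-specific checks follow the same shape.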
The following XSLT transform contains 4 levels of nested loops, and for each loop it iterates over all the nodes of the document. So if the original document has 100 elements, this would take 100^4 = 100 million operations. A malicious message could include this transform and cause an application server to spend hours processing it. The scope of this denial of service attack is greatly reduced when following the best practices described above, since it is unlikely that an authenticated user would include this kind of transform. XSLT transforms should only be processed for References, and not for ds:RetrievalMethod elements within ds:KeyInfo , and only after first authenticating the entire signature and establishing an appropriate degree of trust in the originator of the message.
<Transform Algorithm="http://www.w3.org/TR/1999/REC-xslt-19991116">
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/">
      <xsl:for-each select="//. | //@*">
        <xsl:for-each select="//. | //@*">
          <xsl:for-each select="//. | //@*">
            <xsl:for-each select="//. | //@*">
              <foo/>
            </xsl:for-each>
          </xsl:for-each>
        </xsl:for-each>
      </xsl:for-each>
    </xsl:template>
  </xsl:stylesheet>
</Transform>
As discussed further below, support for XSLT transforms may also expose the signature processor or consumer to further risks in regard to external references or modified approvals. An implementation of XML Signature may choose not to support XSLT, may provide interfaces to allow the application to optionally disable support for it, or may otherwise mitigate risks associated with XSLT.
Best Practice 3: Implementers: Consider avoiding XSLT Transforms.
Arbitrary XSLT processing might lead to denial of service or other risks, so either do not allow XSLT transforms, only enable them for trusted sources, or consider mitigation of the risks.
Instead of using the XML Signature XSLT transform, deployments can define a named transform of their own, by simply coining a URI in their own domain that can be used as the Algorithm. How that transform is implemented is then out of scope for the signature protocol - a named transform can very well be built in XSLT.
Choosing to name a new transform rather than embedding an XSLT transform in the signature reference has the advantage that the semantic intent of the transform can be made clear and limited in scope, as opposed to a general XSLT transform, possibly reducing the attack surface and allowing alternate implementations.
What may be lost is the general flexibility of using XSLT, requiring closer coordination between signer and verifiers since all will be required to understand the meaning of the new named transform.
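A named transform can be modeled as a simple registry mapping Algorithm URIs to fixed, audited implementations. The sketch below illustrates this in Python; the URI and the comment-stripping transform are invented for illustration, and a real deployment would coin its own URI and implement whatever semantics it has agreed on with its verifiers.

```python
# Registry of deployment-defined named transforms. Unknown URIs are
# rejected outright: there is no generic XSLT engine to attack, only the
# transforms the deployment explicitly registered.
TRANSFORMS = {
    # Hypothetical named transform: strip XML comments from the input.
    "https://example.org/2013/strip-comments": lambda xml: "".join(
        part.split("-->", 1)[1] if i else part
        for i, part in enumerate(xml.split("<!--"))
    ),
}

def apply_transform(algorithm_uri, data):
    if algorithm_uri not in TRANSFORMS:
        raise ValueError("unsupported transform: " + algorithm_uri)
    return TRANSFORMS[algorithm_uri](data)
```

Because the verifier only ever executes code it shipped itself, the attack surface is the registry's own implementations rather than arbitrary attacker-supplied stylesheets.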
The XSLT transform in the example below makes use of the user-defined extension feature to execute arbitrary code when validating an XML Signature. The example syntax is specific to the Xalan XSLT engine, but this approach is valid for most XSLT engines. The example calls "os:exec" as a user-defined extension, which is mapped to the java.lang.Runtime.exec() method, which can execute any program the process has the rights to run. While the example calls the shutdown command, one should expect more painful attacks if a series of attack signatures are allowed. If an implementation of XML Signature allows XSLT processing it should disable support for user-defined extensions. Changing the Transforms element does invalidate the signature. XSLT transforms should only be processed after first authenticating the entire signature and establishing an appropriate degree of trust in the originator of the message.
Example:
<Transforms xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <Transform Algorithm="http://www.w3.org/TR/1999/REC-xslt-19991116">
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:java="java">
      <xsl:template match="/" xmlns:os="java:lang.Runtime">
        <xsl:variable name="runtime"
            select="java:lang.Runtime.getRuntime()"/>
        <xsl:value-of select="os:exec($runtime, 'shutdown -i')"/>
      </xsl:template>
    </xsl:stylesheet>
  </Transform>
</Transforms>
Best Practice 4: Implementers: When XSLT is required, disallow the use of user-defined extensions.
Arbitrary XSLT processing leads to a variety of serious risks, so if the best practice of disallowing XSLT transforms cannot be followed, ensure that user-defined extensions are disabled in your XSLT engine.
The following XPath transform has an expression that simply counts all the nodes in the document, but it is embedded in a special document that has 100 namespaces ns0 to ns99 and 100 <e2> elements. The XPath model expects namespace nodes for each in-scope namespace to be attached to each element, and since in this special document all 100 namespaces are in scope for each of the 100 elements, the document ends up having 100 x 100 = 10,000 namespace nodes. Now in an XPath Filtering transform, the XPath expression is evaluated for every node in the document, so it takes 10,000 x 10,000 = 100 million operations to evaluate this document. Again the scope of this attack can be reduced by following the above best practices.
<dsig:Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
  <dsig:XPath>count(//. | //@* | //namespace::*)</dsig:XPath>
</dsig:Transform>
An implementation of XML Signature may choose not to support the XPath Filter Transform, may provide interfaces to allow the application to optionally disable support for it, or otherwise mitigate risks associated with it. Another option is to support a limited set of XPath expressions - ones which only use the ancestor or self axes and do not compute the string-value of elements. Yet another option is to use the XPath Filter 2.0 transform instead, because in this transform the XPath expressions are only evaluated once, not for every node of the input.
Best Practice 5: Implementers: Try to avoid or limit XPath transforms.
Complex XPath expressions (or those constructed together with content to produce expensive processing) might lead to a denial of service risk, so either do not allow XPath transforms or take steps to mitigate the risk of denial of service.
When an XML Signature is to be verified in streaming mode, additional denial of service attack vectors occur. As an example, consider the following XPath expression that conforms to [ XMLDSIG-XPATH ]: "//A//B". This XPath is intended to select every occurrence of <B> elements in the document that have an <A> element ancestor. Hence, on streaming parsing the document, every occurrence of an <A> element will trigger a new search context for the subsequent <B> element. Thus, an attacker may modify the XML document itself to contain lots of nested <A> elements, i.e. "<A><A><A><A><A><A><A><A><A><A>....". This will result in n search contexts, with n being the number of <A> elements in the document, and hence in O(n^2) comparisons in total. Even worse, if an attacker also manages to tamper with the XPath expression used for selection itself, he can trigger an even more rapid denial of service: an XPath of "//A//A//A//A//A..." causes the number of search contexts to explode to O(2^n).
Hence, besides following Best Practice 1, it is strongly recommended to reduce the use of "wildcard" XPath axes (such as "descendant", "following" etc.) in XML Signatures to a minimum.
Best Practice 6: Implementers: Avoid using the "descendant", "descendant-or-self", "following-sibling", and "following" axes when using streaming XPaths.
The evaluation of such "wildcard" axes may cause an excessive number of evaluation contexts being triggered concurrently when using a streaming-based XPath evaluation engine. Since this may lead to Denial of Service, it is essential that an attacker can not alter the XPaths prior to evaluation (see Best Practice 1), and that the valid XPath expressions reduce the use of these axes to a minimum.
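One way to enforce this practice is to screen XPath expressions before handing them to the evaluation engine. The Python sketch below is a simple textual scan, not a full XPath parser, and the rejected-axis list follows the best practice above; a production implementation would screen the parsed expression tree instead.

```python
import re

# Axes rejected by the best practice above; "//" is the abbreviation for
# /descendant-or-self::node()/ and is rejected for the same reason.
FORBIDDEN_AXES = ("descendant-or-self", "descendant",
                  "following-sibling", "following")

def xpath_is_acceptable(expr):
    """Return False if the expression uses a wildcard axis."""
    if "//" in expr:
        return False
    return not any(re.search(r"\b%s\s*::" % axis, expr)
                   for axis in FORBIDDEN_AXES)
```
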
The ds:KeyInfo of a signature can contain a ds:RetrievalMethod child element, which can be used to reference a key somewhere else in the document. ds:RetrievalMethod has legitimate uses; for example, when there are multiple signatures in the same document, these signatures can use a ds:RetrievalMethod to avoid duplicate ds:KeyInfo certificate entries. However, referencing a certificate (or most other ds:KeyInfo child elements) requires at least one transform, because the reference URI can only refer to the ds:KeyInfo element itself (only it carries an Id attribute). Also, there is nothing that prevents the ds:RetrievalMethod from pointing back to itself directly or indirectly and forming a cyclic chain of references. An implementation that must handle potentially hostile messages should constrain the ds:RetrievalMethod elements that it processes - e.g. permitting only a same-document URI reference, and limiting the transforms allowed.
The following examples are of a loop within a single RetrievalMethod and a loop with two RetrievalMethod elements.
<RetrievalMethod xml:id="r1" URI="#r1"/>

<RetrievalMethod Id="r1" URI="#r2"/>
<RetrievalMethod Id="r2" URI="#r1"/>
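Both loops above can be caught by tracking visited identifiers and bounding the chain length while resolving references. In the Python sketch below the references are modeled as a dictionary mapping an element Id to the Id its RetrievalMethod points at (None meaning the element holds actual key material); this is an illustrative model, not a real ds:KeyInfo parser.

```python
def resolve_key(refs, start, max_hops=5):
    """Follow a chain of RetrievalMethod-style references safely."""
    seen = set()
    current = start
    for _ in range(max_hops):
        if current in seen:
            raise ValueError("cyclic RetrievalMethod chain at " + current)
        seen.add(current)
        target = refs[current]
        if target is None:          # reached real key material
            return current
        current = target
    raise ValueError("RetrievalMethod chain too long")
```

The explicit hop limit also defends against very long (but acyclic) chains, which are just as effective a resource sink as a loop.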
Best Practice 7: Implementers: Try to avoid or limit ds:RetrievalMethod support with ds:KeyInfo .
ds:RetrievalMethod can cause security risks due to transforms, so consider limiting support for it.
An XML Signature message can use URIs to reference keys or to reference data to be signed. Same-document references are fine, but external references to the file system or other web sites can cause exceptions or cross-site attacks. For example, a message could have a URI reference to "file://etc/passwd" in its ds:KeyInfo . Obviously there is no key present in file://etc/passwd, but if the xmlsec implementation blindly tries to resolve this URI, it will end up reading the /etc/passwd file. If this implementation is running in a sandbox, where access to sensitive files is prohibited, it may be terminated by the container for trying to access this file.
URI references based on HTTP can cause a different kind of damage since these URIs can have query parameters that can cause some data to be submitted/modified in another web site. Suppose there is a company internal HR website that is not accessible from outside the company. If there is a web service exposed to the outside world that accepts signed requests it may be possible to inappropriately access the HR site. A malicious message from the outside world can send a signature, with a reference URI like this http://hrwebsite.example.com/addHoliday?date=May30. If the XML Security implementation blindly tries to dereference this URI when verifying the signature, it may unintentionally have the side effect of adding an extra holiday.
When implementing XML Signature, it is recommended to take caution in retrieving references with arbitrary URI schemes which may trigger unintended side-effects and/or when retrieving references over the network. Care should be taken to limit the size and timeout values for content retrieved over the network in order to avoid denial of service conditions.
When implementing XML Signature, it is recommended to follow the recommendations in section 2.3 to provide cached references to the verified content, as remote references may change between the time they are retrieved for verification and subsequent retrieval for use by the application. Retrieval of remote references may also leak information about the verifiers of a message, such as a "web bug" that causes access to the server, resulting in notification being provided to the server regarding the web page access. An example is an image that cannot be seen but results in a server access [ WebBug-Wikipedia ].
When implementing XML Signature with support for XSLT transforms, it can be useful to constrain outbound network connectivity from the XSLT processor in order to avoid information disclosure risks as XSLT instructions may be able to dynamically retrieve content from local files and network resources and disclose this to other networks.
Some kinds of external references are perfectly acceptable, e.g. Web Services Security uses a "cid:" URL for referencing data inside attachments, and this can be considered to be a same document reference. Another legitimate example would be to allow references to content in the same ZIP or other virtual file system package as a signature, but not to content outside of the package.
The scope of this attack is much reduced by following the above best practices, because then only URIs inside a validated ds:SignedInfo section will be accessed. But to totally eliminate this kind of attack, an implementation can choose not to support external references at all.
Best Practice 8: Implementers: Control external references.
To reduce risks associated with ds:Reference URIs that access non-local content, it is recommended to mitigate risks associated with query parameters, unknown URI schemes, or attempts to access inappropriate content.
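A reference URI policy of the kind discussed above can be expressed as a small allow-list check. The Python sketch below permits same-document references and, mirroring the Web Services Security case mentioned earlier, "cid:" attachment references; everything that would touch the file system or the network is rejected. It is an illustrative policy, and each deployment would tune the allowed schemes to its own packaging model.

```python
from urllib.parse import urlparse

def uri_is_acceptable(uri):
    """Allow only same-document and same-message reference URIs."""
    if uri == "" or uri.startswith("#"):
        return True                  # same-document reference
    scheme = urlparse(uri).scheme
    if scheme == "cid":
        return True                  # attachment within the same message
    return False                     # file:, http:, https:, unknown, ...
```
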
The XML Signature specification does not limit the number of transforms, so a malicious message could come in with 10,000 C14N transforms. C14N transforms involve a lot of processing, and 10,000 transforms could starve all other messages.
Again the scope of this attack is reduced by following the above best practices, as an unauthenticated user would first need to obtain a valid signing key and sign a ds:SignedInfo section containing 10,000 C14N transforms.
This signature has 1000 C14N and 1000 XPath transforms, which makes it slow. The document has 100 namespaces ns0 to ns99 and 100 <e2> elements, as in the XPath denial of service example. Since XPath expands all the namespaces for each element, there are 100 x 100 = 10,000 namespace nodes. All of these are processed for every C14N and XPath transform, so the total is 2000 x 10,000 = 20,000,000 operations. Note that some C14N implementations do not expand all the namespace nodes but take shortcuts for performance; to thwart that, this example has an XPath before every C14N.
<Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
  <XPath>1=1</XPath>
</Transform>
<Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
<Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
  <XPath>1=1</XPath>
</Transform>
<Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
<Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
  <XPath>1=1</XPath>
</Transform>
... repeated 1000 times
To totally eliminate this kind of attack, an implementation can choose to have an upper limit of the number of transforms in each Reference.
Best Practice 9: Implementers: Limit the number of ds:Reference transforms allowed.
Too many transforms in a processing chain for a ds:Reference can produce a denial of service effect, so consider limiting the number of transforms allowed in a transformation chain.
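Enforcing such a limit is straightforward. The sketch below caps the per-Reference chain length (the limit of 5 is an arbitrary illustrative value; real profiles rarely need more than two or three transforms) and additionally flags back-to-back C14N transforms, which add cost but no meaning since canonicalization is idempotent.

```python
C14N = "http://www.w3.org/TR/2001/REC-xml-c14n-20010315"

def check_transform_chain(uris, max_transforms=5):
    """Reject transform chains that are too long or pointlessly repeated."""
    if len(uris) > max_transforms:
        raise ValueError("too many transforms: %d" % len(uris))
    for a, b in zip(uris, uris[1:]):
        if a == b == C14N:
            raise ValueError("redundant repeated C14N transform")
    return uris
```
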
As shown above, it is very hard for the application to know what was signed, especially if the signature uses complex XPath expressions to identify elements. When implementing XML Signature, some environments may require a means to return what was signed when inspecting a signature. This is especially important when implementations allow references to content retrieved over the network, so that an application does not have to retrieve such references again. A second dereference raises the risk that what is obtained is not the same -- avoiding it guarantees receiving the same information originally used to validate the signature. This section discusses two approaches for this.
While doing reference validation, the implementation needs to run through the transforms for each reference, the output of which is a byte array, and then digest this byte array. The implementation should provide a way to cache this byte array and return it to the application. This would let the application know exactly what was considered for signing. This is the only recommended approach for processors and applications that allow remote DTDs, as entity expansion during C14N may introduce another opportunity for a malicious party to supply different content between signature validation and an application's subsequent re-processing of the message.
While the above mechanism lets the application know exactly what was signed, it cannot be used by the application to programmatically compare with what was expected to be signed. For programmatic comparison the application needs another byte array, and it is hard for the application to generate a byte array that will match byte for byte with the expected one.
Best Practice 10: Implementers: Offer interfaces for application to learn what was signed.
Returning pre-digested data and pre-C14N data may help an application determine what was signed correctly.
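The byte-array caching approach can be sketched as follows in Python. Here run_transforms is a hypothetical stand-in for the real transform pipeline, and references are modeled as dictionaries; the essential point is that the exact bytes that were digested are kept and handed back to the application.

```python
import hashlib

def validate_references(references, run_transforms):
    """Validate digests and cache the exact signed bytes per reference."""
    cache = {}
    for ref in references:
        data = run_transforms(ref)            # output of the transform chain
        if hashlib.sha256(data).hexdigest() != ref["digest"]:
            raise ValueError("digest mismatch for " + ref["uri"])
        cache[ref["uri"]] = data              # keep the pre-digested bytes
    return cache                              # application inspects this
```

The application can then examine cache entries to see precisely what was covered by the signature, without dereferencing anything a second time.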
A better but more complicated approach is to return the pre-C14N data as a nodeset. This should include the output of all the transforms except the last C14N transform, which should be a nodeset. If there are multiple references in the signature, the result should be a union of these nodesets. The application can compare this nodeset with the expected nodeset. The expected nodeset should be a subset of the signed nodeset.
DOM implementations usually provide a function to check whether two nodes are the same; in some DOM implementations just comparing pointers or references is sufficient, and DOM3 specifies an "isSameNode()" function for node comparison.
This approach only works for XML data, not for binary data. Also the transform list should follow these rules.
The C14N transform should be the last transform in the list. Note that if there is no C14N transform, an inclusive C14N is implicitly added.
There should be no transform which causes data to be converted to binary and then back to a nodeset. The reason is that this would cause the nodeset to be from a completely different document, which cannot be compared with the expected nodeset.
Best Practice 11: Implementers: Do not re-encode certificates, use DER when possible with the X509Certificate element.
Changing the encoding of a certificate can break the signature on the certificate if the encoding is not the same in each case. Using DER offers increased opportunity for interoperability.
Although X.509 certificates are meant to be encoded using DER before being signed, many implementations (particularly older ones) got various aspects of DER wrong, so that their certificates are encoded using BER, which is a less rigorous form of DER. Thus, following the X.509 specification to re-encode in DER before applying the signature check will invalidate the signature on the certificate.
In practice, X.509 implementations check the signature on certificates exactly as encoded, which means that they're verifying exactly the same data as the signer signed, and the signature will remain valid regardless of whether the signer and verifier agree on what constitutes a DER encoding. As a result, the safest course is to treat the certificate opaquely where possible and avoid any re-encoding steps that might invalidate the signature.
The
X509Certificate
element
is
generically
defined
to
contain
a
base64-encoded
certificate
without
regard
to
the
underlying
ASN.1
encoding
used.
However,
experience
has
shown
that
interoperability
issues
are
possible
if
encodings
other
than
BER
or
DER
are
used,
and
use
of
other
certificate
encodings
should
be
approached
with
caution.
While
some
applications
may
not
have
flexibility
in
the
certificates
they
must
deal
with,
others
might,
and
such
applications
may
wish
to
consider
further
constraints
on
the
encodings
they
allow.
XML Signature offers many complex features, which can make it very difficult to keep track of what was really signed. When implementing XML Signature it is important to understand what is provided by a signature verification library, and whether additional steps are required to allow a user to see what is being verified. The examples below illustrate how an errant XSLT or XPath transform can change what was supposed to have been signed. So the application should inspect the signature and check all the references and the transforms, before accepting it. This is much easier if the application sets up strict rules on what kinds of URI references and transforms are acceptable. Here are some sample rules.
For simple disjoint signatures: Reference URI must use local ID reference, and only one transform - C14N
For simple enveloped signatures: References URI must use local ID reference, and two transforms - Enveloped Signature and C14N, in that order
For signatures on base64 encoded binary content: Reference URI must use local ID references, and only one transform - Base64 decode.
These sample rules may need to be adjusted for the anticipated use. When used with Web Services Security (WS-Security), for example, consider the STR Transform in place of a C14N transform, and with SwA attachments the Attachment Content/Complete transform could be used in place of a base64 transform.
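The sample rules above can be encoded as a table of allowed transform chains per signature profile, as in the Python sketch below. The profile names are invented for illustration; the transform URIs are the standard identifiers from the XML Signature and C14N specifications.

```python
C14N      = "http://www.w3.org/TR/2001/REC-xml-c14n-20010315"
ENVELOPED = "http://www.w3.org/2000/09/xmldsig#enveloped-signature"
BASE64    = "http://www.w3.org/2000/09/xmldsig#base64"

# One entry per profile, listing the exact transform chains it accepts.
ALLOWED_CHAINS = {
    "detached":  [(C14N,)],
    "enveloped": [(ENVELOPED, C14N)],
    "binary":    [(BASE64,)],
}

def reference_is_acceptable(profile, uri, transform_uris):
    """Check a ds:Reference against the strict per-profile rules."""
    if not uri.startswith("#"):
        return False                 # all three rules require a local ID
    return tuple(transform_uris) in ALLOWED_CHAINS[profile]
```

Note that the enveloped rule requires the two transforms in that exact order, matching the sample rule above.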
Sometimes ID references may not be acceptable, because the element to be signed may have a very closed schema, and adding an ID attribute would make it invalid. In that case the element should be identified with an XPath filter transform. Other choices are to use an XPath Filter 2 transform, or XPath in an XPointer URI, but support for these is optional. However, XPath expressions can be very complicated, so using an XPath makes it very hard for the application to know exactly what was signed; but again the application could put in a strict rule about the kind of XPath expressions that are allowed, for example:
For XPath expressions: the expression must be of the form ancestor-or-self::elementName. This expression includes all elements whose name is elementName. Choosing a specific element by name and position requires a very complex XPath, and that would be too hard for the application to verify.
Best Practice 12: Applications: Enable verifier to automate "see what is signed" functionality.
Enable the application to verify that what is signed is what was expected to be signed, by providing access to id and transform information.
Consider an application which is processing approvals and expects a message of the following format, where the Approval is supposed to be signed:
<Doc>
  <Approval xml:id="ap">...</Approval>
  <Signature>
    ...
    <Reference URI="#ap"/>
    ...
  </Signature>
</Doc>
It is not sufficient for the application to check that there is a URI in the reference and that the reference points to the Approval, because there may be transforms in that reference which modify what is really signed.
In this case there is an XPath transform that evaluates to zero or false for every node, so it ends up selecting nothing.
Whether this is an error or not needs to be determined by the application. It is an error and the document should be rejected if the application expected some content to be signed. There may be cases, however, where this is not an error. For example, an application may wish to ensure that every price element is signed, without knowing how many there are. In some cases there might be none in the signed document. This signature allows the application to detect added price elements, so it is useful even if there was no such content at the time of signing.
<Doc>
  <Approval xml:id="ap">...</Approval>
  <Signature>
    ...
    <Reference URI="#ap">
      <Transforms>
        <Transform Algorithm="...XPath...">
          <XPath>0</XPath>
        </Transform>
      </Transforms>
    </Reference>
  </Signature>
</Doc>
An XPath evaluation will not raise an exception, nor give any other indication that the XPath selected nothing, even when the expression is misspelled. This is because an XPath parser will interpret misspelled function names as regular XPath tokens, leading to completely different semantics that do not match the intended selection.
<Doc xmlns="http://any.ns"
     xmlns:dsig-xpath="http://www.w3.org/2002/06/xmldsig-filter2">
  <Approval xml:id="ap">...</Approval>
  <Signature>
    ...
    <Reference URI="">
      <Transforms>
        <Transform Algorithm="...xmldsig-filter2">
          <dsig-xpath:XPath Filter="intersect">//*[localname="Approval" and namespace-uri="http://any.ns"]</dsig-xpath:XPath>
        </Transform>
      </Transforms>
    </Reference>
  </Signature>
</Doc>
In this case, the XPath filter looks like it is selecting the Approval element of namespace http://any.ns . In reality it selects nothing at all, since the function should be spelled "local-name" instead of "localname" and both function calls need parentheses () in the correct syntax. The correct XPath expression to match the intent is:
//*[local-name()="Approval" and namespace-uri()="http://any.ns"] .
Since nothing is selected, the digital signature does not provide any data integrity properties. It also raises no exception on either signature generation or on verification. Hence, when applying XML Signatures using XPath it is recommended to always actively verify that the signature protects the intended elements.
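Such an active check can be as simple as evaluating the selection and asserting that it covers the expected elements before signing. The Python sketch below uses xml.etree's limited XPath-like path support as a stand-in for a full XPath engine; the function name and interface are illustrative.

```python
import xml.etree.ElementTree as ET

def assert_covers(doc_xml, path, expected_count):
    """Fail loudly if a selection does not cover what we intend to sign."""
    selected = ET.fromstring(doc_xml).findall(path)
    # An empty or wrong-sized selection would silently yield a signature
    # over the wrong content (or over nothing at all).
    if len(selected) != expected_count:
        raise ValueError("selection covers %d element(s), expected %d"
                         % (len(selected), expected_count))
    return selected
```

Run at signing time (and again at verification), this turns the silent empty selection into a hard error.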
Best Practice 13: Applications: When applying XML Signatures using XPath it is recommended to always actively verify that the signature protects the intended elements and not more or less.
Since incorrect XPath expressions can result in incorrect signing, applications should verify that what is signed is what is expected to be signed.
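As an illustrative (non-normative) sketch of this practice, a signer can actively check that the node-set selected by a Reference's transform chain is non-empty and contains the intended element before signing. The helper names below are hypothetical, and ElementTree's limited path syntax stands in for a real XPath filter evaluation:

```python
# Sketch of Best Practice 13 (hypothetical helper names, not a real
# XMLDSIG API): before signing, actively verify that the node-set a
# Reference's transform selects contains the intended elements.
import xml.etree.ElementTree as ET

DOC = """<Doc xmlns="http://any.ns">
  <Approval xml:id="ap">yes</Approval>
</Doc>"""

def selected_nodes(doc_text, etree_path):
    # Stand-in for evaluating a Reference's transform chain; a real
    # implementation would run the actual XPath filter here.
    return ET.fromstring(doc_text).findall(etree_path)

def check_selection(doc_text, etree_path, expected_tag):
    nodes = selected_nodes(doc_text, etree_path)
    if not nodes:
        raise ValueError("transform selected nothing; refusing to sign")
    if not any(n.tag == expected_tag for n in nodes):
        raise ValueError("intended element not covered by the signature")
    return nodes

# A correct selection passes...
check_selection(DOC, ".//{http://any.ns}Approval", "{http://any.ns}Approval")

# ...while a misspelled name silently selects nothing, which the check
# turns into a hard failure instead of an unprotected signature.
try:
    check_selection(DOC, ".//{http://any.ns}Aproval", "{http://any.ns}Approval")
except ValueError as e:
    print("rejected:", e)
```

The essential point is that an empty selection is treated as a fatal error rather than silently producing a signature over nothing.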
Similar to the previous example, this one uses an XSLT transform that takes the incoming document, ignores it, and emits "<foo/>", so the actual Approval content is not signed. Obviously this message must be rejected.
<Doc>
  <Approval xml:id="ap">...</Approval>
  <Signature>
    ...
    <Reference URI="#ap">
      <Transforms>
        <Transform Algorithm="...xslt...">
          <xsl:stylesheet>
            <xsl:template match="/">
              <foo/>
            </xsl:template>
          </xsl:stylesheet>
        </Transform>
      </Transforms>
    </Reference>
  </Signature>
</Doc>
This one is a different kind of problem: a wrapping attack. There are no transforms here, but notice that the Reference URI is not "#ap" but "#ap2", and "#ap2" points to another <Approval> element that is squirreled away in an Object element. An Object element allows any content. An application that checks only the name of the element that the Reference points to will be fooled into thinking that the Approval element is properly signed. It should check both the name and the position of the element.
Best Practice 14: Applications: When checking a reference URI, don't just check the name of the element.
To mitigate attacks where the content that is present in the document is not what was actually signed due to various transformations, verifiers should check both the name and position of an element as part of signature verification.
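A non-normative sketch of this check, using ElementTree purely for illustration: the verifier resolves the referenced xml:id and rejects the message when the element, despite having the expected name, sits in an unexpected position (here, hidden inside an Object element rather than as a child of Doc):

```python
# Sketch of Best Practice 14 (illustrative, not a real XMLDSIG API):
# verify both the name AND the position of the element a Reference
# points to, so an <Approval> wrapped inside <Object> is rejected.
import xml.etree.ElementTree as ET

DOC = """<Doc>
  <Approval xml:id="ap">real</Approval>
  <Signature>
    <Object><Approval xml:id="ap2">forged</Approval></Object>
  </Signature>
</Doc>"""

XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def resolve(root, ref_id, expected_tag, expected_parent_tag):
    # ElementTree has no parent pointers, so build a child -> parent map.
    parent_of = {c: p for p in root.iter() for c in p}
    for el in root.iter():
        if el.get(XML_ID) == ref_id:
            # Checking the name alone is not enough...
            if el.tag != expected_tag:
                raise ValueError("unexpected element name")
            # ...also check where the element sits in the document.
            parent = parent_of.get(el, root)
            if parent.tag != expected_parent_tag:
                raise ValueError("element is in an unexpected position")
            return el
    raise ValueError("no element with that xml:id")

root = ET.fromstring(DOC)
resolve(root, "ap", "Approval", "Doc")           # legitimate reference
try:
    resolve(root, "ap2", "Approval", "Doc")      # wrapped forgery
except ValueError as e:
    print("rejected:", e)
```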
<Doc>
  <Approval xml:id="ap">...</Approval>
  <Signature>
    ...
    <Reference URI="#ap2"/>
    ...
    <Object>
      <Approval xml:id="ap2">...</Approval>
    </Object>
  </Signature>
</Doc>
Electing to sign only portions of a document opens the potential for substitution attacks.
Best Practice 15: Applications: Unless impractical, sign all parts of the document.
Signing all parts of a document helps prevent substitution and wrapping attacks.
To give an example, consider the case where someone signed the action part of a request but did not include the user name part. An attacker can then take the signed request as is, change the user name, and resubmit it. These replay attacks are much easier when only a small part of the document is signed. To prevent replay attacks, it is recommended to include user names, keys, timestamps, and similar data in the signature.
A second example is a "wrapping attack" [ MCINTOSH-WRAP ] where additional XML content is added to change what is signed. An example is where only the amounts in a PurchaseOrder are signed rather than the entire purchase order.
Best Practice 16: Applications: Use a nonce in combination with signing time.
A nonce enables detection of duplicate signed items.
In many cases replay detection is provided as part of application logic, often as a by-product of normal processing. For example, if purchase orders are required to have a unique serial number, duplicates may be automatically discarded. In these cases, it is not strictly necessary for the security mechanisms to provide replay detection. However, since application logic may be unknown or change over time, providing replay detection is the safest policy.
Best Practice 17: Applications: Do not rely on application logic to prevent replay attacks since applications may change.
Supporting replay detection at the security processing layer removes a requirement for application designers to be concerned about this security issue and may prevent a risk if support for replay detection is removed from the application processing for various other reasons.
Nonces and passwords must fall under at least one signature to be effective. In addition, the signature should include at least a critical portion of the message payload, otherwise an attacker might be able to discard the dateTime and its signature without arousing suspicion.
Best Practice 18: Applications: Nonce and signing time must be signature protected.
A signature must include the nonce and signing time in the signature calculation for them to be effective, since otherwise an attacker could change them without detection.
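As a non-normative illustration of why this matters, the sketch below folds the payload, nonce, and signing time into the bytes that are actually MACed (XMLDSIG would of course carry these as signed XML elements; the framing and field names here are illustrative only):

```python
# Sketch of Best Practice 18: include the nonce and signing time in the
# bytes that are actually MACed, so tampering with either is detected.
# (Illustrative framing only; XMLDSIG carries these as signed XML.)
import hmac, hashlib

KEY = b"shared-secret"  # hypothetical shared key

def sign(payload: bytes, nonce: bytes, signing_time: bytes) -> bytes:
    # Length-prefix each field so field boundaries are unambiguous.
    data = b"".join(len(f).to_bytes(4, "big") + f
                    for f in (payload, nonce, signing_time))
    return hmac.new(KEY, data, hashlib.sha256).digest()

payload = b"<Approval>yes</Approval>"
nonce, when = b"n-1234", b"2013-04-09T12:00:00Z"
mac = sign(payload, nonce, when)

# A verifier recomputes the MAC over the same three fields; an attacker
# who alters the signing time (or nonce) no longer matches.
assert hmac.compare_digest(mac, sign(payload, nonce, when))
assert not hmac.compare_digest(mac, sign(payload, nonce, b"2099-01-01T00:00:00Z"))
```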
Web Services Security [ WS-SECURITY11 ] defines a <Timestamp> element which can contain a Created dateTime value and/or an Expires dateTime value. The Created value represents an observation made by the sender. The Expires value is more problematic, as it represents a policy choice which should belong to the receiver, not the sender. Setting an expiration date on a Token may reflect how long the data is expected to be correct or how long the secret may remain uncompromised. However, the semantics of a signature "expiring" are not clear.
WSS provides for the use of a nonce in conjunction with hashed passwords, but not for general use with asymmetric or symmetric signatures.
WSS sets a limit of one <Timestamp> element per Security header, but there can be several signatures. In the typical case where all signatures are generated at about the same time, this is not a problem; however, SOAP messages may pass through multiple intermediaries and be queued for a time, so this limitation could create problems. In general, senders should ensure, and receivers should assume, that the <Timestamp> represents the first (oldest) signature. It is not clear how, if at all, a <Timestamp> relates to encrypted data.
Best Practice 20: Applications: Long lived signatures should include an xsd:dateTime field to indicate the time of signing just as a handwritten signature does.
The time of signing is an important consideration for use of long-lived signatures and should be included.
Note that in the absence of a trusted time source, such a signing time should be viewed as indicating a minimum, but not a maximum, age. This is because we assume that a time in the future would be noticed during processing. So even if the time does not indicate exactly when the signature was computed, it at least indicates the earliest time the signature might have been made available for processing.
It is considered desirable for ephemeral signatures to be relatively recently signed and not to be replayed. The signing time is useful for either or both of these. The use for freshness is obvious; signing time alone is not ideal for preventing replay, since, depending on its granularity, duplicates are possible.
A better scheme is to use a nonce together with a signing time. The nonce is checked to see whether it duplicates a previously presented value. The signing time allows receivers to limit how long nonces are retained (or how many are retained).
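A non-normative sketch of this scheme follows; the data structures and the five-minute retention window are illustrative choices, not part of any specification:

```python
# Sketch of the nonce + signing-time scheme described above: the nonce
# detects duplicates, and the signing time bounds nonce retention.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # illustrative retention window

class ReplayChecker:
    def __init__(self):
        self.seen = {}          # nonce -> signing time

    def accept(self, nonce: str, signing_time: datetime, now: datetime) -> bool:
        # Forget nonces whose signing time fell out of the window; the
        # signing time is what bounds how many nonces must be remembered.
        self.seen = {n: t for n, t in self.seen.items() if now - t <= WINDOW}
        if now - signing_time > WINDOW:
            return False        # too old: stale or possible replay
        if nonce in self.seen:
            return False        # duplicate nonce: replay
        self.seen[nonce] = signing_time
        return True

rc = ReplayChecker()
t0 = datetime(2013, 4, 9, 12, 0, 0)
assert rc.accept("n-1", t0, t0)                     # fresh message
assert not rc.accept("n-1", t0, t0)                 # replayed nonce
assert not rc.accept("n-2", t0, t0 + WINDOW * 2)    # stale signing time
```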
Best Practice 21: Applications: When creating an enveloping signature over XML without namespace information, take steps to avoid having that content inherit the XML Signature namespace.
Prevent enveloped content from inheriting the XML Signature namespace, either by inserting an empty default namespace declaration into that content or by using a namespace prefix for the XML Signature namespace.
When creating an enveloping signature over XML without namespace information, the enveloped content may inherit the XML Signature namespace from the enclosing Object element, which is not the intended behavior. There are two potential workarounds:
Insert an xmlns="" namespace definition in the legacy XML. However, this is not always practical.
Insulate it from the XML Signature namespace by defining a namespace prefix on the XML Signature (ex: "ds").
This was also discussed in the OASIS Digital Signature Services technical committee, see https://lists.oasis-open.org/archives/dss/200504/msg00048.html .
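The second workaround can be illustrated with the following non-normative sketch, which uses Python's ElementTree purely to demonstrate the serialization (a real signature requires a DSIG library). With a "ds" prefix on the Signature elements, the prefix-less legacy content keeps its no-namespace status without needing an xmlns="" declaration:

```python
# Sketch of workaround 2: serialize the Signature with an explicit
# "ds" prefix so enveloped, namespace-less content does not pick up a
# default XML Signature namespace.
import xml.etree.ElementTree as ET

DS = "http://www.w3.org/2000/09/xmldsig#"
ET.register_namespace("ds", DS)

sig = ET.Element(f"{{{DS}}}Signature")
obj = ET.SubElement(sig, f"{{{DS}}}Object")
ET.SubElement(obj, "Approval").text = "legacy content, no namespace"

out = ET.tostring(sig, encoding="unicode")
print(out)
# Signature elements carry the ds: prefix, while <Approval> remains in
# no namespace; no xmlns="" declaration is needed on the legacy XML.
assert "<ds:Signature" in out and "<Approval>" in out
assert 'xmlns="' not in out
```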
Best Practice 22: Applications: Prefer the XPath Filter 2 Transform to the XPath Filter Transform if possible.
Applications should prefer the XPath Filter 2 Transform to the XPath Filter Transform when generating XML Signatures.
The XPath Filter 2 Transform was designed to address the performance issues associated with the XPath Filter Transform and to allow signing operations to be expressed more clearly and efficiently, as well as to help mitigate the denial of service attacks discussed in section 2.1.2. See XML-Signature XPath Filter 2.0 for more information.
Even though XPath Filter 2.0 is not recommended in XML Signature 1.0, implementations may still be able to support it. In this case signers and verifiers may be able to follow this best practice.
Resolving external unparsed entity references can imply network access and can in certain circumstances be a security concern for signature verifiers. As a policy decision, signature verifiers may choose not to resolve such entities, leading to a loss of interoperability.
Best Practice 23: Signers: Do not transmit unparsed external entity references.
Do not transmit unparsed external entity references in signed material. Expand all entity references before creating the cleartext that is transmitted.
Part of the validation process defined by XML Schema includes the "normalization" of lexical values in a document into a "schema normalized value" that allows schema type validation to occur against a predictable form.
Some implementations of validating parsers, particularly early ones, often modified DOM information "in place" when performing this process. Unless the signer also performed a similar validation process on the input document, verification is likely to fail. Newer validating parsers generally include an option to disable type normalization, or take steps to avoid modifying the DOM, usually by storing normalized values internally alongside the original data.
Verifiers should be aware of the effects of their chosen parser and adjust the order of operations or parser options accordingly. Signers might also choose to operate on the normalized form of an XML instance when possible.
Additionally, validating processors will add default values taken from an XML schema to the DOM of an XML instance.
Best Practice 24: Signers: Do not rely on a validating processor on the consumer's end.
Do not rely on a validating processor on the consumer's end to normalize XML documents. Instead, explicitly include default attribute values, and use normalized attributes when possible.
Best Practice 25: Verifiers: Avoid destructive validation before signature validation.
Applications relying on validation should either consider verifying signatures before schema validation, or select implementations that can avoid destructive DOM changes while validating.
Best Practice 26: Signers: When using an HMAC, set the HMAC Output Length to one half the number of bits in the hash size.
Setting the HMAC Output Length of an HMAC to one half the bit length of the hash function increases the resistance to attack without weakening its resistance to a brute force guessing attack.
An HMAC is computed by combining a secret, such as a password, with a hash function over the data to be protected. The HMAC provides authentication and data integrity protection in a shared secret environment. Its security properties depend crucially on the cryptographic properties of the hash algorithm employed. It is widely understood that a collision attack (finding two messages which have the same hash value) on a hash function or an HMAC has a work factor proportional to the square root of the size of the hash value space (i.e., 2^(n/2) for an n-bit hash).
Recently published research has shown that other attacks on an HMAC, such as forgery (being able to compute a correct HMAC value without knowing the key) and key recovery (being able to compute the correct HMAC for any message) may also have a work factor proportional to the square root of the size of the hash value space [ HMAC-Security ]. In other words, the strength of an HMAC is no better than a brute force guessing attack on half the bits in the HMAC value. The same paper demonstrates that reducing the number of bits in the HMAC value available to an attacker, by means of the HMAC Output Length parameter, makes these attacks more difficult or impossible. Prior research has reported the same finding for other attacks on an HMAC.
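A non-normative sketch of the truncation itself: for HMAC-SHA256, half the hash size is 128 bits, corresponding to an HMACOutputLength of 128 in XMLDSIG terms (the key and message below are placeholders):

```python
# Sketch of Best Practice 26: truncate the HMAC output to half the
# hash size (HMACOutputLength of 128 for HMAC-SHA256).
import hmac, hashlib

def truncated_hmac_sha256(key: bytes, data: bytes) -> bytes:
    full = hmac.new(key, data, hashlib.sha256).digest()   # 256 bits
    return full[:16]                                      # keep 128 bits

tag = truncated_hmac_sha256(b"secret", b"message")
assert len(tag) == 16   # half of SHA-256's 32-byte output

# Verification compares only the truncated value, in constant time.
assert hmac.compare_digest(tag, truncated_hmac_sha256(b"secret", b"message"))
```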
Best Practice 27: Signers: When encrypting and signing, use distinct keys.
If the same key is used for different operations such as signing and encryption, attacks become possible that can allow signatures to be forged; separate (possibly derived) keys should therefore be used for different functions.
Use of state-of-the-art and secure encryption algorithms such as RSA-OAEP and AES-GCM can become insecure when the adversary can force the server to process eavesdropped ciphertext with legacy algorithms such as RSA-PKCS#1 v1.5 or AES-CBC [ XMLENC-BACKWARDS-COMP ]. In this case the attacker may be able to forge valid server signatures if the server decrypts RSA-PKCS#1 v1.5 ciphertexts [ XMLENC-PKCS15-ATTACK ] and the signatures are computed with the same asymmetric key pair.
Accordingly, in situations where an attacker may be able to mount chosen-ciphertext attacks, we recommend applications should always use a different symmetric key for data confidentiality and for data integrity functionality (likewise for public key functions). When use of a single key is planned, key derivation should be used to produce different keys for these functions.
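As a non-normative sketch of such key derivation, distinct keys can be derived from a single master secret using an HMAC with fixed labels; real deployments should use a standardized KDF such as HKDF (RFC 5869), and the labels and master secret below are illustrative:

```python
# Sketch of deriving distinct signing and encryption keys from one
# master secret (simple HMAC-based derivation with fixed labels; a
# standard KDF such as HKDF, RFC 5869, is preferable in practice).
import hmac, hashlib

def derive_key(master: bytes, label: bytes) -> bytes:
    return hmac.new(master, label, hashlib.sha256).digest()

master = b"single provisioned secret"    # illustrative master secret
sign_key = derive_key(master, b"signature")
enc_key = derive_key(master, b"encryption")

# Each function gets its own key; compromise or misuse of one does not
# directly expose the other.
assert sign_key != enc_key
assert len(sign_key) == len(enc_key) == 32
```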
This document records best practices related to XML Signature from a variety of sources, including the W3C Workshop on Next Steps for XML Signature and XML Encryption [ XMLSEC-NEXTSTEPS-2007 ].
Dated references below are to the latest known or appropriate edition of the referenced work. The referenced works may be subject to revision, and conformant implementations may follow, and are encouraged to investigate the appropriateness of following, some or all more recent editions or replacements of the works cited. It is in each case implementation-defined which editions are supported.