> If you're stuck behind such a monster, then use either string
> packing or expect that -your client- is going to have such
> issues. That has nothing to do with the -server- which can
> still be legitimately expected to put namespace definitions
> where we want it to.
But if my server is behind such a firewall - for example if my
institution runs such a firewall (which will become increasingly likely,
especially at commercial institutions) - then it will affect anyone
accessing my server.
> As I understand it, you have to wrap the entire thing in a
> <Signature> element, then you have another service which
> checks the signature, unwraps it and returns the original
> xml? And this is -required- to return the canonical form?
No - you add a signature digest in the SOAP header but you don't change
the XML tree (though possibly the serialized XML) of the SOAP:body - that
way any client can still understand the response, but a WS-Security-aware
client will be able to check the signature. However, part of this process
is to convert the message part you are signing to (exclusive) canonical form.
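Concretely, the shape is roughly this (an abbreviated sketch based on the
WS-Security / XML-Signature specs; the wsu:Id value "theBody" is made up,
and the elided namespace URIs and several required child elements such as
ds:SignatureMethod are omitted):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security xmlns:wsse="...">
      <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:SignedInfo>
          <!-- the digest is computed over the *canonical* form of the body -->
          <ds:CanonicalizationMethod
              Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
          <ds:Reference URI="#theBody">
            <ds:DigestValue>...</ds:DigestValue>
          </ds:Reference>
        </ds:SignedInfo>
        <ds:SignatureValue>...</ds:SignatureValue>
      </ds:Signature>
    </wsse:Security>
  </soap:Header>
  <!-- the body tree itself is untouched, so non-WS-Security clients still work -->
  <soap:Body wsu:Id="theBody" xmlns:wsu="...">
    <SRW:searchRetrieveResponse xmlns:SRW="...">...</SRW:searchRetrieveResponse>
  </soap:Body>
</soap:Envelope>
```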
An example of a signed SRW response is below (as I'm signing the whole
response, I have to move all namespaces up to SRW:searchRetrieveResponse
- in this case I've used string packing to avoid moving the record's
namespaces up as well). Strictly, the signing process needn't actually
send canonical XML for the part it signs - it only has to compute the
digest over the canonical form - but in practice it probably will.
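To illustrate what canonical form buys you, here's a sketch in Python
(using the stdlib's C14N 2.0 canonicalizer rather than the Exclusive C14N
that WS-Security actually specifies, but the principle is the same; the
namespace URI is made up):

```python
import hashlib
import xml.etree.ElementTree as ET

# Two legal serializations of the same XML tree: different prefixes,
# attribute order and whitespace inside the tags.
a = '<s:body xmlns:s="urn:example" id="1" ><s:rec/></s:body>'
b = '<t:body id="1" xmlns:t="urn:example"><t:rec   /></t:body>'

# Canonicalization maps both onto one byte stream, so a digest taken
# over the canonical form survives any tree-preserving reserialization.
ca = ET.canonicalize(a, rewrite_prefixes=True)
cb = ET.canonicalize(b, rewrite_prefixes=True)

print(ca == cb)  # True
print(hashlib.sha256(ca.encode()).hexdigest())
```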
>You could have a client behind a firewall that turns the
>message into morse code and transmits it via ham radio
>for all the server and the protocol specification could care.
And if you are using something like XOP (optimised binary transmission
of XML), something like this may indeed be happening. But such a thing
would transmit the XML tree, *not* the literal XML (XOP certainly encodes
the XML tree rather than the literal XML), so the thing at the other end
converting back from morse code to XML will render the same XML tree but
not necessarily the same literal XML.
The client and server have no control over what might happen to the
literal XML in transit: XOP routers, XML-over-morse routers etc. are only
required to preserve the XML tree, not the literal XML text.
So an SRW client cannot rely on anything in the literal XML beyond the
XML tree, unless it has full control over the route from server to
client. This is why signing needs a canonical form.
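A short Python sketch of what such a tree-preserving hop does to the
bytes (I'm using the SRW namespace URI here, but any would do):

```python
import xml.etree.ElementTree as ET

# What arrived on the wire...
wire = ('<SRW:record xmlns:SRW="http://www.loc.gov/zing/srw/">'
        '<SRW:recordData>x</SRW:recordData></SRW:record>')

# ...after an intermediary parses and reserializes it.
tree = ET.fromstring(wire)
out = ET.tostring(tree, encoding='unicode')

print(out == wire)  # False - ElementTree picked its own prefixes,
# yet both serializations parse to the same tree, which is all a
# conformant intermediary is required to preserve.
```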
> But the whole reason we're using the XML flavour of
> rather than (say) BER-encoded GRS-1 is to make a world where
> this kind of hack can work.
This kind of hack has never worked (properly) - in XML you have
namespaces, entities and other such references which will break it -
and even when this kind of hack does work, it is definitely regarded
as *bad* practice (far worse than putting entities in the root
elements, which is both common practice and canonical form).
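A quick Python illustration of how namespaces and entity references
break string matching on XML (the namespace URI is made up):

```python
import xml.etree.ElementTree as ET

# Two legal serializations of the same tree: default namespace vs
# explicit prefix, entity reference vs character reference.
docs = [
    '<r xmlns="urn:dc"><title>A&amp;B</title></r>',
    '<dc:r xmlns:dc="urn:dc"><dc:title>A&#38;B</dc:title></dc:r>',
]

# A string hack sees two different documents...
print(['<title>' in d for d in docs])  # [True, False]

# ...while a namespace-aware parser sees the same element in both.
for d in docs:
    assert ET.fromstring(d).find('{urn:dc}title').text == 'A&B'
```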
Incidentally, it was interoperability problems caused by "laziness" (SOAP
tools which broke if the namespace prefix wasn't the string "SOAP", for
example) that motivated the WebService Interoperability profile work...
> it seems at best rude for SRW
> servers or hypothetical intermediaries to build XML which,
> while technically legal, spoils this property.
These are not hypothetical intermediaries - these are real products you
can buy today (http://www.reactivity.com/products/index.html,
http://www.xtradyne.com/products/ws-dbc/ws-dbc.htm). Such products will
become increasingly common as the industry worries about security.
We could take the DDTT approach - but do we want to position SRW in a
way that means SRW servers have trouble supporting WS-Security; SRW
servers won't work behind WebService firewalls; and accepted XML bad
practice is SRW good practice?
Sample signed SRW response (elided, "..." marks gaps):

<SRW:searchRetrieveResponse xmlns:SRW="..."
    xmlns:wsse="..." xmlns:wsu="..." xmlns:ds="...">
  ...
  <title>Sound and fury : the making of the ...
  ...
  <details>Not authorised to send record</details>
  ...
  <DIAG:message>Result set created with valid partial results ...