> -----Original Message-----
> From: Z39.50 Next-Generation Initiative [mailto:[log in to unmask]] On Behalf Of
> Matthew J. Dovey
> Sent: Wednesday, April 06, 2005 3:52 AM
> To: [log in to unmask]
> Subject: Re: Server Identification in the Explain Record
>
>
> > it does make sense for the server to
> > set up a specific method of communicating with a specific
> > client - if there are enough of them, and there is some gain
> > to one or both sides. This may be in reduced overhead for
> > particular clients because they don't need everything (a full
> > html user screen for returned results for example is a
> > serious waste of resources for a MSE/SE communication, when
> > an XML structure will be of more use to both sides)
>
> The example is not a good one since SRU/W would never return a html user
> screen.
It was a deliberately out-of-scope example, in the larger context of the
above excerpt, to show that MSEs have to deal with SRU/W servers in a more
mixed context.
> However the principle that the server can send additional
> information/do additional processing for a particular client is already
> catered for in the request ExtraData mechanisms
> (http://www.loc.gov/z3950/agency/zing/srw/extra-data.html). However,
> rather than the server doing such things in a non-deterministic way
> based on recognising the client, the otherInfo mechanism allows the
> client to explicitly request the server to return additional
> information/do additional processing) and also allows the server to
> indicate to the client whether it recognised and acted upon the request,
> or simply ignored it. Having the client explicitly request such things
> has the advantages that any client can in principle take advantage of
> the additional functionality (rather than client with a particular
> vendor/version signature - which would lead to clients sending false
> information in order to get at the information, just as I often have to
> get Opera to impersonate IE6 to get websites to work), and also makes
> the server behaviour more deterministic which is a general principle in
> SOA and for interoperability in general.
I believe responding in a manner determined by (some form of)
identification of the client is just as deterministic as responding to a
particular request from the client. In fact it may be a stricter
determinism, as the client will only receive what its identification by the
server allows it to be sent. It can send all the otherInfo requests it
likes - if the server does not recognise them, they will not be acted upon.
Hopefully the client will get diagnostic or extraData messages pointing out
the error of its ways.
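To make the contrast concrete, here is a minimal sketch of a server keyed on an explicitly requested extension rather than on a client signature. In SRU, extra request parameters carry an "x-" prefix; the particular parameter name (x-want-holdings) and the helper functions below are invented for illustration.

```python
# Hypothetical sketch: the server acts on extensions the client
# explicitly asked for, never on who the client claims to be.
# x-want-holdings, search() and holdings_for() are invented here.

def search(query):
    # stand-in for a real search backend
    return [{"id": 1, "title": "Example record"}]

def holdings_for(records):
    # stand-in for a holdings lookup
    return {"holdings": [r["id"] for r in records]}

def handle_request(params):
    response = {"records": search(params["query"]),
                "extraResponseData": []}
    # act only on extensions explicitly present in the request
    if "x-want-holdings" in params:
        response["extraResponseData"].append(holdings_for(response["records"]))
    # unrecognised x- parameters are simply ignored, so any client
    # sending the same request receives the same response
    return response
```

Any client can ask for the extra data, and the response depends only on the request, not on a vendor/version signature.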
To my mind this is a major component of an SOA and of SLAs - a particular
class of client is entitled to a particular set or level of services. Being
able to offer different services to different classes of (paying) users is
a very common business model.
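A minimal sketch of that model (the class names, limits and schema labels are invented for this example): the server maps an authenticated client class to its entitled service level, and unrecognised classes fall back to a baseline rather than an error.

```python
# Hypothetical service levels keyed on client class; the classes,
# limits and schema names below are invented for illustration.
SERVICE_LEVELS = {
    "subscriber": {"max_records": 100, "schemas": ["dc", "marcxml"]},
    "anonymous":  {"max_records": 10,  "schemas": ["dc"]},
}

def entitlements(client_class):
    # unrecognised classes get the baseline service, not a refusal
    return SERVICE_LEVELS.get(client_class, SERVICE_LEVELS["anonymous"])
```

A client that misidentifies itself gains nothing it is not authorized for, which is the point of the argument above.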
The security issue is definitely there. Claiming to be someone you are not
has been with us for quite a while, so why should computing be immune?
However, service levels based on client class are not going to make things
less secure than a 'one size fits all' server which will answer any request
as long as you know how to phrase it. Currently MSE identification gets the
MSE virtually nothing, so 'faking it' is not really a problem. In fact the
likely effect is that a real user gets a set of results which are less
comprehensible (and certainly not as pretty), so there is a disincentive
for actual people to pretend to be software. Remember also that there is a
whole layer of authentication/authorization going on as well, so this is
not any sort of backdoor.
>
> > it is
> > useful in the event that a particular implementation sends
> > back less then complete information - there is a 'fall back'
> > position available. (I know that, for example, not sending a
> > schema URI with the record is non compliant for SRU/W, but
> > there will be some programmer out there eventually who
> > decides it is a waste of time. As a community the size of
> > stick we can wave at them is limited. If they have valuable
> > data users will want to access it however 'compliant'
> > they actually are.)
>
> I don't think we should encourage people to be lazy in their
> implementation of the protocol. That would lead to the sort of
> indeterminate behaviour which makes z39.50 so difficult to debug. In
> this case you may think you have a fully working client but it turns out
> that your client is full of bugs but the server is being too forgiving
> and it is only until your high profile customer tries it on a different
> server that the bugs come to light.
>
> It is also the case that there are a number of firewalls emerging which
> are WebService (both SOAP and REST) aware and will check incoming and
> outgoing messages very strictly for conformance. So whilst your server
> may be forgiving the lazy/buggy client may still not be able to
> communicate with you (or may mysteriously break overnight and be a
> diagnostic nightmare because someone has improved the security of your
> network with such a beast - an avoidable nightmare if everyone has stuck
> to strict adherence of the specification).
>
I agree entirely that we wish to ensure that "compliant" systems are
compliant. Z39.50 is a good example of the nastiness. My point is only that
the real world is nasty and we will get partially compliant "compliant"
systems which will have to be dealt with. I suspect the firewalls et al.
will not be a factor, as the organisations which are technically savvy
enough, and good enough citizens, to institute those measures are not the
ones who will produce lazy/buggy software. And if such strict checks are
deployed on the server side, is keeping potential customers away in the
name of upholding standards a good ROI?
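The 'fall back' position mentioned earlier can be sketched on the client side. This is only an illustration: the fallback URI shown is the standard SRU Dublin Core schema identifier, but treating it as the default when a non-compliant server omits the schema is this example's own assumption.

```python
# Hypothetical client-side fallback for a non-compliant server that
# omits the (mandatory) recordSchema from a returned record.
DEFAULT_SCHEMA = "info:srw/schema/1/dc-v1.1"  # SRU Dublin Core identifier

def record_schema(record):
    schema = record.get("recordSchema")
    if not schema:
        # non-compliant response: note it, but keep working
        return DEFAULT_SCHEMA
    return schema
```

A strict client would instead reject the record outright; the trade-off between the two behaviours is exactly the lenient-server debate above.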
Peter
Dr Peter Noerr
Chief Technical Officer
Museglobal, Inc.
[log in to unmask]
www.museglobal.com