I think one has jumped the track on this one... 

What does an uptime scalar tell me? Nearly every ISP likes to announce numbers
like >99% uptime, yet as we know that silly number is little indication of
quality. It can mean 36 seconds of downtime per hour, or nearly 14.4 minutes a
day, or 1.68 hours a week down, or... It also says nothing of when. All at
once? Every hour? Every week? Or every six months (a weekend off-line every
six months)? How often is the uptime measure itself measured? And what does
"up" mean? A server running and handling requests, but under heavy load and
unresponsive (long queues and high latency), with just a trickle of available
bandwidth, is "up", is it not?
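To make the arithmetic concrete, here is a minimal sketch (names and structure are my own) of the downtime budget a 99% uptime figure permits over different measurement intervals:

```python
# 99% uptime permits very different downtime patterns depending on
# the interval over which the percentage is computed.

INTERVALS = {              # interval name -> length in seconds
    "hour": 3600,
    "day": 86400,
    "week": 7 * 86400,
    "six months": 182 * 86400,
}

def downtime_allowed(uptime_pct: float, interval_seconds: int) -> float:
    """Seconds of downtime permitted per interval at a given uptime %."""
    return interval_seconds * (1.0 - uptime_pct / 100.0)

for name, secs in INTERVALS.items():
    minutes = downtime_allowed(99.0, secs) / 60
    print(f"99% uptime over one {name}: {minutes:.1f} minutes down")
```

The same scalar covers everything from 36 seconds per hour to roughly a weekend off-line every six months.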

Instead of "reliability" as a measure of uptime, I think we should talk about
the reliability of a resource as its ability to provide the information requested.

A server that's off-line, for example, can't provide any answers, but being
always available does not make a target better than one that is regularly
off-line at specific hours (just to skew our conception of uptime), as long as
it's there when I submit my request. Uptime from the perspective of personal
perception (in contrast to the whole different set of objectives held by
administrators and network managers) has no impact on my perception of
availability or reliability as long as the target is there when I want it. If
I don't want it, its availability does not matter either. And if someone gives
me the answer I want before I need to go there, it does not matter whether the
target was up, since it was not needed. What does an uptime scalar tell me,
unless it's telling me that the target is always online, or up 99% of the
time...?

These values, however, need to be from MY perspective and not that of the target!

From MY perspective as a broker to a network of servers--- federated search---
uptime, like other network measures, is important, but it is measured by and
relative to MY own network. A server might be online and even reachable by
somebody, but perhaps not by me or my network. The networks are NOT neutral.
This is not just a feature of countries like China with their Great Firewall;
we've recently seen some nasty routing rules appear on some US networks
(affecting both IP traffic and DNS records). And there are server policies and
job priorities (depending upon the caller, the time of day and the state of
the server, different search resources may be provided).

For search routing, uptime is not the sole measure--- it's not even that
interesting--- we're really more interested in traffic flow,
latency/performance etc. and, of course, our perception of target
information/search reliability (again "reliability" with the semantics of
providing a good answer to a search request, viz. "information content and
search quality"). The target can't tell me these, but I can measure them.

Moral: "Reliability" (whatever it means) does not belong in ZeeRex.

On Mon, 12 Apr 2010 17:22:00 +0100, Mike Taylor wrote
> We have found it useful, in our IRSpy register of Z39.50 and SRU
> targets, to add a measure of "reliability" for each server, expressed
> as a percentage and measuring what proportion of all the connections
> we've tried to make have been successful.  Using this, we can search
> for only those targets that are up, say, 90% of the time.  (This
> searching facility is not yet wired out to the public Web UI at
> but it will be.)
> In order to enable searching in this way via SRU, we need to add a
> "reliability" index -- so far as we can determine, there is no such
> index in any of the existing context sets.  This seems like a good
> match for ZeeRex, which is all about describing databases and the
> services that provide them, so we propose that the new index be added
> to the ZeeRex context set.  We propose a brief, non-prescriptive
> semantics statement like "an integer in the range 0-100 indicating 
> how reliable the server had been found to be".
> --
> As an aside, the LC page about context sets,
> links the ZeeRex set to the location:
> but this URL has gone away since Rob Sanderson left the Cheshire
> project.  So have the Record Metadata set ("rec"), the Network
> Resource Information set ("net"), the Collectable Card Games set
> ("ccg") though that one will probably not cause so many problems, and
> the Relevance Ranking set ("rel").  This is very bad.
> Some, but not all, of those sets are available as old versions on the
> WayBack Machine: for example, there is an old "rec" set at
> but I have not been able to get it to give me an old "zeerex" set.
> For that reason, I have resurrected an old copy of the ZeeRex site as
> it was before I foolishly handed it over to Rob, and it is now
> available on
> In particular, the ZeeRex context set for CQL is at:
> I hope this is useful to more than just me.


Edward C. Zimmermann, NONMONOTONIC LAB
Basis Systeme netzwerk, Munich Ges. des buergerl. Rechts
Umsatz-St-ID: DE130492967