There are several uptimes to consider: server (hardware), network and service.

Service uptime is a protocol-based check: a process connects to the server and
port, interacts with the service and checks the response.
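
A minimal sketch of such a check (Python; the host, port and probe bytes
below are placeholders, not any real service):

    import socket

    def service_up(host, port, probe=b"", expect=b"", timeout=5.0):
        # Connect at the TCP level; optionally send a protocol-level
        # probe and look for an expected marker in the reply.
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                if not probe:
                    return True            # the port answered at all
                s.sendall(probe)
                reply = s.recv(4096)
                return expect in reply     # the service talked back sensibly
        except OSError:
            return False

    # e.g. a crude HTTP probe against a hypothetical host:
    # service_up("search.example.org", 80, b"HEAD / HTTP/1.0\r\n\r\n", b"HTTP/")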

This check can be run from the machine itself, from within a network, or from a
heterogeneous network of remote monitoring stations (which pragmatically fuses
server, network and service uptime together). Also.. it's not just the search
protocol that needs to be monitored but also the DNS.. etc.
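
The DNS side, for instance, can fail independently of the service itself; a
sketch of a separate resolution check (the hostname again a placeholder):

    import socket

    def dns_resolves(hostname):
        # A service can be up while its name no longer resolves.
        try:
            socket.getaddrinfo(hostname, None)
            return True
        except socket.gaierror:
            return False

    # dns_resolves("search.example.org")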

I strongly doubt that most sites have global site monitoring or are in a
position to implement it. There are a good number of companies offering Web
monitoring services, but the statistics they produce are not completely
comparable, we're not really Web, and I don't think we have a market.. and
we'd need to agree on what a "good" response is (within a time window, or..?)..
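
One plausible (and entirely arbitrary) reading of "good": a check only counts
as up if it also answers inside a fixed window:

    import time

    def good_response(check, window=2.0):
        # "Up" only if the check both succeeds and answers within
        # `window` seconds; the 2s default is an arbitrary assumption.
        start = time.monotonic()
        ok = check()
        return ok and (time.monotonic() - start) <= window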

And even those that have global monitoring in place.. I'm not too sure that
they would be keen to publish the data, as it can be quite company sensitive.
ISPs publish data, but only when it's part of the sales blurb or part of the
service agreements..


On Wed, 14 Apr 2010 00:18:27 +0000, Peter Noerr wrote
> Back to the original suggestion, after the rather ironic detour this 
> thread took...
> 
> Such numbers would be useful to us as a fed search service. We 
> actually maintain this sort of data for all the Sources we connect 
> to, by means of an active checking program of our own, so it would 
> not add greatly to our own practices, but it would be useful to have 
> the site's own idea of how often it thought it was available, and it 
> would be useful to the vast majority of systems which had no 
> justification to set up monitoring programs.
> 
> Which leads to the question of what this "percentage reliability" is 
> actually measuring and how? The aforementioned power outage and 
> servers playing doorstops obviously counts as "unavailable", but 
> what if they were still happily running on their (long life) UPS,
> while the router was down? From the outside world's point of view
> both are bad, but how does the server check itself from outside? And 
> is this a time average, a moving average, a snapshot, based on 
> number of tries irrespective of time, or just whatever the server 
> thinks is a good idea (better than nothing - probably)?
> 
> Peter
> 
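
For what it's worth, the arithmetic differs quite a bit between those options.
A sketch of two of them over periodic check samples (the interval and window
are arbitrary assumptions):

    def overall_percentage(history):
        # history: list of (timestamp, up_bool) samples from periodic
        # checks. Fraction of all tries that succeeded, time ignored.
        return 100.0 * sum(up for _, up in history) / len(history) if history else 0.0

    def moving_percentage(history, now, window=86400.0):
        # Same, but only over the last `window` seconds (a moving average).
        recent = [up for t, up in history if now - t <= window]
        return 100.0 * sum(recent) / len(recent) if recent else 0.0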


--

Edward C. Zimmermann, NONMONOTONIC LAB
Basis Systeme netzwerk, Munich Ges. des buergerl. Rechts
http://www.nonmonotonic.net
Umsatz-St-ID: DE130492967