That depends on the particular engineering that is done (or not done) between a given retrieving site and a given source of information. If remote information is cached or indexed, then it should be available (as of the date of the cache or index) when the remote site isn't. If it has been made accessible from more than one access point, then an alternate could pick up the trail. Otherwise, perhaps not. Keep in mind that good network architecture (in this context) suggests the provision of a network of caches and indexes, so that no one institution has to bear the work of "caching the Web". This is discussed very interestingly in Roy Fielding's famous dissertation on the technical architecture of the Web [1], although he doesn't discuss indexing in particular.
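The cache-and-fallback idea above can be sketched in a few lines. This is a minimal illustration, not a real linked-data client: the `CachingResolver` class, its injectable `fetch` callable, and the in-memory cache dict are all hypothetical names chosen for the example.

```python
# Sketch: resolve a URI, falling back to a locally cached copy
# when the remote source is unreachable. All names are illustrative.

from typing import Callable, Optional


class CachingResolver:
    """Resolve URIs; serve the last cached copy if the source is down."""

    def __init__(self, fetch: Callable[[str], str]):
        self.fetch = fetch               # e.g. an HTTP GET; raises on failure
        self.cache: dict[str, str] = {}  # uri -> last successfully fetched body

    def resolve(self, uri: str) -> Optional[str]:
        try:
            data = self.fetch(uri)
            self.cache[uri] = data       # refresh the cache on every success
            return data
        except OSError:
            # Remote unavailable: fall back to the cache (as of its date),
            # or return None if this URI was never retrieved before.
            return self.cache.get(uri)
```

A resolver like this returns the cached record when the link is severed, and nothing at all for a URI it has never seen, which is exactly the "perhaps not" case above.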

It's a little like what you might find when your bank's website goes down. If you have off-line copies of your account data, you may turn to them. But the bank's records are still the most authoritative information, even when they aren't available to you.


---
A. Soroka
The University of Virginia Library

On Sep 25, 2013, at 4:58 PM, J. McRee Elrod wrote:

> In the last few days my use of the Internet has had two interruptions:
> A virus caused me to be taken to advertisements, as opposed to the
> site I had identified through a Web search.  A train derailment in
> Saskatchewan severed an optic cable, interfering with Web access in
> Western Canada.
> In the brave new world of linked data, will such interruptions affect
> patron access to bibliographic data, assuming the data must be
> assembled from a variety of sources?
>   __       __   J. McRee (Mac) Elrod ([log in to unmask])
>  {__  |   /     Special Libraries Cataloguing   HTTP://
>  ___} |__ \__________________________________________________________
