Note that Linked Data caches work quite differently from caching as it is known from plain HTTP retrieval. One example of how to cache Linked Data resources is implemented in the open-source software Apache Marmotta.

There is no reason to license linked data access from commercial catalog providers. All the data should be free and open (licensing software and paying for support is another story). With restrictive licensing on the data, catalog linking will not work. This is why many national libraries provide their authority files under a CC0 waiver.

Transporting such publicly available files through the library community (and through the vendors' silos) will be easier than ever. First, with the internet, we have enormous network bandwidth available for cultural data. Second, with Linked Data Fragments - see also - you can quickly pull the data you need into your local system and prepare it for display or editing.

This is very different from the traditional MARC data supply path. You are no longer tied to tedious file imports and exports or to slow and unstable SPARQL endpoints. Linked Data Fragments may be retrieved once per day, or once per hour - the target is the real-time web: with the help of large aggregating Linked Data hubs, authoritative information can be distributed globally at the highest possible speed and reliability.
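To make the retrieval model above concrete, here is a minimal sketch of how a client might address one page of a Triple Pattern Fragment. The endpoint URL and the IRI are invented for illustration; a real Linked Data Fragments server would publish its own base URL, and this only shows how the subject/predicate/object pattern maps onto query parameters.

```python
from urllib.parse import urlencode

# Hypothetical Triple Pattern Fragments endpoint (illustrative only).
BASE = "http://fragments.example.org/authorities"

def fragment_url(subject=None, predicate=None, obj=None, page=1):
    """Build the URL of one page of a triple-pattern fragment.

    Unbound positions are simply omitted, so fragment_url() with no
    arguments addresses the whole dataset, one page at a time.
    """
    params = {"page": page}
    if subject is not None:
        params["subject"] = subject
    if predicate is not None:
        params["predicate"] = predicate
    if obj is not None:
        params["object"] = obj
    return BASE + "?" + urlencode(params)

# A client would fetch such pages on its own schedule (hourly, daily)
# and feed the returned triples into its local store.
url = fragment_url(subject="http://example.org/authority/holmes")
```

Because every fragment is a plain, cacheable HTTP resource, the periodic harvesting described above needs nothing more exotic than ordinary GET requests against URLs like these.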


On Sun, Jun 29, 2014 at 9:21 PM, Simon Spero <[log in to unmask]> wrote:
On Jun 29, 2014, at 8:15 AM, [log in to unmask] <[log in to unmask]> wrote:

It's hard for me to imagine that those using linked library data will be putting what is retrieved from the network directly in front of their patrons without intervention. That's not really technically feasible, whatever the desire may be.

It’s not even really desirable to present the raw data directly to most users in most cases, any more than it would be to present them with a raw MARC record (looking up the tags and offsets in the directory is a real pain). Assembling the entities from streams of RDF triples is almost as painful to do by eye. :)

We will need to be guided by the Web architecture and use a design with caching. If you cache remote linked data resources locally (and if you intend to give your patrons a reasonable experience, you will be caching), you can certainly make emendations on the way into or out of the cache, processing the data in whatever ways you see fit.
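The cache-with-emendations idea above can be sketched in a few lines. This is not any particular library's API - the class name, the plain-tuple triple representation, and the example label fix are all invented for illustration:

```python
# A minimal sketch of a local Linked Data cache that applies local
# "emendations" to each triple on the way in.
class EmendingCache:
    def __init__(self, emend):
        self._store = {}      # resource IRI -> set of triples
        self._emend = emend   # callable applied to each incoming triple

    def put(self, iri, triples):
        self._store[iri] = {self._emend(t) for t in triples}

    def get(self, iri):
        return self._store.get(iri, set())

# Example emendation: patch a label the upstream source got wrong.
def fix_label(triple):
    s, p, o = triple
    if p == "rdfs:label" and o == "Sherlock Holmes, Sir":
        return (s, p, "Holmes, Sherlock")
    return triple

cache = EmendingCache(fix_label)
cache.put("ex:holmes",
          {("ex:holmes", "rdfs:label", "Sherlock Holmes, Sir")})
```

The same hook could equally be applied on the way out of the cache, leaving the stored copy pristine - which choice is right depends on whether you want the emendations to survive a refresh from the remote source.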
[Small but important point:  Web caches aren’t supposed to monkey about with the contents of documents they are caching - see ]   

Exactly! Adding assertions about some entity is one of the things that Linked Data makes really easy. If an IRI denoting some fictional thing lacks the appropriate characterization, you can make the appropriate assertion yourself; for example, adding an assertion that the entity is a member of the class of FictionalThings. This can be published for others to use.
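Making that missing assertion is literally just adding one more triple. A sketch, with bare (subject, predicate, object) tuples and made-up IRIs standing in for real vocabulary terms:

```python
# Asserting locally that an entity is fictional. "ex:" and the class
# name are invented; a real dataset would use its own vocabulary.
RDF_TYPE = "rdf:type"
FICTIONAL = "ex:FictionalThing"

# What the remote authority file gives us:
remote = {
    ("ex:holmes", "rdfs:label", "Holmes, Sherlock"),
}

# The characterization the remote data lacks:
local_assertion = ("ex:holmes", RDF_TYPE, FICTIONAL)

# The combined graph, ready to publish for others to use:
published = remote | {local_assertion}
```

Because RDF graphs are just sets of triples, "publishing" the correction amounts to serving this union (or only the local assertion, for others to merge themselves).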

A well designed future bibliographic network for open data would allow all assertions about a given entity to be retrieved cheaply; indeed, it is probably feasible to keep copies of all assertions locally using simple overlay techniques. Data can be fused from multiple sources.

Commercial cataloging providers might require a subscription to access their data, and strictly license its use and reuse. 


On Jun 28, 2014, at 5:04 PM, J. McRee Elrod <[log in to unmask]> wrote:

The ability to add information locally, to be displayed with the remote data?  The RDA relator terms (MARC $e), for example, would I assume not be in the authority data, but rather stored locally for display with the authority form.  Perhaps the same could be done for missing qualifiers (MARC $c)?
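The local-additions idea above - keeping relator terms ($e) and qualifiers ($c) locally and merging them with the remote authority form only at display time - can be sketched like this. The heading, subfield values, and punctuation are illustrative, not prescribed MARC display rules:

```python
# Remote authority form, as retrieved from the network:
remote_heading = "Holmes, Sherlock"

# Locally stored additions, keyed by the authority form. The subfield
# codes mirror MARC $c (qualifier) and $e (relator term).
local_additions = {
    "Holmes, Sherlock": {"e": "defendant",
                         "c": "(Fictitious character)"},
}

def display_form(heading):
    """Merge local $c and $e with the remote heading for display."""
    extra = local_additions.get(heading, {})
    form = heading
    if "c" in extra:
        form += " " + extra["c"]
    if "e" in extra:
        form += ", " + extra["e"]
    return form
```

The remote authority data stays untouched; only the display layer knows about the local overlay, so a refreshed heading from the network picks up the same additions automatically.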

Of course a better solution would be for LC/PCC to restore those missing (Fictitious character) and species qualifiers.