BIBFRAME Archives

BIBFRAME@LISTSERV.LOC.GOV
BIBFRAME June 2012

Subject:

GND in RDF (was: Re: Latest brilliant idea (fwd))

From:

Jörg Prante <[log in to unmask]>

Reply-To:

Bibliographic Framework Transition Initiative Forum <[log in to unmask]>

Date:

Tue, 5 Jun 2012 06:06:28 -0400

Content-Type:

text/plain

Parts/Attachments:

text/plain (59 lines)

Hi,

I would like to share my personal opinion. 

The German National Library (DNB) has released the GND in an RDF Turtle Dump under a CC0 
license. More information: 

http://www.dnb.de/DE/Service/DigitaleDienste/LinkedData/linkeddata_node.html

-> "Download der Linked Data Dumps" (Download of the Linked Data dumps)

What does that mean? Each of us can download a GND base set in RDF, put it into a search engine, for example Elasticsearch (indexing 9,493,987 subject-URI-based documents, a total of 97,267,642 triples, took only a few minutes), or into a triple store such as 4store, and start using GND locally as a source for authority control and for combining it with other bibliographic and non-bibliographic data in mashups.
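To make the indexing idea concrete, here is a minimal sketch, assuming a tiny hypothetical N-Triples excerpt rather than the real Turtle dump, and without a live Elasticsearch instance: triples sharing a subject URI are grouped into one JSON document each, the record-like unit a search engine would index, and emitted as Elasticsearch-style bulk lines. All URIs and property names below are illustrative assumptions, not taken from the actual GND vocabulary.

```python
import json
from collections import defaultdict

# A tiny, hypothetical excerpt in N-Triples form; the real GND dump
# is published as Turtle and is far larger.
ntriples = """\
<http://d-nb.info/gnd/118540238> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.org/Person> .
<http://d-nb.info/gnd/118540238> <http://example.org/preferredName> "Goethe, Johann Wolfgang von" .
<http://d-nb.info/gnd/4015701-5> <http://example.org/preferredName> "Ethik" .
"""

def parse_ntriple(line):
    """Very naive N-Triples split into subject, predicate, object."""
    s, p, o = line.rstrip(" .\n").split(None, 2)
    return s.strip("<>"), p.strip("<>"), o.strip('"<>')

# Group all triples that share a subject URI into one "document".
docs = defaultdict(dict)
for line in ntriples.splitlines():
    s, p, o = parse_ntriple(line)
    docs[s].setdefault(p, []).append(o)

# Emit Elasticsearch-style bulk lines (action line + source line).
bulk = []
for uri, fields in docs.items():
    bulk.append(json.dumps({"index": {"_id": uri}}))
    bulk.append(json.dumps(fields))

print(len(docs), "documents,", len(bulk), "bulk lines")
```

In practice you would POST the joined bulk lines to the search engine's bulk endpoint; the grouping step is the essential move from triples to searchable documents.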

Setting up an OAI client, for example, completes the scenario. By fetching RDF/XML updates from DNB on a regular basis, you will always have the most recent authoritative data. This is not a vision; it is reality.
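The update loop can be sketched with the standard OAI-PMH request shape (verb, metadataPrefix, from, resumptionToken). The base URL and metadata prefix below are placeholders, not DNB's actual endpoint, and a canned response fragment stands in for a live fetch:

```python
import urllib.parse
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

def list_records_url(base_url, metadata_prefix, from_date=None, token=None):
    """Build an OAI-PMH ListRecords request URL (standard protocol arguments)."""
    if token:  # resumption requests carry only the token
        params = {"verb": "ListRecords", "resumptionToken": token}
    else:
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        if from_date:
            params["from"] = from_date
    return base_url + "?" + urllib.parse.urlencode(params)

# Canned ListRecords response fragment, standing in for a live harvest.
sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><header><identifier>oai:example.org/118540238</identifier></header></record>
    <resumptionToken>batch-2</resumptionToken>
  </ListRecords>
</OAI-PMH>"""

root = ET.fromstring(sample)
ids = [h.text for h in root.iter(OAI + "identifier")]
token = root.findtext(".//" + OAI + "resumptionToken")

# "RDFxml" is a placeholder prefix; a real harvester would use the
# prefix the repository advertises via ListMetadataFormats.
url = list_records_url("https://example.org/oai", "RDFxml", from_date="2012-06-01")
print(ids, token)
```

A real client would keep requesting with the returned resumptionToken until the repository stops issuing one, then persist the last harvest date for the next incremental run.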

A central triple store would be a major drawback: it does not scale. Each library has many users and applications, and a single triple store would soon collapse under the load. A better strategy is to put URI-based authorities under an open data license and to encourage and enable everybody around the world to use them, too.

With RDF you can organize the data not only in records, but also in a graph of bibliographic entities. Such a graph has a wealth of sub-graphs, attributes, and other properties. If you prefer, you can interpret the RDF graph of the GND as a sequence of subject-URI records, as you would with MARC record collections, for example, to build searchable documents. But you are no longer restricted to the record model. An RDF graph has an abstract semantic interpretation and follows the W3C rules: it describes statements about resources and facts (literals), with rules in ontologies that are themselves part of the Semantic Web.
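The difference between the two readings can be shown in a few lines, using hypothetical triples and illustrative URIs (not the real GND property names): the record view groups statements by subject, while the graph view follows an object URI to another entity's own statements, something a flat record cannot express natively.

```python
from collections import defaultdict

# Hypothetical triples: a person linked to a place of birth, which is
# itself a subject with its own statements (illustrative URIs only).
triples = [
    ("gnd:118540238", "ex:preferredName", "Goethe, Johann Wolfgang von"),
    ("gnd:118540238", "ex:placeOfBirth", "gnd:4018118-2"),
    ("gnd:4018118-2", "ex:preferredName", "Frankfurt am Main"),
]

# Record view: one self-contained "record" per subject URI.
records = defaultdict(list)
for s, p, o in triples:
    records[s].append((p, o))

# Graph view: follow an object that is itself a subject elsewhere.
def follow(subject, predicate):
    for p, o in records[subject]:
        if p == predicate and o in records:  # o is a linked entity
            return o
    return None

birthplace = follow("gnd:118540238", "ex:placeOfBirth")
name = dict(records[birthplace])["ex:preferredName"]
print(name)  # the linked entity's own statement
```

In a MARC-style record the birthplace would be an opaque string; in the graph it is a node you can traverse to and query in its own right.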

By using one of the many RDF serializations, bibliographic data can be packaged for transport purposes. If you need to send such packages over the wire, you can choose between formats such as N-Triples, N3, Turtle, or RDF/XML. You are no longer restricted to the record-centered ISO 2709 format family with its ancient character encodings, or to XML wrappers around ISO 2709 that inherit all of its weaknesses, since they have no way to link to external bibliographic entities or to reference them in a stable, reliable manner.
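As a minimal sketch of such packaging, assuming a single illustrative statement, the same triples can be written out as N-Triples, a line-oriented serialization well suited to transport, and read back losslessly (real N-Triples escaping is more involved than this naive version):

```python
# One illustrative statement; the URI and property are assumptions.
triples = [
    ("http://d-nb.info/gnd/118540238",
     "http://example.org/preferredName",
     '"Goethe, Johann Wolfgang von"'),
]

def to_ntriples(triples):
    """Serialize (s, p, o) tuples: URIs get angle brackets, literals pass through."""
    def term(t):
        return t if t.startswith('"') else "<" + t + ">"
    return "\n".join(
        term(s) + " " + term(p) + " " + term(o) + " ." for s, p, o in triples
    ) + "\n"

def from_ntriples(text):
    """Naive inverse of to_ntriples for this restricted triple shape."""
    out = []
    for line in text.strip().splitlines():
        s, p, o = line.rstrip(" .").split(None, 2)
        out.append((s.strip("<>"), p.strip("<>"), o))
    return out

nt = to_ntriples(triples)
assert from_ntriples(nt) == triples  # lossless round trip
print(nt, end="")
```

The point is that the transport format is just one interchangeable surface over the same graph, unlike ISO 2709, where the record structure and the exchange format are fused together.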

We all know that with the Internet, the massive number of mobile devices, and broadband connectivity, transporting records in file packages from one place to another, as our elders had to do with magnetic tapes for lack of affordable online transport capacity, is becoming more and more the exception. Typical read access to catalog entities today happens as lookups from a growing number of web browsers and other web clients. These clients need to search documents, traverse links, and reference related information in many unforeseeable ways. So methods for bibliographic file packaging should connect seamlessly to such popular use cases, i.e. to how the data is later used on the web.

Technologically, an RDF-based framework makes a remarkable difference: it means that libraries are joining Tim Berners-Lee's effort to interpret the World Wide Web as a global database in which everyone (even machines) can use (bibliographic) entities automatically, simply because they are part of the Web rather than merely being exposed to it.

Best regards,

Jörg
