You raise entirely legitimate concerns.

We have produced a linked data catalogue at NTNU where we use linked data throughout the system, from cataloguing to end product. We faced exactly your questions.

The result — though still very rough [1] — consumes and produces data.

The APIs are accessed by content negotiation and include:

3) HTML with RDFa and Microdata (Schema.org)
4) Search APIs

The reason for all of these APIs is that we're still unsure as to how “being part of the web” actually manifests itself. We’re consuming RDF with the application, but we can see that this needn’t be the case.
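To make the routing concrete, here is a minimal sketch of how Accept-header-based content negotiation can select one of several representations. The media types and representation names are illustrative assumptions, not a description of our actual implementation.

```python
# Illustrative mapping from media types to representations; the
# names here are hypothetical, not our production handlers.
SUPPORTED = {
    "text/turtle": "turtle",
    "application/rdf+xml": "rdfxml",
    "application/json": "json",
    "text/html": "html+rdfa",
}

def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, highest q first."""
    entries = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        entries.append((media_type, q))
    entries.sort(key=lambda e: e[1], reverse=True)
    return entries

def negotiate(accept_header):
    """Return the representation to serve for a given Accept header."""
    for media_type, q in parse_accept(accept_header):
        if q > 0 and media_type in SUPPORTED:
            return SUPPORTED[media_type]
    # Fall back to the human-readable view when nothing matches.
    return "html+rdfa"

print(negotiate("text/turtle;q=0.9, text/html;q=0.5"))  # turtle
print(negotiate("*/*"))                                 # html+rdfa
```

A real server would of course also handle wildcards like `text/*` and emit 406 responses where appropriate; the point is simply that one URI can serve machine- and human-readable views alike.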

Note that for us, getting to a situation where we could reliably produce even this rough system has involved a large amount of unexpected work. From creating functional data models (extremely simple) to implementing functional workflows in our institution (extremely hard), the original work and its presumed benefits have largely slid out of view and become refocussed on producing a system that simply slots into the current concept of the Web stack and linked data.

I believe that refocussing like this is essential in any “modern” approach, as the benefits we intuited in 2009, based on our knowledge of traditional systems, largely proved dead ends, while the benefit of having usable data via usable APIs has opened our eyes to realistic opportunities such as improving search from within and without, simplifying front-end development and facilitating the creation of re-presentations of the data (i.e. digital exhibitions, thematic pages, etc.). It is also quite clear that advanced systems for physical libraries can be developed upon such a stack, without the cost of de facto commercial solutions.

From our small experience, I can say that producing data is not an impressive feat, nor is using it locally. The really difficult task is twofold. Firstly, making data from outside your control work well for you. We spent a lot of time identifying what needed to be done to supplement the data we consume so that it has the structure and content we need. I can mention — because it is indicative of how we work — that we use SPARQL CONSTRUCT far more than we had ever imagined because we’re manipulating data to our ends. We still have a very long way to go here.
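To give a flavour of the kind of manipulation involved, a CONSTRUCT query can reshape consumed data into the structure a local application expects. The vocabularies and graph shape below are hypothetical, chosen only to illustrate the technique:

```sparql
# Hypothetical reshaping: flatten external Dublin Core-style data
# into the Schema.org shape used internally.
PREFIX dc:     <http://purl.org/dc/terms/>
PREFIX schema: <http://schema.org/>

CONSTRUCT {
  ?book a schema:Book ;
        schema:name ?title ;
        schema:publisher ?publisherName .
}
WHERE {
  ?book dc:title ?title ;
        dc:publisher ?publisher .
  ?publisher schema:name ?publisherName .
}
```

The query both maps between vocabularies and lifts a nested publisher resource into a flat literal — a small example of supplementing external data so it fits local needs.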

Secondly, do not underestimate the extent to which existing workflows really are tainted by the dominance of the technologies we use: MARC, AACR2, ISBD, FRBR & co. have all done their bit to create workflows and systems that simply don’t migrate well. Again, we have made design decisions that we know will come back to bite us, but transition is transition.

It might be pertinent to ask where BIBFRAME ends up in all this. Following the lead of others, we’re willing to produce a BIBFRAME application profile that can be accessed via content negotiation; this is simple work in our system, but I suspect that it will not be used as much as the standard Schema.org representation that is used otherwise throughout the system.

I have rambled a bit, I apologise!

Kind regards,


[1] http://www.ntnu.no/ub/digital/document/ntnu201 

On 26 May 2014, at 10:20, Vladimir Skvortsov <[log in to unmask]> wrote:

Hi, Jorg,

Thank you for your response.
Is there indeed a difference between "publish", "expose" and "exporting data
to the web" as applied to the web? When I wrote "publish", I meant "expose",
"exporting data to the web", etc. as well.

Actually, I could rephrase my question: is it possible for catalog data to
be part of the Semantic Web when standard web tools are not able to
understand what is what among our RDF statements? Or what is the aim of
our efforts?

Thank you anyway, Jorg,

National Library of Russia

Rurik Thomas Greenall
NTNU University Library | NTNU Universitetsbiblioteket
[log in to unmask]