On 8/24/18 1:07 AM, Svensson, Lars wrote:
> Karen,
>
> On Wednesday, August 22, 2018 7:21 PM, Bibliographic Framework Transition Initiative Forum On Behalf Of Karen Coyle wrote:
>
>> The one thing that I always find missing in the discussion of linked
>> data is: what linking will be done? There isn't much use in moving to
>> linked data if our systems don't use it for linking.
>
> This brings me to another important requirement that I usually take for granted but that probably needs to be explicit:
>
> The system must expose dereferenceable URIs for all entities managed in the system. There must also be a way for administrators to disable this for all or some kinds of entities (e.g. resources that must not be shared).
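Dereferenceable URIs in practice mean answering an HTTP request on an entity URI with either HTML or RDF, depending on the client's Accept header. A minimal sketch in Python — the media-type table is illustrative, and a production system would also honour q-values and wildcard ranges per RFC 7231:

```python
# Sketch: choosing a representation for a dereferenceable entity URI.
# Hypothetical media-type table; real content negotiation would also
# honour q-values and wildcard ranges (RFC 7231).
SUPPORTED = {
    "text/turtle": "turtle",
    "application/rdf+xml": "rdf-xml",
    "application/ld+json": "json-ld",
    "text/html": "html",
}

def negotiate(accept_header: str, default: str = "html") -> str:
    """Return the representation to serve for a given HTTP Accept header."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip().lower()
        if media_type in SUPPORTED:
            return SUPPORTED[media_type]
    return default
```

An entity URI would then serve (or 303-redirect to) Turtle for a linked-data client and HTML for a browser, from the same URI.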
>
>> Admittedly, some of
>> the resources we might like to link to either aren't open or aren't yet
>> linked data. I would think that one might want to link from courseware
>> to the library. I would also think that the library might want to link
>> to Wikipedia via Wikidata, and to Wikicite. I suppose there could be
>> some interest in linking to geographic databases and other data sets.
>
> I think so, too, but don't have use cases for it. I also don't think that all linking can/should be made from within the system but that there needs to be an import interface in place, possibly supporting Beacon or other link formats.
Yes, this. There can definitely be a difference between the internal
formats of the system and what is exposed as linked data. There's a lot
that one might not choose to expose - I happen to know folks who work on
banking data, and although they use linked data internally they
obviously don't expose it as LoD. For libraries, it may only make sense
today to expose data that has widely known identifiers, like VIAF or
DOI. I'm just riffing here, but I think this is an area that needs more
thought.
kc
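On the Beacon link dumps Lars mentions above: BEACON is a line-oriented text format with `#KEY: value` header lines followed by one link per line, the fields separated by `|`. A much-simplified parser sketch in Python — it treats PREFIX and TARGET as plain string prefixes and ignores the spec's URI templates; the identifiers and target URL are illustrative:

```python
def parse_beacon(text):
    """Parse a simplified BEACON dump into metadata and (source, target) links.

    Simplification: PREFIX and TARGET are applied as plain string prefixes;
    the real BEACON spec allows {ID} URI templates and more header fields.
    """
    meta, links = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            key, _, value = line[1:].partition(":")
            meta[key.strip().upper()] = value.strip()
        else:
            fields = line.split("|")
            source = meta.get("PREFIX", "") + fields[0]
            target_id = fields[-1] if len(fields) > 1 and fields[-1] else fields[0]
            links.append((source, meta.get("TARGET", "") + target_id))
    return meta, links
```

An import interface could consume such dumps in bulk and turn each pair into an owl:sameAs or rdfs:seeAlso link inside the system.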
>
>> I'm sure others have even better ideas. To me, this is the key to making
>> a change, and making library catalogs more informative.
>
> +1
>
> Best,
>
> Lars
>
>> On 8/22/18 1:55 AM, Svensson, Lars wrote:
>>> Dear all,
>>>
>>> One of the outcomes of the 2017 European BIBFRAME Workshop [1] was a paper
>> called “BIBFRAME Expectations for ILS tenders” containing suggestions for
>> “requirements for Integrated Library Systems (ILS) vendors to fulfil Linked Data
>> model, with particular focus on BIBFRAME conformance” [2]. Over the past few
>> weeks I have pondered this document a bit and wanted to share my reflections with
>> the community.
>>>
>>> First of all: Tiziana and the Organiser Group for the 2018 European BIBFRAME
>> Workshop have done a really good job putting this together; I found the idea to use the
>> Maturity Model particularly interesting. Thank you!
>>>
>>> That said, I do have some issues with the general direction this paper takes. The
>> main one is that I find it too focused on technology and too little on functional
>> requirements. The paper suggests a transition from a paradigm (1) where cataloguing
>> is done directly in MARC records and where the Integrated Library Systems (ILSs) use
>> a relational database (RDBMS) to store this and any associated data to another
>> paradigm (2) where cataloguing is done in RDF (using the BIBFRAME data model) and
>> ILSs use a triple store to store the necessary information. I’d say that the first
>> assumption isn’t necessarily true (there are quite a few ILSs where data is not stored
>> using an RDBMS and at least some libraries where cataloguing is not done by creating
>> or editing MARC records but using another metadata format that can be converted to
>> MARC if so desired). And I’d also say that the suggestion for a new system paradigm is
>> far too narrow and might even hinder innovation by mandating overly strict
>> requirements on which technology to use. There is, for instance, an emerging
>> technology called graph databases that allows for interesting ways of analysing the data
>> in the graph, including finding the shortest path between two nodes, finding “islands”
>> (graphs or subtrees not connected to any other part of the graph) or loosely connected
>> subtrees (e. g. subtrees that are connected by only one edge). If we mandate the use
>> of a triple store, a vendor would not be able to use this technology and thus would lose
>> the possibility to implement interesting statistical functions. In my opinion, a call for
>> tender should be as technology neutral as possible (at least with regard to system
>> internals).
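To make the analyses mentioned above concrete: shortest paths and "islands" are ordinary graph algorithms. Here is a sketch over a toy adjacency list in Python — in a graph database these would run natively over the stored graph, and the entity names are invented:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: shortest path in an unweighted graph, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nbr in graph.get(path[-1], []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

def islands(graph):
    """Connected components; a component of size 1 is an isolated 'island'."""
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        component, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n not in component:
                component.add(n)
                stack.extend(graph.get(n, []))
        seen |= component
        components.append(component)
    return components

# Toy undirected graph: a work linked to its author, the author to a place.
graph = {
    "work:faust": ["agent:goethe"],
    "agent:goethe": ["work:faust", "place:frankfurt"],
    "place:frankfurt": ["agent:goethe"],
    "work:orphan": [],          # an "island": no links to the rest
}
```

A catalogue-scale version of `islands` is exactly the kind of statistical function a vendor might build on a graph engine — and could not easily build if the tender mandated a specific triple-store product.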
>>>
>>> So what should a call for tender contain instead?
>>>
>>> My take is that it should specify the desired functionality. After all, the interesting
>> thing is what we want the system to do (or at least what we want to do with the
>> system). A non-exhaustive list of things I can think of for a LinkedData-based system
>> would be [3]:
>>>
>>> - The system must be able to import library data in the following formats:
>>> -- MARC 21 (perhaps in different flavours)
>>> -- RDF using the BIBFRAME data model
>>> -- RDF using the RDA data model
>>> -- …
>>>
>>> - The system must be able to export data in the following formats:
>>> -- MARC 21 (perhaps specifying a flavour)
>>> -- RDF using the BIBFRAME data model
>>> -- DC-XML (for use with OAI-PMH)
>>> -- …
>>>
>>> - The system must support the following machine import and export interfaces:
>>> -- SRU/SRW
>>> -- OAI-PMH (synchronising both ways)
>>> -- Z39.50 ;-)
>>> -- W3C WebSub (to ensure the system is “webby”)
>>> -- …
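For illustration, the OAI-PMH side of such an interface is just HTTP plus XML: a client issues ListRecords requests and follows resumption tokens until none remain. A minimal client-side sketch using only the Python standard library — the base URL is hypothetical:

```python
import urllib.parse
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_records_url(base_url, metadata_prefix="oai_dc", token=None):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords"}
    if token:
        # A resumptionToken is exclusive: it replaces all other arguments.
        params["resumptionToken"] = token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urllib.parse.urlencode(params)

def parse_page(xml_text):
    """Extract record identifiers and the resumption token from one response page."""
    root = ET.fromstring(xml_text)
    ids = [e.text for e in root.iter(OAI_NS + "identifier")]
    tok = root.find(".//" + OAI_NS + "resumptionToken")
    return ids, (tok.text if tok is not None and tok.text else None)
```

Harvesting then loops: fetch a page, collect identifiers, and re-request with the returned token until it comes back empty.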
>>>
>>> - The cataloguing module must allow cataloguers to:
>>> -- Connect titles and authorities to other titles and authorities residing inside the local
>> system (e. g. connecting a publication to its successor or to its author; connecting an
>> author to her/his place of birth and the place of birth to the country it’s part of)
>>> -- Connect titles and authorities to other titles and authorities residing in online
>> databases (perhaps mandating a list of search interfaces the system must support)
>>> -- …
>>>
>>> - The system must allow administrators to
>>> -- Configure the data input forms, e. g. restricting which authority files the
>> cataloguers can use
>>> -- Import and export data input configurations and metadata profiles in a
>> standardised format (possibly stating a list of such formats, e. g. SHACL, ShEx, JSON
>> Schema, XML Schema)
>>> -- Seamlessly include third party databases (IEEE Xplore, EconBiz, PubMed, …) into
>> end user search
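SHACL or ShEx would be the standards-based way to express such input-form profiles. Purely to illustrate the idea of a machine-checkable, exportable configuration, here is a hypothetical profile and validator in Python — the field names and authority-file codes are invented:

```python
# Hypothetical input-form profile: which authority files cataloguers may
# link to per field, and whether the field is mandatory.
PROFILE = {
    "author": {"allowed_sources": ["GND", "VIAF"], "required": True},
    "subject": {"allowed_sources": ["LCSH"], "required": False},
}

def validate_record(record, profile=PROFILE):
    """Check a record's authority links against the profile; return a list of errors."""
    errors = []
    for field, rules in profile.items():
        links = record.get(field, [])
        if rules["required"] and not links:
            errors.append(f"{field}: required field has no authority link")
        for source, _identifier in links:
            if source not in rules["allowed_sources"]:
                errors.append(f"{field}: authority file {source!r} not allowed")
    return errors
```

Exporting the same constraints as SHACL shapes would then let another system import the profile unchanged, which is the point of the requirement above.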
>>>
>>> - The system must allow end users to:
>>> -- Search in both library internal content and third party databases using a single,
>> united search interface
>>> -- Export bibliographic citations into third party citation management systems
>>> -- Subscribe to an RSS/Atom feed for new content matching a custom search
>>> -- …
>>>
>>> To me, this approach would have the advantages that
>>> 1) The customers need to think about what they really want the system to do
>>> 2) The vendors can concentrate on implementing this functionality in a way they are
>> comfortable with, instead of having to focus on new technology.
>>>
>>> What do you think?
>>>
>>> Best,
>>>
>>> Lars
>>>
>>> [1] https://wiki.dnb.de/display/EBW/Documents+and+Results
>>> [2]
>> https://wiki.dnb.de/download/attachments/125433008/BIBFRAME_Expectations_for_ILS
>> _Tenders.pdf
>>> [3] Some of those and much more can be found in the 2013 list of use cases and
>> requirements: http://bibframe.org/documentation/bibframe-usecases/
>>>
>>>
>>> *** Read. Listen. Know. Deutsche Nationalbibliothek ***
>>>
>>
--
Karen Coyle
[log in to unmask] http://kcoyle.net
m: +1-510-435-8234
skype: kcoylenet/+1-510-984-3600