Roy,
I think you are correct in a lot of ways.  It is a huge shift in the way we think about bibliographic data.  There is enormous potential in linked data if it is embraced fully.  When I talk about BIBFRAME with librarians, the first place they go is 'How will I do my job?'.  I usually spend some time telling them they are asking the wrong question!  Sometimes they get it and sometimes not.  The hardest part of any transition is generally not the messy data...it's the people.  It will take a few people brave enough to fully embrace it and make systems work with it before others see it and come around.
For many catalogers, we need visual examples where we can 'lay the path' and show them: this is what it looks like, and this is how it works.
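As one purely illustrative sketch of what such a "lay the path" example might look like (every URI and property name below is invented for the illustration, not official BIBFRAME vocabulary), the shift is roughly from a flat, self-contained record to triples whose objects point at shared entities:

```python
# A record-style description: one flat, self-contained unit.
# Every value is a string that lives only inside this record.
record = {
    "title": "Moby Dick",
    "author": "Melville, Herman, 1819-1891",
    "place": "Washington, DC",
}

# The same information as linked data: subject-predicate-object
# triples whose objects are URIs for shared entities rather than
# repeated strings.  (These URIs are made up for illustration.)
WORK = "http://example.org/work/moby-dick"
triples = [
    (WORK, "title", "Moby Dick"),
    (WORK, "creator", "http://example.org/person/herman-melville"),
    (WORK, "place", "http://example.org/place/washington-dc"),
]

# A system holding triples can follow the creator URI to everything
# else asserted about Melville, instead of re-parsing a name string.
for s, p, o in triples:
    print(s, p, o)
```

The point of the sketch is the linking: the moment two descriptions share the same creator URI, they are connected in the graph, which no amount of matching "Melville, Herman, 1819-1891" strings ever guarantees.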

Let's go break some furniture!  I've got my sledgehammer.

On Fri, Feb 19, 2016 at 3:15 PM, Tennant,Roy <[log in to unmask]> wrote:
Eric,
You created a plausible outline that I'm afraid is missing a rather large and important step. For lack of a better term I'll call it "entification," which is what we call it around here. This might encompass the creation of your own linked data entities or the use of those created by others (such as, dare I say it, OCLC). In other words, Step 5 is deceptively simple when in fact it is devilishly complex.

We witnessed this recently when we took a look at some BIBFRAME records produced by a large research university, and they were punting on the entification. That is, by simply taking records in MARC and translating them to BIBFRAME in a one-to-one operation, you are left with a BIBFRAME record that really isn't linked data at all. You have assertions that are essentially meaningless, as they link to nothing and nothing links to them. How many URIs do you think Washington, DC should have? I would argue one, at the very least within your own dataset, but you won't end up with that unless you take a great deal of time and trouble to do the entification step -- whether minting your own entities or reconciling your data against someone else's, such as LCSH.
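A toy sketch of what the entification step involves (the lookup table, variant strings, and URIs below are all invented for illustration; a real workflow would reconcile against a shared service such as id.loc.gov or VIAF rather than a hand-built dictionary): every string form that names the same entity must resolve to one URI.

```python
# Minimal reconciliation: map the many string forms found across
# MARC records onto a single URI per real-world entity.
# The variant spellings and the URI are invented for illustration.
ENTITY_INDEX = {
    "washington, dc": "http://example.org/place/washington-dc",
    "washington, d.c.": "http://example.org/place/washington-dc",
    "washington (d.c.)": "http://example.org/place/washington-dc",
}

def entify(literal):
    """Return the single URI for a name string, or None if unreconciled."""
    return ENTITY_INDEX.get(literal.strip().lower())

# Three different strings from three records; one entity; one URI.
uris = {entify(s) for s in ["Washington, DC",
                            "Washington, D.C.",
                            "Washington (D.C.)"]}
print(uris)  # a set containing one URI, not three
```

The hard part, of course, is everything the dictionary hides: building or choosing the entity set, handling near-matches and ambiguity, and deciding what to do with the strings that reconcile to nothing.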

I get the sense sometimes that the library community doesn't fully grasp the nature of this transition yet, and it worries me. We need to shake off the shackles of our record-based thinking and think in terms of an interlinked Bibliographic Graph. As long as we keep talking about translating records from one format to another, we demonstrate that we don't yet understand the meaning of linked data: both the transformative potential it has for our workflows and user interfaces, and the plain difficult, time-consuming work that will be required to get us there.

Sure, we at OCLC are a long way down a road that should do a lot to help our member libraries make the transition, but there will be plenty of work to go around. The sooner we fully grasp what that work will be, the better off we will all be in this grand transition. No, let's call it what it really is: a bibliographic revolution. Before this is over there will be broken furniture and blood on the floor. But at least we will be free of the tyrant.
Roy Tennant
OCLC Research




On 2/19/16, 12:15 PM, "Bibliographic Framework Transition Initiative Forum on behalf of Eric Lease Morgan" <[log in to unmask] on behalf of [log in to unmask]> wrote:

>On Feb 19, 2016, at 1:12 PM, Joy Nelson <[log in to unmask]> wrote:
>
>> ...In our system we store MARC as MARC and as MARCXML.  In my initial thoughts on this process, I'm wondering if the system just needs to become more 'agnostic' about the data format.  If I provide BIBFRAME in RDF/XML, then the system should be able to pull out the bits it needs for display.  We would need some logic in the inner workings to deal with various types of XML data, and using an indexer on the system that can handle various XML formats would help in searching by users.  (I'm thinking Elasticsearch here.)   Right now I tend to think of BIBFRAME descriptions as distinct units that would be similar to a MARCXML record.  It is conceivable that there would be an additional layer on top that would store ALL the triples and use some kind of SPARQL querying/searching???  I don't know about that yet.  An ILS has need for a relational database structure since it is transactional.  But...could there be a component that is a graph database???…
>
>
>Very interesting. Thank you, and based on this input, I’ve outlined a possible workflow for creating, maintaining, and exposing bibliographic description in the form of BIBFRAME linked data:
>
>  1. Answer the questions, "What is bibliographic
>     description, and how does it help facilitate the goals
>     of librarianship?"
>
>  2. Understand the concepts of the Semantic Web,
>     specifically, the ideas behind Linked Data.
>
>  3. Embrace & understand the strengths & weaknesses of
>     BIBFRAME as a model for bibliographic description.
>
>  4. Design or identify and then install a system for
>     creating, storing, and editing your bibliographic data.
>     This will be some sort of database application whether
>     it be based on SQL, non-SQL, XML, or a triple store. It
>     might even be your existing integrated library system.
>
>  5. Using the database system, create, store, import/edit
>     your bibliographic descriptions. For example, you might
>     simply use your existing integrated library system
>     for these
>     purposes, or you might transform your MARC data into
>     BIBFRAME and pour the result into a triple store.
>
>  6. Expose your bibliographic description as Linked Data
>     by writing a report against the database system. This
>     might be as simple as configuring your triple store, or
>     as complicated as converting MARC/AACR2 from your
>     integrated library system to BIBFRAME.
>
>  7. Facilitate the discovery process, ideally through
>     the use of a triple store/SPARQL combination, or
>     alternatively directly against the integrated library
>     system.
>
>  8. Go to Step #5 on a daily basis.
>
>  9. Go to Step #1 on an annual basis.
>
>If the profession continues to use its existing integrated library systems for maintaining bibliographic data (Step #4), then the hard problem to solve is transforming and exposing the bibliographic data as linked data in the form of BIBFRAME. If the profession designs a storage and maintenance system rooted in BIBFRAME to begin with, then the problem is accurately converting existing data into BIBFRAME and then designing mechanisms for creating/editing the data. I suppose the latter option is “better”, but the former option is more feasible and requires less retooling.
>
>—
>Eric Lease Morgan



--
Joy Nelson
Director of Migrations

ByWater Solutions
Support and Consulting for Open Source Software
Office: Fort Worth, TX
Phone/Fax (888)900-8944

What is Koha?