A skim of the Diffusion of Innovations literature, as adequately summarized at Wikipedia, will signal what awaits folks in the leftmost two sections of the Wikipedia diagram as they try to realize Tim’s thought within a global network of institutions and individuals. Much of the terminology is already familiar to you, but here is the whole framework.
Assuming you have already figured out what you want to do, and how, so that others will perceive it as an innovation, this approach to understanding and fostering adoption has the benefit of being both descriptive and predictive – you can predict which approaches won’t work and which pitfalls to avoid. Understanding the current situation and being proactive in fostering adoption is essential. There are no inevitable, displacing forces in this approach.
Oh, yeah: be aware that every acronym in Tim’s message – especially the ones with X’s in them – has been, or is, considered an innovation in its own right, and has had to work (or is still working) its own way toward widespread adoption.
Building on Tim Thompson’s trenchant observations concerning the use of MARCXML as a lingua franca for legacy bibliographic records: XML supports data maintenance and validation out of the box in ways that the W3C RDF standard currently does not. XQuery and XSLT can both be used for sophisticated data analysis at any scale, in addition to their capacity to transform records – for example, a colleague of mine at UIUC wrote XSLTs to provide structured PROV-ontology data on how other XSLTs moved MARC files into RDF markup.

Some of the monumental validation and data-provider feedback woes of both the Europeana and DPLA projects, in my view, could have been avoided or ameliorated by building comprehensive quality assurance applications on XML’s data typing and reporting capabilities up front, instead of trying to decipher what is wrong with the DC or MARCXML feeds after malformed RDF serializations fail to parse. At the American Theological Library Association, my former employer, we developed a battery of in-house diagnostic tools that worked with our exhaustively documented MARC application profiles to guarantee that the bibliographic and authority data uploads sent to EBSCO met and exceeded their format specs. Doing this up front is both expensive and labor-intensive, to be sure, but the reliable outcome more than justifies the cost.

Those of you whose libraries have relied on copy cataloging for years won’t have to dig far into the catalog database to find substandard MARC records – ones that fail conformance tests for RDA, AACR2, or MARC21 – added pell-mell without systematic QA routines. What will happen if we repurpose them as BIBFRAME RDF using blind transformation scripts and then try, after the fact, to diagnose the parsing errors?
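To make the up-front QA idea concrete, here is a minimal sketch of the kind of pre-flight check I have in mind, written in Python with only the standard library (a real profile-driven validator like ours at ATLA would check far more). The sample record and the two rules shown – a 24-character leader and a 245 $a title – are illustrative assumptions, not a complete conformance test:

```python
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

# A sample MARCXML record (hypothetical data, for illustration only).
SAMPLE = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam a2200000 a 4500</leader>
  <controlfield tag="001">ocm12345678</controlfield>
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">An example title</subfield>
  </datafield>
</record>"""

def validate_record(xml_text):
    """Return a list of QA problems found in one MARCXML record."""
    problems = []
    rec = ET.fromstring(xml_text)
    # Rule 1: the leader must be present and exactly 24 characters.
    leader = rec.find("marc:leader", NS)
    if leader is None or len(leader.text or "") != 24:
        problems.append("leader missing or not 24 characters")
    # Rule 2: every bibliographic record needs a 245 $a title statement.
    title = rec.find("marc:datafield[@tag='245']/marc:subfield[@code='a']", NS)
    if title is None or not (title.text or "").strip():
        problems.append("no 245 $a title")
    return problems

print(validate_record(SAMPLE))  # [] when the record passes
```

Running checks like these against an entire feed before transformation, and reporting failures back to the data provider, is precisely the kind of feedback loop that malformed RDF serializations make impossible to reconstruct after the fact.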
From: Bibliographic Framework Transition Initiative Forum [mailto:[log in to unmask]] On Behalf Of Tim Thompson
Sent: Tuesday, February 03, 2015 10:37 AM
To: [log in to unmask]
Subject: [BIBFRAME] Have your MARC and link it too (was 2-tier BIBFRAME)
Here's a thought (apologies in advance if this message is naive, either from a technical or practical standpoint--it lacks detail and does nothing to provide specific recommendations--but here goes nothing anyway).
Why not make BIBFRAME about the future of bibliographic data rather than its past? Libraries have invested tremendous resources over the years to produce and share their catalog records; it is only natural that they (and the catalogers who have worked so hard to encode those records--and to master the arcane set of rules behind their creation) would want to preserve that investment. Ergo the desire to devise a lossless (or nearly lossless) crosswalk from MARC to an RDF vocabulary (i.e., BIBFRAME). For years, libraries have been driven by just-in-case approaches to their services (certainly in the acquisition of new materials). But when we're dealing with data, do we really need to follow the same costly pattern? Rather than spending additional time and resources to attempt the quixotic task of converting all of MARC into actionable linked data (just in case we might need access to the contents of some obscure and dubiously useful MARC field), why not embrace a just-in-time approach to data conversion?
As Karen has pointed out here, MARC records are structured as documents: much of our access to their contents comes through full-text keyword searching. Now, we already have a standardized way to encode data-rich documents: namely, XML. The MARCXML format already gives us a lossless way to convert our legacy data into an interoperable format. And the W3C has spent the last 15 years developing standards around XML: XQuery 3.1 and XSLT 3.0 are now robust functional programming languages that even support working with JSON-encoded data. Needless to say, the same kind of ecosystem is not available for working with binary MARC. Next-generation Web application platforms like Graphity and Callimachus utilize the XML stack for conversion routines or as a data integration pipeline into RDF linked data. The NoSQL (XML) database MarkLogic (which I believe the Library of Congress itself uses) now includes an integrated triplestore. Archives-centric tools like Ethan Gruber's xEAC also provide a hybrid model for leveraging XML to produce linked data (as an aside: leveraging XML for data integration could promote interoperability between libraries and archives, which continue to rely heavily on XML document structures--see EAD3--to encode their data).
So, why not excise everything from BIBFRAME that is mostly a reflection of MARC and work to remodel the vocabulary according to best practices for linked data? We can store our legacy MARC data as MARCXML (a lossless conversion), index it, link it to its BIBFRAME representation, and then access it on a just-in-time basis, whenever we find we need something that we didn't think was worth modeling as RDF. This would let BIBFRAME be the "glue" that it is supposed to be and would allow us to draw on the full power of XQuery/XSLT/XProc and SPARQL, together, to fit the needs of our user interfaces. This is still a two-tiered approach, but it does not include the overhead of trying to pour old wine into new wineskins (terrible mixed metaphor, but couldn't resist the biblical allusion).
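The just-in-time lookup Tim describes can be sketched in a few lines. This is a toy illustration in Python's standard library: the store is an in-memory dict keyed by hypothetical BIBFRAME resource URIs, standing in for what would really be an indexed XML database, and the XPath stands in for XQuery:

```python
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

# Hypothetical store: BIBFRAME resource URIs keyed to the MARCXML
# records they were derived from. In practice this would be an XML
# database or document index, not an in-memory dict.
marcxml_store = {
    "http://example.org/bibframe/instance/1": """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="300" ind1=" " ind2=" ">
    <subfield code="a">xii, 356 p. :</subfield>
  </datafield>
</record>"""
}

def fetch_marc_field(resource_uri, tag, code):
    """Just-in-time lookup: pull a subfield from the legacy MARCXML
    linked to a BIBFRAME resource, without its ever having been
    modeled as RDF."""
    rec = ET.fromstring(marcxml_store[resource_uri])
    sf = rec.find(f"marc:datafield[@tag='{tag}']/marc:subfield[@code='{code}']", NS)
    return sf.text if sf is not None else None

# e.g. the physical description (300 $a), a field one might not
# have thought worth remodeling as linked data:
print(fetch_marc_field("http://example.org/bibframe/instance/1", "300", "a"))
```

The point of the sketch is the linkage: the RDF graph carries only the remodeled essence, while any MARC field remains retrievable on demand through the stored MARCXML.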
This kind of iterative approach seems more scalable and locally customizable than trying to develop an exhaustive algorithm that accounts for every possible permutation present in the sprawling MARC formats.
Similar suggestions to this may have already been made on this list, but I think it's at least worth reviving the possibility in the context of the current thread. In short: we could extract the essence from our legacy bibliographic records, remodel it, and then, from here on out, start encoding things in new ways, without being beholden to an outmoded standard and approach. All the old data would still be there, and would be computationally tractable as XML, but our new data wouldn't need to be haunted by its ghost.
Tim A. Thompson
Metadata Librarian (Spanish/Portuguese Specialty)
Princeton University Library