My guess is that most ILS's (I haven't actually polled them but have
worked with a few) use some form of relational DBMS. However, that
doesn't mean that the data has been normalized "relationally" - in
fact, in systems I worked on, we mainly stored "blobs" in the RDBMS,
and for retrieval built sets of bit strings on top of the RDBMS
indexes to make retrieval efficient enough. Most enterprise systems
use RDBMS's because that's what's on the market, well-tested and
supported. Some are now moving to "NOSQL" DBMS's that are not
relational. These latter have been designed to handle things like
XML documents, and data that isn't as regular as that required by
relational databases. Relational databases are good when you have
data that has lots of repetition, and where most of that repetition
is one-to-many. Our data just isn't like that.
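To make the "blobs plus indexes" pattern concrete, here is a minimal sketch using SQLite. The table and column names are invented for illustration; real ILS schemas are vendor-specific. The point is that the MARC record is stored opaquely, and a narrow side table of extracted keys does the work the relational model never sees.

```python
import sqlite3

# Hypothetical "blob plus index" layout: the whole record is an opaque
# blob; retrieval goes through a separate table of extracted search keys.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bib_record (
        id    INTEGER PRIMARY KEY,
        marc  BLOB              -- the whole record, not normalized
    );
    CREATE TABLE bib_index (
        record_id  INTEGER REFERENCES bib_record(id),
        tag        TEXT,        -- e.g. '245', '100'
        value      TEXT         -- extracted search key
    );
    CREATE INDEX idx_tag_value ON bib_index(tag, value);
""")

raw = b"00120nam a2200061 a 4500..."   # stand-in for a real MARC blob
conn.execute("INSERT INTO bib_record (id, marc) VALUES (1, ?)", (raw,))
conn.execute("INSERT INTO bib_index VALUES (1, '245', 'moby dick')")

# Search hits the index table, then the blob comes back whole.
row = conn.execute("""
    SELECT r.marc FROM bib_record r
    JOIN bib_index i ON i.record_id = r.id
    WHERE i.tag = '245' AND i.value = 'moby dick'
""").fetchone()
print(row[0] == raw)
```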
This D2RQ thing is just a red herring. Moving to linked data is not
just a matter of taking our current data and outputting it in a
different serialization. In fact, my fear is that we will do just
that if we develop BIBFRAME as a "new version of MARC." Sure, we can
write programs to turn MARC into triples -- but that won't get us an
active place in the linked data cloud.
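The mechanical part of "MARC into triples" really is easy, which is exactly why it's a trap. A few lines of code can emit syntactically valid N-Triples from MARC fields; what they can't do is the modeling. The URIs and predicate names below are invented for illustration, not drawn from BIBFRAME:

```python
# A purely mechanical MARC-field-to-triple converter. The vocabulary
# below is invented; a real mapping involves modeling decisions,
# not just a change of serialization.
FIELD_TO_PREDICATE = {
    "100": "http://example.org/vocab/creator",
    "245": "http://example.org/vocab/title",
}

def marc_to_ntriples(record_id, fields):
    """fields: list of (tag, value) pairs pulled from a MARC record."""
    subject = f"<http://example.org/record/{record_id}>"
    triples = []
    for tag, value in fields:
        predicate = FIELD_TO_PREDICATE.get(tag)
        if predicate:
            triples.append(f'{subject} <{predicate}> "{value}" .')
    return triples

for t in marc_to_ntriples("b1234", [("245", "Moby Dick"),
                                    ("100", "Melville, Herman")]):
    print(t)
```

The output is valid linked-data syntax, but it is still MARC thinking: flat strings hung off a record URI, with none of the entity relationships that would give the data an active place in the cloud.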
On 5/30/13 12:16 PM, Mitchell, Michael wrote:
I must have missed that most libraries don't store their
data in relational databases. I thought most of the big ILS
did by now and they would cover most libraries. That's where
MARC goes to rest in our Sirsi-Dynix system after being
rendered apart. Oh well.
I still think a lot of the discussion is directed to
discovery relationships that are pointed the wrong way. Out
from the library rather than in.
The point came up earlier that most libraries don't store their
data in relational databases, so this particular tool won't
help in those cases. Somebody else argued that most
relational databases are unmappable into anything useful, but
I find that hard to believe.
Why are we going through all these changes with RDA and
Bibframe? These changes have all been touted as a way to
make our data accessible to others on the Web. This is
apparently what D2RQ does, so why don't we fine-tune it
and be done? A goodly portion of what I'm reading here
sounds more like attempts to add sources of info outside
of our libraries (e.g. six different name authority
sources) rather than the original facilitation of others
coming in to our existing library data. We're supposed to
be breaking down the silos, not building new Googles.
Seems D2RQ already breaks those silos.
If your data is in a relational database, try D2RQ rather than
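For readers who haven't seen it, D2RQ works from a declarative mapping file that ties tables and columns to RDF classes and properties. A hypothetical fragment, with invented table and column names (only the d2rq: vocabulary and DCMI terms are real):

```turtle
@prefix map:     <#> .
@prefix d2rq:    <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

map:Database a d2rq:Database ;
    d2rq:jdbcDSN "jdbc:mysql://localhost/ils" ;
    d2rq:jdbcDriver "com.mysql.jdbc.Driver" .

map:BibRecord a d2rq:ClassMap ;
    d2rq:dataStorage map:Database ;
    d2rq:uriPattern "record/@@bib_record.id@@" ;
    d2rq:class dcterms:BibliographicResource .

map:title a d2rq:PropertyBridge ;
    d2rq:belongsToClassMap map:BibRecord ;
    d2rq:property dcterms:title ;
    d2rq:column "bib_record.title" .
```

Of course, this only helps to the extent that the underlying columns carry meaningful, separable values, which is Karen's point about blob storage above.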
I have a vision of a future in 5,
more likely 10, years where I'll send my database out
for <linked?> authority work and to automagically
change descriptive elements from AACR2 to RDA. I think
this kind of system will work better for popular, not
research, public libraries. I think that because
popular public libraries have greater turnover of
material and are not preserving older material. Still,
it won't be perfect, but part of what will drive such a
decision is a better record display from having the
consistent data. I can't see my ILS trying to reconcile
245 $h and 336, 337, 338 to give me the same icon for
type of material, and I can't see BIBFRAME reconciling
these different MARC data elements as well.
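The reconciliation problem here can be made concrete: the old GMD in 245 $h and the RDA 336/337/338 vocabularies carve up "type of material" differently, so getting one icon means merging two unrelated term lists. A toy sketch, with invented icon names and deliberately shortened term lists (real records are far messier):

```python
# Toy reconciliation of pre-RDA and RDA "type of material" signals.
GMD_TO_ICON = {            # 245 $h (AACR2 general material designation)
    "videorecording": "video",
    "sound recording": "audio",
    "electronic resource": "ebook",
}
RDA_336_TO_ICON = {        # RDA content type terms (field 336)
    "two-dimensional moving image": "video",
    "performed music": "audio",
    "text": "book",
}

def icon_for(gmd=None, content_type=None):
    """Prefer the RDA content type; fall back to the GMD; else 'unknown'."""
    if content_type in RDA_336_TO_ICON:
        return RDA_336_TO_ICON[content_type]
    if gmd in GMD_TO_ICON:
        return GMD_TO_ICON[gmd]
    return "unknown"

print(icon_for(gmd="videorecording"))                    # video
print(icon_for(content_type="performed music"))          # audio
print(icon_for(gmd="kit", content_type="tactile text"))  # unknown
```

Even this toy version has to pick a precedence rule, and every term missing from either table falls through to "unknown", which is roughly the mess an ILS faces at full scale.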
But first, we get to see the ugly side
of my ILS trying to get its SQL for MARC to line up
with its SQL for BIBFRAME. Good times!
On Thu, May 30, 2013 at 11:51 AM, Simon Spero wrote:
Yes, especially since the
bulk of our content at first will be MARC,
transformations from what was in MARC yesterday,
today, and tomorrow will be there.
Technically, the end
requirement is to provide a format that is a
credible replacement* for MARC(21) in an RDA
context (appendix M of the RDA Test report).
However, modeling the semantics of AACR2
would seem to be a necessary endeavor along the way.
The underlying (conceptual)
model of the bibliographic universe ought to be
one that can be mapped to all major systems. The
properties of these mappings are somewhat
complicated. To use a recent subject of discussion
as a simple example, if one starts with MARC-21
data that contains a non-standard textual string
and a coded relator, the mapping into a semantic
model might not preserve the non-standard string.
MARC-21 data using a standard string might map to
the same semantic representation.
Such mapping would not be
invertible, since the non-standard string would
not have been preserved.
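Simon's example can be sketched in a few lines: if the mapping keys on the coded relator ($4) and drops the free-text relator term ($e), then a record with a non-standard string and one with the standard string collapse to the same output, and the original cannot be recovered. (The field handling here is simplified to bare arguments; only the relator URI is real.)

```python
# Sketch of a non-invertible MARC-to-semantic mapping: the coded
# relator ($4) drives the output; the textual relator ($e) is dropped.
RELATOR_CODES = {"aut": "http://id.loc.gov/vocabulary/relators/aut"}

def map_contributor(name, relator_text, relator_code):
    """Return (name, relator URI); the free-text term is not preserved."""
    return (name, RELATOR_CODES.get(relator_code))

a = map_contributor("Melville, Herman", "author", "aut")        # standard term
b = map_contributor("Melville, Herman", "writer of it", "aut")  # non-standard
print(a == b)   # True: both collapse to the same representation,
                # so the original $e string cannot be recovered
```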
[I have some concerns and
suggestions about some of the work that has been
done, and some work that has not been done, under
BIBFRAME, which I will explore under separate
cover.]
http://kcoyle.net