BIBFRAME Archives

BIBFRAME@LISTSERV.LOC.GOV
BIBFRAME, November 2011

Subject: Re: What the data tells us -> Dublin Core application profiles
From: Thomas Baker <[log in to unmask]>
Reply-To: Bibliographic Framework Transition Initiative Forum <[log in to unmask]>
Date: Tue, 8 Nov 2011 12:47:11 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (224 lines)

I agree with the strategy of "modularity" presented here. This thread reminds
me of Dublin Core workshops circa 1996-2000, when the focus shifted from "the
core" per se to extensibility, modularization, and application profiles.
Application profiles "sequester" complexity, as Roy puts it, by providing a
context for complexification while retaining interoperability on the basis of
the shared or overlapping parts (e.g., "core" properties).
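One way to picture an application profile is as a declared set of constraints over a shared vocabulary: a small core everyone uses, plus community-specific extensions. A minimal sketch in Python, with hypothetical property names and profiles invented purely for illustration:

```python
# Hypothetical illustration: profiles share a core but add community terms.
CORE = {"title", "creator", "date"}

PROFILES = {
    # Each community profile extends the core with its own properties.
    "music": CORE | {"keySignature", "instrumentation"},
    "maps": CORE | {"scale", "projection"},
}

def conforms(record, profile):
    """A record conforms if it supplies every core property and
    uses only properties the profile declares."""
    used = set(record)
    return CORE <= used and used <= PROFILES[profile]

score = {"title": "Sonata No. 2", "creator": "Chopin",
         "date": "1839", "keySignature": "B-flat minor"}
print(conforms(score, "music"))  # True
print(conforms(score, "maps"))   # False: keySignature is not a maps term
```

The complexity is sequestered inside each profile, while interoperability rests on the shared core: a maps record and a music record can always be compared on title, creator, and date.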

A strategy of modularity suggests that MARC be replaced not by any _one_
"unworkably complex data carrier" (as Karen puts it), but perhaps by a series
of well-designed application profiles. Indeed, a DCMI/RDA Task Group -- formed
after a meeting among participants from the RDA, DCMI, and Semantic Web worlds
[1] -- has been working towards the goal of expressing RDA in RDF and
constructing application profiles on its basis since 2007.

In a Linked Data environment, interoperability is achieved not by sharing
specific "formats", but by ensuring that diverse applications produce triples
that are coherent -- i.e., that overlap semantically and use, or are mapped to,
common vocabularies.
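Concretely, two applications may use different local predicates for the same notion; coherence comes from mapping both onto a common vocabulary. A toy sketch with triples as plain tuples (the local predicate names are invented; in practice one would map to shared terms such as dcterms:title):

```python
# Hypothetical sketch: triples as (subject, predicate, object) tuples.
# Two sources use different local predicates for the same idea.
source_a = [("book:1", "a:mainTitle", "Moby-Dick")]
source_b = [("book:2", "b:titleProper", "Walden")]

# Each local predicate is mapped to a term in a shared vocabulary.
MAPPING = {
    "a:mainTitle": "dcterms:title",
    "b:titleProper": "dcterms:title",
}

def normalize(triples):
    """Rewrite predicates to the shared vocabulary where a mapping exists."""
    return [(s, MAPPING.get(p, p), o) for s, p, o in triples]

merged = normalize(source_a) + normalize(source_b)
# Both titles are now queryable with a single predicate.
titles = [o for s, p, o in merged if p == "dcterms:title"]
print(titles)  # ['Moby-Dick', 'Walden']
```

Neither source had to adopt the other's format; semantic overlap at the triple level is what makes the merged data coherent.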

Putting the design of application profiles onto a formal basis was the
motivation behind the DCMI Abstract Model, or DCAM (mentioned in the LC
announcement) and related specifications. The process of designing application
profiles was seen as directed towards the specification of implementation
_formats_ [2] -- formats with straightforward mappings to triples.

If the goal is to enable data to be managed on the back end by a variety of
implementation technologies (e.g., in XML or databases), and cleanly exposed as
triples, then the DCAM specifications can provide one good starting point --
one that would need to be revised in light of specific requirements and
subjected to iterative testing, perhaps incorporating new approaches from the
OWL community.
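For instance, a record managed internally as a key-value structure (or an XML document, or a database row) can be exposed as triples through a declared mapping. A hedged sketch, with invented field and property names, of the kind of "format with a straightforward mapping to triples" described above:

```python
# Hypothetical: expose an internally managed record as RDF-style triples.
FIELD_TO_PROPERTY = {
    "title": "dcterms:title",
    "author": "dcterms:creator",
    "issued": "dcterms:issued",
}

def record_to_triples(record_id, record):
    """Map each known field of a back-end record to one triple;
    fields with no declared mapping are kept internal, not guessed at."""
    return [(record_id, FIELD_TO_PROPERTY[f], v)
            for f, v in record.items() if f in FIELD_TO_PROPERTY]

row = {"title": "Leaves of Grass", "author": "Whitman",
       "issued": "1855", "shelf_location": "PS3201"}  # local-only field
triples = record_to_triples("rec:42", row)
print(triples)
```

The back end stays free to use whatever implementation technology suits it; only the mapping to shared properties is fixed, which is where the cleanly exposed triples come from.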

The notion of "core" need not be seen as absolute either, but rather as
relative to requirements for "coherence" between different types of data. The
goal should not be to perfect some "ideal core", but to identify which
statements are needed to produce coherent data in which situations.

In the library world, DCMI is still widely associated with simple XML formats
from the early 2000s based on "the fifteen elements", but its potentially far
more useful contribution to the bibliographic framework initiative lies in
building on the DCMI/RDA and DCAM work to help to bridge the gap between
closed-world implementations and the open world of Linked Data.

Tom (wearing his DCMI hat)

[1] http://dublincore.org/documents/singapore-framework/
[2] http://www.bl.uk/bibliographic/meeting.html


On Sun, Nov 06, 2011 at 12:00:11PM -0800, Karen Coyle wrote:
> Roy, I wish you'd said all of this to begin with! Yes, we need to
> create a simple core structure that can be extended. This is what we
> do not have with MARC, and we definitely do NOT have with RDA.
> Unfortunately, RDA is more like MARC than what you describe below.
> We do have an opportunity to create something more workable in
> this transition, but if we do not then we will be stuck with an
> unworkably complex data carrier for a very long time. As some said
> when RDA was still in progress, this may be our last chance to get
> it right because we are falling further and further behind as
> information providers.
>
> Coming up with a core is tricky, to say the least. RDA's core
> includes elements that are core for all of the formats that it
> supports -- so there are core music elements, core maps elements,
> etc., all as part of a single core. I'm not sure that helps us.
> FRBR's entities are probably a better core -- although I find there
> to be some idiosyncrasies in FRBR (the four Group 3 entities, to
> start) that need to be ironed out. I do think that it is essential
> that we start from zero and re-think core for the purposes of a new
> framework.
>
> kc
>
> Quoting Roy Tennant <[log in to unmask]>:
>
> >Karen,
> >I think you missed my point. The point wasn't to enrage music catalogers by
> >leaving a field or subfield behind that they simply must have -- it was
> >rather to determine a core of bibliographic description (which I submit the
> >data DOES tell us), then allow communities of interest to specify ways in
> >which that core can be decorated with what they require without ending up
> >where we did with MARC -- with an arguably bloated record (and I'm including
> >subfields here) that tries to be prepared for every eventuality. That's why
> >I suggested modularity as being an excellent strategy for accomplishing one
> >of my pet goals (to respond to Hal Cain's request):
> >
> >· Simple aims should be simple to accomplish.
> >
> >· Complexity should be avoided unless it is absolutely required to achieve
> >the goal.
> >
> >· If complexity is required, it should be sequestered. That is, complexity
> >should not spill over to affect those who don't need it to achieve their
> >goals.
> >
> >When a MARC subfield is used 17 times out of 240 million records we may want
> >to consider just how important it is to create it, document it, and write
> >software to process it.
> >Roy
> >
> >On 11/5/11 at 1:24 PM, "Karen Coyle" <[log in to unmask]> wrote:
> >
> >>Quoting Roy Tennant <[log in to unmask]>:
> >>
> >>>I believe you are missing the point. The evidence is clear -- the vast
> >>>majority of the some 3,000 data elements in MARC go unused except for a
> >>>small percentage of records in terms of the whole. What isn't there cannot
> >>>be indexed or presented in a catalog, no matter how hard you try. In other
> >>>words, which fields were coded is the only relevant information. It is the
> >>>ONLY relevant information when you are discussing how to move forward.
> >>
> >>I disagree (as does the OCLC report, BTW). To some extent the stats on
> >>MARC records reflect the many special interests that MARC tries to
> >>address. I have spent more time on the Moen statistics [1] than the
> >>OCLC ones, although since they were done on the same body of data I
> >>don't see how they could be very different.
> >>
> >>In the case of what Moen turned up, the most highly used fields were
> >>ones that systems require (001, 005, 008, 245, 260, 300) -- it's a bit
> >>hard to attribute that to cataloger choice. But for the remainder of
> >>the fields there is no way to know if the field is present in all of
> >>the records that it *should* be, or not.
> >>
> >>At least some of the low use fields are ones that serve a small-ish
> >>specialized community. Only 1.3% of the OCLC records have a
> >>Cartographic Mathematical Data (255), but according to the OCLC report
> >>that represents a large portion of the Maps records (p. 23 of OCLC
> >>report). It's harder to make this kind of analysis for fields that can
> >>be used across resource types. For example, 35-47% of the records
> >>(OCLC v. LC-only, respectively, from Moen's stats) have a Geographic
> >>Area code (043). Undoubtedly some records should not have that field,
> >>so is this field a reliable indicator that the resource has geographic
> >>relevance? We have no way of knowing. In addition, as MARC fields are
> >>constantly being added, some fields suffer from not having been
> >>available in the past. (Moen does a comparison of fields used over
> >>time [2], and the OCLC report also looks at this; see below.)
> >>
> >>Neither the Moen stats nor the OCLC report really tell us what we need
> >>to know. It's not their fault, however, because we have no way to know
> >>what the cataloger intended to represent, nor if the MARC record is
> >>complete in relation to the resource. My experience with some
> >>specialized libraries (mainly music and maps) was that these
> >>communities are diligent in their coding of very complex data. These,
> >>however, represent only small numbers in a general catalog.
> >>
> >>The OCLC report reaches this conclusion:
> >>
> >>"That leaves 86 tags that are little used, or not used at all, as
> >>listed in the "MARC 21 fields little or not used" table (Table 2.14,
> >>p. 32). Of these infrequently occurring fields, 16 are fields that
> >>were introduced between 2001 and 2008. Three of these fields
> >>(highlighted in orange) have no occurrences in WorldCat since OCLC has
> >>no plans to implement them."
> >>
> >>This means that there are really 67 fields that seem to be underused.
> >>That is out of 185 tags (not 3000, which would be more like the number
> >>of subfields). That's about 1/3. Having sat in on many MARBI meetings,
> >>however, I am sure that there are communities that would be very upset
> >>if some of these fields were removed (e.g. musical incipits, GPO item
> >>number). Admittedly, some fields were introduced that then turned out
> >>not to be useful. If those can be identified, so much the better.
> >>
> >>Basically, there is no way to know a priori what fields *should* be in
> >>a MARC record other than the few that are required. Deciding which
> >>fields can be left behind is going to take more than a statistical
> >>analysis. I agree that we should not carry forward all MARC data just
> >>"because it is there." The analysis, though, is going to be fairly
> >>difficult. Even more difficult will be the analysis of the fixed
> >>fields. I could go on about those at length, but that analysis will be
> >>complicated by the fact that the fixed fields are frequently a
> >>duplicate of data already in the record, and we never should have
> >>expected catalogers to do the same input twice for the same
> >>information -- we should have had a way to accomplish indexing and
> >>display with a single input.
> >>
> >>kc
> >>[1] http://www.mcdu.unt.edu/?p=41
> >>[2] http://www.mcdu.unt.edu/?p=47
> >>
> >>>
> >>>The one thing you said that I agree with wholeheartedly, is that we should
> >>>know what data is useful to users. Yes. That.
> >>>Roy
> >>>
> >>>
> >>>On 11/4/11 at 10:41 PM, "J. McRee Elrod" <[log in to unmask]> wrote:
> >>>
> >>>>Roy Tennant <[log in to unmask]> wrote:
> >>>>
> >>>>
> >>>>>"Implications of MARC Tag Usage on Library Metadata Practices"
> >>>>>http://www.oclc.org/research/publications/library/2010/2010-06.pdf
> >>>>
> >>>>This study told us what fields were in records, not whether those
> >>>>fields were utilized in OPACs. MARC has a wealth of information never
> >>>>put to practical use. Which fields were coded is fairly useless
> >>>>information.
> >>>>
> >>>>A study of what fields OPACs actually use might be helpful, but that
> >>>>still does not tell us what fields might be helpful to patrons if they
> >>>>were utilized.
> >>>>
> >>>>
> >>>> __ __ J. McRee (Mac) Elrod ([log in to unmask])
> >>>> {__ | / Special Libraries Cataloguing HTTP://www.slc.bc.ca/
> >>>> ___} |__ \__________________________________________________________
> >>>>
> >>>
> >>
> >>
> >
>
>
>
> --
> Karen Coyle
> [log in to unmask] http://kcoyle.net
> ph: 1-510-540-7596
> m: 1-510-435-8234
> skype: kcoylenet

--
Tom Baker <[log in to unmask]>
