Jeff,
I've just done a study of the MARC indicators which has made me feel
somewhat cautious about continuing their use.
http://kcoyle.blogspot.com/2011/09/meaning-in-marc-indicators.html
Aside from the fact that the indicators serve a wide variety of
functions, some of which seem, with hindsight, a bit dubious, I find
that there is at least one basic flaw in the design: there is no way
to make clear which subfields an indicator value applies to. In some
cases the indicator refers to the entire field, but in most cases it
logically applies to only some of the subfields (I say more about
this in the blog post, but the non-filing indicator and 245 $a are an
obvious example). However, nothing explicit in the standard or in the
actual instance records makes clear which subfields an indicator
addresses. That could be defined on a field-by-field basis, but it
means that a system needs "outside information" in order to process
the data -- I think it's best when fields and subfields self-define,
so that it isn't necessary to refer elsewhere for processing
information.
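To make that concrete, here is a minimal Python sketch of the kind of
table every processor has to hard-code today. The scope entries are
my own reading of the field definitions; nothing in the standard or
in the records states them.

    # Outside knowledge: nothing in a record says that the 245 second
    # indicator (non-filing characters) applies only to $a.
    INDICATOR_SCOPE = {
        ("245", 2): ["a"],   # non-filing count: applies to $a only
        ("246", 1): None,    # note/added-entry control: whole field
    }

    def filing_form(tag, ind2, subfields):
        """Build a filing string using the hard-coded scope table."""
        scope = INDICATOR_SCOPE.get((tag, 2))
        skip = int(ind2) if ind2.isdigit() else 0
        parts = []
        for code, value in subfields:
            if scope and code in scope:
                value = value[skip:]   # drop "The ", "A ", etc.
            parts.append(value)
        return " ".join(parts)

    print(filing_form("245", "4",
                      [("a", "The adventures of Huckleberry Finn /"),
                       ("c", "Samuel Clemens.")]))
    # -> adventures of Huckleberry Finn / Samuel Clemens.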
This is also an issue for indicators that are not defined. "Not
defined" is coded as blank, but not all blanks mean "not defined," so
again the knowledge of which indicators are defined and which are not
has to be built into the system. This kind of complexity and special
knowledge is a deterrent to data exchange with other communities,
because there is a steep learning curve to getting the information
you need in order to process the records, and much of that
information isn't in the records themselves. (Not to mention that we
don't have a machine-readable version of the MARC format that one
could use for validation... sheeeesh!)
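If a machine-readable version did exist, even a tiny fragment like
this sketch (an invented structure, not any existing schema) would
let a validator tell a meaningful blank from an undefined position:

    # Hypothetical machine-readable fragment of the format itself.
    FIELD_DEFS = {
        "245": {1: {"defined": True, "values": "01"},
                2: {"defined": True, "values": "0123456789"}},
        "500": {1: {"defined": False},   # blank means "not defined"
                2: {"defined": False}},
    }

    def indicator_ok(tag, pos, value):
        d = FIELD_DEFS[tag][pos]
        if not d["defined"]:
            return value == " "          # only blank is legal here
        return value in d["values"]

    print(indicator_ok("500", 1, " "))   # True: undefined, so blank
    print(indicator_ok("245", 2, " "))   # False: blank is not a value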
Although it may seem wasteful, my preference is for each data element
to be fully self-describing. So rather than having a single field
that can carry different forms of the data depending on indicators, I
would prefer that each "semantic unit" have its own data element
(which in MARC means its own field). If that seems too complex for
input (although it doesn't actually change the number of meanings in
the record, only their encoding), the user interface could present
something like:
title/textual
title/content
etc.
making sure that the various forms of the same element can easily be
seen as a logical unit by the person doing the input.
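As a sketch of what I mean by self-defining (the element names are
invented for illustration, not a proposal for actual tags), each form
carries its full meaning in its name, so no indicator table is needed
to interpret it:

    # Each semantic unit is its own element; nothing external needed.
    record = {
        "title/textual": "The ADVENTURES of HUCKLEBERRY FINN",
        "title/content": "The adventures of Huckleberry Finn",
        "title/filing":  "adventures of Huckleberry Finn",
    }
    print(record["title/filing"])   # consumers select by name alone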
kc
Quoting Jeffrey Trimble <[log in to unmask]>:
> I've been thinking about this because it is an interesting way for
> catalogers, data analysts and librarians to look at the issue. It
> also plays into the cataloging notion of "transcribing a title" and
> the interesting new features of RDA. Let me remind you that I'm
> doing this off the cuff, so some things are not presented "pretty"
> or necessarily logically--I'm thinking aloud.
>
> So we have this MARC record structure. As I have mentioned before,
> it is possible to expand the structure. For the sake of this
> discussion, let's assume we were to expand the indicators from 2 to
> 3. The new indicator would have these values:
>
> 0 This is textual data [transcribed]
> 1 This is content data [transcribed]
> 2 This is textual data [non-transcribed]
> 3 This is content data [non-transcribed]
> 4 This is transcribed data (textual and content)
> 5 This is non-transcribed data (as it appears on a title page or on
> the item; textual and content)
>
> ... maybe more.
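A consumer of that third indicator would branch on it with a small
lookup; a sketch, assuming only the six values listed above:

    # Sketch: interpret the proposed third indicator (values above).
    KIND = {"0": ("textual",),            "1": ("content",),
            "2": ("textual",),            "3": ("content",),
            "4": ("textual", "content"),  "5": ("textual", "content")}
    TRANSCRIBED = {"0", "1", "4"}

    def classify(ind3):
        status = "transcribed" if ind3 in TRANSCRIBED else "non-transcribed"
        return status, KIND[ind3]

    print(classify("4"))   # ('transcribed', ('textual', 'content'))
    print(classify("5"))   # ('non-transcribed', ('textual', 'content'))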
>
> 1. Transcription Solution:
>
> So you could then define a 245 in two ways:
>
> 245 104 The adventures of Huckleberry Finn / $c Samuel Clemens.
> 245 005 The ADVENTURES of HUCKLEBERRY FINN / $c samuel CLEMENS <==
> Appears on the t.p.
>
> Notice that I actually used indicator position 1 to indicate
> indexing (printing or not printing on the card). Now, when two 245s
> are present, the ILS vendor has to make sure these indicators work
> correctly, or you will have duplicate entries. (And filing can be a
> problem if the ILS does not normalize the character string when
> indexing and gives different weight to upper-case and lower-case
> letters.)
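The duplicate-entry and filing problems both come down to normalizing
before the index comparison; a minimal sketch:

    # Sketch: normalize so the two 245 forms index as one entry.
    def index_key(title):
        return " ".join(title.lower().split())

    catalog_form = "The adventures of Huckleberry Finn"
    tp_form      = "The ADVENTURES of HUCKLEBERRY FINN"
    print(index_key(catalog_form) == index_key(tp_form))   # True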
>
> 2. Content vs. textual.
>
> 300 ##0 $a xii, 543 p. : $b ill., maps ; $c 28 cm.
> 300 ##1 $a xii $a 543 $a p. $b ill. $b maps $c 28 cm.
>
> You can now teach the display to treat the second 300 as content
> data, and the computer can tell roman numerals from non-roman.
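Indeed, once the counts sit in separate subfields, telling roman from
arabic becomes trivial; a sketch:

    import re

    def is_roman(s):
        return bool(re.fullmatch(r"[ivxlcdm]+", s, re.IGNORECASE))

    # the repeated $a values from the second 300 above
    for value in ["xii", "543", "p."]:
        if value.isdigit():
            print(value, "-> arabic count")
        elif is_roman(value):
            print(value, "-> roman-numeral count")
        else:
            print(value, "-> label/unit")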
>
> 3. Example with Imprint statement
>
> 260 ##0 $a [New York, N.Y.] : $b Moonshine Press, $c c1990.
> 260 ##2 $a NEW YORK : $b MoonSHINE, <== appears on the t.p., but no
> date until you turn to the t.p. verso.
> 260 ##1 $a New York, New York : $b The Moonshine Press, $c 1990, $g 2008
>
> Do you see where I'm going with this? We are able to record data
> in a variety of ways and let the machine manipulate it as needed.
> The subfield codes can more or less stay the same, but we may still
> need to expand this area.
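And a display, then, just selects whichever form it wants; a sketch,
treating third-indicator 0 as the normalized form per the examples
above:

    # Sketch: pick the imprint to display from several 260s.
    imprints = [
        ("0", "[New York, N.Y.] : Moonshine Press, c1990."),
        ("2", "NEW YORK : MoonSHINE,"),
        ("1", "New York, New York : The Moonshine Press, 1990, 2008"),
    ]
    display = next(text for ind3, text in imprints if ind3 == "0")
    print(display)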
>
> --Jeff
>
>
> Jeffrey Trimble
> System Librarian
> William F. Maag Library
> Youngstown State University
> 330.941.2483 (Office)
> [log in to unmask]
> http://www.maag.ysu.edu
> http://digital.maag.ysu.edu
> ""For he is the Kwisatz Haderach..."
>
--
Karen Coyle
[log in to unmask] http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet