Quoting Jeffrey Trimble <[log in to unmask]>:
> Historically many of the indicators were used for card production
> generation. We have the 240/245 indicators that say to "print" or
> "not print".
> Other indicators are used for coding of controlled vocabulary (as in
> the 6XX fields).
Actually, some indicators that appear to be for card production carry
vital information that should be retained. See my blog post, where I
attempt a full categorization of the roles of the indicators. (And I
welcome comments where my own knowledge is insufficient.)
Some of the display constant indicators are information that should
instead be part of the customization of a system, since they relate
primarily to a library's local preferences, which shouldn't be carried
in the record that is shared. Some of the tracing indicators are
directly related to putting a field at the top of a card (e.g. do not
trace 245 if there is no 1xx).
In any case, I still do not think that indicators within MARC are
sufficiently precise. XML at least allows attributes on each data
element or group of data elements, so there is no ambiguity in
relating an indicator/attribute to particular data elements (if done
correctly). MARC, or we should say ISO 2709, doesn't allow that level
of precision.
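To illustrate the difference (a hypothetical sketch, not a proposed
schema -- the element and attribute names here are invented), an XML
encoding can hang the indicator directly on the subfield it governs:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML encoding of a title field. The "nonfiling"
# attribute sits on the one subfield it applies to, so its scope
# is unambiguous -- something ISO 2709 indicators cannot express.
record = ET.fromstring("""
<field tag="245">
  <subfield code="a" nonfiling="4">The adventures of Huckleberry Finn /</subfield>
  <subfield code="c">Samuel Clemens.</subfield>
</field>
""")

for sf in record.findall("subfield"):
    # Each attribute is read from the same node it modifies.
    print(sf.get("code"), sf.get("nonfiling"))
# prints: a 4
#         c None
```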
> As for their continuation, I think we can continue, but we'll need
> to make sure that the definitions of the indicators are more
> precise. And we'll have to assume that the card production
> environment is DEAD.
> That all said, well and good, we will have a much larger problem at
> hand: the ILS vendors themselves. As many of us can witness, no
> two vendors use the MARC
> record the same. That can be said of XML structure too, but I will
> address that later in the posting.
> Some vendors fully support and implement the MARC21 standard as we
> have it now. Some do it half-way. Some just let the record get
> loaded and use it for
> 'pretty display and editing' but they transfer it to some internal
> tables, and strip off the guts of the record. Exporting it out of
> that ILS is impossible. (And I know
> of one ILS that does it--none of the three big ones.....)
> This brings me to XML structure. Again ILS vendors have to support
> it in ways we don't know about just yet. Let's look at Dublin Core.
> DCMI has one standard,
> OCLC uses a completely different standard, DSpace uses "Qualified
> Dublin Core"--and it hasn't been updated since version 1.2 (Version
> 1.8 is in beta!) Fedora
> uses something different.
> If we were to transfer MARC21 to an XML structure, I think we would
> have as many XML standards as there are catalogers in the world.
> XML is a wrapping language, not a standard. I could write this
> email with XML wrapping--it wouldn't mean a thing unless you were to
> use "my interpreter program" to understand what my XML wrappers
> mean. I would have to establish my Namespace (by the way, I have a
> namespace for my own XML coding, and it can be found on the
> internet) for the interpreter to know what to do with the data.
> Let's look a little further into XHTML and HTML. The latest and
> greatest standards are there, but do all the browsers implement them
> the same way? I wish. I now have about 5 CSS style sheets for our
> web services here--each to address a different browser.
> Funny, we have one MARC21 standard, and yet most of the ILS vendors
> display the data pretty well. I said display--I didn't say anything
> about data extraction/interpolation, etc.
> Back to the indicators. We may need to define indicators and
> subfield codes in a "paired" environment. We may need to think
> about each MARC tag in a deep sense and any
> associated indicators. We will now have to associate data content
> with subfield codes and indicators.
> We are back to the basics: if the ILS doesn't do what the standard
> says, it doesn't matter a hill of beans what you use to store the
> cataloging data in. We can store the data
> in Postgresql or Oracle tables and edit from there, but interpreting
> that data by our vendors will be (and is) paramount.
> On Sep 28, 2011, at 12:28 AM, Karen Coyle wrote:
>> I've just done a study of the MARC indicators which has made me
>> feel somewhat cautious about continuing their use.
>> Aside from the fact that the indicators serve a wide variety of
>> functions, some of which in hindsight seem a bit dubious, I find
>> that there is at least one basic flaw in the design: there is no
>> way to make clear which subfields the indicator value should be
>> applied to. In some cases the indicator refers to the entire field,
>> but in most cases it logically applies to only some of the
>> subfields (I give more about this in the blog post, but non-filing
>> indicator and the 245 $a is an obvious one). However, there is
>> nothing explicit in the standard nor in the actual instance records
>> that would make clear which subfields are being addressed by the
>> indicator. It's possible that could be defined on a field-by-field
>> basis, but that means that a system needs to have "outside
>> information" in order to process the data -- I think it's best when
>> fields and subfields self-define so that it isn't necessary to
>> refer elsewhere for processing information.
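To make the scoping problem concrete: the 245 second indicator is a
count of nonfiling characters that logically applies only to $a, yet
nothing in the record says so. A minimal sketch of what a consuming
system has to do (plain Python, not tied to any ILS; the hard-coded
assumption is exactly the "outside information" Karen describes):

```python
def filing_form(subfield_a: str, nonfiling: int) -> str:
    """Build a filing/sort form of a title by dropping the leading
    nonfiling characters (e.g. 'The ')."""
    # That the count applies to $a and not to $c is nowhere stated
    # in the record itself -- it must be hard-coded here.
    return subfield_a[nonfiling:]

# 245 14 $a The adventures of Huckleberry Finn /
print(filing_form("The adventures of Huckleberry Finn /", 4))
# prints: adventures of Huckleberry Finn /
```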
>> This is an issue also for indicators that are not defined. "Not
>> defined" is coded as "blank", but not all blanks mean "not defined"
>> so again it is necessary to build into a system the information
>> about which indicators are defined and which are not. This kind of
>> complexity and special knowledge is a deterrent to data exchange
>> with other communities because there is a steep curve to getting
>> the information that you need in order to process the records, and
>> much of that information isn't in the records themselves. (Not to
>> mention that we don't have a machine-readable version of the MARC
>> format that one could use for validation... sheeeesh!)
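Lacking a machine-readable version of the format, every validator has
to carry its own table of which indicator positions are defined. A
sketch of that "special knowledge" in code (the table covers only two
tags and is illustrative, not a complete or authoritative list):

```python
# Hand-maintained "outside information": which indicator positions
# carry meaning for which tags. Illustrative only.
DEFINED_INDICATORS = {
    "245": {1: True, 2: True},    # both positions are defined
    "300": {1: False, 2: False},  # both positions are undefined
}

def blank_is_meaningful(tag: str, position: int) -> bool:
    """A blank indicator is a real value only where the position is
    defined; otherwise the blank just means 'not defined'."""
    return DEFINED_INDICATORS.get(tag, {}).get(position, False)

print(blank_is_meaningful("245", 1), blank_is_meaningful("300", 1))
# prints: True False
```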
>> Although it may seem wasteful, my preference is for each data
>> element to be fully described in itself. So rather than having a
>> single field that can carry different forms of the data based on
>> indicators, I would prefer that each "semantic unit" have its own
>> data element (which in MARC means its own field). If that seems too
>> complex for input (although it doesn't actually change the number
>> of meanings in the record, only their encoding), the user-interface
>> could present the variant forms grouped together, making sure that
>> the various forms of the same element can easily be seen as a
>> logical unit by the person doing the input.
>> Quoting Jeffrey Trimble <[log in to unmask]>:
>>> I've been thinking about this issue because it is an interesting
>>> way for catalogers, data analysts, and librarians to look at it.
>>> This also plays into the cataloging notion of "transcribing a
>>> title" and the interesting new feature(s) of RDA. Let me remind
>>> you that I'm doing this off the cuff, so some things are not
>>> presented "pretty" or necessarily logically--I'm thinking aloud.
>>> So we have this MARC record structure. As I have been mentioning
>>> before, it is possible to expand the structure. For the sake of
>>> this discussion, let's assume
>>> we were to expand the indicators from 2 to 3. The new indicator
>>> has definitions of:
>>> 0 This is textual data [transcribed]
>>> 1 This is content data [transcribed]
>>> 2 This is textual data [non-transcribed]
>>> 3 This is content data [non-transcribed]
>>> 4 This is transcribed data (textual and content)
>>> 5 This is non-transcribed data (as it appears on a title page or
>>> on the item; textual and content)
>>> ... maybe more.
>>> 1. Transcription Solution:
>>> So you could then define a 245 in two ways:
>>> 245 104 The adventures of Huckleberry Finn / $c Samuel Clemens.
>>> 245 005 The ADVENTURES of HUCKLEBERRY FINN /$c samuel CLEMENS
>>> <== Appears on the t.p.
>>> Notice that I actually used indicator position 1 to indicate
>>> indexing (printing or not printing on the card). Now the ILS
>>> vendor has to make it possible, when two 245s are present, for
>>> these indicators to work correctly or you will have duplicate
>>> entries. (And filing can be a problem if the ILS does not
>>> normalize the character string when indexing and gives different
>>> weighting to upper case and lower case letters.)
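Mechanically, a display routine under this proposal would select
between the two 245s by the value of the third indicator. A rough
sketch (the field representation and function name are invented for
illustration; the indicator semantics are taken from the list above,
4 = transcribed, 5 = non-transcribed):

```python
# (indicators, field data) pairs, following the 245 examples above.
fields_245 = [
    ("104", "The adventures of Huckleberry Finn / $c Samuel Clemens."),
    ("005", "The ADVENTURES of HUCKLEBERRY FINN /$c samuel CLEMENS"),
]

def pick_display(fields, want_transcribed=True):
    """Choose the 245 whose third indicator matches the caller's
    preference, so two 245s never yield duplicate entries."""
    wanted = "4" if want_transcribed else "5"
    for indicators, data in fields:
        if indicators[2] == wanted:
            return data
    return None

print(pick_display(fields_245, want_transcribed=False))
# prints: The ADVENTURES of HUCKLEBERRY FINN /$c samuel CLEMENS
```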
>>> 2. Content vs. textual.
>>> 300 ##0 $a xii, 543 p. : $b ill., maps ; $c 28 cm.
>>> 300 ##1 $a xii $a 543 $a p. $b ill. $b maps $c 28 cm.
>>> You can now teach the display to treat the second 300 as content
>>> data, and the computer knows roman numerals from non-roman.
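Once the pagination is coded as content data, a program can tell the
numbering systems apart. One naive way to sketch the check (invented
helper; a real system would also use position and context, since a
token like "ill" is letters-only too):

```python
def classify_paging(token: str) -> str:
    """Naively classify a content-coded paging token as roman or
    arabic numbering. Position/context checks are omitted."""
    if token.isdigit():
        return "arabic"
    if token and set(token.lower()) <= set("ivxlcdm"):
        return "roman"  # caution: words like 'ill' also match
    return "other"

print(classify_paging("xii"), classify_paging("543"))
# prints: roman arabic
```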
>>> 3. Example with Imprint statement
>>> 260 ##0 [New York, N.Y.] : $b Moonshine Press, $c c1990.
>>> 260 ##2 NEW YORK :$b MoonSHINE, <==appears on t.p., but no
>>> date until you turn to t.p. verso.
>>> 260 ##1 New York, New York : $b The Moonshine Press, $c 1990, $g 2008
>>> Do you see where I'm going with this? We are able to record data
>>> in a variety of ways and let the machine manipulate it as needed.
>>> The subfield codes can more or less stay the same, but we still
>>> may need to expand on this area.
>>> Jeffrey Trimble
>>> System Librarian
>>> William F. Maag Library
>>> Youngstown State University
>>> 330.941.2483 (Office)
>>> [log in to unmask]
>>> "For he is the Kwisatz Haderach..."
>> Karen Coyle
>> [log in to unmask] http://kcoyle.net
>> ph: 1-510-540-7596
>> m: 1-510-435-8234
>> skype: kcoylenet
> Jeffrey Trimble
> System Librarian
> William F. Maag Library
> Youngstown State University
> 330.941.2483 (Office)
> [log in to unmask]
> "For he is the Kwisatz Haderach..."
[log in to unmask] http://kcoyle.net