Reading LC's statement (see URL above) on plans for future bibliographic
control led to some interesting offlist correspondence.  I've
received permission from the authors, Michael Gorman, Hal Cain, and
Bernhard Eversberg, to post their comments. 

I think "American Libraries" editors should approach Bernhard for an
op-ed piece, similar to Michael's earlier one on RDA.

I said:

Looking at these links, I see very little in the way of actual
proposals, just lots of generalities. Am I missing something?

Hal Cain wrote:

I haven't studied the statement closely. At a hasty reading, there  
didn't seem to be much that required serious attention.

I doubt LC has many remaining staff in the data management area who  
are competent to contribute to creating a new medium of recording and  
exchanging bibliographic data. I suspect Rebecca Guenther's retirement  
was strategic: "I don't want to be mixed up in this mess!" but I could  
of course be wrong. I have no contacts remaining inside LC, since Tony  
Franks has opted to go away in search of peace.

That leaves the enterprise prey to consultants. Alternatively it may  
be outsourced to OCLC.  I wonder where they think they'll get grants?

The likely outcome, as I see it, is that there will be an outline  
scheme, with a rudimentary crosswalk to MARC 21 (OCLC are good at that  
kind of thing, and will have to be on the inside anyway because LC  
cataloguing couldn't survive without OCLC), but there will be no  
consensus, even from NLM and NAL, about its value and usefulness.

I remain totally bemused by the blind pursuit of two conflicting  
goals: more simplicity (BIBCO "standard record" schemes) vs.  
complexity (RDA detail and the structure of the code).

Michael, I'm a fan of your "Concise AACR2" code. I wish RDA had been  
written (if it was truly needed, of which I'm still not completely  
convinced) in that style, with application manuals for particular  
types of resources.


Michael Gorman wrote:

This begins with a gaseous piece of nonsense: "[MARC is] based on  
forty-year-old techniques for data management and is out of step with
programming styles of today", and gets worse.  They want to change
for change's sake but have no idea what to do.  What we can be assured
of is that the result will be worse and the slide toward
bibliographic chaos accelerated.  

MARC is a framework standard that defines bibliographic elements  
precisely.  RDA and metadata (faux) standards such as the Dublin Core
(a pathetically inadequate subset of MARC) will ensure that the
content standards will be worse than before, so perhaps they deserve
a less precise framework standard.

Bernhard Eversberg said in response to Michael's comments above:

A harsh verdict, and it doesn't come from just anybody.  This view 
needs to get out in the open. It borders on an "emperor's new clothes" 
moment.

Then somebody, anybody of those in the know, should reveal a bit
more about the wisdom behind the statements in that paper. Someone
ought to defend it, I mean, since it is not just any old paper but a
highly important one, with potentially far-reaching ramifications and a
high impact on the quality of the stuff we are working with, and thus
on the quality of our work, from now on into an indefinite future.

Having been involved in library computing for decades, I must confess
I'm a bit at a loss when reading the paper.

For one thing: the plan puts all eggs into one basket in committing
itself to Web standards like XML and RDF when, far and wide, there is
no large-scale bibliographic database that serves real-life library
work and is based on those. They are not even new standards, and there
certainly have been lots of attempts, even some at very prestigious
places, to employ them in a grand way. Where are the success stories
and the smoothly running new-age engines based on the results? I'm
asking this not for the first time, but up until now I have got no
answers in the forums.

Of course, library systems need to be able to export and import XML
and RDF structures, side by side with many others. With the appropriate
tools, library catalogs need never show anybody, except those working on
their upkeep, what their data looks like internally.

Even today, not every library system uses MARC internally. It is just
that all of them are able to swallow it and spew it out. (No mean feat,
I think, even today. Even something like VuFind takes in MARC and
nothing else.)
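To give a sense of what "swallowing" MARC involves: the exchange structure is compact enough to sketch in a few lines. This is a minimal, illustrative reading of a single MARC 21 / ISO 2709 binary record using only the Python standard library; a production parser (pymarc, for example) additionally handles character sets, repeated subfields, and malformed records.

```python
def parse_marc(record: bytes) -> dict:
    """Parse one ISO 2709 / MARC 21 binary record into {tag: [field data, ...]}.

    Layout: a 24-byte leader (positions 12-16 hold the base address of data),
    a directory of 12-byte entries (3-byte tag, 4-byte length, 5-byte offset)
    ending in a field terminator (0x1E), then the fields themselves.
    """
    base = int(record[12:17].decode())          # base address of data
    directory = record[24:base - 1]             # excludes the terminator byte
    fields: dict = {}
    for i in range(0, len(directory), 12):
        entry = directory[i:i + 12]
        tag = entry[:3].decode()
        length = int(entry[3:7].decode())
        start = int(entry[7:12].decode())
        # Field data runs from base+start; drop its trailing terminator.
        data = record[base + start: base + start + length - 1]
        fields.setdefault(tag, []).append(data.decode("utf-8"))
    return fields
```

Building a toy record by hand (leader, one directory entry for tag 245, one field) and feeding it to this function returns the field data with indicators and the 0x1F subfield delimiter intact, which is exactly the "ghastly stuff" Eversberg describes hiding behind editing interfaces.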

Secondly, there is no need for there to be one and only one exchange
standard. If some community needs some peculiar different format xyz,
there may be tools that take in MARC and serve xyz. On a per record
basis, web services can do that nicely, with no one caring what
the original looked like. If we create more, and more flexible,
standards for web services, these might solve or support most of the
requirements our catalogs of the future are expected to fulfill,
no matter what they look like internally or how they exchange data
with other library systems.
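The take-in-MARC-and-serve-xyz idea can be sketched as a trivial per-record crosswalk. The record layout here ({tag: {subfield code: value}}) and the field-to-element choices are simplified illustrations for this sketch, not any official mapping; real crosswalks handle repeated fields, indicators, and far more elements.

```python
# Hypothetical simplified record shape: {tag: {subfield_code: value}}.
# Real MARC allows repeated fields and subfields; this sketch ignores that.
CROSSWALK = {
    ("245", "a"): "dc:title",
    ("100", "a"): "dc:creator",
    ("260", "b"): "dc:publisher",
    ("020", "a"): "dc:identifier",
}

def marc_to_dc(record: dict) -> dict:
    """Map one simplified MARC record to a Dublin-Core-style dict."""
    dc = {}
    for (tag, code), dc_element in CROSSWALK.items():
        value = record.get(tag, {}).get(code)
        if value is not None:
            dc[dc_element] = value
    return dc
```

A web service wrapping a function like this could accept one record and return the target format on the fly, with the caller never seeing what the original looked like.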

Even the paper itself says that MARC21 should be retained as an
exchange format for as long as necessary. So why not first create
an alternative format, test it far and wide, improve it or
add yet another better design, and so on? And keep creating and
enhancing web services standards all along, as the primary means of
access to library data for any outside agents.

And thirdly, data input and editing may use any modern techniques
available today, hiding all the ghastly stuff involved with MARC under
layers and subwindows of pulldowns and radio buttons and plain language
labeled input fields. Ask the vendors why they don't provide them.
But don't forget to evaluate the economy of a new cataloguers' interface,
and different ones in systems A, B and C, in comparison to the universal
interface everybody is used to now. If you want to move away from plain
tagged editing, it becomes much more difficult to create a standard.

MARC does have its flaws, and I have written up and published a
long list of them. For some, I don't know why they haven't long since
been solved. They may, however, be cured without sacrificing the
economy of MARC, without dismissing the entire concept and logic. From
here on, I'd be repeating myself, so let me cut it short at this point.

Michael Gorman added:

I thought I should expand a little on my testy reply of yesterday (though
I meant every word of it).   
MARC consists of sequential denominators of elements of access points and
bibliographic descriptions (plus some too-little-used codes).  Those
denominators identify a wide range of real-world bibliographic conditions
precisely (i.e., a particular combination of tag and code will specify
exactly what that condition is and, by implication, what it is not) but
do not dictate how that condition is expressed (hence the reason why the
term "MARC cataloguing" is a nonsense--the cataloguing defines what goes
into MARC, not the nature of MARC--the framework that contains and defines
the data).  That being so, we should ask: 
1.  Will the replacement for MARC have (a) the same level of precision,
(b) more precision, or (c) less?  And why? 

2.   MARC is defined by numeric tags and alphabetic codes; what is to
replace them?  Why?  

3.  My understanding is that vendors have based the programming for their
library systems on MARC.  How are they to migrate from MARC to non-MARC?
If the answer to 1., above, is (a), the transition would be easy, but
what's the point?  If it is (b) or (c), the transition would require a
massive effort that would not, I would have thought, be cost-beneficial. 

English speakers call dried plums "prunes."  If it is decreed that, as
of January 1st, we call them "ghiwibels," and "ghiwibel" means 'prune,' we
have gained nothing but suffered inconvenience.  If "ghiwibel" means either
'dried fruit' or 'fruit with a stone in it,' we have lost definition (the
language being poorer) and suffered inconvenience.
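Gorman's point about tag-and-code precision can be made concrete with titles alone. The handful of combinations below (the descriptions are informal paraphrases, and the lookup function is only a sketch) distinguish conditions that a coarse element set like simple Dublin Core collapses into one "title" -- the "ghiwibel means 'dried fruit'" situation.

```python
# Illustrative, not exhaustive: distinct MARC tag/subfield combinations
# for title-like conditions. Descriptions are informal paraphrases.
MARC_TITLE_CONDITIONS = {
    ("130", "a"): "uniform title used as main entry",
    ("245", "a"): "title proper from the chief source",
    ("246", "a"): "variant form of the title",
    ("740", "a"): "uncontrolled related/analytical title",
}

def condition(tag: str, code: str) -> str:
    """Return the precise bibliographic condition a tag/code pair denotes."""
    return MARC_TITLE_CONDITIONS.get((tag, code),
                                     "not a title condition in this sketch")
```

A replacement framework that mapped all four to a single "title" element would answer Gorman's question 1 with (c): less precision, and a poorer language.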

   __       __   J. McRee (Mac) Elrod ([log in to unmask])
  {__  |   /     Special Libraries Cataloguing   HTTP://
  ___} |__ \__________________________________________________________