On Fri, 4 Nov 2011 15:41:02 -0700, Roy Tennant <[log in to unmask]> wrote:

>In addition to focusing on goals, I would like to suggest that we study what 40
>years of machine-based bibliographic practice tells us. Oh wait, that's
>already been done:
>"Implications of MARC Tag Usage on Library Metadata Practices"
>One essential fact this report uncovers is that "only 21 to 30 tags occur in
>10% or more [WorldCat] records."
>I would like to suggest that one potential implication of this finding is
>that we should think about allowing our 40 years of actual practice to
>identify the core set of elements that clearly are used in bibliographic
>data, and sequester additional complexity in separate packages that although
>they can travel with this core they can also be easily ignored by
>applications that don't choose to address the added complexity.

In general, learning from our experience is necessary; where else could we start? 
However, if the fundamentals of RDA are valid (and I believe they are, though I 
don't necessarily agree with the way they have been turned into a code -- not that 
everything in that code is equally valuable, but much of it is), then we must 
also consider what RDA gives rise to that lies outside our experience with 
MARC. In this category I would place the possibility of constructing work- and 
expression-level records (or datasets, if "records" is now a bad word), distinct 
from the current construct of authority records for names and for works (which make 
no distinction between work and expression), and including the subject and genre 
terms that currently can be applied only to bibliographic/manifestation records.

>In other words, one of our goals should be "simple requirements should be
>simple to accomplish". Building mechanisms for various communities to build
>out richer descriptions for particular kinds of resources is great, but it
>should not happen at the expense of added complexity to a core description.

Added complexity is a bugbear of mine. I have often expressed my anxiety that the 
new code must be *workable*, and so far (as the RDA test experience appears to have 
shown) I think it falls short of that goal.

>The monolithic nature of the MARC record has been, I submit, one of its
>chief problems. When processing it you never know whether you will see a
>couple dozen data elements or nearly 3,000, so you have to code for every

I wonder if you're talking in the same terms others of us use. We (working 
cataloguers) are accustomed to working at three levels at least: bibliographic 
(description and indexing terms, including classification notation, plus a varying 
range of coded data); authority-controlled terms for indexing, which are or may 
be common to various resources; and holdings/item-level data for the local scene 
and for data specific to the particular copy and required in the particular 
collection environment. Beyond that, we may have links between bibliographic 
records for analytics, components, etc., but much of that data has to be at the 
local level because its workings are system-specific.
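Those three levels, together with the "core plus sequestered packages" idea in Roy's 
message, could be sketched roughly as follows. This is purely an illustration with 
invented field and package names, not any real schema or proposed format:

```python
# A purely illustrative sketch (invented field names, not any real schema):
# a small core description travels with optional, clearly labelled packages;
# an application reads the manifest and ignores what it doesn't understand.

record = {
    "manifest": ["core", "authority_links", "holdings"],  # what this record carries
    "core": {                       # the small set of heavily used elements
        "title": "Example title",
        "creator": "Example, Author",
        "date": "2011",
    },
    "authority_links": {            # controlled terms shared across resources
        "subjects": ["Cataloging"],
    },
    "holdings": {                   # local, copy-specific data
        "location": "Main stacks",
        "call_number": "Z693 .E9",
    },
}

def process(record, understood={"core"}):
    """Consume only the packages this application knows how to handle."""
    return {name: record[name] for name in record["manifest"] if name in understood}

print(process(record))  # a simple application sees only the core description
```

The point of the sketch is only that a consuming application never has to guess 
what it holds: the manifest says what travels with the core, and anything beyond 
the application's competence can be passed over without parsing it.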
>Think modular, with a manifest at the top so you know what you have in hand.

Spell out your goals first. Karen Coyle's and Mac Elrod's lists are a good start; 
more can be added, and some can be sub-goals of more general statements.  Designing 
a system before this work is done and reviewed by the bibliographic-control 
community would be pointless, whatever the calls for rapid creation of a new 
bibliographic framework.

Hal Cain
Melbourne, Australia
[log in to unmask]