29.05.2013 18:59, Diane Hillmann:
> I found the same kinds of things when aggregating NSDL data about a
> decade ago, though of course on a smaller scale! (Defaults with various
> misspellings of 'unknown' were my particular trigger). I think that what
> would help us avoid having to cope with crappy text into our dotage is
> to build tools that help us serve up standardized text when we think we
> still need it, while not actually creating or storing it as text. We
> know humans will continue to make these kinds of errors if we ask them
> to enter text during the cataloging process, but if users need to see
> these kinds of notes, we need to build smarter tools to make it happen.
Exactly. I cannot understand why RDA does not, wherever possible,
advocate the use of codes instead of English text. And all MARC
specimens of RDA data abound with verbiage although codes do exist,
as for example in the 33X's. One important aspect is international
exchange and multilingual catalog interfaces, the other is most
certainly consistency. As you point out, software should of course be
able to help catalogers input codes comfortably, without having to
know the numbers or acronyms by heart, and OPAC software can display
whatever text is found suitable for the situation at hand. Instead,
English-language verbosity and loquaciousness abound in the data, with
excessively long labels in XML designs to make your head spin, which
at the same time increases the probability of errors.
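To make the idea concrete, here is a minimal sketch of the code-in,
text-out approach: the record stores only the 33X code, and the
interface renders a label in whatever language suits the user. The
code values ("txt", "n", "nc") are from the standard RDA content,
media, and carrier vocabularies, but the label table and function
names are hypothetical, not any existing system's API.

```python
# Hypothetical label table: field -> code -> language -> display label.
# Only the code is stored in the record; labels live in the software.
LABELS = {
    "336": {"txt": {"en": "text", "de": "Text", "fr": "texte"}},
    "337": {"n": {"en": "unmediated", "fr": "sans médiation"}},
    "338": {"nc": {"en": "volume", "de": "Band", "fr": "volume"}},
}

def display_label(field: str, code: str, lang: str = "en") -> str:
    """Return a display label for a stored 33X code, falling back to
    the raw code when no translation is available."""
    return LABELS.get(field, {}).get(code, {}).get(lang, code)

print(display_label("336", "txt", "de"))  # Text
print(display_label("338", "nc", "fr"))   # volume
print(display_label("337", "zz"))         # zz (graceful fallback)
```

The point is that consistency and multilingual display come for free
once the stored value is a code rather than free text.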
The front end for catalogers will likely be the most important
aspect of RDA cataloging if it is ever to become a success story.
It will have to be much more efficient than MARC.
With BIBFRAME, I see little hope of this happening.
(Furthermore, it doesn't help that the RDA scripture is under lock and
key of a global monopoly.)