I heartily agree with Michael that the discussion has been very
interesting, and also very encouraging. I would like to provide a little
more background on the design of EAD, in particular its flexibility (or
laxness), and the quality and consistency of archival description.
Before doing so, however, I would like to correct an amusing error (to me
at least) in my response to Hugo yesterday: "Voice sensitized output"
should of course be "voice synthesized output."

When EAD was developed, everyone involved realized that it was merely the
first step. While computer standards certainly present difficult technical
and intellectual challenges, they also present political and economic
challenges. At the beginning of the design process, the developers adopted
a modest methodology. In part the methodology recognized the limits of
those involved. None understood the technology and its potential uses well
enough to envision and articulate a complete reformulation of archival
description. At the same time, they also recognized that the community
would have rightfully rejected such a reformulation. Instead they decided
to focus on representing existing archival description in a
machine-readable form. They believed that if the archival community could
be persuaded to use EAD, that use would provide a means to gain experience
with the technology, to explore and experiment with its potential uses, and
to find out what worked and what did not, and that the experience and
understanding acquired would lead to ongoing discussion, negotiation, and
reformulation of both archival description and EAD itself.

EAD's flexibility is thus by design. It had to be flexible in order to
accommodate a wide variety of archival description, the vast majority of it
based loosely on archival descriptive principles, but not on content
standards. This was a practical political and economic decision. DTDs are
frequently described as being of two kinds: constraining and enabling.
Constraining DTDs are very rigorous. They tightly control both the order
and number of tags. Such DTDs are common in situations where there is a
need for very controlled data and there is no diverse existing (or legacy)
data with which to contend. Generally such DTDs are only workable in
environments where there is sufficient authority to mandate use. In other
words, not the archival community. Enabling DTDs are used for accommodating
diverse legacy data, where extremely tight control over order and number of
tags is not necessary or possible, because authority is lacking to enforce
a strict regimen. While EAD is an enabling DTD, the developers added as
many constraints as seemed necessary to make it usable. They recognized
that more constraints would be better, both for those providing system
support and for end users, but that constraints would need to be arrived at
over time and by consensus. Making the DTD more constrained, however, will
not in itself lead to consistent, high-quality description.
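To make the distinction concrete, here is a rough, hypothetical sketch in
XML DTD syntax. The element names echo EAD, but the content models are
invented for illustration and are not the actual EAD declarations. A
constraining declaration fixes the order and number of child elements; an
enabling declaration of the same element lets them appear in any order, any
number of times, mixed with text:

    <!-- Hypothetical content models for illustration only, not the real
         EAD declarations. -->
    <!-- Constraining style: the three children must each appear exactly
         once, in this order. -->
    <!ELEMENT did (unittitle, unitdate, physdesc)>

    <!-- Enabling style: any of the children, in any order, any number of
         times, optionally mixed with plain text. -->
    <!ELEMENT did (#PCDATA | unittitle | unitdate | physdesc)*>

A validating parser enforcing the first model would reject much legacy
description outright; the second accepts nearly anything, which means that
consistency has to come from content guidelines and encoding protocols
rather than from the DTD alone.
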
There are a number of other, more important challenges to overcome in the
standardization of archival description than "tightening up" the DTD. We
need detailed content standards. While ISAD(G) provides a good structure or
outline for a content standard, a content standard needs to provide much more
detail. The North American CUSTARD project will address this for one
community, but similar efforts need to take place in other communities. The
guidelines developed by various consortia and RLG are also a contribution
to the ongoing normalization of EAD use. Reconciling these in North America
would also increase quality and consistency.

We also need better education
and training. While the SAA and RBS EAD workshops are a good start, no one
will become an expert in the application of EAD in two days or five days.
If one thinks about the resources devoted to training good library
catalogers, the archival community is nowhere near that level of
investment. Libraries rely
heavily on existing catalogers training new catalogers. A new cataloger
generally works under review for many months. I am not sure how we provide
this kind of intensive training, but we certainly need to give it some
thought. Given the very limited human and material resources of most
repositories, education and training challenges are likely to be best met
in the context of consortia that pool and leverage resources.

All things considered, since EAD's release in 1998, great progress has been
made on many fronts. But we certainly have a lot more work to do. We need
to keep working on furthering consensus, but also recognize that we will
always need to accommodate national, regional, institutional, and cultural
differences, as well as diverse archival records and contexts.

Daniel
At 04:45 PM 9/17/01 -0500, you wrote:
>It has been a most interesting set of messages that have come across the
>list today. Let me add a few footnotes.
>
>Is the DTD lax (or just permissive)? Should we have EAD lite? Or less
>flexibility in EAD itself? More prescription? The origins of the DTD rest
>in a conscious effort to accommodate a fairly wide range of existing finding
>aids practice at the time. It was in fact a tactical issue to some extent.
>How does one promulgate a standard? How does one encourage good and
>consistent practice? What do you think the market penetration of EAD
>would be today if a group had determined in 1995 what finding aids ought to
>have looked like (either as to content or presentation) and created a DTD
>that supported only that view? One could argue that we should have started
>out with first principles and spent ten years of committee work to lay the
>firm principles by which we might proceed. I'm not sure that such an
>approach would have succeeded, particularly if it were to have an
>international component. In any event, it did not happen.
>
>The issue is what we do now. Perhaps we are now at a time when the pendulum
>can swing the other way. Everybody agrees that we need an agreement on
>structure- what informational pieces should be included in descriptive
>products. ISAD(G) and ISAAR(CPF) are a start but they are really only broad
>outlines, the basic core. If we were to live in a logical world, we would
>define these elements and then move on to a content standard that described
>how these data elements would be constructed- a content standard, a data
>dictionary, business rules- depending on the vocabulary of the discipline
>from which you come. While I never like to disagree with my colleague Hugo
>Stibbe, I would argue that ISAD(G) is not a content standard or, if it is, is
>one with very weak semantics. RAD, now that's a content standard. APPM
>too. You could actually describe something using their rules. Not really
>so with ISAD(G). In North America, the CUSTARD project is attempting to
>blend the best of RAD and APPM to create a content standard that is
>independent of output. But what about in the UK and Sweden and France? We
>have to remember that EAD has an international face and must accommodate
>various national descriptive traditions just as MARC can accommodate data
>created according to APPM, or the Oral History Cataloging Manual, or other
>content standards, as well as AACR2.
>
>With the EAD Cookbook, so much of the conversation, including that which
>started this list thread, has been on the stylesheets. That's the applied
>side. I have always felt that the encoding protocol was really the more
>important part of the Cookbook. The idea was to encourage consistent
>content and markup by offering tools that would produce pretty results.
>Another of these backdoor tricks to encourage standardized best practices.
>I am dismayed that one of the unintended side effects seems to be "tag
>abuse" to make data that does not conform to the encoding protocol display
>nicely. This is either a perversion of my intent or simple necessity for
>dealing with the myriad of reasonable possibilities that the Cookbook could
>not envision.
>
>We now have several encoding protocols out there- from California, LC, and
>elsewhere- that we ought to consider. I talked with several individuals at
>SAA about bringing together a discussion of these with the possible goal of
>formulating one consolidated recommendation. I personally have no interest
>in perpetuating something independent for the Cookbook. The question is
>who would do this and under what auspices. Could it be timed to incorporate
>the work of CUSTARD?
>
>If we had such an agreement as to data structures and their encoding, we
>wouldn't need EAD lite or some local subset of the DTD. (By the way, the
>latter is, on a practical level, a dangerous direction in my mind. Who would
>maintain these local variants when the master DTD changes? These are not
>trivial concerns or simple technical matters. You thought XSLT was tough-
>try writing a DTD without making a hash of it.) We could implement the
>standardization in the form of work forms, templates, encoding protocols,
>structural standards, etc.
>
>Once we have some reasonably consistent content and encoding, then we can
>talk about consistent presentation, which I also support. Many of you may
>have heard my rant on this subject during a session at SAA. And I'm not
>talking about the touches that provide institutional brand identification of
>the sort that Jim Cross mentions. Patrons understand library catalogs
>because there is a consistency of content and presentation that we all
>learned around fourth grade. Archivists need to learn a lesson from that
>example. Are we ready?
>
>
>Michael J. Fox
>Assistant Director for Library and Archives
>Minnesota Historical Society
>345 Kellogg Blvd West
>St. Paul, MN 55102-2409
>651-296-2150 (phone)
>651-296-9961 (fax)