At Harvard we are trying to use METS to archive the audio preservation and
production process our music library is engaged in. We will not drive
delivery from these METS files, but instead will use persistent
identifiers pointing to deliverable audio segments from EAD finding aids.
The METS files are being designed to aid preservation and future
processing operations. Our current thinking about how to use METS for this
need is described below, and we'd welcome input.

Our process currently reformats an original audio item, such as a tape or
record, at 3 resolutions: a 96kHz/24-bit preservation master, a
44.1kHz/16-bit production master, and a RealAudio file/.smil playlist. Any
of these resolutions may consist of a single file; of several files meant
to be played in parallel, e.g. left channel and right channel each in its
own file (hereafter referred to as a file_group); or of several files or
file_groups strung together by an edit decision list that defines which
part(s) of which file(s) to play when.
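To make the multi-file case concrete, the parallel channel files for one
resolution might be grouped in a METS fileSec along these lines. This is a
sketch only; the IDs, file paths, USE values, and MIME types here are
invented for illustration, not taken from our actual implementation:

```xml
<!-- Illustrative sketch only: IDs, paths, and USE values are invented. -->
<mets:fileSec xmlns:mets="http://www.loc.gov/METS/"
              xmlns:xlink="http://www.w3.org/1999/xlink">
  <!-- Preservation master: left and right channels in parallel files -->
  <mets:fileGrp USE="preservation master">
    <mets:file ID="PM1L" MIMETYPE="audio/x-wave">
      <mets:FLocat LOCTYPE="URL" xlink:href="file://pm/side1-left.wav"/>
    </mets:file>
    <mets:file ID="PM1R" MIMETYPE="audio/x-wave">
      <mets:FLocat LOCTYPE="URL" xlink:href="file://pm/side1-right.wav"/>
    </mets:file>
  </mets:fileGrp>
  <!-- Production master and deliverable would follow the same pattern -->
</mets:fileSec>
```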

Since all 3 resolutions represent the same intellectual content, we would
like to maintain a relationship between them that allows us to know that a
song or performance begins at such and such a time, no matter which
resolution we are dealing with. In other words, song 1 starts at 1 minute
in all 3 resolutions, not 1 minute in the first, 1:10 in the second and
zero in the third. Also worth noting is the fact that many of these source
audio objects are multi-track in nature. Some tapes are 2-track mono,
meaning you play one track, then flip the tape and play the other. Some
are 1/4-track stereo, and so on. Additionally we want to preserve digiprov
information that includes some files that are used during the production
processes. These include project files, parameter settings for de-noising,
waveform reduction files, etc.

We currently are considering using the METS structMap to define the
hierarchy of the resolutions, preservation master, production master and
deliverable. However, we have found it difficult to represent the full
complexity of multi-file audio across multiple resolutions using the METS
structMap alone. Therefore, when any of these versions is more complicated
than a single audio file, we are proposing to point from the structMap to
an Audio Decision List (ADL) as defined by the AES31-3-1999 standard, "AES
standard for network and file transfer and exchange - Part 3: Simple
project interchange."
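The linkage we have in mind might look something like the following
structMap sketch, where each resolution's div points via an fptr to the
file entry for its ADL, and the ADL in turn sequences the actual audio
files. Again, the IDs and labels are invented for illustration:

```xml
<!-- Illustrative sketch only: FILEIDs and LABELs are invented. -->
<mets:structMap xmlns:mets="http://www.loc.gov/METS/" TYPE="logical">
  <mets:div LABEL="recorded item">
    <mets:div LABEL="preservation master">
      <!-- fptr points at the ADL file, which lists the audio files -->
      <mets:fptr FILEID="ADL-PM"/>
    </mets:div>
    <mets:div LABEL="production master">
      <mets:fptr FILEID="ADL-PROD"/>
    </mets:div>
    <mets:div LABEL="deliverable">
      <mets:fptr FILEID="ADL-DELIV"/>
    </mets:div>
  </mets:div>
</mets:structMap>
```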

We would construct these ADLs so that the content starts at the same time
index in each case, making the timeline of each ADL identical to the
others in the METS document. In many cases, some audio information
is omitted from one of the versions, e.g. the deliverable may not always
contain everything that is present in the preservation master due to
copyright issues or lack of interest. In those situations, the audio
portions that are present will be aligned in their ADL to the correct time
relative to the master and silence will be inserted to fill the gaps, thus
preserving the program/time relationships between versions. We believe
this provides a simpler and clearer way to associate deliverable and
production segments with the appropriate sections of the preservation
master than anything we have been able to model using the structMap alone.

Each ADL will have as many tracks as the original audio object,
corresponding 1:1 to the original item. Thus we are
essentially modeling the original object with the ADL, and using the METS
structMap to associate different versions of that object. We will keep
track of audio file metadata using the AES core audio schema in METS
techMD and sourceMD as appropriate. Each audio file will point to its own
core audio metadata from its entry in the METS filegroup. Parameter files
and other auxiliary data will also be packaged as metadata. The AES
process history schema will describe digiprov metadata that show how to
get from the preservation master to the production master and how to get
from production master to delivery copy. This will also be linked from the
file group entries. The AES process history metadata will provide the
links to the auxiliary processing files packaged in other metadata
buckets.
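The metadata linkage described above might be sketched as follows: each
file entry carries ADMID references to its AES core audio techMD and to
the process history digiprovMD for the derivation that produced it. The
IDs, OTHERMDTYPE labels, and element contents here are invented for
illustration:

```xml
<!-- Illustrative sketch only: IDs and OTHERMDTYPE values are invented. -->
<mets:amdSec xmlns:mets="http://www.loc.gov/METS/">
  <mets:techMD ID="AES-CORE-PM1L">
    <mets:mdWrap MDTYPE="OTHER" OTHERMDTYPE="AES core audio">
      <mets:xmlData><!-- AES core audio record for this file --></mets:xmlData>
    </mets:mdWrap>
  </mets:techMD>
  <mets:digiprovMD ID="AES-PH-PM-TO-PROD">
    <mets:mdWrap MDTYPE="OTHER" OTHERMDTYPE="AES process history">
      <mets:xmlData><!-- steps from preservation to production master --></mets:xmlData>
    </mets:mdWrap>
  </mets:digiprovMD>
</mets:amdSec>
```

A file entry in the fileSec would then reference both buckets, e.g.
`<mets:file ID="PM1L" ADMID="AES-CORE-PM1L AES-PH-PM-TO-PROD">`.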

Our use of METS in this particular case is intended for internal use only,
but we would be interested in reactions and suggestions from other METS
audio implementers, particularly from anyone who has been able to model
complex relationships among audio files entirely within the METS
structMap.

-- Robin Wendler (1/4) and David Ackerman (3/4 -- the clear parts)

Robin Wendler  ........................     work  (617) 495-3724
Office for Information Systems  .......     fax   (617) 495-0491
Harvard University Library  ...........     [log in to unmask]
Cambridge, MA, USA 02138  .............