I agree personally, although I can easily understand the resistance to
using IT for this.

The multimedia company I work for has, as of today, 171,000 audio, text,
HTML, .swf, and other files all archived. About half of the audio files
have 'linked' interrelated data files of one sort or another. We make
thousands more every day in 9 studios. We tried managing this manually
with databases and it was hopeless. It required too many people with too
much special knowledge. Finally we hired a couple of truly experienced
and talented software developers. It was as you suggested here: about 4
months were spent just planning the basic feature set. Through many
meetings and discussions, the developers took the time to understand the
business model AND the users' voices and previous experiences. Then it
was 'frozen' for a first version. You can't expect anyone to program for
a moving target. The software itself was created rather quickly, and
thoroughly QA'd before release. It works very well. While this software
was not designed with the needs of the folks here in mind at all, I
bring it up only to point out that Damien is right... The hard part is
trying to agree on what the databases have to do FOR us, and what raw
data they need in them to do it. IMHO, the astronomical amount of audio
/ video to be archived makes no other approach even remotely possible.
Saving all the material is little better than seeing it disintegrate if
no one can ever search or find anything. After all, I doubt the average
office admin using Word knows or cares how it works. The question is:
how do we get there from here?

-----Original Message-----
From: Association for Recorded Sound Discussion List
[mailto:[log in to unmask]] On Behalf Of Damien Moody
Sent: Tuesday, March 15, 2005 7:36 AM
To: [log in to unmask]
Subject: Re: [ARSCLIST] .wav file content information

It seems to me that what we really need is competent programmers and IT
professionals to perform a thorough analysis of the needs of the a/v
community to create truly usable and customizable software. IT isn't
nasty stuff - but the output of IT efforts, as with anything else, can
be. In IT there is a very under-utilized, mis-utilized and
misunderstood concept called "systems analysis and design". That, put a
bit simplistically perhaps but truly enough, is all that we need.

Damien J. Moody
Information Technology Specialist
Library of Congress

>>> [log in to unmask] 03/14/05 11:57 PM >>>
Has anyone had experience with any of the few really good software
packages that are available for recording studio management? The best
ones are very good at managing everything from inventory to tape
libraries, including track sheets, track notes, even console and
outboard gear setup and management. All the gory details of sample
rates, noise reduction, etc. are a natural part of that. I'm not
suggesting that anyone use the packages for managing very large audio
archives, just that there may be some real lessons to be learned (for
that matter, also learned not to use) by examining how people have
already tried to solve this in the commercial world.

I agree with Richard about option #3 personally, but it seems to me that
it is the only way, nasty as it is, that has real long-term legs to it.
What we really need is a program (or programs) that seamlessly links
databases with the audio file / text / visual (whatever type) and
doesn't require a
database expert to operate. I can't see how that is really possible any
other way, what with changing digital standards. While I live in fear of
the deadly 'single database file', it could at least, if planned
properly, be arranged so that, as ASCII information, it could be
imported into any reasonable future database. Further, until everyone
everywhere agrees on exactly what would be included in META information,
without exception, there would be no way of conforming older files to
the 'standard' without opening and changing every file. Better to make
the
file and handle it as little as humanly possible. Databases can do
global changes rather more easily.
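To make the 'single database file as ASCII' idea concrete, here is a
minimal sketch in Python (the table and column names are purely
illustrative, not any proposed standard): metadata kept in a database
but dumped to plain comma-separated ASCII, which any reasonable future
database could import.

```python
import csv
import sqlite3

# Hypothetical metadata table; names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE recordings (id TEXT, title TEXT, sample_rate INTEGER)"
)
conn.execute(
    "INSERT INTO recordings VALUES ('A00001', 'Interview, reel 1', 48000)"
)

# Export every row as plain ASCII -- importable into any future system.
with open("recordings.csv", "w", newline="") as out:
    writer = csv.writer(out)
    cursor = conn.execute("SELECT id, title, sample_rate FROM recordings")
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor)
```

The point is only that a properly planned export path keeps the data
from being trapped in any one database product.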

I know, I know, IT .... Nasty stuff just keeps getting in the way ! ...

-----Original Message-----
From: Association for Recorded Sound Discussion List
[mailto:[log in to unmask]] On Behalf Of Richard L. Hess
Sent: Monday, March 14, 2005 9:54 PM
To: [log in to unmask]
Subject: Re: [ARSCLIST] .wav file content information

As usual, Scott and John make some very persuasive points.

But, there is a huge leap from putting metadata in the BWF file to
running a database. Let's name the database: it's a digital asset
management system or media asset management system. Buzzword software
that delivers less than it promises in many iterations, sadly. I'd love
to hear good responses about MAM software.

In reality, I think there are three levels (perhaps more)
(1) Essence (to use the SMPTE term) and metadata in one file.
     This is the BWF as well as the MXF and AAF approach. This was, to
     me, one of the big paradigm shifts when migrating from dBase III to
     Access: separate files vs. all-in-one.
(2) Essence and metadata in one folder
     This is perhaps the easiest to deal with, but doesn't scale
     all that well
(3) Essence in a file system, indexed by a MAM system. The MAM system
     holds all the metadata while it merely points to the essence.
     The essence file names become totally NON human readable.

(1) and (2) can be managed by mere mortals. (3) requires an IT
professional.
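A toy sketch of level (3), assuming nothing about any real MAM product
(all names here are invented): the database holds all the metadata and
merely points to essence files stored under opaque, machine-generated
names.

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE essence (asset_id TEXT PRIMARY KEY, "
    "stored_name TEXT, title TEXT, original_name TEXT)"
)

def ingest(original_name, title):
    """Register a file: the store keeps it under an opaque UUID-based
    name; only the database maps it back to human-readable metadata."""
    asset_id = uuid.uuid4().hex
    stored_name = asset_id + ".wav"   # totally NON human readable
    conn.execute("INSERT INTO essence VALUES (?, ?, ?, ?)",
                 (asset_id, stored_name, title, original_name))
    return asset_id

def locate(title):
    """A user searches the metadata; the system resolves the name."""
    row = conn.execute("SELECT stored_name FROM essence WHERE title = ?",
                       (title,)).fetchone()
    return row[0] if row else None
```

Which is exactly why (3) can't be run by mere mortals: without the
database, the file store is an anonymous pile.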

But BWF begs the question. Do you insert the album jacket scans at 450
dpi in the file? What happens when there are multiple audio threads? Can
you put 24 tracks in one BWF? I'm not sure. If nothing else, you'll run
out of space.

Some of the BWF specs seem to limit it to 48ks/s. What about higher
sample rates? It's possible, but what about interchange?



Richard L. Hess 

Quoting Scott Phillips <[log in to unmask]>:

> One would think that the LAST thing anyone would want to do is resave 
> a complete audio archive file simply to add new text data. Why chance
> any alteration or corruption of the original audio file? This is
> particularly true since the 'new' file won't match the original byte
> for byte; how would one reasonably (i.e. quickly) verify the new file
> against the original? I would agree with John: a 'loose coupling'
> allows for a proper revision history to be kept as well without any 
> risk to the most irreplaceable part of all... the audio. The adding of
> an ID number when the file is first generated solves that.
> -----Original Message-----
> From: Association for Recorded Sound Discussion List 
> [mailto:[log in to unmask]] On Behalf Of John Spencer
> Regarding the usage of MYSQL or other database applications, remember 
> that the relative size of the metadata "stack" will be MUCH smaller 
> than the resultant audio files.  We prefer to link the metadata record
> with a unique ID in the BWF header that we also record in the metadata
> database.  By "loosely coupling" the two, you can add/make changes to
> the individual metadata record without having to load the audio file
> itself.
> I would be more concerned that the metadata that I was collecting was 
> structured in a manner that would allow for it to move into other 
> database environments without re-keying the information.
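John's "loose coupling" might be sketched roughly like this
(illustrative only: the ID generation, table layout, and use of an MD5
fixity checksum are my own assumptions, not his described system). The
metadata row can be edited freely while the audio file is never
rewritten, and the stored checksum answers Scott's verification worry.

```python
import hashlib
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metadata (unique_id TEXT PRIMARY KEY, "
    "path TEXT, md5 TEXT, notes TEXT)"
)

def register(path):
    """Assign a unique ID when the file is first generated (in a real
    system it would also be written into the BWF header) and record a
    checksum so the original can be verified quickly later."""
    unique_id = uuid.uuid4().hex
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    conn.execute("INSERT INTO metadata VALUES (?, ?, ?, '')",
                 (unique_id, path, digest))
    return unique_id

def update_notes(unique_id, notes):
    """Metadata changes never touch the audio file itself."""
    conn.execute("UPDATE metadata SET notes = ? WHERE unique_id = ?",
                 (notes, unique_id))

def verify(unique_id):
    """Fixity check: recompute the hash and compare it to the stored
    value, proving the audio is still byte-for-byte the original."""
    path, stored = conn.execute(
        "SELECT path, md5 FROM metadata WHERE unique_id = ?",
        (unique_id,)).fetchone()
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest() == stored
```

The loose coupling is the whole trick: the ID is the only thing the
audio and the database share, so each can evolve without risk to the
other.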