I recall George Blood's presentation. It raised the question then and
raises it now: is digital preservation of audio something smaller
archives can actually do, given present standards and opinions on what
constitutes accurate rendering, without blowing the budget (if there
even is one)? Or is the promise of digital preservation beyond the
reach of these institutions, and if so, what are they to do?
Thankfully my institution, the University of Georgia Libraries, has
the resources to do some work in-house and send some out (sometimes to
George Blood), but we also counsel small archives around Georgia when
they ask about digital preservation. These are places that may still
be actively dubbing cassettes on a dual-well deck for backups, but
realize there may be digital solutions within reach. Are those
solutions really within reach for these institutions, given the
conversation that has gone on here? As Andy has mentioned, I have to
think there are reasonable solutions for them.
In any case, Audacity and bundled software are often mentioned on the
Archivists and Oral History listservs as useful for smaller
digitization projects and shoestring budgets. My personal experience
has borne this usefulness out. Although we use a number of different
programs (including WaveLab and Sound Forge for preservation work),
I've found Audacity capable of generating WAVs and of handling limited
editing tasks; I use it all the time on my Mac when I need to work
with video soundtracks exported from QuickTime. I've never had a
problem with the resulting sound quality.
I think Tom clarified his thoughts on Audacity, and I understand what
he meant when he gave his opinion of it. It's great that a larger
conversation has resulted. But I will say that if there is real
evidence that Audacity or bundled software like Cubase LE cannot
handle what it claims to, or is somehow buggy, this should be made
evident to the larger archival community beyond ARSC, many of whom use
Audacity or whatever software came with their interface. It would
also be interesting to hear what Audacity's developers have to say.
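On the question of evidence: Bob Olhsson's point quoted below, that
bit-accuracy testing isn't rocket science, can be made concrete. Here
is a minimal sketch in Python with NumPy; the function name, the
labels, and the representation of 24-bit samples as plain integers are
my own illustration, not anything from Audacity or from Blood's
slides. It compares a captured pass of a known test signal against the
reference, sample for sample:

```python
import numpy as np

def check_bit_transparency(reference, captured):
    """Compare a captured test pass against the known reference signal.

    Samples are 24-bit values carried in wider integers. Returns:
      'transparent'     -- bit-for-bit identical to the reference
      'truncated-16bit' -- matches the reference with the low 8 bits
                           of each 24-bit word zeroed (a 16-bit bottleneck)
      'altered'         -- something else changed the samples
    """
    reference = np.asarray(reference, dtype=np.int64)
    captured = np.asarray(captured, dtype=np.int64)
    if np.array_equal(captured, reference):
        return 'transparent'
    # Truncation from 24 to 16 bits zeroes the low 8 bits of each word.
    # (This assumes simple truncation; a chain that rounds instead would
    # need a slightly different comparison.)
    truncated = reference & ~0xFF
    if np.array_equal(captured, truncated):
        return 'truncated-16bit'
    return 'altered'
```

In practice one would play a known 24-bit test file through the chain
under test, capture the result, and feed both sample arrays to a check
like this; anything other than 'transparent' warrants investigation.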
On Sat, Aug 16, 2008 at 5:46 AM, Goran Finnberg <[log in to unmask]> wrote:
> Bob Olhsson:
>> Testing bit-accuracy doesn't require rocket
>> science but it is amazing how many developers
>> seem to skip even the simplest of tests
> But reading what Mr Blood has to say is like entering the hall of
> software shelved "Not ready for use" -- but we need some income, so we
> release it anyway, since nobody will notice the errors at all.
> George Blood:
> SLIDE 45
> We feed 24 bits into the system; the system can't, or chooses not to,
> handle the extra-wide word width, truncates to 16 bits, passes the
> signal through the DSP (digital signal processing) -- which we don't use
> in preservation -- then applies dither on the output to pad the file
> back to 24 bits before writing to the drive! Who would ever know? Who
> could ever find this?
> Far more troubling, as we saw in our previous example, sometimes the
> system (and we cannot determine where in the chain this is occurring,
> except that it's happening between the ADC and the handoff to the
> operating system to write the file) decides to do this on the fly -- in
> the middle of the file.
> This is a trap in action.
> As I have noted when trying out different software myself, it can be
> very frustrating trying to find all the hidden gotchas that should not
> be there, no matter what.
> But many software vendors are clearly unscrupulous in assuming that
> nobody will notice or care.
> Goran Finnberg
> The Mastering Room AB
> E-mail: [log in to unmask]
> Learn from the mistakes of others, you can never live long enough to
> make them all yourself. - John Luther
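P.S. The mid-file switching Blood describes in the quoted slide can,
at least in principle, be screened for mechanically when a known test
signal is used. A hedged sketch follows (Python with NumPy; the
function name, window size, and labels are my own illustration, and
real output dither can perturb the 16-bit word by a count or two, so a
production test would allow a small tolerance rather than demand exact
high-bit equality):

```python
import numpy as np

def scan_for_midfile_truncation(reference, captured, window=4096):
    """Classify fixed-size windows of a captured 24-bit test pass.

    A window where the top 16 bits match the reference but the low 8
    bits do not is flagged 'truncated' -- consistent with a 16-bit
    bottleneck whose output is then dithered back up to 24 bits.
    Returns (start_sample, label) tuples, so a mid-file switch shows
    up as a change of label partway through the file.
    """
    reference = np.asarray(reference, dtype=np.int64)
    captured = np.asarray(captured, dtype=np.int64)
    labels = []
    for start in range(0, len(reference), window):
        r = reference[start:start + window]
        c = captured[start:start + window]
        if np.array_equal(r, c):
            labels.append((start, 'transparent'))
        elif np.array_equal(r >> 8, c >> 8):
            # High 16 bits intact, low 8 bits rewritten: the telltale
            # signature of truncate-then-dither padding.
            labels.append((start, 'truncated'))
        else:
            labels.append((start, 'altered'))
    return labels
```

A run of 'transparent' windows followed by 'truncated' ones would
localize roughly where in the file the chain fell back to 16 bits.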
Head, Media and Oral History
Richard B. Russell Library for Political Research and Studies
University of Georgia
Athens, GA 30602-1641