It really depends on material. With tapes, 0 VU reference levels were
set at 185 nWb/m to perhaps as high as 500 nWb/m. MRL makes calibration
tapes at 200, 250, G320, and 355 nWb/m. G320 refers to the German 320
nWb/m standard, which is measured differently from the other three.
MRL's measurement of tape fluxivity also differs slightly from Ampex's,
so the difference between 185 and 200 nWb/m is not what a strict
calculation would suggest.
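The nominal dB offsets between these reference fluxivities fall out of a simple 20·log10 ratio (setting aside the measurement-method differences just mentioned); a minimal sketch:

```python
import math

def fluxivity_db(flux_a_nwb, flux_b_nwb):
    """dB offset between two tape reference fluxivities, in nWb/m."""
    return 20 * math.log10(flux_a_nwb / flux_b_nwb)

# 250 nWb/m sits about 1.94 dB above 200 nWb/m,
# and 200 nWb/m about 0.68 dB above 185 nWb/m:
print(round(fluxivity_db(250, 200), 2))  # 1.94
print(round(fluxivity_db(200, 185), 2))  # 0.68
```

Because MRL and Ampex define the measurement slightly differently, a measured 185-vs-200 comparison will not land exactly on that 0.68 dB figure.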
I use 250 nWb/m calibration tapes and I generally set those for -20 dBFS.
With many converters, one must know what analog level corresponds to
0 dBFS. For example, RME converters in the "Lo Gain" setting treat
+19 dBu as 0 dBFS, which is 15 dB above +4 dBu. Their higher-end
Fireface UFX, with high-level balanced outputs, can go to +24 dBu for
0 dBFS, making it compatible with the SMPTE standard of +4 dBu = -20 dBFS.
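Once you know which analog level a converter maps to 0 dBFS, converting a dBu reading to dBFS is just a subtraction; a small sketch (the function name is my own, not any RME API):

```python
def dbu_to_dbfs(level_dbu, ref_dbu_at_0dbfs):
    """Express an analog level in dBFS, given the analog level (dBu)
    that the converter maps to digital full scale (0 dBFS)."""
    return level_dbu - ref_dbu_at_0dbfs

# SMPTE-style alignment: +4 dBu into a +24 dBu = 0 dBFS converter:
print(dbu_to_dbfs(4, 24))  # -20
# The same +4 dBu tone into a "Lo Gain" input (+19 dBu = 0 dBFS):
print(dbu_to_dbfs(4, 19))  # -15
```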
However, a transfer facility is generally not bound by the rules a
large broadcast plant must follow, so if the signal on a line is
-1 dBu at lineup tone it does not really matter.
Since I do not even have VU meters on my A80RC machines (there is a very
small risk that they add a slight amount of distortion), it is easy to
set up the machine using the meters in the PC. I will admit to adjusting
250 nWb/m to +4 dBu on the Sony APR-5000s, but I will drop that as
needed. Most of the master tapes are played on the A80RC while the Sonys
see more general tapes.
The Sony APR-16 multitrack (five audio formats, from 4-track half-inch
up to 16-track one-inch) is generally calibrated to -20 dBFS for 250 nWb/m. There was
one series of tapes where I aligned one preset about 4 dB lower because
the dbx was misbehaving and clipping even with 20 dB of headroom. There
were no tones on those tapes for calibration.
Tom's very detailed response is good and I concur with his points,
though I tend to record a bit lower than he does, because I have been
surprised by louder passages later in a tape and I do not like to
adjust levels during a transfer.
I looked at the levels briefly in a symphony concert I recorded last
Saturday with a pair of DPA omni mics in a reverberant space. I have a
preliminary boost of about 3.5 dB above the nominal recording level and
peaks are coming within 0.5 dB of full scale. The largest peak-to-VU
difference is on applause where it is almost 20 dB. The music appeared
to have about a 10-12 dB difference. The Sound Devices 722 recorder I
use for this has both VU- and peak-responding indication in its LED
meters, so it is easy to see how to set the levels.
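The peak-to-VU spread described above is essentially a crest factor. A rough sketch, using RMS as a stand-in for VU-style averaging (a real VU meter has its own ballistics, so actual meter readings will differ):

```python
import math

def peak_db(samples):
    """Highest absolute sample value, in dB."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """RMS level of the buffer, in dB."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in samples) / len(samples)))

def crest_factor_db(samples):
    """Peak-to-average spread in dB."""
    return peak_db(samples) - rms_db(samples)

# A sine wave has a crest factor of about 3.01 dB; applause, as
# described above, can approach 20 dB.
sine = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
print(round(crest_factor_db(sine), 2))  # 3.01
```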
I generally peak normalize across an entire file. It seems pretty
transparent in Samplitude. I know of one person who switched to Sequoia
(Samplitude's more-featured big brother) because it sounded so much
better than what he was previously using even on simple level shifts. I
will slightly adjust the first and second halves of a concert to be
closer in level than they were if the material changes ("fireworks" in
the first half and an early, smaller-ensemble symphony in the second,
for example).
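Peak normalizing a whole file amounts to applying one constant gain so the highest sample lands at the target level; a toy sketch on a list of float samples (real tools such as Samplitude of course operate on audio files, not Python lists):

```python
import math

def peak_normalize(samples, target_dbfs=-1.0):
    """Scale the entire file by a single gain so its peak sits at
    target_dbfs (0 dBFS = 1.0 full scale)."""
    peak = max(abs(s) for s in samples)
    gain = 10 ** (target_dbfs / 20) / peak
    return [s * gain for s in samples]

# A file peaking at 0.25 (about -12 dBFS), normalized to -1 dBFS:
loud = peak_normalize([0.25, -0.1, 0.05], target_dbfs=-1.0)
print(round(max(abs(s) for s in loud), 3))  # 0.891
```

Because one gain is applied to every sample, the dynamics within the file are untouched, which is part of why the operation can be so transparent.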
I think it is easy to become obsessed with standards, but once we
normalize the file, any standardization goes out the window.
People listening to music are used to hearing normalized CDs for the
most part. 16-bit might have been one thing driving that. TV audio was
20 bits fairly early on which makes it more comfortable to keep
everything referenced to -20 dBFS. I think this is where much of the
cinema world is at, but DialNorm metadata also comes into play there. (I
did get into a short discussion of this with another industry
professional at Neil Muncy's memorial service this week--if Neil
noticed, I think he would have been happy! It was a topic (along with
grounding) that I wish I had more of an opportunity to discuss with him.)
Levels are getting better. The European loudness requirements are
complex but seem to be making some difference as AGC based on them
becomes more available in software.
In reality, I doubt that an archive can have normalized loudness
levels across its entire holdings, since loudness is a function of
both peak and average levels, and matching it would involve
compression. I think doing archival
transfers we should take the material as it was delivered to us and
provide those raw files as the preservation copy. I am often asked to
increase intelligibility in oral histories, and for that I use a variety
of manual and automated tools. In order to stay in budget, adding
compression (rather than adjusting each phrase manually) is often
necessary. The access copies are different from the raw copies.
On 2012-10-19 9:51 AM, Henry Borchers wrote:
> Hello all,
> I've been hitting a brick wall with my research and I was hoping that with all the experts here, someone could point me in the right direction. I'm currently looking for research done on digital reference levels. I am particularly interested in looking for references related to the amount of headroom standards digital audio archivists and audio digitization technicians use in their digital masters and the digital level dBFS that analog equipment have been calibrated to. I've been able to find a lot of references about dBFS standards when it comes to audio for DVD, TV, and cinema (such as SMPTE standards) but not much for the digitization of audio only content. I have been having trouble locating good research regarding this area and I was hoping someone here could point me in the right direction.
> Henry Borchers
> Broadcast Media Digitization Librarian
> University of Maryland
> B0221D McKeldin Library
> College Park, MD 20742
> (301) 405-0725
Richard L. Hess email: [log in to unmask]
Aurora, Ontario, Canada (905) 713 6733 1-877-TAPE-FIX
Quality tape transfers -- even from hard-to-play tapes.