Much agreement here so far...
We've stopped normalizing archival masters and just capture 96kHz/24-bit data that ideally peaks between -1 and -12 dBFS. In rare cases we'll accept a pretty consistent envelope peaking as low as -16 dBFS, i.e. the whole program sitting there, not just outlying transient(s) peaking that low. Plenty of resolution.
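A quick sketch of that peak-window check in Python (the function names and the [-1.0, 1.0] float-sample convention are my own illustration, not part of our actual workflow):

```python
import math

def peak_dbfs(samples):
    """Sample peak in dBFS, assuming float samples normalized to [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def in_capture_window(samples, lo=-12.0, hi=-1.0):
    """True if the capture's peak sits inside the suggested -12..-1 dBFS window."""
    return lo <= peak_dbfs(samples) <= hi

# Hypothetical test tones at 96 kHz: one peaking near -6 dBFS, one near 0 dBFS.
quiet = [0.5 * math.sin(2 * math.pi * 440 * n / 96000) for n in range(4800)]
hot = [0.999 * math.sin(2 * math.pi * 440 * n / 96000) for n in range(4800)]
```

The -6 dBFS tone lands inside the window; the one riding just under full scale does not.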
Not normalizing the masters also leaves our Prism Verifile information intact, so interstitial error status can easily be checked by anyone who wants to, well into the future. Otherwise we'd have to preserve sidecar Verifile log files and trust that they're correct later. Backing up a minute: any gain adjustment prevents you from running Verifile again - the Verifile data is effectively clobbered, in case that isn't known to some folks.
Derivative Access Copies:
We create mp3 access copies of relatively consistent loudness by running archival masters through LUFS loudness normalization (iZotope RX 6 Advanced). That's a modified BS.1770-2/3, using -16 LKFS and -1 dB True Peak instead of -24 and -2 respectively.
I can see the argument to stay at -24 dB LKFS or something between that and -16, especially for high quality audio with a lot of dynamic range that one wants to maintain.
But -16 has worked well for the majority of spoken word (high crest factor) and music recordings (relatively medium to low crest factor) we typically handle - either of which can vary greatly in average level over the course of a tape side. On the samples of material we've tried so far, -16 LKFS still limits only transient peaks for the most part, i.e. without audibly "frying" the material. That's similar to what online music services do, based on our research.
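A back-of-envelope sketch of why -16 mostly touches only transients (hypothetical helper names; real tools like RX measure integrated loudness per BS.1770 from the audio itself, rather than taking it as a given number):

```python
def gain_to_target(measured_lufs, target_lufs=-16.0):
    """Static gain (dB) that moves integrated loudness to the target."""
    return target_lufs - measured_lufs

def headroom_after_gain(true_peak_dbtp, gain_db, ceiling_dbtp=-1.0):
    """True-peak headroom left under the ceiling once the gain is applied.
    Negative means the limiter must trim that much overshoot."""
    return ceiling_dbtp - (true_peak_dbtp + gain_db)

# Hypothetical spoken-word side: -23 LUFS integrated, true peak at -9 dBTP.
gain = gain_to_target(-23.0)          # +7 dB of makeup gain
spare = headroom_after_gain(-9.0, gain)   # 1 dB spare: no limiting at all
tight = headroom_after_gain(-4.0, gain)   # -4 dB: limiter catches 4 dB of transients
```

With typical crest factors, only the occasional transient pokes through the -1 dBTP ceiling; the program material itself passes through untouched.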
Having said that, we are just starting some rarer high fidelity classical music recordings, i.e. high dynamic range, and so I reserve the right to handle the derivative access copies a little more gently, given some curatorial guidance! We'll see.
Also, before going to the bother of LUFS processing, it pays to know whether your access delivery system(s) will do it anyway; if so, determine whether doing some massaging yourself, for extra control, is still advantageous before uploading files.
That was a mouthful. Hope it provides some useful additional perspective.
214 Olin Library
Ithaca, NY 14853
[log in to unmask]
From: Association for Recorded Sound Discussion List <[log in to unmask]> on behalf of seva, soundcurrent mastering <[log in to unmask]>
Sent: Friday, October 5, 2018 10:27:18 AM
To: [log in to unmask]
Subject: Re: [ARSCLIST] Normalization Question
with current 32bit file-based archives i see no reason whatsoever for using
RMS normalization with a value as high as -16dBFS.
-24 or even -30dBFS would be much more reasonable for archival reasons.
noise transients would make peak normalization equally inappropriate.
if it's desired to make the volume reasonable for net-based playback, the
ITU 1770/A85 approach (with a true peak limiter at -2dBFS to stop those noise
transients) would be fine.
personally, i see no reason to use normalization routines whatsoever. if
the capture level setting is appropriate to begin with, no adjustment is
necessary. unknown math processes can damage the data which was captured!
On Fri, Oct 5, 2018 at 9:49 AM Jeff Willens <[log in to unmask]> wrote:
> I never use RMS normalizing for exactly that reason, among others -- it
> can easily lead to clipping.
> Wavelab, as Shai says, has a great batch processing feature. I don't use
> Wavelab anymore, but IIRC it was quite configurable. I would trust the
> quality of that more than most others.
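The clipping risk Jeff mentions is easy to show with back-of-envelope math (hypothetical helper names; a real RMS normalizer works on the samples, not on dB figures):

```python
def rms_normalize_gain(rms_dbfs, target_rms_dbfs=-16.0):
    """Gain (dB) an RMS normalizer would apply to hit the target RMS level."""
    return target_rms_dbfs - rms_dbfs

def clips_after_gain(peak_dbfs, gain_db):
    """True if applying the gain would push the sample peak above 0 dBFS."""
    return peak_dbfs + gain_db > 0.0

# Hypothetical high-crest-factor material: RMS -24 dBFS, peaks at -6 dBFS.
gain = rms_normalize_gain(-24.0)   # +8 dB to reach -16 dBFS RMS
# Peaks land at +2 dBFS: clipped. Lower-crest material (peaks at -12) survives.
```

With an 18 dB crest factor, normalizing RMS to -16 dBFS pushes peaks 2 dB past full scale, which is exactly why a loudness target with a true-peak ceiling is the safer route.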