----- Original Message -----
From: "Dave Bradley" <[log in to unmask]>
> >Another question...
> >When you use "16-bit sample," do you mean that the value is
> >represented by a 16-"digit" binary number, and can thus have any of
> >65,536 values (from 0 to 65,535)...right? So that means that the
> >value of the step (say 30000) actually represents an analog value
> >that could be anything from 29999.50... to 30000.49..., which makes
> >the maximum inaccuracy half a step out of 65,536, or
> >1/(65536 x 2)...is that right?
> OK, assuming that your math is accurate (which I have taken as a
> given rather than working it out myself), yes, a possible error of 1
> point would give that range of 29,999.50... to 30,000.49... So,
> what would a 14-point error give instead? And again, I ask: which is
> more acceptable, the difference that one point in value will give, or
> the difference that 14 points will give?
> I'm not claiming that digital is in any way perfect, but when it's
> provable mathematically that you can have such a large error vs. such
> a small one, why accept the large error simply because it's digital,
> which means it's inaccurate to start with?
> Your argument certainly tells me that I'd never hire you to digitize
> anything of any archival importance for me, nor would I ever
> sub-contract work out to someone willing to take that approach.
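(The arithmetic being debated above can be sketched in a few lines of Python. The 16-bit level count is from the thread; the sample value of 30000 and the 1-point vs. 14-point errors are the hypothetical figures from the quoted messages, not measured data:)

```python
# Sketch of the quantization arithmetic discussed above, assuming a
# 16-bit converter with 65,536 levels and the hypothetical stored
# sample value of 30000 from the quoted message.

LEVELS = 2 ** 16            # 65,536 possible 16-bit values

def error_range(stored_value, error_points=1):
    """Range of analog values a stored sample could represent,
    given a worst-case error of `error_points` quantization steps."""
    half = error_points / 2
    return (stored_value - half, stored_value + half)

# A 1-point (1 step) error: the stored value 30000 could represent
# anything from 29999.5 to 30000.5.
lo1, hi1 = error_range(30000, 1)

# A 14-point error widens that range to 29993.0 .. 30007.0.
lo14, hi14 = error_range(30000, 14)

# Worst-case relative error for a correctly working 16-bit converter:
# half a step out of 65,536 steps.
max_relative_error = 0.5 / LEVELS   # = 1/131072
```

So the two errors being compared differ by a factor of 14; whether either is audible is a separate question, taken up below.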
Again, I wasn't trying to prove anything about digital sound...I was
just making sure I understood the language that has been bouncing
around ARSCLIST this past while. Obviously, the inaccuracy (which
would seem to be a given in A/D conversion) would only be a factor
when/if we could hear it!
So, the question then becomes: can humans hear any actual difference
(note that many CLAIM to hear differences, but such claims could
easily be tested by blind A/B aural comparisons!) between 16-bit and
24-bit conversions...or between either of the above and the analog original?
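(A rough sketch of what such a comparison would have to resolve: quantize the same signal to 16 and 24 bits and measure how far the quantization error sits below the signal. This is illustrative arithmetic only, not a listening test; the full-scale sine test signal is my assumption, and real conversions would also involve dither:)

```python
import math

def quantization_snr_db(signal, bits):
    """RMS signal-to-quantization-error ratio, in dB, for a signal
    in the range -1.0 .. 1.0 quantized to `bits` bits (no dither)."""
    scale = 2 ** (bits - 1)
    quantized = [round(s * scale) / scale for s in signal]
    err = [s - q for s, q in zip(signal, quantized)]
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    return 20 * math.log10(rms(signal) / rms(err))

# One cycle of a full-scale sine wave as the test signal.
sine = [math.sin(2 * math.pi * n / 4800) for n in range(4800)]

snr16 = quantization_snr_db(sine, 16)   # roughly 98 dB
snr24 = quantization_snr_db(sine, 24)   # roughly 146 dB
```

In other words, 16-bit error already sits on the order of 98 dB below a full-scale signal, and 24-bit pushes it further down still, which is exactly why the question of audibility needs blind testing rather than assertion.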
Steven C. Barr