I believe you have it right -- except that the WHEN situation is more
complicated, especially on the d-a, than generally imagined.
On Mon, Feb 11, 2013 at 10:43 AM, Carl Pultz <[log in to unmask]> wrote:
> The way I've thought about this matter is that the data representing the
> audio waveform is the WHAT. The other key aspect, the WHEN, is assumed. That
> is, WHEN is defined by the sample-rate, but is not defined within the data
> in such a way that it can control the subsequent hardware clocks. Timebase
> errors at the a-d will have an effect because there is no way for the d-a to
> know about those defects. It assumes perfection. For copying or DSP, WHAT is
> all you need to know. For conversion, WHEN becomes critically important and
> is subject to various approximations, even given the much improved hardware.
> A reasonable interpretation?
> -----Original Message-----
> From: Association for Recorded Sound Discussion List
> [mailto:[log in to unmask]] On Behalf Of Tom Fine
> Sent: Monday, February 11, 2013 9:19 AM
> To: [log in to unmask]
> Subject: [ARSCLIST] Jitter (was Re: [ARSCLIST] Audibility of 44/16 ?)
> Can jitter be introduced on the A-D stage? As I understood Mike Gray's
> posting, he was saying jitter can be induced from the get-go, in the A-D
> process. Konrad, do you know that to be untrue?
> Also, I've been told by one of Sony's senior EE guys that it can be baked
> into a glass master. As I understand it, jitter can be induced any time the
> bits are clock-aligned for whatever reason. I'm not sure why that occurs in
> making a glass master, but a lot of research was done on this back in the
> 80s and 90s; at least, that's my understanding from what the Sony guy told me.
> So, I think (but may have learned this wrong, I'm not an EE) that "bits is
> bits" only when the bits are kept absolutely intact and the transmission
> timing is rock solid.
> -- Tom Fine
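
[A minimal numeric sketch of Carl's WHEN point, added for illustration -- this is my own toy model, not anything from the thread. It samples a sine wave with a jittered a-d clock, then treats the samples as if they were uniformly spaced (as a d-a must, since "it assumes perfection"), and measures the resulting irrecoverable error. The sample rate, test frequency, and 1 ns RMS jitter figure are arbitrary assumptions chosen only to show the effect scales with signal frequency times timing error.]

```python
# Toy model: a-d sampling-clock jitter becomes a permanent waveform error,
# because downstream playback assumes the samples were taken at perfectly
# uniform intervals. Pure-Python, standard library only.
import math
import random

FS = 48_000          # nominal sample rate, Hz (assumed for illustration)
F = 10_000           # test tone, Hz
JITTER_RMS = 1e-9    # 1 ns RMS clock jitter (illustrative value)
N = 4_800            # 100 ms worth of samples

random.seed(0)

def sample_tone(freq, jitter_rms):
    """Sample a sine; each sampling instant is perturbed by clock jitter."""
    out = []
    for n in range(N):
        t = n / FS + random.gauss(0.0, jitter_rms)  # actual sampling instant
        out.append(math.sin(2 * math.pi * freq * t))
    return out

ideal = sample_tone(F, 0.0)           # perfect a-d clock
jittered = sample_tone(F, JITTER_RMS) # jittered a-d clock

# Both streams are reconstructed assuming uniform timing, so the difference
# between them is an error signal no later stage can undo (the WHEN defect
# is not recorded anywhere in the WHAT).
err_rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(ideal, jittered)) / N)

# Small-angle prediction: error ~ 2*pi*F*jitter * cos(2*pi*F*t),
# whose RMS is 2*pi*F*jitter_rms / sqrt(2).
predicted = 2 * math.pi * F * JITTER_RMS / math.sqrt(2)
print(f"measured error RMS:  {err_rms:.3e}")
print(f"predicted error RMS: {predicted:.3e}")
```

[Note how the predicted error is proportional to the signal frequency: the same clock jitter that is negligible at 100 Hz matters far more at 10 kHz, which is one reason timebase quality at the converters matters even when the bits themselves are copied perfectly.]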