On 22/06/2012, Aaron Z Snyder wrote:
> These contributions from Ted Kendall and Doug Pomeroy have convinced
> me that all attempts at reconstructing "accidental stereo" from two
> simultaneously-recorded 78's with separate mic sources will be futile
> --- that is, unless someone can construct an algorithm to keep the
> separate sources in a continuously stable phase relationship. I've
> already looked at a couple of these reconstruction attempts and
> observed that even short-term phase stability is completely absent.
> The net result is pseudo stereo even if there's genuine spatial
> information shared between the two sources. I take no pleasure at all
> in coming to this conclusion, but that's reality.
Even so, the ones I have heard, such as the "Ellington at Newport" or
the Elgar Cockaigne on Naxos, sound quite good and certainly better than
mono.
However, the accidental stereo problem is not quite the same as the
noise reduction problem, because for stereo the two signals are
_supposed_ to be different. You would want to automatically line up on
the lowest frequencies only, or on the amplitude peaks. For noise
reduction you need to line up to within a single sample, i.e. about
1/44,100 sec at the CD sampling rate.
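For what it's worth, here is a minimal sketch of what sample-accurate
line-up by cross-correlation might look like. This is my own
illustration, not anyone's published method; the function name and
parameters are invented, and NumPy is assumed:

```python
import numpy as np

def best_lag(a, b, max_lag):
    """Return the lag (in samples) by which b trails a, found as the
    peak of the FFT cross-correlation over lags in [-max_lag, max_lag]."""
    n = len(a) + len(b) - 1           # length of the full linear correlation
    nfft = 1 << (n - 1).bit_length()  # zero-pad to a power of two
    corr = np.fft.irfft(np.fft.rfft(b, nfft) * np.conj(np.fft.rfft(a, nfft)), nfft)
    lags = np.arange(-max_lag, max_lag + 1)
    return lags[np.argmax(corr[lags % nfft])]  # negative lags wrap to the end
```

If, as suggested above, only the lowest frequencies can be trusted to
agree between the two transfers, one would low-pass both signals before
correlating them.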
There has been an enormous amount of research on lining up images that
do not exactly match (the term to search for is "stitching") and I am
sure a good programmer with a maths background could adapt some of these
algorithms to audio. An audio record is the same as an image that is
extremely wide and one (or two if stereo) pixels high.
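Extending the same idea, a stitching-like approach in one dimension
might track a slowly drifting offset by correlating block by block and
then resampling one transfer to follow the measured lag curve. Again a
rough sketch of my own, with arbitrary block size and search range:

```python
import numpy as np

def drift_track(a, b, block=8192, max_lag=200):
    """Estimate a time-varying lag of b relative to a by cross-correlating
    successive fixed-size blocks -- a crude 1-D analogue of the block
    matching used in image stitching."""
    lags = []
    for start in range(0, min(len(a), len(b)) - block, block):
        seg_a = a[start:start + block]
        seg_b = b[start:start + block]
        nfft = 2 * block
        corr = np.fft.irfft(np.fft.rfft(seg_b, nfft) *
                            np.conj(np.fft.rfft(seg_a, nfft)), nfft)
        k = np.arange(-max_lag, max_lag + 1)
        lags.append(int(k[np.argmax(corr[k % nfft])]))
    return np.array(lags)  # one estimate per block; smooth it, then resample b
```

The per-block estimates would then need smoothing (the raw peaks are
noisy) before one transfer is time-warped onto the other, which is
exactly where the continuously stable phase relationship Aaron asks for
becomes the hard part.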
A thorough tutorial here:
Most of the maths is above my head, but I can get the general ideas.