Hi David,
the method sounds fine to me as long as the jitter of the sound card's A/D process is much less than the jitter you are measuring. given the short length of a single sampling period (1 second / 44100 ≈ 22.7 µs), that does seem likely, doesn't it?
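for what it's worth, here is a rough sketch of how i picture the measurement; everything in it is my assumption (the file name, the 0.5 threshold, the 50 ms re-trigger guard), not your actual method:

```python
import numpy as np
from scipy.io import wavfile

# hypothetical recording of the MPC3000's quarter-note clicks,
# assumed mono at 44.1 kHz
rate, samples = wavfile.read("mpc3000_clicks.wav")
samples = samples.astype(np.float64) / np.abs(samples).max()

# crude onset detection: first sample of each burst above a threshold,
# ignoring re-triggers within 50 ms of the previous onset
THRESHOLD = 0.5
MIN_GAP = int(0.050 * rate)
hot = np.flatnonzero(np.abs(samples) > THRESHOLD)
onsets = [hot[0]]
for i in hot[1:]:
    if i - onsets[-1] > MIN_GAP:
        onsets.append(i)
onsets = np.array(onsets)

# inter-onset intervals in milliseconds; at 44.1 kHz each sample is
# ~22.7 us, so quantisation error is tiny next to ~2 ms of jitter
intervals_ms = np.diff(onsets) / rate * 1000.0
dev = np.abs(intervals_ms - intervals_ms.mean()).max()
print(f"mean interval: {intervals_ms.mean():.3f} ms")
print(f"max deviation between quarter notes: {dev:.3f} ms")
```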
so, you have found that the max deviation between quarter notes is ~2 ms... am i correct in thinking that's consistent with the max deviation of a given quarter note from the 'perfect' grid being smaller, since each interval error is the difference between two consecutive onset errors? i seem to remember Elektron giving a figure for the latter of around 1 ms, and i remember thinking 'that's good enough for me, and better than most'. would you agree? of course, it would be great if the MPC3000 set a new benchmark for all modern machines to follow.
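to make that concrete, a continuation of the sketch above (reusing its onsets and rate): model each onset as t_i = t0 + i*T + e_i, fit the 'perfect' grid by least squares, and read off the residuals e_i directly. the interval error is e_{i+1} - e_i, which is why per-note errors of ±1 ms can produce interval swings of up to 2 ms:

```python
# fit the ideal grid t_i = t0 + i*T by least squares, then look at
# each onset's deviation from it
times_ms = onsets / rate * 1000.0
idx = np.arange(len(times_ms))
T, t0 = np.polyfit(idx, times_ms, 1)   # slope = quarter-note period
residuals = times_ms - (t0 + T * idx)  # per-note deviation from grid
print(f"fitted period: {T:.3f} ms")
print(f"max |deviation from perfect grid|: {np.abs(residuals).max():.3f} ms")
```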
also, do the errors accumulate in such a way as to cause variable drift over longer periods, or do they 'self-correct' over the course of every couple of bars, keeping the total drift small?
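a toy simulation could tell the two cases apart; the numbers below (period, jitter) are invented, but the shapes of the two drift curves are what matter. a clock that re-times each interval from the last note accumulates drift like a random walk, while one that schedules against a fixed grid never drifts:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, sigma = 256, 500.0, 1.0  # notes, period (ms), jitter std (ms); invented

# model A: each interval is independently jittered, so errors add up
# as a random walk and total drift grows over time
drift_a = np.cumsum(T + rng.normal(0, sigma, N)) - T * np.arange(1, N + 1)

# model B: notes are scheduled against a fixed grid with independent
# per-note jitter, so errors never accumulate
drift_b = rng.normal(0, sigma, N)

print(f"model A drift after {N} notes: {drift_a[-1]:+.2f} ms")
print(f"model B drift after {N} notes: {drift_b[-1]:+.2f} ms")
```

if the measured residuals behave like model B, the sequencer is re-syncing to its crystal every tick rather than free-running.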
on your site you talk about inter-ear timing; i believe that's computed in the brain by a different mechanism from what we call 'rhythm'. i have seen claims from psychoacoustic testing that even highly skilled musicians typically can't distinguish timing errors below about 4 ms. (of course, smaller jitter still matters, because relative shifts between different voices cause different phasing and so on, similar to the inter-aural phenomena; i suppose that's what you were thinking.)
please correct me if i'm misinterpreting any of your findings; it's interesting stuff.