> Your method still has the quantization error described in my paper

Could you expand on this a bit? I think converting a random double in [0, 1) to a ranged integer in [0, r-1] has a maximum quantization error of about 1/2^(53 - log2(r)): the denominator is the number of floats that map to a particular integer, and the numerator is the maximum difference between any two integers. That would be a much smaller quantization error than is present with the modulo approach without rejection. In some applications this degree of bias would be acceptable; in others it wouldn't. I'd hesitate to call it the "same" error, but I can see why you might. Is this accurate, or is there some larger quantization error in the floating-point method that I'm missing?

> y = ((uint32_t)x * (uint64_t)r) >> 32;

Is the equivalence of this to the modulo approach explained further in another of your papers?

> (which will be published in the next issue of Journal of Modern Applied Statistical Methods)

Congratulations!