
Precision of random numbers

In some sense, random numbers uniformly-distributed in the range \([0, 1)\) are the easiest class of random number to generate.

Because of the internal representation of floating-point numbers, all you need to do is fill the significand with random bits, set the exponent to -1, and the sign bit to positive.

Some language run-times do this better than others.

This article shows how to check your run-time, and how to fix it.

1 Python

To investigate, we'll use a couple of functions to help us inspect a run-time.

First, we need to know how many bits are available in a float. This can be done by starting with any power of 2 (which will have a single bit set in the significand), then counting how many progressively lower-order bits can be added until the result rounds up to the next power of 2.

Python uses doubles, and so has 53 bits of precision:

def sigbits():
    x = 1.0
    i = 0
    while x != 2.0:
        x = x/2 + 1
        i += 1
    return i

sigbits()
53

Python appears to use IEEE-754 double-precision floats, and does not use any of the significand's bits for type-tagging, leaving all 53 available.
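
As a quick cross-check (an aside; this uses the standard-library sys module rather than the inspection functions above), the run-time reports the same figure directly:

import sys

sys.float_info.mant_dig
53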

Second, we need to know how many bits are set in a random number. We check this by multiplying randoms by powers of 2 (making the exponent change, but not the bits in the significand), then taking only the fractional portion. If we've shifted all but the last random bit, we should see 0.0s and 0.5s. If we see all 0.0s, we've shifted too much; if we see other values, we've shifted too little.

We can see that Python uses all 53 bits for randoms:

import math
import random

def runif(n):
    return [random.random() for i in range(n)]

def lowbits(x, shift):
    y = [a * 2**shift for a in x]
    return [a - math.trunc(a) for a in y]

lowbits(runif(10), 52)
0.0
0.0
0.0
0.5
0.5
0.0
0.5
0.0
0.0
0.5

2 SBCL

SBCL maps ANSI Common Lisp's four float types onto two concrete types. For single-precision floats, SBCL emits 23 random bits out of a possible 24:

(defun sigbits (model)
  (let ((two (float 2.0 model)))
    (do ((x (float 1.0 model) (1+ (/ x 2)))
         (i 0                 (1+ i)))
        ((= x two) i))))

(sigbits 0.0f0)
24
(defun runif (model n)
  (loop for i to n collect (random model)))

(defun .* (x a)
  (mapcar (lambda (xi) (* xi a)) x))

(defun lowbits (x shift)
  (let ((y (.* x (expt 2 shift))))
    (mapcar '- y (mapcar 'truncate y))))

(lowbits (runif 1.0f0 10) 22)
0.5
0.0
0.5
0.0
0.5
0.5
0.0
0.5
0.0
0.0
0.5

For doubles, it emits 52 out of a possible 53:

(sigbits 0.0d0)
53
(lowbits (runif 1.0d0 10) 51)
0.5d0
0.0d0
0.5d0
0.0d0
0.0d0
0.5d0
0.0d0
0.0d0
0.5d0
0.0d0
0.0d0

The missing bit in both cases is because SBCL's generator starts with 1.0, then clobber[s] the significand … with random bits, then subtract[s] 1.0 (SBCL source, target-random.lisp).

The leading bit of any normal number (i.e., any number that is not a subnormal, those extremely close to 0) is assumed to be set. Practically, while a double may have 53 significant bits in its significand, the internal representation skips the assumed leading bit and only stores the following 52. So, starting with 1.0 (a normal number), clobbering the significand with random bits, then subtracting 1.0, is a straightforward bit-bashing approach.
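
A rough Python sketch of that approach (the function name is hypothetical, and it packs the bits with struct rather than working at the machine-word level as SBCL does):

import random
import struct

def clobbered_unit_random():
    # 1.0's sign and exponent bits, with the 52 stored significand bits randomized
    bits = (1023 << 52) | random.getrandbits(52)
    # reinterpret the 64-bit pattern as a double in [1, 2), then shift down to [0, 1)
    return struct.unpack('<d', struct.pack('<Q', bits))[0] - 1.0

Like SBCL's version, this can only ever produce 52 random bits.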

However, after subtracting 1.0, the number is re-normalized so that the next-highest set bit moves into the assumed position, which frees up one more bit that could have been set randomly while still keeping the discrete uniform distribution perfectly smooth.
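
The cleaner route is to skip the bit-bashing altogether: draw the bits as an integer and scale by \(2^{-53}\), which is exact, so no rounding occurs. (A sketch; this is also, in effect, how CPython builds its 53-bit randoms from two 32-bit draws.)

import random

def runif53():
    # 53 random bits, placed on the 2^-53 grid without rounding
    return random.getrandbits(53) * 2.0**-53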

3 R

In R, things are a bit different. Floats are again doubles with 53 bits of precision:

sigbits <- function () {
    x <- 1.0
    i <- 0
    while (x != 2.0) {
        x <- x/2 + 1
        i <- i + 1
    }
    i
}

sigbits()
53

However, its randoms have only 32 significant bits:

lowbits <- function (x, shift) {
    y <- x * 2^shift
    y - trunc(y)
}

lowbits(runif(10), 31)
0.5
0.5
0.5
0.5
0
0
0
0
0
0

This is mentioned briefly in the docs (entry for base::Random):

Do not rely on randomness of low-order bits from RNGs. Most of the supplied uniform generators return 32-bit integer values that are converted to doubles, so they take at most \(2^{32}\) distinct values….

This can cause some algorithms that rely on randoms to appear to be more accurate than they really are.

4 Fixing low randomness

If your run-time isn't emitting enough random bits, you can work around it fairly easily, although there is a trick involved.

The main idea is to generate two random numbers and add them together, with one shifted to the right by the correct number of bits. Really, the number isn't being shifted; only its exponent is changed.

The problem with that is that the higher-order random (which is unshifted) may not have its leading \(2^{-1}\) bit set; so, as the numbers are conceptually aligned for addition, you'll end up with precision down to bits of a lower order than expected.

In other words, numbers in the range \([0.5, 1)\) will have precision down to \(2^{-n}\) (for some fixed \(n\)), those in the range \([0.25, 0.5)\) precision down to \(2^{-n-1}\), those in \([0.125, 0.25)\) down to \(2^{-n-2}\), and so on.
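
To make that concrete, here is a Python sketch using a simulated 32-bit source (since Python's own generator already emits 53 bits; r32 and runif_naive are hypothetical names):

import random

def r32():
    # 32 random bits in [0, 1), like R's generators
    return random.getrandbits(32) * 2.0**-32

def runif_naive():
    hi = r32()                    # bits at 2^-1 .. 2^-32
    lo = r32()                    # shifted down to follow hi's bits
    return hi + lo * 2.0**-32     # how many of lo's bits survive depends on hi's leading bit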

One approach would be to force a higher-order bit to peg the precision to the desired range. The first bit available for this is \(2^0\), or 1.0. So, the higher-order random first has 1.0 added to it to peg the precision, then the lower-order random (shifted in place) is added, and finally the 1.0 is subtracted back out.
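
Pegging then looks like this (again only a sketch, reusing the simulated 32-bit source above):

def runif_pegged():
    hi = r32()
    lo = r32()
    return (1.0 + hi + lo * 2.0**-32) - 1.0    # peg at 2^0 = 1.0, add, then remove the peg

With the peg occupying the \(2^0\) position, the 53 available bits only reach down to \(2^{-52}\), which is the sacrificed bit mentioned next.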

Beyond sacrificing a bit, it doesn't really work. Because of the default round-to-nearest (ties-to-even) rule, a very small number of cases will round up to 1.0, whose frequency should be 0. On the flip side, 0.0 will be under-represented by the same very small amount. This could be fixed by setting any output of 1.0 to 0.0. However, the problem can be avoided entirely.
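
The worst case is easy to exhibit by hand (values chosen deliberately here, not drawn from a generator): with both draws at their maximum, the rounded sum carries up to 2.0, and the subtraction leaves exactly 1.0.

hi = 1.0 - 2.0**-32            # all 32 high-order bits set
lo = 1.0 - 2.0**-32            # all 32 low-order bits set
(1.0 + hi + lo * 2.0**-32) - 1.0
1.0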

In the end, the trick isn't really that tricky. It's just to recognize that rounding should be avoided, and so the addends need to be appropriately truncated. The higher-order random is already fine; the lower-order random can be shifted left, truncated, then shifted right. With that, we can get 53 random bits in R:

runif2 <- function (n) {
    runif(n) + trunc(runif(n) * 2^21) / 2^53
}

x <- runif2(10)
lowbits(x, 52)
0
0
0.5
0
0
0
0.5
0.5
0.5
0.5
