Bellard achieved this with just a humble home computer. Well, a rather turbo-charged home PC, if truth be told: "My computation used a single Core i7 Quad Core CPU at 2.93 GHz giving a peak processing power of 46.9 Gflops. So the supercomputer [used in the previous record] is about 2000 times faster than my computer. However, my computation lasted 116 days, which is 96 times slower than the supercomputer for about the same number of digits. So my computation is roughly 20 times more efficient." You can just see him gloating! Because of the amount of data, he also used 7.5 TB of disk storage across five 1.5 TB hard disks.

But why did he bother? I mean, it took 131 days to complete the whole job! "I am not especially interested in the digits of Pi, but in the various algorithms involved to do arbitrary-precision arithmetic. Optimizing these algorithms to get good performance is a difficult programming challenge."

However, mathematicians are also interested in the expansion of pi for seemingly more esoteric reasons. We know that pi is irrational and transcendental, but is it normal? What's normal, anyway? In mathematical terms, a "normal number" is one in which the probability of finding any given string of digits within its expansion is the same as would be expected from a random sequence. Not only that, but a number is *absolutely* normal if it is normal in every natural number base. And you thought normality was getting a job and learning to do the dishes.

Let's stick to decimal numbers to avoid any headaches. The probability of finding a '9' at a given position in a random string of digits is 0.1. Similarly, the probability of finding the string '42' starting at a given position is 0.01, and so on. So, for example, the fraction 1/9 (= 0.111111...) is far from normal: the probability of finding a '1' is 1.0 and no other digit appears at all. In binary, this fraction is written as 0.000111000111000111..., and so on. Here the probabilities of finding a '0' and a '1' are each 0.5, which *is* normal in base 2, but the probability of finding a '00' is one third rather than the expected 0.25, so again it falls foul of the rules for normality.
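These block frequencies are easy to check empirically. As a sketch (the function name `block_frequency` is my own, not anything from the article), here is how one might count length-k substrings with a sliding window over the binary expansion of 1/9 and confirm the figures above:

```python
from collections import Counter

def block_frequency(digits: str, k: int) -> dict:
    """Frequency of every length-k substring, counted with a sliding window."""
    windows = [digits[i:i + k] for i in range(len(digits) - k + 1)]
    counts = Counter(windows)
    total = len(windows)
    return {block: n / total for block, n in counts.items()}

# Binary expansion of 1/9 = 0.000111000111... (repeating block '000111')
bits = "000111" * 10_000

print(block_frequency(bits, 1))        # '0' and '1' each occur with frequency 0.5
print(block_frequency(bits, 2)["00"])  # ~1/3, not the 0.25 normality would require
```

Run on a genuinely normal expansion, every length-2 block would approach 0.25; here '00' stubbornly stays at a third, which is exactly why 1/9 fails the test.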

It is difficult to demonstrate normality here in an article, as it would involve searching through enormous numbers of digits; hence the fascination with stretching the expansion of pi as far as is computationally possible. It is thought that pi is indeed normal, but a solid proof has so far proved elusive. In case you think this is tantamount to mathematical navel-gazing, such questions matter in the field of file compression. We have all seen the same image compression algorithm give very different results depending on the nature of the image. Files are finite but can be very big. If a file, seen as a binary string by your binary computer, is normal, then it is essentially random as far as your digital abacus can discern, and any attempt at compression is futile.
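The point is easy to see in practice. As an illustration (using zlib as a stand-in for "any compressor", and OS entropy as a stand-in for a statistically normal bit string), a patterned file shrinks dramatically while a random one refuses to shrink at all:

```python
import os
import zlib

structured = b"0123456789" * 10_000  # 100 kB of a repeating 10-byte pattern
random_ish = os.urandom(100_000)     # 100 kB of OS-supplied randomness

for name, data in [("structured", structured), ("random", random_ish)]:
    packed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```

The structured input collapses to a few hundred bytes; the random input typically comes out slightly *larger* than it went in, since the compressor can find no pattern to exploit and still has to pay its own format overhead.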

Being normal is thus, in this statistical sense, a hallmark of being random.
