Fast Inverse Square Root
"Fast inverse square root" refers to a famous algorithm invented by William Kahan, especially as implemented in Quake III Arena.
If you prefer, there is also an excellent video covering this topic here.
Although the algorithm has been made mostly obsolete due to advances in hardware technology, it was previously used to quickly and accurately (with a maximum of 1% error) calculate the multiplicative inverse of the square root of a number ($\frac{1}{\sqrt{x}}$).
float y = 1 / sqrt(x);
The primary use of determining the inverse square root of a number is for normalizing vectors (maintaining their direction while setting their magnitude to 1). Normalized vectors are useful in computer graphics and physics simulations because, just like the number 1, multiplying by them doesn't change the magnitude of other vectors (while still modifying their direction).
Vectors can be normalized by dividing them by their magnitudes. The magnitude of a vector is determined by taking the square root of the sum of the squares of its components (the two-dimensional version of this is called the Pythagorean theorem: $c = \sqrt{a^2 + b^2}$). In other words, multiplying a vector by the inverse square root of the sum of the squares of its components (the inverse of its magnitude) normalizes it.
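For instance, here is a minimal sketch of normalizing a three-dimensional vector by multiplying each component by the inverse of its magnitude; the Vec3 type and function names are illustrative and not taken from the Quake III source:

#include <math.h>
#include <stdio.h>

/* A minimal sketch of vector normalization; the Vec3 type and function
   names are illustrative and not from the Quake III source. */
typedef struct { float x, y, z; } Vec3;

Vec3 vec3_normalize(Vec3 v) {
    /* Multiply each component by the inverse of the vector's magnitude. */
    float inv_len = 1.0f / sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 n = { v.x * inv_len, v.y * inv_len, v.z * inv_len };
    return n;
}

int main(void) {
    Vec3 v = { 3.0f, 0.0f, 4.0f };
    Vec3 n = vec3_normalize(v);
    printf("%f %f %f\n", n.x, n.y, n.z);   /* prints 0.600000 0.000000 0.800000 */
    return 0;
}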
The thing that makes fast inverse square root so important is, as the name implies, the computation speed of the algorithm. Division is very slow compared to the other basic arithmetic operations, so it is important to minimize its use. Even a small improvement means a lot in a function like fast inverse square root, which could be used hundreds or thousands of times per second.
Code Analysis
The remainder of this article will be an analysis of the Quake III Arena code, which can be found here. For reference, here is a slightly modified version of the function:
float Q_rsqrt(float y) {
    float x2 = y * 0.5f;          // half of the input, used by the Newton step below
    long i = *(long*)&y;          // reinterpret the bits of the float as an integer
    i = 0x5f3759df - (i >> 1);    // the "magic constant" minus half the bit pattern
    y = *(float*)&i;              // reinterpret the bits back into a float
    y *= 1.5f - x2 * y * y;       // one iteration of Newton's method
    return y;
}
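For a quick sanity check, a small test harness (hypothetical, not part of the Quake III source) can compare the approximation against the straightforward 1 / sqrt(x). Note that the function as written assumes a platform where long is 32 bits wide, as it was on the systems Quake III targeted:

#include <math.h>
#include <stdio.h>

/* Assumes the Q_rsqrt definition above appears earlier in the same file. */
int main(void) {
    float x = 42.0f;
    float fast  = Q_rsqrt(x);
    float exact = 1.0f / sqrtf(x);
    printf("fast = %f, exact = %f, error = %f%%\n",
           fast, exact, 100.0f * fabsf(fast - exact) / exact);
    return 0;
}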
IEEE 754
In order to understand how the function works, one must first understand how floating-point variables are stored in memory. Whenever a variable is declared as a float in C, it is stored according to the rules of IEEE 754. IEEE 754 was created by William Kahan, in collaboration with Intel, in the 1980s. Today, almost all computers use it to store floating-point values.
IEEE 754 is based on scientific notation, in which numbers are written as a value in the range $[1, B)$ multiplied by some power of $B$, where $B$ is the base of the number. For example, in base ten, $2345 = 2.345 \times 10^3$. The first number ($2.345$ in the example) is called the "mantissa", and the exponent of the $10$ (the $3$) is just called the "exponent."
IEEE 754 floating-point values are stored in 32 bits. The first bit is the "sign bit." The sign bit is 0b1 if the number is negative, or 0b0 if it is positive. The sign bit comes first in order to make it easier to sort floating-point values.
The next 8 bits encode the exponent. This number is right-aligned. Rather than using two's complement (which is a way to store signed integers where the first bit represents the negative of its typical value), the exponent is "biased" by adding $\frac{B^b}{2} - 1$ to its value, where $B$ is the base ($2$) and $b$ is the number of bits in the exponent ($8$), giving a bias of $127$, such that the smallest representable exponent is represented as 0b00000001. This is done so that larger numbers visibly appear larger, and to make floating-point values easier to sort.
The final 23 bits encode the mantissa. These bits are left-aligned, with each bit representing $2^{-i}$, where $i$ is the index of the bit (counting from $1$). Since computers store values in base-two (binary) rather than base-ten (decimal), the mantissa is instead multiplied by a power of 2, and must fall in the range $[1, 2)$. Since the integer portion of the mantissa is guaranteed to be 1 by definition, it doesn't need to be stored.
Since it would be impossible to represent certain values (such as $0$) with the standard described above, IEEE 754 also includes a few special rules:
- If the exponent field is all zeros, the number is "denormalized," which means that the exponent becomes $1 - B$ (that is, $-126$), where $B$ is the bias, and the integer $1$ is not added to the mantissa.
- If the exponent field is all ones and the mantissa is all zeros, the value represented is infinity.
- If the exponent field is all ones and the mantissa is not all zeros, the value represented is not a number.
For example, to store the number $3.75$ in IEEE 754:
- The number is positive, so the sign bit is 0b0.
- Since $3.75 = 1.875 \times 2^1$, the exponent is $1$. Adding the bias of $127$, this means that the represented value must be $128$, or 0b10000000.
- The mantissa is $1.875$ ($0b1.111$ in binary). Since the integer part doesn't need to be represented, the bits represent $0.875$, or 0b11100000000000000000000.
- All together, $3.75$ is represented as 0b01000000011100000000000000000000.
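As a sketch of how this layout can be inspected in practice (the snippet below is illustrative, not part of the Quake III code, and uses memcpy to read the bits in a well-defined way), the three fields of 3.75f can be pulled apart and printed:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float x = 3.75f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);           /* copy the raw bit pattern */

    unsigned sign     = bits >> 31;           /* 1 sign bit */
    unsigned exponent = (bits >> 23) & 0xFF;  /* 8 exponent bits, biased by 127 */
    unsigned mantissa = bits & 0x7FFFFF;      /* 23 mantissa bits, implicit leading 1 */

    /* Expected: sign = 0, exponent = 128, mantissa = 0x700000 (0b111 then twenty 0s). */
    printf("sign = %u, exponent = %u, mantissa = 0x%06X\n", sign, exponent, mantissa);
    return 0;
}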
The Magic Constant
Since the floating-point values that fast inverse square root is designed to work with are always positive (the square root of a negative number is imaginary), the sign bit is always 0b0, so it can be disregarded. Since the 8-bit exponent is followed by the 23-bit mantissa, if given a (biased) binary exponent $E$ and binary mantissa bits $M$, the IEEE 754 representation of a floating-point number can be calculated as $I = E \cdot 2^{23} + M$, since multiplying $E$ by $2^{23}$ shifts it 23 binary digits to the left. Reverse-engineering this formula yields the following equations for the mantissa and exponent bits of a given number $y = (1 + m) \cdot 2^e$, with $0 \le m < 1$:

$$M = m \cdot 2^{23}, \qquad E = e + 127$$

Where $127$ comes from the bias in the exponent bits. For example:

$$3.75 = (1 + 0.875) \cdot 2^1 \implies M = 0.875 \cdot 2^{23} = 7340032, \quad E = 1 + 127 = 128$$

Taking the binary logarithm of $y = (1 + m) \cdot 2^e$ and simplifying yields the following equation:

$$\log_2(y) = e + \log_2(1 + m)$$

For small values of $m$, $\log_2(1 + m) \approx m$. A corrective term, $\sigma$, is added to this, yielding $\log_2(1 + m) \approx m + \sigma$; a value of $\sigma \approx 0.0430$ roughly minimizes the error of this approximation over the range $0 \le m < 1$. Since $m$ always falls in that range, the equation can be simplified further as follows:

$$\log_2(y) \approx e + m + \sigma = \frac{E \cdot 2^{23} + M}{2^{23}} - 127 + \sigma$$

The appearance of $E \cdot 2^{23} + M$ here is important because, as explained previously, that is the IEEE 754 representation $I$ of $y$. In other words, the IEEE 754 representation of a number is its own binary logarithm, albeit scaled and shifted by some constants:

$$\log_2(y) \approx \frac{I}{2^{23}} - 127 + \sigma$$
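To make this concrete, here is a small illustrative check (it uses memcpy and a fixed-width integer, which are choices of this sketch rather than anything in the original code) that the bit pattern, scaled by $2^{-23}$ and shifted by $127$, lands near the true binary logarithm:

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float y = 3.75f;
    uint32_t i;
    memcpy(&i, &y, sizeof i);                 /* read the raw bits of y */

    /* I / 2^23 - 127; the leftover difference from log2(y) is what the
       corrective term sigma accounts for. */
    double approx = i / 8388608.0 - 127.0;
    printf("approx = %f, exact = %f\n", approx, log2(y));   /* 1.875 vs ~1.907 */
    return 0;
}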
Evil Floating-Point Bit-Level Hacking
The second line of code in Q_rsqrt (as written above) is as follows:
long i = *(long*)&y;
Usually, when casting a value from one type to another in C, the bits that represent that value must be changed so that the value remains consistent. For example, 1 as a float is represented as 0b00111111100000000000000000000000, while 1 as a char is represented as 0b00000001. This line circumvents this automatic process by telling C that the pointer to the float y, &y, is a pointer to a long (long*), and then dereferencing it (*). This allows i, which is a long, to have the exact same bit representation as y, which is a float. This is useful because it is possible to use bit manipulation techniques on longs but not floats.
One such bit manipulation technique is that shifting a long (binary) number to the right one place (i >> 1) halves it, and shifting it to the left one place (i << 1) doubles it, both of which are very fast operations. This is similar to the way that shifting a decimal number to the right one place (such as $420$ to $42$) divides it by $10$.
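A quick illustrative check of this behavior:

#include <stdio.h>

int main(void) {
    long i = 80;
    printf("%ld %ld\n", i >> 1, i << 1);   /* prints 40 160 */
    return 0;
}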
Note that, after the next step, the same operation is applied in reverse:
y = *(float*)&i;
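To see both directions of the reinterpretation in action, here is a small illustrative snippet; it uses a fixed-width uint32_t and memcpy instead of the pointer cast above purely so the example stays well-defined on modern compilers:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float y = 1.0f;
    uint32_t i;

    memcpy(&i, &y, sizeof i);   /* float -> raw bits */
    printf("bits of 1.0f: 0x%08X\n", (unsigned int)i);   /* 0x3F800000 */

    memcpy(&y, &i, sizeof y);   /* raw bits -> float (the reverse direction) */
    printf("back to float: %f\n", y);                    /* 1.000000 */
    return 0;
}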
Applying the Magic Constant
Recall that halving the exponent of a number yields the square root of that number ($\sqrt{x^a} = x^{a/2}$), and that negating the exponent yields its multiplicative inverse ($x^{-a} = \frac{1}{x^a}$). Therefore, $\frac{1}{\sqrt{x}} = x^{-1/2}$.
As explained in a prior section, i now contains the binary logarithm of y (scaled and shifted by some constants). Using this fact, it is possible to simplify the algorithm by finding the binary logarithm of the multiplicative inverse square root of y, rather than finding the multiplicative inverse square root of y directly.
The third line of code is as follows:
i = 0x5f3759df - (i >> 1);
The - (i >> 1) part of the line is effectively halving the binary logarithm of y, while the constant 0x5f3759df comes from the scaling and shifting applied to the binary logarithm previously. The constant can be reverse-engineered from the function $\gamma(y) = \frac{1}{\sqrt{y}} = y^{-1/2}$ (where $\gamma$ is the desired result) as follows.

Find the binary logarithm of both sides:

$$\log_2(\gamma) = -\frac{1}{2}\log_2(y)$$

Replace both sides with the bit representation, using $\log_2(x) \approx \frac{I_x}{2^{23}} - 127 + \sigma$:

$$\frac{I_\gamma}{2^{23}} - 127 + \sigma \approx -\frac{1}{2}\left(\frac{I_y}{2^{23}} - 127 + \sigma\right)$$

Solve for the bits of $\gamma$:

$$I_\gamma \approx \frac{3}{2} \cdot 2^{23} \, (127 - \sigma) - \frac{1}{2} I_y$$

$-\frac{1}{2} I_y$ is equivalent to - (i >> 1) in the code, which leaves $\frac{3}{2} \cdot 2^{23} \, (127 - \sigma)$, which makes up the constant 0x5f3759df. (The exact constant in the code corresponds to $\sigma \approx 0.0450$, slightly different from the value that minimizes the error of the logarithm approximation on its own.)
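As a rough check (this snippet is not part of the original code), the constant can be recomputed directly from that expression. The value of sigma below is the approximate value implied by the Quake III constant:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Recompute (3/2) * 2^23 * (127 - sigma); sigma is the approximate
       corrective term implied by the published constant. */
    double sigma = 0.0450466;
    long r = lround(1.5 * (double)(1 << 23) * (127.0 - sigma));
    /* Should print a value equal to (or within one of) 0x5f3759df. */
    printf("computed: 0x%lx, Quake III: 0x%x\n", (unsigned long)r, 0x5f3759dfU);
    return 0;
}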
Newton's Method
Newton's method is an iterative algorithm which produces successively better approximations to the roots of a real-valued function. In other words, every time we run our approximation through Newton's method, it will become closer to the real value.
The approximation that we've computed thus far is already so close to the real value that just one iteration of Newton's method will bring the error within 1%.
Newton's method is accomplished in the code like this:
y *= 1.5f - x2 * y * y;
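To see where this expression comes from, here is a sketch of one Newton iteration applied to $f(y) = \frac{1}{y^2} - x$, whose positive root is exactly $y = \frac{1}{\sqrt{x}}$:

$$y_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)} = y_n - \frac{\frac{1}{y_n^2} - x}{-\frac{2}{y_n^3}} = y_n\left(\frac{3}{2} - \frac{x}{2}\,y_n^2\right)$$

Substituting the x2 computed on the first line of the function (half of the input) turns this into y * (1.5f - x2 * y * y), which is exactly the expression in the code. The original Quake III source also contains a commented-out second iteration of the same line for extra accuracy.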