Why is 0.1 + 0.2 equal to 0.30000000000000004?

10 Dec 2015

The answer is:

0.1 and 0.2 cannot be represented exactly in the base 2 system; they are repeating fractions in base 2. Since a computer has limited memory and uses the base 2 system, it can only approximate these numbers.

0.1 (base 10) is actually stored as the value 0.1000000000000000055511151231257827021181583404541015625 or 0.1100110011 0011001100 1100110011 0011001100 1100110011 010 x 2^-3 in the base 2 system

0.2 (base 10) is stored as the value 0.200000000000000011102230246251565404236316680908203125 or 0.1100110011 0011001100 1100110011 0011001100 1100110011 010 x 2^-2 in the base 2 system

0.3 is stored as 0.299999999999999988897769753748434595763683319091796875
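
You don't have to take those long decimals on faith. Here's a quick way to see the exact stored values in the Scala REPL (the java.math.BigDecimal constructor that takes a Double converts the stored binary value to decimal exactly):

scala> new java.math.BigDecimal(0.1)
res0: java.math.BigDecimal = 0.1000000000000000055511151231257827021181583404541015625

scala> new java.math.BigDecimal(0.2)
res1: java.math.BigDecimal = 0.200000000000000011102230246251565404236316680908203125

scala> new java.math.BigDecimal(0.3)
res2: java.math.BigDecimal = 0.299999999999999988897769753748434595763683319091796875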

However, 0.1 + 0.2 is actually 0.1100110011 0011001100 1100110011 0011001100 1100110011 010 x 2^-3 + 0.1100110011 0011001100 1100110011 0011001100 1100110011 010 x 2^-2

or

0.1100110011 0011001100 1100110011 0011001100 1100110011 010 x 2^-3 +

1.1001100110 0110011001 1001100110 0110011001 1001100110 100 x 2^-3

=

10.0110011001100110011001100110011001100110011001100111 x 2^-3

That exact sum needs 54 significant bits, which doesn't fit into a 53-bit fraction, so the computer rounds it to the nearest representable number:

10.011001100110011001100110011001100110011001100110100 x 2^-3 =

0.3000000000000000444089209850062616169452667236328125 (base 10)

That's not the number stored for 0.3 (which, as we saw, is 0.299999999999999988897769753748434595763683319091796875). Hence, the computer outputs it as 0.30000000000000004 rather than 0.3.
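
This, too, is easy to check in the Scala REPL; the last line prints the exact decimal value of the stored sum:

scala> 0.1 + 0.2 == 0.3
res0: Boolean = false

scala> 0.1 + 0.2
res1: Double = 0.30000000000000004

scala> new java.math.BigDecimal(0.1 + 0.2)
res2: java.math.BigDecimal = 0.3000000000000000444089209850062616169452667236328125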

Now the question is:

how does it round 0.3000000000000000444089209850062616169452667236328125 to 0.30000000000000004?

It's a bit confusing. Now we have a perfectly representable number, but we decide to round it to something less precise (around 17 significant decimal digits) when outputting it. Why?

Well, the exact decimal expansion of a stored binary number can be very long (more than 50 digits, as we saw above), and we can only print a finite number of digits, so we cut it off. But if we cut off too many digits, converting the printed number back to binary may not give the same stored number, and repeated conversions back and forth could drift further and further from the original value.

Therefore, we want the round-trip conversion to be consistent while printing as few digits as possible. There's a proof that, with the 53-bit fraction of the 64-bit double-precision format (which is what we normally use), 17 significant decimal digits are always enough to convert a double to text and back to the exact same double, while any decimal number with at most 15 significant digits survives the trip through a double unchanged.

This means decimal numbers with up to 15 significant digits are totally consistent, but 16- and 17-digit decimal numbers only sometimes survive the round trip.
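
The Scala output below shows the text-to-double-to-text direction failing for some 17-digit inputs. The other direction (double to 17-digit text and back to a double) always works; here's a quick hand-rolled check in the REPL, where "%.17g" is simply one way to ask for 17 significant digits:

scala> val x = 0.1 + 0.2
x: Double = 0.30000000000000004

scala> "%.17g".format(x).toDouble == x
res0: Boolean = true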

Here's some Scala output for numbers with 17 significant digits around 0.3:

scala> 0.29999999999999998
res1: Double = 0.3

scala> 0.29999999999999997
res2: Double = 0.3

scala> 0.29999999999999996
res3: Double = 0.29999999999999993

scala> 0.29999999999999995
res4: Double = 0.29999999999999993

Take the first example: 0.29999999999999998 doesn't have a consistent round-trip conversion. 0.29999999999999998's nearest representable form is:

0.1001100110 0110011001 1001100110 0110011001 1001100110 011 x 2^-1

which is 0.299999999999999988897769753748434595763683319091796875, the very same value that represents 0.3 (base 10).

That's why 0.29999999999999998 is converted to 0.3.
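
You can confirm in the REPL that the two literals end up as the exact same bit pattern:

scala> 0.29999999999999998 == 0.3
res0: Boolean = true

scala> java.lang.Double.doubleToLongBits(0.29999999999999998) == java.lang.Double.doubleToLongBits(0.3)
res1: Boolean = true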

Now the question is:

how does a computer map 0.299999999999999988897769753748434595763683319091796875 to 0.3?

This answer says that it is a non-trivial algorithm. It's probably some form of rounding to the fewest decimal digits (at most 17 significant digits) that still convert back to the same double.
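
Just to illustrate that idea, here's a naive sketch of my own (the name shortestDecimal is made up, and the real algorithm behind Double.toString is much smarter): try 1, 2, ..., 17 significant digits and keep the first string that parses back to the same Double.

// Find the shortest decimal string (up to 17 significant digits) that
// round-trips back to the same Double.
def shortestDecimal(x: Double): String =
  (1 to 17).iterator
    .map(p => s"%.${p}g".format(x))   // format with p significant digits
    .find(_.toDouble == x)            // first one that parses back to the same Double
    .getOrElse(x.toString)

shortestDecimal(0.3)        // "0.3"
shortestDecimal(0.1 + 0.2)  // "0.30000000000000004"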

The last question is:

how do we find the computer-represented binary for 0.1?

First, I need to tell you how to write a fractional number in the base 2 system.

In the base 10 system, the number looks like this: 0.001245 = 0.1245 x 10^-2

In the base 2 system, the number looks like this: 0.0011001, which can be re-written as 0.11001 x 2^-2

The general form is 0.ddddd x 2^e. That is basically IEEE 754. The ddddd part is called the fraction, and e is called the exponent.

In IEEE 754 double precision, there are 53 bits for the fraction and 11 bits for the exponent.
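
If you want to look at those bits directly, here's one way in the Scala REPL. doubleToLongBits returns the raw 64-bit pattern (1 sign bit, 11 exponent bits, and 52 explicitly stored fraction bits; the 53rd, leading bit is implicit), shown here in hex:

scala> java.lang.Long.toHexString(java.lang.Double.doubleToLongBits(0.1))
res0: String = 3fb999999999999a

scala> java.lang.Long.toHexString(java.lang.Double.doubleToLongBits(0.2))
res1: String = 3fc999999999999a

scala> java.lang.Long.toHexString(java.lang.Double.doubleToLongBits(0.1 + 0.2))
res2: String = 3fd3333333333334

scala> java.lang.Long.toHexString(java.lang.Double.doubleToLongBits(0.3))
res3: String = 3fd3333333333333

Note that 0.1 + 0.2 and 0.3 land on adjacent bit patterns, one unit in the last place apart.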

With that, we can actually compute the represented form of 0.1. We convert the number to the base 2 system, round it to 53 significant bits, and then convert it back to the base 10 system. That's the represented form of 0.1.

So, first we convert 0.1 to the base 2 system. Here's how: repeatedly multiply the fractional part by 2; the integer part of each result (0 or 1) is the next binary digit.

The number is 0.0(0011)repeating = 0.11(0011)repeating x 2^-3
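
Here's a small Scala sketch of that multiply-by-2 procedure (the function name is mine; BigDecimal is used so the intermediate arithmetic stays exact):

// Generate the first n binary digits after the point of a fraction in [0, 1).
def binaryFractionDigits(x: BigDecimal, n: Int): String = {
  val sb = new StringBuilder
  var frac = x
  for (_ <- 1 to n) {
    frac *= 2                                     // shift the next binary digit in front of the point
    if (frac >= 1) { sb += '1'; frac -= 1 } else sb += '0'
  }
  sb.toString
}

binaryFractionDigits(BigDecimal("0.1"), 20)  // "00011001100110011001" -- the 0011 pattern repeats forever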

Next, we round it to 53 significant bits. Since we don't know yet whether rounding down or up is better, we have 2 candidates now:

0.1100110011 0011001100 1100110011 0011001100 1100110011 001 x 2^-3

0.1100110011 0011001100 1100110011 0011001100 1100110011 010 x 2^-3

We rearrange them a little, so that they can be easily calculated.

1100110011 0011001100 1100110011 0011001100 1100110011 001 x 2^-56

1100110011 0011001100 1100110011 0011001100 1100110011 010 x 2^-56

Now we turn them into base-10 numbers by converting each 53-bit binary integer to decimal and multiplying it by 2^-56.

7205759403792793 x 2^-56 = 0.09999999999999999167332731531132594682276248931884765625

7205759403792794 x 2^-56 = 0.1000000000000000055511151231257827021181583404541015625

And 0.1000000000000000055511151231257827021181583404541015625 is closer to 0.1. Therefore, 0.1 is represented as 0.1000000000000000055511151231257827021181583404541015625 in the computer.
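
If you'd like to double-check that last bit of arithmetic, java.math.BigDecimal can do it exactly; the division below terminates because we're only dividing by a power of 2 (this is just a verification sketch, not part of the derivation):

// The 53-bit fraction, read as a plain integer, is the 7205759403792794 above:
java.lang.Long.parseLong("11001100110011001100110011001100110011001100110011010", 2)
// 7205759403792794

// Dividing it by 2^56 exactly reproduces the value the computer stores for 0.1:
val exact = new java.math.BigDecimal(7205759403792794L)
              .divide(new java.math.BigDecimal(2).pow(56))
exact.compareTo(new java.math.BigDecimal(0.1)) == 0   // true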

Next time your friends ask why 0.1 + 0.2 = 0.30000000000000004, you can show them the math :)
