But after doing a bit of research, I figured out that this was not an error. It's just math, and the way computers deal with numbers.
And it's not alone; there are other weird cases too, like the ones below -
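For instance, here are a few classic cases you can try in any JavaScript console (the original examples may have differed; these are just well-known ones):

console.log(0.1 + 0.2); // 0.30000000000000004
console.log(0.3 - 0.1); // 0.19999999999999998
console.log(0.1 + 0.7); // 0.7999999999999999
console.log(0.1 * 3);   // 0.30000000000000004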
What’s happening behind the scenes?
The simple reason behind it is that computers use a base-2 (binary) floating-point number system.
Let’s understand it in detail with a very simple analogy.
We humans use the base-10 (decimal) number system to read, write and understand numbers in daily life.
When someone says 1/10, we understand it and write it exactly as 0.1, but for 1/3 we don’t have an exact decimal value. It’s 0.333333… (never ending), so for practical purposes we round it off to something like 0.33 (the nearest value we can write down completely). Binary has exactly the same problem, just with different numbers: 1/10, which is exact in decimal, never terminates in base 2.
let a = 0.1;
In binary -

0.1 = 0.0001100110011001100110011… (the pattern 0011 repeats forever)

A 64-bit float can hold only 53 significant bits, so this never-ending fraction has to be rounded, and the value actually stored in a is

0.1000000000000000055511151231257827021181583404541015625
In the same way, 0.2 is interpreted as -

0.2 = 0.001100110011… in binary, which rounds to the stored value

0.200000000000000011102230246251565404236316680908203125
So when we do 0.1 + 0.2, we are really adding these two rounded values, and the sum, after being rounded again to fit into 64 bits, is

0.3000000000000000444089209850062616169452667236328125

Note that this is a different 64-bit value from the literal 0.3, which is stored as 0.299999999999999988897769753748434595763683319091796875.
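You don’t have to take these long decimals on faith. toFixed() prints a value to a chosen number of decimal places, so the rounding is visible in any JavaScript console:

// Reveal (a 20-digit view of) the doubles actually stored.
console.log((0.1).toFixed(20));       // 0.10000000000000000555
console.log((0.2).toFixed(20));       // 0.20000000000000001110
console.log((0.3).toFixed(20));       // 0.29999999999999998890
console.log((0.1 + 0.2).toFixed(20)); // 0.30000000000000004441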
Now we need to understand what happens when we console.log such a value. JavaScript doesn’t print the full stored decimal; it prints the shortest decimal string that maps back to exactly the same 64-bit value.
So, for example, if we log the value of a -

console.log(a);

Output is 0.1, because 0.1 is the shortest decimal that rounds to the stored value 0.10000000000000000555…, so JavaScript picks it.
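A quick way to see this shortest-round-trip rule in action (a small sketch of my own, not from the original post):

// The long literal parses to the very same 64-bit value as 0.1,
// so both print as the shortest string that round-trips: "0.1".
console.log(0.1000000000000000055511151231257827021181583404541015625); // 0.1
console.log(0.1 === 0.1000000000000000055511151231257827021181583404541015625); // true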
That’s why, when we try to log the result of 0.1 + 0.2, JavaScript looks for the shortest decimal that maps back to the stored sum we have already concluded, which is -

0.3000000000000000444089209850062616169452667236328125

That shortest decimal is not 0.3 (0.3 belongs to a different 64-bit value); it is 0.30000000000000004.

So, this is why the answer of 0.1 + 0.2 is 0.30000000000000004.
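This also explains the classic equality check, and hints at the usual way around it (a sketch; Number.EPSILON is the standard built-in tolerance):

console.log(0.1 + 0.2 === 0.3); // false: two different 64-bit values
// Common workaround: compare with a small tolerance instead of exact equality.
console.log(Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON); // true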
I hope you now have a fair idea of what to say when someone asks why 0.1 + 0.2 is not 0.3.