Consider the following Celsius-to-Fahrenheit conversion:
(9/5) * celsius + 32
So the question is: what does 9/5 evaluate to in C#?
It evaluates to 1, not 1.8 as we expected. Why?
C# assumes that when a number is written with no decimal digits, it is an integer.
So integer 9 divided by integer 5 is done in integer arithmetic: it evaluates to 1
and the remainder is discarded.
So you have to trick the compiler into producing a floating-point result.
9.0/5.0 does the trick.
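Here is a minimal console sketch (the class and variable names are mine) that shows both versions side by side:

using System;

class TemperatureDemo
{
    static void Main()
    {
        double celsius = 100;

        // 9/5 is int/int, so it truncates to 1 before the rest is evaluated.
        double wrong = (9 / 5) * celsius + 32;      // 1 * 100 + 32 = 132

        // Floating-point literals force a double division: 9.0/5.0 is 1.8.
        double right = (9.0 / 5.0) * celsius + 32;  // 1.8 * 100 + 32 = 212

        Console.WriteLine(wrong);   // 132
        Console.WriteLine(right);   // 212
    }
}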
Here is another gotcha that I found on the web.
int temp = (int)(100 * 36.41);
assigns the value 3640 to temp.
If you don't cast to int, and try this:
double temp = 100 * 36.41;
you'll find that temp is 3640.9999999999995
The cast to int truncates (rounds toward zero), while Convert.ToInt32 rounds to
the nearest integer.
If you try
int temp = Convert.ToInt32(100 * 36.41);
you'll find that temp is 3641
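If you want to see both behaviours in one place, a small sketch like this (names are mine) prints all three results:

using System;

class CastVsConvert
{
    static void Main()
    {
        double product = 100 * 36.41;

        int truncated = (int)product;             // cast truncates toward zero
        int rounded = Convert.ToInt32(product);   // rounds to the nearest integer

        // The raw double prints as roughly 3640.9999999999995,
        // depending on the runtime's default formatting.
        Console.WriteLine(product.ToString("R"));
        Console.WriteLine(truncated);   // 3640
        Console.WriteLine(rounded);     // 3641
    }
}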
This is a common problem with floating-point values and is really a limitation
rather than a bug. It is common to all languages that use floating-point values;
it is even a limitation of the math coprocessor in your CPU.
When C# casts the value to an integer it simply truncates, so if the
value is stored as 3640.99999 then it becomes 3640.
You can use Convert.ToInt32 to get the correctly rounded value.
This is one of the big "gotchas" of computer
programming as we know it.
36.41 is a number that cannot be expressed exactly in binary, for the same
reason that 1/3 cannot be expressed exactly in decimal. (It would take an
infinite number of decimal places or binary places, respectively.)
So when you write "36.41" you actually get a binary number very slightly
less than 36.41.
Then you multiply it by 100 and get 3640.9999999 or something like that.
Then you convert -- and if the int conversion is done by truncation, you get 3640.
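You can check that guess yourself: the "G17" format specifier prints enough digits to reveal the value a double actually stores (the exact digits may vary slightly by runtime). A small sketch, with names of my own choosing:

using System;

class ExactnessCheck
{
    static void Main()
    {
        double d = 36.41;

        // Shows the stored double, which is very slightly below 36.41.
        Console.WriteLine(d.ToString("G17"));

        // The gap becomes visible after multiplying by 100.
        Console.WriteLine((100 * d).ToString("G17"));   // about 3640.9999999999995
    }
}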
At least that's my educated guess about what's happening.
The Microsoft "Decimal" numeric type overcomes this; it's actually
represented in decimal, so 36.41 is always exactly 36.41.
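A quick sketch of what that looks like in practice (the literal needs the m suffix to be a decimal; the names are mine):

using System;

class DecimalDemo
{
    static void Main()
    {
        decimal d = 36.41m;           // stored in base 10, so exactly 36.41
        decimal product = 100 * d;    // exactly 3641.00

        Console.WriteLine(product);         // 3641.00
        Console.WriteLine((int)product);    // 3641, even with a plain cast
    }
}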
Read the whole thread here.
http://www.thescripts.com/forum/thread226039.html