What is the difference between decimal, float and double in .NET?
When would someone use one of these?
float (the C# alias for System.Single) and double (the C# alias for System.Double) are floating binary point types. float is 32-bit; double is 64-bit. In other words, they represent a number like this:
10001.10010110011
The binary number and the location of the binary point are both encoded within the value.
decimal (the C# alias for System.Decimal) is a floating decimal point type. In other words, it represents a number like this:
12345.65789
Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.
The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
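To make that concrete, here is a minimal sketch (the exact digits printed assume a recent .NET runtime, which formats doubles with shortest round-trip notation):
using System;
class RepresentationDemo
{
    static void Main()
    {
        // 0.1 has no exact binary representation, so repeated addition drifts:
        double dSum = 0.1 + 0.1 + 0.1;
        Console.WriteLine(dSum == 0.3); // False
        Console.WriteLine(dSum);        // 0.30000000000000004

        // As a decimal, 0.1 is stored exactly:
        decimal mSum = 0.1m + 0.1m + 0.1m;
        Console.WriteLine(mSum == 0.3m); // True

        // But decimal still cannot represent 1/3 exactly:
        Console.WriteLine(1m / 3 * 3); // 0.9999999999999999999999999999
    }
}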
As for what to use when:
For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
For values which are more artefacts of nature and can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain "decimal accuracy". Floating binary point types are much faster to work with than decimals.
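The speed difference is easy to observe with a rough micro-benchmark; this is just a sketch (absolute timings vary by machine), but decimal arithmetic typically comes out an order of magnitude slower:
using System;
using System.Diagnostics;
class SpeedComparison
{
    static void Main()
    {
        const int N = 10_000_000;

        double dSum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 1; i <= N; i++) dSum += 1.0 / i;
        sw.Stop();
        Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

        decimal mSum = 0;
        sw = Stopwatch.StartNew();
        for (int i = 1; i <= N; i++) mSum += 1.0m / i;
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");
    }
}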
Answered 2023-09-20 20:30:48
float/double usually do not represent numbers as 101.101110; normally they are represented as something like 1101010 * 2^(01010010) – with an exponent. - anyone
float is a C# alias keyword and isn't a .NET type; it's System.Single. single and double are floating binary point types. - anyone
Precision is the main difference.
Float - 7 digits (32 bit)
Double - 15-16 digits (64 bit)
Decimal - 28-29 significant digits (128 bit)
Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double/float. Decimals and floats/doubles cannot be compared without a cast, whereas floats and doubles can. Decimals also allow the encoding of trailing zeros.
float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);
Result:
float: 0.3333333
double: 0.333333333333333
decimal: 0.3333333333333333333333333333
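Two smaller points from above are worth sketching out as well: decimal preserves trailing zeros, and decimal cannot be compared with double without an explicit cast.
using System;
class DecimalQuirks
{
    static void Main()
    {
        // decimal keeps the trailing zeros of the values it was built from:
        Console.WriteLine(1.0m);          // 1.0
        Console.WriteLine(1.00m);         // 1.00
        Console.WriteLine(1.0m == 1.00m); // True -- equal in value, different encoding

        double d = 0.5;
        decimal m = 0.5m;
        // Console.WriteLine(d == m);       // compile error: no implicit conversion
        Console.WriteLine((decimal)d == m); // True -- an explicit cast is required

        float f = 0.5f;
        Console.WriteLine(d == f);        // True -- float widens to double implicitly
    }
}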
Answered 2023-09-20 20:30:48
0.1 -- that is rarely the case in the real world! Any finite storage format will conflate an infinite number of possible values to a finite number of bit patterns. For example, float will conflate 0.1 and 0.1 + 1e-8, while decimal will conflate 0.1 and 0.1 + 1e-29. Sure, within a given range, certain values can be represented in any format with zero loss of accuracy (e.g. float can store any integer up to 1.6e7 with zero loss of accuracy) -- but that's still not infinite accuracy. - anyone
0.1 is not a special value! The only thing that makes 0.1 "better" than 0.10000001 is that human beings like base 10. And even with a float value, if you initialize two values with 0.1 the same way, they will both be the same value. It's just that that value won't be exactly 0.1 -- it will be the closest value to 0.1 that can be exactly represented as a float. Sure, with binary floats, (1.0 / 10) * 10 != 1.0, but with decimal floats, (1.0 / 3) * 3 != 1.0 either. Neither is perfectly precise. - anyone
If you do double a = 0.1; double b = 0.1; then a == b will be true. It's just that a and b will both not exactly equal 0.1. In C#, if you do decimal a = 1.0m / 3.0m; decimal b = 1.0m / 3.0m; then a == b will also be true. But in that case, neither a nor b will exactly equal 1/3 -- they will both equal 0.3333.... In both cases, some accuracy is lost due to representation. You stubbornly say that decimal has "infinite" precision, which is false. - anyone
+---------+----------------+---------+----------+---------------------------------------------------------+
| C# | .Net Framework | Signed? | Bytes | Possible Values |
| Type | (System) type | | Occupied | |
+---------+----------------+---------+----------+---------------------------------------------------------+
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                             |
| short | System.Int16 | Yes | 2 | -32,768 to 32,767 |
| int | System.Int32 | Yes | 4 | -2,147,483,648 to 2,147,483,647 |
| long | System.Int64 | Yes | 8 | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| byte | System.Byte | No | 1 | 0 to 255 |
| ushort  | System.UInt16  | No      | 2        | 0 to 65,535                                             |
| uint | System.UInt32 | No | 4 | 0 to 4,294,967,295 |
| ulong   | System.UInt64  | No      | 8        | 0 to 18,446,744,073,709,551,615                         |
| float | System.Single | Yes | 4 | Approximately ±1.5e-45 to ±3.4e38 |
| | | | | with ~6-9 significant figures |
| double | System.Double | Yes | 8 | Approximately ±5.0e-324 to ±1.7e308 |
| | | | | with ~15-17 significant figures |
| decimal | System.Decimal | Yes | 16 | Approximately ±1.0e-28 to ±7.9e28 |
| | | | | with 28-29 significant figures |
| char | System.Char | N/A | 2 | Any Unicode character (16 bit) |
| bool | System.Boolean | N/A | 1 / 2 | true or false |
+---------+----------------+---------+----------+---------------------------------------------------------+
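The floating-point rows of the table can be checked directly against the framework's own constants; a quick sketch:
using System;
class TypeRanges
{
    static void Main()
    {
        // sizeof on the built-in numeric types is allowed in safe code:
        Console.WriteLine($"float:   {sizeof(float)} bytes, up to {float.MaxValue}");
        Console.WriteLine($"double:  {sizeof(double)} bytes, up to {double.MaxValue}");
        Console.WriteLine($"decimal: {sizeof(decimal)} bytes, up to {decimal.MaxValue}");

        // The smallest positive values of the binary types:
        Console.WriteLine(float.Epsilon);  // ~1.5e-45
        Console.WriteLine(double.Epsilon); // ~5e-324
    }
}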
Answered 2023-09-20 20:30:48
The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons: the range is far smaller than double's, there is no representation for special values such as NaN or infinity, and decimal arithmetic is considerably slower.
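A short sketch of two of those reasons -- double has special values for exceptional results, while decimal throws, and the ranges differ enormously:
using System;
class ScientificLimitations
{
    static void Main()
    {
        double zero = 0.0;
        Console.WriteLine(0.0 / zero); // NaN
        Console.WriteLine(1.0 / zero); // Infinity

        // decimal has no NaN or infinity; division by zero throws instead:
        decimal mzero = 0m;
        try { Console.WriteLine(1m / mzero); }
        catch (DivideByZeroException) { Console.WriteLine("decimal: DivideByZeroException"); }

        // And the range gap:
        Console.WriteLine(double.MaxValue);  // ~1.8e308
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (~7.9e28)
    }
}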
Answered 2023-09-20 20:30:48
I won't reiterate the tons of good (and some bad) information already given in other answers and comments, but I will answer your follow-up question with a tip:
When would someone use one of these?
Use decimal for counted values
Use float/double for measured values
Some examples:
money (do we count money or measure money?)
distance (do we count distance or measure distance? *)
scores (do we count scores or measure scores?)
We always count money and should never measure it. We usually measure distance. We often count scores.
* In some cases, what I would call nominal distance, we may indeed want to 'count' distance. For example, maybe we are dealing with country signs that show distances to cities, and we know that those distances never have more than one decimal digit (xxx.x km).
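A sketch of how that advice looks in code (the variable names here are illustrative, not from the answer above):
using System;
class CountedVsMeasured
{
    static void Main()
    {
        // Counted values: exact, human-defined quantities -> decimal
        decimal accountBalance = 1234.56m; // money is counted
        decimal diveScore = 9.5m;          // judges' scores are counted

        // Measured values: inherently approximate -> double
        double distanceKm = 12.74218;      // distance is measured
        double temperatureC = 21.371;      // sensor readings are measured

        Console.WriteLine($"{accountBalance} {diveScore} {distanceKm} {temperatureC}");
    }
}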
Answered 2023-09-20 20:30:48
float has about 7 digits of precision
double has about 15 digits of precision
decimal has about 28 digits of precision
If you need better accuracy, use double instead of float. In modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which practically matters only if you have many of them.
I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic
Answered 2023-09-20 20:30:48
double was proper in accounting applications in those cases (and basically only those cases) where no integer type larger than 32 bits was available, and the double was being used as though it were a 53-bit integer type (e.g. to hold a whole number of pennies, or a whole number of hundredths of a cent). Not much use for such things nowadays, but many languages gained the ability to use double-precision floating-point values long before they gained 64-bit (or in some cases even 32-bit!) integer math. - anyone
Real could IIRC represent values up to 1.8E+19 with unit precision. I would think it would be much saner for an accounting application to use Real to represent a whole number of pennies than... - anyone
double type which had unit accuracy up to 9E15. If one needs to store whole numbers which are bigger than the largest available integer type, using double is apt to be simpler and more efficient than trying to fudge multi-precision math, especially given that while processors have instructions to perform 16x16->32 or... - anyone
No one has mentioned that, in default settings, floats (System.Single) and doubles (System.Double) will never use overflow checking, while decimal (System.Decimal) will always use overflow checking.
I mean:
decimal myNumber = decimal.MaxValue;
myNumber += 1;
throws OverflowException.
But these do not:
float myNumber = float.MaxValue;
myNumber += 1;
&
double myNumber = double.MaxValue;
myNumber += 1;
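Putting those snippets together into something runnable (and taking the comment below into account: adding 1 to float.MaxValue is simply absorbed, so doubling is used to force a binary overflow):
using System;
class OverflowDemo
{
    static void Main()
    {
        // decimal arithmetic always checks for overflow:
        try
        {
            decimal d = decimal.MaxValue;
            d += 1;
        }
        catch (OverflowException)
        {
            Console.WriteLine("decimal: OverflowException");
        }

        // float/double never throw; adding 1 is below the representable
        // granularity at MaxValue, and overflowing yields Infinity quietly:
        float f = float.MaxValue;
        Console.WriteLine(f + 1 == float.MaxValue); // True
        Console.WriteLine(f * 2);                   // Infinity
    }
}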
Answered 2023-09-20 20:30:48
float.MaxValue+1 == float.MaxValue, just as decimal.MaxValue+0.1D == decimal.MaxValue. Perhaps you meant something like float.MaxValue*2? - anyone
System.Decimal throws an exception just before it becomes unable to distinguish whole units, but if an application is supposed to be dealing with e.g. dollars and cents, that could be too late. - anyone
Integers, as was mentioned, are whole numbers. They can't store the point something, like .7, .42, and .007. If you need to store numbers that are not whole numbers, you need a different type of variable. You can use the double type or the float type. You set these types of variables up in exactly the same way: instead of using the word int, you type double or float. Like this:
float myFloat;
double myDouble;
(float is short for "floating point", and just means a number with a point something on the end.)
The difference between the two is in the size of the numbers that they can hold. For float, you can have up to 7 digits in your number. For doubles, you can have up to 16 digits. To be more precise, here's the official size:
float: 1.5 × 10^-45 to 3.4 × 10^38
double: 5.0 × 10^-324 to 1.7 × 10^308
float is a 32-bit number, and double is a 64-bit number.
Double click your new button to get at the code. Add the following three lines to your button code:
double myDouble;
myDouble = 0.007;
MessageBox.Show(myDouble.ToString());
Halt your program and return to the coding window. Change this line:
myDouble = 0.007;
to this:
myDouble = 12345678.1234567;
Run your programme and click your double button. The message box correctly displays the number. Add another number on the end, though, and C# will again round up or down. The moral is if you want accuracy, be careful of rounding!
Answered 2023-09-20 20:30:48
Answered 2023-09-20 20:30:48
The compiler won't let you divide a decimal literal by zero (CS0020), and the same is true of integral literals. However, if a runtime decimal value is divided by zero, you'll get an exception, not a compile error. - anyone
Answered 2023-09-20 20:30:48
decimal is actually stored in decimal format (as opposed to base 2, so it won't lose or round digits due to conversion between the two numeric systems); additionally, decimal has no concept of special values such as NaN, -0, ∞, or -∞. - anyone
This has been an interesting thread for me, as today we've just had a nasty little bug concerning decimal having less precision than a float.
In our C# code, we are reading numeric values from an Excel spreadsheet, converting them into a decimal, then sending this decimal back to a Service to save into a SQL Server database.
Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
decimal value = 0;
Decimal.TryParse(cellValue.ToString(), out value);
}
Now, for almost all of our Excel values, this worked beautifully. But for some very small Excel values, using decimal.TryParse lost the value completely. One such example is:
cellValue = 0.00006317592
Decimal.TryParse(cellValue.ToString(), out value); // would return 0
The solution, bizarrely, was to convert the Excel values into a double first, and then into a decimal:
Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
double valueDouble = 0;
double.TryParse(cellValue.ToString(), out valueDouble);
decimal value = (decimal) valueDouble;
…
}
Even though double has less precision than a decimal, this actually ensured small numbers would still be recognised. For some reason, double.TryParse was actually able to retrieve such small numbers, whereas decimal.TryParse would set them to zero.
Odd. Very odd.
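A plausible mechanism, matching the scientific-notation hint in the comment below (this assumes cell.Value2 came back as a double, which is what Excel interop returns for numeric cells): double.ToString() renders very small values in exponent form, and decimal.TryParse rejects exponents under its default NumberStyles, while double.TryParse accepts them.
using System;
using System.Globalization;
class ScientificNotationDemo
{
    static void Main()
    {
        double cellValue = 0.00006317592;
        string text = cellValue.ToString(CultureInfo.InvariantCulture); // "6.317592E-05"

        // decimal.TryParse defaults to NumberStyles.Number, which has no
        // AllowExponent flag, so parsing fails and the out value stays 0:
        decimal dec;
        Console.WriteLine(decimal.TryParse(text, out dec)); // False
        Console.WriteLine(dec);                             // 0

        // double.TryParse defaults to NumberStyles.Float | AllowThousands,
        // which does allow exponents -- hence the double-first workaround:
        double dbl;
        Console.WriteLine(double.TryParse(text, out dbl)); // True

        // Allowing exponents explicitly lets decimal parse it directly:
        Console.WriteLine(decimal.TryParse(text, NumberStyles.Float,
            CultureInfo.InvariantCulture, out dec)); // True
        Console.WriteLine(dec);                      // 0.00006317592
    }
}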
Answered 2023-09-20 20:30:48
decimal.Parse("0.00006317592") works -- you've got something else going on. Possibly scientific notation? - anyone
The Decimal, Double, and Float variable types differ in the way they store values. Precision is the main difference: float is a single-precision (32-bit) floating point data type, double is a double-precision (64-bit) floating point data type, and decimal is a 128-bit floating point data type.
Float - 32 bit (7 digits)
Double - 64 bit (15-16 digits)
Decimal - 128 bit (28-29 significant digits)
More about... the difference between Decimal, Float and Double
Answered 2023-09-20 20:30:48
For applications such as games and embedded systems where memory and performance are both critical, float is usually the numeric type of choice as it is faster and half the size of a double. Integers used to be the weapon of choice, but floating point performance has overtaken integer in modern processors. Decimal is right out!
Answered 2023-09-20 20:30:48
The problem with all these types is that a certain imprecision persists, and this problem can occur even with small decimal numbers, as in the following example:
Dim fMean As Double = 1.18
Dim fDelta As Double = 0.08
Dim fLimit As Double = 1.1
Dim bLower As Boolean

If fMean - fDelta < fLimit Then
    bLower = True
Else
    bLower = False
End If
Question: Which value does the bLower variable contain?
Answer: On a 32-bit machine, bLower contains TRUE!
If I replace Double with Decimal, bLower contains FALSE, which is the right answer.
In double, the problem is that fMean - fDelta = 1.09999999999..., which is lower than 1.1.
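The same comparison in C#, as a minimal sketch (the double's printed digits depend on the runtime's formatting):
using System;
class ImprecisionDemo
{
    static void Main()
    {
        // In double, 1.18 - 0.08 lands just below 1.1:
        Console.WriteLine(1.18 - 0.08 < 1.1); // True
        Console.WriteLine(1.18 - 0.08);       // 1.0999999999999999

        // In decimal, the subtraction is exact:
        Console.WriteLine(1.18m - 0.08m < 1.1m); // False
        Console.WriteLine(1.18m - 0.08m);        // 1.10
    }
}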
Caution: I think the same problem can certainly exist for other numbers, because Decimal is only a double with higher precision, and precision always has a limit.
In fact, Double, Float and Decimal all correspond to BINARY decimal in COBOL!
It is regrettable that other numeric types implemented in COBOL don't exist in .NET. For those who don't know COBOL, the following numeric types exist in COBOL:
BINARY or COMP, like float or double or decimal
PACKED-DECIMAL or COMP-3 (2 digits in 1 byte)
ZONED-DECIMAL (1 digit in 1 byte)
Answered 2023-09-20 20:30:48
In simple words:
/==========================================================================================
 Type      Bits    Has up to                   Approximate Range
/==========================================================================================
 float     32      7 digits                    ±1.5 × 10^(-45) to ±3.4 × 10^(38)
 double    64      15-16 digits                ±5.0 × 10^(-324) to ±1.7 × 10^(308)
 decimal   128     28-29 significant digits    ±1.0 × 10^(-28) to ±7.9 × 10^(28)
/==========================================================================================
You can read more here: Float, Double, and Decimal.
Answered 2023-09-20 20:30:48
Decimal is suitable for financial applications, and that's the main criterion to use when deciding between Decimal and Double. It's rare that Double precision isn't enough for scientific applications, for example (and Decimal is often unsuitable for scientific applications because of its limited range). - anyone
The main difference between each of these is the precision.
float is a 32-bit number
double is a 64-bit number
decimal is a 128-bit number
Answered 2023-09-20 20:30:48
Float:
It is a floating binary point type variable, which means it represents a number in its binary form. Float is a single-precision, 32-bit (6-9 significant figures) data type. It is used mostly in graphics libraries because of the very high demand for processing power, and also in situations where rounding errors are not very important.
Double:
It is also a floating binary point type variable, with double precision and a 64-bit size (15-17 significant figures). Doubles are probably the most generally used data type for real values, except in financial applications and places where high accuracy is desired.
Decimal:
It is a floating decimal point type variable, which means it represents a number using decimal digits (0-9). It uses 128 bits (28-29 significant figures) for storing and representing data; therefore, it has more precision than float and double. Decimals are mostly used in financial applications because of their high precision and because they make it easy to avoid rounding errors.
Example:
using System;
public class GFG {
    static public void Main()
    {
        double d = 0.42e2; // double data type
        Console.WriteLine(d); // output: 42

        float f = 134.45E-2f; // float data type
        Console.WriteLine(f); // output: 1.3445

        decimal m = 1.5E6m; // decimal data type
        Console.WriteLine(m); // output: 1500000
    }
}
No. of bits used:
float uses 32 bits, double uses 64 bits, and decimal uses 128 bits.
Range of values:
The float value ranges from approximately ±1.5e-45 to ±3.4e38.
The double value ranges from approximately ±5.0e-324 to ±1.7e308.
The decimal value ranges from approximately ±1.0e-28 to ±7.9e28.
Precision:
float has 6-9 significant figures, double has 15-17, and decimal has 28-29 significant figures.
Accuracy:
decimal stores base-10 digits exactly, so values such as 0.1 are represented without error; float and double can only approximate such values.
Answered 2023-09-20 20:30:48
You must specify the values as:
Decimal dec = 12M/6;
Double dbl = 11D/6;
float fl = 15F/6;
and check the results.
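For reference, here is a runnable version of that check; the output comments show what each division produces (the double's digit count can vary slightly across runtimes):
using System;
class DivisionCheck
{
    static void Main()
    {
        Decimal dec = 12M / 6;
        Double dbl = 11D / 6;
        float fl = 15F / 6;

        Console.WriteLine(dec); // 2 (exact)
        Console.WriteLine(dbl); // 1.8333333333333333 (approximation)
        Console.WriteLine(fl);  // 2.5 (exact: 2.5 is representable in binary)
    }
}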
Bytes occupied:
Float - 4
Double - 8
Decimal - 16
Answered 2023-09-20 20:30:48