
Precision is the main difference: double is a double-precision (64-bit) binary floating-point type, while decimal is a 128-bit floating-point type that stores its values in base 10.



Double - 64-bit (15-16 significant digits)
Decimal - 128-bit (28-29 significant digits)
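
To make the digit counts concrete, here is a minimal C# sketch (class and variable names are just for illustration) that stores the same long value in both types and prints what survives:

using System;

class DigitPrecisionSketch
{
    static void Main()
    {
        // A literal with roughly 27 significant digits.
        // double keeps only about 15-16 of them; decimal keeps up to 28-29.
        double d  = 1.23456789012345678901234567;
        decimal m = 1.23456789012345678901234567m;

        Console.WriteLine(d.ToString("G17")); // digits beyond ~16 are lost / rounded
        Console.WriteLine(m);                 // 1.23456789012345678901234567 (all digits kept)
    }
}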



So decimal has much higher precision and is usually used in monetary (financial) applications that require a high degree of accuracy. Performance-wise, however, decimal is slower than the double and float types. Double is probably the most commonly used type for real values, except when handling money. More about: Decimal vs Double vs Float
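
The money point is easiest to see with repeated addition. Below is a small C# sketch (again, names are only illustrative): double cannot represent 0.1 exactly in binary, so summing ten cents ten times drifts off, while decimal stores 0.1 exactly in base 10 and lands on 1.0.

using System;

class MoneySketch
{
    static void Main()
    {
        // Sum ten cents ten times with double: 0.1 has no exact binary
        // representation, so a tiny error accumulates.
        double dTotal = 0.0;
        for (int i = 0; i < 10; i++) dTotal += 0.1;
        Console.WriteLine(dTotal == 1.0);        // False on typical hardware
        Console.WriteLine(dTotal.ToString("R")); // something like 0.9999999999999999

        // The same sum with decimal: 0.1m is stored exactly,
        // so the total is exactly 1.0 - the behaviour wanted for money.
        decimal mTotal = 0.0m;
        for (int i = 0; i < 10; i++) mTotal += 0.1m;
        Console.WriteLine(mTotal == 1.0m);       // True
        Console.WriteLine(mTotal);               // 1.0
    }
}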


Dell