Sunday, June 23, 2024 20:36

## Real types error calculations

Calculations with real floating-point data types can show surprising behavior, because representing a given real number often loses accuracy. The reason is that some real numbers cannot be represented exactly as a finite sum of negative powers of the number 2. Examples of numbers that have no exact representation in the float and double types are 0.1, 1/3, 2/7 and others. Here is sample C# code that demonstrates calculation errors with floating-point numbers:
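The original listing is not included here; the following minimal sketch reconstructs the two cases discussed below. The exact digits printed depend on the .NET runtime's default formatting, so the comments give approximate values:

```csharp
using System;

class RealTypeErrors
{
    static void Main()
    {
        // First example: 0.1 has no exact binary representation
        float f = 0.1f;
        Console.WriteLine(f);  // looks like 0.1 - rounding hides the error
        double d = f;          // widening to double exposes the stored value
        Console.WriteLine(d);  // approximately 0.10000000149, not exactly 0.1

        // Second example: 1/3 is rounded when stored in a float
        float oneThird = 1.0f / 3;
        Console.WriteLine(oneThird);  // approximately 0.3333333
        double oneThirdD = oneThird;
        Console.WriteLine(oneThirdD); // approximately 0.33333334, not 1/3
    }
}
```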

The reason for the unexpected result in the first example is that the number 0.1 (i.e. 1/10) has no exact representation in the binary floating-point format, so an approximate value is stored. When the float value is printed directly, the result looks correct because of the rounding performed when the number is converted to a string for printing on the console. When the value is widened from float to double, the approximation error is no longer hidden by that rounding, and we can observe errors after the eighth significant digit.

In the second example the number 1/3 has no exact representation and is rounded to a value very close to 0.3333333. The error becomes clearly visible when the value is stored in a double, which preserves more significant digits.

Both examples show that floating-point arithmetic can introduce errors and is therefore not appropriate for precise financial calculations. Fortunately, C# also supports decimal precision arithmetic, in which numbers such as 0.1 are stored in memory without rounding. As a rule, remember that not all real numbers have an exact representation in the float and double types. For example, the number 0.1 is stored in the float type rounded to approximately 0.100000001.

C# supports so-called decimal floating-point arithmetic, where numbers are represented in the decimal numeral system rather than the binary one. Thus the decimal floating-point type in C# loses no accuracy when storing and processing decimal fractions such as 0.1 (numbers like 1/3, which have no finite decimal representation, are still rounded), so you avoid the calculation errors shown above for most practical purposes.

The C# data type for real numbers with decimal precision is the 128-bit decimal type. In situations where the accuracy of real numbers is crucial, such as financial software, always use decimal variables.
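The difference between binary and decimal arithmetic can be illustrated by summing 0.1 ten times; this sketch compares double and decimal:

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // Binary floating-point: 0.1 is stored approximately,
        // so ten additions accumulate a small error
        double doubleSum = 0.0;
        for (int i = 0; i < 10; i++)
        {
            doubleSum += 0.1;
        }
        Console.WriteLine(doubleSum == 1.0); // False

        // Decimal floating-point: 0.1m is stored exactly
        decimal decimalSum = 0.0m;
        for (int i = 0; i < 10; i++)
        {
            decimalSum += 0.1m;
        }
        Console.WriteLine(decimalSum == 1.0m); // True
    }
}
```

The decimal type trades speed and range for exact decimal fractions, which is exactly the trade-off financial calculations want.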


EXERCISES
1. Write a program which correctly compares two real numbers with a precision of at least 0.000001.

Solution

Two floating-point values are considered equal if the absolute value of their difference is less than some predefined precision (e.g. 0.000001):
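A minimal solution along these lines (the sample values are illustrative):

```csharp
using System;

class CompareRealNumbers
{
    static void Main()
    {
        const double Eps = 0.000001; // required comparison precision

        double a = 5.3;
        double b = 5.3000001;

        // Numbers are "equal" if their absolute difference
        // is smaller than the chosen precision
        bool equal = Math.Abs(a - b) < Eps;
        Console.WriteLine(equal); // True

        // A difference larger than Eps is reported as not equal
        Console.WriteLine(Math.Abs(5.3 - 5.31) < Eps); // False
    }
}
```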