Is integer division always truncated?

  • Thread starter: Teis Draiby
double a = 99 * (double)1/50;

Instead of casting one of the operands, couldn't I just add a decimal point?
Can I then be sure that the operand is always interpreted as a double rather
than, e.g., a float?
Like this:

double a = 99 * 1.0 / 50;
// a == 1.98
 
Teis Draiby said:
Instead of casting one of the operands, couldn't I just add a decimal point?
Can I then be sure that the operand is always interpreted as a double rather
than, e.g., a float?

Yes, that's fine. If you don't have a suffix, a real literal is always
taken to be a double. Of course, you could always put 1.0d in there
instead to make it clear.

Using a cast is handy when the operands are variables rather than
literals.
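
For what it's worth, here is a minimal C# sketch of both points (the class name and values are made up for illustration): an unsuffixed real literal is a double, the d suffix only makes that explicit, and when the operands are variables you cast one of them instead.

// LiteralDemo.cs -- a sketch, not from the thread
class LiteralDemo
{
    static void Main()
    {
        double a = 99 * 1.0 / 50;    // 1.0 has no suffix, so it is a double; a == 1.98
        double b = 99 * 1.0d / 50;   // the d suffix just spells the same type out explicitly
        int x = 99, y = 50;
        double c = (double)x / y;    // variables have no suffix to add, so cast one operand; c == 1.98
        System.Console.WriteLine("{0} {1} {2}", a, b, c);
    }
}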
 
George, you're mixing apples and oranges. .99999... is a decimal character
representation of a floating-point number. Integers are never stored or
handled on the chip with any fractional portion. Don't forget that floating-point
numbers are just the best attempt to fit numbers into the registers,
and are not necessarily accurate to the last decimal character input or output.
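
To make that distinction concrete, here is a minimal C# sketch (values picked arbitrarily): integer division discards the fractional part exactly, while a double result is only an approximation of the decimal you typed.

// IntDivDemo.cs -- sketch: int/int truncates with no rounding involved
class IntDivDemo
{
    static void Main()
    {
        System.Console.WriteLine(99 / 50);    // 1: the fraction is simply discarded
        System.Console.WriteLine(-99 / 50);   // -1: truncation is toward zero, not toward the largest integer below
        System.Console.WriteLine(99 / 50.0);  // 1.98: one double operand switches to floating-point division
    }
}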
 
Yes, well, I tried this:

/* inttest.c */
#include <stdio.h>
int main(void)
{
    /* the literal rounds up to exactly 1.0 as a double, so the cast gives 1 */
    printf("n is %d\n", (int).99999999999999999);
    return 0;
}

The answer here is 1.

This:

/* inttest.c */
#include <stdio.h>
int main(void)
{
    /* this slightly shorter literal is representable just below 1.0, so the cast gives 0 */
    printf("n is %d\n", (int).9999999999999999);
    return 0;
}

The answer here is 0.

That was my point. The OP's assumption that integer/integer division is always truncated to the largest integer less than the value isn't right. The finiteness of the "registers" makes that untrue.
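
If it helps separate the two effects, here is a minimal C# sketch (assuming the same literals as the C programs above): the cast examples first round a typed decimal literal to the nearest double and only then convert to int, whereas int/int division never involves a double at all.

// RoundingDemo.cs -- sketch: rounding of the literal vs. truncation of integer division
class RoundingDemo
{
    static void Main()
    {
        // The longer literal is closer to 1.0 than to any double below it, so the cast yields 1.
        System.Console.WriteLine((int)0.99999999999999999);
        // The shorter literal still maps to a double below 1.0, so truncation yields 0.
        System.Console.WriteLine((int)0.9999999999999999);
        // Pure integer division does not go through a double at all.
        System.Console.WriteLine(99999999 / 100000000);   // 0
    }
}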
 