Observe the code snippet:
int a, b;
a = 10;
b = 20;
Console.WriteLine("Before Swapping a=" + a + " b=" + b);
a = a + b;
b = a - b;
a = a - b;
Console.WriteLine("After Swapping a=" + a + " b=" + b);
Console.Read();
I wrote the above code to swap the values of two variables without using a
third variable, in order to consume less memory.
I'd like to discuss the above: does it really consume less memory, and does
it improve performance?
I think "debate" might be pushing things a bit. I don't think there's
that much worth debating, or even much controversy available in the
question.
The technique you show is not unheard of, though I think typically one
would use xor rather than addition and subtraction.
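For reference, the xor variant looks like this (my illustration, not code from the question); note it only applies to integer types, and it silently fails to swap if both operands refer to the same storage location, since a ^ a is 0:

```csharp
using System;

int a = 10, b = 20;

// XOR swap: no temporary variable, integer types only.
// Caveat: if both operands alias the same location, the first
// xor zeroes the value instead of swapping.
a ^= b;
b ^= a;
a ^= b;

Console.WriteLine("a=" + a + " b=" + b);  // a=20 b=10
```

Like the addition/subtraction version, this trades clarity for the absence of a named temporary, with the same questions about whether that actually saves anything.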
Does it consume less memory? I guess that depends on what you mean by
"memory". In an optimized build, the JIT compiler is (I think) very
likely to implement a conventional swap without an actual local variable;
instead, the temporary storage may wind up in a register, or (if at least
one of the operands has already been optimized into a register) the
compiler might even emit an XCHG instruction (on x86...other platforms
might have a similar instruction).
So I'd say, no...your version won't _necessarily_ use less memory. But it
might (and on an unoptimized build, probably would), depending on a
variety of factors.
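For comparison, here is the conventional swap being discussed (a minimal sketch); the point above is that in an optimized build the JIT will typically keep the temporary in a register, or elide it entirely, so no extra memory need actually be consumed:

```csharp
using System;

int a = 10, b = 20;

// Conventional swap: the named local 'temp' exists in the source,
// but an optimizing JIT is free to keep it in a register or emit
// an exchange instruction, so it need not occupy memory at all.
int temp = a;
a = b;
b = temp;

Console.WriteLine("a=" + a + " b=" + b);  // a=20 b=10
```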
As for performance, that's even harder to predict. Lots of things can
affect the performance of various implementations of data swapping.
However, it would be highly unlikely for the performance difference, if
any, to be significant enough to justify writing the obfuscated version.
And if anything, optimized output of a conventional swap might actually be
faster.
Also, consider that the conventional swap, using a temporary storage
location, is a pattern that an optimizing compiler will probably recognize,
while the alternatives are not. So, once you write a swap using one of the
alternatives, you're probably locked into whatever performance
characteristics it happens to have. With the conventional code, on the
other hand, you automatically benefit from whatever improvements the
compiler or the platform can provide as they evolve.
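As an aside, if you're on C# 7.0 or later, tuple deconstruction states the intent even more directly and leaves the implementation entirely to the compiler (again, my sketch, not from the original question):

```csharp
using System;

int a = 10, b = 20;

// Tuple deconstruction: the clearest statement of "swap these two".
// The compiler chooses the implementation, so it can apply whatever
// optimization the platform supports.
(a, b) = (b, a);

Console.WriteLine("a=" + a + " b=" + b);  // a=20 b=10
```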
Frankly, for swapping values, none of this is likely to matter much at
all, including the question of allowing the compiler to improve over
time. It'd be pretty unusual code where the act of swapping two values is
a large proportion of the total execution time anyway.
But there's an important lesson to generalize here: the kind of
micro-optimization you're asking about is almost never the right way to
write code. It makes the code harder to maintain and makes it more
difficult (or impossible) for the compiler to generate truly optimized
code.
Remember Knuth's classic statement: "...premature optimization is the root
of all evil". You should write your code in the clearest, simplest, most
easily provably-correct way first. If and when you find a performance
problem, _then_ you can look at potential optimizations, and in that
context you can easily determine empirically whether one version of the
code works better than the other (whether that be with respect to memory
usage, speed of execution, both, or some other metric altogether).
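To make the "measure it" point concrete, a rough sketch of such an empirical check might look like the following (iteration count chosen arbitrarily; a naive Stopwatch loop like this is easily distorted by JIT and dead-code effects, so a proper benchmarking tool would be preferable for real decisions):

```csharp
using System;
using System.Diagnostics;

const int iterations = 10_000_000;
int a = 10, b = 20;

// Time the conventional swap.
var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    int temp = a; a = b; b = temp;
}
sw.Stop();
Console.WriteLine("conventional: " + sw.ElapsedMilliseconds + " ms");

// Time the arithmetic swap.
sw.Restart();
for (int i = 0; i < iterations; i++)
{
    a = a + b; b = a - b; a = a - b;
}
sw.Stop();
Console.WriteLine("arithmetic:   " + sw.ElapsedMilliseconds + " ms");
```

Whatever the numbers turn out to be on a given machine, the measurement, not intuition, is what should drive the choice.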
Pete