Why is a = (a+b) - (b=a) a bad choice for swapping two integers?


No, this is not acceptable. This code invokes undefined behavior, because the modification of b is not sequenced relative to its use. In the expression

a=(a+b)-(b=a);  

it is not certain whether b gets modified first or its value gets used in the expression (a+b), because there is no sequence point between the two.
See what the standard says:

C11: 6.5 Expressions:

If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined. If there are multiple allowable orderings of the subexpressions of an expression, the behavior is undefined if such an unsequenced side effect occurs in any of the orderings.84)1

Read c-faq question 3.8 and this answer for a more detailed explanation of sequence points and undefined behavior.


1. Emphasis is mine.


My Question is - Is this an acceptable solution to swap two integers?

Acceptable to whom? If you're asking whether it is acceptable to me: that would not get past any code review I was in, believe me.

why is a=(a+b)-(b=a) a bad choice for swapping two integers?

For the following reasons:

1) As you note, there is no guarantee in C that it actually does that. It could do anything.

2) Suppose for the sake of argument that it really does swap two integers, as it does in C#. (C# guarantees that side effects happen left-to-right.) The code would still be unacceptable because its meaning is completely non-obvious! Code shouldn't be a bunch of clever tricks. Write code for the person coming after you who has to read and understand it.

3) Again, suppose that it works. The code is still unacceptable because this is just plain false:

I stumbled into this code for swapping two integers without using a Temporary variable or the use of bit-wise operators.

That's simply false. This trick uses a temporary variable to store the computation of a+b. The variable is generated by the compiler on your behalf and not given a name, but it's there. If the goal is to eliminate temporaries, this makes it worse, not better! And why would you want to eliminate temporaries in the first place? They're cheap!

4) This only works for integers. Lots of things need to be swapped other than integers.

In short, spend your time concentrating on writing code that is obviously correct, rather than trying to come up with clever tricks that actually make things worse.


There are at least two problems with a=(a+b)-(b=a).

One you mention yourself: the lack of sequence points means that the behavior is undefined. As such, anything at all could happen. For example, there is no guarantee of which is evaluated first: a+b or b=a. The compiler may choose to generate code for the assignment first, or do something completely different.

Another problem is that overflow of signed arithmetic is undefined behavior. If a+b overflows, there is no guarantee about the result; an exception might even be thrown.