Why does bitwise "not 1" equal -2?
There are 2 integers between `1` and `-2`: `0` and `-1`.
`1` in binary is `00000000000000000000000000000001`
`0` in binary is `00000000000000000000000000000000`
`-1` in binary is `11111111111111111111111111111111`
`-2` in binary is `11111111111111111111111111111110`
("binary" being 2's complement, in the case of a bitwise not `~`)
As you can see, it's not very surprising that `~1` equals `-2`, since `~0` equals `-1`.
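A quick sketch of the same point in the console (not from the answer above):

```js
// Bitwise NOT coerces its operand to a signed 32-bit integer and flips every bit
console.log(~1); // -2
console.log(~0); // -1
// The bit pattern of ~1, viewed as unsigned via >>> 0:
console.log((~1 >>> 0).toString(2)); // "11111111111111111111111111111110"
```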
As @Derek explained, these bitwise operators treat their operands as a sequence of 32 bits. `parseInt`, on the other hand, does not, which is why you get different results.
Here's a more complete demo:
```js
for (var i = 5; i >= -5; i--) {
  console.log('Decimal: ' + pad(i, 3, ' ') + ' | Binary: ' + bin(i));
  if (i === 0)
    console.log('Decimal:  -0 | Binary: ' + bin(-0)); // There is no `-0`
}

function pad(num, length, char) {
  var out = num.toString();
  while (out.length < length)
    out = char + out;
  return out;
}

function bin(bin) {
  return pad((bin >>> 0).toString(2), 32, '0');
}
```
```
100 -4
101 -3
110 -2
111 -1
000  0
001  1
010  2
011  3
```
A simple way to remember how two's complement notation works is to imagine it's just normal binary, except its last bit corresponds to the same value negated. In my contrived three-bit two's complement, the first bit is `1`, the second is `2`, and the third is `-4` (note the minus).
So as you can see, a bitwise not in two's complement is `-(n + 1)`. Surprisingly enough, applying it to a number twice gives the same number:

`-(-(n + 1) + 1) = (n + 1) - 1 = n`
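The identity above can be checked directly for any number, a minimal sketch:

```js
var n = 42;
console.log(~n);              // -43, i.e. -(n + 1)
console.log(~~n);             // 42, applying NOT twice gives n back
console.log(-(-(n + 1) + 1)); // 42, the arithmetic form of the same thing
```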
It is obvious when talking bitwise, but not so much in its arithmetical effect.
Several more observations that make remembering how it works a bit easier:
Notice how negative values ascend. Quite the same rules, with just 0 and 1 swapped. Bitwise NOTted, if you will.
```
100 -4   011   - I bitwise NOTted this half
101 -3   010
110 -2   001
111 -1   000
-----------    - Note the symmetry of the last column
000  0   000
001  1   001
010  2   010
011  3   011   - This one's left as-is
```
By cycling that list of binaries by half of the total amount of numbers in there, you get a typical sequence of ascending binary numbers starting at zero.
```
- 100 -4 \
- 101 -3 |
- 110 -2 |-\   - these are in effect in signed types
- 111 -1 /  |
*************|
  000  0    |
  001  1    |
  010  2    |
  011  3    |
*************|
+ 100  4 \  |
+ 101  5 |-/   - these are in effect in unsigned types
+ 110  6 |
+ 111  7 /
```
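The two interpretations in that diagram can be mimicked in JS; a sketch with hypothetical helpers `signed3` and `unsigned3` (not from the answer) that read the same 3-bit pattern both ways:

```js
// Signed interpretation: shift bit 2 up to the sign bit, then sign-extend back down
function signed3(bits) { return ((bits & 0b111) << 29) >> 29; }
// Unsigned interpretation: just mask to 3 bits
function unsigned3(bits) { return bits & 0b111; }

console.log(signed3(0b100), unsigned3(0b100)); // -4 4
console.log(signed3(0b111), unsigned3(0b111)); // -1 7
console.log(signed3(0b011), unsigned3(0b011)); // 3 3
```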
In computer science it's all about interpretation. For a computer everything is a sequence of bits that can be interpreted in many ways. For example `0100001` can be either the number 33 or `!` (that's how ASCII maps this bit sequence).
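A quick sketch of that dual interpretation in JS:

```js
console.log(0b0100001);                      // 33, the numeric interpretation
console.log(String.fromCharCode(0b0100001)); // "!", the ASCII interpretation
```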
Everything is a bit sequence for a computer, no matter if you see it as a digit, number, letter, text, Word document, pixel on your screen, displayed image or a JPG file on your hard drive. If you know how to interpret that bit sequence, it may be turned into something meaningful for a human, but in the RAM and CPU there are only bits.
So when you want to store a number in a computer, you have to encode it. For non-negative numbers it's pretty simple, you just have to use binary representation. But how about negative numbers?
You can use an encoding called two's complement. In this encoding you have to decide how many bits each number will have (for example 8 bits). The most significant bit is reserved as a sign bit: if it's `0`, the number is non-negative; otherwise it's negative. The other 7 bits contain the actual number.
`00000000` means zero, just like for unsigned numbers. `00000001` is one, `00000010` is two and so on. The largest positive number that you can store in 8 bits in two's complement is 127 (`01111111`).
The next binary number (`10000000`) is -128. It may seem strange, but in a second I'll explain why it makes sense. `10000001` is -127, `10000010` is -126 and so on. `11111111` is -1.
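You can watch JS apply this exact 8-bit interpretation with a typed array; a minimal sketch, since `Int8Array` stores 8-bit two's complement values:

```js
var i8 = new Int8Array(1);
i8[0] = 0b01111111; console.log(i8[0]); // 127, the largest positive value
i8[0] = 0b10000000; console.log(i8[0]); // -128, the next pattern wraps around
i8[0] = 0b10000001; console.log(i8[0]); // -127
i8[0] = 0b11111111; console.log(i8[0]); // -1
```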
Why do we use such strange encoding? Because of its interesting properties. Specifically, while performing addition and subtraction the CPU doesn't have to know that it's a signed number stored as two's complement. It can interpret both numbers as unsigned, add them together and the result will be correct.
Let's try this: -5 + 5. -5 is `11111011`, 5 is `00000101`.
```
  11111011
+ 00000101
----------
 100000000
```
The result is 9 bits long. The most significant bit overflows and we're left with `00000000`, which is 0. It seems to work.
Another example: 23 + -7. 23 is `00010111`, -7 is `11111001`.
```
  00010111
+ 11111001
----------
 100010000
```
Again, the MSB is lost and we get `00010000` == 16. It works!
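Both additions can be replayed in JS by adding the raw 8-bit patterns and masking off the overflow bit; a sketch:

```js
// -5 + 5: add the raw bytes, drop everything above bit 7
console.log((0b11111011 + 0b00000101) & 0xFF); // 0
// 23 + -7: same trick, the lost MSB is the carry out of the byte
console.log((0b00010111 + 0b11111001) & 0xFF); // 16
```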
That's how two's complement works. Computers use it internally to store signed integers.
You may have noticed that in two's complement, when you negate the bits of a number `N`, it turns into `-N-1`. Examples:
```
   0 negated == ~00000000 == 11111111 == -1
   1 negated == ~00000001 == 11111110 == -2
 127 negated == ~01111111 == 10000000 == -128
-128 negated == ~10000000 == 01111111 == 127
```
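The same table can be replayed with an `Int8Array`, so JS truncates the result to 8 bits for us (a sketch, not part of the answer):

```js
var i8 = new Int8Array(1);
[0, 1, 127, -128].forEach(function (n) {
  i8[0] = ~n; // flip all bits; the upper 24 are cut off by the 8-bit array
  console.log(n + ' negated == ' + i8[0]);
});
// 0 negated == -1, 1 negated == -2, 127 negated == -128, -128 negated == 127
```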
This is exactly what you have observed: JS is pretending it's using two's complement. So why does `parseInt('11111111111111111111111111111110', 2)` give 4294967294? Well, because it's only pretending.
Internally JS always uses a floating point number representation. It works in a completely different way than two's complement, and negating its bits is mostly useless, so JS pretends the number is two's complement, negates its bits, and converts it back to the floating point representation. This does not happen with `parseInt`, so you get 4294967294, even though the binary value is seemingly the same.
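One way to see the "pretending", sketched below: `| 0` forces the signed 32-bit interpretation that the bitwise operators use, while `>>> 0` forces the unsigned one that matches `parseInt`'s result:

```js
var s = '11111111111111111111111111111110';
console.log(parseInt(s, 2));               // 4294967294, parsed as a plain magnitude
console.log(parseInt(s, 2) | 0);           // -2, the same bits read as signed 32-bit
console.log((-2 >>> 0).toString(2) === s); // true, round trip back to the string
```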