What does "unsigned" mean anyway?
(Note: I'm not formally trained in computer science... YET. However, I think my understanding as detailed below is at least accurate at a basic conceptual level.)
To simplify things a bit, it means that the type can only take on positive (well, non-negative) values. The unsigned type effectively gains an 'extra' value bit relative to the signed version, so the maximum of a signed type is about half that of the corresponding unsigned type.

Suppose you have a four-bit integer (max value is 1111 in binary, or 2^4 - 1 = 15 in decimal). If you want it to take on both negative and positive values (that is, to be signed), one of those four bits needs to represent the sign. Which bit it is depends on the implementation details, I suppose, but let's say it's the first one, and that 0 indicates negative and 1 indicates positive. Now, 1111 would represent +7 (1000 to 1111 -> 0 to +7) and 0111 would represent -7 (0000 to 0111 -> -0 to -7), since a signed four-bit type is essentially a three-bit number with an additional bit holding the sign. The 0000/1000 overlap here (two patterns for zero) is not a problem in more efficient implementations, which is why you see ranges like [-128, 127] for a signed 8-bit number. However, I think this sign-and-magnitude implementation is conceptually the easiest.
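If it helps to see that in code, here's a minimal C sketch of the four-bit sign-and-magnitude idea (decode_sign_magnitude4 is just a name I made up for illustration, and it follows the 1-means-positive convention I picked above):

```c
#include <stdio.h>

/* Hypothetical helper, purely for illustration: decode a 4-bit value using
 * the sign-and-magnitude convention above (top bit 1 = positive, 0 = negative,
 * low three bits = magnitude). */
static int decode_sign_magnitude4(unsigned bits)
{
    int magnitude = bits & 0x7;          /* low three bits: 0..7 */
    int positive  = (bits >> 3) & 0x1;   /* top bit: the sign    */
    return positive ? magnitude : -magnitude;
}

int main(void)
{
    printf("%d\n", decode_sign_magnitude4(0xF)); /* 1111 -> +7 */
    printf("%d\n", decode_sign_magnitude4(0x7)); /* 0111 -> -7 */
    printf("%d\n", decode_sign_magnitude4(0x8)); /* 1000 -> +0 */
    printf("%d\n", decode_sign_magnitude4(0x0)); /* 0000 -> -0, i.e. still 0 */
    return 0;
}
```

Note that 0xF (1111) and 0x7 (0111) decode to +7 and -7, while 0x8 and 0x0 both come out as zero, which is exactly the overlap mentioned above.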
There are more computationally efficient representations, such as two's complement, where the negative of an N-bit binary number I is represented as 2^N - I. (Two's complement is the better choice since that representation is compatible with the standard binary arithmetic operators and can represent values from -2^(N-1) to +2^(N-1) - 1; for an 8-bit signed integer, that's -128 to 127.)
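And here's a quick sketch of the two's-complement side. C defines conversion to an unsigned integer type as taking the value modulo 2^N, so casting a negative 8-bit value to its unsigned counterpart exposes the 2^N - I bit pattern directly (this just uses the fixed-width types from <stdint.h>):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* With N = 8, the two's-complement bit pattern for -I equals 2^N - I. */
    int8_t neg_one = -1;
    int8_t lowest  = -128;

    printf("-1   -> %u (= 256 - 1)\n",   (unsigned)(uint8_t)neg_one); /* 255 */
    printf("-128 -> %u (= 256 - 128)\n", (unsigned)(uint8_t)lowest);  /* 128 */

    /* The full 8-bit two's-complement range: */
    printf("range: %d to %d\n", INT8_MIN, INT8_MAX); /* -128 to 127 */
    return 0;
}
```

Because two's complement has only one zero pattern, the negative side gets the extra value, hence -128 to 127 rather than -127 to 127.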