- Signed numbers make overflows more visible in the large amount of code that assumes a variable is >= 0: if you ever see a negative value in such a variable, you immediately know it overflowed. An unsigned wraparound just produces another huge positive number (see the first snippet after this list).
- Having 32 unsigned bits of range is great until you suddenly need to use the value in a signed expression, because you misjudged the problem or the requirements changed...
- Programmers are generally comfortable with signed arithmetic but much less familiar with the implicit conversions that happen in mixed signed/unsigned expressions (see the second snippet after this list).
- Despite halving the positive range, giving up that one bit almost never makes a practical difference: if a value overflows at 31 bits, chances are it would also overflow at 32.
- Having to think about which integer type to use while solving higher-level problems is a waste of effort and time.
- With 64-bit signed arithmetic, which has become more common since x64, you're realistically not going to overflow anyway: a signed 64-bit counter incremented a billion times per second would take roughly 292 years to overflow.
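To make the first point concrete, here is a tiny C++ sketch (the names and values are mine, purely illustrative) of the same underflow bug with each type:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Both counters "should" never go below zero.
    uint32_t items_u = 2;
    int32_t  items_s = 2;

    // The same buggy over-decrement drives both below zero.
    items_u -= 3;
    items_s -= 3;

    printf("unsigned: %u\n", items_u);  // 4294967295 -- wraps to a huge,
                                        // plausible-looking count
    printf("signed:   %d\n", items_s);  // -1 -- instantly recognizable as a bug
}
```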
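And the mixed signed/unsigned pitfall from the second and third points: in C and C++, the usual arithmetic conversions silently convert the signed operand to unsigned. Again a purely illustrative sketch:

```cpp
#include <cstdio>

int main() {
    int      balance = -5;
    unsigned limit   = 10;

    // balance is implicitly converted to unsigned (4294967291), so the
    // comparison is really 4294967291 < 10, which is false.
    if (balance < limit)
        printf("under the limit\n");
    else
        printf("over the limit?!\n");  // this branch runs

    // The same conversion applies to arithmetic: the result type is unsigned,
    // so a "negative" result would silently wrap to a huge positive number.
    printf("sum = %u\n", limit + balance);  // 5, via modular wraparound
}
```

GCC and Clang flag the comparison with `-Wsign-compare` (enabled by `-Wall` in C++), but the arithmetic conversion only shows up under the rarely-enabled `-Wsign-conversion`.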
I don't recommend ever using unsigned except in bit-manipulation code, or where you otherwise explicitly need unsigned semantics. For the reasons above, the fixed-point arithmetic in Clausewitz is 32/64-bit signed only. Many programming languages also lack unsigned types entirely, because their designers know they're a noob trap.
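Clausewitz's actual fixed-point code isn't public, so purely as an illustration, here is a minimal sketch of what signed fixed-point in that spirit can look like. The 64-bit raw value, the 16 fractional bits, and the `Fixed` name are all assumptions of mine, not the engine's:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical signed fixed-point type, purely illustrative -- the layout
// (64-bit raw value, 16 fractional bits) is an assumption, not Clausewitz's.
struct Fixed {
    int64_t raw;  // stores value * 2^16

    static constexpr int SHIFT = 16;
    static constexpr int64_t ONE = int64_t(1) << SHIFT;

    static Fixed fromInt(int64_t v) { return { v * ONE }; }
    double toDouble() const { return double(raw) / double(ONE); }

    Fixed operator+(Fixed o) const { return { raw + o.raw }; }
    Fixed operator-(Fixed o) const { return { raw - o.raw }; }

    // Widen the intermediate to 128 bits so the multiply cannot overflow
    // 64 bits. __int128 is a GCC/Clang extension.
    Fixed operator*(Fixed o) const {
        return { int64_t((__int128)raw * o.raw / ONE) };
    }
    Fixed operator/(Fixed o) const {
        return { int64_t((__int128)raw * ONE / o.raw) };
    }
};

int main() {
    Fixed a = Fixed::fromInt(3);
    Fixed b = Fixed::fromInt(-2);
    // Deterministic across platforms, and a bogus result shows up as a
    // visibly wrong negative number -- the point of keeping it signed.
    printf("%f\n", (a * b).toDouble());  // prints -6.000000
}
```

Doing the intermediate multiply at twice the width is the standard way to keep fixed-point multiplication from overflowing, and staying signed means a bug surfaces as an obviously wrong negative value, which is the whole argument of the list above.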