we talked last week about what can go wrong with floating point numbers, so -- what can go wrong when using integers?
so far I have:
* 32 bit integers are smaller than you think (unsigned ones only go up to about 4 billion, and signed ones only to about 2 billion!)
* overflow (tiny sketch of these first two right after the list)
* sometimes you need to switch the byte order (endianness -- there's a sketch of this at the very end)
* ?? (maybe something about shift / bitwise operations? in C, shifting by the type's width or more is undefined behaviour, but I'm not sure what else goes wrong in practice)
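
to make the first two bullets concrete, here's a tiny made-up C sketch (nothing here is from a real incident, and it assumes `int` is 32 bits, which it is on pretty much every mainstream platform):

```c
/* made-up sketch of the first two bullets (limits + overflow), not a real
   incident. assumes int is 32 bits. */
#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned int views = UINT_MAX;  /* ~4.3 billion: the unsigned ceiling */
    int balance = INT_MAX;          /* ~2.1 billion: the signed ceiling */

    printf("UINT_MAX     = %u\n", views);    /* 4294967295 */
    printf("INT_MAX      = %d\n", balance);  /* 2147483647 */

    /* unsigned overflow is well-defined: it silently wraps around to 0 */
    printf("UINT_MAX + 1 = %u\n", views + 1u);

    /* signed overflow is undefined behaviour in C, so the addition is done
       as unsigned here and converted back; on a typical machine you get
       -2147483648, which is how "add one more" turns a counter negative */
    printf("INT_MAX + 1  ~ %d\n", (int)((unsigned)balance + 1u));
    return 0;
}
```

(the signed case is the scary one: compilers are allowed to assume signed overflow never happens, so overflow checks written the "obvious" way can get optimized away.)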
I'd especially love real-world examples of things that have gone wrong, if you have them!
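
and for the byte-order bullet, here's roughly the kind of confusion I mean -- another made-up sketch, assuming a POSIX-ish system (for `arpa/inet.h` / `ntohl`) and a little-endian machine for the "backwards" output:

```c
/* made-up sketch for byte order: reading a 4-byte length field that arrived
   over the network (network order = big-endian) on what is probably a
   little-endian machine. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

int main(void) {
    /* pretend these 4 bytes came off the wire: 0x00 0x00 0x01 0x2c = 300 */
    unsigned char buf[4] = {0x00, 0x00, 0x01, 0x2c};

    /* just copying the bytes keeps them in memory order, so on a
       little-endian machine the value comes out "backwards":
       0x2c010000 = 738263040 instead of 300 */
    uint32_t naive;
    memcpy(&naive, buf, sizeof naive);
    printf("no byte swap: %u\n", (unsigned)naive);

    /* ntohl() converts network (big-endian) order to host order */
    printf("ntohl:        %u\n", (unsigned)ntohl(naive));

    /* or assemble it by hand with shifts -- note the casts to uint32_t:
       buf[0] would otherwise be promoted to a *signed* int, and left-shifting
       a byte like 0xff by 24 into the sign bit is undefined behaviour */
    uint32_t by_hand = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
                     | ((uint32_t)buf[2] << 8)  | (uint32_t)buf[3];
    printf("shifts:       %u\n", (unsigned)by_hand);
    return 0;
}
```

(the by-hand version doubles as a shift gotcha: without the `uint32_t` casts, the bytes get promoted to signed int before shifting, and a byte like `0xff` shifted left by 24 lands in the sign bit.)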