In C/C++ there is a major issue responsible for a significant share of all software security and stability problems. The bullshit integer model: for each operator in an expression, the operands are first promoted to int, then a common type is chosen by taking the larger of the two operand types; if the types differ in signedness, the unsigned type wins. The result of the operator has this common type as well.
If, for example, one were to add two ints together, they would stay as int and the result type would be int as well. When you add two ints and try to force the sum back into another int, overflow is possible. One might think this is reasonable, and it is if you evaluate a simple expression and assign the result straight back into an int, but in more complex situations it becomes a problem.
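As a minimal sketch of how the signedness rule bites (standard C, nothing assumed beyond the usual arithmetic conversions):

#include <stdio.h>

int main(void) {
    int a = -1;
    unsigned int b = 1;
    /* The common type of a and b is unsigned int, so a is converted
       to UINT_MAX before the comparison is evaluated. */
    if (a < b)
        printf("-1 < 1u\n");
    else
        printf("-1 >= 1u, because unsigned was selected\n"); /* this branch runs */
    return 0;
}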
Examples:
int average(int a, int b) { return (a + b) / 2; }
s64 add64(s32 a, s32 b) { return a + b; }
Both of these return wrong results in some cases due to intermediate overflow (formally, signed overflow is undefined behaviour). I consider this to be total horse shit. The C/C++ integer promotion rules make it even worse through sheer inconsistency: why should s32 add32(s16 a, s16 b) { return a + b; } work, because the s16 operands are promoted to int before the addition, when the add64 above does not? Total bullshit.
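To see it fail concretely (assuming 32-bit int and the s32/s64 typedefs; signed overflow is undefined behaviour, so the results in the comments are merely what typical wrap-around produces):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

typedef int32_t s32;
typedef int64_t s64;

int average(int a, int b) { return (a + b) / 2; }
s64 add64(s32 a, s32 b)   { return a + b; }

int main(void) {
    /* a + b is computed in int and overflows before the division
       or the widening to s64 ever happens. */
    printf("%d\n", average(INT_MAX, INT_MAX));      /* want 2147483647, wrap gives -1 */
    printf("%lld\n", (long long)add64(INT_MAX, 1)); /* want 2147483648, wrap gives -2147483648 */
    return 0;
}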
Bounds checking:
The area where the integer model really sucks most is bounds checking. When parsing data structures loaded into memory, it is nearly impossible to correctly check for out-of-bounds lengths or offsets. It is so difficult that in many cases one will not even bother; nobody wants to spend three times as long writing bounds checks as writing useful code. Even if you manage to write correct bounds checks, they look so horrifically ugly and unmaintainable that you wish you had not bothered.
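The classic trap, as a sketch (function names are illustrative): the naive check itself overflows and waves the bad input through, while the correct form has to be carefully rearranged so that no intermediate can wrap.

#include <stdbool.h>
#include <stddef.h>

/* Naive: if offset + len wraps past SIZE_MAX it compares small,
   the check passes, and the caller reads out of bounds. */
bool in_bounds_naive(size_t offset, size_t len, size_t size) {
    return offset + len <= size;
}

/* Correct: rearranged so neither operation can wrap. */
bool in_bounds_safe(size_t offset, size_t len, size_t size) {
    return offset <= size && len <= size - offset;
}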
Now of course one cannot simply go changing the integer model of the language; that would break far too much existing code (I would still consider doing it, because the existing model sucks too badly). What we need is a way to tell the compiler to stop being obtuse and do the right thing: perform the intermediate computation at sufficient width to get the mathematically correct result. This is really not that difficult for most expressions; double width would prevent overflow in most cases, and for bounds checking, just checking the carry flag would suffice.
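Today the closest thing to "just check the carry flag" is the overflow builtins in GCC (5+) and Clang, which typically compile down to an add followed by a carry/overflow branch:

#include <stdbool.h>
#include <stddef.h>

/* True only when offset + len neither wraps nor exceeds size. */
bool fits(size_t offset, size_t len, size_t size) {
    size_t end;
    if (__builtin_add_overflow(offset, len, &end))
        return false; /* the addition wrapped */
    return end <= size;
}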
The solution:
I propose a new piece of syntax: a special pseudo-function we shall call ‘X’ for now. Any expression placed as the argument to ‘X’ is computed at a precision sufficient to yield the mathematically correct result. With this language feature, bounds checking and other overflow-prone computations become trivial.
The previous examples would simply be rewritten as:
int average(int a, int b) { return X((a + b) / 2); }
s64 add64(s32 a, s32 b) { return X(a + b); }
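Until such a feature exists, the closest approximation is widening by hand, which is exactly the busywork X would automate (a sketch; assumes int is 32 bits, so s64 is a genuine double-width intermediate):

#include <stdint.h>

typedef int32_t s32;
typedef int64_t s64;

int average(int a, int b) {
    /* Widen before adding: the s64 sum cannot overflow for any two ints,
       and the average of two ints always fits back in an int. */
    return (int)(((s64)a + b) / 2);
}

s64 add64(s32 a, s32 b) {
    /* Widen one operand so the addition itself happens at 64 bits. */
    return (s64)a + b;
}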
Is it really so much to ask to get a feature like this? It would be more useful than anything else they have come up with in the last 10 years. But of course they would never consider actually adding something useful; they just endlessly masturbate over some weird ‘type safety’ bullshit that has no real-world use.
Performing range checks correctly is so profoundly difficult due to intermediate overflow that most programmers get them wrong, or do not bother at all.