I don’t think that casting a range of bits as some other arbitrary type “is a bug nobody sees coming”.
C++ compilers also warn you that this is likely an issue and will fail to compile if configured to do so. But they will still let you do it if you really want to.
That’s why I love C++
100%. In my opinion, the whole “build your program around your model of the world” mantra has caused more harm than good. Lots of “best practices” seem to be accepted without any quantitative measurement to prove they’re actually better. I want to think it’s just the growing pains of a young field.
Even with quantitative measurements, they can do stupid things.
For work I have to write code in C#, and Microsoft found that null reference exceptions were a common issue. They actually calculated how much these issues cost the industry (some big number) and put a lot of effort into changing the language so that there are a lot of warnings when something might be null.
But the end result is that people just set things to an empty value instead of leaving them as null, to avoid the warnings. And sure, great: you no longer get null reference exceptions from values that defaulted to null and never got set. But now you have issues where a value is an empty string when it should have been set.
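To make that concrete, here’s roughly the pattern I’m talking about, as a sketch with a made-up class, assuming nullable reference types are enabled (not code from any real project):

```csharp
#nullable enable

// With nullable reference types enabled, the compiler warns (CS8618) when a
// non-nullable property could still be null after construction.
public class Order
{
    // Honest version: the type admits the value might be missing, and callers
    // get warned if they use it without checking for null.
    public string? CustomerName { get; set; }

    // Common workaround: default it to an empty string so the warning goes away.
    // A forgotten assignment no longer throws, it just silently flows through
    // the rest of the program as "".
    public string ShippingAddress { get; set; } = string.Empty;
}
```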
The exception message would tell you exactly where in the code the mistake is, you’d know immediately that there’s a problem, and it’s more likely to be discovered by unit tests or QA. An empty value that was supposed to be set may not be noticed for a while and is difficult to track down.
So their research indicated a costly issue (which is ultimately a dev making a mistake) and they fixed it by creating an even more costly issue.
There are always going to be things that are the developer’s responsibility to deal with, and there’s no fix for them at the language level. Trying to fix them with language changes can just make things worse.
For this example, I feel that it is actually fairly ergonomic in languages that have an Option type (like Rust), which can either be Some value or no value (None), and don’t normally have null as a concept. It normalizes explicitly dealing with the None case instead of having null or hidden empty strings and such.

I just prefer an exception be thrown if I forget to set something, so it’s likely to happen as soon as I test it and it will be easy to find where I missed something.
I don’t think a language is going to prevent someone from making a human error when writing code, but it should make that error easy to diagnose and fix when it happens. Whether you call it null, “”, empty, None, undefined, or anything else, it doesn’t change the fact that sometimes the person writing the code just forgot something.
Abstracting away from the problem just makes it fuzzier where exactly I forgot a line of code. Throwing an exception means I know immediately that I missed something, and which part of the code the mistake is in. Trying to eliminate the exception doesn’t actually solve the problem; it just hides it and makes it more difficult to track down when someone eventually notices something wasn’t populated.
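Something like this is what I have in mind, as a rough sketch (made-up class and property names, not real production code):

```csharp
#nullable enable
using System;

public class Order
{
    private string? _shippingAddress;

    // Fail-fast version: reading the address before anything has set it throws
    // right away with a message naming the missing value, so the mistake shows
    // up the first time this code path runs instead of "" leaking through the
    // rest of the system.
    public string ShippingAddress
    {
        get => _shippingAddress
            ?? throw new InvalidOperationException("ShippingAddress was never set.");
        set => _shippingAddress = value;
    }
}
```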
Sometimes you want the program to fail, and to fail fast (while testing) and in a very obvious way. Trying to make the language more “reliable”, instead of having the reliability of the software be the developer’s responsibility, can mean the software always “works” but doesn’t actually do what it’s supposed to do.
Is the software really working if it never throws an exception but doesn’t actually do what it’s supposed to do?