To be fair, at least C's type system is more sane than Perl's.
What's wrong with Perl's type system?
To start with, it has one namespace for scalar values (numbers and strings), another for list values, another for hash values, another for subroutines, and a really goofy one for things like filehandles. You identify the namespace you're using through sigils prefixing the name, namely $, @, %, and & (filehandles traditionally go bare and upper-cased). Yes, that means you can have five different things all named "foo". And in case you think that's hypothetical, consider that the built-in variables $+, @+, and %+ have different but related meanings, as do @INC and %INC; $_ and @_; $ARGV, @ARGV, and ARGV; $! and %!; etc.
Then, assignments and functions have contexts related to those types, and each type has odd conversion rules for when it gets used in an unexpected context. Except that some functions check the type of their parameter instead, with different functionality depending on what gets passed in. In some cases, these conversion rules can be deliberately invoked through such astoundingly evocative syntax as $#.
Oh, and namespaces are marked differently when taking an index into a list or hash. Then, the sigil used identifies the context of the indexed value, while the bracket type identifies the namespace of the data structure. I've used the wrong sigil more often than I can count. Then there are scalar references to data structures, with their own indexing conventions and creation syntax. The difference between parens and square brackets can bite you in entirely unexpected ways.
And last I checked, objects are "blessed" hashes, with their own brand of weirdness. But that's about the point where I discovered that networking and I/O would break when moving a script from one computer to another, and decided to learn Python.
I have literally never, in all my years of coding, thought to myself "gee, I really wish I could store either a number or a fruit basket in the same variable".
I have. Not long ago, for example, while dealing with a JSON data structure. I ended up with a whole bunch of nearly duplicated code, one set for objects and another for arrays, because the former uses strings as keys and the latter uses numbers.
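For illustration, here's roughly the kind of thing I mean, sketched in Python (the function and names are invented for the example): with dynamic typing, one recursive walk handles both objects and arrays, because the same variable can hold a string key or an integer index.

```python
import json

def walk(node, path=()):
    """Yield (path, leaf_value) pairs for every leaf in a parsed JSON tree.

    One code path covers both objects (str keys) and arrays (int indices),
    because `key` can hold either type.
    """
    if isinstance(node, dict):
        items = node.items()
    elif isinstance(node, list):
        items = enumerate(node)
    else:
        yield path, node
        return
    for key, child in items:  # key is a str or an int -- same code either way
        yield from walk(child, path + (key,))

doc = json.loads('{"users": [{"name": "ada"}, {"name": "bob"}]}')
print(dict(walk(doc)))
# {('users', 0, 'name'): 'ada', ('users', 1, 'name'): 'bob'}
```

With static types you'd typically end up writing that traversal twice, once per container type, which is exactly the duplication I hit.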
Come to think of it, I wouldn't want to be doing my current XPath work with static types, either.
But you're right that the cases of a variable holding a single type are far more frequent. I wouldn't even mind marking the exceptions explicitly, as long as the compiler doesn't make me explicitly mark the cases that it could reasonably infer.
Module support, of course, would actually make C++ programs compile a lot slower. The whole point of modules is that you don't have to include header files, so the compiler would need to examine every .cpp file on behalf of every other .cpp file to check whether some definition somewhere is used. It becomes an O(n²) problem in the number of .cpp files you have.
At the linker stage, if it decides to make that optimization, sure. But that's an optimization that could be skipped for development, making compilation quite a bit faster. With the right language semantics, compiling a file doesn't need to look at any other file anywhere; that's why Python compilation is so fast that you don't even notice it. Alternatively, one could require that a module only import symbols explicitly exported by an already-compiled module, but circular dependencies are occasionally convenient.
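To see the Python point concretely: byte-compiling a file never opens the files it imports, because imports are resolved at run time. This snippet compiles cleanly even though the module it imports doesn't exist anywhere (the module name below is deliberately bogus):

```python
# Byte-compile a snippet that imports a module which exists nowhere.
# Compilation still succeeds, because Python resolves imports at run
# time -- compiling one file never needs to look at any other file.
source = "import no_such_module_anywhere\nx = no_such_module_anywhere.f()"
code = compile(source, "<example>", "exec")
print(type(code).__name__)  # prints "code"
```

Running the compiled code would of course fail at the import, but by then compilation is long over, which is why it can be per-file and fast.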
A language with fast compile times, no static typing, and no header files is actually sacrificing a lot of runtime performance for developer ease of use. C/C++ are the languages aimed at maximum performance, and will always be needed, at least to implement all the shitty languages.
This, though, is a decent argument. Granted, we already sacrifice some runtime performance for dynamic linking; the language semantics could have been implemented in a way that sacrifices just a bit more to add parameter type expectations and type sizes. That would also make it easier to make libraries compatible with programs compiled against an older version.
Using explicit header files is needed to get the optimized compilation of C++ to happen in a more reasonable time (by minimizing the amount of data each module needs to know about during compilation).
Unfortunately, explicit header files all too often include far, far more information than the compiler needs to compile a given module, and lead the compiler to assume things about the imported symbols that may or may not be true if their implementation later gets changed, or even if they were compiled by a different compiler.
Dynamically typed arrays add a lot of overhead there, since every time a value is accessed it has to check "what the hell am I looking at", and it has to store type information along with your values. Every. single. access.
With a statically typed language it just stores the raw data and can safely assume what things are at runtime, since all of the details were set in stone at compile time. Which means more speed and lower memory requirements.
That assumption underlies so many security flaws it's not even funny. I'd prefer that my program crash at runtime than do the wrong thing.
The only time I find dynamic typing useful is in scripting languages. There it is mostly only useful to simplify the type system and allow using maps as sort-of objects and other little tricks that make the language simpler.
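The "maps as sort-of objects" trick looks something like this (sketched in Python; the field names are invented, and languages like Lua do the same thing with tables):

```python
# A map standing in for a lightweight object -- no class declaration needed.
# Fields and "methods" are just entries; dynamic typing lets one container
# hold numbers and functions side by side.
point = {
    "x": 3,
    "y": 4,
    "norm": lambda p: (p["x"] ** 2 + p["y"] ** 2) ** 0.5,
}
print(point["norm"](point))  # 5.0
```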
For any language that will be used for "real work" I want static types. That said, I love the way Go lets you declare a variable with an inferred type; it saves a great deal of time. (Would you prefer "a := f()" or "var a []map[string]func(int, int) bool = f()"? Provided "f" returns a "[]map[string]func(int, int) bool", both are equivalent.)
I keep meaning to look into Go. I also hear that it gets async right; Python had the opportunity to go down that route, but instead decided to go down a route of explicitly marking routines that could call something asynchronous somewhere in the tree, and keeping its blocking routines alongside the async ones.
In other news, the parser for my new scripting language, Cobalt, produced its first valid AST today! The VM is done, as is a simple assembler (used to test the VM). As soon as I test the parser/lexer some more, it is time for the last piece, the compiler.
Cobalt is based on my earlier Lua VM and compiler, but it has lots of little differences such as 0-based indexing, a more C-like syntax, better OO simulation, etc.
And yes, Cobalt is dynamically typed. It is an embedded scripting language after all...
Congratulations! Is it compatible enough to drop into a project that already uses Lua?