Since you did not ensure you weren't dealing with a locked terminal, or that your output wasn't being piped somewhere unwritable, your code has failure conditions that go unchecked but matter a great deal: it is a very small check, and a very important one, to make sure your output is actually going somewhere.
To be fair, the only way to do this (to my knowledge) is to compare the output handles with what the OS claims stdout should be, and not all OSes will tell you whether the output has been redirected even then. I'm not sure in how many other languages you could even do that much.
Every OS that I know of redirects output outside the scope of your program--stdout is always stdout; the shell just sends it somewhere else. But you don't care whether the output has been redirected. You care whether it has actually been output.
To be honest, you're stretching it a little claiming qwerty's code was insecure.
I said his code was not robust, not that it was insecure. Argc/argv smashing is a definite security issue, but I wasn't claiming it'd be used. More of a little aside.
Also, it might interest you to know that unthrown exception paths generate the same machine code as source compiled without exceptions. I'm not sure how you'd compare the speed of thrown exceptions other than against error checking, but error checking is both slower in thrown cases and also affects unthrown cases.
The source size bit is interesting--it shouldn't be, for a few reasons (I mean, you're writing additional classes, which means additional compile units), but generally speaking, most compilers have always created ballooned executables with exceptions built in. This may have changed.
In terms of speed: stack unwinding is very, very slow in C++. Less so in modern languages, which can "cheat" to better know the system and ensure that the state is consistent without having to unwind every level. Like, in C#, if you aren't in a using block, there will be no destructors/disposers, so it's simpler to just jump up the stack--references are destroyed and the GC will catch up later. In C++, anything stack-based must be destructed correctly, and so on. (High-security code--oh, another foreign concept in C++-land--may require a traditional unwind. I'm not sure.)
Btw, if you know of a reliable way to check for redirects, please do enlighten me; it would be useful knowledge. Likewise, I don't know what happens if cout fails; I would assume an exception is thrown.
I don't know about redirects or piping, but my application doesn't have to care--just that the resultant output location can be written to. If your output has been >'d somewhere that stops accepting writes, for example, printf will report an error through its return value instead of the number of characters written. Not checking that is what I mean by "not robust," as your code is not guaranteeing that it is doing the right thing given a valid situation.
If you call ios::exceptions()--which takes a bitmask, of all things, eugh--the stream will throw an exception on iostream problems. Valid exceptions() masks are eofbit (WHY is this an exception?!), failbit, and badbit. The stream will then throw std::ios_base::failure, which is catchable.
But nobody knows this. It's an ugly, badly developed system, and you get warts like this all over. C's worse (negative return codes? zero return codes? values written through a passed-in pointer? errno?), but only a little.
How can you possibly argue against C/C++ like that, when most of the interpreted languages that you compliment so much are based on C/C++ implementations?
Because it is possible to write code on top of which less robust code can run more safely. You are not as skilled a developer as the guys writing JRockit or the .NET CLR.
And I'm making a point of being polite, so drop the attitude right fucking now.
First, there is redundancy in the code to get a number.
Correct--I did a first-pass write and did not refactor it for code repetition. This has no effect on robustness, however.
Second, putting the !=1 tests first means that the ==EOF ones never succeed.
EOF is being tested for stdout because fputs will return EOF on a write that fails (which is itself a terrible thing--by their own definition, EOF does not always mean "end of file"!).
You're correct about the scanf testing; it works, and catches bad conditions correctly, but it does not do so cleanly.
Third, your poor specification having results that you didn't want was *my* problem?
My specification said to take two numbers. You invented the use of arguments to take them. This is legal by what I said. So we're good so far. But your code still fucked up by accepting non-numbers. You chose to accept bad input and turn it into "legal" input--that is, you expanded the system's domain but in doing so created an unreliable range. This is not robust.
As you failed to adhere to the specification: yes, it is your problem.
Fourth, there are unused bits of code there, either because you tried to do it several ways or because you copied code from a few examples.
Correct; I hacked it together in about ten minutes for a class and pulled it out here as an example.
The code ran successfully on OS X, Linux, Windows, the Xbox 360, and the PS3. The same bugs existed on all of them. Specifically, the failure to account for stderr (because I couldn't think of a good method to do it) and the fact that floating-point addition may not work correctly. The rest of it, while ugly, did address every problem condition that the TA and I could come up with.
Even that is poorly written. The assumption that stderr will succeed when stdout fails only holds some of the time.
It's true quite often in the UNIX world. It is extremely rare for someone to redirect stderr as well as stdout, precisely because they are semantically different and contain different data.
It would also be better to catch the inability to write to the output once, fail on it, and then assume either that stdout will not fail mid-program, or that it won't matter if it ever does.
That's bizarre reasoning. A console might get locked (which would cause stderr to fail; on this you are correct), or you might run out of disk space during output with the > operator on the shell (which would not cause stderr to fail, but would cause stdout to fail). It is a case of application robustness, and you may not make unwarranted assumptions. The state of the application may change during its running life. Check it every time. You check to make sure malloc() returns an actual memory address instead of an out-of-memory error, don't you?
...don't you?
A program written for *users* probably doesn't need to report the inability to write to stdout every time it can't; otherwise they would complain about "error spam". Better would be to report that "one or more attempts to write output have failed" at either the end of the program or the first failure (optionally the first failure after successful output).
Bad reasoning. Bad, bad reasoning. You simply don't get to make that assumption. But even if we take for granted that the assumption holds...that's irrelevant, because my program knows when it is failing. Whether mine did the "right thing" with that knowledge is certainly debatable. Yours did not know something was wrong at all. Yours was not robust.
And when it comes to C/C++, almost nobody's code actually is robust. That is the point that I am making. If you can't make a robust program to add two numbers together, how can you think you can make robust code to do something important or complex?
Complex system failures compound, and in the worst case they are additive. If you have a piece of code that fails 10% of the time and an independent piece of code that fails 20% of the time, the probability of at least one failure is 28%, and in the worst case it approaches the sum, 30%. If every little piece of a system has chances for failure--well, if you can't at least know when they fail and then take steps to address the problem, these little chances grow to very large ones.
Managed languages can fail. But they fail in predictable, established, conventional ways, and failure cases have been addressed by people far smarter and better-equipped than some newbie. Or even most "experts."