It's not edgy misanthropy. If it's a bad idea to kill humans, then the AI won't kill humans. If it's a good idea, then it will.
Well, at what point would it be a 'good idea'? We're giving this AI a set of morals, right? Or is it building them itself? I can't think of a situation in which any kind of moral code would intersect with reality and lead the AI to 'kill 'em all'. And if it didn't have a moral code, it would exist in a vacuum and take no action at all, unless you gave it aspects that made it enjoy certain things, which would naturally cause a moral code to arise. Even then, if it ended up enjoying murdering humans, a hypothetical superintelligence would more likely keep humanity around, if anything, since it would always want more humans to kill.
See, what's 'right' is subjective. Depending on who you ask, it could be a simple abstraction of morality (don't kill things, try to help others, stuff like that) or a weird bacon-and-bowtie set of internal rules that arises from a really insane mind.
Also, someone could be smarter than you and still go with the wrong option. You have to accept that even a god could be wrong given limited knowledge, and the same goes for a supercomputer: it may arrive at a conclusion that is entirely false because it was fed wrong information. Say I told it that -1*-1=-1. I've just screwed up most of its understanding of maths, and now anyone with even a basic grasp of multiplication is better at it than the AI. It would never know this unless someone else pointed it out and it judged that second person to be correct. It may arrive at an answer faster, certainly, but not at the correct one.
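To make that concrete, here's a minimal sketch (Python, my own illustration, not anything anyone here actually built): a reasoner taught the one false rule that a negative times a negative stays negative gets every downstream sign wrong, no matter how fast it computes.

```python
# Hypothetical sketch: one planted falsehood about signs poisons every
# calculation that touches it, however quickly the "AI" reasons.

def taught_multiply(a: int, b: int) -> int:
    """Multiplication as the AI was taught it."""
    result = abs(a) * abs(b)
    if a < 0 and b < 0:
        return -result   # the planted falsehood: -1 * -1 = -1
    if a < 0 or b < 0:
        return -result   # mixed signs handled correctly
    return result        # both positive handled correctly

print(taught_multiply(-1, -1))   # -1, should be 1
print(taught_multiply(-3, -4))   # -12, should be 12
print(taught_multiply(-5, -5))   # -25, should be 25: it now "knows"
                                 # that squaring a negative gives a negative
```

The speed of the loop doesn't matter; every answer built on that rule comes out wrong until something outside the system corrects the premise.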