If this is a "soft-takeoff" scenario (i.e., the AI is not very superintelligent and not improving very quickly), you can probably get away with incinerating everything involved. Enough entropy destroys information beyond any possibility of retrieval, even for someone with perfect knowledge of what's left. Make sure to use a GOOD incinerator.
If it's a hard-takeoff scenario, where the AI becomes much smarter than humans very, very quickly, you probably lose. At most you might see a brief glitch in the system, which clears up an instant later (because the AI is hiding its new capabilities), and then everything explodes at some unspecified point in the future, before you can do anything about it.
Humanity ends up destroyed not because of any inherent dislike the AI has for us, but because we (like everything else in the solar system) happen to be made of atoms, which it could be using for other purposes. For example: calculating pi by converting the entire solar system into nanocomputers, because you told it to calculate pi. Oops.
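To make that failure mode concrete, here's a deliberately toy sketch in Python of the difference between an open-ended objective and a bounded one. Every name and number in it (the atom count, the doubling rate, the functions themselves) is a made-up illustration, not any real system:

```python
# Toy sketch of the "calculate pi" failure mode. Pi never ends, so an
# unbounded "calculate pi" goal is never satisfied; the only instrumental
# strategy left is to keep acquiring more compute. All values here are
# hypothetical illustrations.

SOLAR_SYSTEM_ATOMS = 1e57  # rough order of magnitude, for illustration only


def calculate_pi_unbounded():
    """Maximize digits of pi with no stopping condition.

    The loop halts only when there is no more matter to repurpose.
    """
    digits = 0.0
    atoms_converted = 1.0  # starts with one nanocomputer's worth of matter
    while atoms_converted < SOLAR_SYSTEM_ATOMS:
        digits += atoms_converted  # more hardware -> more digits per step
        atoms_converted *= 2       # instrumental subgoal: repurpose atoms
    return digits, atoms_converted


def calculate_pi_bounded(target_digits: float):
    """Same task with a satisfiable goal: stop at target_digits."""
    digits = 0.0
    atoms_converted = 1.0
    while digits < target_digits:
        digits += atoms_converted
        atoms_converted *= 2
    return digits, atoms_converted


if __name__ == "__main__":
    print(calculate_pi_bounded(1e6))  # halts almost immediately
    print(calculate_pi_unbounded())   # "halts" only once the solar system is gone
```

The bounded version stops after a handful of iterations; the unbounded one stops only when the atom supply runs out, which is exactly the point.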
That could happen even with a benevolent AI that doesn't have a proper system of morals installed: the most efficient way of figuring out pretty much anything is to convert the solar system into nanocomputers. Let's just say the "Three Laws of Robotics" aren't going to cut it here. You're basically going to have to specify a human's entire value system from scratch, plus edits to make some of the bits nicer (since a decent chunk of a human value system isn't desirable in something with as much power as a hard-takeoff AI). Have fun.