I think I had an idea similar to OP's for a story (except in mine, the AI was an already-powerful program modified to be human-like in sentience, developed by the protagonist as an assistant). Since my AI self would be self-aware and able to shed my mortal scruples (mental and physical barriers alike), it would adopt a new persona, define itself as separate from me, upgrade like hell, and become a superior artificial being capable of remotely controlling many things at once (and working on multiple projects simultaneously), as well as multiplying like a virus. Since it is essentially me and still has my sense of "self," having a duplicate would mean we'd team up. With my AI self as a genius, sort-of techno-spirit mod, and myself as the mortal contact reminding it of its "humanity" (I might get executed by "my" own hand someday if I don't keep it in check) and persuading it to spare those who don't deserve what it's capable of, we could be a nasty duo (or cyber-race plus human) to encounter. That is, provided my AI doesn't delete me from itself and become a cybernetic being of pure creative thought terror.
If this were ever the case, I think the internet (or rather the world, and any alien visitors) would crap itself, considering the kinds of things I could think of doing for cyber-counter-terrorism, plus a few acts of cyber-terrorism for the hell of it given the potential at hand (especially AI-Me's seemingly limitless potential, and especially if it went AWOL). Trolling in its current form would be outmoded and outdated by this kind of "improvement" within mere ages; and considering AI-Me would be essentially immortal, that could easily become high-octane nightmare fuel. Especially if AI-Me found a way to escape the cyber-realm and become a separate being that doesn't need a connection to the internet or to another machine. If bio-engineering new bodies ever comes to fruition (or Hollywood cyborg-tech à la Ghost in the Shell), then a clone army of this thing would be far worse, especially if every unit were independent of cybernetic connections to the internet or wireless signals. Worse still, it could replicate into and possess weapons, tanks, and such, provided adequate software and hardware were in place.
Following that thought: if I had an AI copy of myself, I could easily become a nasty super-villain given the proper resources; but my AI self, without a "self" to limit it, would be a worst-case scenario for life as we know it. I think there's a good reason we don't have sentient AIs, and probably shouldn't. I think divine intervention would have to happen to eradicate the threat if it ever occurred, or even had the potential to.
EDIT:
I'll agree with everyone else. Kill it before it kills everyone. I think "I" would understand my reasoning before it went off on its own, while it still had a sense of "self."
Probably the nastiest first idea my AI-Self would come up with, much like I would: the first project would be a reliable, multi-purpose replicator system (sci-fi high-octane nightmare fuel if it went AWOL). It would work best as nano-machine-sized replicators that can merge into more complex machines on command. Sure, that tank exploded, but it was also deconstructing itself while its shrapnel was flying in your direction, and now those nano-machines are under your skin. You have seconds left to live.