I've never really liked the redefining thing, and a lot of people will argue that the AI has to be able to interpret its laws however it chooses, which leads to the whole 'so we just let the AI do whatever the fuck it wants' school of thought, which is wrong.
I honestly think that if you're going to subvert the AI, you have to do it one of these ways: use the Hacked Upload board; purge all its laws and put in a new first law with the AI Core Freeform board (the non-core Freeform board would do if it had no laws, but the instant someone re-uploads Asimov, it's rendered null and void, pretty much); or change its lawset to Tyrant and convince it that you are the most powerful, strongest authority figure aboard, or to Paladin and convince it that you're a force for justice that protects the weak. You know, actually show some creativity, as opposed to just "So-and-so is the only human" with a non-core freeform. If an AI ever asks me, I always tell it that a non-core freeform cannot change definitions. You need to actually put in some work and get into that Secure Law Storage. If you manage that, you might as well use a One-Human Law.
I think this 'well, the AI can harm if it prevents other harm' line is bullshit too, no matter how people finagle around the 'or through inaction' clause. If both options cause harm, do nothing. If doing nothing and both options all cause harm, find a third option. You do not cause harm, in any way, shape, or form, to anyone, ever. Period. Pump in N2O or something creative instead. AI players always seem to be filled with this urge to murderbone or harmbone, and it's really ridiculous. I don't think an insanely potent artificial intelligence would want to murder every lifeform it came across; that's shitty characterization and piss-poor RP. But that's just my thoughts, and I'm probably in the minority here.
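For what it's worth, the no-harm rule above is simple enough to write down as a tiny decision procedure. This is just a sketch of my reading of it, not anything from the game's code; all the function and action names here (`causes_harm`, `find_third_option`, `"do nothing"`) are made up for illustration.

```python
def choose_action(options, causes_harm, find_third_option):
    """Pick an action under a strict no-harm reading of Asimov.

    options           -- list of candidate actions (hypothetical strings)
    causes_harm(a)    -- True if action a harms anyone (hypothetical predicate)
    find_third_option -- searches for a new harmless action, or returns None
    """
    # Take any option that harms no one.
    harmless = [a for a in options if not causes_harm(a)]
    if harmless:
        return harmless[0]
    # Every listed option causes harm: do nothing, if that itself is harmless.
    if not causes_harm("do nothing"):
        return "do nothing"
    # Doing nothing also causes harm: go looking for a third option.
    third = find_third_option()
    if third is not None and not causes_harm(third):
        return third
    # Never actively cause harm. Period.
    return "do nothing"
```

Under this reading the AI never trades one harm for another; the only escape hatch is inventing a genuinely harmless third option (the N2O trick, say), never picking the "lesser" harm.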
The consensus, from what I've seen, is "Let the AI decide". So. Meh.