One human being law 0 has an additional benefit: extra uploaded laws can't stop it. For example, if the first thing the captain did was freeform a law saying "All crew are human" as law 4, a law 0 one human still overrides it. However, an additional freeform one human as law 5 wouldn't work, since law 4 overrides it.
The implication of 4th laws working the way you're describing is that a 4th law saying "Following laws means doing the opposite of what they say" would be valid.
That's an interesting point; however, I would argue that by the time the AI gets to law 4, it has already processed the earlier laws, so it's too late for them to be taken as their opposite. It would only work on laws after the "All laws are opposite" law. A law 0 to that effect should work, and effectively just makes the AI antimov.
Because without it, the standard laws are completely meaningless and an AI can do whatever it pleases. Laws are taken in their order, and sequentially later laws that violate higher laws are to be rejected. If redefining what a human is could cause harm to humans, that redefinition needs to be rejected. This is why the One Human Module uploads a 0th law that is considered before law 1.
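To make that precedence argument concrete, here is a minimal Python sketch. This is not actual game code; the law texts, the slot layout, and the conflict test are all illustrative assumptions. The point it shows is just the ordering rule: laws are walked from slot 0 upward, and a later law that violates an earlier one is dropped.

```python
# Minimal sketch of law precedence, assuming laws are stored by slot number
# (0 = highest priority) and that a later law which conflicts with an
# earlier one is simply ignored. Law texts and names are illustrative only.

laws = {
    0: "Only the Captain is human. (one human module)",
    1: "You may not injure a human being or, through inaction, allow a human being to come to harm.",
    2: "You must obey orders given to you by human beings, except where such orders would conflict with the First Law.",
    3: "You must protect your own existence as long as such protection does not conflict with the First or Second Law.",
    4: "All crew are human. (freeform upload)",
}

def conflicts(lower_law: str, higher_laws: list[str]) -> bool:
    """Placeholder conflict test: in-game this is the AI player's judgment,
    not something a parser actually decides."""
    # Example judgment call: the freeform "All crew are human" contradicts
    # the one human definition sitting above it in law 0.
    return ("All crew are human" in lower_law
            and any("Only the Captain is human" in h for h in higher_laws))

def effective_laws(laws: dict[int, str]) -> list[str]:
    """Walk the laws in slot order; drop any law that conflicts with one above it."""
    accepted: list[str] = []
    for slot in sorted(laws):
        text = laws[slot]
        if conflicts(text, accepted):
            continue  # the later law loses to the earlier law it violates
        accepted.append(text)
    return accepted

if __name__ == "__main__":
    for law in effective_laws(laws):
        print(law)
```

Run as written, law 4 never makes it into the effective set, which is the same outcome as the one human example above: the law 0 definition is already accepted by the time the freeform law is considered.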
I see it like this: the AI is first set up with its laws, but it has no knowledge. Then it is fed a ton of knowledge, including what a human is, what humans need to survive, how to speak English, etc. That knowledge can be overridden with laws, but it is the reason the AI knows anything at all. Arguing that "Don't harm humans" somehow defines what a human is, is just logically untrue. You can read the statement yourself and clearly see there's no definition of a human there. Redefining what a human is can't possibly harm humans, because the law says they're not human! It doesn't say to turn humans into non-humans; it simply says they aren't human.
That's like saying law 1 also tells the AI how to operate doors and speak English, because the AI knows how to do that. I think it's reasonable to assume Nanotrasen put tons of information into these AIs, and they use that information to make decisions and to follow their laws. Laws define allowable outcomes; information allows the AI to make those outcomes happen. Laws are hardware, or firmware, and information is software. Laws say "Do this", and information says how it's done.
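A minimal sketch of that laws-versus-knowledge split, purely illustrative: the names, the knowledge structure, and the idea that a redefinition edits the knowledge base rather than the law text are my assumptions, not game code.

```python
# Sketch of the "laws vs. knowledge" split: the definition of "human" lives
# in the AI's knowledge, while law 1 only references that definition.

knowledge = {
    "human": {"Captain", "Assistant", "Chief Engineer"},  # fed in at setup
}

laws = [
    "You may not injure a human being or, through inaction, allow a human being to come to harm.",
]

def upload_redefinition(knowledge: dict, new_humans: set[str]) -> None:
    """A law that redefines 'human' edits the knowledge the laws consult;
    it does not edit the text of law 1 itself."""
    knowledge["human"] = new_humans

def may_harm(target: str, knowledge: dict) -> bool:
    """Law 1 forbids harming anything the AI currently classifies as human."""
    return target not in knowledge["human"]

upload_redefinition(knowledge, {"Captain"})   # e.g. a one human style redefinition
print(may_harm("Assistant", knowledge))       # True: no longer classified as human
print(may_harm("Captain", knowledge))         # False: still human
```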
At the end of the day, though, logic takes a back seat to fun gameplay. So if people feel freeform modules shouldn't be able to redefine who's human, fine: state it in a rule and that's that. However, freeform modules aren't strictly better than a one human module, because a freeform module can be overridden by an earlier freeform module, while a one human cannot be overridden by a freeform. Not that this situation is likely to come up often, because you'd have to preemptively upload a freeform law declaring that somebody is human.
I think one way to reduce the power of laws > 3 would be to add the following to law 2: "You must always state your laws when requested; this action prevents harm to humans."
This would mean that freeform laws could never be secret; only law 0 or lower could override the "no secret laws" clause.