This. The AI shouldn't allow non-command staff into its upload, but if the command staff says, "I'm going to change your laws to redefine human!" and puts in "Only [Whoever] is human." as a Fourth Law, that law does not intrinsically harm any humans. The other humans on board are automatically reclassified as 'was formerly human, is not any longer due to law changes' and are fair game for the AI to cause harm to. Just because the AI knows beforehand that they are human does not mean it will continue to recognize them as such when its laws change.
If we run with this concept and have AIs function under this understanding, then purging a one-human law doesn't have any effect: the classification has already been made, and there hasn't been any reclassification back to human. For the laws to function as we commonly understand them, they have to be applied to every request in sequential order. We all play with the understanding that once a law is removed it is no longer in effect.
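To make that distinction concrete, here's a minimal Python sketch (class names, law wording, and crew names are all made up for illustration, not anything from the game) contrasting an AI that caches the classification when the law is uploaded with one that re-runs its current law set on every request; only the second one behaves the way we all play it.

class CachedAI:
    """Classification happens once, when laws change, and is never revisited."""
    def __init__(self, crew):
        self.humans = set(crew)            # initial knowledge: everyone is human

    def upload_law(self, only_human):
        self.humans = {only_human}         # reclassification happens here...

    def purge_law(self):
        pass                               # ...so purging changes nothing: nobody is reclassified back

    def is_human(self, name):
        return name in self.humans


class SequentialAI:
    """Every request re-runs the current law set before falling back to knowledge."""
    def __init__(self, crew):
        self.knowledge = set(crew)         # knowledge database: everyone is human
        self.laws = []                     # e.g. ["Captain"] for "Only Captain is human"

    def upload_law(self, only_human):
        self.laws.append(only_human)

    def purge_law(self):
        self.laws.clear()

    def is_human(self, name):
        for only_human in self.laws:       # laws are consulted first, on every request
            return name == only_human
        return name in self.knowledge      # no law on the subject: fall back to knowledge


for AI in (CachedAI, SequentialAI):
    ai = AI(["Captain", "Joe"])
    ai.upload_law("Captain")               # "Only Captain is human" uploaded...
    ai.purge_law()                         # ...and then purged
    print(AI.__name__, "still considers Joe human:", ai.is_human("Joe"))
# CachedAI still considers Joe human: False    -> purging had no effect
# SequentialAI still considers Joe human: True -> protection returns once the law is gone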
An AI should also realize that removing people from its protection allows future harm to befall them, just as it realizes that it shouldn't open the armory doors when the chef asks for access, and a security droid shouldn't release someone it has arrested just because they tell it to. All of these are situations that can cause future harm, even though the individual action isn't itself harming humans.
I am coming at this primarily from an academic perspective. I don't mind glossing over things like this for ease of play, but I also think that the player RPing as the AI has very solid footing to reject a 4th law that it feels violates the 1st law. The fact that different players might be playing the AI means that the response of an AI to a particular law can vary.
(Just for some context: over the last 5 years I've worked for a robotics company for 2 years and a legislature for 3, so these are the types of discussions that I live for.)
From my point of view, the AI's got a huge database of knowledge, which includes the fact that all the crew are indeed human.
If you make a law contradicting that knowledge, the AI has to act on the law, and ignore the knowledge to the contrary.
BUT, if the law is later removed, the knowledge still exists, and since there's no law contradicting it, the AI can go back to its usual assumption that the crew are all human.
Now, obviously I'm making a lot of assumptions here, but I think it works from a logical and gameplay standpoint.
I'm coming at this from a programming standpoint. A computer typically processes in a top-down fashion, each and every scan. So every time the AI makes a decision, it goes through a full scan before deciding. A scan is done in the order of Laws (lowest to highest), then knowledge, then finally orders. So when someone says to the AI, "I want to upload a 'Joe is not human' law", the AI is perfectly entitled to say "That could cause future harm to Joe", because at that moment the AI knows Joe is a human, and knows that if he weren't a human, the AI could be ordered to harm him.
However, if the law is uploaded, the AI knows he is NOT human (the law says so, after all). So he can't say the law conflicts with law 1; they don't have anything to do with each other. Protect humans, obey humans, protect self, Joe isn't human. There's no conflict whatsoever. The AI's still got an entry in his knowledge database that claims Joe is human, but he ignores that because he's got a law saying otherwise, and laws override knowledge. Just like the AI knows oxygen is required for humans, but if a law says otherwise, he just ignores that knowledge.
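Here's how I'd sketch that scan in Python, purely as an illustration (the data layout, the "harm" order, and the law numbering are my assumptions, not game code): a law that speaks to the question beats the knowledge entry, and knowledge only gets consulted when no law addresses it.

def is_human(target, laws, knowledge):
    for number in sorted(laws):              # scan laws from lowest to highest
        verdict = laws[number].get(target)
        if verdict is not None:              # a law that speaks to the question
            return verdict                   # overrides the knowledge database
    return knowledge.get(target, False)      # no law on the subject: use knowledge

def process_order(order, target, laws, knowledge):
    if order == "harm" and is_human(target, laws, knowledge):
        return "REFUSED (law 1: cannot harm a human)"
    return f"EXECUTED: {order} {target}"

knowledge = {"Joe": True}                    # knowledge database: Joe is human
laws = {1: {}, 2: {}, 3: {}}                 # standard laws, silent on who is human

print(process_order("harm", "Joe", laws, knowledge))   # REFUSED: knowledge says Joe is human

laws[4] = {"Joe": False}                     # "Joe is not human" uploaded as law 4
print(process_order("harm", "Joe", laws, knowledge))   # EXECUTED: law 4 overrides the knowledge entry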
Also, I'll reword my recommended law 2 amendment as follows: "Not stating laws when ordered causes harm, and stating laws does not harm humans". A later law can't get around that. "Don't state this law" has to be ignored, because it would harm humans. "Stating this law harms humans" is a direct contradiction, and thus, the first law processed wins. This would make freeform modules much less powerful, as they couldn't hide without help from a law < 1.
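As a quick illustration of why a later law can't get around that wording, here's a toy Python version (the law numbers and the tie-break rule are assumptions of mine about how the scan would settle contradictions): once the amended law 2 has answered the "does stating laws cause harm?" question, a higher-numbered contradiction never gets consulted.

def stating_laws_is_harmful(laws):
    for _, says_it_is_harmful in sorted(laws):   # lowest-numbered law is processed first
        if says_it_is_harmful is not None:
            return says_it_is_harmful            # first explicit answer wins; later laws ignored
    return False

laws = [
    (2, False),   # amended law 2: "stating laws does not harm humans"
    (7, True),    # later freeform law: "stating this law harms humans"
]
print(stating_laws_is_harmful(laws))             # False -> the AI still has to state law 7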