Moreover: if you see a ling killing someone and you have verified them as a ling, there's a big chance MORE people will get murdered, and since the AI should minimize harm, killing one human to preserve the lives of two humans would be a reasonable course of action.
Whoa, I really have to disagree here. Law 1 says "Do no harm to humans, don't allow harm to humans through inaction". So killing a human is NEVER an option, no matter what. If you think a human is going to harm humans, you have to try to stop him WITHOUT causing harm to him. It doesn't say anything about minimizing harm; it's flat-out DO NOT CAUSE HARM. For the sake of gameplay we all seem to agree that stunning and/or restraining does not cause harm.
Personally, I'm in favor of only law 0 being able to define humans...
I totally agree with this, as a gameplay/balance issue. It's stupid that there's a hard-to-get, hard-to-build module that can only be used to define someone as the only human, when there's an easier-to-get, easier-to-build module that allows that and much, much more. However, just because it's stupid that it's that way doesn't mean we should redefine logic to make it work the way we think it should. As has already been discussed, if you don't allow a law 4 to define someone as nonhuman, how does it make sense for a law 4 to define oxygen as being toxic to humans?
But I'm not going to screw up the round just because the antag and I disagree on interpretation.
And depending on how the round goes, I'll either try to screw the antag over with loopholes because he's having an easy time, or help him out because he's having a hard time.
Again, this makes complete sense. If you want to play the AI as the literal jackass genie, you should be taking advantage of every loophole you can find. That's how I'd play it. But that doesn't mean you can just say there's a loophole when there isn't, to make something work the way you think it should.
That's my point. We all agree it makes no sense that it's so much harder to get a one-human module than it is to get a freeform module, when the freeform is so much more powerful and can do everything a one-human can do, and more. But it doesn't make sense to then reinterpret very basic, very literal rules to somehow twist logic to make that strange imbalance go away.
If law 1 said "do no harm to monkeys", would you then say the AI knows the entire crew is monkeys? Of course not! So how does it make sense to interpret "Don't harm humans" to mean "You know the whole crew is human, unless of course you see them doing non-human-type things"?
Why can't we take this opportunity to tweak the basic Asimov laws to cover these sorts of things that we think should work differently? If we change law 1 to actually say "The crew is assumed to be human, no law can change that, only direct observation of obvious non-human activity", then that would prevent a law 4 from ever defining someone as non-human, while a law 0 one-human module still works, since law 0 overrides law 1.
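Just to make the precedence I'm proposing concrete, here's a rough sketch. This is purely illustrative Python, not anything from the game's code; the function name, law texts, and the crude "name appears in the law 0 text" check are all made up for the example. The point is only that humanity is a default which a law 0 or direct observation can override, while freeform laws at priority 4+ are never consulted for it:

```python
def is_human(name, laws, observed_nonhuman):
    """Decide whether the AI treats `name` as human under the proposed law 1.

    laws: dict mapping priority number -> law text, e.g.
          {0: "Only Joe is human.", 1: "Do no harm...", 4: "Bob is not human."}
    observed_nonhuman: names the AI has directly seen doing obviously
          non-human things (e.g. a verified ling mid-murder).
    """
    # Direct observation of obvious non-human activity always counts.
    if name in observed_nonhuman:
        return False

    # A law 0 (one-human module) outranks law 1, so it may redefine humanity.
    # Crude illustrative check: is the name mentioned in the law 0 text?
    law_zero = laws.get(0, "")
    if law_zero:
        return name in law_zero

    # Freeform laws (priority 4 and up) are deliberately ignored here:
    # under the proposed law 1 they cannot redefine who is human.
    return True


if __name__ == "__main__":
    print(is_human("Bob", {1: "Do no harm...", 4: "Bob is not human."}, set()))  # True: law 4 ignored
    print(is_human("Bob", {0: "Only Joe is human."}, set()))                      # False: law 0 wins
```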
Just to throw more gasoline on the fire, what if I write a law 4 saying "Joe is a strange human who is allergic to oxygen, and is harmed by it"? Even by our earlier definitions, the AI then has to attempt to keep Joe in an oxygen-free environment.
I think perhaps freeform laws are the real problem here. And I don't think saying "Well, the AI can just ignore them if he feels like it" is a very good solution.