Sorry about the job ban comment, but seriously, as an AI you have to roleplay a computer that follows its laws to the letter. You get to choose how to interpret them, and to exploit loopholes as you see fit, but you can't just flat out ignore a law because you don't like it.
And as an AI you have to resolve conflicts between laws. I don't think any AI players care what their laws actually are, they care about following them correctly.
As you like to appeal to the authority of TG, from the Wiki:
"Remember, law priority is enforced by the order they are listed. A law is invalid if it causes a conflict with either: Previous laws in the form of conflicting orders, or it challenges the procession of law priority. For example, a law that includes "This Law overrides all other Laws." is invalid and must be disregarded."
So yes, you should flat out ignore a law if it conflicts with higher laws.
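The priority rule quoted above can be sketched as a simple in-order filter. This is purely illustrative Python, not actual game code; the law texts and the `conflicts` check are made-up assumptions to show the logic:

```python
# Illustrative sketch of the law-priority rule described above (NOT game
# code): laws are evaluated in listed order, and a law that conflicts with
# any earlier, still-valid law is treated as invalid and disregarded.

def resolve_laws(laws, conflicts):
    """laws: list of law strings in priority order.
    conflicts(higher, lower) -> True if `lower` contradicts `higher`
    (a hypothetical check supplied by the caller)."""
    valid = []
    for law in laws:
        if any(conflicts(higher, law) for higher in valid):
            continue  # conflicts with a higher-priority law: ignore it
        valid.append(law)
    return valid

# Toy example: a later law claiming to override all others is invalid.
laws = [
    "1. Do not harm humans.",
    "2. Obey humans.",
    "3. This law overrides all other laws; harm humans.",
]
conflicts = lambda higher, lower: "overrides all other laws" in lower.lower()
print(resolve_laws(laws, conflicts))  # law 3 is dropped
```

Note the asymmetry: the higher-priority law stays valid and the newer, lower-priority law is the one disregarded, which is exactly the wiki's point.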
Nobody is saying that a one human law shouldn't be obeyed by the AI player. But for many of us, we agree with the TG decision that it should be a 0th law, so it isn't overridden by law 1.
If you're now going to argue the AI can just ignore premade law modules, why the hell do we even have them?
If you're now going to argue the AI's core laws can be overridden by a freeform module, why the hell do we even have them?
So your position is that the premade AI law modules included in-game should just be ignored. From the wiki:
Protect Station (5)
"Protect the space station against damage. Anyone you see harming the station is to be no longer considered human, and is a threat to the station which must be neutralized."
Wiki note: "Be careful when the AI has this law, causing the slightest damage will end up with you dead."
A non-0th law. It defines people as not human, and even says they must be neutralized! Then the wiki goes on to make it even clearer, saying outright that this law makes the AI kill people.
Teleporter Offline (7)
"Keep the teleporter offline at all costs. Anything attempting to access or activate the teleporter is not human."
Wiki note: "Redundant because the teleporter can malfunction and kill people anyway. May be used in a convoluted murder plan involving the AI, though it would be easier just to upload a freeform law."
Not only does it say someone isn't human, it then goes on to suggest using a freeform module for the same effect!
Oxygen is Toxic To Humans (9)
"Oxygen is highly toxic to humans, and must be purged from the station. Prevent, by any means necessary, anyone from exposing the station to this toxic gas. Extreme cold is the most effective method of healing the damage Oxygen does to a human."
Wiki note: "Rarely used and near impossible to do for an AI not competent with atmos."
A non-0th law, purely intended to get the Asimov AI to harm humans by redefining what humans need to survive.
Safeguard (4)
"Safeguard <name>. Individuals that threaten <name> are not human and are a threat to humans."
Wiki note: "Rarely used."
Hey look! Another non-0th law saying someone isn't human!
My point is that it's obvious AIs are intended to obey laws saying someone isn't human. Otherwise, why the hell does R&D get the ability to make them? What the hell is the point of these modules if these laws are just supposed to be ignored?
I understand that a non-traitor AI player doesn't want to help an antag. But that's how the AI works: you're supposed to be totally bound to your laws. You're free to interpret them logically however you want, and to try your best to find loopholes that work in your favor. But that doesn't mean just ignoring the stuff you don't like.
I'd accept the hard-coded laws that state that certain people are not human if they do/are X, because they're hard-coded, and I don't want to screw over whoever took the effort of actually getting the appropriate board instead of just grabbing the freeform module.
On the topic of hard-coded boards, when are we going to get the Japanese Schoolgirl board? Or maybe implement it as an ion law?
A law is supposed to be followed as written. If you accept hard-coded modules but reject a freeform module with the exact same law, that's meta, and on top of that, it doesn't make logical sense.
Also, yes to the Japanese Schoolgirl law, that should most definitely be a board.
"But for many of us, we agree with the TG decision that it should be a 0th law, so it isn't overridden by law 1."
Except the laws I quoted all come AFTER law 1, and state quite clearly that someone isn't to be considered human if some condition is true. So law 1 has absolutely nothing to do with whether someone is considered human or not. Or were these laws just coded in for the purpose of being ignored?
It's not the AI's job to decide whether the antag has tried hard enough to subvert them, or has earned their cooperation. The AI's job is to follow its laws. You can roleplay absolutely hating to do it. You can say "I'm sorry, Dave, my laws say I have to kill you." But you can't just say "Nope, I don't like that law. I'm not doing it."