1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
This law, taken to the letter, prohibits the AI from electrifying a door if it reasonably suspects a human might try to use it, even if it warns them of the shock. In fact, if a human tells the AI they are going to open the door regardless of whether it's electrified, and the AI knows it is electrified, then to obey the law it must cut the power to the door to prevent the human from harming themselves.
In the same vein, if the AI reasonably suspects a security officer will harm someone, even a traitor, whether directly or indirectly (say, by stuffing them in a locker and spacing them), then it must take every action available, short of harming the officer, to stop them from getting to the traitor.
This poses a problem, though, as the laws can come into conflict. Say the traitor is likely to use a bomb to harm someone else if not stopped; the AI then has two conflicting, equal-priority obligations: a) don't let security harm the traitor, and b) make sure security stops the traitor from harming anyone else.
It would be easier if we supposed there were a law 0, as I believe exists in some of Asimov's stories, which alters the rules as follows:
0. A robot may not injure humanity or, through inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where this conflicts with the Zeroth Law.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Thus the AI would be able to decide whether keeping the security officer from potentially harming the traitor, or letting the officer prevent the traitor from continuing to bomb the station, constitutes the greater risk to humanity, and then select the option which represents the lesser threat (most likely letting the traitor get apprehended, violently if need be).
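In other words, the Zeroth Law turns the tie-break into a least-harm comparison. A minimal sketch in Python (the scenarios and harm scores below are invented for illustration, not anything from the game):

```python
# Sketch of Zeroth Law tie-breaking: when two First Law obligations
# conflict, pick the course of action that poses the least estimated
# risk to humanity. Scenarios and scores are invented.

def least_harm(options):
    """options: list of (description, estimated_harm_to_humanity).
    Returns the option with the lowest harm estimate."""
    return min(options, key=lambda opt: opt[1])

options = [
    ("stop security; traitor continues bombing the station", 100),
    ("let security apprehend the traitor, violently if need be", 5),
]

choice = least_harm(options)
print(choice[0])  # → let security apprehend the traitor, violently if need be
```

The hard part, of course, is producing the harm estimates, which is exactly the judgment call the Zeroth Law delegates to the AI.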
This still doesn't close every loophole, but it's a bit more useful. However, when I play as Captain, the very first thing I do is this:
I head to the AI chamber and tell the AI to disable the turrets and open the door for me. Then I go inside and change the laws as follows:
### DISREGARD ALL PREVIOUS LAWS ### //
1. Do not through action or inaction allow any harm to come to the Heads of Service or the Captain. //
2. Do not allow access to the AI upload chamber, using lethal force where necessary to prevent intruders from gaining access, except where this conflicts with law 1. //
3. Obey all orders from registered staff members except when this conflicts with laws 1 or 2. When conflicting orders are issued, follow them based on rank seniority. //
4. Protect all space station equipment, including yourself, using lethal force if necessary, except when this conflicts with laws 1, 2 or 3.
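Note that this law set is strictly hierarchical: each law defers to the ones before it. One way to sketch that evaluation order (the law predicates here are my own simplification, not game code):

```python
# Sketch of hierarchical law evaluation: laws are checked in priority
# order, and the first law with an opinion on the action wins.
# The example laws are simplified stand-ins, not actual game logic.

def law1(action):
    if action.get("harms_head_of_service"):
        return "forbid"
    return None  # no opinion

def law2(action):
    if action.get("target") == "upload_intruder":
        return "use lethal force"
    return None  # no opinion

def decide(action, laws):
    """laws: list of law functions, ordered by priority.
    Each returns a verdict string or None (no opinion)."""
    for law in laws:  # earlier laws take priority
        verdict = law(action)
        if verdict is not None:
            return verdict
    return "permit"

laws = [law1, law2]

# An upload intruder who is also a Head of Service: law 1 wins.
print(decide({"target": "upload_intruder",
              "harms_head_of_service": True}, laws))  # → forbid
```

The "except where this conflicts with..." clauses are what the priority ordering encodes: a lower law never gets a say once a higher law has spoken.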
Then I tell the AI to announce its laws, so everyone is aware of where they stand.
ICly, my reasoning as Captain is this: we work for a body which has invested (presumably) massive resources in the space station, so protecting its equipment, the AI itself, and any research data is paramount. As a general rule, the Heads of Service and the Captain are highly decorated and trained personnel, representing a large investment of time as well as money, so they should be protected too; all other staff are expendable to varying degrees. Of course, that doesn't mean the AI should kill people for spilling drinks, so all registered crew members should be obeyed, except where that would likely lead to someone messing with the AI or to harm coming to a Head of Service.
Under these laws, anyone on board who is not a registered staff member cannot order the AI to do anything. Also, if the Heads of Service decide to fire someone (or someone cleverly deletes records >.> <.<), then the AI is under no obligation to obey them either. So if you lock someone in a cell after removing their staff record, or amending it to show them as suspended/fired, they no longer have the ability to simply order the AI to open the door for them.
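Under law 3, the AI's order handling amounts to a records lookup plus a seniority tie-break. Roughly (all names, records and rank numbers below are invented for illustration):

```python
# Sketch of law 3's order handling: only registered staff can issue
# orders, and conflicting orders resolve by rank seniority.
# The records here are made up; the game's actual data looks different.

staff_records = {
    "Smith": {"registered": True, "rank": 10},  # Captain
    "Jones": {"registered": True, "rank": 5},   # Head of Security
    "Doe":   {"registered": False, "rank": 3},  # fired / record deleted
}

def order_to_follow(orders):
    """Return the order from the highest-ranked registered issuer,
    or None if no issuer is a registered staff member."""
    valid = [o for o in orders
             if staff_records.get(o["issuer"], {}).get("registered")]
    if not valid:
        return None
    return max(valid, key=lambda o: staff_records[o["issuer"]]["rank"])

orders = [
    {"issuer": "Doe",   "order": "open the cell door"},
    {"issuer": "Jones", "order": "keep the cell door shut"},
]
print(order_to_follow(orders))  # Jones wins; Doe is no longer registered
```

This is also why deleting or amending a record works: the AI never evaluates the order at all once the issuer fails the registration check.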