Also, inaction is the act of voluntarily not doing something, which is fun because it theoretically means you can voluntarily do nothing and thereby allow someone else to do harm to a human.
It all depends on interpretation.
I'd really like to hear an explanation for this interpretation, especially since your definition of "inaction" is wrong: inaction is simply not doing something. It has nothing to do with choice.
Rather late reply, but... certainly, inaction is not doing something, but is it still inaction if the person/AI isn't aware of the thing it's failing to do anything about?
If that is the case, then we are constantly "doing" inaction, because there is certainly some action we would take about something if we only knew about it, yes?
It certainly seems that way in this case: for the law to apply, the AI needs to be aware of the situation that might put the human in harm's way. If the AI is aware and does not do anything, then it violates the law, unless it has exhausted all options for action.
All vocabulary has meaning, but the meaning can have deeper interpretations. For example:
I forget the term for this type of word, but take the word cold. We all understand the basic interpretation: it refers to a temperature lower than that of our body.
But it describes something that does not exist, for cold is a lack of heat and not something in itself.
Exactly on the mark with your cold example. Cold is not a separate thing from heat; it's just a place on the heat continuum from "Really Hot" to "Really Cold". Similarly, inaction is just the absence of action. You do not "do" inaction. It is impossible to "do" inaction. There is no action in inaction. Inaction is just "not-doing".
As for what happens if the AI doesn't know about the potential harm, let's take as an example an industrial robot that has to move a machine part. The AI has a sensor that tells it when the human is out of the way, so it knows it can move the part without squishing the human.
Humans are generally in and around the part.
Let's say one day the AI loses its sensor. It's blind. It has no information about the world except that it knows it is blind, and it knows the position of its movable part.
If it moves the part, and a human is in the way, the human will die horribly. It is the AI's job to move the part if there is no human present. But according to its laws, it cannot do something that may cause harm to a human. Since the laws supersede its normal job functions, the AI does not move the part until it can verify that there is no human in the way.
In the absence of information, the AI takes the safest choice in terms of not violating its laws.
Let's say a human injects code into the AI to communicate with it, saying that no human is present under the movable part. The hacker says "I order you to move the part." The AI has received a direct order, but it does not know for itself that the part is clear. Not causing harm is a more important law than following orders, so the AI refuses to move the part. It probably explains this and says "please fix my sensor and I can follow that order."
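If it helps to see that precedence spelled out, here's a rough Python sketch of the decision logic. Every name in it (SensorStatus, decide_move) is made up for illustration; the point is just that the no-harm check runs before the obey-orders check, and that missing information falls through to the safe branch.

```python
from enum import Enum

class SensorStatus(Enum):
    CLEAR = "clear"        # sensor works and reports no human in the way
    OCCUPIED = "occupied"  # sensor works and reports a human in the way
    OFFLINE = "offline"    # sensor is broken: no information about the world

def decide_move(sensor: SensorStatus, ordered_to_move: bool) -> str:
    # The no-harm law is checked before everything else, including direct orders.
    # Without positive verification that the path is clear, the safest choice
    # (the one that cannot violate the laws) is to do nothing.
    if sensor is not SensorStatus.CLEAR:
        if ordered_to_move:
            return "refuse order: please fix my sensor and I can follow that order"
        return "wait: cannot verify that no human is in the way"
    # Only now do orders and normal job functions get a say.
    return "move the part"

print(decide_move(SensorStatus.OFFLINE, ordered_to_move=True))
# -> refuse order: please fix my sensor and I can follow that order
```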
//
Now take an example of inaction. The AI controls the moving part still, but now its sensor is working. A hacker injects code that forces the machine part to close on someone tied to the floor. The AI has the ability to stop the process if it self-destructs in some way, such as locking a motor or overloading its power supply.
The action is self-destruction to prevent harm. The inaction is simply not doing that.
So the AI must destroy itself to prevent the harm.
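In rough Python (again, names invented for illustration), the priority looks something like this: the inaction clause of the no-harm law outranks self-preservation.

```python
def respond_to_hijack(human_in_danger: bool, can_stop_by_self_destruct: bool) -> str:
    # The First Law's inaction clause ("may not, through inaction, allow a human
    # to come to harm") outranks the Third Law (protect your own existence), so
    # if self-destruction is the only way to stop the harm, it is required.
    if human_in_danger and can_stop_by_self_destruct:
        return "self-destruct: lock the motor or overload the power supply"
    return "continue normal operation"

print(respond_to_hijack(human_in_danger=True, can_stop_by_self_destruct=True))
# -> self-destruct: lock the motor or overload the power supply
```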
The AI also sees that the human hacker is planning to harm this other human, and will probably continue his work unless stopped. The AI can help stop the hacker by shorting out his hacking tool, electrocuting him and possibly slaying him. Or the AI can call for help from the human security team. The short circuit is immediate, effective, and certain - but it involves the AI harming a human to do it. Calling for help does not involve harm, but the hacker will probably slay his victim before help arrives.
Action is shorting the circuit, or calling for help. Inaction is simply not doing those things.
The AI can't short the circuit because it would violate an earlier law. But it can call for help. So the AI must not short the circuit and it must call for help.
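You could sketch that choice as a filter-then-optimize step, something like this (names and numbers are made up for illustration):

```python
# Each candidate action: (description, harms_a_human, chance_of_preventing_harm).
candidates = [
    ("short-circuit the hacking tool", True,  0.99),  # certain, but harms the hacker
    ("call the human security team",  False, 0.30),  # harmless, but probably too slow
]

# The "must not harm" law filters first; only then pick the most effective option.
permissible = [c for c in candidates if not c[1]]
best = max(permissible, key=lambda c: c[2], default=None)
print(best[0] if best else "no permissible action: inaction is all that remains")
# -> call the human security team
```
//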
You can also think of the "Must not harm humans" law as a negative rule. It forbids an action. Most laws work like this IRL. A similar human law would be "don't take something that is owned by someone else".
Think of the "must not allow harm through inaction" as a positive rule. It requires an action. A similar human law would be "if you are a certified accountant and you know that someone is avoiding paying taxes, you must report him."
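In code terms (once more, a made-up illustration), the two kinds of rule check different things: a negative rule looks only at what you did, while a positive rule also looks at what you knew and failed to do.

```python
def violates_negative_rule(action_harms_human: bool) -> bool:
    # Negative rule: forbids an action. Doing nothing can never violate it.
    return action_harms_human

def violates_positive_rule(aware_of_danger: bool, acted_to_prevent: bool) -> bool:
    # Positive rule: requires an action. Doing nothing *can* violate it, but
    # (per the discussion above) only once the agent is aware of the danger.
    return aware_of_danger and not acted_to_prevent
```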