Fuzzy inference doesn't work that way. Each fuzzy set has its own membership function, and when a group of concepts (like cold/warm/hot) is fuzzified, that function (usually a continuous one) maps a crisp quantity onto a degree of membership, i.e. a quality degree. The inference step then applies fuzzy rules using intersection/implication operators, so it's not a matter of percentages. The output of inference can itself be a collection of fuzzy sets; defuzzification is optional.

A fuzzy set expresses a quality degree, not a quantity, although in a practical application, like deciding the spin speed of a washing machine, the result must be a real number rather than a group of fast/medium/slow fuzzy sets. The defuzzification step usually takes a weighted average to arrive at that final real-valued output. For an ethical decision expressed as good/bad, though, defuzzification shouldn't happen at that stage, since those are qualities, not quantities. It's only needed once a concrete action is required, such as how many degrees to turn or how far to move.
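Roughly, in code (just a sketch; the membership functions, breakpoints, rule table, and the temperature-to-spin pairing are all made up for illustration):

```python
# Rough sketch of fuzzy inference: triangular membership functions,
# rule firing by membership degree, weighted-average defuzzification.
# All numbers here are invented purely for the example.

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(t):
    # Fuzzification: a crisp reading t becomes quality degrees, not quantities.
    return {
        "cold": tri(t, -10, 0, 15),
        "warm": tri(t, 10, 20, 30),
        "hot":  tri(t, 25, 35, 50),
    }

# Rules: IF input is cold THEN spin slow, etc.  Each output fuzzy set gets a
# representative rpm only because the actuator eventually needs a real number.
RULES = {"cold": 400, "warm": 800, "hot": 1200}   # slow / medium / fast

def infer_spin(t):
    degrees = fuzzify(t)
    fired = [(degrees[name], rpm) for name, rpm in RULES.items() if degrees[name] > 0]
    # Weighted-average defuzzification -- optional, and only done here because
    # the machine must be told one concrete spin speed.
    total = sum(w for w, _ in fired)
    return sum(w * rpm for w, rpm in fired) / total if total else 0.0

print(infer_spin(27.0))   # 960.0 -- a blend of "warm" (0.3) and "hot" (0.2)
```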
Hey, I
do have a computer science degree myself; I'm aware of how fuzzy logic operates. We're talking here about building a decision-making apparatus, so the various individual outcomes,
regardless of the inner workings of each decision process or equation, need to be quantified. The "percentages" I specified were basically for illustrative purposes, since probabilities are the most common way to express such things (at least in student examples). In the example, 0.50 is the evaluated probability that outcome 1 is a good outcome, and 0.49 is the evaluated probability that outcome 2 is a good outcome.
Anyway, both choices could be computed on some arbitrary scale (real values 'A' and 'B'), then scaled to 100*A/(A+B) and 100*B/(A+B) so they can be displayed, as in my example, as percentages.
The point I was making is that even though the evaluation is very, very close, a computer system will pick outcome 1 as superior 100% of the time, no matter how close the fuzzy-logic values are (unless we add pseudo-random "noise" to the decisions to make them less predictable, as in a game implementation, for example).
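In code, the scaling and the 100%-predictable pick look roughly like this (a sketch; the function names, the noise range, and reusing the 0.50/0.49 values are just my illustration):

```python
import random

def as_percentages(a, b):
    # Rescale two arbitrary "goodness" values so they display as percentages.
    total = a + b
    return 100 * a / total, 100 * b / total

def pick_outcome(a, b, noise=0.0):
    # Deterministic comparison: with noise=0, a 0.50 vs 0.49 evaluation picks
    # outcome 1 every single time, however close the underlying values are.
    a += random.uniform(-noise, noise)
    b += random.uniform(-noise, noise)
    return 1 if a >= b else 2

print(as_percentages(0.50, 0.49))                                 # (~50.5, ~49.5)
print([pick_outcome(0.50, 0.49) for _ in range(10)])              # all 1s
print([pick_outcome(0.50, 0.49, noise=0.05) for _ in range(10)])  # occasional 2s
```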
A human, on the other hand, will feel "conflicted" by such a close decision. But that feeling does not apply to a machine (machines aren't anthropomorphic unless we deliberately encode that behavior).
@counting: I'm beginning to think you make over-long, rambling, off-topic posts
on purpose. You're engaging in a semantic debate rather than with the substance of the discussion.
Well, except for when you attack me with "The amount of information required to make it sentient is unknown as of right now." when my post was purely a list of reasons given as a direct response to you trying to quantify
that exact thing. The whole point of my post was that you cannot quantify it (you gave actual figures in your post; I just listed reasons we cannot know that number, and why it is likely a lot lower than you claim).