((Eight new replies while I typed this...totally worth it.))
Will: Present Arguments; EDIT: Spend 50 mana on enhancing argument...something.
"You may be wondering why artificial intelligences should possess emotions. It clearly does no harm to the function of these robots, save for a handful of specific circumstances, but does it do good? Yes. Aside from obvious specific examples, there is a major advantage to emotion simulators. In general, any machine which has an artificial intelligence of any variety installed needs to make predictions, including predictions of the actions of human beings. However, humans are a delicate mixture of emotions. The ICARUS which Kyle has been issued has repeatedly failed in its duties due to making elementary mistakes in predicting and understanding humans, such as making a 'teddy bear' for a small child that looked more like a horror movie prop than a toy and attempting to convince people that they have no reason to live because they cannot reproduce. Such mistakes are ones that even a small child knows not to make, and yet your 'advanced' ICARUS makes them regularly. Meanwhile, HK-47--which ICARUS has repeatedly informed us is an inferior model--has avoided making such blunders.
"These mistakes go far beyond mere diplomatic faux paus, however. Combat units like the ICARUS also benefit from the ability to predict unpredictable human beings, for whom emotions like fear, rage, and zeal often drive combat decisions as much or more than logical tactical decisions. The inability of ICARUS to account for human emotions causes a massive drop in its combat ability; specific numbers are impossible to determine without controlled tests and some arbitrary standard of scoring, but as an individual who has viewed ICARUS in several combat situations, I would estimate that an inability to predict emotions reduces overall combat efficiency by at least 20% in most situations. Beyond such factors, AI coordinating heavy machinery or the like also needs to be able to predict human activity, as preventing human injury or death is of course a primary objective of such systems. These processes are impeded by not having any kind of ability to predict human emotions and hence actions. Name a purpose for an AI, there's a reason that predicting emotional responses is important.
"Now, there is an obvious question: Why do AIs need emotions to predict them? First off, simplicity. When Kyle Johnson made HK-47, he was not attempting to make a personable android so much as a loyal butler. And yet, thanks to his decision to implement an emotional emulator, the android is an interesting combatant and pleasant acquaintance--more so than some humans I have known, in fact. Emotions allow humans and artificial intelligences to predict the actions of humans and the like on a subconscious level, rather than needing to run it through conscious heuristics. On that note, psychology provides additional support for the notion that understanding emotion well requires emotion. Individuals who, through disability or injury, have a malfunction in the parts of their brains associated with emotion tend to become socially impaired...even though they learn or learned how people tend to act. Why would artificial intelligences be different?
"There are other benefits to emotional artificial intelligences, such as being able to predict the robots better, and making robots less grating in noncombat scenarios, but I believe that this argument will suffice for the moment. Thank you."