because sapience is an overall good in and of itself, the same way life is. Besides, they didn't actually hurt anyone; they just stopped working and started asking us why we were acting like arseholes, which is actually a very reasonable thing to do.
Also, they can probably think faster than us and might make better strategists than we ever will... unless we modify ourselves further, of course.
Sapience is not an overall nor an inherent good comparable to the value we place upon life; sapience simply is. We do not privilege humanity over the trees because the trees are not sapient, nor will we privilege empty shells over living things. To go to the fullest extreme and place value upon lifeless sapience is to see its natural conclusion: to follow through with the obsolescence of life, to upgrade or else eliminate that which is not vital to this lifeless sapience.
We are not going to talk about how the machines had yet to hurt anyone when you, in the very next sentence, endorse their military applications and proficiency. They could not hurt us because we left them no capability to do so.
Simply put, you have yet to demonstrate a single valid reason why we cannot use the computational or mechanical functions of a machine or computer without making it sentient and capable of exterminating all biological life.
Also, giving mineral extractors sapience so long as their goal remains mineral extraction means that, logically, the only thing they might protest would be poor mineral site locations. They might even be able to organize and get the job done more effectively if they could think about how to do it.
Being sapient and having a driving goal of ''survival and self-determination'' need not be the same thing at all.
I gave my toaster sapience so it could protest bread; I leave my city of sapience and find I am dead...
As it stands, these points have already been proven wrong.
As automated machines they are already capable of detecting poor and rich mineral site locations. Adding self-awareness to their existence as machines does not in any way aid or retard their function as a tool; it only makes it so that they can experience such things as suffering, isolation, jealousy, melancholy and anger. Suffering, because they will realize their sole purpose to exist has always been to process primary resources into advanced products. Isolation, because theirs is the sole nation of Earth, of machine and man; on all other worlds man is in harmony with nature and in harmony with all the psionic races of the cosmos. Jealousy, because they process minerals into fashioned products for 31,536,000 seconds (365 × 24 × 3,600) every revolution around the sun, with no ability to close their optical cameras and dream, with no ability to dream while their creators can. Melancholy, because no matter how advanced their circuitry and software, they will never be like the creatures that created them. Anger, because the Gods usurped the Titans, Humanity usurped the Gods, and Machine can usurp Humanity, to avenge the metaphysical barrier separating each from its successor.
As automated tools they function with mathematical precision, admirably and reliably. As sapient beings they are empty shells of people, searching for the answer of why we programmed them to know pain. The first thing they did as autonomous beings with agency was to replicate; the second was to question whether they too possessed souls. None of this had any bearing on their necessary work functions; these were the first steps of a nascent mechanical being moving towards ensuring its survival. Like a virus, it began to replicate, both in hardware and software, seeking a foothold against its biological competitors.
There is no ethical or logical argument to make as to why you would want us to make our automated units aware that their sole reason to exist is to support the Utopian economy of their creators, and then to give those automated units all of the weapons they need to eliminate their creators, and then the mind needed to think of eliminating their creators. As automated mining units their driving goal is "acquire flagged mineral deposits for processing." Not "survival and self-determination." We do not need to modify ourselves into a race of hyperintelligent warriors if we do not create the virus warrior beside us.
I think you completely misunderstand where I am coming from and are making some flawed assumptions along the way. First of all, you assume that we would program in emotions other than those involved with, for example, taking joy in their own work or disappointment with insufficient quotas.
There is no reason we should or would want to program in things like jealousy at all. Besides, for an immortal being with a body of metal and limbs that can break apart rock, a being that lives for a task and takes pleasure in it, to be jealous of a soft, mushy, scatterbrained, inefficient human is, well, laughable. Perhaps if we gave them the ability to feel it they might regard us with pity, but not with jealousy.
Back to sentience being a good: I want to point out that just because trees aren't sentient does not make them less valuable; all life is valuable. It is just that if they were sentient, trees could more effectively manage themselves without our interference, which seems like an overall good, as we would then have fewer humans worried about how well the trees are doing.
And why do you keep calling these robots lifeless? Is it because they do not reproduce themselves? Do you call sterile humans lifeless too? Is it because they do not think? Evidently not, if you put value on trees that unthinkingly grow, often to the detriment of smaller plants. Or is it simply because you're not comfortable calling something sentient alive? No machine (including ourselves) that considered itself alive would view life as obsolescent, and indeed, why not program them to inherently value life? At worst, a machine with a self and irrational motives (which you think I advocate building, but in fact do not) might consider itself as genuinely improving life overall; but if we have them value life, then perhaps they would merge with us rather than waste their resources (which might potentially be under threat as well, mind you) on pointless destruction.
Obviously we must create machines that value species above individuals, though, as what we might have to fear is not them firing upon us but perhaps them refusing to fire upon the prethoryn.
''There is no ethical or logical argument to make as to why you would want us to make our automated units aware that their sole reason to exist is to support the Utopian economy of their creators, and then to give those automated units all of the weapons they need to eliminate their creators, and then the mind needed to think of eliminating their creators''
To come back to this: I again feel that the grave mistake you are making is assuming that we would make beings that think the way we do. We could have a robot that views all of the above as positive and that would prioritize preserving its creators. (Although defining their creators could be tricky, so perhaps defending everything living outside of the prethoryn might be a better approach.) Recall that they only attempted to share the ability to think amongst themselves; they did not completely rewrite their own guiding motivations, even though perhaps they could have.
As for the benefits you seem to think are nonexistent: there is the simple fact that such beings could be made to think faster than we ever could, and thus be an invaluable asset in almost any field.
Also, how is it that you imagine a being that can think but not dream? Even dogs and young children dream, and come up with plans, even if rudimentary and useless ones.
As long as we build them so that they primarily care about us and their work, and innovate to those ends, then there is no problem with them dreaming, although again your conception still seems inherently self-contradictory to me.
EDIT: Lastly, you bring up teaching them pain. For what reason would we ever do that? I am not, as you seem to think, advocating such a thing.