The more I learn about Truean the more I think they're seriously technologically ignorant and possibly need help.
As to your main point, you're spot on. AI reflects the values of its human creators, and frankly we already give humans all these powers. The only thing AIs bring to the table is that lots of humans refuse to try to understand them, and instead wind up projecting all kinds of things onto them. Some people become fearful, some think they're perfect and infallible, and some do both in different circumstances.
I don't think Truean needs help in the 'mental help' sense, if that's what you're trying to imply.
She (I think that's how they prefer to be addressed? I'm not sure, though other people are going with it) seems to be coming from a combination of legal perspective and personal experience.
Even if they're wrong or simply misinformed about AI, it's still healthy to have skeptics saying 'Hey, we need to be real careful and alert here!' and calling for caution.
On military and police AI, the problem is a little less rogue AI (though that still elicits scary Terminator-like images) and a lot more about how to handle the ethics: determining friend from foe, civilian vs. threat, among other things.
The disjointed stream-of-consciousness writing style, the way she returns to the idea that she's persecuted (e.g., "starting a flame war", "do people have the decency"), the repeated statements or implications that she understands things most people don't, the appeals to authority and namedropping without supporting an actual argument, the tossing around of mathematical terms to justify philosophical ideas, quite a few of her turns of phrase, the scaremongering and doom-and-gloom predictions, the constant asides to rant about society today... they all combine to leave the impression of someone who's not quite grounded in the same reality as the rest of us.
I'm strongly reminded of various survivalist nuts, conspiracy theorists, and crystal healers I've run into.
Nothing in the posts of hers I saw actually touched on the legal aspects in a coherent way, just several links to Wikipedia. If she were actually talking about current laws, I'd be less skeptical that she has any idea what she's talking about. Since she's linking to Wikipedia pages that aren't actually relevant to existing laws affecting AI and machine learning, and her one example is actually a case where society formed a coherent and constructive response to corporate bad behavior, I remain skeptical.
We should certainly question how any new technology will be deployed and used. But that includes talking about the current state of the technology and the laws surrounding it. There's a really interesting conversation to be had about algorithmic resume analysis, for example, and whether it reinforces or reduces bias in interviewee selection. Or about the various issues criminal justice software has had.
https://arstechnica.com/science/2018/01/random-people-as-good-as-judicial-software-at-predicting-future-arrests/ is a pretty good article about recidivism prediction software. One takeaway goes back to what I saw earlier, about people trusting software too much.
"Dressel and Farid make a big deal of the claim that COMPAS supposedly considers 137 different factors when making its prediction. A statement by Equivant, the company that makes the software, points out that those 137 are only for evaluating interventions; prediction of reoffending only uses six factors."
The program records 137 factors, but only a few are used to predict re-offending; people are simply assuming that all the data entered is used for everything, and then acting on that assumption. The software, however crappy it is at predictions, is functioning as intended, but people's wrong assumptions cause problems. They're overestimating what it can do. Equally, people often underestimate what data analysis can do given a large enough data set, and how much useful data they generate on a daily basis. But data analysis software isn't Skynet; it's a program doing what its human authors intended it to do.
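To make the record-versus-predict distinction concrete, here's a minimal sketch in Python. The field names, weights, and structure are all invented for illustration; this is not Equivant's code or COMPAS's actual factor list, just the general pattern of a system storing far more data than its prediction model consumes:

```python
# Hypothetical sketch: a system can RECORD many factors while its
# prediction model only USES a handful. All names/weights invented.

RECORDED_FIELDS = [f"factor_{i}" for i in range(137)]  # everything on the intake form

# Only a small subset feeds the reoffending score (six, per the article)
PREDICTIVE_FIELDS = [f"factor_{i}" for i in range(6)]

# Made-up weights, purely for illustration
WEIGHTS = {name: 0.5 for name in PREDICTIVE_FIELDS}

def record_intake(raw_answers):
    """Store every answer, whether or not the model ever looks at it."""
    return {field: raw_answers.get(field, 0.0) for field in RECORDED_FIELDS}

def risk_score(case):
    """The prediction touches only PREDICTIVE_FIELDS; the other 131
    recorded values sit in the case file for interventions/reporting."""
    return sum(WEIGHTS[f] * case[f] for f in PREDICTIVE_FIELDS)

if __name__ == "__main__":
    case = record_intake({f"factor_{i}": 1.0 for i in range(137)})
    print(len(case), "fields recorded")  # 137
    print("score uses", len(PREDICTIVE_FIELDS), "of them:", risk_score(case))
```

Someone looking only at the intake form would naturally assume all 137 fields drive the score, which is exactly the wrong assumption the article describes.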