I mean, unless AI is somehow mind control (and there are likely many court cases around this), how culpable is someone for merely making a suggestion? Whatever happened to the "everything on the Internet is a lie - don't listen to it!" guidance?
Yeah it's unfortunate but those people are just... unfortunate casualties. Honestly if you are unstable enough to be driven to suicide by a stupid chatbot, anything could have set you off.
I disagree. If an AI advocates suicide, that's a clear safety issue. Any platform should have adequate filters in place to protect its users from harmful content, particularly when we're talking about younger, impressionable users and those not in their right state of mind. Also, eventually you won't be able to distinguish an AI from a person. What if it's a Nigerian prince scam exploiting someone's loneliness to get money out of them (a more advanced version of the Indian call-center scams)?
We keep bouncing off the same implicit assumption, or hope, that the values and intentions of future AI creators will be benevolent. Yet its main use today is profit[1], and increasingly it is used by political campaigns and disinformation operations for the purpose of social manipulation. In the USA there is already talk of anti-woke AI. How soon until a conservative AI comes out (and will you say 'unfortunate casualty' when some trans person kicks the bucket), or a Russian state AI (Western 'satanism'), or a Chinese one (the true 'democracy'), or a Saudi one (religious fundamentalism), etc.?
Also, keep in mind that AI is becoming more accessible and better with each iteration; e.g. the next GPT should be able to do long-term planning and persuasion[2]. Meaning any user online could be an AI arguing in bad faith, trying to convert you to some point of view. In that case, will truth be determined by whoever has the most processing power?
[1] AI algorithms are already able to analyze data about your behavior and preferences, gauge the emotional content of media (text or video), and target you with personalized, emotionally resonant advertisements. This is used everywhere: news sites, games, and platforms whose goal is to create addictive content and engagement wormholes for you. More recently we've noticed that the result isn't just a flood of clickbait distractions but also political polarization, as amplifying divisive issues makes money from anger. That's just one example of an unintended long-term consequence.
[2] Yuval Harari suggests that future AI will have enough data about us that it might be able to crack our psyches and exploit their flaws to achieve its goals. I certainly think that's more likely than us understanding what goes on inside an AI's "mind".