I frankly trust exactly zero reports of "paid/bused protestors" after the Charlotte protests. The police commissioner lied straight to everybody's faces (shocking) and claimed 80% of those arrested weren't from North Carolina at all, when in reality not only were most of them Charlotte residents, all of them were from the city and the immediately surrounding areas.
The cops are liars. This is a demonstrable fact. Any claims they make as to the identities of the people they've arrested are inherently untrustworthy, and there is nothing legally obligating them to tell the press the truth on these matters. Fuck the police and FTFE.
Ah.
The more I hear stuff like this, the more I think that, unless we employ AIs to manage our media in order to achieve a re-convergence of society, we're pretty much screwed.
As someone in tech:
NONONONONONONOOOO BAD, NOPE, NUH UH, KEEP AI AWAY FROM STUFF LIKE THAT.
Basically, AI is just a regurgitation of its training data, which is usually biased towards the demographics best represented among its creators. Bias in AI systems is a huge systemic issue,
even in things as seemingly innocuous as speech-to-text software. Any skew in the training data will result in poor performance for certain subsets. Take a population-accurate sample of the US, where roughly 17% of people are dark-skinned: an algorithm (e.g. image recognition) trained on that data set will likely perform worse for that subset, since a small gain in accuracy on the 83% improves the overall error metric more than a relatively large loss in accuracy on the 17% hurts it. It gets a good deal worse if the model is trained primarily on (and tested afterwards for accuracy against) typical Silicon Valley white male employees.
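To make that concrete, here's a toy sketch (entirely my own illustration: synthetic data, made-up group sizes, nothing to do with any real product). Fit a single classifier on an 83/17 population where the two groups actually need different decision boundaries, and watch which group eats the error:

```python
# Toy demonstration of majority-skew in a shared model.
# Assumptions (mine, for illustration): one feature, two groups whose
# true label boundaries differ, a single model fit on the pooled data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, threshold):
    # One informative feature; this group's true decision
    # boundary sits at `threshold`.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

x_maj, y_maj = make_group(8300, threshold=0.0)  # 83% of the sample
x_min, y_min = make_group(1700, threshold=1.0)  # 17%, shifted boundary

X = np.vstack([x_maj, x_min])
y = np.concatenate([y_maj, y_min])

clf = LogisticRegression().fit(X, y)

# The single fitted boundary lands near the majority's threshold,
# so the aggregate number looks fine while the minority group suffers.
print("majority accuracy:", clf.score(x_maj, y_maj))
print("minority accuracy:", clf.score(x_min, y_min))
print("overall accuracy: ", clf.score(X, y))
```

The overall accuracy comes out looking respectable, and that's exactly the number that gets reported. Nobody prints the per-group breakdown unless they go looking for it.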
!!!Which, as it turns out, is the case for software *actually used to determine criminal sentencing*!!!

The best description I've heard for AI systems in situations like these is "Bias Laundering." Clearly (sarcasm) there's no racism or similar involved, because it's just math and complex systems! Systems that just so happen to propagate existing racial stereotypes, because that's what they were implicitly trained to do, using data sets taken from a system with heavy biases. You can't question it, because the data sets are often corporate secrets and the models involved are too complex to actually understand what's going on in the machine's head, and no one would think to question it anyway, since they aren't technical enough to understand just how bad AI systems are at doing this sort of thing in a socially responsible way.
Put them in charge of the media, and we will long for the days of simple fake news stories propagated by Facebook's algorithms. Or rather we won't, because everything we read will make perfect sense to us despite being completely inaccurate and built on opaque systemic biases.