Not sure if this is more appropriate for the "random thoughts" thread or here... but I wonder, has anyone tried making an "AI" (yes, yes, I know the current state of the art) that tries to generate "optimal" court rulings? Instead of, oh, I dunno, trying to optimise advertising or something annoying like that?
I wonder how you'd avoid bias in such a thing.
For example, this case would be weighing states' rights, women's rights, the value of life, the burden of care, the propensity of a law to have loopholes or otherwise be misused, etc. etc.
Optimal by whose measure? We're getting into territory like that case where algorithms did recruitment filtering, but because they learnt from prior hiring decisions they (dispassionately) discriminated against qualifications from women-only colleges, or whatever it was.
Even the advertising algorithms are suspect: even when it is proven (by the vendors of the algorithm) that people who viewed ads for A, B or C were more likely to buy (respectively) A, B or C, there's good reason to believe that cause and effect are reversed, i.e. that someone was generally likely to buy A anyway, the algorithm showed them an ad for A because of that, and the ad doesn't really make them more likely to buy A; it just scores highly on "shown A, bought A" measurements.
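To put toy numbers on that (everything here is made up: the propensities, the 0.7 targeting threshold, the whole setup), here's a quick sketch where the ad has zero causal effect, yet because the targeting rule only shows it to people who were already likely to buy, the naive "shown A, bought A" rate still looks far better than the "not shown, bought A" rate:

    import random

    random.seed(0)

    N = 100_000
    shown = shown_and_bought = 0
    unshown = unshown_and_bought = 0

    for _ in range(N):
        # Each person has some pre-existing propensity to buy product A.
        propensity = random.random()
        # Targeting rule: show the ad only to people already likely to buy.
        # The ad itself has ZERO causal effect on buying in this toy model.
        shown_ad = propensity > 0.7
        bought = random.random() < propensity

        if shown_ad:
            shown += 1
            shown_and_bought += bought
        else:
            unshown += 1
            unshown_and_bought += bought

    print("P(bought | shown ad)  =", round(shown_and_bought / shown, 2))      # roughly 0.85
    print("P(bought | not shown) =", round(unshown_and_bought / unshown, 2))  # roughly 0.35

The vendor's "shown A, bought A" metric comes out looking great even though, by construction, the ad did nothing at all.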
In various places there has already been plenty of thought about predicting recidivism, about the societal benefit/harm balance of incarceration vs community/financial/etc. punishments, and about avoiding the institutionalised slide deeper into the criminal classes, especially with young offenders, where it would be nice not to send a mildly troubled youngster into a boot-camp and out the other side into a life of more advanced and pernicious crime. But I don't know if anybody has put 'trial trials' into practice to test the theories and outcomes. Part of the problem is that judges... imperfect as they are... would have to be asked to heavily sway their judgements by some "be lenient/be harsh" suggestion from a computer printout, whose assessment may be very unlike their own instincts yet is supposedly a 'better' judge of the circumstances than the judge.
(Or, since early testing would ideally be double-blind, the suggestion given might actually be the opposite of the algorithm's assessment, potentially sending the merely misguided into chokey and letting the obligate offender go 'free', in order to examine how the computerised suggestions compare against their antithesis, both for the future prospects of the convicted and for how the courtroom deals with counterintuitive (though not necessarily incorrect) advice. The ethical aspects here are troublesome: a legal version of the Trolley Problem, where you know some people are going to be hurt as things stand, but would you then knowingly throw the switch to guarantee others will be hurt instead, however fewer or supposedly less deserving of your sympathy?)
2xNinjas, saying what I said in fewer words. Par for the course!