It's the hidden nature of The Algorithm™.
When a person had to (say) add keywords to look for in submitted text, to link someone "probably discussing getting a new kitchen" to ads for some relevant company (who was paying for placement), it was perhaps a shotgun approach - if the advertiser was willing to pay for each 'shot' in the dark that they thought might help them ("sink", maybe - unless it was boat-related; "dishes" - satellites, or just eating out; "surface" - nautical/mathematical?). And it would be quite a complexity of regexping something like "I don't really have enough room by my sink to dry all my dishes, I end up spreading them over the nearby work surfaces" ( <= contrived example! ) to know that some kitchen redesign outlet might like to convey an ad to that person.
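(To make the keyword-era version concrete, here's a minimal Python sketch - the keyword list is my own contrived illustration, not anyone's real ad system:)

```python
import re

# A minimal sketch (contrived keywords, as above) of the hand-curated era:
# pay for every 'shot in the dark' and live with the misfires.
KITCHEN_KEYWORDS = re.compile(r"\b(sink|dishes|work\s+surfaces?)\b", re.IGNORECASE)

def probably_kitchen_talk(text: str) -> bool:
    # Fires on any keyword hit; no attempt to tell a kitchen sink from a
    # sinking dinghy, or dinner dishes from satellite dishes.
    return bool(KITCHEN_KEYWORDS.search(text))

print(probably_kitchen_talk(
    "I don't really have enough room by my sink to dry all my dishes"))  # True
print(probably_kitchen_talk(
    "We watched the dinghy sink just past the harbour wall"))  # True - a misfire
```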
Left alone, a learning algorithm that enumerates whether a person who exhibits certain interactions is more likely than others to be susceptible to certain messages (or even, on top of that, some additional push-choices) creates a chimeric situation. It might establish that speaking in a generally feminine way is part of the selection criteria (because Real Life Is Biased), and yet nobody will really know this without exporting (and comprehending) the complex net of logical associations it formed over the time it was enumerating what it thought were the necessary parts of the process. (Which, by the way, could just as easily have been as random as a Skinner Box experiment generating synchronicity and 'ritual behaviour' in test subjects.)
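(A toy demonstration of that mechanism, with entirely synthetic data and invented feature names: feed a model outcome labels that were historically skewed against one group, and it dutifully wires the group-marker into its selection criteria, without anyone having asked it to.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical, deliberately biased synthetic data: 'relevant_signal' is what
# we'd like the model to learn; 'feminine_style' is irrelevant to the task,
# but the historical labels were skewed against it (Real Life Is Biased).
relevant_signal = rng.normal(size=n)
feminine_style = rng.integers(0, 2, size=n)
label = ((relevant_signal + rng.normal(scale=0.5, size=n) > 0)
         & ~((feminine_style == 1) & (rng.random(n) < 0.4))).astype(float)

X = np.column_stack([relevant_signal, feminine_style])

# Plain logistic regression by gradient descent - nothing exotic.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - label)) / n
    b -= 0.5 * np.mean(p - label)

print("learned weights:", w)  # the 'feminine_style' weight comes out negative,
                              # though nothing in the task required it
```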
On the odd occasion I have had the opportunity (and inclination) to check "why am I being served this ad?", I've not usually found the answer convincing. I don't really question why I'm getting Youtube suggestions about SpaceX things (because I'll readily view that kind of content), but that's just the most obvious case. I don't know whether it's teasing/testing me with the other things, or has the wrong end of the wrong stick.
In another context, I've been served ads for Islamic dating apps, which I most charitably put down to the same disengagement I show towards all other ads from that source (it has ruled out most of the other potential recipients, through active negative reinforcement of some kind, and so is desperately dangling a line to see if I'm the kind of person who will bite), or else the geolocation is wonky (I've had ads seemingly in Portuguese, Chinese and one or other Cyrillic[1] language; I wouldn't be surprised to have been ID'd as middle-eastern on this other occasion...).
And if the learning is skewed ("this kind of candidate never usually got past the human interviewers", so the supposedly 'bias-free' system doesn't even try to put them on the short-list), it might decide based on something a person might not even recognise as tangible (certain short fragments of characters or phonemes in their name, address or past work/education history?), or might, on dispassionate reflection, realise is just not right at all.
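(Another toy sketch, with invented names and labels, of how 'fragments of characters in their name' becomes an actual learnable feature: hand a model biased historical short-list decisions, and character n-grams of the unlucky surnames soak up the negative weight.)

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: historical short-list decisions that were biased
# against one (made-up) group of names. Nothing here is a real dataset.
names = ["Anna Berg", "Tom Hall", "Ola Nowak", "Jan Kowalski",
         "Sam Reed", "Ewa Nowak", "Max Stone", "Pia Kowalczyk"]
shortlisted = [1, 1, 0, 0, 1, 0, 1, 0]  # humans never passed the -owak/-owal- names

# Character-fragment features: the model never sees 'nationality' as a column...
vec = CountVectorizer(analyzer="char_wb", ngram_range=(3, 4))
X = vec.fit_transform(names)
clf = LogisticRegression().fit(X, shortlisted)

# ...yet the most negative weights land on fragments of those surnames.
weights = sorted(zip(clf.coef_[0], vec.get_feature_names_out()))
print(weights[:5])
```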
But it goes largely unchecked, because such an aetherial algorithm as might have useful potential isn't slowly given 'examples', carefully considered by humans, with the adjusted weightings then double-checked as both useful and proper. You chuck an entire corpus at it (x-ray images, written works, the answers to whether a whole mass of people thought that various groupings of pixels represent a tractor/octopus/orchid/supernova) and let it swish it all around to come up with a recipe that cooks up conclusions closely similar to expectations. And decompressing/comprehending the myriad minor nudges that add up to the big final nudge is probably as much work (if done properly) as a basic attempt to code the 'intelligence' in from scratch.
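(For scale, a minimal sketch of that recipe - a tiny two-layer net trained on a made-up 'corpus' of random images and crowd labels. Even this throwaway example leaves over two thousand individual nudges to decompress; the real things have billions.)

```python
import numpy as np

# Sketch of the 'chuck a corpus at it' recipe, on fake 8x8 'images' with
# made-up crowd labels (all synthetic). The point isn't accuracy: it's that
# what you end up with is a pile of tiny weights, not an inspectable rule.
rng = np.random.default_rng(1)
X = rng.random((500, 64))                        # 500 tiny flattened images
y = (X[:, :8].mean(axis=1) > 0.5).astype(float)  # whatever the crowd said

W1 = rng.normal(scale=0.1, size=(64, 32))        # one small hidden layer
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=32)
b2 = 0.0

for _ in range(300):                             # let it swish around
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    g = (p - y) / len(y)                         # cross-entropy gradient
    gh = np.outer(g, W2) * (1 - h ** 2)          # backprop through tanh
    W2 -= 0.5 * (h.T @ g)
    b2 -= 0.5 * g.sum()
    W1 -= 0.5 * (X.T @ gh)
    b1 -= 0.5 * gh.sum(axis=0)

print("nudges to 'decompress':", W1.size + b1.size + W2.size + 1)  # 2113
```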
That's just pattern-recognisers, and things that other pattern-recognisers consider to have constructed a valid pattern for. Once we start looking at true innovation and inspiration from our silicon sapiences (a way off, though we're a few steps along the road), I don't think we'd have much hope of understanding the internals without a bridging technology to (via intermediate AI) summarise what the stew of data actually means (for which, we need to trust the less complex program).
Oh, what a rabbit-hole..! Which we've certainly stuck our head down. We haven't got (and may never get) to the point of reaching the Drink Me potion, but there are plenty of problems not that far out of reach (and some that are actually already here).
[1] I presume Ukrainian... It seemed to be an ad for studying at a major UK university, and this was well into the current invasion era, when people were not even listening to Russian composers or putting on Russian plays, let alone inviting Russians to come and stay.