I've never found dogmatism to be the brick wall. I've found that if you're discussing with someone who isn't completely insane, and you continually press the reminder that no one outside their religion will ever recognize it as a valid basis for determining a lifestyle, AND you do this without causing an offended reaction that triggers their defensive walls, you will eventually whittle it down to a simple emotional recoil.
I agree that most dogma is rooted in emotional training or recoil or whatever you want to call it.
I can't say I see how that helps the situation, however. A brick wall is a brick wall: whether you draw the semantic cutoff for what counts as a "brick wall" at the moment dogma is uncovered, or half a block later in emotion town, the end result is the same.
I simply give up when I uncover the dogma, because even if there is something deeper, it doesn't matter to me if that deeper thing is just as entrenched and unassailable as the dogma is. It's like the difference between turning around and taking a different road when you see billowing smoke in front of your car, versus driving further and then turning around and still taking a different road only when you see fire. I'm simply saving myself that little bit of extra effort, once the outcome is already inevitable.
The design of the algorithm could also be accused of bias.
Can you suggest any reasonable example of how an algorithm could be programmed ahead of time to bias toward one consistent partisan position across hundreds of different topics that the programmer doesn't even have a list of before the code is finished?
Keep in mind, by the way, that since you would probably want to make the code open source as a means of guaranteeing neutrality to users, such an example also has to be something that could somehow be not only biased, but undetectably biased even to experts reading said code.
It would take a lot of work to accumulate samples, and ensure that those samples were equally gathered from across the spectrum of opinions.
This is at worst an equally difficult barrier for an algorithm as for a human writer. And much more likely LESS of a barrier, since:
1) Computers can parse 15,000 documents in seconds, where a writer would take half a lifetime, meaning the algorithm has greater flexibility: it can use brute-force methods to reduce variance rather than delicately chosen, perfectly balanced examples from careful research. (It can ALSO use human-provided delicate, careful research if desired. Hence "flexibility".)
2) Computers also have greater flexibility in research than a human writer: a computer can use all the same research a writer would (i.e. a human could research examples and feed them in, just as they would to prepare for writing an article), AND/OR, on top of that, a computer can crawl the web automatically and stay constantly vigilant, versus an article that quickly becomes dated until somebody writes another one.
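The brute-force balancing in point 1 is not hypothetical heavy lifting; it's a few lines of code. Here's a minimal sketch (the function name, the bucket labels, and the idea that the corpus comes pre-labeled by source leaning are all my assumptions, not anything from the argument above) showing how trivially a machine can enforce equal representation across a large labeled corpus:

```python
import random


def balanced_sample(docs, label_of, per_label, seed=0):
    """Draw an equal number of documents from each opinion bucket.

    docs      -- list of documents (any objects)
    label_of  -- function mapping a document to its bucket label,
                 e.g. "left" / "center" / "right" (labels are
                 assumed to exist already; labeling is its own problem)
    per_label -- how many documents to keep from each bucket
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    buckets = {}
    for d in docs:
        buckets.setdefault(label_of(d), []).append(d)
    sample = []
    # Sort labels so iteration order (and thus output) is deterministic.
    for label, group in sorted(buckets.items()):
        if len(group) < per_label:
            raise ValueError(f"bucket {label!r} has only {len(group)} docs")
        sample.extend(rng.sample(group, per_label))
    return sample
```

The point is the asymmetry of effort: once documents are labeled, equalizing representation across the spectrum is mechanical for a computer, while a human writer has to curate each example by hand.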
And its output wouldn't be as elegant and relatable as human writing.
Agreed. But we already have elegance and relatability in national debates; go pick up an op-ed or a bestseller.
The niche that isn't already filled is cold, hard, unbiased reference resources. That's where the unsatisfied demand is, not in more elegance.