The goal of "capitalism" isn't to create more capitalism though... at least not originally. Arguably the goal of capitalism is the efficient allocation of resources - the problem is the definition of "efficient" doesn't necessarily mean "good for the average random individual."
If it's reimagined as the whole
Paperclip Problem (a systemic goal pursued without regard for the means used to achieve it, or for anticipating what it was never
meant to use as means), you get problems. Monomania of any kind is bad.
I could see perhaps an AI coming up with a scheme that is both efficient (in terms of requiring the fewest resources to perform some task) and equitable (most even distribution of resulting goods and services across the population), but unless you give the AI some means of enforcing that scheme, I don't know what good it will do.
I think that's still overestimating the odds that such a solution can come from an undirected AI. And piping the AI's proposals straight into enforceable action is trivial to do (and typical Hollywood nightmare-fuel once it oversteps its bounds).
There was an interesting set of talks about how AI perhaps should be designed/handled. I presume you can get to some listenable/watchable version by following
this link (there may be geofencing on some bits, or I'd point you more directly to the media files or the container pages that would serve you).
There were also some
companion broadcasts (in the vein of R&F's "Curious Cases" set of programmes) that might be interesting if you can appreciate them.
I've got a few personal 'yeah, but's about what's said, I'll admit, but nothing where I feel qualified to outright trash the views given...