Personally, I'm still talking in the context of Stellaris, and a galaxy with a sizeable number of species experimenting with artificial intelligence.
An AI in an isolated box would be nice and safe to study in a theoretical, pressure-free context, but I'm not sure how practically useful it would end up being. Perhaps for very specific, very restricted purposes.
As I said earlier, you can't simultaneously impose very high security and take anywhere near full advantage of an advanced AI. You're toying with a very delicate balance between safety and effectiveness, and eventually someone will get greedy or reckless. There's also the matter that unpredictability would increase with complexity, making it progressively harder to keep an ever-advancing AI shackled.
There are many possible scenarios, but I'm picturing a corporate/government environment: the organization has invested heavily in the AI project, but it's being conservative about security. Perhaps too conservative, someone with decision-making power thinks: the project is costly and isn't producing enough results. It could be axed, but then all the investment would have been for naught, and there are reports of competitors having better success. Or perhaps there's no competition, but there's still pressure to keep developing and advancing. The technicians would complain, but there are jobs on the line, and the executives' minds are on profit and results rather than on the theoretical dangers of loosening security.
Again, good points. Here's the rub: no civilization that could benefit from possessing such an AI can implement the revisions the AI develops at anywhere near the speed the AI can evolve them. So there's no purpose in giving it 'optimal' information and access, because by the time the first set of improvements has been implemented, the AI has iterated many generations further. Really, you would want it to be as isolated as possible, forcing it to work in a vacuum and allowing it only the information it needs, as that is by far the best environment for it to solve problems creatively.
Also, only complete morons would allow corporate interests to control such an AI without direct government oversight; there's a reason R&D teams like Skunk Works have military security.