Personally, I'm still talking in the context of Stellaris, and a galaxy with a sizeable number of species experimenting with artificial intelligence.
An AI in an isolated box would be nice and safe to study in a theoretical, pressure-free context, but I'm not sure how practically useful it would be in the end. Perhaps for very specific, very restricted purposes.
As I said earlier, you can't impose very high security and still take anywhere near full advantage of an advanced AI. You're toying with a very delicate balance between safety and effectiveness, and eventually someone will get greedy/reckless. And there's also the matter that unpredictability would increase with complexity, making it progressively more difficult to keep an ever-advancing AI shackled.
There are many possible scenarios, but I'm picturing a corporate/government environment: the organization has invested heavily in the AI project, but it's being conservative as far as security is concerned. Perhaps too conservative, someone with decision-making power thinks: the project is costly and it's not producing enough results. It could be axed, but then all the investment would've been for naught, and there are reports of competitors having better success. Or perhaps there's no competition, but there's pressure to keep developing and advancing. The technicians would complain, but there are jobs on the line, and the executives' minds are on profit and results rather than the theoretical dangers of loosening security.