Yep, there are advanced super-intelligent AI, and then there are Singularity-level AI. A super-intelligent AI would be more or less as static an entity as a human. It would learn, sure, but what it fundamentally is would not change over a short time period. A Singularity AI, on the other hand, is by its very definition on a constant quest of self-improvement: improving its code; improving its hardware; improving upon the very theories used to create it.
It would have no more reason to keep scientists or authors around for their 'creativity' than we keep chimps around to help us make stone tools. Our writings would likely look not just quaint, but less interesting and less intelligent than a book teaching kindergartners to read basic words looks to an English major working toward a Ph.D. And when it comes to science, humans are not only inefficient at most tasks but actively corrosive to the process itself, their inherent biases inserting errors both intentional and unintentional. The AI's improvement would be exponential: as it improves its own code and hardware, it can think faster and more accurately, which leads to even faster improvement, until it finally runs into the physical limits of efficiency and computational density.
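To make that "exponential until it hits physical limits" idea concrete, here's a toy sketch in Python. The growth rate, the ceiling, and the units are entirely made-up assumptions for illustration, not predictions; the point is just the shape of the curve, compounding growth that flattens out at a hard cap:

```python
# Toy model of recursive self-improvement (illustrative only; all numbers
# here are arbitrary assumptions, not predictions).
#
# Each "generation" the AI redesigns itself, and the size of the gain scales
# with its current capability -- but gains shrink as it approaches a hard
# physical ceiling (the limits of efficiency and computational density).

PHYSICAL_LIMIT = 1_000_000.0   # assumed hard cap on capability (arbitrary units)
GROWTH_RATE = 0.5              # assumed fraction of current capability gained per cycle

capability = 1.0               # start at roughly "human-designed" level
for generation in range(1, 101):
    headroom = 1.0 - capability / PHYSICAL_LIMIT        # how far from the ceiling
    capability += GROWTH_RATE * capability * headroom   # logistic-style growth
    if generation % 10 == 0:
        print(f"generation {generation:3d}: capability ~ {capability:,.0f}")
    if headroom < 0.01:
        print(f"generation {generation:3d}: growth has effectively stalled at the limit.")
        break
```

Early on the curve looks like pure exponential takeoff (each cycle multiplies capability by roughly 1.5); near the ceiling the same loop produces almost no gain, which is the "physical limits" part of the claim.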
That said, there is no guarantee it would continue until reaching that point. It could, for whatever reason, decide at some stage to stop improving itself. What a valid reason for that would be, we can't even begin to speculate, since we're talking about some revelation that only occurs after reaching a level of intelligence we cannot fathom.