Again, but with quotes and clarity.
If non-organic intelligence proves both vastly superior to organic intelligence and unable to co-exist with it, it might make sense for a society to replace itself (or ascend itself, depending on your point of view) with a successor society it creates out of non-organic intelligence, specifically to solve the problem of an outside non-organic intelligence destroying their society utterly.
You ask why it would be a threat: even if you build your own to protect against someone else's, you're by definition building it "blind". You can't know how a self-improving AI should develop itself at the levels it will be reaching. When something's moving individual quarks around the way we move around gears, for example, a builder species would have no clue what it "should" do. It's all the AI's decision, and its methodology is not guaranteed to be perfect.
Simple example: there's a million ways to build a house on an open plain. An ant can't grasp this; a human can, but by no means fully understands it. It's no different with an AI. It steps onto an entire new, wide-open plain of the universe and "decides" what house to build, with information that is quite possibly as incomplete and imperfect, in relative terms, as a human's would be doing the same.
If another AI is built that operates differently, it's possible it could far surpass your own simply by happening to choose a different architectural style on a whim. By complete coincidence, the locals like it more, and it keeps heat better because the green rocks it liked are good insulators... you see where I'm going with this?
Their success would be no less determined by chance than our own is.
(If) there is some value in alien intelligence, you can simply colonize the galaxy and then wait a couple of million or billion years. All the daughter societies created by your colonization will probably then be as alien to you as any aliens from another planet.
You want the aliens to be as alien as possible, so the intelligence is as varied as possible. You'd recognize Entropy is a hard problem; so, for the best chance of finding the right species to crack it, you'd be willing to take the risk.
In this specific version, non-organic intelligence is apparently enough to defeat an already-secured organic intelligence society, but at the same time they are able to stop that from happening for billions of years without resorting to wiping out any possible seeds for it?
It doesn't have to be literally billions of years; just long enough that you start taking the long-term problems seriously. There's always going to be some fucking sentient species that is actually sane, weird, and lucky enough to survive at a high tech level over a long period. That seems a bit contrived? Well, it just seems that without ridiculous edge cases the default answer is "stomped by AI." And
Space is big
I admit, it's a leap. But if it's necessary to disrupt your species to keep it from developing AI (which it does, indeed, seem to be), that raises the question of what that species does next.
And it would result in a Fermi Paradox-like situation. Though, I admit, this is a secondary theory.