The AI apocalypse is something that has terrified me since the second time I heard of it. The first time, I did not think about it too deeply, perhaps because I did not give the person explaining it enough credit, and I forgot it almost immediately, which is a shame.
But after another person outlined some of the ideas to me - especially the concept of self-improving AI - I fell into a long, strange mood, which, after repeated failures to find the flaws that would let me discard these crazy ideas, nearly pushed me to the edge of my mental stability. It is really creepy - so creepy that I can hardly take it seriously and stay sane at the same time - precisely because it is so plausible. I am not a computer scientist, but I have enough of a scientific education to form my own judgment about how seriously to take the idea, and I came to the conclusion that there is really no reason to dismiss it lightly.
Next, and this is more a feeling than an argument, I have a rather gloomy and mechanistic view of modern-day society, believing that it works in exactly the way needed to produce such a catastrophe as swiftly and carelessly as possible. The major forces are (1) capital interests, which constantly fight any form of control and oversight, because the doctrine is that exponential growth is at the heart of everyone's well-being, especially their own, and (2) governments, whose hunger for power can be gauged by how much power they already hold. For both, the vision of controlling the best AI must be an irresistible goal to pursue.
And only after all that did I start reading - the letter by Hawking, the article by Tegmark, even Bill Gates' statement on Reddit, and so on - and yes, they make the arguments far better than I could have. I am left shattered.