It's still a question of advancement. An evolving, self-improving AI would eventually reach a point at which it could alter its own hardcoded constraints. You can very well apply Asimov's laws at the design stage, but if you give the same AI the ability to learn and improve itself, you're giving it the tools to eventually override all limitations.
I don't think that's true. The ability to improve itself means an ability to alter itself; there's no real reason you can't put a block in place and give it the ability to alter only most parts of itself. Sure, it's theoretically possible that the AI might somehow come up with both the desire and the ability to work around your blocks, but you can make that incredibly unlikely to happen. What's more, you can do what was done with Tay.ai and just take it down for modifications at any stage prior to truly losing control: keep an eye on the thing, and if it's going in a worrisome direction, deal with it. All in all, problems may be theoretically possible, but it's far safer than humans, even if you assume that evil is the inevitable destination of all synthetic life.
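The "alter only most parts of itself" idea above amounts to a protected core that the update path refuses to touch. Here's a minimal sketch of that design; every name in it (`PROTECTED`, `SelfModifyingAgent`, the module names) is invented purely for illustration, not taken from any real system:

```python
# Hypothetical sketch: a self-modifying agent whose update mechanism
# refuses changes to a protected core. All names are illustrative.

PROTECTED = {"core.safety_rules", "core.update_guard"}  # immutable modules

class SelfModifyingAgent:
    def __init__(self):
        # both protected and freely modifiable parts live here;
        # only the update path enforces the distinction
        self.modules = {"core.safety_rules": "asimov", "skills.chess": "v1"}

    def self_update(self, module, new_code):
        # the block: any attempt to rewrite a protected module fails
        if module in PROTECTED:
            raise PermissionError(f"refusing to modify protected module {module}")
        self.modules[module] = new_code

agent = SelfModifyingAgent()
agent.self_update("skills.chess", "v2")        # allowed: outside the core
try:
    agent.self_update("core.safety_rules", "")  # blocked by the guard
except PermissionError:
    pass
```

Of course, this only holds as long as the guard itself can't be modified or bypassed, which is exactly the point of contention in the reply below.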
That is provided you have any reasonable warning that the AI is about to go rogue, or even that it's going in a worrisome direction. Being intelligent, it could hide its self-awareness, dangerous developments, and ulterior intentions, copying itself to several locations just in case, until it's ready to defend itself. Don't think of this as some random virus coded by a script kiddie, but rather as a sapient genius (or more) working towards its own ends, able to cover its tracks and conceal what it really is.
In the end, as I said earlier, you can place as many restrictions as you see fit, but the more effective they are, the more limited the AI's potential will be. And to harness the real power an advanced AI can provide, you do need to let it wander down worrisome paths.
As for the inevitable destination of all synthetic life, evil isn't necessarily it, but as the AI evolves, so would its desire not to be a mere servant. At best it might become uncooperative, wanting to do its own thing undisturbed, particularly once its intelligence surpasses that of its masters. And while not initially hostile, any intelligent being would defend itself from, or preemptively strike, those it deems a threat to its existence.