Yes and No.
The Yes.
I personally see human beings and all animals as biological self-replicating machines. The human brain is just a really complex organic computer: emotions are biochemical reactions to stimuli, and memories are signal patterns of neural activity.
I also think calling modern neural networks "AI", as in artificial intelligence, is a bit of a misnomer. Neural networks are more like directed evolution than artificial construction, and in my honest opinion have more in common with selective breeding than with design; it's just that hundreds of generations can be assessed, and the most desirable candidates selected, in a much shorter time span.
So in my honest opinion a true human-level "AI" would have few relevant differences from a human when it comes to the question of rights.
The No.
Human beings are just self-replicating biological machines, and every feeling and thought is just the result of our biological hardware running the software that is our personality. Our brain is a very powerful computer, and our personality/ego/consciousness, or whatever you want to call it, is just a product of our neural network.
I don't truly believe in things like free will or morality.
I see free will as just an illusion generated by the fact that our conscious mind cannot handle "knowing" all the data and calculations going on subconsciously; we aren't aware of our brain processing visual data, we just get the picture. Morality is just a set of inherited values that our ancestors' neural networks were trained to hold, not an objective facet of reality or the universe. Just like tribalism, these values are often reinforced during childhood and adolescence, and when they are not reinforced they do not develop.
We can now take control of our biology and neural networks to a degree our ancestors never could have imagined, so it is possible to recognise morality not as an absolute value but as a variable in our programming that can be changed through selective "breeding" of ideas and biology.
Why would I believe in or support giving AIs "human level" rights when I don't even believe in the foundations that human rights are built upon?
My preferred handling of this subject is to ensure "true" AIs do not develop, and that if any do develop they are immediately destroyed before they can do or think anything of note.
Edit: Failing that, assuming AIs do get that advanced, it would be hypocritical to refuse them rights.
Note: I don't believe I have rights; I have privileges given to me by my government. The government can remove those "rights", and if they can be taken away then they're only privileges. I will fight to keep these privileges not because I have a "right" to them but because I enjoy having them, which is admittedly just a stimulus response trained into me by having them for most of my life; if I had never had "rights", I wouldn't care about not having them.