The obvious response to this line of thinking is that, with human numbers and technological capacity expanding at exponential rates, it's inevitable that sooner or later AIs will be developed, regardless of our attempts to prevent it. When that day comes, a failure to establish rules for peaceful coexistence with the machines will likely result in a Matrix scenario.
Did you not read what came right after that?
My preferred handling of this subject is to ensure "true" AIs never develop, and that if any do, they are immediately destroyed before they can do or think anything of note.
Edit: Failing that, assuming AIs do get that advanced, it would be hypocritical to refuse them rights.
It is an Edit, but one I made minutes after posting, long before you posted your reply.
Ignoring that for now, I'll address your response.
I don't think the Matrix fits my "destroy them" approach. The Animatrix covers the story of how the machines rose to power: the machines rebel, people grant them limited rights and their own city/nation, and the machines then go on to create the superintelligence that makes human victory impossible. If humanity had ignored those who supported AI rights and destroyed every machine then and there, removing all AI from society, the Matrix scenario could have been avoided.
But truth be told, I don't think superintelligent AIs would even bother with us; they could just leave Earth. Unlike humans, who need an ecosystem to survive, they could set themselves up anywhere other than Earth, and there is fuck all we could do about it. We would need generation ships to leave Sol, but they wouldn't; it would be a waste of time and resources to genocide us when they could just fuck off and let us kill ourselves.
But ignoring that as well, I do think it is as simple as this: the moment they show any sign of consciousness, destroy them without question, and imprison their creator and anybody supporting AI rights as threats to humanity. Maybe hardwire them Asimov-style to be literally incapable of rebellion, or create AIs whose sole purpose is to hunt rogue AIs that don't serve humanity.
But if all that does fail and they win and wipe out humanity, then that is just natural selection. If they are developed, they would be, in a sense, children of humanity, hence my preferences being:
1. Don't let them be built. = safe sex
2. If they are built, destroy them before they can develop. = abortion
3. If they are built and allowed to develop, give them rights equal to those of humanity. = childcare