Now that's a statement that does not logically follow, and which I've never encountered (so you'll have to explain your point, because I don't get it). Encountering multiple non-communicative actors in non-iterated dilemmas doesn't change the math: if it's one turn and you do not defect, you are accepting a strictly suboptimal outcome, because you always do better by defecting than you would have by cooperating, regardless of what your partner decides to do. It's the maximin option. A one-turn prisoner's dilemma should always result in defection, period. It's only when there is the threat of being held to account for past actions that the prisoner's dilemma ever encourages cooperation, i.e. through the tit-for-tat strategy, because only in those scenarios does the future pose any danger.
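To make the arithmetic concrete, here is a minimal Python sketch using the conventional 5/3/1/0 payoff numbers (the specific values are my illustration; only their ordering matters):

```python
# One-shot prisoner's dilemma with conventional illustrative payoffs;
# PAYOFF[(my_move, their_move)] is my score, "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff: I cooperate, they defect
    ("D", "C"): 5,  # temptation: I defect, they cooperate
    ("D", "D"): 1,  # punishment for mutual defection
}

# Whichever move the other player makes, check which of mine scores higher.
for their_move in ("C", "D"):
    best = max(("C", "D"), key=lambda mine: PAYOFF[(mine, their_move)])
    print(f"If they play {their_move}, my best reply is {best}")
# Prints D both times: defection strictly dominates on a single turn.
```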
No dilemma exists outside of time, and the prisoner's dilemma is further complicated by the presence of another intelligent actor. That other person is having similar thoughts to you, projecting the possible iterations of this event, and so it becomes functionally iterative even though it is only happening once. Unless you have reason to think the second person is insane, the only logical move is cooperation because it is the only move that doesn't lock you both into a "wine in front of me" death spiral.
Now this is sort of strange: you are arguing that something cannot happen only once? While I have seen this argument before, it is an argument and not a logical necessity. Nevertheless, I disagree, because we can imagine situations which play out only once.
For example, imagine you have stolen the Hope Diamond and have found a buyer, the infamous Mr. Big. Now Mr. Big has made an offer, and you like it, but you don't trust Mr. Big very much (given his record of murdering people, taking their things, and keeping his money), so you propose a precaution: he will hide his money in a field in North Dakota, you will hide the diamond in South Dakota, and then you will call each other, exchange the locations, and collect. After hanging up, it occurs to you that he has no way to confirm any of this. You could just keep the diamond (and sell it to someone else later) and take the money. Then it occurs to you: he can do the same thing! Now, mind, you want the deal to go ahead: he really wants that diamond, and he's got the best offer on the market, so it would be strictly worse for both of you to betray each other. And yet you'd rather have both, and he would certainly rather not pay. Regardless, this will be the last time you two ever deal with each other: if you get the money (or the deal succeeds) you will use the proceeds to flee the country (either from the Feds or from Mr. Big), while if it fails neither of you will ever trust the other again, for obvious reasons. The thing is, no matter what Mr. Big does, you always do better by not handing over the diamond. Ever. Period. If you wanted a continuing relationship that would be different, but since you will never deal with him again, there is no continuation.
If you want a more concrete example, simply imagine any situation where the punishment for being betrayed is the death of one party. So while your argument is interesting, it is not strictly true in all circumstances; it's not so much a logical axiom as, well, an argument about human behavior, not about perfectly rational decisions between perfectly rational beings.
You are right that defection leads to a spiral of defection, but you are wrong that this spiral is not the optimal outcome. If a game lasts 100+N turns, two people who cooperate will have higher scores than two people who defect. But if N is a number that is already known, then there is no reason to cooperate on the last turn. Ever. And then there is no reason to cooperate on the second-to-last turn, and so on, by induction, all the way back to the first.
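Both halves of that are easy to check. A sketch, reusing the same illustrative payoffs as above (the 100-round horizon and the strategy names are just my example):

```python
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds):
    """Score two strategies over a known, fixed number of rounds."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for turn in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def always_defect(their_history):
    # What backward induction prescribes when the horizon is known:
    # defect on the last turn, hence the one before it, hence every turn.
    return "D"

print(play(tit_for_tat, tit_for_tat, 100))      # (300, 300): mutual cooperation pays
print(play(always_defect, always_defect, 100))  # (100, 100): mutual defection doesn't
print(play(tit_for_tat, always_defect, 100))    # (99, 104): yet the defector still
                                                # comes out ahead of its partner
```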
That is where it ceases to be about logical axioms and becomes about arguments (which is perhaps the substance of your point?).
And defection is not more optimal; if you both defect it's the worst outcome for both of you.
This is strictly false. The order of preference is You Defect/They Cooperate > You Cooperate/They Cooperate > You Defect/They Defect > You Cooperate/They Defect. More simply put, it's DC>CC>DD>CD. That ordering is the definition of a prisoner's dilemma, and if it does not hold, then it is not a prisoner's dilemma, it is some other 2x2 matrix. If mutual defection really were worse for you than (or even equal to) cooperating while the other side defects, it would not be a prisoner's dilemma. The prisoner's dilemma is a mathematical expression, not just a thought experiment.
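In the standard shorthand those four payoffs are T (temptation, DC), R (reward, CC), P (punishment, DD), and S (sucker, CD), so the definition reduces to a one-line test. A sketch, including the 2R > T + S clause that is conventionally added for iterated play:

```python
def is_prisoners_dilemma(T, R, P, S):
    """True iff T (lone defector) > R (mutual cooperation) > P (mutual
    defection) > S (lone cooperator). The second clause is the usual extra
    condition for iterated play: taking turns exploiting each other must
    not beat steady cooperation."""
    return T > R > P > S and 2 * R > T + S

print(is_prisoners_dilemma(5, 3, 1, 0))  # True: the conventional payoffs
print(is_prisoners_dilemma(5, 3, 0, 0))  # False: mutual defection no longer beats
                                         # being the lone cooperator, so this is
                                         # some other 2x2 matrix
```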
Wikipedia has this to say: "Because defection always results in a better payoff than cooperation, regardless of the other player's choice, it is a dominant strategy. Mutual defection is the only strong Nash equilibrium in the game (i.e. the only outcome from which each player could only do worse by unilaterally changing strategy). The dilemma then is that mutual cooperation yields a better outcome than mutual defection but it is not the rational outcome because from a self-interested perspective, the choice to cooperate, at the individual level, is irrational."
More specifically, the dilemma arises because the Nash equilibrium is not Pareto optimal: it is possible for both sides to improve their scores without harming anyone else's, and yet the equilibrium of the situation must be defection, simply because one cannot know what one's opponent is doing, and in such a circumstance defection is not only the right option but the only safe option as well. I'm amused you thought I was biasing the response toward the irrational, when game theory is often accused of being worthwhile only in terms of strictly imaginary, perfectly rational beings. The fact that humans do not consistently betray their opponents when they believe there are no consequences beyond the game itself is a sign of a systemic bias towards cooperation, not rationality; a helpful bias, to be sure. In an iterated situation this is not the case, but iteration is not a logical necessity.
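Both of those properties can be verified mechanically. A brute-force sketch over the same illustrative matrix, marking which outcomes are Nash equilibria and which are Pareto optimal:

```python
from itertools import product

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
MOVES = ("C", "D")

def payoffs(a, b):
    """(row player's score, column player's score) for the symmetric game."""
    return PAYOFF[(a, b)], PAYOFF[(b, a)]

def is_nash(a, b):
    # Neither player can do better by unilaterally changing strategy.
    mine, theirs = payoffs(a, b)
    return all(payoffs(alt, b)[0] <= mine for alt in MOVES) and \
           all(payoffs(a, alt)[1] <= theirs for alt in MOVES)

def is_pareto_optimal(a, b):
    # No other outcome improves one player's score without harming the other's.
    mine, theirs = payoffs(a, b)
    return not any(
        payoffs(x, y) != (mine, theirs)
        and payoffs(x, y)[0] >= mine and payoffs(x, y)[1] >= theirs
        for x, y in product(MOVES, MOVES)
    )

for a, b in product(MOVES, MOVES):
    print(a, b,
          "Nash" if is_nash(a, b) else "-",
          "Pareto-optimal" if is_pareto_optimal(a, b) else "-")
# Only (D, D) is a Nash equilibrium, and it is the one outcome that is
# not Pareto optimal; that mismatch is the dilemma.
```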
I've written papers on this, so I can cite the original sources if you want. You'd want Robert Axelrod's Evolution of Cooperation, or perhaps William Poundstone's Prisoner's Dilemma for a more in-depth look at game theory in general (despite the name).