Let's say you know your opponent's preferences for the game. They play rock r of the time, paper p, and scissors s, with r + p + s = 1.
Let's assume you also have a static preference system (and that theirs is independent of yours); let your preferences be R, P and S, with R + P + S = 1.
Your expected return for this system is: E = R(r*0 + p*(-1) + s*3) + P(r*1 + p*0 + s*(-2)) + S(r*(-3) + p*2 + s*0)
⇒ E = R(3s - p) + P(r - 2s) + S(2p - 3r)
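As a sanity check (my own sketch, not part of the original argument), the same expected value can be computed directly from the payoff matrix implied by the formula above:

```python
# Payoff matrix implied by the formula above: rows are your throw, columns
# the opponent's, both in the order rock, paper, scissors.
PAYOFF = [
    [ 0, -1,  3],   # rock: ties rock, loses 1 to paper, wins 3 vs scissors
    [ 1,  0, -2],   # paper: wins 1 vs rock, ties paper, loses 2 to scissors
    [-3,  2,  0],   # scissors: loses 3 to rock, wins 2 vs paper, ties scissors
]

def expected_value(yours, theirs):
    """E = sum over all nine outcomes of P(outcome) times its payoff."""
    return sum(yours[i] * theirs[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

# Example: always playing rock against a uniformly random opponent
# earns (0 - 1 + 3)/3, i.e. roughly 0.667 per round.
print(expected_value([1, 0, 0], [1/3, 1/3, 1/3]))
```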
Because E is linear in R, P and S, the way to maximise it is to put all your eggs in one basket: pick whichever option has the highest partial derivative, where
∂E/∂R = 3s - p
∂E/∂P = r - 2s
∂E/∂S = 2p - 3r
Importantly, it is impossible for all three of these to be negative: the identity 2(3s - p) + 3(r - 2s) + (2p - 3r) = 0 holds for every r, p and s, so at least one partial derivative is always non-negative. In other words, no static set of preferences can expect to beat you, and any set for which some partial is strictly positive can itself be beaten. Indeed, if we take r, p and s to describe only the next round, this holds true even for dynamic choices.
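This impossibility can be spot-checked numerically. A small sketch (my addition) that samples random opponent mixes and confirms at least one partial derivative is always non-negative:

```python
import random

def partials(r, p, s):
    """(dE/dR, dE/dP, dE/dS) when the opponent plays rock/paper/scissors
    with probabilities r, p, s."""
    return (3*s - p, r - 2*s, 2*p - 3*r)

# The weighted sum 2*(3s - p) + 3*(r - 2s) + 1*(2p - 3r) is identically 0,
# so the three partials can never be simultaneously negative.  Check it on
# random points of the probability simplex:
random.seed(0)
for _ in range(10_000):
    a, b = sorted((random.random(), random.random()))
    r, p, s = a, b - a, 1 - b              # random (r, p, s) with r+p+s = 1
    d = partials(r, p, s)
    assert abs(2*d[0] + 3*d[1] + d[2]) < 1e-9
    assert max(d) > -1e-9                  # at least one is (essentially) >= 0
print("no all-negative counterexample found")
```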
As it happens, the best combination is the one for which the same three equations, with the upper- and lower-case letters swapped, are all equal to 0: 3S - P = 0, R - 2S = 0 and 2P - 3R = 0. Solving these gives the optimal choice (assuming you don't know your opponent's decision-making):
R = 1/3, P = 1/2, S = 1/6.
In other words, you play paper half the time, rock a third of the time and scissors only a sixth: paper three times as often as scissors, and rock twice as often. Counterintuitively, the big payoff for rock makes scissors dangerous to throw, which in turn makes paper, not rock, the favoured move. Basically, this does not overcome the problems of the normal version.
Note that, with this setup, your opponent's choice of weightings is entirely irrelevant to the expected value, which still comes out to be 0. That is to say, the best outcome you can hope for, not knowing your opponent's mind, is breaking even.
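To back up that last claim, here is a quick numerical check (again my own sketch). Solving 3S - P = 0 and R - 2S = 0 together with R + P + S = 1 gives the mixture (1/3, 1/2, 1/6), and since E is linear in the opponent's weights, breaking even against each pure strategy implies breaking even against every mixture of them:

```python
# Payoff matrix: rows are your throw, columns the opponent's,
# in the order rock, paper, scissors.
PAYOFF = [[0, -1, 3], [1, 0, -2], [-3, 2, 0]]

# Solving 3S - P = 0 and R - 2S = 0 with R + P + S = 1:
EQUILIBRIUM = (1/3, 1/2, 1/6)   # (R, P, S)

def expected_value(yours, theirs):
    return sum(yours[i] * theirs[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

# E is linear in the opponent's weights, so break-even against each pure
# strategy means break-even against every opponent mixture.
for pure in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    assert abs(expected_value(EQUILIBRIUM, pure)) < 1e-12
print("expected value is 0 against every opponent")
```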