The problem with that sort of behavior is much the same as the flaw inherent in unregulated capitalism: if everyone behaves like that, then all human interaction falls victim to the Prisoner's Dilemma (in a sense), because everyone is perfectly aware that every other person will screw them over whenever it's beneficial to do so. There's no motive for anyone to ever interact positively with others. There could be, if all parties could objectively know that a certain course of action is mutually beneficial and that a selfish course of action doesn't yield greater benefits for the defector, but that's functionally impossible to arrange, especially given that different people prioritize different things.
Except that real life plays out much more like the Iterated Prisoner's Dilemma than a single run of it. After all, people don't just disappear after you interact with them once. When the simulation is iterated, the algorithms that do best in the long run are not the "always selfish" ones; the more altruistic algorithms do better on average, with the overall winner being "tit-for-tat": an algorithm that cooperates by default and only retaliates when the other player defects against it. (An even better variant turned out to be "tit-for-tat" with a small chance of forgiving instead of always retaliating in kind, since the occasional forgiveness breaks the eye-for-an-eye chains that straight tit-for-tat sometimes gets locked into.)
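To make this concrete, here is a minimal sketch of the iterated game and the strategies described above. The payoff values, the 200-round match length, and the 10% forgiveness chance are my own illustrative assumptions, not figures from the original tournaments:

```python
# Minimal sketch of the Iterated Prisoner's Dilemma with three strategies.
import random

# Standard payoff ordering: temptation > reward > punishment > sucker.
PAYOFF = {
    ("C", "C"): (3, 3),  # reward for mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # punishment for mutual defection
}

def always_defect(_my_history, _their_history):
    return "D"

def tit_for_tat(_my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def generous_tit_for_tat(_my_history, their_history, forgive=0.1):
    # Like tit-for-tat, but occasionally forgives a defection,
    # which breaks the retaliation chains mentioned above.
    if their_history and their_history[-1] == "D":
        return "C" if random.random() < forgive else "D"
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    """Run one iterated match and return each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print("TFT vs TFT:          ", play(tit_for_tat, tit_for_tat))
    print("TFT vs defector:     ", play(tit_for_tat, always_defect))
    print("Defector vs defector:", play(always_defect, always_defect))
    print("Generous TFT vs TFT: ", play(generous_tit_for_tat, tit_for_tat))
```

Running the matches shows the pattern described: two tit-for-tat players collect the mutual-cooperation payoff every round, while tit-for-tat loses only slightly to an unconditional defector and never does much worse than mutual defection.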
This is because, as in the prisoner's dilemma, the benefits of both of us remaining silent (i.e., helping one another) are greater in the long run than the benefits if we both betray one another. This holds true in real life as well: feeding or paying you to work with me costs me something in the short term, but in the long term it benefits me more than cheating you would, because cheating makes you more likely to cheat me later. Extrapolated outwards, this makes the altruistic path the more favorable one (at least in pretty much every scenario I've considered so far under this system), except in cases where the potential damage from a loss is so extreme that it outweighs any long-term benefit, such as my death ending the "game" for me and freezing my total accumulated value while everything else keeps advancing.
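As a quick back-of-the-envelope check of that claim, using the same illustrative payoff values as the sketch above (3 per round for mutual cooperation, a one-time 5 for a successful cheat, then 1 per round of mutual distrust), with the number of future interactions also assumed:

```python
# Rough comparison: steady cooperation vs. one successful cheat followed by distrust.
ROUNDS = 50  # assumed number of future interactions

cooperate_throughout = 3 * ROUNDS                # steady mutual benefit
cheat_once_then_distrust = 5 + 1 * (ROUNDS - 1)  # one big win, then mutual defection

print(cooperate_throughout)      # 150
print(cheat_once_then_distrust)  # 54
```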
Followup note: Studies of the "tit-for-tat" strategy have shown that in many cases being only slightly more cooperative than the other competitors provides basically no benefit, and can even be harmful, while being substantially more cooperative provides real benefits in the long run, even from a purely individual point of view. This is believed to be one of the main reasons why, despite its long-run gains for the individual, we see so few examples of the strategy in nature and elsewhere in real life: the constant push toward the local maximum (as opposed to the global one) means that in the vast majority of cases evolution gets stuck in the "valley" that must be crossed to move from the local maximum of not being altruistic to the global maximum of the "tit-for-tat" strategy, even on a purely selfish basis. (Despite this, it is believed to occur naturally in some places, such as among guppies and vampire bats.)
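One rough way to picture that "valley" is to compare average scores in a mixed population as the share of tit-for-tat players grows. The match length and payoff values below are the same illustrative assumptions as before, and the well-mixed-population model is itself a simplification:

```python
# Frequency-dependent sketch: average score of each type as tit-for-tat becomes more common.
ROUNDS = 10  # deliberately short matches, so the crossover is easy to see

TFT_VS_TFT = 3 * ROUNDS                   # 30: both cooperate every round
TFT_VS_DEFECTOR = 0 + 1 * (ROUNDS - 1)    # 9: suckered once, then mutual defection
DEFECTOR_VS_TFT = 5 + 1 * (ROUNDS - 1)    # 14: one exploit, then mutual defection
DEFECTOR_VS_DEFECTOR = 1 * ROUNDS         # 10: mutual defection throughout

for p in (0.01, 0.05, 0.10, 0.25, 0.50):
    # Expected score for each type when a fraction p of the population plays tit-for-tat.
    tft_avg = p * TFT_VS_TFT + (1 - p) * TFT_VS_DEFECTOR
    defector_avg = p * DEFECTOR_VS_TFT + (1 - p) * DEFECTOR_VS_DEFECTOR
    print(f"TFT share {p:.0%}: TFT averages {tft_avg:.2f}, defectors average {defector_avg:.2f}")
```

With these numbers the crossover falls somewhere between a 5% and 10% tit-for-tat share: below it the defectors average more and selection pushes the cooperators back out, which is exactly the valley the strategy has to cross.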