Not really. I'm accusing your definition of vacuousness. At that point you're saying "Everyone does the things they want to because they want to". If your definition of "pleasure" is so broad as to let you make that move where you say "Anything you suggest as a motivation boils down to pleasure", then you're making a non-argument, and I don't see how it's either convincing or useful in constructing an AI. At least my Greatest Good offers some qualities by which it can be distinguished from literally anything else at all.
What I think pleasure is, is a sense of enjoyment; it might come from pride in an accomplishment, sensory feedback from a meal, or a lover reciprocating your feelings. If you want me to dig deeper, it has its roots as a reward mechanism. Deeper still into my own beliefs: making the reward your explicit goal is foolish, and if you make it your ideal you turn yourself into an addict. But that is your business, of course; it's just not something I wish to do. In any case, it's a good deal more restricted than "the whole of human motivation", and that makes it a much more useful concept, because it means I can have sensible conversations about it.
EDIT: I guess to be clear, what I'm saying is that defining pleasure this way is abusing semantics to dispose of the argument entirely. It's a major foundation on which everything else rests, and if you just abstract "goodness" away, you wind up saying nothing and taking quite a lot of words to do so. It's as though you were to write instructions on cracking RSA, and at some point you call a method for calculating the decryption key that is "left as an exercise to the reader".
My main point here is that I do not consider pleasure to be the highest good, unless (as you suggest we treat it) it is essentially defined as the greatest good. So a pleasure-based morality that we want to implement in some hypothetical intelligence will either be something I'm in sharp disagreement with, or else so nebulous as to be no morality at all.
Actually, that's correct. It is indeed a definition that encompasses every human motivation, and is thus probably quite vacuous. The statement "pleasure is the goal of everyone's life" is meaningless, because pleasure, in this framing, is just the sensation you obtain whenever you perceive a positive accomplishment. You could replace it with "happiness" or "satisfaction" and get the exact same sentence.
Let's rephrase further: "The goal of everyone's life is to get what they want." That's a tautology, because your goal is what you want. I suppose that sentence is indeed unsalvageable, unfortunately. Well then. Let's take a step back.
The core principle, from which the previous shitty sentence is derived, is that people do things because they get something out of it. The counterargument to that is altruism, which doesn't really get you anything aside from maybe gratitude, but is still pretty great to do. So you introduce the concept of pleasure and say that being altruistic pleases you despite resulting in a net material loss, which recontextualizes selflessness as part of a broader selfish motivation (the same move terrible people make when they insist everything you do is technically selfish, in order to justify their own selfish actions). Then you extend the concept of pleasure into non-material gain, and notice that even material gain is only valuable because of your subjective perception of it - see the mice that would starve if it meant they could keep stimulating their pleasure centers. And there you have a handy, unified way of characterizing all of human subjective fulfillment: non-material gain (or pleasure, but pleasure sounds dirtier, if catchier).
From here you can reason that non-material gain, if you can measure it, quantify it, and predict it with adequate knowledge of the human mind, could be treated as an equation, applied to a society with interactions borne in mind, and potentially solved for maximum non-material gain. It's not strictly an AI thing; it's just me gushing about the potential benefits of a mathematical reduction of human thought (a necessary prerequisite for artificial intelligence) to the scientific (or, well, pseudoscientific) fields concerned with the mind.
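Just to make that "equation" concrete, here's a toy sketch of what "solving for maximum non-material gain" amounts to: pick the policy that maximizes the sum of everyone's predicted gain. Everything in it (the people, the policies, the gain function, the numbers) is invented by me for illustration; nothing here is a real model of human minds.

```python
# Toy sketch only: brute-force "solve for maximum non-material gain".
# The gain function is a stand-in for the (unknown) mapping from a
# policy to one person's subjective, non-material payoff.

def non_material_gain(person, policy):
    # Hypothetical: look up how much this person subjectively gets out of the policy.
    return person["preferences"].get(policy, 0.0)

def best_policy(people, policies):
    # Pick the policy with the highest total predicted gain across society.
    return max(policies, key=lambda p: sum(non_material_gain(x, p) for x in people))

people = [
    {"name": "A", "preferences": {"parks": 3.0, "roads": 1.0}},
    {"name": "B", "preferences": {"parks": 0.5, "roads": 2.5}},
    {"name": "C", "preferences": {"parks": 2.0, "roads": 0.0}},
]

print(best_policy(people, ["parks", "roads"]))  # -> "parks" (total 5.5 vs 3.5)
```

Note that even this toy version already hints at problem #2 below: "parks" wins on the total, but person B mostly loses out.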
#1: In all likelihood you can't perfectly model every human being, since you don't have all the information, or even most of it. Though if you did, you'd probably be able to largely predict their behavior anyway, assuming human beings don't make choices at random.
#2: Maximum non-material gain, if applied broadly, may result in preferential treatment for some and suffering for others (I think I saw an extreme of this in an SMBC comic - that's how I know I'm in trouble).
#3: Maximum overall non-material gain may clash with notions of decency, morality and utilitarianism, which is why you should be careful about directly implementing the ideas you get from solving for it, and should probably use them only to inform your consideration of other policies.
And with all that, I am back where we started: solving for maximum non-material gain, even with complete knowledge of the underlying principles of consciousness, probably wouldn't be all that helpful. In fact, I notice that SirQuiamus was completely right - see problem #1, which he mentioned but whose implications I failed to understand at the time.
Furthermore, I'm not actually advocating making the sensation of non-material gain your goal, because, given the way it is defined and phrased, you literally can't do anything else - except by doing something blatantly self-destructive out of spite, such as slitting your own throat with no other provocation. But then you would have derived a small measure of satisfaction from proving me completely wrong and demonstrating supreme agency, which I could comfortably describe as non-material gain for you. It's a catch-all term for a reason. More amusingly, it may in fact be an unhelpful, impractical abstraction, which is something that sounds familiar to me right now.
A good real-life example of non-material gain coming to light is when a good deed becomes tainted by some extraneous factor. The good deed would have granted you the appropriate amount of non-material gain, but the extraneous factor changed your perception of it enough that you failed to get all of it (or indeed any of it). For you, that factor would be the revelation that your choice to be in a specific situation was an illusion - a ploy based on a prediction of which choice you were likely to make, and the result of shallow yet nevertheless effective manipulation.
Ah, to be proven wrong.