Nope. If you know the others are going to buy the widgets, you'd be better off not buying one yourself. That's the whole point, really. If you don't believe me, go look up the frickin' prisoners' dilemma on Wikipedia. Buying the widget corresponds to keeping mum, not buying it corresponds to ratting out the other guy.
No, you don't know the others are going to buy the widgets; you just know that everyone will make the same decision, whatever it is, in which case the logical thing for everyone to do is buy the widget. This is a real thing, I'm not making it up; it's just a higher-level concept than you'll get on Wikipedia.
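To make that concrete, here's a quick Python sketch with completely made-up payoff numbers (only the ordering matters, not the values): if each person decides independently, not buying is better for you no matter what the other guy does, but if you assume everyone's decision is guaranteed to come out the same, all-buy beats all-not-buy.

```python
# Toy prisoners'-dilemma payoffs for the widget example (numbers are illustrative).
# Payoff to "me" given (my choice, other's choice); higher is better.
payoff = {
    ("buy", "buy"): 3,   # everyone buys: good outcome for all
    ("buy", "not"): 0,   # I buy, the other doesn't: I get suckered
    ("not", "buy"): 5,   # I skip it, the other buys: best for me personally
    ("not", "not"): 1,   # nobody buys: bad for everyone
}

# Independent decisions: "not" is better for me whatever the other does...
assert payoff[("not", "buy")] > payoff[("buy", "buy")]
assert payoff[("not", "not")] > payoff[("buy", "not")]

# ...but if both decisions are guaranteed to be identical, all-buy beats all-not.
assert payoff[("buy", "buy")] > payoff[("not", "not")]
print("not-buying dominates individually; everyone buying beats nobody buying")
```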
There's actually a TED talk about something closely related to this, and it's relevant to the thread. I'd forgotten about it until your post.
There's a scenario, similar to the Trolley Problem, but for self-driving cars. The car can either run over 10 people if it stays on its current trajectory, or it can kill 1 person instead. Which should it do? This is just the basic Trolley Problem, and almost everyone immediately says "kill the one person", i.e. they take the utilitarian choice: kill the fewest people possible, even if it means taking action that deliberately chooses someone to die. At some point, self-driving cars will be involved in accidents and will have to make hard choices. Applying the "trolley problem" is just a way of thinking about the ethics involved.
However, what if the "one" who must be sacrificed to save other lives is you, the driver, who paid for the car? Suddenly, most people say they will not buy a car that makes that trade-off, even if everyone else agrees to buy one. That is, if everyone had a utilitarian self-driving car that would in fact kill the driver when that was the way to minimize total casualties, total road casualties would be objectively the lowest possible. But people don't want cars that do this. And when you do the maths, we're all objectively worse off if we don't have cars that would willingly self-sacrifice when that is the choice that truly minimizes casualties: every driver is individually more likely to die if we all choose cars that put the "driver first" rather than cars that minimize total road casualties.
If you consider the trade-offs here, it's clearly the same problem as the "widgets" example I put up. "Logically" everyone should buy the robot car that's willing to kill its own driver rather than kill two other drivers. However, people seem to balk at this, even though it in fact means their own total chance of death is higher: they want everyone else, but not themselves, to have the self-sacrificing "suicidal" cars. This is from surveys the guy giving the TED talk ran, btw: people overwhelmingly want other people to get the death cars, because that makes them safer, but they overwhelmingly reject the idea that they themselves should get one.
And think about it: if everyone is required to have perfectly utilitarian cars, the overall chance of death is minimized. However, if you cheated by hacking your robot car with the instruction "save me and only me at all costs", your chances of survival would increase compared to everyone else's. So hacking would be the logical thing to do, since it always reduces your own chance of death compared to not hacking. But everyone else can reason the same way, so everyone ends up optimizing their own chance of survival at the expense of increased death chances across the board. Everyone hacks their car to be selfish, because that's just "prudent", isn't it? And once all the cars are hacked, everyone is in fact less safe.
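Here's the same structure spelled out in Python with invented survival numbers (the figures are made up, only their ordering matters): hacking always looks better for you individually, whatever everyone else drives, yet the all-hacked world leaves everyone worse off than the all-utilitarian one.

```python
# Toy per-driver survival probabilities (entirely invented, chosen only to have
# the prisoners'-dilemma ordering described above).
# p_survive[(my_car, what_everyone_else_drives)]
p_survive = {
    ("utilitarian", "utilitarian"): 0.9990,  # all cars minimize total casualties
    ("selfish",     "utilitarian"): 0.9995,  # I free-ride on everyone else's cars
    ("utilitarian", "selfish"):     0.9970,  # I'm the only one willing to sacrifice
    ("selfish",     "selfish"):     0.9980,  # everyone hacked their car
}

# Hacking ("selfish") is better for me no matter what everyone else drives...
assert p_survive[("selfish", "utilitarian")] > p_survive[("utilitarian", "utilitarian")]
assert p_survive[("selfish", "selfish")] > p_survive[("utilitarian", "selfish")]

# ...yet once everyone hacks, everyone is worse off than if nobody had.
assert p_survive[("selfish", "selfish")] < p_survive[("utilitarian", "utilitarian")]
print("hacking is individually 'prudent', collectively it makes everyone less safe")
```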
And this isn't a toy problem either. When you get into a robot car in the future, you're damn well going to want to know how it works and on what basis it makes its decisions.