Well, if there are 50,000 auto deaths (USA) a year now, and fully optimized robot cars would be 90% safer, then there will still be 5,000 deaths. If, however, everyone decides to buy "me first" robot cars that prioritize their own occupants instead of "collectivist" robot cars that work in a network to minimize total road casualties (even if that means one of the cars "self-sacrificing"), then you might only see an 80% reduction instead. That works out to another 5,000 preventable deaths (plus injuries and property damage), because people didn't want to buy cars that are safest as a whole, just in case their car decided they were expendable to the system.
I think a system where all cars are designed to be "selfish" for the occupants of the car, and only the occupants, vs. one where all cars are designed to be "selfless" and weigh all human lives in their decision-making, would be clearly delineated futures, and they'd likely have vastly disparate outcomes and issues to deal with. Hell, if the "selfish" cars were in fact talking in a network, how do we know they'd benefit from talking honestly with the other cars around them? What if your "Chevy me-me-me 3000" decided to send bogus data about its movements to nearby cars in a near-crash situation, because it could game the system to maximize the survival chances of just its owner, at the expense of the other owners? Once you accept "selfish" robot cars, "gaming the system" becomes a thing. It's the Prisoner's Dilemma: suboptimal outcomes for all, because nobody cooperates when there's a risk of being back-stabbed and losing out, so to be prudent we all assume everyone else is a backstabber, and we hide information. The problem is that "selfish" cars will need to be built around these sorts of trade-offs, which demonstrably produce very different results from utilitarian designs.
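The Prisoner's Dilemma structure can be sketched concretely. The payoff numbers below are purely illustrative (not from the post): they just need the classic ordering where lying pays off individually no matter what the other car does, yet mutual honesty beats mutual lying.

```python
# Two cars in a near-crash decide whether to share honest trajectory
# data ("cooperate") or send bogus data ("defect").
# Payoffs are made-up illustrative values; higher = better for that car.
payoffs = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),  # honest network: best total outcome
    ("cooperate", "defect"):    (0, 5),  # I'm honest, the other car games me
    ("defect",    "cooperate"): (5, 0),  # I game the honest car
    ("defect",    "defect"):    (1, 1),  # everyone lies, everyone loses
}

def best_response(their_move):
    # Pick the move that maximizes MY payoff given the other car's move.
    return max(("cooperate", "defect"),
               key=lambda my: payoffs[(my, their_move)][0])

# Whatever the other car does, defecting pays more for me individually...
print(best_response("cooperate"), best_response("defect"))  # defect defect
# ...so two "rational" selfish cars land on (1, 1) instead of (3, 3).
```

That gap between the (1, 1) equilibrium and the (3, 3) cooperative outcome is exactly the "suboptimal outcomes for all" the paragraph describes.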
However, and there's a real point here: people in the future with robot cars might decide that they're OK with higher driving speeds, tighter cornering, and less space between cars, because that's a trade-off between convenience and safety they're willing to make. Robot cars would be safer if driven exactly like we drive now, but as safety margins increase, people might be willing to push them, trading away some safety for convenience. Hell, most people are already OK with the current level of road fatalities, so why wouldn't they push things even further with robot cars? So a claimed 90% reduction in fatalities might not actually materialize if we decide to spend that margin in other ways because we feel safer.
So no, I don't think "it's not worth thinking about because robot cars will be too perfect to care" is a good argument. It's highly likely to be a flawed assessment.