Oh, here's an interesting thought. I think people frequently drive badly because they selfishly want to get where they're going sooner, at the expense of everyone else getting there a little later and/or less safely. You'll see some guy weaving through traffic on the freeway, not because he enjoys it, but because he foolishly didn't give himself enough time to get where he's going. Or perhaps he's run into some petty "emergency" that he feels he needs to attend to ASAP.
A robot car wouldn't do that. You tell it where you want to go and it drives you there as best it can.
We already have one car company putting a vacuum cleaner in a car and nannying you into being unable to use it while driving. Similarly, in our car a passenger can't use the navigation system while the car is in motion, because the designer figured the driver would be the one doing it and driving distracted as a result. Shortsighted and unnecessary, but possibly safer than letting it work the way you'd want.
Because of this existing climate of car companies trying to make safer cars, I don't think they would add a feature where you could tell the AI how important your time is and let it make riskier decisions and/or drive more aggressively in order to save time. For one thing, every driver would set it to maximum at all times, much like some people seem to think every email they send is "high importance". Since nobody would ever set it to 1 out of 5, even on a leisurely cruise, why even give them the option?
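To make that hypothetical concrete, here's a minimal sketch of what a 1-to-5 "how important is your time" dial might reduce to. Nothing here reflects any real car's software; the names (plan_for_urgency, DrivingPlan) are invented for illustration. The punch line is in the comments: since everyone would pick 5, the dial carries no information.

```python
# Hypothetical sketch only; no real vehicle exposes anything like this.
# A 1 (relaxed) to 5 (maximum urgency) dial that nudges the planner toward
# slightly higher speed and a slightly shorter following gap.

from dataclasses import dataclass


@dataclass
class DrivingPlan:
    target_speed_mph: float
    following_gap_s: float  # seconds of headway behind the car ahead


def plan_for_urgency(urgency: int, speed_limit_mph: float) -> DrivingPlan:
    """Map the 1-5 urgency setting to a marginally more aggressive plan."""
    urgency = max(1, min(5, urgency))     # clamp to the allowed range
    speed_bonus = 2.0 * (urgency - 1)     # up to +8 mph over the calm baseline
    gap = 3.0 - 0.3 * (urgency - 1)       # headway shrinks from 3.0 s to 1.8 s
    return DrivingPlan(
        target_speed_mph=speed_limit_mph - 5.0 + speed_bonus,
        following_gap_s=gap,
    )


if __name__ == "__main__":
    # Every rider who wants to get there sooner just picks 5, so in practice
    # the setting is as meaningless as flagging every email "high importance".
    for urgency in range(1, 6):
        print(urgency, plan_for_urgency(urgency, speed_limit_mph=65))
```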
There's also the public perception that an AI driver is going to make a lot of mistakes, and to counter that, a car company can't afford to include an option that lets the owner tell the AI to take greater risks. They want to look as safe as possible at the start, and what happens in the beginning will shape the cultural expectations for AI drivers in the future.
However, I don't doubt that some high-end cars will feature an "Emergency Driving" mode that is more aggressive. This perfectly mirrors the self-fulfilling stigma of the "Jackass BMW Driver".
Anyway, the end result is that in the beginning you will have cars with an AI Assist, which alerts you to hazards and makes suggestions, like an extension of the navigation system. Or you'll have an AI Driver which can do the work while you sit back and eat your waffles (finally! in peace!). Or you can take control and just drive yourself.
And everyone will just choose to drive themselves, because they can be jerks and speed and cut people off and swerve around trucks. As long as the alternative is getting where you're going at an average of 30 MPH city / 60 MPH freeway, people will choose to drive manually and go 40 / 75+. They'll rationalize it as "I'm a great driver and I don't want to let the fallible AI make a mistake and get me into a wreck."
Here's another issue: what happens when a wreck does occur and neither driver is using an autopilot? Are they both automatically somewhat negligent by not using the safer autopilot? If one driver is using an autopilot and is technically at fault but the AI failed to work correctly and avoid the crash, and the other driver is driving manually, can we say that the autopilot driver was really at fault, or was it his car? Is he negligent in letting a possibly faulty autopilot drive? Is the other guy assumed to be at fault because we trust the autopilot more than a manual driver?
What if someone hacks their car to think it's in autopilot mode when it's really manually-driven, thus allowing the driver to engage autopilot post-wreck and claim he was on autopilot the whole time, fooling even the car's black box? What if a driver hacks his car and sets its risk acceptance level higher than normal?
Will insurance companies start charging more if you drive manually? Will they demand access to your black box records in order to charge an appropriate amount? And if so, what privacy rights do car owners have when the insurance company inevitably sells their black box records to advertisers and car-theft rings?