I'm wondering if the whole DX/Mantle/Vulkan wave will kick the specs down a tad? Fair enough, it's necessary to double-render everything, but I wouldn't be surprised if some things get worked out in the render pipeline to make that a bit easier. They are two different "cameras/viewports", but they're pretty damn close together for standard binocular vision.
There are lots of things that only have to be loaded once and used twice (to use a fairly incorrect analogy). But GPUs are massively parallel by default, so there are ways...
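Rough sketch of what I mean below. The types and functions (Mesh, Camera, bindSceneResources, drawScene) are all made up for illustration, not any real engine's API; the point is just that the expensive setup happens once per frame and only the camera transform changes between the two eye passes.

```cpp
#include <array>
#include <cstdio>
#include <vector>

// Made-up placeholder types, not any real engine's API.
struct Mesh   { int id = 0; };                      // stands in for GPU-resident geometry
struct Camera { float view[16]; float proj[16]; };  // per-eye view/projection matrices

// Pretend this binds vertex buffers, textures and shaders for the frame.
static void bindSceneResources(const std::vector<Mesh>& scene) {
    std::printf("bind %zu meshes, textures, shaders -- once\n", scene.size());
}

// Pretend this issues the draw calls for one eye/camera.
static void drawScene(const std::vector<Mesh>& scene, const Camera&, const char* label) {
    std::printf("draw %zu meshes for the %s eye\n", scene.size(), label);
}

// Assets are set up once, then reused for both eye passes;
// only the camera transform differs between the two draws.
void renderStereoFrame(const std::vector<Mesh>& scene, const std::array<Camera, 2>& eyes) {
    bindSceneResources(scene);
    drawScene(scene, eyes[0], "left");
    drawScene(scene, eyes[1], "right");
}

int main() {
    std::vector<Mesh> scene(3);
    std::array<Camera, 2> eyes{};
    renderStereoFrame(scene, eyes);
}
```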
The PS3 got pretty sexy by the end of its run, and the x86 console hardware is going to end up the same. In some respects, standardized console hardware should make it easier to push these techniques, with easy flow-over to PC (and all its horribly dissimilar configurations). GTA5 was worlds ahead on render distance compared to early PS3 releases, mostly due to a few well-known coding tricks that just weren't used as well until then. So I'm assuming similar twinned-pipeline, direct-to-hardware coding tricks will show up for VR pretty soon. That's usually better/faster than dealing with another layer of middleware between your good code and fast screen output. Screen 1 and Screen 2 is a thing in VR.
It's still a graphics-card bottleneck, but CPU specs may not need to be as beefy once Vulkan etc. come through properly and the CPU->GPU pipelines get properly sorted for twin displays, freeing up some processing resources for most users.
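For the curious, Vulkan already has a mechanism for this: multiview (VK_KHR_multiview, core in 1.1). Sketch only, assuming you've already got a VkDevice and the usual attachment/subpass setup done elsewhere; the interesting part is the view mask, which tells the driver to run the one recorded subpass for both eye views so the CPU doesn't have to record everything twice.

```cpp
#include <vulkan/vulkan.h>

// Sketch: take a normal, already-filled-in render pass description and
// chain a multiview struct onto it so one subpass renders both eyes.
// (Clobbers any existing pNext on the caller's info -- it's only a sketch.)
VkResult createStereoRenderPass(VkDevice device,
                                const VkRenderPassCreateInfo* baseInfo,
                                VkRenderPass* outPass)
{
    const uint32_t viewMask        = 0b11; // bit 0 = left eye view, bit 1 = right eye view
    const uint32_t correlationMask = 0b11; // hint that the two views are spatially close

    VkRenderPassMultiviewCreateInfo multiview{};
    multiview.sType                = VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO;
    multiview.subpassCount         = 1;          // one view mask per subpass
    multiview.pViewMasks           = &viewMask;
    multiview.correlationMaskCount = 1;
    multiview.pCorrelationMasks    = &correlationMask;

    VkRenderPassCreateInfo info = *baseInfo; // copy the caller's attachment/subpass setup
    info.pNext = &multiview;                 // chain the multiview struct in

    return vkCreateRenderPass(device, &info, nullptr, outPass);
}
```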
Is split-screen local multiplayer that hard to do? Not really, but it does use more GPU power. VR is like that, except there's a really good chance that every model, brush, texture, shader and light gets used twice in every frame. Those two "viewports" tend to be pretty close together asset-wise when rendering the final scene, even if you're a giant mecha. But it's all at full render resolution, through different "monitor ports", one for each eye. In a way, it's no different from split-screen, or anything else that renders multiple viewports/cameras. There are already very good ways of doing so, but utilizing multi-core parallelism has never been one of indie devs' strong points. And that's what half of VR seems to be right now: tech demos and indie one-shots. When the big boys go "Oh, we've already got rendering pipelines that do that, we call it 'split-screen and multi-monitor support'", a lot of problems go away.
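And the multi-core bit isn't rocket science either. Toy sketch below; recordViewport is a made-up stand-in for whatever "record the draw commands for one camera" looks like in your engine. The idea is just that the two viewports can be prepared on separate cores and submitted together.

```cpp
#include <cstdio>
#include <thread>

// Made-up stand-in: in a real engine this would fill a command buffer
// or display list for one camera/viewport.
static void recordViewport(int viewportIndex) {
    std::printf("recording draw commands for viewport %d on its own core\n", viewportIndex);
}

int main() {
    // Left and right eye (or two split-screen players) recorded in parallel,
    // then submitted to the GPU together.
    std::thread left(recordViewport, 0);
    std::thread right(recordViewport, 1);
    left.join();
    right.join();
    std::printf("submit both command lists to the GPU\n");
}
```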
They'll still want you to buy a 970+. It makes framerate problems go away too.