So apparently NVidia don't support OpenCL any more. They've basically dropped all pretense of supporting the OpenCL standard across their cards, so nothing above version 1.1, released in 2010, is usable. The hardware support is there; they just can't be arsed to write drivers for it. Well, I guess any OpenCL code I write excludes NVidia GPUs. Time to buy a proper AMD card (which will soon be faster, and probably port to consoles better because of Mantle anyway) and put this NVidia paperweight in the closet.
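For what it's worth, you can check what the driver admits to yourself, just by asking each device for its version string. A quick sketch against the standard OpenCL C API (assumes an OpenCL SDK is installed and you link with -lOpenCL):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);
        if (num_platforms > 8) num_platforms = 8;

        for (cl_uint p = 0; p < num_platforms; ++p) {
            cl_device_id devices[16];
            cl_uint num_devices = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16,
                               devices, &num_devices) != CL_SUCCESS)
                continue;
            if (num_devices > 16) num_devices = 16;

            for (cl_uint d = 0; d < num_devices; ++d) {
                char name[256], version[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(version), version, NULL);
                /* On NVidia's current drivers this still reports "OpenCL 1.1 ..." */
                printf("%s: %s\n", name, version);
            }
        }
        return 0;
    }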
Yeah, so don't buy NVidia GPUs.
So... AMD, you say? *Makes mental note*
Does the rest of the forum know this? Do people agree?
NVidia aren't completely out yet, but if they don't announce something really big really soon, they're toast.
Mantle will be an open API; NVidia can develop with it if they want to. It would be more or less directly portable to consoles, since they run on the AMD APUs that Mantle largely targets (even though Mantle itself is PC-only; consoles have had Mantle-like low-level APIs for a while). But chances are NVidia won't develop for it, since they haven't even bothered updating their OpenCL drivers in three years. For graphics-intensive games, NVidia will run like shit if Mantle becomes the standard.
In the compute language department, CUDA is on a long walk off a short pier. Intel, AMD, and every other company on the market are locked out of CUDA, since it is fully proprietary. Meanwhile, all of those same companies are fully behind OpenCL. OpenCL will run on your desktop's GPU, it will run on your desktop's CPU, it will run on your laptop's GPU or CPU or integrated GPU. It will run on your phone, it will run on your console.
It will run on a boat, it will run on a goat... and it will run in the rain, and in the dark, and on a train. And in a car, and in a tree. And it will run in a box, and it will run with a fox, it will run in a house, and it will run with a mouse. It will run here and there, it will run ANYWHERE.
Except on an NVidia card.

I will give them that OpenCL has a steeper learning curve than CUDA. CUDA has a nice API which is fairly easy to use, but that's worth very little if it only supports a small subset of machines. With CUDA, if you have an AMD machine, I would need to write an entirely separate implementation of my compute work on the CPU or in OpenCL. With OpenCL, if you have an NVidia machine, my code would simply see that no GPU supports the OpenCL version it needs and choose a CPU device to run on instead. No extra code required, no separate data paths or anything: just run the OpenCL on the CPU. Or, if GPU performance is really necessary, make some simple changes for an NVidia codepath that strips out any OpenCL 1.2+ features.
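The fallback really is that small. Something like this is all the device selection I'd need (a rough sketch against the standard OpenCL C API; pick_device is just a name I'm using here, error handling is omitted, and in practice you'd also check CL_DEVICE_VERSION before accepting a GPU, since NVidia's will still report 1.1):

    #include <stdio.h>
    #include <CL/cl.h>

    /* Prefer a GPU from any platform; otherwise settle for a CPU device.
       The same kernels run on whichever device this returns. */
    static cl_device_id pick_device(void) {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        cl_device_id device = NULL;

        clGetPlatformIDs(8, platforms, &num_platforms);
        if (num_platforms > 8) num_platforms = 8;

        for (cl_uint p = 0; p < num_platforms; ++p)
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 1, &device, NULL) == CL_SUCCESS)
                return device;

        /* No usable GPU: take a CPU device instead. No separate codepath needed. */
        for (cl_uint p = 0; p < num_platforms; ++p)
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_CPU, 1, &device, NULL) == CL_SUCCESS)
                return device;

        return NULL;
    }

    int main(void) {
        cl_device_id dev = pick_device();
        if (dev) {
            char name[256];
            clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("Running compute on: %s\n", name);
        }
        return 0;
    }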
At this point, NVidia are in two markets, having been driven out of the console market pretty much entirely by AMD making the hardware for the current-gen systems. That leaves the desktop PC business and the mobile chipset business. In the mobile sector, no one is going to use CUDA. No one. The mobile market thrives on compatibility and portability of code, and CUDA has neither. I wouldn't call compute power a big thing in mobile yet, but it may well be 'the next big thing' there. That compute power is what will give devices powerful capabilities like image recognition and other highly parallel, intensive operations; AR and all those sorts of things can use all the compute power you throw at them. OpenCL is really the only candidate for a mobile standard; NVidia just don't have the market share there to bully out other vendors like they do on the desktop.
So then there's the desktop. They have momentum here, and that is entirely what they are depending on to push CUDA and things running on it (like PhysX). And really nothing more than momentum.
OpenCL gets basically the same performance as CUDA on a desktop GPU. CUDA only sticks around because NVidia have a much bigger budget for pushing it, and generally have more impressive tech demos because they can throw more money at the teams developing them. None of that actually makes it any better than OpenCL; the results are generally the same. So it really just comes down to platform support, and there OpenCL is king.
But most important of all, NVidia can go fall down a well for turning GPU computing into a developer hell of non-standards, wasting everyone's time and money, and ultimately giving the end user shittier games and products.
GPUs these days can do several trillion floating-point operations per second, and basically no one uses them for GPGPU in games because it's too much of a pain!

Or at least that's my rant on it, from the perspective of a graphics programmer.