No, multithreading
is easy, just not with the typical, imperative style of programming.
The way most programmers approach this problem is to write programs that alter some state, then do some more (different) alterations based on the altered state, and rinse and repeat until they've completed a full frame... then do it all over again for the next one.
Needless to say, making this style of programming work with multithreading is an iron-plated bitch. It's hard even if you start off your program thinking about threading, and it's practically impossible to retrofit. That does not mean that
multithreading is hard, though, just that multithreading using this style is. There are others.
For example, in (purely) functional programming, you write functions without having any state for them to alter. You pass in the old state, and they return new state, a summary of it, or whatever; it gets quite involved, and doing it well is probably harder to learn.
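A minimal sketch of that pass-in-old-state, get-back-new-state style in Haskell (the `World` type and `step` function are made-up names for illustration, not from any real engine):

```haskell
-- A sketch of the state-passing style: nothing is ever mutated in place.
-- 'World' and 'step' are hypothetical names, purely for illustration.
data World = World { tick :: Int, score :: Int } deriving (Show, Eq)

-- Take the old state and return a brand-new one; the old World is untouched,
-- so any other thread still holding it sees exactly what it always saw.
step :: World -> World
step w = World { tick = tick w + 1, score = score w + 10 }

main :: IO ()
main = print (step (World 0 0))  -- World {tick = 1, score = 10}
```

Running a frame loop is then just applying `step` repeatedly, e.g. `iterate step initialWorld`.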
However, this style offers several enormous benefits:
First, it becomes much easier to reason about your program, because no function can affect anything other than its own return value, and no part of the program needs to worry about other parts altering its data (since they can't). This benefit grows with program size, offering arguably better modularization than OO manages - a good thing, since the style fits very poorly with OO in general, and you have to learn other ways to do what OO would do.
Second - the part you care about - this makes multithreading easier. You don't have to worry about other threads altering your data since, again, they can't. Further, higher-order functions (which don't only exist in FP, but are certainly more common there) such as maps and folds make it easier to decide what to multithread. You can always swap out a map for a parmap, which does the operations in parallel, without ever having to worry about whether this might make your program incorrect - it can't - though you do still have to worry about the cost of thread creation and granularity.
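A toy illustration of that map-to-parmap swap. The real `parMap` lives in the `parallel` package (Control.Parallel.Strategies); the hand-rolled `parMap'` below is just a sketch built on the spark primitives that ship in GHC's base library, and it needs GHC's `-threaded` runtime to actually evaluate in parallel:

```haskell
import GHC.Conc (par, pseq)  -- spark primitives from base; no extra packages

-- A toy parallel map: spark the head element while evaluating the tail.
-- The result is exactly the list a plain 'map' would produce - only the
-- evaluation order changes, which is why the swap can't break correctness.
parMap' :: (a -> b) -> [a] -> [b]
parMap' _ []     = []
parMap' f (x:xs) = y `par` (ys `pseq` (y : ys))
  where
    y  = f x
    ys = parMap' f xs

main :: IO ()
main = print (parMap' (* 2) [1 .. 5 :: Int])  -- [2,4,6,8,10]
</imports>
```

Note the granularity caveat from above applies here too: sparking one tiny `(* 2)` per element costs more than it saves; in practice you'd chunk the list first.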
Third, it's easier to reason about the programs - for computers, too. Automatic parallelization of non-parallel code is the holy grail, yes? Well, in functional programming, it's already happening. The research is still at an early stage, but efforts such as Data Parallel Haskell are pushing it closer to reality every day. The compilers already do extensive single-threaded optimizations that make such languages nearly as fast as C, despite being far, far higher-level and running on processors that are, frankly, designed for C.
Fourth, it's safer. There just isn't nearly as much that can go wrong in your programs: crashes are nearly unknown, and such programs are fairly amenable to formal proof methods. As a result, functional programming is quickly gaining popularity in the financial sector; other areas, such as airplanes, nuclear reactors or spacecraft, are still held back by inertia and the relatively poor performance of hardened processors.
Fifth, it's cooler.
You'll never find any traditional language with anywhere near the interesting features of functional languages - hell, for you C++ programmers, Haskell allows you to overload the
semicolon - and it works quite well, thank you. (Monads)
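For the curious: that "overloaded semicolon" is do-notation, where each line is glued to the next with (>>=), and (>>=) means whatever the monad says it means. A small sketch with the Maybe monad (`safeDiv` and `calc` are made-up names):

```haskell
-- With Maybe, the "overloaded semicolon" short-circuits the whole block
-- as soon as any step produces Nothing - no null checks in sight.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b   -- if this is Nothing, the lines below never run
  y <- safeDiv x c
  return (y + 1)

main :: IO ()
main = print (calc 100 5 2, calc 100 0 2)  -- (Just 11,Nothing)
```

Swap Maybe for another monad and the same do-block gets a different "semicolon": error accumulation, logging, nondeterminism, whatever you like.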