At least one of the "C++ Sucks for game development" talks at GDC was about memory layout. There just isn't a language at the moment that lets you arrange memory in the fastest way; memory layout for cache purposes seems to be an optimisation that has been totally ignored despite how slow memory reads are. The best you get is caching in registers. C macros (ugh) or C++ with (insanely complicated) template metaprogramming can sort of help the programmer, but we really need a compiler designed for stream processing. I guess implementing it as a meta-language in D would work well.
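Roughly what I mean by layout control, as a minimal sketch (the particle structs and field names are made up for illustration): the same update loop over an array-of-structs versus a struct-of-arrays drags very different amounts of memory through the cache.

    #include <cstddef>
    #include <vector>

    // Array-of-structs: each particle's fields are interleaved in memory.
    // A loop that only touches positions still pulls velocity/colour/lifetime
    // through the cache, because they share the same cache lines.
    struct ParticleAoS {
        float x, y, z;
        float vx, vy, vz;
        float colour[4];
        float lifetime;
    };

    void update_aos(std::vector<ParticleAoS>& ps, float dt) {
        for (auto& p : ps) {          // strided access, one whole struct per particle
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
        }
    }

    // Struct-of-arrays: each field is contiguous, so the update streams through
    // exactly the bytes it needs and the hardware prefetcher can keep up.
    struct ParticlesSoA {
        std::vector<float> x, y, z;
        std::vector<float> vx, vy, vz;
    };

    void update_soa(ParticlesSoA& ps, float dt) {
        const std::size_t n = ps.x.size();
        for (std::size_t i = 0; i < n; ++i) {
            ps.x[i] += ps.vx[i] * dt;
            ps.y[i] += ps.vy[i] * dt;
            ps.z[i] += ps.vz[i] * dt;
        }
    }

    int main() {
        std::vector<ParticleAoS> aos(1000);
        ParticlesSoA soa;
        soa.x.resize(1000); soa.y.resize(1000); soa.z.resize(1000);
        soa.vx.resize(1000); soa.vy.resize(1000); soa.vz.resize(1000);
        update_aos(aos, 0.016f);
        update_soa(soa, 0.016f);
    }

The point is that you have to restructure the code by hand to get the second layout; no current language lets you state it as a layout choice and keep writing the first version.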
OK, I'm a programmer, and I've never really heard of this complaint before, at least not at this level. RAM isn't like a hard drive; the physical location of data in a RAM chip isn't going to make it take longer to access. If you mean searching for objects in memory, then hash tables are about the fastest way (that I know of, at least) to look things up, at the cost of a bit more memory and CPU power.
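For what it's worth, this is the kind of lookup I mean, as a quick sketch with std::unordered_map (the GameObject type and the id are made up for illustration):

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    struct GameObject {   // made-up type, just for illustration
        std::string name;
        float x;
        float y;
    };

    int main() {
        // Hash table: average O(1) lookup by key, paid for with extra memory
        // for the buckets and a little CPU for hashing on each insert/lookup.
        std::unordered_map<std::uint32_t, GameObject> objects;
        objects.emplace(42u, GameObject{"player", 1.0f, 2.0f});

        auto it = objects.find(42u);
        if (it != objects.end())
            std::cout << it->second.name << " at ("
                      << it->second.x << ", " << it->second.y << ")\n";
    }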
The only actual bottleneck I'm aware of as far as RAM goes is the page size, which on most OSes is 4KB. That means that to load a 1MB object, the underlying system has to do 256 checks to see if a page is in RAM, and 256 reads, in the best case where it doesn't have to load anything from the hard drive. An OS that uses 2MB pages, which is apparently starting to happen, can do the same in 2 checks at most, and only if the data happens to cross a page boundary. I don't think this is something under the control of anything besides the OS handling RAM access, and I don't think it's a bigger deal for games than for any other type of application. Unless hard drives have had exponential speed boosts lately, 256 RAM accesses are still faster than 1 HD access.
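To put rough numbers on that, a quick sketch (POSIX-only, since it uses sysconf; on Windows you'd get the page size from GetSystemInfo instead; the 1MB figure is just the example above):

    #include <cstdio>
    #include <unistd.h>   // sysconf() is POSIX

    int main() {
        const long page = sysconf(_SC_PAGESIZE);               // typically 4096 bytes
        const long object_bytes = 1024 * 1024;                 // the 1MB object from above
        const long pages_4k = (object_bytes + page - 1) / page;        // rounds up: 256 at 4KB

        const long huge_page = 2 * 1024 * 1024;                // 2MB page
        const long pages_2m = (object_bytes + huge_page - 1) / huge_page;  // 1 (2 if it straddles a boundary)

        std::printf("page size: %ld bytes -> %ld pages for a 1MB object\n", page, pages_4k);
        std::printf("with 2MB pages -> %ld page(s)\n", pages_2m);
    }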
I might be a little bit behind on this, since I've been doing .NET programming that handles a lot of memory stuff automatically and haven't gotten into much theoretical work since college graduation.