I have an average 4 GB of RAM, so mine may be able to take it for a short period of time.
Unfortunately it's not that simple. Under Microsoft Windows, each ordinary program is limited to a 2 GB address space, even if you have far more RAM than that. There is an option when compiling (linking, actually) a program to mark it "large address aware", which lets it use up to 3 GB on a 32-bit version of Windows, 4 GB for a 32-bit program running on a 64-bit version of Windows, or 8 TB for a native 64-bit program; this can hurt performance somewhat and has various other issues, so it's still pretty uncommon. Note that since a 32-bit version of Windows can only address 4 GB total (and out of that you need the OS itself, graphics memory, and so on), realistically a single program can't get anywhere near that. Getting past 2 GB on 32-bit Windows also requires a boot option that increases the user memory space (which can lead to instability), and that caps you at 3 GB, with in practice somewhat less than that.
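If you're curious whether a particular executable has that flag set, it's readable straight out of the file's PE header. Here's a minimal sketch in C; the header offsets and the 0x0020 flag value are standard PE format details, but treat the program itself as illustrative rather than battle-tested (it assumes a little-endian host and skips most error checking):

/* laa_check.c -- report whether a Windows executable is marked
 * "large address aware" by reading its PE header directly.
 * Build: cc -o laa_check laa_check.c
 * Run:   ./laa_check some_program.exe */
#include <stdio.h>
#include <stdint.h>

#define IMAGE_FILE_LARGE_ADDRESS_AWARE 0x0020  /* Characteristics bit */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <program.exe>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    /* The DOS stub stores the PE header's file offset at byte 0x3C.
     * PE files are little-endian, so this direct read assumes a
     * little-endian host. */
    uint32_t pe_offset = 0;
    fseek(f, 0x3C, SEEK_SET);
    fread(&pe_offset, sizeof pe_offset, 1, f);

    /* At that offset: a 4-byte "PE\0\0" signature, then the file header,
     * whose Characteristics field sits 18 bytes past the signature. */
    uint8_t sig[4] = {0};
    fseek(f, pe_offset, SEEK_SET);
    fread(sig, 1, 4, f);
    if (sig[0] != 'P' || sig[1] != 'E' || sig[2] != 0 || sig[3] != 0) {
        fprintf(stderr, "%s: not a PE executable\n", argv[1]);
        fclose(f);
        return 1;
    }
    uint16_t characteristics = 0;
    fseek(f, pe_offset + 4 + 18, SEEK_SET);
    fread(&characteristics, sizeof characteristics, 1, f);
    fclose(f);

    printf("%s is %slarge address aware\n", argv[1],
           (characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE) ? "" : "NOT ");
    return 0;
}

This is the same bit that tools like dumpbin /headers report as "Application can handle large (>2GB) addresses".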
Under Linux, 32-bit versions allow up to 3 GB of address space for a single program; on top of that, a PAE-enabled kernel on PAE-capable hardware lets even a 32-bit OS address up to 64 GB of physical memory in total. If you actually have more than 4 GB of physical RAM, that greatly increases the chance you'll be able to handle a nearly 3 GB image without stability issues. I think this is one reason why some of the bizarrely broken embarks (spires, etc.) have been loadable on Linux but not Windows.
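You can see these per-process ceilings for yourself with a throwaway C program that keeps reserving address space until malloc gives up. Compiled 32-bit (e.g. cc -m32) it stops around the 2-3 GB marks discussed above even on a machine with plenty of RAM, because it's the address space that runs out, not physical memory. A rough sketch, illustrative only:

/* probe_addr_space.c -- rough probe of a process's usable address space.
 * Reserves 64 MB chunks until malloc fails, then reports the total.
 * Compile 32-bit (cc -m32 probe_addr_space.c) to see the ~2-3 GB
 * per-process ceilings; a 64-bit build will grind through terabytes
 * of reservations before anything fails. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 64 * 1024 * 1024;  /* 64 MB per allocation */
    size_t total_mb = 0;

    /* Each malloc only reserves virtual address space -- the pages are
     * never touched, so this measures the address-space limit rather
     * than physical RAM. The memory is deliberately never freed. */
    while (malloc(chunk) != NULL)
        total_mb += 64;

    printf("reserved about %zu MB of address space before malloc failed\n",
           total_mb);
    return 0;
}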
This sort of complex memory management mess is why power users are moving to 64-bit OSes; even with 4 GB of RAM you tend to see improvements, and larger amounts can have a dramatic effect. Getting more than a small benefit requires cooperation from the programmers, however. We're still somewhat in the "chicken vs. egg" stage where programmers don't bother producing 64-bit or at least large-address-aware builds because there aren't enough 64-bit users, and ordinary users don't move to 64-bit because there aren't enough programs that benefit.