The modernization of the Linux distribution baseline also impacts the choice of compilers and libraries. GCC and LLVM can use different C++ standard libraries (GCC's libstdc++ vs. LLVM's libc++, http://libcxx.llvm.org/). The resulting binaries differ, and the LLVM toolchain is much stricter about the new standard and doesn't accept GCC extensions.
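As a toy illustration of the extension problem (my own example, not from either project's docs): the snippet below builds cleanly with g++'s default GNU dialect but is rejected when compiled strictly against the standard, for example with -std=c++11 -pedantic-errors under clang++/libc++.

    // Variable-length arrays are a C99/GNU extension, not standard C++.
    #include <cstdio>

    int main(int argc, char**) {
        int buf[argc];             // accepted by g++ by default, an error in strict mode
        buf[0] = 42;
        std::printf("%d\n", buf[0]);
        return 0;
    }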
There is much to consider. The C++11 changes in GCC 5's libstdc++ mean that software compiled on newer systems won't run on older systems. Newer systems have a dual ABI (Application Binary Interface) to handle this, but older systems are really out of luck. See the GCC manual on Dual ABI.
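For anyone curious what that looks like in practice, here is a minimal sketch: which std::string ABI you get is controlled by a libstdc++ macro rather than the -std= flag, so a GCC 5+ toolchain can still emit old-ABI objects for linking against libraries built with older GCC.

    // Sketch only: selecting the old ABI on a GCC >= 5 toolchain.
    // Build with: g++ -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0 -c dual_abi.cpp
    // With the default (=1), std::string is the new std::__cxx11::basic_string;
    // with =0 it is the old copy-on-write string that pre-GCC-5 binaries expect.
    #include <iostream>
    #include <string>

    int main() {
        std::string s = "dual ABI demo";
        std::cout << s << '\n';
        return 0;
    }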
lsb_release -a: Ubuntu 8.10, codename intrepid
The real worry with moving from such an ancient system is crossing the glibc 2.1 symbol versioning threshold. After 2.1 the GNU standard C library started versioning its symbols, that is, the names of the functions and variables it exports for use. Software built with newer GCC (>= 3.4) and older GCC (< 3.4) likewise has all sorts of interoperability problems.
I don't currently know how to decide among releases from 10 all the way up to 15. Would using 14.04/15.10 cut anybody out?
The issue to watch for is the C++11 updates; this was a very big deal in the 2011-2012 time frame. Almost as big as the 2014 /usr merge work that turned /bin and /lib into symlinks to /usr/bin and /usr/lib. Since you have to cross that huge gap for the symbol issue anyway, why not go with the latest version?
Linux users tend to be a very diverse bunch, but general end users follow the normal OS adoption curve. Perhaps take a user poll? Or examine the bug tracker to see which OS releases show up on recent or active tickets?
At worst the binaries could be statically linked (SDL.a included). The binaries would be huge, and people use alternatives like uClibc to help reduce the code bloat. And it would piss off people who make native packages, since statically linked binaries are strongly disliked by Linux distribution maintainers. With 32-bit versions a static binary is likely to be too big to load into memory. For 64-bit a statically linked game might load but just be ridiculously slow.
Using a newer GCC might break compatibility for some people, but that should be relatively easy to fix by distributing the newer libstdc++ that comes with it.
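As a rough sketch of what that could look like (the directory name and paths here are my own assumptions, not anyone's actual packaging layout): build with a relative RPATH and drop the toolchain's libstdc++ next to the binary.

    // Hypothetical layout: ./game plus ./libs/libstdc++.so.6 shipped alongside it.
    // Build with a relative RPATH so the loader checks libs/ first:
    //   g++ -std=c++11 -o game main.cpp -Wl,-rpath,'$ORIGIN/libs'
    // Then copy the toolchain's own libstdc++ into place:
    //   cp "$(g++ -print-file-name=libstdc++.so.6)" libs/
    #include <iostream>

    int main() {
        std::cout << "runs against whichever libstdc++ the loader finds first\n";
        return 0;
    }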
On Windows it is common for developers to ship their standard C library, or an installer package for it, with an application for just that application's use.
In Linux this is usually not done. An installation of Linux or BSD is a distribution of software after all. Anything missing should be added to the installed system for everyone to use. Linux development workflows and release paradigms are all designed around this idea.
On top of this, the glibc and libstdc++ libraries are special on Linux. The glibc library is the ABI (Application Binary Interface): it is your software's interface to the kernel and thus to the world. So glibc and libstdc++ are not just a grab bag of standard C/C++ stuff; they are 'the computer' as far as your native application is concerned. Yes, you can build software directly against the kernel's syscall interface and throw away any pretense of running on anything else.
Plus you can't use tricks like LIBPATH and LD_LIBRARY_PATH to override glibc. You have to mess with the application loader's RPATH at compile time and LD_PRELOAD custom libraries at run time. Even then your success will vary. See http://www.lightofdawn.org/wiki/wiki.cgi/NewAppsOnOldGlibc for some of the stuff involved.
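One of the tricks described on that page is pinning symbols to old glibc versions at build time so the binary keeps running on older systems. A minimal sketch (the version strings below are for x86-64 glibc and should be checked against whatever glibc you actually target):

    // Sketch of the .symver trick: bind memcpy to the old GLIBC_2.2.5 version
    // instead of the GLIBC_2.14 one introduced in newer glibc, so the binary
    // does not pick up the newer versioned dependency.
    #include <cstdio>
    #include <cstring>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(int argc, char** argv) {
        char dst[64] = {0};
        const char* src = argc > 1 ? argv[1] : "old glibc";
        std::memcpy(dst, src, std::strlen(src) + 1);  // non-constant size forces a real call
        std::puts(dst);
        return 0;
    }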
This ABI not only has to match between your application and libstdc++ but also for every library and kernel interface you want to use. ABI problems show up as 'undefined symbol' errors at link time and as loader errors from ld.so when trying to run the application, such as "version `GLIBC_x.y.z` not found."
So to ship libstdc++.so you'd also have to ship matching libraries for everything that the application needs. Custom SDL. Custom TTY libraries. Custom sound and OpenGL libraries for SDL. A matching libpthread even though you never directly use threads. Everything. This runs into hundreds of libraries for some applications. This is usually where a company resorts to static compiles.
Oh, and you can forget about DFHack or gdb working unless they ship the same pile of libraries as Dwarf Fortress, too. Those need to match closely enough to inspect the running game and inject data into it.
Well, I would use Ubuntu 16.04 LTS x64 for compiling. If older versions of Linux are required, then I would use Docker.
Even if you ship the universe of "shared" libraries, there's also the risk that your custom world of stuff will NOT work on the Linux kernel you have installed. Containers like Docker are popular because they manage all of this and attempt to resolve that issue. But the ability to run a container implies you are running a 2014-or-later Linux distribution, probably with systemd.
Should Dwarf Fortress start trying to ship glibc.so or libstdc++.so, I'd have to strip it out when packaging it. Even then basic checks on the package (rpmlint, lintian, etc.) would scream about shipping any standard library. Services like the Open Build Service would just refuse to build the package.
This is probably not a big loss. It appears that most people who want something as convenient as packaged software use one of the big install kits. Those kits come with custom tile sets, DFHack and the other quality-of-life improvement tools. (Plus I never got a clear license answer to my question, so I cannot legally provide binary packages either.)
Also, it is definitely a good idea to produce several versions of the binaries with support for different CPU instruction sets (vanilla, SSE2, AVX, AVX2; the only difference for you is just a compiler flag), because it takes no (!) additional time but may yield tens of percent in execution speed. This is valid for Windows, Linux and OS X alike.
As eternaleye pointed out, GCC and Linux have supported function multi-versioning since GCC 4.8. There is no need to generate different binaries. This is another reason to switch to the latest GCC: full support for the range of optimizations available.
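For anyone who hasn't seen it, a minimal sketch of what function multi-versioning looks like (my own toy function, C++ only, GCC >= 4.8): you provide several definitions with target attributes and the runtime picks the best one for the CPU it's running on.

    // Toy example of GCC function multi-versioning. A "default" version is
    // required; the sse2/avx2 versions are selected automatically at load
    // time via an ifunc resolver on CPUs that support those instruction sets.
    #include <cstddef>
    #include <cstdio>

    __attribute__((target("default")))
    void add_arrays(float* out, const float* a, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
    }

    __attribute__((target("sse2")))
    void add_arrays(float* out, const float* a, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
    }

    __attribute__((target("avx2")))
    void add_arrays(float* out, const float* a, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
    }

    int main() {
        float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1}, out[4];
        add_arrays(out, a, b, 4);   // dispatch happens behind this single call
        std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }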
Do note that some kinds of optimization will influence the repeatability of simulations like Dwarf Fortress world generation. Look up the scholarly articles on optimization robustness for some cool math on the stability, sensitivity and limit theorems of multidimensional optimization.