A space elevator is thinking small.
Just build all of the Earth's surface upwards until you're in outer space.
If you build the entire Earth up 20 stories, that's the equivalent of 20 more Earths of floor space. Then keep building up, and eventually you won't need a space elevator.
EDIT: Some people ask where the aliens are, and why they wouldn't expand. My thinking is that once you do the math, you work out there's no reason to expand. We could easily build living space for quadrillions of humans in this solar system if we wanted to.
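Back-of-the-envelope, just to show the scale (every number below is my own rough assumption, not a precise figure):

```python
# Rough scale check: each added story contributes roughly one Earth's worth
# of floor area, since story height is negligible compared to Earth's radius.
EARTH_SURFACE_KM2 = 510e6   # Earth's total surface area, ~510 million km^2
STORIES = 20

extra_floor_km2 = EARTH_SURFACE_KM2 * STORIES

# Assume a very generous 1000 m^2 (0.001 km^2) of floor space per person.
AREA_PER_PERSON_KM2 = 0.001
people = extra_floor_km2 / AREA_PER_PERSON_KM2

print(f"{people:.2e} people")  # on the order of 10^13, i.e. ~10 trillion
```

And that's Earth alone, at 20 stories, with a thousand square meters each; the quadrillions come from doing the same trick across the rest of the solar system's material.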
Also, the possibility of simulated consciousness makes traveling to other star systems just to "experience" them pointless too.
Consider this idea: if you can simulate consciousness, then that AI would have a current-state and a future-state, and you determine that the future-state is a valid successor to the current-state, so that it's the "same" consciousness. Then, if you've determined some really great future-state for your AI, why even bother with the intermediate states? After all, previous states exist only as memories; there's no reason to even run them. Just compute the ideal future-state consciousness who has really good memories.
Also, consider that you might be able to give inputs to two different current-state AIs that cause them both to converge on the same future-state AI. Effectively, that new AI is a merged consciousness which would view itself as the legitimate successor-state of both original current-state AIs. You could repeat this ad infinitum and end up with a single future-state AI which is the legitimate successor of any number of current-state AIs.
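A toy sketch of the convergence trick (my own construction, not anything rigorous): treat a "consciousness" as a state plus a deterministic update rule, where a successor-state keeps all prior memories. Feed each of two different current-states the other's memories, and they converge on one state that is a valid successor of both:

```python
def step(state, experience):
    # A successor-state keeps all prior memories and adds the new experience.
    return frozenset(state | {experience})

def run(state, inputs):
    for x in inputs:
        state = step(state, x)
    return state

alice = frozenset({"memory_a1", "memory_a2"})
bob = frozenset({"memory_b1"})

# Give each the other's memories as input, plus one shared experience.
future_a = run(alice, ["memory_b1", "shared_now"])
future_b = run(bob, ["memory_a1", "memory_a2", "shared_now"])

# One single state, reached by legitimate successor-steps from BOTH originals.
assert future_a == future_b
```

The assert is the whole point: neither run "destroyed" anyone, yet there is now one state downstream of both starting states.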
A related idea:
Scenario 1) You are simulating a consciousness on one machine, then stop that machine, copy the current-state over to another machine, and allow the simulation to keep running. We'd say that's the same consciousness, right? Since the simulation is what matters, not the hardware, copying the data to a new machine and starting it up again should not cause any loss of consciousness. So the consciousness moved over with the data, right?
However ... consider:
Scenario 2) You are running two consciousness simulations in lock step on two different machines, using the same inputs. You then stop one of the machines. From the point of view of the stopped machine, this is identical to scenario #1. So did the consciousness "jump" in this case, or "not jump" because there was already a consciousness there? Clearly, though on different "machines", they were actually the same "consciousness" to start with. This also suggests that if you can cause the states of two conscious simulations to converge, then that is literally the same thing as merging them, as long as you don't subsequently diverge them again.
My point here is that if consciousness can be digitally simulated, then we may have to abandon the very concept of "individual identities". The final frontier of understanding consciousness itself may well be the final existential crisis of humanity, making things like evolution vs. religion look like children's bickering. How this relates back to the aliens idea: if your race has had an existential crisis where you realize there's actually no such thing as individual consciousness, then a lot of things that make sense to us, such as experience itself, wouldn't make so much sense. You could compute a consciousness which is a legitimate simulation successor-state of any number of existing consciousnesses. Then why would you even bother having people, let alone going anywhere? Just compute a being that's had the best version of every possible experience and be that thing.
EDIT2: Another way to understand this: imagine I teleported in "you from 10 minutes in the future" and then de-materialized now-you, leaving future-you in your place. Is that the same you or a different you? Logically, it's exactly who you physically would have become 10 minutes from now, so it is in fact the same you. Then you can realize there's no difference between you-10-minutes-from-now, you-10-years-from-now, or you-10-million-years-from-now. As long as they can be shown to be the legitimate successor-state of your current "AI", they are still you. So you could compute an ideal future-you, materialize that you, and immediately kill current-you, realizing that for all intents and purposes you're still alive and conscious. And that future-you could be merged with the future-yous of other people.