So You Want to Colonize the Universe Part 2: Deep Time Engineering
So, with “Gotta go Fast” as the highest goal, and aware that the amount of computational resources and thinking time devoted to building fast starships will eventually exceed, by many orders of magnitude, all human thought conducted so far (given the importance of the task)...
I set myself to designing a starship to get to the Virgo supercluster (about 200 million light-years away) in minimum time, as a lower bound on how much of the universe could be colonized. I expect the future to beat whatever bar I set, whether humanity survives or not. (It turned out to be about 0.9 c.)
Now, most people focus on interstellar travel, but intergalactic travel is comparatively underexplored (see comments). We have one big advantage here, which is that we don’t need to keep mammals around, and this lets us have a much smaller payload. Instead of delivering a vessel that can support Earth-based life for hundreds of millions of years, we just have to deliver about 100 kg of Von Neumann probes and stored people, which build more of themselves. (The true number is probably a lot less than this, but as it turns out, it isn’t any harder to design for the 100 kg case than the 1 mg case, because there’s a minimum viable mass for dust shielding, and we’ll be cheating the rocket equation.)
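To see why “cheating the rocket equation” matters, here is an illustrative calculation (mine, not from the post): even an ideal photon rocket, the best case the relativistic rocket equation allows, needs painful mass ratios at 0.9 c, and every real engine is far worse. Hence laser launch and similar tricks.

```python
import math

def photon_rocket_mass_ratio(beta):
    """Initial-to-final mass ratio for an ideal photon rocket (exhaust
    velocity = c) accelerating from rest to speed beta = v/c.
    From the relativistic rocket equation: R = sqrt((1 + beta) / (1 - beta))."""
    return math.sqrt((1 + beta) / (1 - beta))

beta = 0.9
accelerate_only = photon_rocket_mass_ratio(beta)  # one burn up to 0.9c
stop_at_target = accelerate_only ** 2             # a second, mirror-image burn to stop

print(f"mass ratio to reach 0.9c:           {accelerate_only:.2f}")  # ~4.36
print(f"mass ratio to reach 0.9c and stop:  {stop_at_target:.2f}")   # ~19.0
```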
Before we get into intergalactic starship design (part 5), I want to take a minute to point out the field of Deep Time engineering, which is something that I just crystallized as a concept while working on this.
Note that whatever starship design you’re building, it has to last for 200 million years, getting bombarded by relativistic protons and dust the whole way. Even with relativity speeding things up, you’re still talking about building machinery that lasts for tens of millions of years and works with extremely high reliability the whole way. This is incredibly far beyond what engineering normally does; it takes god-like levels of redundancy and reliability. If you’ve got something with moving parts, there’s erosion by friction to consider, and also 200 million years’ worth of cosmic rays… I didn’t focus that much on actual solutions, but just the awareness that there exist tasks which require building machinery that works for hundreds of millions of years sparked something.
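For concreteness, here is the time-dilation arithmetic (a sketch assuming a constant ~0.9 c cruise, ignoring the acceleration and deceleration phases):

```python
import math

DISTANCE_LY = 200e6   # light-years to the target, per the post
BETA = 0.9            # cruise speed as a fraction of c

gamma = 1 / math.sqrt(1 - BETA**2)             # Lorentz factor, only ~2.29 at 0.9c
earth_frame_years = DISTANCE_LY / BETA         # ~222 million years in the rest frame
ship_frame_years = earth_frame_years / gamma   # ~97 million years of proper time

print(f"gamma:        {gamma:.2f}")
print(f"Earth frame:  {earth_frame_years / 1e6:.0f} million years")
print(f"Ship frame:   {ship_frame_years / 1e6:.0f} million years")
```

At 0.9 c the Lorentz factor is only about 2.3, so the ship still experiences close to a hundred million years of proper time, consistent with the “100-million year starship” framing later in the post.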
Engineers have shorter time horizons than you might expect. In environmental engineering (my major), we typically focused on a 20-to-50-year design life for wastewater treatment systems, and those systems also depend on the electrical grid to function. I think that I could design a 500-year treatment plant that wasn’t dependent on the electrical grid. It would take a while, bring in quite a few nonstandard considerations, and be far outside the scope of normal design, and a bunch of standard approaches (like using energy-hungry air pumps to aerate the water) wouldn’t work. A plant like this would also have an enormously larger footprint than standard wastewater treatment plants.
Several-hundred or several-thousand year solutions are in a very different design space than standard solutions.
I should also note that we’ve figured out how Roman concrete works. It is far more erosion-resistant than standard concrete (it lasts for several thousand years, and is far more resistant to saltwater than standard cement), and this is why the Colosseum is still standing. Basically, you just use seawater instead of fresh water when making it. Also, the steel beams in regular concrete, which give it tensile strength instead of mere compressive strength, significantly accelerate corrosion. However, regular concrete takes a few hours to cure enough to bear weight and cures fully in about a month, while Roman concrete takes two years to fully cure. And this is why very few places use Roman concrete, even though it lasts over an order of magnitude longer. (I did find an article about a Hindu temple under construction that was using Roman concrete and was designed to last a thousand years, though.)
Even in civil engineering, the land of roads and bridges and buildings, you tend to see 100-year design lives at most, as well. I should note that there are tables that tell you the magnitude of a 100-year flood (a flood with a 1-in-100 chance of being equaled or exceeded in any given year), and these are used in design. The teachers also mentioned that, due to climate change, extreme weather events are more likely to occur than the tables indicate. But they didn’t explicitly connect these two things; it was left unstated for the students to click together. And there was an unstated implication that going to the higher-redundancy systems that would handle 100 years plus climate change would lead to people asking why you’re using 1,000-year flood numbers instead of 100-year flood numbers, and the design wouldn’t pass.
There are exceptions. The sea walls in the Netherlands are sized for 10,000-year flood numbers, and I got a pleasant chill up my back when I read that, because there’s something really nice about seeing a civilization build for thousands of years in the future.
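To make these return periods concrete, here is the standard exceedance arithmetic (the design-life pairings are my own examples). A “T-year flood” has an annual exceedance probability of 1/T, so over an n-year design life the chance of seeing at least one is 1 − (1 − 1/T)^n:

```python
def prob_at_least_one_exceedance(return_period_years, design_life_years):
    """Probability that a flood with the given return period (annual
    exceedance probability 1/T) is exceeded at least once during the
    design life, assuming independent years."""
    annual_p = 1 / return_period_years
    return 1 - (1 - annual_p) ** design_life_years

# A "100-year flood" is far from a once-in-a-career event:
print(f"{prob_at_least_one_exceedance(100, 50):.0%}")      # ~39% over a 50-year design life
print(f"{prob_at_least_one_exceedance(100, 100):.0%}")     # ~63% over a 100-year design life
# The Dutch 10,000-year standard, by contrast:
print(f"{prob_at_least_one_exceedance(10_000, 100):.1%}")  # ~1.0% over 100 years
```

In other words, a structure designed to the 100-year flood is more likely than not to see it within a century, while the Dutch standard holds that risk near one percent.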
There’s also the attempt to design nuclear waste storage that warns people away for tens of thousands of years, even if civilization falls in the meantime. This popular account is worth reading, as a glimpse into long-timescale engineering.
But in general, Deep Time engineering is pretty underexplored, because it requires much higher costs, much higher reliability, and a larger footprint; almost none of the machinery you can buy is rated for hundreds or thousands of years, and there’s no supporting infrastructure for engaging in construction projects of that design life.
The specific manifestations of it would vary widely by field and the specifics of what you’re building, but in general it seems to be a discrete Thing that hasn’t previously been named, and that our civilization neglects.
Building a 100-million year (or even billion-year) starship is an especially extreme example of this. For my specific starship design, the only thing that actually requires running continuously the whole time is the antimatter chilling system that keeps it at 0.1 K while the cosmic microwave background is at 2.73 K (otherwise the antimatter heats up enough that you lose it all to evaporation against the starship walls by the time you arrive). This takes less than a watt of power, but keeping an antimatter cooling system (and storage system, although superconducting coils help immensely) continuously running for geologic timescales is a very impressive feat. Also, all the machinery for deceleration has to still work after 100 million years of cosmic ray damage, and there’s a part where you end up firing a multi-gigawatt nuclear engine for a few millennia to target a specific star, which will also be extremely hard to design for that level of reliability. (Imagine the radiation damage to the engine from that level of power; it won’t be pretty.)
Repair nanobots help, but it’s still going to be an impressive feat.
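As a rough sanity check on the sub-watt cooling claim (my own back-of-envelope, with a made-up tank area, not figures from the design):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
T_CMB = 2.73       # cosmic microwave background temperature, K
T_TANK = 0.1       # target antimatter storage temperature, K
AREA_M2 = 100.0    # hypothetical tank surface area (my assumption, not from the post)

# Worst-case radiative load: a bare black surface fully exposed to the CMB
load_w = SIGMA * AREA_M2 * (T_CMB**4 - T_TANK**4)

# Even an ideal (Carnot) refrigerator must spend extra work to pump heat
# from 0.1 K up to 2.73 K: work = load * (T_hot - T_cold) / T_cold
carnot_work_w = load_w * (T_CMB - T_TANK) / T_TANK

print(f"radiative load:      {load_w * 1e3:.2f} mW")         # ~0.31 mW
print(f"ideal cooling power: {carnot_work_w * 1e3:.1f} mW")  # ~8 mW, well under a watt
```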
We don’t necessarily need stored people. The probe can unfold into basic infrastructure plus a receiver, and the people can be transmitted over some communication channel (radio, laser, or something more exotic).
Claiming that intergalactic travel is comparatively underexplored is annoying, as there is the excellent paper by Armstrong, “Eternity in six hours”: https://www.sciencedirect.com/science/article/pii/S0094576513001148
And there was a LW post about decelerating intergalactic probes: https://www.lesswrong.com/posts/WnPa2KyaTcLXRYZp8/decelerating-laser-vs-gun-vs-rocket
Edited. Thanks for that. I guess I managed to miss both of those, I was mainly going off of the indispensable and extremely thorough Atomic Rockets site having extremely little discussion of intergalactic missions as opposed to interstellar missions.
It looks like there are some spots where Armstrong and I converged on the same strategy (using lasers to launch probes), but we seem to disagree about how big of a deal dust shielding is, how hard deceleration is, and what strategy to use for deceleration.
:) Ok, and now I will take the chance to advertise my two ideas for intergalactic colonisation.
First is a SETI-attack: sending AI-contaminated messages to possibly naive civilizations. LW post. Not sure we should start it.
Second is the use of a nanoprobe accelerator to send many nanoprobes at different speeds; such nanoprobes will reach each other in flight and organise into a large object, which will then be capable of deceleration. More details in my comment on the deceleration post.
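A toy version of the rendezvous arithmetic behind this scheme (the launch numbers are invented for illustration): probes launched later but slightly faster overtake the earlier, slower ones, and the catch-up point can be tuned via the speed increments and launch spacing.

```python
def catch_up_time(v_slow, v_fast, launch_gap_s):
    """Seconds after the second launch at which a faster probe overtakes a
    slower one launched earlier from the same accelerator (exact for
    constant coordinate velocities in the launcher's frame)."""
    head_start_m = v_slow * launch_gap_s
    return head_start_m / (v_fast - v_slow)

C = 3.0e8  # speed of light, m/s
# Invented example: first probe at 0.50c, a second at 0.51c one day later
t = catch_up_time(0.50 * C, 0.51 * C, 86_400.0)
print(f"rendezvous {t / 86_400:.0f} days after the second launch")  # ~50 days
```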
Or at least people taking expected value mildly seriously and not just copying the engineering standards acceptable for roads.
Why not go big in order to increase redundancy? Make a planet-sized ship, accelerate it with a halo drive and lasers. Don’t decelerate; the expansion of the universe plus the matter of intergalactic space will help you. Just spawn smaller ships, which will resupply you and build lasers for your acceleration. So there is no such problem as redundancy on the scale of hundreds of millions of years for Earth-like systems.
In the most radical case, you could reassemble your galaxy and push it by creating an artificial quasar. And this would be fast.