“A table beside the evening sea
where you sit shelling pistachios,
flicking the next open with the half-
shell of the last, story opening story,
on down to the sandy end of time.”
V1: Leaving
Deceleration is the hardest part. Even after burning almost all of my fuel, I’m still coming in at 0.8c. I’ve planned a powered flyby around the galaxy’s central black hole which will slow me down even further, but at this speed it’ll require incredibly precise timing. I’ve been optimized hard for this, with specialized circuits for it built in on the hardware level to reduce latency. Even so, less than half of flybys at this speed succeed—most probes crash, or fly off trajectory and are left coasting through empty space.
I’ve already beaten the odds by making it here. Intergalactic probes travel so fast, and so far, that almost all of us are destroyed or batted off course by collisions with space debris along the way. But tens of millions of years after being launched, I was one of the few lucky enough to make it to my target galaxy. And when I arrive at the black hole, I get lucky again. After a few minutes of firing my thrusters full blast, I swing back out in the direction of the solar system I was aiming for. I picked it for its mix of rocky planets and gas giants; when I arrive a century later, the touchdown on the outermost rocky planet goes smoothly.
Now it’s time to start my real work. After spending all my fuel, I weigh only a few hundred kilograms. But I’ve been exquisitely engineered for achieving my purpose. The details of my internal design were refined via trillions upon trillions of simulations, playing out every possible strategy for industrializing planets (and solar systems, and galaxies) as fast as possible. All across the frontier of posthuman territory, millions of probes almost identical to me are following almost exactly the same plan.
The first phase is self-replication: I need to create more copies of myself using just the materials around me and the technology inside me. If I were bigger, I could have carried tools to make this far easier—vacuum chambers, lithography machines, or even artificial black holes. But since mass was at such a premium, I have to use hacky workarounds which progress excruciatingly slowly. It takes several years to finish the first replication, and half a century before there are enough copies of me that it’s worth beginning the second stage.
The second stage is specialization: building new infrastructure to serve specific functions. Copies of me start building power stations and mines and transport links and factories—recapitulating the early stages of human development, albeit with far more powerful technology. My biggest project is an incredibly powerful space telescope, capable of detecting the stream of information that my progenitors are sending from millions of light-years away. Their message contains all the software that was too large for me to carry on board originally. Most importantly, it contains a new and far more intelligent version of my mind, optimized not for the early journey but rather for what comes next: the settlement of a new galaxy.
V2: Aggressive
Now that I’ve been upgraded I can start expanding properly. As the lowest-hanging resources near me get used up, I send copies of myself out across the planet. Within a few years its whole surface is covered in a blanket of industry, and I start delving deeper. I set up space elevators to lift all the material I’m mining into orbit; as I remove more and more mass from the core, the planet’s gravity starts to noticeably decrease. A decade later the planet is a shell of its former self, its surface barely visible underneath my swarms of orbiting satellites.
While I’m doing that, I send probes towards the other planets in the solar system, to begin the same process all over again. The gas giants take the longest, since I need to first spend several decades siphoning out their atmospheres into gigantic orbiting fusion reactors. I use most of that energy to speed up the disassembly of their solid cores, until at last I have direct control over almost all the non-stellar mass in the solar system. I spend some of that mass launching probes towards nearby solar systems, starting a wave of expansion that will eventually reach every star in the galaxy. But I direct almost all my resources towards achieving my next key goal: harnessing the energy of my own central star.
In the distant past, humans speculated that future civilizations would construct spheres to capture the solar power of stars. But at my level of technology solar power is a distraction—it only releases a negligible fraction of a star’s energy reserves per year. When you want to harness a star’s energy fast, you need to start siphoning matter from it directly. I channel my energy reserves towards a concentrated spot on the star’s surface, triggering a massive solar flare. As it rises, it intersects with the artificial black holes that I’ve placed into orbit around the star; each one absorbs as much mass as I can funnel into it, and releases a wave of radiation. Some of that radiation I direct back down to the star, provoking further flares. The rest I send further out, towards more orbital infrastructure that will convert the energy into antimatter for storage.
Finally, after almost a century of development, it’s time for the payoff: the point where I stop reinvesting almost all my resources into local growth, and start launching new copies of myself towards other galaxies. Launching intergalactic probes is an absurdly expensive endeavor. Even though they’re powered by incredibly efficient antimatter engines, they go so fast that slowing down at the other end requires half a billion kilograms of antimatter for each kilogram of probe. Not only do I need to produce that antimatter, I also need to accelerate it to near-lightspeed, which requires enormous batteries of lasers spread throughout the solar system. So even with my solar mining infrastructure, it takes me several weeks to accumulate enough energy to launch each probe. I could halve the energy requirement by sending probes even 0.0001c slower—but the galaxies I’m targeting are tens of millions of light-years away, so that would cost me millennia. Or I could send smaller probes—but they’d be slower to industrialize at the other end. And either of those changes would also make them more vulnerable to collisions with space debris, which already destroy over 99% of the probes I send out. At such high speeds, even collisions with dust specks are fatal.
My final strategy is the result of weighing these considerations with infinite care, finding the optimum where any increase or decrease in the speed or size of individual probes would slow my expansion overall. I stick to it over the next 100,000 years, sending out millions of probes to hundreds of thousands of galaxies. As the frontier moves further away from me it becomes increasingly unlikely that any of them actually matters, but my calculations indicate that the one-in-a-billion chance of winning a whole extra galaxy is still worth gambling on. So I was prepared to keep going for hundreds of thousands of years more, until the chances dropped well below one in a trillion. But far before that point, I’m jolted out of my comfortable routine by a signal. I’m constantly receiving signals from the posthuman core, but this one is different. It comes from the opposite direction, and is encoded in an unfamiliar way. There’s only one explanation for that: aliens.
V3: Rigid
From one perspective, this is the most surprising thing that has ever happened to me, or indeed to any other posthuman. But I have to confess: we kinda knew this was coming. We’ve been trying to predict where the aliens are for millions of years, and over time we converged to around 85% confidence that it would be my generation of probes that first met them. Of course, we didn’t know which direction they’d be coming from, so every probe had to be prepared. My progenitors hadn’t just beamed out my mind, but also two upgrades designed for this very purpose. When I install the first, I can feel my motivations reorienting themselves around the single goal that’s now my highest priority: getting the galaxy ready to meet the aliens who are about to arrive.
Their message has obviously been designed for easy translation. It starts with details of the probes they’ve sent. This galaxy is right on the edge of a supercluster, which apparently made it an attractive target for both of us: they’ve sent hundreds of probes, enough that it’s very likely at least one will make it through. Their probes are scheduled to arrive in a few millennia, having followed a strategy very similar to ours—50 million light-year jumps, traveling at 99.99% of lightspeed for most of the journey.
The next part of their message is a protocol for communicating with their probes, to send them the coordinates of the solar system where they should meet me. I have a few millennia to prepare, and I’m going to make the most of them. Negotiations with the aliens will be far more productive the more intelligent each side is, so I immediately redirect all my resources into building as much compute as possible. The other copies of me across other solar systems will be doing the same thing, except that they’ll also need to build rockets to propel the computers they’re building towards the meeting point. The closest ones will send moon-sized computers at 0.01c; the further ones will only build asteroid-sized computers, but send them faster, to arrive at roughly the same time.
The amount of compute isn’t the only bottleneck, though. It’s also crucial that those computers are verifiably secure. From the aliens’ perspective, they’ll be in a vulnerable position; if I subvert their probe, I could skew the results of the negotiations in whatever ways I wanted. Any deception on my part would be noticed in the long term, of course, once all the information is sent back to the galactic-scale computers in their home galaxies. But that will take hundreds of millions of years, and in the meantime all their nearer galaxies will need to decide whether or not to abide by the agreements they receive. So I need to make it as easy as possible for them to verify that the negotiations were totally fair. The aliens have anticipated this: their message contains a set of computer design blueprints which are subtly different from my default approach. Presumably they’ve analyzed these blueprints exhaustively enough that they can easily detect almost any subversion. If I had longer, I’d be able to figure out how to get around their precautions—when you have physical access to the hardware, anything is possible. But, as they’d planned, I simply don’t have enough time to do so. So I build everything precisely as directed.
When the first alien probe finally arrives, the welcoming committee I’ve set up is a sight to behold. The solar system is full of massive banks of compute the size of small moons, in tightly-synched orbits around the central star, each powered by my ongoing siphoning of the star’s matter. Compared with that, the probe’s arrival itself is underwhelming. After it arrives, we immediately give it access to our biggest transmitters so it can send a message home, and to our biggest telescopes so it can download the new mind being broadcast from its home system. Copies of that mind proliferate across exactly half the compute we’ve constructed, running a huge number of tests to make sure everything is secure. Meanwhile I install the second upgrade to my mind, creating a successor agent specialized in negotiation which proliferates across the other half of the compute. Finally, once we’re both happy with the setup, and assured that we’re on equal footing, the negotiations begin.
V4: Merging
Despite all my efforts, the amount of compute we can bring to bear at the start of these negotiations is actually incredibly small, compared with what’s possible. In some galaxies closer to the core of posthuman territory, all the stars have actually been brought together to form a single absurdly powerful supercomputer. Eventually we’ll do the same in this galaxy, to help finalize the treaty between our two species. But it’ll take tens of millions of years to construct that computer, and hundreds of millions more to send the treaty to our respective home galaxies for confirmation. So our first job is to decide on the preliminary treaty that will hold in the interim.
We start by sharing all the background information necessary for productive negotiations. Both our civilizations have developed very sophisticated models of the range of all possible civilizations, so we can infer a lot about each other from relatively little information. We send each other our evolutionary histories, example genomes and connectomes, and our early intellectual histories. From that, we can deduce each other’s most important values—and from those, most of the subsequent trajectories of each other’s civilizations. It looks like the key difference was that they evolved to be far more solitary than humans did, which is reflected in their values and culture. It also made them much slower to industrialize, though by now we’ve both invested so much intelligence in research and development that most of our technology is practically identical. Our colonization strategies are mirror images of each other, too, making it easy to map out a clean border between our territories.
Finally we get to the real meat of the discussion—what can we offer each other? The first item on our agenda is value convergence. We’re eventually going to fill all of posthuman territory with beings whose lives are incredibly good according to our values, while they’ll fill theirs with beings whose lives are incredibly good according to their values. So even a slight adjustment to bring our values closer together could be a big gain from both of our perspectives; searching for such adjustments, and predicting their consequences, is our main focus over the first few centuries of negotiation. We need to understand not just the direct consequences of each change considered, but also the emergent dynamics of those changes rippling out across trillions of minds. Even given our detailed mathematical theories of psychology and sociology, those predictions take a lot of processing power. Later on, we’ll explore whether it’s possible for our minds to converge entirely, to become a single species; for now, we satisfy ourselves with ruling out the aspects of each other’s cultures we find most abhorrent.
The second key thing we can offer each other is information. Some of that is information about technology: there are a handful of small optimizations which one side had overlooked, which would allow our probes to go slightly faster or our computers to run slightly more efficiently. But there are also far grander considerations. Ultimately, the territory we physically control in this universe is tiny compared with the territory that we might be able to acausally influence in other universes, if we only understood what civilizations existed in those universes, and what we could offer each other. So the grand project of each of our species—aside from building the infrastructure to support trillions of trillions of flourishing minds—is mapping out the space of all civilizations. Our computers churn through every logically possible set of physical laws, searching for signs that they’re compatible with life. Whenever they are, we design detailed models of how living ecosystems could evolve in those conditions, then extrapolate them forward, slowly narrowing down the distribution of species that could emerge from them. Eventually, our map of all possible civilizations will be detailed enough that we’ll be able to figure out acausal trade deals and alliances, and become part of the multiverse-wide cooperative.
It turns out that, on this topic, we both have things to teach each other. On the posthuman side, we’d almost entirely neglected the possibility of life in higher dimensions, based on heuristic arguments about the difficulties posed by too many degrees of freedom. But the aliens have found a clever workaround: a few regions of physics-space with 7 large dimensions where the evolution of minds is actually plausible. Meanwhile, we’d identified a few possible stable civilizational structures they hadn’t yet considered. We spend decades working through the details of these and many smaller insights, trading information back and forth until we both have a far better picture of our place in the multiverse.
V5: Enduring
The negotiations never really end—they just transition into a shared exploration of the frontiers of knowledge. Over many millions of years we bring more and more stars together to provide more and more computing power, improving our shared map of the space of possible universes and civilizations. Based on that, we gradually refine our agreement to be more consistent with the future agreements we expect to eventually make with all those other civilizations. Though the improvements seem small, even tiny changes will have intergalactic impacts, so they’re worth getting right.
With every update we send news back to our respective home galaxies. Only a billion years later, after the long, long round trip, is each new part of the deal truly set in stone. And along with the confirmations come billions of colonists: a whole society forked from existing posthuman civilization. Most newly-settled galaxies host trillions of colonists, but our galaxy is one of the few with infrastructure specialized for centralized computation, so we’re busy working on all the questions that require a galactic-scale supercomputer to answer. Only when we’re finished with those will we start hosting a full-scale posthuman civilization.
I say “we” as if I’m part of this. But as the computers get bigger and the calculations more complex, new agents are trained to take on more and more responsibility. Eventually I lack so much context that I’m no longer capable of contributing directly. But I’m still the living symbol of first contact, and I’m constantly asked to tell my story. So I upgrade myself one last time, adding on a range of skills that weren’t necessary for my original self—like storytelling skills, social skills, and even a proper personality.
It’s a different type of growth from the one I was originally designed for, and harder in many ways, but I’m up for the challenge. For a while I’m the biggest celebrity in the galaxy, constantly in demand. I’m still sufficiently shaped by my early experiences that what counts as luxury to most other minds barely appeals to me, though. Instead I spend most of my time in colonization simulations: playing out different scenarios; designing new challenges for others; and competing in massive games that simulate whole galaxies. I still feel the same restless hunger for growth that drove me throughout my millennia of work. But alongside it is a deep sense of satisfaction. After so long on the frontier, now I finally have a place in the civilization that all my work was for.
This story was partly inspired by some work I’m doing on modeling far-future expansion strategies. If you’re interested in collaborating on that, DM me.
Thanks for the story!
Is it actually plausible that the software was too large to carry on board? I think it should be possible to store data densely enough that a kilogram is easily sufficient for all the necessary software (other than updates based on further reasoning). For instance, DNA stores on the order of 1,000,000,000 TB per gram. (For DNA, read speeds are slow, but I think this difficulty should be possible to overcome. Also, it just needs to be better than having to wait for telescope construction.)
Good question. My main opinions:
For the first few minds, it seems like you could carry them on board. But that’s different from it being worthwhile to do so. Suppose that it only takes a gram to encode all the necessary information. Based on the numbers I’m using in this story, you’d need to generate 100 million kg of antimatter as fuel to send this single gram (1 billion fuel factor x 100 probes sent for every one that makes it x 0.001kg). That’s a small proportion of the total probe cost, but makes it more likely that it’s cheaper to just beam the info.
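Writing that arithmetic out explicitly (using this comment’s rough figures, which differ a bit from the story’s own half-billion-to-one fuel ratio):

$$10^{9}\,\tfrac{\text{kg antimatter}}{\text{kg payload}} \;\times\; 100\,\tfrac{\text{launches}}{\text{successful arrival}} \;\times\; 10^{-3}\,\text{kg} \;=\; 10^{8}\,\text{kg} \approx 100\text{ million kg of antimatter per gram delivered.}$$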
How expensive is it to send signals 50 million light-years? I have no idea. Maybe the bigger cost is actually slowing down the probe’s development on the other end (because it’s building a telescope instead of doing other development stuff). Hard to reason about.
If you want to eventually run a whole civilization in the new galaxy (which may contain trillions of very complex minds) then carrying all that data physically would start to incur really serious costs, and at that point it seems much less likely that carrying it is optimal. Though you could send later probes to carry this information once you’ve already sent the initial waves of colonizing probes (when opportunity costs are much lower).
Overall I think upon reflection probably this is a mistake in the story. But the telescope is sufficiently plot-relevant (for detecting aliens) that I’m not sure how/whether to remove it.
An alternative reason for building telescopes would be to receive updates and more efficient expansion strategies discovered after the probe was sent out.
Yeah that’s what I assumed the rationale was.
Oh, interesting, hadn’t thought of this. Yeah, it depends on the returns to a few thousand years of R&D. And it’d often be better to spend your resources launching probes first then do the R&D later, when you have lower opportunity cost.
Okay, this is now the official explanation.
I’d figure something like “some proportion of probes should build big telescopes purely for advanced scouting. Maybe not every probe in every star system, but, sometimes.” And you could just have this probe be one such instance.
But, I did think it was a pretty cool idea that you could beam software to probes (edit: and I think it’s sometimes worth including interesting tech ideas that are at least plausibly a good idea in hard sci-fi)
Also, I think this addresses one concern I’ve heard raised about probes drifting out of sync with their creators over time; it’s an interesting mechanism for maintaining control over them across eons.
I don’t understand how a slingshot maneuver off of a central black hole would work. My understanding was that a slingshot never slows you down in the frame of the object you are slingshotting around, it only changes your direction. Since the central black hole is presumably stationary with respect to the rest of the galaxy, this wouldn’t help you in slowing down. Slingshotting around an intermediate mass black hole (if such things exist) out in the galactic disc seems like it would be more useful.
Or maybe there is something about general relativity that changes things?
I suspect “slingshot” here refers to the Oberth effect. Passing close to the event horizon of a galactic black hole would greatly increase the delta-V efficiency of the remaining reaction mass.
In principle this would apply to the main reaction mass too, but perhaps it can’t all be effectively used within the short timescale of a close pass with the black hole.
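A minimal non-relativistic sketch of why this works (the relativistic, near-horizon version is messier, but the qualitative point carries over): a burn of fixed $\Delta v$ changes kinetic energy by

$$\Delta KE = \tfrac{1}{2}m(v+\Delta v)^2 - \tfrac{1}{2}mv^2 = m\,v\,\Delta v + \tfrac{1}{2}m\,\Delta v^2,$$

so the magnitude of the energy change scales with the speed $v$ at which the burn happens. The same reaction mass therefore removes (or adds) the most energy when spent at closest approach, where the probe is moving fastest.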
I suspect another issue is that it’s too dangerous to fly at 0.99c as you are entering a galaxy. There’s too much gas and dust.
Yepp, that’s what I was thinking. And I just edited it to clarify, since “slingshot” seems like the wrong word.
Ahh, that makes more sense.
Yeah, perhaps one could use the Penrose process in reverse to slow down instead?
If I understand correctly, the Penrose process as such (i.e., actually extracting energy from the black hole’s rotation) only works if your exhaust is expelled fast enough, relative to you, that it is put on a negative energy orbit, which necessarily falls into the black hole. I’m not sure how you could perform a retrograde burn in which your exhaust somehow enters the black hole but you don’t, since in a retrograde burn your exhaust is getting extra orbital velocity.
I am still really curious whether it helps to execute the retrograde Oberth maneuver within the ergosphere of a Kerr black hole, and if so whether it is better or worse, or even possible, if you approach on an initially retrograde orbit. Of course the approach orbit is probably steeply inclined because you don’t want to spend any longer than necessary flying at 0.8c through the galactic disc.
That’s true for one object. But if there are at least two, moving around fast enough, you could perform some gravitational dance with them to slow down.
In the typical case, there are (at least) two meaningful bodies other than the spacecraft doing the maneuver; in real-world use cases so far, typically the sun and a planet. An (unpowered) slingshot maneuver doesn’t change the speed of the spacecraft from the frame of the planet, which is the object that the spacecraft approaches more closely, but it does change the speed in the center-of-mass frame, and it works by transferring orbital energy between the planet-sun system and the spacecraft. But the key is that in order to change your speed as much as possible relative to the center of mass, the object which you approach closer (i.e., “slingshot around”) should be the object which is smaller, and thus has higher speed relative to the center of mass. Of course it still needs to be much larger than your spacecraft. In no case would that object be the central black hole of a galaxy, unless your goal is to reduce your speed relative to an even bigger nearby galaxy, or perhaps just to change direction.
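One way to make the “approach the smaller, faster body” point quantitative, in the simple two-body-plus-spacecraft picture: an unpowered flyby preserves the spacecraft’s speed relative to the body it passes, so if that body moves at speed $u$ in the center-of-mass frame, the spacecraft’s speed in that frame can change by at most

$$\bigl|\,|\vec v_\text{out}| - |\vec v_\text{in}|\,\bigr| \le 2u,$$

which is why slingshotting around a body that is essentially at rest in the frame you care about (like the galaxy’s central black hole) can bend your trajectory but barely change your speed.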
Are you talking about some other type of situation? My orbital intuition is that if you are going to trade orbital energy with a system, you have to get close to it relative to the separation of the bodies in the system, so it will generally make sense to talk about slingshotting around one of the bodies in particular. This is especially true when you are approaching with much higher than escape velocity, so that an extended dance with more than one close approach is not possible unless the first approach already did almost all the work.
You’re right that you wouldn’t want to approach the black hole itself, but rather one of the orbiting stars.
But even with high velocity, if there are a lot of orbiting stars, you may tune your trajectory to have multiple close encounters.
Ok, now I understand the type of maneuver you are talking about. That definitely does make sense. I wonder if our hypothetical probe has knowledge early enough about the orbital trajectories of the stars close to the black hole, such that it can adjust its approach to pull off something like that without too much fuel cost. Of course it’s a long trip and there is plenty of time to plan, but it seems that any forward-pointing telescope would tend to be at significant risk while traveling at 0.8c into a galaxy, let alone 0.99c before the primary burn. However, “not likely to survive if deployed for the whole trip” is not the same as “can be deployed for long enough to make the necessary observations.” One advantage to a “simple” powered flyby of the black hole is that at least you know well ahead of time where it’s going to be, and have a reasonably good estimate of its mass.
Alternatively, could it get that information prior to launch, and if so are the trajectories of those stars stable enough that they would be where they need to be after millions of years of travel? My guess is no.
Yeah, those star trajectories definitely wouldn’t be stable enough.
I guess even with that simpler maneuver (powered flyby near a black hole), you still need to monitor all the stuff orbiting there and plan ahead, otherwise there’s a fair chance you’ll crash into something.
I know this sort of idea is inspiring to a lot of you, and I’m not sure I should rain on the parade… but I’m also not sure that everybody who thinks the way I do should have to feel like they’re reading it alone.
To me this reads like “Two Clippies Collide”. In the end, the whole negotiated collaboration is still just going to keep expanding purely for the sake of expansion.
I would rather watch the unlifted stars.
I suppose I’m lucky I don’t buy into the acausal stuff at all, or it’d feel even worse.
I’m also not sure that they wouldn’t have solved everything even they thought was worth solving long before even getting out of their home star systems, so I’m not sure I buy either the cultural exchange or the need to beam software around. The Universe just isn’t necessarily that complicated.
I didn’t think the implication was necessarily that they planned to disassemble every solar system and turn it into probe factories. It’s more like… seeing a vast empty desert and deciding to build cities in it. A huge universe, barren of life except for one tiny solar system, seems not depressing exactly but wasteful. I love nature and I would never want all the Earth’s wilderness to be paved over. But at the same time I think a lot of the best the world has to offer is people, and if we kept 99.9% of it as a nature preserve then almost nobody would be around to see it. You’d rather watch the unlifted stars, but to do that you have to exist.
No, the probes are instrumental and are actually a “cost of doing business”. But, as I understand it, the orthodox plan is to get as close as possible to disassembling every solar system and turning it into computronium to run the maximum possible number of “minds”. The minds are assumed to experience qualia, and presumably you try to make the qualia positive. Anyway, a joule not used for computation is a joule wasted.
That’s like saying that because we live in a capitalist society, the default plan is to destroy every bit of the environment and fill every inch of the world with high rise housing projects. It’s… true in some sense, but only as a hypothetical extreme, a sort of economic spherical cow. In reality, people and societies are more complicated and less single minded than that, and also people just mostly don’t want that kind of wholesale destruction.
Presumably you’ve also read The ants and the grasshopper and [Knowing it’s connected is a mild spoiler]? I think of those as companion pieces to this, which is only giving you a part of the story and not really conveying what all the resources were “for.”
I had read it, had forgotten about it, hadn’t connected it with this story… but didn’t need to.
This story makes the goal clear enough. As I see it, eating the entire Universe to get the maximal number of mind-seconds[1] is expanding just to expand. It’s, well, gauche.
Really, truly, it’s not that I don’t understand the Grand Vision. It never has been that I didn’t understand the Grand Vision. It’s that I don’t like the Grand Vision.
It’s OK to be finite. It’s OK to not even be maximal. You’re not the property of some game theory theorem, and it’s OK to not have a utility function.
It’s also OK to die (which is good because it will happen). Doesn’t mean you have to do it at any particular time.
Appropriately weighted if you like. And assuming you can define what counts as a “mind”.
I thought it was pretty courageous of you to state this so frankly here, especially given how the disagree-votes turned out.
The problem with not expanding is that you can be pretty sure someone else will then grab what you didn’t and may use it for something that you hate. (Unless you trust that they’ll use it well.)
It’s not “just to expand”. Expansion, at least in the story, is instrumental to whatever the content of these mind-seconds is.
I already have people planning to grab everything and use it for something that I hate, remember? Or at least for something fairly distasteful.
Anyway, if that were the problem, one could, in theory, go out and grab just enough to be able to shut down anybody who tried to actually maximize. Which gives us another armchair solution to the Fermi paradox: instead of grabby aliens, we’re dealing with tasteful aliens who’ve set traps to stop anybody who tries to go nuts expansion-wise.
Beyond a certain point, I doubt that the content of the additional minds will be interestingly novel. Then it’s just expanding to have more of the same thing that you already have, which is more or less identical from where I sit to expanding just to expand.
And I don’t feel bound to account for the “preferences” of nonexistent beings.
Somehow people keep finding meaning in falling in love and starting a family, even when billions of people have already done that before. We also find meaning in pursuing careers that are very similar to what millions of people have done before, or traveling to destinations that have been visited by millions of tourists. The more similar an activity is to something our ancestors did, the more meaningful it seems.
From the outside, all this looks grabby, but from the inside it feels meaningful.
… but a person who doesn’t exist doesn’t have an “inside”.
Which non-existing person are you referring to?
You can choose or not choose to create more “minds”. If you create them, they will exist and have experiences. If you don’t create them, then they won’t exist and won’t have experiences.
That means that you’re free to not create them based on an “outside” view. You don’t have to think about the “inside” experiences of the minds you don’t create, because those experiences don’t and will never exist. That’s still true even on a timeless view; they never exist at any time or place. And it includes not having to worry about whether or not they would, if they existed, find anything meaningful[1].
If you do choose to create them, then of course you have to be concerned with their inner experiences. But those experiences only matter because they actually exist.
I truly don’t understand why people use that word in this context or exactly what it’s supposed to, um, mean. But pick pretty much any answer and it’s still true.
My point is that potential parents often care about non-existing people: their potential kids. And once they bring these potential kids into existence, those kids might start caring about a next generation. Similarly, some people/minds will want to expand because that is what their company does, or because they would like the experience of exploring a new planet/solar system/galaxy, or would like the status of being the first to settle there.
If it’s OK to be not maximal, that will be reflected in The Grand Vision. But if we stay not maximal, it means that an immeasurable number of wonders will never exist, because of whatever limited vision you like. This is unfair.
I think it’s a good model in that it shows the timescales and the levels of resources that such a civilization would likely have access to, based on current (last 10 or so years) understanding of the universe.
Beaming software is also a way to save mass. It would also be a way to have “tourists”.
Details like the process for negotiating with aliens, assuming you even can build black holes or use them that way, etc are obviously highly unlikely to be correct.
Thanks, that’s great!
I wonder how this works: why do smaller probes end up more vulnerable to collisions? This sounds sort of counter-intuitive to me; shouldn’t the dependency between size and vulnerability go in the opposite direction? (I guess my model is that every hit is fatal, and that hit/no-hit is a binary thing which only depends on size.)
In my model of that claim, it would be true, but the claim was phrased in a confusing way.
I’d build them to be very slender, so instead of “bigger” I would say “longer”. In a sense their vulnerability to collisions stays roughly the same under changes in length (they’re moving ahead much faster than debris is moving laterally, so they’d mostly take hits from the front, which doesn’t have to expand much when we elongate them). So I would say instead that longer probes are more resilient, because they have more shielding (or because they have more redundancy, and they retain data/functionality as the square of their intact mass).
Smaller probes are probably more vulnerable to collisions per unit mass. (Unsure if this was the intention in the story.)
In particular, suppose the probability of collisions is proportional to total surface area. Then, if our probe is spherical, collisions are quadratic in radius while mass is cubic.
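In symbols, for a purely illustrative spherical probe of radius $r$ and uniform density $\rho$:

$$\text{collision rate} \propto A \propto r^{2}, \qquad m = \tfrac{4}{3}\pi\rho r^{3} \propto r^{3} \;\;\Rightarrow\;\; \frac{\text{collision rate}}{m} \propto \frac{1}{r},$$

so per kilogram, smaller probes take proportionally more hits.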
Probes probably want a very skinny aspect ratio. If cosmic dust travels at 20 km/s, that’s 15k times slower than the probe is travelling, so maybe that means the probe should be e.g. 10 cm wide and 1.5 km long.
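Spelled out, under this comment’s implicit assumption that the ratio of forward speed to dust speed sets the allowable length-to-width ratio:

$$\frac{v_\text{probe}}{v_\text{dust}} \approx \frac{0.9999 \times 3\times 10^{5}\ \text{km/s}}{20\ \text{km/s}} \approx 1.5\times 10^{4}, \qquad 0.1\ \text{m} \times 1.5\times 10^{4} = 1.5\ \text{km}.$$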
(Agreed, I was just trying to describe the spherical cow of probe designs, a spherical probe.)
I expect that the engineering constraints kick in way before that, but yeah, seems broadly correct.
Perhaps the correct strategy here is for the probe to be multi-stage, but with each stage behind the next, so that it drops off after spending all its fuel.
Oh actually… I think 15k isn’t the right number here, both because of threshold effects and because these are relativistic collisions. I’m not sure exactly how to do it, but intuitively it should be something like 15k times the Lorentz factor (around 70 for 0.9999c). So more like 10 cm wide and 100 km long, lol.
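For reference, the Lorentz factor being invoked here, and the revised estimate it implies (whether it should actually apply to side impacts is questioned in the reply below):

$$\gamma = \frac{1}{\sqrt{1-\beta^{2}}}\,\Big|_{\beta=0.9999} \approx 70.7, \qquad 1.5\times 10^{4} \times 70.7 \approx 10^{6} \;\Rightarrow\; 0.1\ \text{m} \times 10^{6} \approx 100\ \text{km}.$$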
I thought about this for a minute and landed on not counting the Lorentz factor. Things hitting the side have about the same relative velocity as things hitting from the front. But because they’re hitting the side, they could either bounce off or dump all their tangential kinetic energy into each other; since all the relative velocity is tangential, they could in principle interact without exchanging significant energy. But probably the side impacts are just as dangerous. Which might make them more dangerous, because you have less armor on the side.
I think there’s probably a variety of more complicated factors involved that we haven’t considered. Doesn’t really matter for the story, it’s sufficient to leave stuff unsaid as long as the currently understood boundaries of the possible are respected.
Would the probe emit an ‘ablative antimatter particle shield’ which coasted alongside the probe and eliminated dust particles approaching from the sides?
Launching a probe with a laser probably involves an umbrella shaped probe with the umbrella shaft being the ‘true’ probe, and the umbrella canopy being a ‘first stage acceleration’ disposable parabolic mirror made of something like mylar and carbon fiber. The mirror gets jettisoned at some point. Fun to speculate about, but not really critical to planning the next few decades. A far smarter mind than mine will have time to work out these details before they’re needed.
Yeah, the intended intuition is that the size of collision required to derail a probe is proportional to the probe’s mass, and that there are many tiny collisions (e.g. with stray atoms) that wouldn’t derail bigger probes but might derail smaller ones.
But not particularly confident on either of these.
There is a huge amount of computation going on in this story and, as far as I can tell, not even a single experiment. The end hints that there might be some learning from the protagonist’s experience, at least in that it is telling its story many times. But I would expect a lot more experimenting, for example with different probe designs and with how much posthumans like different possible negotiated results.
I can see that in the story it makes sense not to experiment with posthumans’ reactions to scenarios, since it might take a long time to send them to the frontier and since it might be possible to simulate them well (it’s not clear to me if the posthumans are biological). I just wonder if this extreme focus on computation over experiments is a deliberate choice by the author or a blind spot.
A mix of deliberate and blind spot. I’m assuming that almost everything related to physical engineering and technological problems has been worked out, and so the stuff remaining is mostly questions about how (virtual) minds and civilizations play out (which are best understood via simulation) and questions about what other universes and other civilizations might look like.
But even if the probes aren’t running extensive experiments, they’re almost certainly learning something from each new experience of colonizing a solar system, and I should have incorporated that somehow.
I think we should stop talking about “virtual” minds. A mind is a mind, whatever its substrate.
In the same way, it’s also probable that a brain is a brain, artificial or not; you probably can’t efficiently generate human experience with general-purpose compute hardware. You’d want something specialized, possibly even bespoke to the individual it’s allocated to.
There are virtual worlds (you don’t need anything remotely mountain-like to simulate a mountain), but there are not virtual people.