Their message contains all the software that was too large for me to carry on board originally.
Is this actually plausible? I think it should be possible for data to be so dense that a kilogram is easily sufficient for all the necessary software (other than updates based on further reasoning). For instance, DNA can store on the order of 1,000,000,000 TB in a gram. (DNA read speeds are slow, but I think this difficulty should be possible to overcome. Also, it just needs to be better than having to wait for telescope construction.)
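For scale, that figure works out to
$$10^{9}\ \text{TB per gram} = 10^{21}\ \text{bytes per gram} \approx 1\ \text{ZB per gram},$$
so a kilogram at that density holds on the order of $10^{24}$ bytes.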
Good question. My main opinions:

For the first few minds, it seems like you could carry them on board. But that's different from it being worthwhile to do so. Suppose it only takes a gram to encode all the necessary information. Based on the numbers I'm using in this story, you'd need to generate 100 million kg of antimatter as fuel to send that single gram (1 billion fuel factor × 100 probes sent for every one that makes it × 0.001 kg). That's a small proportion of the total probe cost, but it makes it more likely that it's cheaper to just beam the info.
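Spelled out, that's
$$\underbrace{10^{9}}_{\text{fuel factor}} \times \underbrace{100}_{\text{probes per arrival}} \times \underbrace{0.001\ \text{kg}}_{\text{payload}} = 10^{8}\ \text{kg}.$$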
How expensive is it to send signals 50 million light-years? I have no idea. Maybe the bigger cost is actually slowing down the probe’s development on the other end (because it’s building a telescope instead of doing other development stuff). Hard to reason about.
If you want to eventually run a whole civilization in the new galaxy (which may contain trillions of very complex minds), then carrying all that data physically would start to incur really serious costs, and at that point it seems much less likely that carrying it is optimal. Though you could send later probes to carry this information once you've already sent the initial waves of colonizing probes (when opportunity costs are much lower).
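To get a feel for the scaling, here's a rough back-of-the-envelope sketch. The bytes-per-mind figure is purely an illustrative assumption; the density, fuel factor, and redundancy numbers are the ones used above.

```python
# Back-of-the-envelope: payload and fuel mass for physically carrying a
# civilization's worth of mind-data, using the numbers from this thread
# plus one illustrative assumption.

BYTES_PER_MIND = 1e18    # ASSUMPTION: ~1 exabyte per very complex mind (illustrative only)
NUM_MINDS = 1e12         # "trillions of very complex minds" (from the comment above)
BYTES_PER_GRAM = 1e21    # DNA-level density quoted earlier (10^9 TB per gram)
FUEL_FACTOR = 1e9        # kg of antimatter per kg of payload (story's number)
REDUNDANCY = 100         # probes launched per probe that arrives (story's number)

data_bytes = BYTES_PER_MIND * NUM_MINDS
payload_kg = data_bytes / BYTES_PER_GRAM / 1000   # grams -> kg
fuel_kg = payload_kg * FUEL_FACTOR * REDUNDANCY

print(f"payload: {payload_kg:.0e} kg, antimatter fuel: {fuel_kg:.0e} kg")
# With these inputs: payload ~1e+06 kg, fuel ~1e+17 kg -- far beyond the
# single-gram case, which is the "really serious costs" point.
```

Swapping in a different per-mind size just scales both outputs linearly, so the qualitative conclusion doesn't hinge on that assumption.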
Overall, on reflection, I think this is probably a mistake in the story. But the telescope is sufficiently plot-relevant (for detecting aliens) that I'm not sure how/whether to remove it.
An alternative reason for building telescopes would be to receive updates and more efficient expansion strategies discovered after the probe was sent out.

Yeah, that's what I assumed the rationale was.

Oh, interesting, hadn't thought of this. Yeah, it depends on the returns to a few thousand years of R&D. And it'd often be better to spend your resources launching probes first, then do the R&D later, when you have lower opportunity cost.

Okay, this is now the official explanation.
I’d figure something like “some proportion of probes should build big telescopes purely for advanced scouting. Maybe not every probe in every star system, but, sometimes.” And you could just have this probe be one such instance.
But, I did think it was a pretty cool idea that you could beam software to probes (edit: and I think it's sometimes worth including interesting tech ideas that are at least plausibly a good idea in hard sci-fi).
Also, I think this addressed one concern I’ve heard raised about probes drifting out of sync with their creators over time, and this was an interesting mechanism for maintaining control over them over eons.
Thanks for the story!