The above dealt primarily with the first half of your post, but let me also address the 2nd half. You’ve assigned several probability estimates to various outcomes of our civilization:
Collapse/Extinction: “in the 1% to 50% range.” I’m inclined to agree with you on this one, as described in the last paragraph of my above post.
Biological/Mixed Civilization: “This scenario is almost not worth mentioning: prior < 1%” I think you’ve defined this a bit too narrowly. We don’t yet see any limiting factor for AI advancement besides physics, but that doesn’t mean one won’t make itself apparent. Maybe this factor will turn out to be raw compute (i.e., limited by Moore’s law), or energy (limited by our energy production capacity), or even matter (limited by the amount of rare earth elements needed to make computronium). But it could also happen that we fail to make a superintelligence at all, or that AI eventually achieves most, but not all, of human mental abilities. The likelihood of developing a general intelligence increases asymptotically with time, but I think it would be a mistake to assume that it is increasing asymptotically toward 1. It could easily be approaching 0.8 or some other value that is hard to calculate. The existence of the human mind shows that consciousness can be built out of atoms, but not necessarily that it can be built out of a string of transistors, or that it is simple enough that we can ever understand it well enough to reproduce it in code. There’s also the existential risk of developing a flawed AI. We only have one shot at it, and the evidence seems to be against getting it right on the first try. I suspect that the supermajority of civilizations that develop AIs develop flawed ones. Even if 90% develop an AI before going to the stars, perhaps >99.9999% are wiped out by a poorly designed AI. This would lead to many more “Biological/Mixed Civilizations” than AI civilizations, if the flawed AIs tend to wipe themselves out or fail to spread out into the universe.
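To make the asymptote point concrete, here is a minimal sketch of a toy saturating-hazard model, where the cumulative probability of developing a general intelligence approaches a ceiling p_max that may sit well below 1. The p_max and rate values are purely illustrative assumptions, not estimates:

```python
# Toy model: cumulative probability of developing AGI by time t,
# saturating at an asymptote p_max that may be well below 1.
# p_max and rate are illustrative placeholders, not estimates.
import math

def p_agi_by(t_years, p_max=0.8, rate=1.0 / 200.0):
    """Cumulative probability of AGI within t_years: p_max * (1 - e^(-rate * t))."""
    return p_max * (1.0 - math.exp(-rate * t_years))

for t in (50, 200, 1000, 10000):
    print(f"t = {t:>5} yr: P(AGI) = {p_agi_by(t):.4f}")  # approaches 0.8, never 1
```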
PostBiological Warm-tech AI Civilization: “I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.” This seems slightly low to me, but not by much. “This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.” Although this state doesn’t follow from energy being a limiting factor (i.e., biological/mixed civilizations may also be energy limited), I agree that such a civilization would eventually become energy limited. I see two ways of solving this: better harvesting (i.e., Dyson swarms, since Dyson spheres are likely mass-limited) or a broader civilization (if it takes less energy to send a colony to the nearest star, then you do that before you start building a Dyson swarm).
From Warm-tech to Cold-tech: This seems to be where you are putting the majority of your probability mass. I’d probably put less, but that’s not actually my main contention. I don’t buy that this is sufficient reason to travel to the interstellar medium, away from such a ready energy and matter source as a solar system. You list three reasons: lower-energy bit erasures, superconductivity, and quantum computer efficiency. Bit erasure costs seem like they would be more than made up for by a surplus of energy available from plentiful solar power, materials for fusion plants, etc. Only a few superconductors require temperatures below ~50 Kelvin, and you can get that anywhere perpetually shaded from the sun, such as the craters at the north and south poles of the moon (~30 Kelvin). If you want it somewhere else, stop an asteroid from spinning and build a computer on the dark side. I’m not sure that quantum computers need to be below that either. Anywhere you go, you’ll still be heated by cosmic microwave background radiation to ~4 K. Is an order of magnitude decrease in temperature really worth several orders of magnitude decrease in energy/matter harvesting ability? In order to expand exponentially, such a system would still need huge amounts of matter for superconductors and whatever else.
I’m inclined to agree with you on this one, as described in the last paragraph of my above post.
I should have pointed out that even a high probability of collapse is unlikely to act as a filter, because collapse would have to be convergent across nearly all civilizations—a single surviving civ can still colonize the galaxy.
From Warm-tech to Cold-tech: This seems to be where you are putting the majority of your probability mass.
It is where I am putting most of my prior probability mass. There are three considerations:
Engineering considerations—the configurations which maximize computation are those where the computational mass is far from heat sources such as stars, which limit computation. With reversible computing, energy is unlikely to be a constraint at all, and the best use of available mass probably involves ejecting the most valuable mass out of the system.
Stealth considerations—given no radical new physics, it appears that stealth is the only reliable way to protect a civ’s computational brains. Any civ hanging out near a star would be a sitting duck.
Simulation argument selection effects—discussed elsewhere, but basically the coldtech scenario tends to maximize the creation of simulations which produce observers such as ourselves.
After conditioning on observations of the galaxy to date, the coldtech scenario contains essentially all of the remaining probability mass. Of course, our understanding of physics is incomplete, and I didn’t have time to list all of the plausible models for future civs. There is the transcension scenario, which is related to my model of coldtech civs migrating away from the galactic disk.
One other little thing I may have forgotten to mention in the article: the distribution of dark matter is that of a halo, which is suspiciously close to what one would expect in the expulsion scenario, where elder civs are leaving the galaxy in directions away from the galactic disk. Of course, that effect is only relevant if a good chunk of the dark matter is usable for computation.
Bit erasure costs seem like they would be more than made up for by a surplus of energy available from plentiful solar power, materials for fusion plants, etc.
No—I should have elaborated on the model more, but the article was already long.
Given some planemo (asteroid, moon, planet, whatever) of mass M, we are concerned with maximizing the total quantity of computation, in ops, over the future that we can extract from that mass M.
If high-tech reversible/quantum computing is possible, then the designs which maximize total computation are all temperature limited, due to Landauer’s limit.
Now there are actually many constraints to consider. There is a structural constraint that even if your device creates no heat, there is a limit to the ops/s achievable by one molecular transistor—and this is actually also related to Landauer’s principle. Whether the computer is reversible or not, it still requires about 100 kT of energy per reliable bitop—the difference is that the irreversible computer converts that energy into heat, whereas the reversible design recycles it.
If reversible/quantum computing is possible, then there is no competition—the reversible designs will scale to enormously higher computational densities (that would result in the equivalent of nuclear explosions if all of those bits were erased).
Temperature then becomes the last key thing you can optimize for, as the background temperature limits your effective cooling capability.
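To put rough numbers on why temperature is the thing left to optimize: the Landauer bound on an erased bit is kT ln 2, linear in the ambient temperature, so each factor-of-ten drop in T buys a factor of ten in erasure efficiency. A minimal sketch, using the temperatures mentioned in this thread (the ~100 kT per reliable bitop figure above is this same quantity scaled by a reliability factor):

```python
# Sketch: Landauer bound per erased bit, E_min = k * T * ln(2),
# at the ambient temperatures discussed in this thread. The bound
# scales linearly with T, so a colder background directly multiplies
# the efficiency of whatever bit erasures remain unavoidable.
from math import log

K_BOLTZMANN = 1.380649e-23  # J/K

def landauer_joules_per_bit(temp_kelvin):
    """Minimum dissipation per irreversibly erased bit at temperature T."""
    return K_BOLTZMANN * temp_kelvin * log(2)

for label, temp in [("room temp", 300.0), ("superconductor", 50.0),
                    ("lunar crater", 30.0), ("CMB floor", 4.0)]:
    e = landauer_joules_per_bit(temp)
    ratio = landauer_joules_per_bit(300.0) / e
    print(f"{label:>14} ({temp:5.0f} K): {e:.3e} J/bit ({ratio:.0f}x cheaper than 300 K)")
```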
Anywhere you go, you’ll still be heated by cosmic microwave background radiation to ~4 K. Is an order of magnitude decrease in temperature really worth several orders of magnitude decrease in energy/matter harvesting ability?
Well—assuming that really powerful reversible computing is possible, then the answer—rather obviously—is yes.
But again energy harvesting is only necessary if energy is a constraint, which it isn’t in the coldtech model.
Why not just build an inferior computer design that only achieves 10% of the maximum capacity? Intelligence requires computation. As long as there exists some reasonably low-energy technique for ejecting from the solar system, it results in a large payoff multiplier. Of course you can still leave a bunch of stuff in the system, and perhaps even have a form of a supply line—although that could reduce stealth and add risk.
There is admittedly a lot of hand waving going on in this model. If I had more time I would develop a more accurate model focusing on some of the key unknowns.
One key variable is the maximum practical reversibility ratio, which is the ratio of bitops of computation per bitop erased. This determines the maximum efficiency gain from reversible computing. Physics doesn’t appear to have a hard limit for this variable, but there will probably be engineering limits.
For example, an advanced civ will at the very least want to store its observational data from its sensors in a compressed form, which implies erasing some minimal number of bits. But if you think about a big civ occupying a sphere, the input bits/s coming in from a few sparse sensor ports on the surface is going to be incredibly tiny compared to the bitop/s rate across the whole volume.
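The surface-vs-volume intuition is easy to quantify: if erased bits scale with a sphere’s surface area (compressed sensor input) while bitops scale with its volume, the reversibility ratio the civ actually needs grows linearly with its radius. A back-of-envelope sketch, with deliberately made-up placeholder densities:

```python
# Back-of-envelope: implied reversibility ratio (bitops per erased bit)
# for a spherical civ whose computation scales with volume while its
# erasures scale with sensor input through the surface. The two density
# parameters are made-up placeholders; only the linear-in-R scaling matters.
from math import pi

def implied_reversibility_ratio(radius_m,
                                ops_per_m3=1e40,          # assumed volumetric bitop rate
                                input_bits_per_m2=1e20):  # assumed surface sensor bit rate
    volume_ops = (4.0 / 3.0) * pi * radius_m**3 * ops_per_m3
    surface_bits = 4.0 * pi * radius_m**2 * input_bits_per_m2
    return volume_ops / surface_bits  # = (R / 3) * (ops_per_m3 / input_bits_per_m2)

for r in (1e3, 1e6, 1e9):  # km-scale probe up to planet-scale arcilect
    print(f"R = {r:.0e} m: required bitops per erased bit ~ {implied_reversibility_ratio(r):.1e}")
```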
First, let me try to summarize your position formally. Please let me know if I’m misrepresenting anything. We seem to be talking past each other on a couple subtopics, and I thought this might help clear things up.
1 p(type III civilization in Milky Way) ≈ 1
1.1 p(reversible computing | type III civilization in Milky Way) ≈ 0.9
1.1.1 p(¬(energy or mass limited) | reversible computing) ≈ 1
1.1.1.1 p(interstellar space | ¬(energy or mass limited)) is large
1.1.1.2 p(intergalactic space | ¬(energy or mass limited)) is very large
1.1.1.3 p((interstellar space ↓ intergalactic space) | ¬(energy or mass limited)) ≈ 0
1.1.2 p(energy or mass limited | reversible computing) ≈ 0
1.2 p(¬reversible computing | type III civilization in Milky Way) ≈ 0.1
2 p(¬type III civilization in Milky Way) ≈ 0
Note that 1.1.1.1 and 1.1.1.2 are not mutually exclusive, and that ↓ is the joint denial / NOR boolean operator. Personally, after talking with you about this and reading through the reversible computing Wikipedia article (which I found quite helpful), my estimates have shifted up significantly. I originally started to build my own probability tree similar to the one above, but it quickly became quite complex. I think the two of us are starting out with radically different structures in our probability trees. I tend to presume that the future has many more unknown factors than known ones, and so is fundamentally difficult to predict with any certainty, especially far out.
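For what it’s worth, here is the tree above transcribed into a few lines of code so the joint probabilities can be sanity-checked. The ≈1 and ≈0 entries are replaced with stand-in values, and the NOR at 1.1.1.3 is computed assuming independence purely for illustration (as noted, 1.1.1.1 and 1.1.1.2 are not mutually exclusive):

```python
# Sketch of the probability tree as explicit numbers. The values are
# rough stand-ins for the "≈1" / "≈0" / "large" entries above, not claims.
p_type3 = 0.99          # 1: type III civilization in Milky Way
p_reversible = 0.9      # 1.1: reversible computing | type III
p_unconstrained = 0.99  # 1.1.1: not energy/mass limited | reversible
p_interstellar = 0.8    # 1.1.1.1: "large"
p_intergalactic = 0.95  # 1.1.1.2: "very large"

# 1.1.1.3: NOR of the two expansion modes, assuming independence
# (illustrative only; the two propositions are not mutually exclusive).
p_nor = (1 - p_interstellar) * (1 - p_intergalactic)

print(f"P(type III & reversible & unconstrained) ~ {p_type3 * p_reversible * p_unconstrained:.3f}")
print(f"P(neither interstellar nor intergalactic | unconstrained) ~ {p_nor:.3f}")
```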
The only thing we know for sure is the laws of physics, so we can make some headway by presuming that one specific barrier is the primary limiting factor of an advanced civilization, and seeing what logical conclusions we can draw from there. That’s why I like your approach so much; before reading it I hadn’t really given much thought to civilizations limited primarily by things like Landauer’s limit rather than energy or raw materials. However, without knowing their utility function, it is difficult to know for sure which limits will be their biggest concern. It’s not even certain that such a civilization would have a single unified utility function, though that seems likely.
If I were in the 18th century trying to predict what the 21st century would be like, even if I were a near-perfect rationalist, I would almost certainly get almost everything wrong. I would see limiting factors like transportation and food. From this, I might presume that massive numbers of canals, rather than the automobile, would address the need for trade. I would also presume that food limited population growth, and might hypothesize that once we ran out of land to grow food we would colonize the oceans with floating gardens. The 18th-century notion of a type I civilization would probably be one that farmed the entire surface of a planet, rather than one that harvested all solar energy. The need for electricity was not apparent, and it wasn’t clear that the industrial revolution would radically increase crop yields. Perhaps fusion power will make electricity use a non-issue, or perhaps ColdTech will decrease demand to the point where it is a non-issue. These are both reasonably likely hypotheses in a huge, mostly unexplored hypothesis space.
But let’s get to the substance of the matter.
1 and 2: I tried to argue for a substantially lower p value here, and I see that you responded, so I’ll answer on that fork instead. This comment is likely to be long enough as is. :)
1.1 and 1.2: I definitely agree with you that a sufficiently advanced civilization would probably have ColdTech, but among many, many other technologies. It’s likely to be a large fraction of the mass of all their infrastructure, but I’m not sure it would be a supermajority. This would depend to a large degree on unknown unknowns.
1.1.1 and 1.1.2: I’m inclined to agree with you that ColdTech technology itself isn’t particularly mass or energy limited. You had this to say:
Engineering considerations—the configurations which maximize computation are those where the computational mass is far from heat sources such as stars, which limit computation. With reversible computing, energy is unlikely to be a constraint at all, and the best use of available mass probably involves ejecting the most valuable mass out of the system.
I would still think that manufacturing and ejecting ColdTech is likely to be extremely mass and energy intensive. If the civilization expands exponentially, limited only by its available resources, the observable effects would look much like those of other forms of advanced civilizations. Are you arguing that they would stay quite small for the sake of stealth? If so, wouldn’t it still make sense to spread out as much as possible, via as many independent production sites as possible? You touch on this briefly:
As long as there exists some reasonably low-energy technique for ejecting from the solar system, it results in a large payoff multiplier. Of course you can still leave a bunch of stuff in the system, and perhaps even have a form of a supply line—although that could reduce stealth and add risk.
I don’t see any reason not to just keep sending material out in different directions. Perhaps this is the underlying assumption that caused us to disagree, since I didn’t make the distinction between manufacturing being mass/energy limited and the actual computation being mass/energy limited. When you say that such a civilization isn’t mass/energy limited, are you referring to just the ColdTech, or the production too?
It seems like you could just have the ejected raw materials/ColdTech perform a course correction and series of gravity assists based on the output from a random number generator, once they were out of observational distance from the origin system. This would ensure that no hostile forces could determine their location by finding the production facility still active. Instead of a handful of hidden colonies, you could turn a sizable fraction of a solar system’s mass, or even a galaxy’s mass, into computronium.
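To pin down the one technical step in this idea, here is a minimal sketch of drawing an unpredictable, uniformly distributed heading for the post-ejection course correction. The `secrets` seed is a stand-in for whatever physical entropy source the craft would actually use:

```python
# Sketch: sample a heading uniformly from the unit sphere by normalizing
# a 3D Gaussian draw, seeded from strong entropy. `secrets` stands in
# for a physical randomness source; the sampling trick itself is standard.
import math
import random
import secrets

def random_heading():
    """Return a unit vector drawn uniformly from the sphere."""
    rng = random.Random(secrets.randbits(128))
    x, y, z = (rng.gauss(0.0, 1.0) for _ in range(3))
    norm = math.sqrt(x * x + y * y + z * z)
    return (x / norm, y / norm, z / norm)

print(random_heading())  # e.g. (0.12, -0.77, 0.63)
```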
Hmm, I’m not sure what to make of your probability tree yet... but in general I don’t assign such high probabilities to any of these models/propositions. Also, I’m not sure what a type III civilization is supposed to translate to in the cold dark models that are temperature constrained rather than energy constrained. I guess you are using it to indicate how much of the galaxy’s usable computronium mass is colonized?
It is probably unlikely that even a fully colonized galaxy would have a very high computronium ratio: most of the mass is probably low value and not worth bothering with.
That’s why I like your approach so much; before reading it I hadn’t really given much thought to civilizations limited primarily by things like Landauer’s limit rather than energy or raw materials
Thanks. I like your analogies with food and other early resources. Energy is so fundamental that it will probably always constrain many actions (construction still requires energy, for example), but it isn’t the only constraint, and not necessarily the key constraint for computation.
I would still think that manufacturing and ejecting ColdTech is likely to be extremely mass and energy intensive.
Yes—agreed. (I am now realizing ColdTech really needs a better name)
If the civilization expands exponentially, limited only by its available resources, the observable effects would look much like those of other forms of advanced civilizations.
No, the observable effects vary considerably based on the assumed technology. Let’s compare three models: stellavore, BHE (black hole entity) transcension, and CD (cold dark) arcilects.
The stellavore model predicts that civs will create Dyson spheres, which should be observable during the long construction period and may be observable afterwards. John Smart’s transcension model predicts black hole entities arising in or near stellar systems (although we could combine that with ejection, I suppose). The CD arcilect model predicts that civs will cool down some of the planemos in their systems, possibly eject some of those planemos, and then also colonize any suitable nomads.
Each theory predicts a different set of observables. The stellavore model doesn’t appear to match our observations all that well. The other two seem to match, although they are also just harder to detect; still, there are some key things we could look for.
For my CD arcilect model, we already have some evidence for a large population of nomads. Perhaps there is a way to distinguish between artificial and natural ejections. Perhaps the natural pattern is that ejections tend to occur early in system formation, whereas artificial ejections occur much later. Perhaps we could even get lucky and detect an unusually cold planemo with microlensing. Better modelling of the dark matter halos may reveal a match with ejection models for at least a baryonic component of the halo.
In the CDA model, stars become somewhat wasteful, which suggests that civs may favour artificial supernovas if such a thing is practical. At the moment I don’t see how one could get the energy/mass to do such a thing.
Those are just some quick ideas, I haven’t really looked into it all that much.
Are you arguing that they would stay quite small for the sake of stealth? If so, wouldn’t it still make sense to spread out as much as possible, via as many independent production sites as possible?
No, I agree that civilizations will tend to expand and colonize, and yes stealth considerations shouldn’t prevent this.
I don’t see any reason not to just keep sending material out in different directions...
Thinking about it a little more, I agree. And yes when I mention not being energy constrained, that was in reference only to computation, not construction. I assume efficient construction is typically in place, using solar or fusion or whatever.
It seems like you could just have the ejected raw materials/ColdTech perform a course correction and series of gravity assists based on the output from a random number generator, once they were out of observational distance from the origin system. This would ensure that no hostile forces could determine their location by finding the production facility still active.
Yes, this seems to be on the right track. However, the orbits of planetary bodies are very predictable and gravity assists are reversible operations (I think), which seems to imply that the remaining objects in the system will contain enough history for a rival superintelligence to reconstruct the ejection trajectory. You can erase the history only by creating heat … so maybe you end up sending some objects into the sun? :) Yes, actually, that seems pretty doable.