In Section 3, you write:

State value models require resources to produce high-value states. If happiness is the goal, using the resources to produce the maximum number of maximally happy minds (with a tradeoff between number and state depending on how utilities aggregate) would maximize value. If the goal is knowledge, the resources would be spent on processing generating knowledge and storage, and so on. For these cases the total amount of produced value increases monotonically with the amount of resources, possibly superlinearly.
I would think that superlinear scaling of utility with resources is incompatible with the proposed resolution of the Fermi paradox. Why?
Superlinear scaling of utility means (ignoring detailed numbers) that, e.g., a gamble of a 1% chance of 1e63 bit erasures plus a 99% chance of fast extinction is preferable to an almost certain 1e60 bit erasures. This seems (1) dubious from an, admittedly human-centric, common-sense perspective, and, more rigorously, (2) incompatible with the observation that opportunities for immediate resource extraction which do not affect later computation are left unexploited. In other words: you do not propose a mechanism by which a Dyson swarm collecting the energy/entropy currently emitted by stars would decrease the total amount of computation that can be done over the lifetime of the universe. In particular, the energy/negative entropy contained in the unused emissions of current stars appears to simply dissipate into useless background glow.
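To make the comparison concrete, here is a minimal numerical sketch (my own illustration with made-up utility functions, not something from your paper): under a mildly superlinear utility, say U(R) = R^1.1, the 1% gamble beats the near-certain 1e60 erasures, while under a poly-log utility, say U(R) = (ln R)^3, it loses badly.

```python
import math

# Toy comparison of a risky vs. a safe strategy under different utility
# functions for total computation (measured in bit erasures).
# Assumed numbers: 1% chance of 1e63 erasures vs. an (almost) certain 1e60;
# extinction is taken to contribute zero utility.

RISKY_P, RISKY_R = 0.01, 1e63   # 1% chance of the big payoff, else extinction
SAFE_R = 1e60                   # near-certain payoff of the safe strategy

def superlinear(r):
    """Mildly superlinear utility, U(R) = R**1.1 (illustrative exponent)."""
    return r ** 1.1

def polylog(r):
    """Strongly sublinear 'poly-log' utility, U(R) = (ln R)**3 (illustrative)."""
    return math.log(r) ** 3

for name, u in [("superlinear", superlinear), ("poly-log", polylog)]:
    risky = RISKY_P * u(RISKY_R)    # expected utility of the gamble
    safe = u(SAFE_R)                # utility of the safe outcome
    pick = "gamble" if risky > safe else "safe strategy"
    print(f"{name:11s}: E[gamble] = {risky:.3g}, safe = {safe:.3g} -> prefers {pick}")
```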
I would view the following, which is mostly (completely?) contained in your paper, as a much more coherent proposed explanation:
(1) Sending self-replicating probes to most stars in the visible universe appears to be relatively cheap [your earlier paywalled paper]
(2) This gives rise to much stronger winner-takes-all dynamics than mere colonization of a single galaxy
(3) Most of the pay-off, in terms of computation, lies in the far future, after the universe has cooled
(4) A strongly sublinear utility of computation makes a lot of sense. In the relevant asymptotics, I would think more in the direction of poly-log than of linear.
(5) This implies a focus on certainty of survival
(6) This implies a lot of possible gain from (possibly acausal) value trade / coexistence.
(7) After certainty of survival, this implies diversification of value. If, for example, the welfare and possible existence of alien civilizations is valued at all, then the small marginal returns on extra computation towards the main goals lead to gifting them a sizable chunk of cosmic real estate, sizable in absolute rather than relative terms: a billion star systems for a billion years are peanuts compared to the size of the cosmic endowment in the cold far future (see the sketch just after this list).
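To put a number on how cheap such a gift is for the first mover (a minimal sketch with my own assumed figures, not taken from your paper): under a poly-log utility such as U(R) = (ln R)^3, giving away a fraction of 1e-12 of the endowment costs an essentially invisible fraction of utility.

```python
import math

# Minimal sketch (assumed numbers): utility cost, for a first mover with
# poly-log utility U(R) = (ln R)**3, of gifting a fraction f of its endowment.
# R is a stand-in resource measure (e.g. achievable bit erasures).

R = 1e60     # assumed total cosmic endowment, in resource units
f = 1e-12    # fraction gifted away as "charity"

def utility(r):
    return math.log(r) ** 3

u_full = utility(R)
u_after_gift = (math.log(R) + math.log1p(-f)) ** 3  # log1p keeps the tiny shift exact
relative_utility_loss = (u_full - u_after_gift) / u_full

print(f"resources gifted (absolute): {R * f:.3g}")                  # a huge absolute amount
print(f"relative utility loss      : {relative_utility_loss:.3g}")  # ~2e-14, i.e. negligible
```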
This boils down to an aestivating-zoo scenario: someone with a strongly sublinear utility function and slow discounting was first to colonize the universe, and decided to be merciful to late-comers, either for acausal trade reasons or as a terminal value. Your calculations boil down to showing the way towards a lower bound on the amount of mercy necessary for late-comers: for example, if the first mover decided to sacrifice 1e-12 of its cosmic endowment to charity, this might already be enough to explain the current silence (?).
The first mover would send probes to virtually all star systems, and these would run semi-stealthy observatories, e.g. on an energy budget of a couple of gigawatts from solar panels on asteroids. If a local civilization emerges, the observatory could go "undercover". It appears unlikely that a locally emergent superintelligence could threaten the first colonizer: the upstart might be able to take its own home system, but invading a system that already contains a couple of thousand tons of technologically mature equipment appears physically infeasible, even for technologically mature invaders. If the late-comer starts to colonize too many systems… well, stop their 30 g probes once they arrive, and containment is done. If the late-comer starts to talk too loudly on the radio… well, ask them to stop.
In this very optimistic world, we would be quite far from "x-risk by crossing the berserker threshold": we would be given the time and space to autonomously decide what to do with the cosmos, and afterwards be told: "Sorry, too late, it never was an option for you; want to join the party? Most of it is ours, but you can have a peanut!"
Question: what is the lower bound on the charity fraction necessary to explain the current silence? This is a more numerical question, but quite important for this hypothesis; a rough back-of-envelope version is sketched below.
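Here is the kind of back-of-envelope I have in mind as a starting point (every input below is a rough assumption of mine; only orders of magnitude matter): the required fraction is roughly the system-years consumed by all late-comers divided by the system-years in the total endowment.

```python
# Back-of-envelope lower bound on the charity fraction. All inputs are
# rough assumptions for illustration; the conclusion only needs orders
# of magnitude.

reachable_systems = 1e22   # assumed star systems in the reachable universe
usable_years = 1e30        # assumed usable future (cold-era computation window)
endowment = reachable_systems * usable_years   # total system-years available

civs = 1e6                 # assumed number of late-comer civilizations ever hosted
systems_per_civ = 1e9      # "a billion star systems ..."
years_per_civ = 1e9        # "... for a billion years"
charity_needed = civs * systems_per_civ * years_per_civ   # system-years gifted

fraction = charity_needed / endowment
print(f"required charity fraction ~ {fraction:.1e}")   # ~1e-28 with these inputs
```

With these inputs the bound sits many orders of magnitude below the 1e-12 figure used above; the real uncertainty is in the number of late-comer civilizations the first mover ever has to host.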
Note that this does not require any coordination beyond the internal coordination of the first mover: all later civs are allowed to flourish in their allotted part of the universe; it is just their expansion that is contained. This strongly reduces the effective amount of remaining filter to explain: we only need technological civilizations to emerge rarely enough relative to the upper expansion bound set by the first colonizer (instead of relative to the size of the universe). For further reductions, the first colonizer might also set upper time-of-existence bounds, e.g. offer civilizations that hit their bound the following deal: "Hey, would you mind uploading and clearing your part of space for possible future civilizations? We will pay you with more computation in the far future than you could access in any other way. Also, this would be good manners, since your predecessors' agreement to this arrangement is the reason for your existence."
PS, on (4), the "strongly sublinear utility function": if high-risk, high-payoff behaviour is possible at all, then we would expect the median universe to be taken by risk-averse (sublinear utility scaling) civs, and almost all risk-hungry (superlinear utility scaling) civs to self-destruct. Note that this is rational behaviour on the part of the risk-hungry civs, and I am not criticizing them for it. However, I view this as a quite weak argument, since the only plausible risk/reward trade-off on a cosmic scale appears to lie in uncertainty about terminal values (and time discounting). Or do you see other plausible risk/reward trade-offs?
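To illustrate the selection effect behind this (a toy Monte Carlo with made-up survival probabilities, not a claim from your paper): if risk-hungry civs each take an independent gamble that succeeds 1% of the time while risk-averse civs survive almost surely, then the median universe is populated almost exclusively by risk-averse colonizers.

```python
import random

# Toy Monte Carlo of the "median universe" selection argument.
# Assumptions (made up for illustration): each universe starts with the same
# number of risk-averse and risk-hungry civs; a risk-hungry civ survives its
# gamble with probability 0.01, a risk-averse civ survives with probability 0.99.

random.seed(0)

def sample_universe(n_civs=100, p_risky=0.01, p_safe=0.99):
    risky_survivors = sum(random.random() < p_risky for _ in range(n_civs))
    safe_survivors = sum(random.random() < p_safe for _ in range(n_civs))
    return risky_survivors, safe_survivors

universes = [sample_universe() for _ in range(10_000)]
risky_med = sorted(r for r, _ in universes)[len(universes) // 2]
safe_med = sorted(s for _, s in universes)[len(universes) // 2]
print(f"median surviving risk-hungry civs: {risky_med}")   # ~1 out of 100
print(f"median surviving risk-averse civs: {safe_med}")    # ~99 out of 100
```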
Also, the entire edifice collapses if the first colonizer is a negative utilitarian.