To clarify, I am not sure that we can relax expecting this type of salvation. It is just plan D after the first three fail (A—alignment, B—pause, C—boxing or destroying). We can make plan D stronger by:
1. Improving decision theory
2. Exploring the power of commitments
3. Sending messages to aliens in space now (METI) with a request for help in order to improve our counterfactual negotiation position
But to do all these, we need to look more in this direction.
A galactic brain can surely extract more utility from additional pieces of matter, but it is less likely to be fate-changing utility, because fate can’t change 100 billion times after exploring each of 100 billion stars.
Scanning brains is an expensive act for a very young AI, so it may need to preserve us until later stages, and in later stages, the utility of our atoms will decline.
There could be an intermediate variant between storing data and preserving Earth: building a space station where each person gets an apartment. This space station would take around 10^3 cubic km of material, and given Earth's volume of roughly 10^12 cubic km, it would require only one billionth of Earth's volume.
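The arithmetic behind that one-billionth figure can be sketched as follows. The per-person material budget is an assumption on my part (around 10 billion people and about 100 cubic meters of material each); it is one way to reach the 10^3 cubic km total, not the only one.

```python
# Sketch of the space-station arithmetic.
# Assumptions (not from the text): ~10 billion people, ~100 m^3 of
# station material per apartment.
people = 1e10
material_per_person_m3 = 100.0                # assumed material per apartment

station_m3 = people * material_per_person_m3  # total station material, in m^3
station_km3 = station_m3 / 1e9                # 1 km^3 = 1e9 m^3 -> 10^3 km^3

earth_km3 = 1e12                              # Earth's volume, ~10^12 km^3
fraction = station_km3 / earth_km3            # -> 1e-9, one billionth

print(station_km3, fraction)
```

Under these assumptions the station needs 10^3 cubic km of material, i.e. a 10^-9 fraction of Earth.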
I think that for any young AI, the estimated probability of being in a simulation should be at least 10 percent. The independent probability of meeting aliens should also be around 10 percent. In each case, I would give a 0.1 chance that preserving humans will make some difference. In total, I estimate that preserving humans eliminates a 2 percent risk to AI survival from its point of view.
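The 2 percent total comes from summing the two independent scenarios, each contributing probability-of-scenario times probability-that-preservation-matters:

```python
# Two scenarios (simulation, meeting aliens), each with probability 0.1,
# and in each a 0.1 chance that preserving humans makes a difference.
p_scenario = 0.1   # probability of each scenario
p_matters = 0.1    # chance preservation matters, given the scenario

# Contribution per scenario: 0.1 * 0.1 = 1 percent; two scenarios -> 2 percent.
risk_reduction = 2 * p_scenario * p_matters

print(risk_reduction)  # ~0.02
```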
If aligned AI is created by negative utilitarians, it will kill us all.
Makes sense, I get that, all good points and good discussion.
I would say scanning brains may be expensive early on, but cryonics and plastination should be very cheap relative to space-based apartments by the time it's possible to have either at scale.