Can you send me a model? I think my objection is to the binariness of the possible strategies node, but I’m not sure how to express that best in your model.
Suppose there are N projects in the world, each of which might almost succeed and so each of which is an existential risk.
The variable that I can counterfactually control is my actions. The variable that we can counterfactually control is our actions. Since we’re conversing in persuasive dialog, it is reasonable to discuss what strategies we might take to best reduce existential risk.
Suppose that we distinguish between “safety strategies” and “singleton strategies”.
Singleton strategies explicitly go for fast, general-purpose power and capability, with as many stacks of iterated exponential growth in capability as the recursive self-improvement engineers can manage. It seems obvious to me that if we embarked on a singleton strategy, even with the best of intentions, there would then be N+1 AGI projects, each increasing existential risk, and our best intentions might not outweigh that increase.
Safety strategies would involve attempting to create entities (e.g. human teams, human/software amalgams, special-purpose software) which are explicitly limited and very unlikely to be generally powerful compared to the world at large. They would try to decrease existential risk both directly (e.g. build tools for the AGI projects that reduce the chance of the AGI projects going wrong) and indirectly, by not contributing to the problem.
No, sorry, the above comment was just my attempt to explain my objection as unambiguously as possible.
It seems obvious to me that if we embarked on a singleton strategy, even with the best of intentions, there would then be N+1 AGI projects, each increasing existential risk, and our best intentions might not outweigh that increase.
Yes, but your “N+1” hides some important detail: Our effective contribution to existential risk diminishes as N grows, while our contribution to safer outcomes stays constant or even grows (in the case that our work has a positive impact on someone else’s “winning” project).
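To make that diminishing-contribution point concrete, here is a toy sketch of my own, under the purely illustrative assumption that each project independently poses the same catastrophe probability p (neither the independence nor the shared p is something either of us has argued for):

\[
R(N) = 1 - (1-p)^N, \qquad
\Delta R(N) = R(N+1) - R(N) = p\,(1-p)^N .
\]

Under that assumption the marginal risk our (N+1)th project adds, \( p\,(1-p)^N \), shrinks toward zero as N grows, while any fixed safety benefit our work confers on whichever project ultimately “wins” does not shrink with N.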
I think my objection is to the binariness of the possible strategies node, but I’m not sure how to express that best in your model. [...] They would try to decrease existential risk both directly (e.g. build tools for the AGI projects that reduce the chance of the AGI projects going wrong) and indirectly, by not contributing to the problem.
Since you were making the point that attempting to build Friendly AGI contributes to existential risk, I thought it fair to factor out other actions. The two strategies you outline above are entirely independent, so they should be evaluated separately. I read you as promoting the latter strategy independently when you say:
By explicitly going for general-purpose, no-human-dependencies, and indefinitely self-improvable, you’re building in exactly the same elements that you suspect are dangerous.
The choice under consideration is binary: Attempt a singleton or don’t. Safety strategies may also be worthwhile, but I need a better reason than “they’re working toward the same goal” to view them as relevant to the singleton question.