In a previous comment you talked about the importance of “the problem of solving the bargaining/cooperation/mutual-governance problem that AI-enhanced companies (and/or countries) will be facing”. I wonder if you’ve written more about this problem anywhere, and why you didn’t mention it again in the comment that I’m replying to.
My own thinking about ‘the ~50% extinction probability I’m expecting from multi-polar interaction-level effects coming some years after we get individually “safe” AGI systems up and running’ is that if we’ve got “safe” AGIs, we could ask them to solve the “bargaining/cooperation/mutual-governance problem” for us, but that would not work if they’re bad at solving this kind of problem. Bargaining and cooperation seem to be in part philosophical problems, so this fits into my wanting to make sure that we’ll build AIs that are philosophically competent.
ETA: My general feeling is that there will be too many philosophical problems like these during and after the AI transition, and it seems hopeless to try to anticipate them all and solve them individually ahead of time (or solve them later using only human intelligence). Instead, we might have a better chance of solving the “meta” problem. Of course, buying time with compute regulation seems great if feasible.
Yes, and specifically worse even in terms of the probability of human extinction.
Why? I’m also kind of confused about why you even mention this issue in this post: are you thinking that you might potentially be in a position to impose your views? Or is this a kind of plea for others who might actually face such a choice to respect democratic processes?