My layman’s understanding of the SI position is as follows:
Many different kinds of AI are possible, and humans will keep building AIs of varying types and power levels to pursue different goals.
Any such attempt has some chance of producing an AGI strong enough to reshape the world, and any such AGI in turn has some chance of being uFAI (unFriendly AI). It doesn't matter whether the programmers' intent matches the result in these cases, or what the exact probabilities are; since the downside is existential, it matters only that the probability of an unboundedly powerful uFAI is non-trivial (pick your own required minimum probability here).
The only way to prevent this is to build an FAI that expands its power to become a singleton, preventing any other AI or agent from gaining superpowers anywhere in its future light cone. Again, it doesn't matter that this mission might fail, as long as success is likely enough (pick your required probability; I believe 10% has already been suggested as sufficient in this thread).
Building such a super-powerful FAI would of course also solve a huge number of humanity's other problems, which is a nice bonus.
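To make the "exact probabilities don't matter" step concrete, here is a minimal sketch of the expected-value arithmetic behind it. Every number below is invented purely for illustration; these are not SI's actual estimates, and the payoff model (failure leaves the baseline uFAI risk in place) is my own simplification:

```python
# Toy expected-utility comparison behind the "non-trivial probability" claim.
# All numbers are hypothetical illustrations, not SI's actual estimates.

P_UFAI = 0.01          # assumed probability of an unboundedly powerful uFAI
LOSS_UFAI = -1e15      # assumed (astronomically negative) utility of that outcome
P_FAI_SUCCESS = 0.10   # the success probability suggested as sufficient in the thread
GAIN_FAI = 1e15        # assumed utility of a Friendly singleton

# Option A: do nothing, and accept the baseline uFAI risk.
ev_do_nothing = P_UFAI * LOSS_UFAI

# Option B: attempt FAI; on failure, assume the baseline risk remains.
ev_attempt_fai = P_FAI_SUCCESS * GAIN_FAI + (1 - P_FAI_SUCCESS) * P_UFAI * LOSS_UFAI

print(f"EV(do nothing):  {ev_do_nothing:.3g}")
print(f"EV(attempt FAI): {ev_attempt_fai:.3g}")

# Because the stakes dwarf the probabilities, the ordering of the two options
# is insensitive to the exact values of P_UFAI and P_FAI_SUCCESS, which is
# the point of "pick your own required minimum probability here."
```

Varying P_UFAI from 0.001 to 0.1 in this toy model changes the magnitudes but not which option wins, which is the structural feature the argument relies on.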