What probability would you assign to this statement: “UFAI will be relatively easy to create within the next 100 years. FAI is so difficult that it will be nearly impossible to create within the next 200 years.”
I think the two estimates cannot be made independently: FAI and UFAI would each pre-empt the other. So I’ll rephrase a little.
I estimate the chances that some AGI (in the sense of “roughly human-level AI”) will be built within the next 100 years as 85%, which is shorthand for “very high, but I know that probability estimates near 100% are often overconfident; and something unexpected can come up.”
And “100 years” here is shorthand for “as far off as we can make reasonable estimates/guesses about the future of humanity”; perhaps “50 years” should be used instead.
Conditional on some AGI being built, I estimate the chances that it will be unfriendly as 80%, which is shorthand for “by default it will be unfriendly, but people are working on avoiding that and they have some small chance of succeeding; or there might be some other unexpected reason that it will turn out friendly.”
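Chaining these two figures directly, on the assumption that the 80% conditional applies to any AGI built within that 100-year window, gives a rough overall estimate:

P(UFAI within 100 years) ≈ P(AGI within 100 years) × P(unfriendly | AGI) = 0.85 × 0.80 ≈ 0.68

i.e. roughly a two-in-three chance of unfriendly AGI on that horizon.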
Thank you. I didn’t phrase my question very well but what I was trying to get at was whether making a friendly AGI might be, by some measurement, orders of magnitude more difficult than making a non-friendly one.
Yes, it is orders of magnitude more difficult.

If we took a hypothetical FAI-capable team, how much less time would it take them to make a UFAI than a FAI, assuming similar levels of effort, and starting at today’s knowledge levels?
One-tenth the time seems like a good estimate.