Doesn’t this also apply to provably Friendly AI? Perhaps even more so, given that it is a project of higher complexity.
With FAI, you have a commensurate reason to take the risk.
Sure, but if the Oracle AI is used as a stepping stone towards FAI, then you also have a reason to take the risk.
I guess you could argue that the combined risk of Oracle AI plus FAI is higher than going straight for FAI, but you can’t be sure how much the FAI risk could be mitigated by the Oracle AI (or any other less powerful, constrained, or narrow-domain AI). At least it doesn’t seem obvious to me.
Only to the extent you should expect it to be useful. It’s not clear how it could, even in principle, help with specifying morality. (See also this thread.)
Assume you have a working halting oracle. Now what? (Actually, you could get inside it and have infinite time to think about the problem.)
I think he means Oracle as in a general-purpose, powerful question-answerer, not as in a halting oracle. A halting oracle could be used to settle many mathematical questions (like the aforementioned Riemann Hypothesis), though.
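To make concrete why a halting oracle settles questions like this: any conjecture whose potential counterexamples can be checked mechanically reduces to a single halting query. Here is a minimal Python sketch of that reduction; `halts` and `is_counterexample` are hypothetical stand-ins (no computable implementation of the oracle exists, and the counterexample check is conjecture-specific):

```python
# Sketch: reducing a conjecture to one halting-oracle query.
# `halts` is hypothetical -- no real program can implement it.

def halts(program_source: str) -> bool:
    """Hypothetical oracle: True iff the given program halts."""
    raise NotImplementedError("No computable implementation exists.")

# A brute-force search that halts exactly when a counterexample
# exists. For the Riemann Hypothesis, `is_counterexample` would be
# some decidable check over suitably encoded candidate zeros off
# the critical line.
COUNTEREXAMPLE_SEARCH = """
n = 0
while True:
    if is_counterexample(n):   # decidable, conjecture-specific check
        break                  # halt: the conjecture is false
    n += 1                     # otherwise keep searching forever
"""

# One oracle query then decides the conjecture:
# the search halts iff a counterexample exists.
# conjecture_true = not halts(COUNTEREXAMPLE_SEARCH)
```

The point of the sketch is just that the oracle's power transfers to any question of this "does a counterexample exist?" form, which covers a large class of open problems.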
I know he doesn’t mean a halting oracle. A halting oracle is a well-specified superpower that can do more than any real Oracle; the thought experiment I described gives an upper bound on the usefulness of Oracles.
I figure we will build experts and forecasters before either oracles or full machine intelligence. That will be good, since forecasters will help give us foresight, which we badly need.
Generally speaking, replacing the brain’s functions one at a time seems more desirable than replacing them all at once: it is likely to result in a more gradual shift and a smoother transfer, with less chance of the baton being dropped during the switchover.