Do we need an FAI that does as good a job of satisfying human desires as possible, or would an FAI which protects humanity against devastating threats be enough?
Even devastating threats can be a little hard to define… if people want to transform themselves into Something Very Different, is that the end of the human race, or just an extension of human history?
Still, most devastating threats (uFAI, asteroid strike) aren't that hard to identify.
There’s a danger, though, in building something that’s superhumanly intelligent and acts on goals that don’t include some of our goals. You would have to make sure it’s not an expected-utility-maximizing agent.
I think an assumption of the FAI project is that you shouldn’t do what Nancy is proposing, because you can’t reliably build a superhumanly-intelligent self-improving agent and cripple it in a way that prevents it from trying to maximize its goals.
Is it actually more crippled than a wish-fulfilling FAI? Either sort of AI has to leave resources for people.
However, your point makes me realize that a big-threat-only FAI (where the threats include the FAI itself taking too much from people) will need a model of, and respect for, human desires so that we aren’t left on a minimal reservation.