There’s a danger, though, in building something that’s superhumanly intelligent and acts on goals that don’t include some of our goals. You would have to make sure it’s not an expectation-maximizing agent.
I think an assumption of the FAI project is that you shouldn’t do what Nancy is proposing, because you can’t reliably build a superhumanly-intelligent self-improving agent and cripple it in a way that prevents it from trying to maximize its goals.
Is it actually more crippled than a wish-fulfilling FAI? Either sort of AI has to leave resources for people.
However, your point makes me realize that a big-threats-only FAI (one of those threats being that it might take too much from people) will need a model of, and respect for, human desires so that we aren’t left on a minimal reservation.