Perhaps the alternative to "maximize one thing subject to a price-to-humans constraint" would be not making the AI that specialized. Make it maximize across a basket of things humans want.
While I have heard the paper-clips-take-over-the-universe worry, it seems to me that this type of thought experiment introduces the problem to begin with (making a bit of a circular error). As I gather (indirectly), the problem is that the paper-clip-maximizing AI ends up taking over the entire economy. That seems equivalent to suggesting the AI replaces all the markets and other economic decisions (being smarter, faster, and more competitive, I guess).
If so, isn't an obvious solution to give it multiple things to maximize (infinite, in the sense of unlimited human wants)? While it might replace human production activity, it is going to produce some form of a current-state production possibility frontier and, I would think, an intertemporal one as well, which might address some inter-generational concerns.
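To make the contrast concrete, here is a minimal toy sketch (my own illustration, not anyone's actual proposal): the two-good frontier and the Cobb-Douglas "basket" weights are assumptions chosen purely for illustration. A single-objective maximizer drives production to a corner of the frontier, while a basket maximizer picks an interior point where multiple goods get produced.

```python
# Toy sketch: single-objective vs. basket maximization on an assumed
# production possibility frontier (PPF) x^2 + y^2 <= 100.

def on_frontier(x):
    """Given output x of good 1, return the max feasible output of good 2."""
    return (100 - x**2) ** 0.5

def basket_utility(x, y, alpha=0.5):
    """Cobb-Douglas 'basket' utility over two goods (weights assumed)."""
    return (x ** alpha) * (y ** (1 - alpha))

def maximize(objective, steps=10_000):
    """Coarse grid search along the frontier for the best (x, y)."""
    best = (0.0, on_frontier(0.0))
    for i in range(steps + 1):
        x = 10.0 * i / steps
        y = on_frontier(x)
        if objective(x, y) > objective(*best):
            best = (x, y)
    return best

# Single-objective "paper clip" maximizer: corner solution, good 2 goes to zero.
print(maximize(lambda x, y: x))                      # roughly (10.0, 0.0)

# Basket maximizer: interior point on the frontier, both goods produced.
print(maximize(lambda x, y: basket_utility(x, y)))   # roughly (7.07, 7.07)
```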
I don't think that fully solves the alignment problem (as I understand it, possibly poorly), but I do think it shifts what the risks are and may well eliminate a lot of the existential risks people worry about.