It looks like you’re suggesting that AIs that mirror human values must be implemented in the way humans really work.
I’m saying that a system based on utility maximization is likely too alien a creature to be safely understood and utilized by humans.
That’s more or less the premise of FAI, is it not? Any strictly-maximizing agent is bloody dangerous to anything that isn’t maximizing the same thing. What’s more, humans are ill-equipped to even grok this danger, let alone handle it safely.
The best bridges are not humans either.
Bridges aren’t utility maximizers, either.
Utility maximization can model any goal-oriented creature, within reason. Familiar, or alien, it makes not the slightest bit of difference to the theory.
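One way to make that claim concrete is the trivial construction below, a minimal sketch (the function names and the toy thermostat are purely illustrative, not taken from this exchange): for any agent whose policy you can write down, define a utility function that pays 1 for the action that agent would take and 0 for anything else; a strict maximizer of that function then reproduces the agent exactly.

```python
def utility_for(agent_policy):
    """Build a utility function whose maximizer imitates agent_policy."""
    def utility(observation, action):
        return 1.0 if action == agent_policy(observation) else 0.0
    return utility

def maximizer(utility, actions):
    """A strict maximizer: always pick the action with the highest utility."""
    def act(observation):
        return max(actions, key=lambda a: utility(observation, a))
    return act

# A toy "goal-oriented creature": a thermostat switching a heater on or off.
def thermostat(temp):
    return "heat_on" if temp < 20 else "heat_off"

imitator = maximizer(utility_for(thermostat), ["heat_on", "heat_off"])
assert imitator(15) == thermostat(15)   # both choose "heat_on"
assert imitator(25) == thermostat(25)   # both choose "heat_off"
```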
Of course it can, just like you can model any computation with a Turing machine, or on top of the game of Life. And modeling humans (or most any living entity) as a utility maximizer is on a par with writing a spreadsheet program to run on a Turing machine. An interesting, perhaps even fun or educational exercise, but mostly futile.
I mean, sure, you could say that utility equals “minimum global error of all control systems”, but it’s rather ludicrous to expect this calculation to predict their actual behavior, since most of their “interests” operate independently. Why go to all the trouble to write a complex utility function when an error function is so much simpler and closer to the territory?
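A rough sketch of the contrast being drawn here, with invented controllers, setpoints and readings standing in for real “interests”: several independent control systems, each acting only on its own error, next to a single “global utility” folded together from those same errors.

```python
controllers = {
    "temperature":    {"setpoint": 37.0, "reading": 36.2},
    "blood_sugar":    {"setpoint": 90.0, "reading": 120.0},
    "social_contact": {"setpoint": 5.0,  "reading": 1.0},
}

def error(c):
    return abs(c["setpoint"] - c["reading"])

# Control-system picture: each loop responds only to its own error, and
# behaviour is driven by whichever loop is furthest from its setpoint.
most_urgent = max(controllers, key=lambda name: error(controllers[name]))

# Utility-maximizing redescription: fold every error into one scalar and
# describe behaviour as maximizing that number.
global_utility = -sum(error(c) for c in controllers.values())

print(most_urgent)     # names the loop actually driving behaviour right now
print(global_utility)  # one number that no longer says which loop that is
```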
I think you are getting my position. Just as a universal computer can model any other type of machine, so a utility-maximizing agent can model any other type of agent. These two concepts are closely analogous.
But your choice of platforms is not without efficiency and complexity costs, since maximizers inherently “blow up” more than satisficers.
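A toy illustration of that cost claim, under the narrow reading that “blowing up” refers to search effort (the plan space and scoring rule below are invented for the example): a strict maximizer has to evaluate every candidate before acting, while a satisficer can stop at the first candidate that is good enough.

```python
import itertools

def score(plan):
    return sum(plan)            # stand-in for an expensive evaluation

# Maximizer: must score every candidate plan before acting (10,000 evaluations).
best = max(itertools.product(range(10), repeat=4), key=score)

# Satisficer: stops at the first plan that clears the threshold.
def satisfice(candidates, threshold):
    for plan in candidates:
        if score(plan) >= threshold:
            return plan

good_enough = satisfice(itertools.product(range(10), repeat=4), threshold=20)
```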