Is there one dominant paradigm for AI motivation control in this group that’s competing with utility functions, or do each of the people you mention have different thoughts on it?
People have different thoughts, but to tell the truth, most people I know are working on a stage of the AGI puzzle well short of the point where they need to think about the motivation system.
People who have to sort that out right now (robot builders, for example) tend to use old-fashioned planning systems combined with all kinds of bespoke machinery in and around them.
I am not sure, but I think I am the one thinking most about these issues, simply because I do everything in a weird order: I am reconstructing human cognition.