If we can’t make a C. elegans-Friendly AI given that information, we certainly can’t do it for H. sapiens.
I like the suggestion you make, but I would perhaps fall just short of certainty. It is not unreasonable to suppose that a supergoal or utility function is something that evolved alongside higher-level adaptations like, say, an executive function and goal-directed behaviour. C. elegans just wouldn’t get much benefit from having a supergoal encoded in its nervous system.
Looking at the difficulty of creating a C. elegans-FAI would highlight one of the difficulties with FAI in general: the inevitable and somewhat arbitrary decision about just how much weight we want to give to the implicit goals of humanity. The line between terminal and instrumental values is somewhat dependent on one’s perspective.