It’s a total digression from this post, but: it occurs to me that someone ought to try to figure out what the “supergoal” or utility function of C. elegans is, or what the coherent extrapolated volition of the C. elegans species might be. That organism’s nervous system has been mapped down to every last neuron (not so hard, since there are only 302 of them). If we can’t make a C. elegans-Friendly AI given that information, we certainly can’t do it for H. sapiens.
I like the suggestion you make, but I would perhaps fall just short of certainty. It is not unreasonable to suppose that a supergoal or utility function is something that evolved alongside higher-level adaptations like, say, an executive function and goal-directed behaviour. C. elegans just wouldn’t get much benefit from having a supergoal encoded in its nervous system.
Looking at the difficulty of creating a C. elegans-FAI would highlight one of the difficulties with FAI in general: the inevitable and somewhat arbitrary decision about just how much weight to give to the implicit goals of humanity. The line between terminal and instrumental values depends somewhat on one’s perspective.
My understanding is that we have a connection map but have not successfully simulated the behavior.
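That gap is worth making concrete: a connectome fixes which neurons connect to which, but not the synaptic weights, signs, or dynamics, and those underdetermine behaviour. Here is a minimal sketch (a toy five-neuron network with made-up weights and a generic rate model, not any real C. elegans data) showing two simulations that share the same wiring diagram yet settle into different activity patterns:

```python
import numpy as np

# Toy "connectome": adjacency matrix for 5 neurons (1 = a synapse exists).
# The map alone says nothing about weight magnitude or sign.
adjacency = np.array([
    [0, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 0, 1, 0, 0],
])

def simulate(weights, steps=50, dt=0.1):
    """Generic leaky rate-model dynamics: x' = -x + tanh(W @ x)."""
    x = np.full(weights.shape[0], 0.1)
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(weights @ x))
    return x

rng = np.random.default_rng(0)
# Two weight assignments, both consistent with the SAME connectome:
w1 = adjacency * rng.uniform(0.5, 2.0, adjacency.shape)   # all excitatory
w2 = adjacency * rng.uniform(-2.0, 2.0, adjacency.shape)  # mixed signs

print(simulate(w1))  # settles into one activity pattern...
print(simulate(w2))  # ...and a quite different one, same wiring
```

The wiring diagram constrains which simulations are possible, but without measured weights and neuron models the behaviour is not pinned down, which is presumably part of why the map has not yet yielded a working simulation.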
In a nutshell, it’s to make more copies of the C. elegans genome:
http://en.wikipedia.org/wiki/God's_utility_function