“Now, though, I could see it—the pulse of the optimization process, sensory information surging in, motor instructions surging out, steering the future. In the middle, the model that linked up possible actions to possible outcomes, and the utility function over the outcomes. Put in the corresponding utility function, and the result would be an optimizer that would steer the future anywhere.”
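For concreteness, here is a minimal sketch of the loop that passage gestures at: sensory input comes in, a world model links candidate actions to possible outcomes, a utility function scores the outcomes, and the action with the highest expected utility goes out. Everything in it (the function names, the toy probabilities) is my own illustration, not something from the original post.

```python
# Minimal sketch of an expected-utility maximizer, per the quoted description.
# All names and the toy model are illustrative assumptions, not the post's code.

def world_model(observation, action):
    """Return (probability, outcome) pairs for taking `action` given `observation`."""
    p_success = 0.7 if action == "act" else 0.2  # toy numbers
    return [(p_success, f"{action} succeeds"), (1 - p_success, f"{action} fails")]

def utility(outcome):
    """Score an outcome with a real number."""
    return 1.0 if outcome.endswith("succeeds") else -1.0

def choose_action(observation, actions):
    """Pick the action that maximizes expected utility under the model."""
    def expected_utility(action):
        return sum(p * utility(o) for p, o in world_model(observation, action))
    return max(actions, key=expected_utility)

print(choose_action("sensor reading", ["wait", "act"]))  # -> "act"
```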
Much kudos for realizing that we need to think about smarter-than-human intelligence naturalistically; that's an important insight. But I think you may have gotten a little too enthusiastic about optimization processes, and I get the feeling that much of the singularity community has followed you. So allow me to object: not all configurations of matter worthy of the name "mind" are optimization processes. For example, my mind doesn't implement an optimization process as you have described it here.
Allow me to further object: although a rational economic agent (AKA Bayesian utility maximizer, AKA optimization process) has the property that it will not knowingly change its utility function by self-modification, I have seen no evidence that optimization processes form attractors for minds under self-modification. You seem to imply above that you think this is the case, though I may have misread you.
And lastly, allow me to object: just because a probabilistic utility maximizer is mathematically simple to describe doesn't mean we should go and build one! The lure of a simple formalism must be weighed against the danger of trying to squeeze human notions of value into a formalism that is not appropriate for them. You haven't directly stated here that you want to build a utility maximizer, but you have equated "mind" with "utility-maximizing agent", so in your terminology utility maximizers are suddenly the only kind of AI we can build. When you buy yourself a shiny new hammer, suddenly everything looks like a nail. Yes, it is a nice hammer, but let's not get carried away.