I’m sure he’s not a crank. Which leaves the important question: is he right? I don’t know, but if he is, it’s highly relevant to the question of FAI, and suggests that the MIRI approach of considering an AI as a logical system to be designed to be safe may be barking up the wrong tree. From an interview with Wissner-Gross:
“The conventional storyline [of SF about AI],” he says, “has been that we would first build a really intelligent machine, and then it would spontaneously decide to take over the world.”
But one of the key implications of Wissner-Gross’s paper is that this long-held assumption may be completely backwards — that the process of trying to take over the world may actually be a more fundamental precursor to intelligence, and not vice versa.
...
“Our causal entropy maximization theory predicts that AIs may be fundamentally antithetical to being boxed,” he says. “If intelligence is a phenomenon that spontaneously emerges through causal entropy maximization, then it might mean that you could effectively reframe the entire definition of Artificial General Intelligence to be a physical effect resulting from a process that tries to avoid being boxed.”
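For context, the formal claim in the paper (as I understand it, so treat this as my paraphrase rather than a quotation) is that intelligent-looking behaviour emerges from a “causal entropic force” that pushes a system toward states from which the widest variety of futures remains accessible:

$$\mathbf{F}(\mathbf{X}_0, \tau) = T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau)\big|_{\mathbf{X}_0},$$

where $S_c(\mathbf{X}, \tau)$ is the entropy of the distribution over paths the system could take from state $\mathbf{X}$ over the next $\tau$ seconds, and $T_c$ is a constant setting the strength of the force. Informally: act so as to keep the largest number of future options open.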
But as I said on a previous occasion when this came up, the outside view here is that so far it’s just a big idea and toy demos.
Thank you for your response. Having thought about it for a while, I think he is wrong. (Whether he is a crank is a different issue, and probably not worth worrying about.)
I think my objection can be illustrated with the following example:
Suppose you are writing a program to find the fastest route between two cities, and it must choose between two possibilities: take the express highway or take local roads. A naive interpretation of Wissner-Gross’s approach would be to take the local roads, because that gives you more options. However, this does not seem to be the more intelligent choice in general. So a naive interpretation of the Wissner-Gross approach appears to be basically a heuristic: useful in some situations but not others.
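To make the naive reading concrete, here is a toy sketch (my own illustration, not code from the paper; the road graph, node names, and horizon are all invented for the example) of an option-counting rule on a small network:

```python
from collections import deque

# Hypothetical road network: each node maps to the nodes reachable from it.
# The highway goes straight to the destination; the local roads branch a lot.
ROADS = {
    "start":   ["highway", "local_a"],
    "highway": ["city"],                 # fast, but only one continuation
    "local_a": ["local_b", "local_c"],   # slower, but more branching
    "local_b": ["city", "local_d"],
    "local_c": ["city", "local_d"],
    "local_d": ["city"],
    "city":    [],
}

def reachable_states(node, horizon):
    """Count distinct nodes reachable within `horizon` steps --
    a crude stand-in for 'number of future options'."""
    seen = {node}
    frontier = deque([(node, 0)])
    while frontier:
        current, depth = frontier.popleft()
        if depth == horizon:
            continue
        for nxt in ROADS[current]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return len(seen)

# The naive "keep your options open" rule prefers whichever first move
# leaves more reachable states.
for choice in ROADS["start"]:
    print(choice, reachable_states(choice, horizon=3))
```

With the reachable-node count standing in for causal path entropy over a finite horizon, the rule picks the local roads over the highway, which is exactly the behaviour I mean when I call it a heuristic.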
But is this interpretation of Wissner-Gross’s approach correct? I expect he would say “no,” that taking the express highway actually leaves you with more options, because you reach your destination sooner and the extra time can be used to pursue other activities. That is fine as far as it goes, but it seems to me to be circular reasoning. Of course the more intelligent choice will result in more time, money, energy, health, or whatever, and these things give you more options. But this observation tells us nothing about how to actually achieve intelligence. It’s like the investment guru who tells us to “buy low, sell high”: he’s stating the obvious without imparting anything of substance.
I admit it’s possible I have misunderstood Wissner-Gross’s claims. Is he saying anything deeper than what I have pointed out?