[LINK] Scott Aaronson on Integrated Information Theory
Scott Aaronson, a complexity theory researcher, disputes Giulio Tononi's Integrated Information Theory (IIT) of consciousness, which holds that a physical system is conscious if and only if it has a high value of "integrated information" (Φ). Quote:
So, this is the post that I promised to Max [Tegmark] and all the others, about why I don’t believe IIT. And yes, it will contain that quantitative calculation [of the integrated information of a system that he claims is not conscious].
...
But let me end on a positive note. In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.
Here is my summary of his post and some related thoughts.
Scott instrumentalizes Chalmers' vague Hard Problem of consciousness into something concrete and measurable, which he dubs the Pretty-Hard Problem of Consciousness, and shows that Tononi's IIT fails to solve the latter. He does this by constructing a counterexample that has arbitrarily high integrated information (more than a human brain) while doing nothing anyone would call conscious. He also notes that building a theory of consciousness around information integration is not a promising approach in general.
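To make the flavor of that counterexample concrete, here is a minimal sketch (the function names and the crude cut-counting measure are my own inventions for illustration; it is far simpler than Tononi's actual Φ). It shows the structural fact Scott exploits: an expander-like XOR network, in which every output reads a few random inputs, cannot be split into two halves without severing many dependencies, whereas a modular network can.

```python
# Toy illustration only -- this is NOT Tononi's Phi. It is a crude structural
# "integration" proxy for a linear XOR network y = A x over GF(2): count how
# many input->output dependencies (entries A[i, j] = 1) cross a balanced
# bipartition of the units, minimized over all balanced bipartitions.
import itertools
import numpy as np

def crossing_edges(A, part):
    """Number of dependencies A[i, j] = 1 with i and j on opposite sides."""
    n = A.shape[0]
    side = np.zeros(n, dtype=bool)
    side[list(part)] = True
    return int(sum(A[i, j] for i in range(n) for j in range(n) if side[i] != side[j]))

def integration_proxy(A):
    """Minimum number of crossing dependencies over all balanced bipartitions."""
    n = A.shape[0]
    return min(crossing_edges(A, part)
               for part in itertools.combinations(range(n), n // 2))

rng = np.random.default_rng(0)
n = 10

# Expander-like sparse wiring: each output depends on 3 randomly chosen inputs.
A_expander = np.zeros((n, n), dtype=int)
for i in range(n):
    A_expander[i, rng.choice(n, size=3, replace=False)] = 1

# Modular wiring: two halves that never interact.
A_modular = np.zeros((n, n), dtype=int)
A_modular[:n // 2, :n // 2] = 1
A_modular[n // 2:, n // 2:] = 1

print("expander-like:", integration_proxy(A_expander))  # well above zero
print("modular:      ", integration_proxy(A_modular))   # exactly zero
```

The sketch only counts wiring; Scott's argument runs the analogous construction through IIT's actual Φ and shows the value can be made arbitrarily large, even though the system is just a fixed pile of XOR gates.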
Scott is very good at instrumentalizing vague ideas (what lukeprog calls "hacking away at the edges"). He did the same for the notion of "free will" in his paper The Ghost in the Quantum Turing Machine. His previous blog entry was about "The NEW Ten Most Annoying Questions in Quantum Computing", which are some of the "edges" to hack at when thinking about the "deep" and "hard" problems of Quantum Computing. This approach has been very successful in the past:
after 8 years of work.
I hope that there are people at MIRI who are similarly good at instrumentalizing big ideas into interesting yet solvable questions.
Tononi gives a very interesting (weird?) reply, "Why Scott should stare at a blank wall and reconsider (or, the conscious grid)", in which he accepts the very unintuitive conclusion that an empty square grid is conscious according to his theory. (Scott's phrasing: "[Tononi] doesn't 'bite the bullet' so much as devour a bullet hoagie with mustard.") Here is Scott's reply to the reply:
Giulio Tononi and Me: A Phi-nal Exchange
Here's one particularly weird consequence of IIT: a zeroed-out system has the same degree of consciousness as a dynamic one, because integrated information is a structural measure of a system. For example, a physical, memristor-based neural net has the same amount of integrated information when it's unplugged. Or, to chase a more absurd-seeming conclusion: human consciousness is not reduced immediately upon death (assuming no brain damage), but instead decreases slowly as the cellular arrangement begins to decay.
Given that, I agree with Scott: while interesting, IIT doesn't track particularly well with 'consciousness' in the conceptual sense.
His use of philosophical zombies does not dissuade me.
The idea of devices that transform input data with "low-density parity-check codes" having more phi than humans concerns me slightly more (a rough schematic of such a transform is below). If it is a valid complaint, then I believe it's probably an issue with the formalism, not with the concept.
I need to read further.
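For reference, here is a rough schematic of what "transforming input data with low-density parity-check codes" amounts to (notation mine, not Scott's or the commenter's): the device multiplies its input by a sparse binary matrix over GF(2),

$$ s = H x \bmod 2, \qquad H \in \{0,1\}^{m \times n} \text{ sparse, with only a few 1s per row and column.} $$

With an expander-like choice of H, every subset of outputs depends on inputs spread across the whole system, which is exactly the kind of structure that IIT's Φ rewards.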
I didn’t downvote, but I’m guessing it’s because you stated your opinions about the post without giving reasons for believing in those opinions.
I didn’t think I had to cite my sources on philosophical zombies; we’re on LessWrong.
And the downvoting continues. Would the individual in question care to say something?
There was also this:
What’s wrong with that? I’d say it’s a prevalent problem when trying to formalize complicated concepts.
Like I said in my original comment, it’s stating your opinion without giving any reason to believe in that opinion. If you don’t say why you believe that it’s an issue with the formalism rather than the concept, you’re adding more noise than information. Facts are better than opinions.
Take that, Aumann!
His answer: "Au, Mann!" ("Au" means "ouch" in German, his mother tongue). Aw man, bad puns are my personal demon (it works phonetically). Amen to that being a bad case of nomen est omen.
Aumann must be rolling in his grave from disagreeing with all the misuses of his agreement theorem as if it applied in everyday social contexts. (A figure of speech, since he's still alive.)
ETA: Au-mann puns, the poor man’s gold!