Among new atheists, even the notion that the nature of truth is up for discussion is very threatening.
I don’t know if it’s threatening, and I doubt that it applies to Dennett, but the other guys can’t seem to even conceive of truth beyond correspondence.
But if it’s a matter of people being open to changing their world view, of even understanding that they have one and that other people have other world views, then it’s Korzybski they need to read, not Jaynes.
The guy with the blog is Chapman?
I don’t see a discussion. I see a pretty good video, and blog comments in which I see no value at all. I had characterized them more colorfully, but seeing that Chapman is on the list, I decided to remove the color.
I’m not trying to be rude here, but his comments are just very wrong about probability, and thereby entirely clueless about the people he is criticizing.
As an example:
It’s all just arithmetic.
No! Probability as inference most decidedly is not “just arithmetic”. Math, taken axiomatically, tells you nothing about the world. All our various mathematics are conceptual structures that may or may not be useful in the world.
That’s where Jaynes, and I guess Cox before him, adds in the magic. Jaynes doesn’t proceed axiomatically. He starts with the problem of representing degrees of confidence in a computer, and proceeds to show how the solution to that problem entails certain mathematics. He doesn’t proceed by “proof by axiomatic definition”; he shows that the conceptual structures work for the problem being attacked.
Also, in Jaynes’s presentation of probability theory as an extension of logic, P(A|B) isn’t axiomatically defined as P(AB)/P(B); it is the mathematical value assigned to the plausibility of a proposition A given that proposition B is taken to be true. It’s not about counting, it’s about reasoning about the truth of propositions given our knowledge.
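To make that reading concrete, here is a minimal Python sketch of the kind of calculation being described. The propositions and numbers are invented for illustration; the point is only that the product and sum rules of plausible reasoning, not counting, are doing the work:

```python
# Probability as extended logic: P(A|B) is read as "the plausibility of
# proposition A, given that proposition B is taken to be true". Bayes' rule
# follows from applying the product rule P(AB) = P(A|B)P(B) = P(B|A)P(A).

def bayes_update(prior_A, lik_B_given_A, lik_B_given_not_A):
    """Plausibility of A after learning that B is true."""
    # Product rule twice, plus the sum rule for the denominator:
    # P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|~A)P(~A)]
    numerator = lik_B_given_A * prior_A
    evidence = numerator + lik_B_given_not_A * (1.0 - prior_A)
    return numerator / evidence

# A: "the patient has the condition"; B: "the test came back positive".
# All numbers below are made up for the example.
posterior = bayes_update(prior_A=0.01,
                         lik_B_given_A=0.95,
                         lik_B_given_not_A=0.05)
print(f"P(A|B) = {posterior:.3f}")  # ~0.161: strong evidence, A still unlikely
```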
I guess if he’s failing utterly to understand what people are talking about, what they’re saying might look like ritual incantation to him. I’m sure it is for some people.
Is there some reason I should take David Chapman as particularly authoritative? Why do you find his disagreement with senior LW people of particular note?
Is there some reason I should take David Chapman as particularly authoritative? Why do you find his disagreement with senior LW people of particular note?
Because senior LW people spent effort replying to him. The post led to LW posts such as “What Bayesianism taught me”, Scott Alexander wrote “On first looking into Chapman’s Pop-Bayesianism” in response, and Kaj Sotala had a lively exchange in the comments of that article.

I think that exchange, taken in total, provides a foundation for clearing up the question of what Bayesianism is, and I do consider that an important question.
As far as authority goes, David Chapman did publish academic papers about artificial intelligence, and he did develop solutions to previously unsolved AI problems. When he says that there’s no sign of Bayes’ rule in the code he used to solve an AI problem, he just might be right.
Funny, I’ve been making that point for a while. I doubt that it applies to Dennett, but the other guys can’t seem to conceive of truth beyond correspondence.
Dennett is pretty interesting. Instead of asking what various people mean when they say “consciousness”, he just assumes he knows and declares it nonexistent. The idea that maybe he doesn’t understand what other people mean by the term doesn’t come up in his thought.
Dennett writes about how detailed visual hallucinations are impossible. I have had experiences where what I visually perceived didn’t change much whether or not I closed my eyes; it was after I spent five days in an artificial coma. I know two additional people, whom I have met face to face, who have had similar experiences.
I also have access to various accounts of people hallucinating things in other contexts via hypnosis. My own ability to let myself go is unfortunately not good, so I still lack first-hand accounts of some other kinds of hallucination.
A week ago, at our local LW meetup, I spoke with someone who said that while “IQ” obviously exists, “free will” obviously doesn’t. At the time I didn’t know exactly how to resolve the issue, but it seems to me that those are both concepts that exist on the same level: you won’t find any IQ atoms, and you won’t find any free-will atoms, but both are mental concepts that can be used to model things about the real world.
That’s a problem that arises from not having a well-defined idea of what it means for concepts to exist. In practice it leads to terms like “depression” being defined by committee and written down in the DSM-5, with people simply assuming that depression exists without asking themselves in what way it exists. If people asked themselves in what way it exists, that might provide grounds for a new way to think about depression.
But if it’s a matter of people being open to changing their world view, of even understanding that they have one and that other people have other world views, then it’s Korzybski they need to read, not Jaynes.
The problem with Korzybski is that he’s hard to read. Reading and understanding him is going to be hard work for most people who haven’t been exposed to that kind of thinking.
What might be more readable is Barry Smith’s paper “Against Fantology”. It’s only 20 pages.
the idea being that it would be possible to save the fantological doctrine by denying the existence of those entities which cause it problems. Many heirs of the fantological world view have in this way found it possible to avoid the problems raised for their doctrines by apparent examples of true predications in the category of substance by denying the existence of substances.
I think that’s what the New Atheists like Dennett do. They simply pretend that the things that don’t fit in their worldview don’t exist.
I think you’re being unfair to Dennett. He actually has availed himself of the findings of other fields, and has been at the consciousness shtick for decades. He may not agree, but it’s unlikely he is unaware.
And when did he say consciousness was nonexistent?
Cite? That seems a rather odd thing for him to say, and not particularly in his ideological interests.
Dennett writes about how detailed visual hallucinations are impossible.
Cite here? Again, except for supernatural bogeymen, my experience of him is that he recognizes that all sorts of mental events exist, but maybe not in the way that people suppose.
They simply pretend that the things that don’t fit in their worldview don’t exist.
Not accurate. If those things don’t fit in their world views, they don’t exist in them, so they’re not pretending.
On the general brouhaha with Chapman, I seem to have missed most of that. He did one post on Jaynes and A_p, which I read, as I’ve always been interested in that particular branch of Jaynes’s work. But the post made a fundamental mistake, in my opinion and in the opinion of others, and I think Chapman admitted as much before all of his exchanges were over. So even with Chapman running the scoreboard, he’s behind in points.
Well, for one thing, Chapman was (at least at one point) a genuine, credentialed AI researcher, and a good fraction of the content on Less Wrong seems to be a kind of armchair AI research. That’s the outside view, anyway. The inside view (from my perspective) matches your evaluation: he seems just plain wrong.
I think a few people here are credentialed, or working on their credentials in machine learning.
But almost everything useful I learned, I learned by just reading the literature. There were three main guys I thought had good answers: David Wolpert, Jaynes, and Pearl. I think time has put its stamp of approval on my taste.
Reading more from Chapman, he seems fairly reasonable as far as AI goes, but he’s got a few ideological axes to grind against some straw men.
On his criticisms of LW and Bayesianism: is there anyone here who doesn’t realize you need algorithms and representations beyond Bayes’ rule? I think not too long ago we had a similar straw-man massacre where everyone said “yeah, we have algorithms that do information processing other than Bayes’ rule, duh”.
And he really should have stuck it out longer in AI, as Hinton has gone a long way toward solving the problem Chapman thought was insurmountable: getting a proper representation of the space to analyze from the data, without human spoon-feeding. You need a hidden-variable model of the observable data, and you should be able to get it by predicting subsets of the observables from the other observables. That much was obvious; it just took Hinton to find a good way to do it. Others are coming up with generalized learning modules and mapping them to brain constructs. There was never any need to despair of progress.
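Since that paragraph compresses a lot, here is a toy numpy sketch of the masking idea it describes: hide a random subset of the observables and train a hidden layer to predict them from the rest. The data and architecture are invented for the sketch; this illustrates the principle, not a reconstruction of Hinton’s actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observables: 8 visible variables driven by 2 hidden factors.
n, d, h = 2000, 8, 2
factors = rng.normal(size=(n, h))            # the "true" hidden variables
mixing = rng.normal(size=(h, d))
X = np.tanh(factors @ mixing) + 0.1 * rng.normal(size=(n, d))

# One-hidden-layer masked autoencoder: reconstruct the hidden entries
# of each row from the entries left visible.
W1, b1 = rng.normal(scale=0.1, size=(d, h)), np.zeros(h)
W2, b2 = rng.normal(scale=0.1, size=(h, d)), np.zeros(d)
lr = 0.05

for epoch in range(200):
    mask = rng.random((n, d)) < 0.3          # hide ~30% of each row
    X_in = np.where(mask, 0.0, X)            # corrupted input
    H = np.tanh(X_in @ W1 + b1)              # learned hidden representation
    X_hat = H @ W2 + b2                      # reconstruction
    err = (X_hat - X) * mask                 # score only the hidden entries
    # Gradients for squared error on the masked entries, by backprop.
    gW2, gb2 = H.T @ err / n, err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)
    gW1, gb1 = X_in.T @ dH / n, dH.mean(axis=0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g

print("masked-entry MSE:", float((err**2).sum() / mask.sum()))
```

Nothing in the training loop is Bayes’ rule, and yet the hidden layer is pushed toward a compressed model of the observables, with no human spoon-feeding of the representation.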