I’m going to outsource the answer to this to David Chapman; it’s a bit more than I could hope to fit in a response here.
It seems to be more than he can fit into the whole of his family of blogs. I’ve read quite a lot of what he’s written, but at last I gave up on his perpetual jam-tomorrow deferral of any plain setting out of his positive ideas.
I like to link to his recommended reading list instead of the main site, as a gesture towards what Chapman seems to be circling around while never quite landing on it. It’s still not a clear explanation of the thing, but at least it is more than one person’s viewpoint on the landscape.
Contrast The Sequences, where Eliezer simply wrote down what he knew, with exemplary clarity. No circling around and around, but a steady march through the partial order of dependencies among topics, making progress with every posting, building into a whole. That is what exposition should look like. No repeated restarting with ever more fundamental things that would have to be explained first while the goal gets further away. No gesturing towards enormous reading lists as a stopgap for being unable to articulate his own ideas about that material. Perhaps that is because he had something real to say?
In my experience, the subjective feeling that one understands an idea, even if it seems to have pin-sharp clarity, often does not survive trying to communicate it to another person, or even to myself by writing it down. The problem may not be that the thing is difficult to convey, but that I am confused about the thing. The thing may not exist, not correspond to anything in reality. (Explaining something to a machine, i.e. programming, is even more exposing of confusion.)
When a thing is apparently so difficult to communicate that however much ink one spills on the task, the end of it does not draw nearer, but recedes ever further, I have to question whether there is anything to be communicated.
In that reading list page, Chapman writes this:
I took the title of my book In the Cells of the Eggplant from a dialog in Understanding Computers and Cognition:
A. Is there any water in the refrigerator?
B. Yes.
A. Where? I don’t see it.
B. In the cells of the eggplant.
Was “there is water in the refrigerator” true?
That question can only be answered meta-rationally: “True in what sense? Relative to what purpose?”
That is not “meta-rationality”; that is plain rationality. Person A is asking about water to drink. The anecdote implies that there is none in the refrigerator, therefore the correct answer is “No.” The simple conundrum is dissolved by the ordinary concept of implicature. Even a child can tell that B’s reply is either idiotic or a feeble joke. That Chapman has titled his intended book after this vignette does not give me confidence in his project.
I have just recalled an anecdote about the symptoms of trying to explain something incoherent. If (so I read) you hypnotize someone and suggest to them that they can see a square circle drawn on the wall, fully circular and fully a square, they have the experience of seeing a square circle. Now, I’m somewhat sceptical about the reality of hypnosis, but not at all sceptical about the physical ability of a brain to have that experience, despite the fact that there is no such thing as a square circle.
If you ask that person (the story goes on) to draw what they see, they start drawing, but keep on erasing and trying again, frustrated by the fact that what they draw always fails to capture the thing they are trying to draw.
Edit: the story is from Edward de Bono’s book “Lateral Thinking: An Introduction” (previously published as “The Use of Lateral Thinking”).
I think you might be reading a bit too much into things here. Whatever his other skills and flaws, Eliezer is exceptional in his verbal communication abilities, so even if the most rationalist of rationalists had set out to write the Sequences, someone not on par with Eliezer’s verbal skills likely would not have been as successful: they would have gotten lost and left lots of dangling pointers to things to be explained later. Chapman is facing the normal problems of trying to explain a complex thing when you’re closer to the mean.
The problem with this reasoning is: if you can’t explain it, just how exactly are you so sure that there is any “it” to explain?
If we’re talking about some alleged practical skill, or some alleged understanding of a physical phenomenon, etc., then this is no obstacle. Can’t explain it in words? Fine; simply demonstrate the skill, or make a correct prediction (or several) based on your understanding, and it’ll be clear at once that you really do have said skill, or really do possess said understanding.
For example, say I claim to know how to make a delicious pie. You are skeptical, and demand an explanation. I fumble and mumble, and eventually confess that I’m just no good at explaining. But I don’t have to explain! I can just bake you a pie. So I can be perfectly confident that I have pie-baking skills, because I have this pie; and you can be confident in same, for the same reason.
Similar logic applies to alleged understanding of the real world.
But with something like this—some deep philosophical issue—how can you demonstrate to me that you know something I don’t, or understand something I don’t, without explaining it to me? Now, don’t get me wrong; maybe you really do have some knowledge or understanding. Not all that is true, can be easily demonstrated.
But without a clear verbal explanation, not only do I have no good reason at all to believe that “there’s a ‘there’ there”… but neither do you!
I may well have knowledge of things, gained through experience, that I cannot verbalize well or explain to myself in a systematized way. To suggest that I can only have such things if I can explain them is to assume against the very point I’m making in the original post. You are free to disagree with that point, but I want to make clear that I think this is a difference of assumptions, not a difference of reasoning from a shared assumption.
So… you have experience you can’t verbalize or explain, to yourself or to others; and through this experience, you gain knowledge, which you also can’t adequately verbalize or explain, to yourself or to others? Nor can you in any meaningful way demonstrate this knowledge or its fruits?
Once again, I do not claim that this proves that you don’t have any special knowledge or understanding that you claim to have. But it seems clear to me that you have no good reason for believing that you have any such thing—and much less does anyone else have any reason to believe this.
And, with respect, whatever assumption you have made, which would lead you to conclude otherwise… I submit to you that this assumption has taken you beyond the bounds of sanity.
Improvements to subjective well being can be extremely legible from the inside and fairly noisy from the outside.
Fair enough, that’s true. To be clear, then—is that all this is about? “Improvements to subjective well being”?
I would imagine it’s also about: not having such an explanation right now, but being confident you will have one soon. For an extreme case with high confidence: I see a ‘proof’ that 1=2. I may be confident that there is at least one mistake in the proof before I find it. With less confidence, I may guess that the error is ‘dividing by zero’ before I see it.
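(To make that example concrete, here is the standard bogus derivation, reconstructed by me rather than taken from the comment above; the guessed-at error is exactly the division by zero hidden in cancelling a − b.)
```latex
\begin{align*}
\text{Let } a &= b. \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a+b &= b \qquad \text{(dividing both sides by } a-b\text{, which is } 0\text{)} \\
2b &= b \\
2 &= 1
\end{align*}
```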
That’s a lot of reiteration of the problem with Chapman’s writing, which was a reason I pointed to the reading list to begin with. I’m not trying to pull a “you must read all this before judging Chapman” Gish gallop, but trying to figure out whether there’s some common strain in what Nietzsche, Heidegger, Wittgenstein, Dreyfus, Hofstadter and Kegan are going on about that looks like what Chapman is trying to go for. Maybe the idea is just really hard, harder than the Sequences stuff, but at least you’ve got several people taking different approaches to it, so you have a lot more to work with.
And it might be that there isn’t, and this is all just Chapman flailing about. When someone builds a working AGI with just good and basic common-sense rationality ideas, I’ll concede that he probably was. In the meantime, it seems kind of missing the point to criticize an example, whose point is that it’s obvious to humans, for being obvious to humans. I took the whole point of the example to be that we’re still mostly at the level of “dormitive principle” explanations for how humans figure this stuff out; now the AI programming problem gives us some roadmap for what an actual understanding of this stuff would look like, and suddenly figuring out the eggplant-water thing from first principles isn’t that easy anymore. (Of course now we also have the Google trick of having a statistical corpus of a million cases of humans asking for water from the fridge, where we can observe them not being handed eggplants, but taking that as the final answer doesn’t seem quite satisfactory either.)
The other thing is the Kegan levels and the transition from the rule-following human, who is already doing pretty AI-complete tasks but very much thinking inside the box, to the system-shifting human. A normal human is just going to say “there are alarm bells ringing, smoke coming down the hallway and lots of people running towards the emergency exits, maybe we should switch from the weekly business review meeting frame to the evacuating-the-building frame about now”, while the business review meeting robot will continue presenting sales charts until it burns to a crisp. The AI engineer is going to ask, “how do you figure out which inputs should cause a frame shift like that, and how do you figure out which frame to shift to?” The AI scientist is going to ask, “what’s the overall formal meta-framework for designing an intelligent system that can learn to dynamically recognize when its current behavioral frame has been invalidated and to determine the most useful new behavioral frame in this situation?” We don’t seem to really have AI architectures like this yet, so maybe we need something more heavy-duty than SEP pages to figure them out.
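To make the engineer’s version of the question concrete, here is a minimal toy sketch in Python. This is entirely my own illustration, not anything Chapman (or anyone else) has proposed, and every class and name in it is invented: an agent carries a current frame, each frame declares when it still applies, and the loop shifts frames when the current one is invalidated.
```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Frame:
    name: str
    applies: Callable[[Dict[str, bool]], bool]  # does this frame fit the current observations?
    act: Callable[[], str]                      # what the agent does while inside this frame


def choose_frame(frames: List[Frame], obs: Dict[str, bool]) -> Frame:
    """Pick the first frame whose preconditions hold. The genuinely hard,
    unsolved part is hidden in where the frames and their `applies`
    predicates would come from in the first place."""
    for frame in frames:
        if frame.applies(obs):
            return frame
    raise RuntimeError("no known frame applies")


meeting = Frame(
    name="weekly business review",
    applies=lambda obs: not obs.get("fire_alarm", False),
    act=lambda: "present the next sales chart",
)
evacuation = Frame(
    name="evacuate the building",
    applies=lambda obs: obs.get("fire_alarm", False),
    act=lambda: "head for the emergency exit",
)

frames = [evacuation, meeting]
current = meeting

for obs in [{"fire_alarm": False}, {"fire_alarm": True}]:
    if not current.applies(obs):              # current frame invalidated by new input
        current = choose_frame(frames, obs)   # shift to whichever frame now fits
    print(current.name, "->", current.act())
```
The hand-written `applies` predicates are of course exactly the part doing no explanatory work; the scientist’s question is what would let a system learn and reorganize those predicates, and the frame inventory itself, rather than have them hand-coded.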
So that’s a part of what I understand Chapman is trying to do. Hofstadter-like stuff, except actually trying to tackle it somehow instead of just going “hey, I guess this stuff is a thing and it actually looks kinda hard” like Hofstadter did in GEB. And the background reading has the fun extra feature that before about the 1970s nobody was framing this stuff in terms of how you’re supposed to build an AI, so those authors come at it from quite different viewpoints.
It’s not “hard”, just remarkably subtle. Frustratingly vague, often best described by demonstrating wordless experiences. I often describe the idea as a cluster, not something that can be nailed down. And that’s not from lack of trying; it’s because it’s a purposely slippery concept.
Try to hold a slippery concept and talk about it and it just doesn’t convert to words very well.
That’s the way where you try to make another adult human recognize the thing based on their own experiences, which is how we’ve gone about this since the Axial Age. Since the 1970s, a second approach, asking how you would program an artificial intelligence to do this, has been on the table. If we could manage that, it would in theory be a much more robust statement of the case, but it would also probably be much, much harder for humans to actually follow by going through the source code. I’m guessing this is what Chapman is thinking of when he specifies “can be printed in a book of less than 10kg and followed consciously” for a system intended for human consumption.
Of course there’s also a landscape between the simple but potentially confusion-engendering descriptions in everyday language and the full formal specification of a human-equivalent AGI. We do know that either humans work by magic or a formal specification of a human-equivalent AGI exists, even if we can’t yet write down the (probably more than 10 kg) book containing it. So either Chapman’s stuff hits somewhere in the landscape between present-day writing on reasoning, which piggybacks on existing human cognitive capabilities, and the Illustrated Complete AGI Specification, or it does not; but the landscape should be there anyway, and getting some maps of it could be very useful.