That’s a lot of reiteration of the problem with Chapman’s writing, which was a reason I pointed to the reading list to begin with. I’m not trying to pull a “you must read all this before judging Chapman” Gish gallop, but trying to figure out whether there’s some common strain in what Nietzsche, Heidegger, Wittgenstein, Dreyfus, Hofstadter and Kegan are going on about that looks like what Chapman is going for. Maybe the idea is just really hard, harder than the Sequences stuff, but at least you’ve got several people taking different approaches to it, so you have a lot more to work with.
And it might be that there isn’t, and this is all just Chapman flailing about. When someone builds a working AGI out of just good basic common-sense rationality ideas, I’ll concede that he probably was. In the meantime, it seems like it misses the point to criticize an example, whose point is that the task is obvious to humans, for being obvious to humans. I took the whole point of the example to be that we’re still mostly at the level of “dormitive principle” explanations for how humans figure this stuff out; now the AI programming problem gives us some roadmap for what an actual understanding of this stuff would look like, and suddenly figuring out the eggplant-water thing from first principles isn’t so easy anymore. (Of course, now we also have the Google trick of having a statistical corpus of a million cases of humans asking for water from the fridge, where we can observe them not being handed eggplants, but taking that as the final answer doesn’t seem quite satisfactory either.)
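To make the contrast concrete, here’s a minimal toy sketch of what that corpus-lookup trick might amount to: nearest-neighbor retrieval over logged request/outcome pairs. The tiny corpus, the tokenizer and the scoring are assumptions of mine for illustration, not anyone’s actual system, and the point is precisely that nothing in it understands *why* an eggplant isn’t water; it just echoes what humans were observed to hand over.

```python
# Toy sketch (not any real system): answer "what should I hand over?" by
# nearest-neighbor lookup over a logged corpus of human request/outcome pairs.
from collections import Counter

# Hypothetical logged data: (utterance, object actually handed over)
CORPUS = [
    ("can you get me some water from the fridge", "water bottle"),
    ("grab me a water please", "water bottle"),
    ("hand me the eggplant for the curry", "eggplant"),
    ("could I get something cold to drink", "water bottle"),
]

def tokens(s: str) -> set:
    return set(s.lower().split())

def likely_object(request: str, k: int = 3) -> str:
    """Return the object most often handed over for the k most similar logged requests."""
    scored = sorted(CORPUS,
                    key=lambda pair: len(tokens(request) & tokens(pair[0])),
                    reverse=True)
    top = [obj for _, obj in scored[:k]]
    return Counter(top).most_common(1)[0][0]

print(likely_object("get me water from the fridge"))  # -> "water bottle", not "eggplant"
```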
The other thing is the Kegan levels and the transition from the rule-following human, who’s already doing pretty much AI-complete tasks but very much thinking inside the box, to the system-shifting human. A normal human is just going to say “there are alarm bells ringing, smoke coming down the hallway and lots of people running towards the emergency exits, maybe we should switch from the weekly-business-review-meeting frame to the evacuating-the-building frame about now”, while the business-review-meeting robot will keep presenting sales charts until it burns to a crisp. The AI engineer is going to ask, “how do you figure out which inputs should cause a frame shift like that, and how do you figure out which frame to shift to?” The AI scientist is going to ask, “what’s the overall formal meta-framework for designing an intelligent system that can learn to dynamically recognize when its current behavioral frame has been invalidated and to determine the most useful new behavioral frame in the situation?” We don’t really seem to have AI architectures like this yet, so maybe we need something more heavy-duty than SEP pages to figure them out.
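Here’s a toy agent loop for the engineer’s version of the question. Everything in it is a made-up illustration, and the genuinely hard part, where the frames, their invalidation conditions and their relevance scores come from without a human hand-coding them, is exactly what gets stubbed out as plain lambdas below.

```python
# Toy frame-shifting loop. The frames and their conditions are hand-coded stubs;
# the open problem is an architecture that could learn them instead.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Frame:
    name: str
    invalidated_by: Callable[[Dict], bool]   # should we abandon this frame?
    relevance: Callable[[Dict], float]       # how apt is this frame right now?
    act: Callable[[Dict], str]

FRAMES: List[Frame] = [
    Frame(
        "business_review",
        invalidated_by=lambda obs: bool(obs.get("alarm") or obs.get("smoke")),
        relevance=lambda obs: 1.0 if obs.get("meeting_scheduled") else 0.0,
        act=lambda obs: "present next sales chart",
    ),
    Frame(
        "evacuate_building",
        invalidated_by=lambda obs: False,
        relevance=lambda obs: 10.0 if (obs.get("alarm") or obs.get("smoke")) else 0.0,
        act=lambda obs: "head for the emergency exit",
    ),
]

def step(current: Frame, obs: Dict) -> Frame:
    """Keep the current frame unless its invalidation condition fires;
    then switch to whichever known frame scores as most relevant."""
    if current.invalidated_by(obs):
        current = max(FRAMES, key=lambda f: f.relevance(obs))
    return current

frame = FRAMES[0]
observations = [
    {"meeting_scheduled": True},
    {"meeting_scheduled": True, "smoke": True, "alarm": True},
]
for obs in observations:
    frame = step(frame, obs)
    print(frame.name, "->", frame.act(obs))
# business_review -> present next sales chart
# evacuate_building -> head for the emergency exit
```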
So that’s part of what I understand Chapman to be trying to do. Hofstadter-like stuff, except actually trying to tackle it somehow instead of just going “hey, I guess this stuff is a thing and it actually looks kinda hard” like Hofstadter did in GEB. And the background reading has the fun extra feature that before about the 1970s nobody was framing this stuff in terms of how you’re supposed to build an AI, so those authors come at it from quite different viewpoints.
It’s not “hard”, just remarkably subtle. Frustratingly vague, often best conveyed by demonstrating wordless experiences. I often describe the idea as a cluster, not something that can be nailed down. And that’s not from lack of trying; it’s because it’s a purposely slippery concept.
Try to hold a slippery concept and talk about it, and it just doesn’t convert to words very well.
That’s the approach where you try to make another adult human recognize the thing based on their own experiences, which is how we’ve gone about this since the Axial Age. Since the 1970s, a second approach has been on the table: how would you program an artificial intelligence to do this? If we could manage that, it would in theory be a much more robust statement of the case, but it would also probably be much, much harder for humans to actually follow by going through the source code. I’m guessing this is what Chapman has in mind when he specifies “can be printed in a book of less than 10kg and followed consciously” for a system intended for human consumption.
Of course, there’s also a landscape between the everyday-language descriptions, which are simple but potentially confusion-engendering, and the full formal specification of a human-equivalent AGI. We do know that either humans work by magic or a formal specification of a human-equivalent AGI exists, even if we can’t yet write down the (probably more than 10 kg) book containing it. So either Chapman’s stuff hits somewhere in the landscape between present-day reasoning writing, which piggybacks on existing human cognitive capabilities, and the Illustrated Complete AGI Specification, or it doesn’t; but the landscape should be there either way, and getting some maps of it could be very useful.