What do you think of David Chapman’s stuff? I’m thinking of his curriculum sketch in particular.
I don’t think most rationalists were very excited by it, though; see e.g. Scott’s brief look at it in 2013 (and David’s response downthread), and an old comment thread between David and Kaj Sotala that I can no longer find.
I don’t plan to read David Chapman’s writings. His website is titled “Meta-rationality”. When I’m teaching rationality, one of the first things I have to tell students, repeatedly, is to stop being meta.
Empiricism is about reality. “Meta” is at least one step away from reality, and therefore at least one step farther from empiricism.
Telling people to stop being meta is very important, but I think you may be misunderstanding the way in which Chapman is using the term. AFAICT it’s really more about being able to step back from your own viewpoint and assumptions, and to effectively apply a mental toolbox and different mental stances to a problem that isn’t trivial or already solved. Personally, I’ve found it has helped keep me from going too meta in a lot of cases, by re-orienting my thinking toward what’s needed.
Chapman’s old work programming Pengi with Phil Agre at the MIT AI Lab seems to suggest otherwise, but I respect your decision not to read his writings, since it mirrors my own after I attempted to read them and failed to grok him.