Unfortunately for Korzybski, General Semantics never really took off or achieved prominence as the new field he had set out to create. It wasn't entirely without success, and it has been taught at some colleges. But despite his attempt to create something grounded in science and empiricism, over the years the empiricism leaked out of General Semantics and a large amount of woo and pseudoscience leaked in. This looks like a failure mode similar to what had started happening with Origin before I stopped the project.
With Origin, I introduced a bunch of rough draft concepts and tried to bake in the idea that these were rough ideas that should be iterated upon. However, because of the halo effect, those rough drafts were taken as truth without question. Instead of quickly iterating out of problematic elements, the problematic elements stuck around and became accepted parts of the canon.
Something similar seems to have happened with General Semantics: at a certain point it stopped being viewed as a science to iterate upon and began being viewed in a dogmatic, pseudoscientific way. It would eventually spin off a number of actual cults, like Scientology and Neuro-Linguistic Programming, and while the Institute of General Semantics still exists and still does things, no one seems to really be pursuing Korzybski's goal of a science of human engineering. That goal would sit on a shelf for a long time until it was finally picked back up by one Eliezer Yudkowsky.
This makes me wonder to what extent we fail at this in the rationality movement. I think we’re better at it, but I’m also not sure we’re as systematic about fighting against it as we could be.
I agree. I love LessWrong (and its surroundings), but I think it hasn't yet lived up to its promise. To me it seems the community/movement suffers somewhat from focusing on the wrong stuff and from premature optimization.
It also seems that the Sequences suffer from the same halo effect as the author's project (Origin, which I'm not familiar with). They were written more than 10 years ago, ending on a note that there was still much to be discovered and improved about rationality; even with their release as a book, Eliezer noted in the preface the mistakes he made with them. Since there seems to be agreement on the usefulness of a body of information everybody is expected to read (e.g. "read the Sequences"), I'd expect there would at least be work or thought on some sort of second version.
Just to be clear, since intentions sometimes don't come through in text, I'm saying that out of love for the project, not spite. I came across this site a bit more than a year ago and have read a ton of content here; I both love it and am somewhat disappointed by it.
In short, I feel there’s still a level above ours.
I'd expect there would at least be work or thought on some sort of second version.
Note that the current version of R:AZ has been updated and is half as long as the original (with some additional edits in the works). There's definitely effort in this direction; it's just a lot of work.
Shorter definitely seems better. Ideally I think there’d be a version that was less than a hundred pages. Something as short and concise as possible. Do we really need to list every cognitive bias to explain rationality? How much is really necessary and how much can be cut?
It’s a nontrivial operation to figure out what stuff can be cut. The work isn’t just listing a bunch of facts, it’s weaving them in a compelling way that helps people integrate them. Trimming things down requires new ways of fitting them together.
(Basically I’m saying “yes, people are taking this seriously, and the reason the job isn’t done already is that it’s hard.”)
I think we’re better at it, but I’m also not sure we’re as systematic about fighting against it as we could be.
I’m trying to do my part by pointing out misuses or overuses of UDT (for example trying to derive strong conclusions about human rationality from it), at least when I see them on LW, and being as clear as I can about its flaws and inadequacies. I also try to do this for Aumann Agreement which is another idea that has the potential to become viewed in a dogmatic, pseudoscientific way.
I'd be interested in ideas on how to go about doing this more systematically.