Postrationality is to rationality as alternative medicine is to medicine.
“There is no alternative medicine. There is only medicine that works and medicine that doesn’t work.”
You are doing some fancy footwork with the labels there. As soon as a post-rationality method is describable, repeatable and documentable, it would fit into rationality. But what about the concepts that don’t fit into words so easily? The map-territory bridge problem, for instance: describe how to make a bridge between the map and the territory.
Alternative medicine has some interesting features worth investigating (deliberately left vague).
It already fits. For example, in “The Rhythm of Disagreement”, Eliezer talks about the “rhythm” of Bayesian reasoning, the fundamentals that come before any formalised method, that you have to adhere to even if you don’t have any numbers to apply Bayes’ Theorem to. He illustrates this with various examples, rather than laying down a “describable, repeatable and documentable” method. This bears out Raemon’s comment that everything advertised as postrationality is already in Eliezer!rationality. And Eliezer!rationality is the rationality we are concerned with here at LessWrong.
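For contrast with that numberless "rhythm", the formalised method being referred to, Bayes' Theorem applied to actual numbers, is simple enough to sketch. This is a minimal illustration only; the function name and the prior/likelihood figures are made up for the example, not taken from the thread:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior P(H|E) via Bayes' Theorem:
    P(H|E) = P(E|H) * P(H) / P(E), where
    P(E)   = P(E|H) * P(H) + P(E|~H) * P(~H).
    """
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a 1% prior, and a test that detects 90% of
# true cases but false-alarms 5% of the time.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # roughly 0.154
```

The point of the comment is precisely that most real disagreements give you no such clean numbers to plug in, yet the underlying discipline still applies.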
Gosh, I hope not. I hope we have developed the craft further than the guy who mostly stopped publishing on the site by 2011.
I would appreciate “LessWrong” not standing in a shadow, and instead building on existing work.
I propose that PR is a natural progression from R (hence the name PR). I expect to see places where R occasionally stretched into territory that made space for PR to grow out of.
So would I, but I don’t think I’ve seen much that qualifies. There is the work on AGI, but I don’t know how far that has gone, because I don’t follow it (a topic of vital importance that I have chosen to ignore, there being only so many hours in the day). Perhaps CFAR? But they don’t publish, and I have not been to any of their events.
I do think there is a lot of good writing by people who are not Eliezer. At the very least Scott Alexander’s writing, but also Luke’s, Kaj Sotala’s, Anna Salamon’s and many others.
I agree. My earlier judgment was too negative.
Except that for some techniques I had to step out of the rational into the “weird” to develop them. For example, Focusing is a technique in rationality that deals with the interior subjective experience of a feeling, the knot of a problem, phenomena that could often also be referred to as “energy channels”. A very alt-medicine-esque concept. I put Focusing more in PR territory than in R territory, particularly with the view that developing further techniques needs to be done from a different experiential space.
That is, as Thomas Kuhn suggests in proposing paradigm shifts in “The Structure of Scientific Revolutions”: to get novel science we need to do novel experiments with novel apparatuses. To revolutionise what we know, we need to explore something we haven’t already explored.
I think if rationality needs to regularly explore concepts that don’t fit into words in order to be successful, then rationality should just not stress out when it can’t put concepts into words.
(It seems that both epistemic and instrumental rationality need this, for different reasons).
My biggest pet peeve with post-rationality (esp. as described as relating to LessWrong) is that it doesn’t seem to be doing anything that Eliezer doesn’t at least point to once in the sequences and say “this seems like it’s going to be important”, even if Eliezer isn’t an expert on it so didn’t have much to say.
I get a lot of flak from rationalists when I try to do stuff in the weird word territory.
It’s clear to me that the delineation is both necessary and helpful: for people who are still getting the hang of interpreting weird words, and for people well versed in them to find each other and compare notes.
To mush all PR into R isn’t making anyone happy.
I see R folk complaining about PR.
I see reductionist R folk trying to deny the existence of PR.
I see PR folk laughing at the problem because of some variation on “it seems so obvious now”.
I see PR folk bitter and annoyed because to them there is clearly something different that is not easy to delineate.
I see all this and more. We aren’t winning any games of “I mapped it better” by mushing two categories together.
I’m generally sympathetic to the postrationalists on several points, but I agree with this. Coming up with the whole postrationality label was a bad idea in the first place, as it tends to serve tribal purposes and distracts people from the ways in which postrationality is just a straightforward extension of standard rationality.
(Ironically, one could argue that I already fit the criteria of “postrationalist”, making it slightly weird for me to say that I’m sympathetic to the postrationalists, rather than being one of them. But to some extent I don’t want to identify as post-rat, exactly because I don’t think that post-rat is a good term and that good rat is already post-rat.)
There are definitely rationalist positions that have unexamined potential in the PR direction, where a good excuse is “I haven’t looked yet” (and a bad excuse might be “that’s dumb, I don’t want to look there”). In that sense there is rationality that has not yet reached post-rational investigations.
I had to have some sense and experience of investigating and knowing the world before I turned that machine on itself and started to explore the inner workings of the investigation mechanism.
While “rationality” claims to be defined as “stuff that helps you win”, and while on paper, if it turned out that the Sequences didn’t help you arrive at correct conclusions, we’d stop calling that “rationality” and call something else “rationality”, in practice the word “rationality” points at “the stuff in the Sequences” rather than “stuff that helps you win”. People with stuff that helps you win that isn’t the type of thing you’d find in the Sequences have to call it something else to be unambiguous. Such is language.
The central claim of the Sequences is that what they expound is the stuff that helps you come to true beliefs and effective actions. It seems to me that that claim is well-founded. All the specific things I’ve seen touted as “Here’s where ‘rationality’ is wrong” have always seemed to me to either be addressed in the Sequences already, or to be so confused that even the writer can’t explain them. And all the things that do go beyond the Sequences (e.g. CFAR workshops—about which I have no detailed knowledge) do not brand themselves in opposition as “post-rationality”, any more than “applied mathematics” would be called “post-mathematics”.
Are you sure that post-rationality is the opposite of rationality? Where did that idea come from?
I’ve been involved in the loosely defined PR cluster for a while and I’ve not seen such a thing yet. Do you have a link?
“Opposition”, not “opposite”.
PR is not in opposition either.
David Chapman, to take one major example, is pretty oppository in his prospectus for his proposed book “In the Cells of the Eggplant”. I expect he would call it “extending”, but it’s more like hacking off all the limbs to replace them with tentacles.
My impression is that Chapman is objecting to a different kind of rationality, which he defines in a more specific and narrow way. At least, several times in conversation when I’ve objected to him that LW-rationality already takes postrationality into account, he has responded with something like “LW-rationality has elements of what I’m criticizing, but this book is not specifically about LW-rationality”.
My impression agrees. I am inclined to say that Chapman seems to be targeting the kind of rationality criticized in Seeing Like a State, save that In the Cells of the Eggplant is about how unsatisfying the perspective is rather than the damage implementation does.