FWIW: I’m not sure I’ve spent >100 hours on a ‘serious study of rationality’. Although I have been around a while, I am at best sporadically active. If I understand the karma mechanics, the great majority of my ~1400 karma comes from a single highly upvoted top-level post I wrote a few years ago. I have pretty sceptical reflexes re. rationality, the rationality community, etc., and this is reflected in the fact that (I think) the modal post/comment I make is critical.
On the topic ‘under the hood’ here:
I sympathise with the desire to ask conditional questions which don’t inevitably widen into broader foundational issues. “Is moral nihilism true?” doesn’t seem the right sort of ‘open question’ for “What are the open questions in Utilitarianism?”. It seems better for these topics to be kept separate, regardless of how plausible the foundational ‘presumption’ is (“Is homeopathy/climate change even real?” also seems inapposite for ‘open questions in homeopathy/anthropogenic climate change’). (cf. ‘This isn’t a 101-space’.)
That being said, I think superforecasting/GJP and RQ/CART etc. are at least highly relevant to the ‘Project’ (even if this seems to be taken very broadly to normative issues in general—if Wei_Dai’s list of topics are considered elements of the wider Project, then I definitely have spent more than 100 hours in the area). For a question cluster around “How can one best make decisions on unknown domains with scant data”, the superforecasting literature seems some of the lowest hanging fruit to pluck.
Yet community competence in these areas has apparently declined. If you google ‘lesswrong GJP’ (or similar terms) you find posts on them, but these posts are many years old. There has been interesting work done in the interim: here’s something on whether the skills generalise, and something else on a training technique that not only demonstrably improves forecasting performance, but also has a handy mnemonic one could ‘try at home’. (The same applies to RQ: Sotala wrote a cool sequence on Stanovich’s ‘What intelligence tests miss’, but this is 9 years old. Stanovich has since written three books expressly on rationality, none of which have been discussed here as best I can tell.)
If there are multiple people who have spent >100 hours on the Project (broadly construed), I don’t understand why there isn’t a ‘lessons from the superforecasting literature’ write-up here (I am slowly working on one myself).
Maybe I just missed the memo and many people have kept abreast of this work (ditto other ‘relevant-looking work in academia’), and it is essentially tacit knowledge for people working on the Project, who are focusing their efforts on developing other areas. If so, it’s a shame this is not being made common knowledge, and I remain mystified by the apparent neglect of these topics relative to others: it is a lot easier to be sceptical that ‘there is anything there’ for (say) circling, introspection/meditation/enlightenment, Kegan levels, or Focusing than for the GJP, and doubt in the foundation should substantially discount the value of further elaborations on a potentially unedifying edifice.
[Minor] I think the first para is meant to be block-quoted?
I know of a lot of people who have continued studying and being interested in the forecasting perspective. I think the primary reason there has been less writing on it is just that LessWrong was dead for a while, so we’ve seen fewer writeups in general. (I also think some secondary factors contributed, but the absence of a publishing platform was the biggest.)
Given that the OP counts the Good Judgment Project as part of the movement, I think that certainly qualifies.
It’s my understanding that while the Good Judgment Project made progress on how to arrive at the right probability, we still lack good ways for people to integrate making regular forecasts into their personal and professional lives.
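To make the ‘integrate regular forecasts into everyday life’ idea concrete, here is a minimal, purely illustrative sketch of the kind of personal forecast log one might keep, scored with the Brier score used in the GJP literature. All names and example forecasts are invented for illustration; this is not any existing tool.

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    Lower is better; always answering 0.5 scores 0.25, so beating
    0.25 over many resolved questions suggests some calibration skill.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical personal log: (probability assigned, actual outcome as 0 or 1).
log = [
    (0.8, 1),  # "80% the report ships by Friday" -- it did
    (0.6, 0),  # "60% the meeting runs over" -- it didn't
    (0.9, 1),  # "90% the train is on time" -- it was
]

print(round(brier_score(log), 3))  # → 0.137
```

The point is not the arithmetic but the habit: writing probabilities down before resolution and scoring them afterwards is the part the GJP showed to be trainable, and it is the part most of us never build into our routines.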
Also, superforecasting and the GJP are no longer new. It seems not at all surprising that most of the words written about them date from when they were.