Filtering for “people who can afford to pay for a workshop” works pretty well.
This is surprising to me. It seems to assume that income mostly tracks general competence, which doesn’t seem true to me. There are a lot of people who seem to have these traits but would find it really difficult to pay for this, and vice versa.
The filtering described here seems moderately specific but not sensitive, whether or not you agree that the “income implies competence” relationship is strong.
It seems true that those who are interested in and can pay for a $4k course of this type are more likely to have 17 of the attributes in question than a person picked at random from the population. However, the filter tells you nothing about, and completely excludes, the large number of people who meet the “have 17 of these attributes” criterion but not the “have $4k to spend on a course, or the time to take it” criterion.
The filter allows in a population of people with above-average chances of meeting the attribute criterion, but blocks a large and unknown number of other people who would also meet it.
It is potentially good for creating a desired environment in the course (having mostly people with a lot of the desired attributes), but it is not a good filter for identifying the much larger population of people who might be interested in and benefit from the course (described in the article as having 17 of the attributes and therefore being capable of picking up the other two).
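To make the specific-but-not-sensitive point concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (population size, base rate of having 17+ attributes, willingness/ability to pay) is a made-up assumption for illustration only; the point is the shape of the result: the admitted group is strongly enriched for the attributes relative to the base rate, while the large majority of people with the attributes never pass the filter.

```python
# Illustrative only: all numbers are invented assumptions, not data from
# the article or this thread. They just show how a price filter can enrich
# the admitted group while excluding most qualified people.

population = 1_000_000          # hypothetical pool of potentially-interested people
p_has_attributes = 0.02         # assume 2% have 17+ of the attributes
p_pay_given_attrs = 0.10        # assume 10% of those can/will pay $4k for a workshop
p_pay_given_no_attrs = 0.005    # assume 0.5% of everyone else would pay anyway

has_attrs = population * p_has_attributes
no_attrs = population - has_attrs

true_positives = has_attrs * p_pay_given_attrs        # qualified and pass the filter
false_positives = no_attrs * p_pay_given_no_attrs     # unqualified but pass the filter
false_negatives = has_attrs - true_positives          # qualified but filtered out

sensitivity = true_positives / has_attrs              # fraction of qualified people admitted
precision = true_positives / (true_positives + false_positives)

print(f"base rate of attributes:   {p_has_attributes:.0%}")
print(f"precision among attendees: {precision:.0%}")     # ~29%: enriched vs. 2% base rate
print(f"sensitivity of filter:     {sensitivity:.0%}")   # 10%: most qualified people excluded
print(f"qualified people excluded: {false_negatives:,.0f}")
```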
Nod. I’m not actually particularly attached to this point, nor do I think $4000 is necessarily the right amount to get the filtering effect if you’re aiming for that. I do think this approach is insufficient for me, because the people I most hope to intervene on with my own rationality training are college students, who don’t yet have enough income for this approach to work.
But, also, well, you do need some kind of filter.
Speaking for myself, not sure what Critch would say:
There seems to be an assumption here that “if we didn’t filter out people unnecessarily, we’d be able to help more people.” But I think the throughput of people-who-can-be-helped here is quite small. I don’t think it’s possible to scale this sort of org to help thousands of people per year without compromising the org.
(In general in education, there is a problem where educational interventions work initially, because the educators are invested and have a nuanced understanding of what they’re trying to accomplish. But when they attempt to scale, they get worse teachers who are less invested and have a less deep understanding of the methodology, because conveying the knowledge is hard.)
So, I think it’s more like “there is a smallish number of people this sort of process/org would be able to help. There are going to be thousands/millions of people who could be helped, but you don’t have time to help them all.” That’s sort of baked in.
So, it’s not necessarily “a problem” from my perspective if this filters out people who I’d have liked to have helped, so long as the program successfully outputs people who go on to be much more effective. (Past me would be more sad about that, but it’s something I’ve already grieved)
I do think it’s important (and plausible) that this creates some kinds of distortions in which people you’re selecting for, and that those distortions could add up in aggregate. But that’s a somewhat different argument from the one you and sanyer presented.
But, still, ultimately the question is “okay, what sort of filtering mechanism do you have in mind, and how well does it work?”.
Well, you’re filtering on both “can afford to pay for a workshop” and “wants to attend a workshop that charges that much”...