Valid point, though I’m not sure the original post mentioned that.
Counterpoint: would that actually reduce the absolute number of real!alignment researchers? If the probability that a given inductee does real!alignment goes down, but the number of inductees goes way up and timelines get longer, it'd still be a net-positive intervention.
That's true given a fixed proportion of high-potential researchers amongst inductees—but I wouldn't expect that to hold. The more we go out and recruit people who're disproportionately unlikely to understand the true nature of the problem (i.e. likely candidates for "worse than doing nothing"), the more the proportion of high-potential inductees drops. [Also, I don't think there's much of a "timelines get longer" effect here.]
Obviously it's far from clear how it'd work out in practice; this may only be an issue with the most naïve approach. I do think it's worth worrying about—particularly given that there are no clean takebacks once the field has been expanded badly.
I don’t mean to argue against expanding the field—but I do think it’s important to put a lot of thought into how best to do it.