I think it probably doesn’t make sense to talk about “representative samples”.
Here are a bunch of different things this could mean:
A uniform sample from people who have done any work related to AI safety.
A sample from people weighted by their influence/power in the AI safety community.
A sample from people weighted by how much I personally respect their views about AI risk.
Maybe what you mean is: “I think this sample underrepresents a worldview that I think is promising. This worldview is better represented by MIRI/Conjecture/CAIS/FLI/etc.”
I think programs like this one should probably just apply editorial discretion and note explicitly that they are doing so.
(This complaint also applies to the post, which does try to use a notion of “representative sample”.)