This is a cool attempt to get some insight into what’s going on in AI safety! It makes sense that we have had so few surveys directly about this: they are seriously difficult to do correctly, especially in a field like AI safety where there is a lot of vagueness in how people think about positive and negative outcomes.
Most of the comments are, and will be, about how the survey could ask better and more elaborate questions, which would mean the survey would ideally be several pages long and take at least 30 minutes to answer. But there’s a cut-off: if a survey is too long, a large part of the community might not answer it at all (the more time something takes, the more sure you want to be that it’s worth it), and with surveys it can be hard to know whether they’re worth doing unless you specifically trust the person running them to do it well. You’d also need a good enough sample of the AI community answering the questions, which will be difficult in itself, because you’ll get more responses from people who have spare time than from those actively working on AI safety, who probably have less time for surveys (that said, I think AI safety researchers would probably appreciate a survey like this as well?).
All this being said, depending on the response rate and how much we can trust the answers’ accuracy, it could be cool to see what we can get out of a survey like this, even if it’s just learning which kinds of questions are more useful to ask than others. We could technically have a survey that only asks people to write down their AGI timeline estimate, and it would still end up several pages long, or at least include a lot of description of different types of AGI and what probabilities people put on those timelines. Points for the effort in tackling such a difficult problem!
Full disclosure: I am in a relationship with the author of the post, so I have my own biases and also some additional knowledge. Mainly, this is why I’m leaving a comment at all: if someone else were doing something like this and most of the comments were on the critical side (which is not a bad thing here, since the criticism is often necessary and important), I would be more likely to default to silence and hope the criticism is received constructively, helping the project launch properly rather than die out. I think it’s important to recognise people for trying to do something difficult, and to support them so that, if what they’re doing could be net positive, they end up doing it properly, unless we have good reason to think that a survey like this is actually clearly bad, in which case the comments should make that clearer.