Updated, thank you. We were unsure whether it was best to keep it vague so as not to bias the responses, but it is true that some justification is needed and that providing the explanation is a net positive.
Thanks. I’ve filled out the form, to reciprocate your efforts in response to my comment :)
I think the form could’ve been better; here are a few ways.
The question about occupation had a really strange set of answers, with separate choices for “software engineering” and “industry” and three choices for AI work; it felt pretty odd and I wasn’t sure which answer was right.
The question about the number of “books / articles / blogs / forum posts / documentaries / movies / podcasts” I have consumed was quite odd. Firstly, that question mostly amounts to “do you read LW + EA Forum a lot or not”, because forum posts are the only item on that list that can plausibly reach 200+; nobody has read that many books or documentaries on the topic because there aren’t that many. Secondly, I have no idea when I switched from having read “51-100” to having read “101-200”. I just put 201+ for all.
As a first pass I would probably have instead asked “how many hours have you spent engaging with content on this topic” (e.g. reading books / blog posts / having conversations about it, etc.). Or, if you want more detail about the content, I’d have asked different questions about different kinds of content (“how much time have you spent reading LW + EAF”, “how much time have you spent arguing about the topic with other people”, etc.). But that would have been more questions, and length is bad.
For the question about the probabilities of different things, the answer options are “<1% 10% 25% 50% 75% 90% >99% Pass”. Obviously most numbers are not there. I would’ve made each column a bucket, or else said “pick the nearest one”.
It felt a bit “forced” to “rank” the AI issues from most to least important, given that the actual importance for me was something like 1000, 10, 10, 0.1, 0.1, 0.1, 0.1, for which 1, 2, 3, 4, 5, 6, 7 is a bit misleading.
Some of the other questions also landed a bit oddly to me, but this list is long enough.
I think I’d encourage future people who want their forms answered to make their forms better than this one. I’d like LW users to be able to expect the forms they’re asked to fill out to be of a basic quality. User testing is a key element: I always get 1-3 people in my target audience to fill out a form in its entirety before I publish it to a group. I suspect that wasn’t done here, or else I think a lot of users would have said the questions were a bit strange. But maybe it was.
I want to clearly say that the impulse here to get data and test your beliefs is pretty great, and I hope more people make surveys about such questions in the future.
Thank you for the valuable feedback!
We did indeed user test with 3-4 people within our target group and changed the survey quite a bit in response to their feedback. I do believe the “basic quality” bar to be met, but I appreciate your reciprocation either way.
Response to your feedback, in order:
1) The career options are multiple-choice, i.e. you may associate yourself with computer science and academia but not industry, which is valuable information in this case, and it means we need fewer combinations of occupational features.
2) From our perspective, a count is easier to estimate, while time estimates (both of the future and of the past) are generally less accurate. In a similar vein, it’s fine that LessWrong blog posts each count as one, given their inherently more in-depth nature.
3) I’ll add the “pick the nearest one”.
4) The ranking was expected to feel like that, which is in accordance with our priors on the question.
There are reasons for most of the other questions’ features as well.
And thank you, I believe the same. Compared to how data-driven we believe ourselves to be, there is very little data from within the community, as far as I can see, and we should work harder to positively encourage more action in this vein.
Cool that you did user testing! I’ll leave this thread here.
After the fact, I realize that I might have replied too defensively to your feedback as a result of the tone, so sorry for that!
I sincerely do thank you for all the points, and we have updated the survey based on this feedback, i.e. 1) career is now a short text answer, 2) knowledge level is now “which of these learning tasks have you completed”, and 4) they are now independent rankings.