A few of the answers seem really high. I wonder if anyone interpreted the questions as asking for P(loss of value | insufficient alignment research) and P(loss of value | misalignment) despite Note B.
I know at least one person who works on long-term AI risk who I am confident really does assign this high a probability to the questions as asked. I don’t know if this person responded to the survey, but still, I expect that the people who gave those answers really did mean them.