Occasionally we run surveys of ML people. Would it be worth asking them what their personal fire alarm would be, or what they are confident will not be achieved in the next N years? This would push them to commit to a position, which might create some useful cognitive dissonance later if reality diverges from it, and would also let us follow up with them.
Apparently an LW user did a series of interviews with AI researchers in 2011, some of which included a similar question. I know most LW users have probably seen this, but I only found it today and thought it was worth flagging here.
If you ask about what would constitute a fire alarm for AGI, it might be useful to also ask how much advance warning the thing they come up with would give.
I know I’m going to start using the "what are you confident won’t happen in the next 2 years" question myself.