That doesn’t seem like the consensus view to me. It might be the consensus view among LessWrong contributors. But in the AI-related tech industry and in academia it seems like very few people think AI friendliness is an important problem, or that there is any effective way to research it.
Most researchers I know seem to think strong AI (of the type that could actually result in an intelligence explosion) is a long way away and thus it’s premature to think about friendliness now (imagine if they tried to devise rules to regulate the internet in 1950). I don’t know if that’s a correct viewpoint or not.
I believe this thread is about LessWrong specifically.
Yes, I was referring to LessWrong, not AI researchers in general.
But wouldn’t it be awesome if we came up with an effective way to research it?