His view on AI alignment risk is infuriatingly simplistic. To just call certain doomsday scenarios objectively “false” is a level of epistemic arrogance that borders on obscene.
I feel like he could at least acknowledge that such scenarios are possible and express a need to invest in avoiding them, instead of negating the entire argument outright.
Good that you mention it and did NOT get downvoted. Yet. I have noticed that we are in the midst of an “AI-washing” campaign, which is also going on here on LessWrong. But it’s like asking a star NFL quarterback whether football should be banned because of the risk of serious brain injuries: of course he will answer no. The big tech companies pour trillions of dollars into AI, so of course they make sure that everyone is “aligned” to their vision, and they will try to remove any and all obstacles when it comes to public opinion. Repeat after me:
“AI will not make humans redundant.”
“AI is not an existential risk.”
...