My sense is that this “they’ll encourage higher-ups to think what they’re doing is safe” thing is a meme. Misaligned AI, for people like Yann LeCun, is not even a consideration; they think it’s stupid, uninformed fearmongering. We’re not even near the point that Philip Morris is at, where tobacco execs have to plaster their webpage with “beyond tobacco” slogans to feel good about themselves—Demis Hassabis literally does not care, even a little bit, and adding alignment staff will not affect his decision making whatsoever.
But shouldn’t we just ask Rohin Shah?
Even a little bit? Are you sure? https://www.lesswrong.com/posts/ido3qfidfDJbigTEQ/have-you-tried-hiring-people?commentId=wpcLnotG4cG9uynjC