I like this observation. As a random note, I’ve sometimes heard people justify “leave poor working conditions in place for others, rather than spending managerial time improving them” on the grounds that AI risk is an emergency, though whether this checks out even on a local consequentialist level isn’t actually analyzed by the model above, since it partly involves tradeoffs between people, which I didn’t try to get into.
I sorta also think that “people acting on a promise of community and support that they later [find] [isn’t] there” is sometimes done semi-deliberately by the individuals in question, who are trying to get as much work out of their System 1s as possible by hoping a thing works out, without really desiring accurate answers. Or by others who value getting particular work done (via those individuals working hard), think things are urgent, and so are reasoning short-term and locally consequentialist-ly. Again, partly because people are reasoning near an “emergency.” But this claim seems harder to check/verify. I hope people put more time into “really generating community” rather than “causing newcomers to have an expectation of community,” though.
Personally, I think of people as more acting out their dream, because reality seems empty.
Like the cargo cultists, praying to a magical skyplane that will never arrive. Sure, you can argue to them that they’re wasting their time. But they don’t have any other idea about how to get skyplanes, and the world is a lot less… magical without them. So they keep manning their towers and waving their lights.