You can literally have a bunch of engineers and researchers believe that their company is contributing to AI extinction risk, yet still go with the flow.
They might even believe they’re improving things at the margin. Or they have doubts, but all their colleagues seem to be carrying on as usual.
In this sense, we’re dealing with the problem of a corporate command structure that takes in the loyal and persuades them to do useful work (useful, that is, in the eyes of a leadership obsessed with power and social recognition).
Someone shared the joke: “Remember the Milgram experiment, where they found out that everybody but us would press the button?”
My response: Right! Expect AGI lab employees to follow instructions, because of:
- deference to authority
- incremental worsening (the boiling-frog problem)
- peer proof (“everyone else is doing it”)
- escalation of commitment