But notice how, against the backdrop of “everyone thinks their job is hard,” this statement does very little to distinguish between worlds where this actually is a crisis and worlds where things will be fine!
It sounds like you have a model that “person works in a job” causes “person believes job is hard” regardless of what the job is, but the causality can go the other way: if I thought AI safety were trivial, I wouldn’t be working on trying to make it safe.
On this model, you don’t observe this argument because everyone is biased towards thinking their job is hard: you observe it because people formed opinions some other way and then self-selected into the jobs they thought were impactful / nontrivial.
In practice, it will be a combination of both. For this discussion in particular, I’d lean more towards the selection explanation than the bias explanation.