Most people don’t have a good idea of the risks of climate change. Instead of understanding the risks, they treat it as a partisan issue where it’s about signaling tribal affiliation. The Obama administration did good work on regulating mercury pollution, largely outside of public debate, and poor work on CO2 pollution.
Climate science as a career path has a status incentive that AI safety does not, whether you’re directly working on technical solutions to the problems or not.
Getting people who care more about status competition into AI safety might harm the field’s ability to stay focused on object-level issues rather than on status competition.
Can’t choose a career in a field you’ve never even heard of.
I think there’s a good chance that people whose intellectual life is such that they won’t hear about AI safety would be net harmful if they got involved in AI safety.
It is sadly the case that many, many people are simply unwilling to read anything longer than a short blog post or article, ever.
While that’s true, why do you believe that those people have something useful to contribute to AI safety on net?
Yeah, I think this gets at a crux for me: I feel intuitively that it would be beneficial for the field if the problem were widely understood to be important. Maybe climate change was a bad example due to being so politically fraught, but then again maybe not; I don’t feel equipped to make a strong empirical argument for whether all that political attention has been net beneficial for the problem. I would predict that issues that get vastly more attention tend to receive many more resources (money, talent, political capital) in a way that’s net positive toward efforts to solve them, but I admit I am not extremely certain about this and would very much like to see more data on that.
To respond to your individual points:
The Obama administration did good work on regulating mercury pollution, largely outside of public debate, and poor work on CO2 pollution.
Good point, though I’d argue there’s much less of a technical hurdle to understanding the risks of mercury pollution than to understanding the risks of future AI.
Getting people who care more about status competition into AI safety might harm the field’s ability to stay focused on object-level issues rather than on status competition.
Certainly there may be some undesirable people who would be 100% focused on status and would not contribute to the object-level problem, but I would also consider those for whom status is only a partial consideration (maybe they are under pressure from family, are a politician, or are a researcher using prestige as a heuristic to decide which fields to even pay attention to before judging them on their object-level merits, etc.). I’d argue that not every valuable researcher or policy advocate has the luxury or strength of character to completely ignore status, and that AI safety being a field that offers some slack in that regard might serve it well.
I think there’s a good chance that people whose intellectual life is such that they won’t hear about AI safety would be net harmful if they got involved in AI safety.
You’re probably right about this; I think the one exception might be children, who tend to have a much narrower view of available fields despite their future potential as researchers. Though I still think there may be people of value among those who have heard of AI safety at some point but did not bother taking a closer look due to its relative obscurity.
While that’s true, why do you believe that those people have something useful to contribute to AI safety on net?
Directly? I don’t. To me, getting them to understand is more about casting a wider net of awareness to get the attention of those who could make useful contributions, as well as alleviating the status concerns mentioned above.