“The EA consensus is roughly that being blunt about AI risks in the broader public would cause social havoc.”
I find this odd and patronising to the general public. Why would this not also apply to climate change? Climate change is also a not-initially-obvious threat, yet the bulk of the public now has a reasonable understanding of it, and that understanding has driven a lot of change.
Or would nuclear weapons be a better analogy? There, at least, public understanding of nuclear weapons brought gravity to the conversation. Or could part of the reason to avoid public awareness be to avoid bearing the weight of that kind of responsibility on our consciences? If the public is clueless, we appear proactive. If the public is knowledgeable, we appear unprepared and the field of AI reckless, which we are and it is.
Also, LessWrong is a public forum; Eliezer’s “Death with Dignity” post was definitely newsworthy, for example. Is it even accurate to suggest that we have significant control over the spread of these ideas in the public consciousness? There is so little attention on the topic at the moment, and we don’t control the sorting functions of these media platforms.
“I find this odd and patronising to the general public. Why would this not also apply to climate change? Climate change is also a not-initially-obvious threat, yet the bulk of the public now has a reasonable understanding of it, and that understanding has driven a lot of change.”
One of the specific worries is that climate change is precisely an example of something that got politicized, and now… half of politicians (at least in the US) sort of “have” to be opposed to doing anything about it, because that’s what The Other Team made into their talking point.
I see, that’s a great point, thanks for your response. It does seem realistic that it would become political, and it’s clear that a co-ordinated response is needed.
On that note, I think it’s a mistake to neglect that our epistemic infrastructure optimises for profit, which is an obvious misalignment already. Facebook and Google are already optimising for profit at the expense of civil discourse; they are already misaligned and causing harm. Focusing only on the singularity allows tech companies to become even more harmful, with the vague promise that they’ll play nice once they are about to create superintelligence.
Both are clearly important, and the control problem specifically deserves a tonne of dedicated resources, but in addition it would be good to put some effort into getting approximate alignment now, or at least something better than profit maximisation. This obviously wouldn’t make progress on the control problem itself, but it might help society move to a state where such progress is more likely.