Moving on from that, I think we might need a broad base of public support (depending upon the scenario), so being able to explain how people could still have meaningful lives post-AI is important for building that support. I've been thinking about that.
This sounds like it would be useful for getting people to support the development of AGI, rather than effective global regulation of AGI. What am I missing?
For longer time frames where there might be visible development, the public needs to trust the political regulators of AI to have their interests at heart. Otherwise they may try to make it a party-political issue, which I think would be terrible for sane global regulation.
I've come across pretty strong emotions when talking about AGI, even in the context of safety, and I suspect these will come bubbling to the fore more as time goes by.
It may also help the morale of the thoughtful people trying to build safe AI.