Apparently I don’t understand what you mean by “serious risk”. (Before I pick this apart, by the way, I agree that we should try not to Godwin people—because I think it doesn’t work.)
I consider it likely that AGI will take a long time to develop. A rational species would likely figure out the flaw and take corrective steps before then. But look around you. Nearly all of us seem to agree, judging by what our actions reveal we want, that we should try to prevent an asteroid strike that might destroy humanity. As far as I can tell we haven't started yet. No doubt you can think of other examples: the evidence says that if we put off FAI theory 'until we need it', we could easily keep putting it off well past that point.