I like arguing with myself, so it is fun to make the best case. But yup, I was going beyond what people might actually argue. I find arguments against naive views less interesting, so I spice them up some.
In Accelerando, the participants in Economy 2.0 had a treacherous turn because they were under the pressure of a sharply competitive, resource-hungry environment. The same could happen if they were EMs, or even AGIs aligned to a subset of humanity, so long as they don't solve their co-ordination problems.
This kind of evolutionary problem has not been talked about for a while (everyone seems focussed on corrigibility etc.), so maybe people have forgotten? I think it is worth making explicit that this is what you need to worry about. But the question then becomes: should we worry about it now, or later, when we have cheaper intelligence and a greater understanding of how intelligences might co-ordinate?
Edit: One might even make the case that we should focus our thought on short-term existential risks, like avoiding nuclear war during the start of AGI, because if we don't pass that test we won't get to worry about superintelligence. And you can't use the cheaper, later intelligence to solve that problem.