Responding to several people at once: Some people consider an AI singleton to be an inevitable outcome after the Singularity; others believe there are only two possible outcomes, a singleton or incessant war. I want to find out whether there are stable non-singleton states in some model of post-singularity conflict, and, if so, what assumptions are needed to produce them. I expect only the qualitative, not the quantitative, results to be useful. So I'm only trying to model details that would introduce new kinds of behaviors, stable states, and transitions, not ones that would merely make the system more accurate.