When I was thinking about past discussions, I realized something like:
(selfish) gene → meme → goal.
When Bostrom thinks about the probability of a singleton, I am afraid he overlooks the possibility of running several 'personalities' on one substrate. (We could suppose that several teams have the possibility to run their projects on the same hardware, much as several teams can use the Hubble telescope to observe different objects.)
And not only the possibility, but probably also the necessity.
If we want to prevent a destructive goal from being realized (and destroying our world), then we have to think about multipolarity.
We need to analyze how slightly different goals could control each other.
I’ll coin the term Monolithic Multipolarity for what I think you mean here: one stable structure that has different modes activated at different times, where these modes don’t share goals, like a human, especially a schizophrenic one.
The problem with Monolithic Multipolarity is that it is fragile. In humans, whatever causes us to behave differently and want different things at different times is not accessible for revision; otherwise, each party would have an incentive to steal the other’s time. An AI would not have to deal with such a constraint, since, by the definition of explosive recursive self-improvement, it can rewrite itself.
We need other people, but Bostrom doesn’t leave simple things like that out easily.
One mode could have the goal of being something like the graphite moderator in a nuclear reactor: preventing an unmanaged explosion.
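As a rough illustration of that idea (a minimal sketch only, with all names and mechanisms hypothetical rather than anything proposed in this discussion), one could imagine several modes sharing one substrate, where each mode's proposed action runs only if the other modes, including a dedicated moderator mode, do not veto it:

```python
# Toy sketch of "modes with slightly different goals controlling each other".
# All names here are hypothetical; this is not a design from the discussion.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Mode:
    name: str
    propose: Callable[[], str]        # the action this mode wants to take next
    approves: Callable[[str], bool]   # whether it lets another mode's action run


def step(modes: List[Mode]) -> List[str]:
    """Run one round: a proposed action executes only if every other mode approves it."""
    executed = []
    for proposer in modes:
        action = proposer.propose()
        if all(m.approves(action) for m in modes if m is not proposer):
            executed.append(f"{proposer.name}: {action}")
    return executed


# An "optimizer" mode that wants to expand, and a "moderator" mode whose only
# goal is to block anything that looks like uncontrolled acquisition.
optimizer = Mode("optimizer", lambda: "acquire more hardware", lambda a: True)
moderator = Mode("moderator", lambda: "audit recent actions",
                 lambda a: "acquire" not in a)

print(step([optimizer, moderator]))  # the optimizer's proposal is vetoed
```

The point of the sketch is only that mutual veto between modes can damp runaway behavior without any single mode holding all the power.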
At this moment, I just wanted to improve our view of the probability of there being only one SI in the starting period.