If you code the AI wrong, these copies may wage war among themselves and lose utility as a result. I got really freaked out when Wei pointed out that possibility, but now it seems quite obvious to me.
Excuse me for being dense, but how would these AIs go about waging war on each other if they are in causally distinct universes? I’m sure there’s some clever way, but I can’t see what it is.
I don’t understand precisely enough what “causally distinct” means, but anyway the AIs don’t have to be causally distinct. If our universe is spatially infinite (which currently seems likely, but not certain), it contains infinitely many copies of you and any AIs that you build. If you code the AI wrong (e.g. using the assumption that it’s alone and must fend for itself), its copies will eventually start fighting for territory.
Isn’t it much more likely to encounter many other, non-copy AIs prior to meeting itself?
If you code the AI wrong, it can end up fighting these non-copy AIs too, even though they may be similar enough to ours to make acausal cooperation possible.
Unless they’re far enough apart, and inflation is strong enough, that their future light-cones never intersect. I thought you were going to talk about them using resources on acausal blackmail instead.
Also, I was traveling in May, so I just discovered this post. Have your thoughts changed since then?
Nope, I didn’t get any new ideas since May. :-(
Causally distinct isn’t a technical term, I just made it up on the spot. Basically, I was imagining the different AIs as existing in different Everett branches, Tegmark universes, hypothetical scenarios, or something like that. I hadn’t considered the possibility of multiple AIs in the same universe.