Tom and Andrew, it seems very implausible that someone saying “I will kill 3^^^^3 people unless X” is literally zero Bayesian evidence that they will kill 3^^^^3 people unless X. Though I guess it could plausibly be weak enough to take much of the force out of the problem.
Nothing could possibly be that weak.
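A sketch of the arithmetic presumably behind that reply, assuming harms add linearly, and writing p for P(the threat is carried out | the claim is made) and c for the disutility of handing over five dollars, measured in the same units as a life (both symbols are shorthand introduced here, not anything from the exchange itself):

```latex
% Ignoring the mugger is safe in expectation only if the expected loss from
% ignoring the threat is no larger than the cost of complying:
\[
  p \cdot \bigl(3\uparrow\uparrow\uparrow\uparrow 3\bigr) \;\le\; c
  \quad\Longleftrightarrow\quad
  p \;\le\; \frac{c}{3\uparrow\uparrow\uparrow\uparrow 3}
\]
```

So for the threat to be safely ignorable, the evidence would have to drive p below something on the order of 1/(3^^^^3), which is the sense in which nothing could possibly be that weak.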
Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same as the prior that it will destroy it.
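A toy sketch of the cancellation being claimed here, under the assumption (not established in the thread) that the two hypotheses have equal-length minimal descriptions; the real Solomonoff prior is uncomputable, and the 120-bit figure below is an invented placeholder:

```python
# Toy sketch of a Solomonoff-style prior (the real thing is uncomputable).
# Each hypothesis is weighted 2^(-K), where K is the bit-length of its
# shortest description.  The bit-lengths here are placeholders; only their
# *equality* matters for the cancellation argument.

hypotheses = {
    "typing QWERTYUIOP destroys the universe": 120,  # assumed description length (bits)
    "typing QWERTYUIOP saves the universe": 120,     # assumed to be the same length
}

weights = {h: 2.0 ** -k for h, k in hypotheses.items()}
total = sum(weights.values())

for h, w in weights.items():
    # Normalised over just these two hypotheses, each gets exactly 0.5,
    # so the "destroy" and "save" terms cancel in an expected-utility sum.
    print(f"{h}: relative prior = {w / total}")
```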
Exactly the same? These are different scenarios. What happens if an AI actually calculates the prior probabilities, using a Solomonoff technique, without any a priori desire that things should exactly cancel out?
Well, let’s think about this mathematically.
In other articles, you have discussed the notion that, in an infinite universe, there exist with probability 1 identical copies of me some 10^(10^29) meters away. You then (correctly, I think) demonstrate the absurdity of declaring that one of them in particular is ‘really you’ and another is a ‘mere copy’.
When you say “3^^^^3 people”, you are presenting me with two separate concepts:

1. Individual entities, each of which is a “person”.
2. A set {S} of these entities, of which there are 3^^^^3 members.
Now, at this point, I have to ask myself: “what is the probability that {S} exists?”
By which I mean: what is the probability that there are 3^^^^3 unique configurations, each of which qualifies as a self-aware, experiencing entity with moral weight, and none of which reduces to an “effective simulation” of another entity already counted in {S}?

Versus: what is the probability that the total cardinality of unique configurations, each qualifying as a self-aware, experiencing entity with moral weight, is < 3^^^^3?
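In symbols, with N standing for the number of distinct configurations that qualify as self-aware, experiencing entities with moral weight (notation introduced here purely for convenience), the comparison being asked for is between two complementary probabilities:

```latex
\[
  P\bigl(N \ge 3\uparrow\uparrow\uparrow\uparrow 3\bigr)
  \quad\text{versus}\quad
  P\bigl(N < 3\uparrow\uparrow\uparrow\uparrow 3\bigr) \;=\; 1 - P\bigl(N \ge 3\uparrow\uparrow\uparrow\uparrow 3\bigr)
\]
```

The mugger's claimed harm enters any expected-utility calculation weighted by the first of these terms.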
Because if we’re going to juggle Bayesian probabilities here, at some point that has to get stuck in the pipe and smoked, too.
Why would an AI consider those two scenarios and no others? Seems more likely it would have to chew over every equivalently-complex hypothesis before coming to any actionable conclusion… at which point it stops being a worrisome, potentially world-destroying AI and becomes a brick, with a progress bar that won’t visibly advance until after the last proton has decayed.
… which doesn’t solve the problem, but at least that AI won’t be giving anyone… five dollars? Your point is valid, but it doesn’t expand on anything.
More generally, I mean that an AI capable of succumbing to this particular problem wouldn’t be able to function in the real world well enough to cause damage.
I’m not sure that was ever a question. :3