It seems very unlikely to me that a UCMA enthusiast would grant that a UCMA has in any given case only a fifty percent chance of being UC. So to assume this begs the question against them. It may be that the UCMAist is being silly here, or that the burden is on them to show that things are otherwise, but that’s not relevant to the question of the strength of EY’s argument against UCMAs.
No no no. The point of the argument is that it doesn’t matter what the probability is. Even if it’s not 50%, the dynamics at work still make us end up with an exponentially small probability that something is universally compelling, just with the raw math.
The burden is on the UCMAist to show that there are structural reasons why minds must necessarily have certain properties that also happen to coincide with the ability to receive, understand, and be convinced by arguments, and also coincide with the specific pattern where at least one specific argument will result in the same understanding and the same resulting conviction for all possible minds.
Both of these are a priori extremely unlikely due to certain intuitions about physics and algorithms and due to the mathematical argument Eliezer makes, respectively.
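To be concrete about what I mean by “the raw math”: here is a toy version of the calculation (my own sketch, under an independence assumption the UCMAist would presumably reject, not a quote of EY’s post). Suppose each relevantly distinct mind $m$ in mind-space $M$ has some probability $p_m$ of being compelled by a given argument $A$, with every $p_m \le p < 1$. Then

\[ \Pr[\,A \text{ compels every mind in } M\,] \;=\; \prod_{m \in M} p_m \;\le\; p^{|M|}, \]

which shrinks exponentially as $|M|$ grows, no matter how close to 1 the individual $p_m$ are. The only way to escape this is to reject the setup entirely, i.e. to claim that for some argument every $p_m$ is exactly 1 as a matter of structural necessity, which is precisely the claim I said the burden falls on.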
namely moral arguments which every possible mind is committed to accepting (whether or not they do accept it) is possible.
I’d require clarification on what is meant by “committed to accepting” here. Either they accept the argument and change their beliefs, or they do not accept it and do not change their beliefs. And in either case, they do this either in all situations or only in some: they may sometimes accept it and sometimes not.
The Kant formulation you give seems explicitly about humans, only humans and exclusively humans and nothing else. The whole point of EY’s argument against UCMAs is that there are no universally compelling arguments you could make to an AI built in a manner completely alien to humans that would convince the AI that it is wrong to burn your baby and use its carbon atoms to build more paperclips, even if the AI is fully sentient and capable of producing art and writing philosophy papers about consciousness and universally-compelling moral arguments.
There are other things I’d say are just wrong about the way this description models minds, but I think that for now I’ll stop here until I’ve read some actual Kant or something.
The point of the argument is that it doesn’t matter what the probability is.
Right, but I can’t imagine a UCMAist thinking this is a matter of probability. That is, the UCMAist will insist that this is a necessary feature of minds. The burden may be on them, but that’s not EY’s argument (it’s not an argument against UCMAs at all). And I took EY to be giving an argument to the effect that UCMAs are false, or at least unlikely. You may be right that EY has successfully argued that if one has no good reasons to believe a UCMA exists, the probability of one existing must be assessed as low. But this isn’t a premise the UCMAist will grant, so I don’t know what work that point could do.
The Kant formulation you give seems explicitly about humans, only humans and exclusively humans and nothing else.
You might be able to argue that, but that’s not the way Kant sees it. Kant is explicit that this applies to all minds in mind-space (he kind of discovered the idea of mind-space, I think). As to what ‘committed to accepting’ means, you’re right that this needs a lot of working out, working out I haven’t done. Roughly, I mean that one could not have reasons for denying the UCMA while having consistent beliefs. Kant has to argue that it is structural to all possible minds to be unable to entertain an explicit contradiction, but that’s at least a relatively plausible generalization. Still, a tall order.
On the whole, I entirely agree with you that a) the burden is on the UCMAist, b) this burden has not been satisfied here or maybe anywhere. I just wanted to raise a concern about EY’s argument in this post, to the effect that it either begs the question against the UCMAist or is invalid (depending on how it’s interpreted). The shortcomings of the UCMAist aren’t strictly relevant to the (alleged) shortcomings of EY’s anti-UCMAist argument.