Taboo compelling and restate. If compelling does not mean persuasive, then what does it mean to you? Also taboo “committed” and “rational”—I think there’s a namespace conflict between your use of rational and the common Less Wrong usage, so restate using different terms. As a hint, try to imagine what a universally compelling argument would look like. What properties does it have? How do different minds react to understanding it, assuming they are capable of doing so? For bonus points, explain what it means to be rationally committed to something (without using those words or synonyms).
Also worth noting: P1 is a generalization over statements about minds, not minds.
Well, we have two options in tabooing ‘compelling’. On the one hand, we could mean ‘persuasive’ where this means something like ‘If I sat down with someone, and presented the moral argument to them, they would end up accepting it regardless of their starting view’. This seems to be a bad option, because the claim that ‘there are no universally persuasive moral arguments’ is trivial. No one (of significance) has ever held the contrary view.
So our other option is to take ‘compelling’ as something like what Kantians say about the CI, namely that every mind is committed to it, whether they accept this or not (‘not’ out of irrationality). As you say, this leaves us with a lot more tabooing and explaining to do. I’m happy to go on with this, since it’s the sort of thing I enjoy, but it is a digression from my (perhaps confused) complaint about EY’s argument. The important bit there is just that ‘compelling’ probably shouldn’t be taken in such a way as to make EY’s point trivial.
The problem here is that the second option you offer does nothing to explain what a compelling argument is; it just passes the recursive buck onto the word “committed”. I know you said you recognize that, but unless we can show that this line of reasoning is coherent (let alone that it leads to a relevant conclusion, let alone a correct one), there’s no reason to assume that Eliezer’s point isn’t trivial in the end. Philosophers have believed a lot of silly things, after all. The only sensible resolution I can come up with is to take “committed to X” to mean “would, on reflection and given sufficient (accurate) information and a great deal more intelligence, believe X”. The problem is that this is still trivially false over the entirety of mindspace. You might, although I doubt it, be able to establish a statement of that form over all humans (I think Eliezer disagrees with me on the likelihood here). You could certainly not establish one given a mindspace that includes both humans and paperclip maximizers.
I know you said you recognize that, but unless we can show that this line of reasoning is coherent (let alone that it leads to a relevant conclusion, let alone a correct one), there’s no reason to assume that Eliezer’s point isn’t trivial in the end.
If what you’re saying is this, then we agree: EY doesn’t here present an argument that UCMAs are likely to be false, but he does successfully argue that a certain class of generalizations over mind-space are likely to be false (such as generalizations about what minds will find persuasive), together with the assumption that a UCMA will fall into that class.
If that’s the line, then I think the argument is sound so far as it goes. UCMA enthusiasts (I am not among them, but I know them well) will not accept the final assumption, but you may be right that the burden is on them to show that UCMAs (whatever ‘compelling’ is supposed to mean) do not fall into this class.
Alternatively, we could just posit that we’re only arguing against those people who do accept the assumption, that is, those people who take ‘compelling’ in UCMA to mean something like ‘immediately persuasive’, but then we’re probably tilting at windmills.
I suspect that our beliefs are close enough to each other at this point that any perceived differences are as likely to be due to minor linguistic quibbles as to actual disagreement. Which is to say, I wouldn’t have phrased it like you did (had I said it with that phrasing I would disagree) but I think that our maps are closer than our wording would suggest.
If anyone who thinks they have a coherent definition for UCMA that does not involve persuasiveness (subject to the above taboos) wants to chime in, I’d love to hear it. Otherwise, I think the thread has reached its (happy) conclusion.
If anyone who thinks they have a coherent definition for UCMA that does not involve persuasiveness (subject to the above taboos) wants to chime in, I’d love to hear it.
I’ll give it a shot: an argument is universally compelling if no mind both a) has reasons to reject it, and b) has coherent beliefs. This is to say that a mind can only believe that the argument is false by believing a contradiction.
I’ll give it a shot: an argument is universally compelling if no mind both a) has reasons to reject it, and b) has coherent beliefs. This is to say that a mind can only believe that the argument is false by believing a contradiction.
I think this may sound stronger than it actually is, for the same reasons that you can’t convince an arbitrary mind who does not accept modus ponens that it is true.
More to the point, recall that one rationalist’s modus tollens is another’s modus ponens. This definition is defeated by any mind who possesses a strong prior that the given UCMA is false, and is willing to accept any and all consequences of that fact as true (even if doing so contradicts mathematical logic, Occam’s Razor, Bayes, or anything else we take for granted). This prior is a reason to reject the argument (every decision to accept or reject a conclusion can be reduced to a choice of priors), and since it is willing to abandon all beliefs which contradict its rejection it will not hold any contradictory beliefs. It’s worth noting that “contradiction” is a notion from formal logic which not all minds need to hold as true; this definition technically imposes a very strong restriction on the space of all minds which have to be persuaded. The law of non-contradiction (~(A ^ ~A)) is a UCMA by definition under that requirement, even though I don’t hold that belief with certainty.
The arbitrary choice of priors, even for rational minds, actually appears to defeat any UCMA definition that does not beg the question. Of course, it is also true that any coherent definition begs the question one way or another (by defining which minds have to be persuaded such that it either demands certain arguments be accepted by all, or such that it does not). Now that I think about it, that’s the whole problem with the notion from the start. You have to define which minds have to be persuaded somewhere between a tape recorder shouting “2 + 2 = 5!” for eternity and including only your brain’s algorithm. And where you draw that line determines exactly which arguments, if any, are UCMAs.
And if you don’t have to persuade any minds, then I hesitate to permit you to call your argument “universally compelling” in any context where I can act to prevent it.
Might we say something like:
the argument “A implies B” is not universally compelling unless every rational agent must accept that “P(B | A) > P(B | !A)”?
More colloquially, one property of universally compelling evidence might be that all rational agents must agree on the direction in which a particular piece of evidence should adjust a particular prior.
You’re just passing the recursive buck over to “rational”. Taboo rational, and see what you get out; I suspect it will be something along the lines of “minds that shift their beliefs in the right direction in every case”, which, notably, doesn’t include humans even if you assume that there is an objectively decidable “rational” direction. There is no objective method for determining the correct direction to shift in any given case; imagine an agent with anti-Occamian priors, who believes that because the coin has come up heads 100 times in a row, it must be more likely to come up tails next time. It’s all a question of priors.
I think there is an objectively right direction to shift, given particular priors. Your anti-regularity observer seems to be making a mistake by becoming more confident if he actually sees heads come up next.
Also, I edited my post above to fix a notational error.
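To make the disagreement concrete, here is a minimal Python sketch (the likelihood numbers are made up for illustration) of the point that, once a likelihood model is fixed, Bayes’ rule pins down the direction of the shift for every prior. On this reading, the anti-Occamian observer differs in his model of the coin, not in his update rule.

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule for a binary hypothesis."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# Hypothesis: "the coin is biased toward heads" (illustrative numbers).
# P(heads | biased) = 0.9, P(heads | fair) = 0.5.
for prior in (0.01, 0.5, 0.99):
    post = posterior(prior, 0.9, 0.5)
    print(f"prior={prior:.2f} -> posterior={post:.3f} ({'up' if post > prior else 'down'})")

# With these likelihoods, seeing heads shifts *every* prior in (0, 1)
# upward: the direction is fixed by the likelihood ratio, not by the prior
# over the hypothesis. The "anti-Occamian" observer who expects tails after
# 100 heads is, in these terms, an agent whose likelihood model assigns
# P(heads | long run of heads) < 0.5 -- a different model of the coin,
# not a different update rule.
```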
This prior is a reason to reject the argument (every decision to accept or reject a conclusion can be reduced to a choice of priors), and since it is willing to abandon all beliefs which contradict its rejection it will not hold any contradictory beliefs.
You’re right that I am committed to denying this, though I would also point out that it does not follow a priori that it is always possible to resolve the state of having contradictory beliefs by rejecting either side of a contradiction arbitrarily. However, in order to deny the above, I must claim that there are some beliefs a mind holds (or is committed to, where this means that these beliefs are deductively provable from what the mind does believe) just in virtue of being a mind. I’ll bite that bullet, and claim that there exists a UCMA of this kind. I also think the Law of Non-Contradiction is a UCA, and in fact it’s trivially so on my definition, but I think that’ll hold up: there are no Bayesian reasons to think that ascribing it a probability of 1 is a problem, and I do think I can defend the claim that evidence against it is a priori impossible (EY’s example reasons for doubt in the two articles you cite wouldn’t apply in this case).
You have to define which minds have to be persuaded somewhere between a tape recorder shouting “2 + 2 = 5!” for eternity and including only your brain’s algorithm. And where you draw that line determines exactly which arguments, if any, are UCMAs.
This isn’t a problem on my definition of a UCA. My understanding of a UCA (which I think represents an honest-to-god position, namely Kant’s) is consistent with any given mind believing the UCA to be false, perhaps because of reasons like the tape-recorder. Only, such a mind couldn’t have consistent beliefs.
And if you don’t have to persuade any minds, then I hesitate to permit you to call your argument “universally compelling” in any context where I can act to prevent it.
Remember that my definition of a UCMA isn’t ‘any mind under any circumstances could always be persuaded’. To attack this view of UCMAs is, I think, to attack a strawman. If we must take UCMAs to be arguments which are universally and actually persuasive for any mind in any circumstance in order to see EY’s point (here or elsewhere) as valid, then this is a serious critique of EY.
Be very, very cautious assigning probability 1 to the proposition that you even understand what the Law of Contradiction means. How confident are you that logic works like you think it works; that you’re not just spouting gibberish even though it seems from the inside to make sense? If you’d just had a major concussion, with severe but temporary brain damage, would you notice? Are you sure? After such damage you might claim that “if bananas then clocks” was true with certainty 1, and feel from the inside like you were making sense. Don’t just dismiss minds you can’t empathize with (meaning minds which you can’t model by tweaking simple parameters of your self-model) as not having subjective experiences that look, to them, exactly like yours do to you. You already know you’re running on corrupted hardware; you can’t be perfectly confident that it’s not malfunctioning, and if you don’t know that then you can’t assign probability 1 to anything (on pain of being unable to update later).
Again, though, you’ve defined the subspace of minds which have to be persuaded in a way which defines precisely which statements are UCAs. If you can draw useful inferences on that set of statements then go for it, but I don’t think you can. Particularly worth noting is that there’s no way any “should” statement can be a UCA because I can have any preferences I want and still fit the definition, but “should” statements always engage with preferences.
How confident are you that logic works like you think it works; that you’re not just spouting gibberish even though it seems from the inside to make sense?
I’m not even 90% sure of that, but I am entirely certain that the LNC is true: suppose I were to come across evidence to the effect that the LNC is false. In the case where the LNC is false, the evidence against it is also evidence for it. In fact, if the LNC is false, the LNC is provable, since anything is provable from a contradiction. So if it’s true, it’s true, and if it’s false, it’s true. So it’s true. This isn’t entirely uncontroversial; there is Graham Priest, after all.
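To pin down the two formal steps this argument leans on, here is a minimal sketch in Lean (assuming classical propositional logic is the intended setting): the LNC itself, and the “anything is provable from a contradiction” step (ex falso quodlibet, the principle of explosion).

```lean
-- The law of non-contradiction: no proposition holds together with its negation.
theorem lnc (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1

-- Ex falso quodlibet (the principle of explosion): from a contradiction,
-- anything follows -- the step the argument above relies on.
theorem explosion (A B : Prop) (h : A ∧ ¬A) : B :=
  absurd h.1 h.2
```

Note that explosion is exactly what Priest’s paraconsistent logics reject, which is why a dialetheist can resist the argument without thereby accepting everything.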
Particularly worth noting is that there’s no way any “should” statement can be a UCA because I can have any preferences I want and still fit the definition, but “should” statements always engage with preferences.
I’ll channel Kant here, because he’s the best UCMAist I know. He would say that almost all ‘should’ statements involve preferences, but not all. Most ‘should’ statements are hypothetical: if you want X, do Y. But one, he says, isn’t; it’s categorical: do Y. But there’s nothing about ‘should’ statements which a priori requires the input of preferences. It just happens that most of them (all but one, in fact) do.
Now, Kant actually doesn’t think the UCMA is UC for every mind in mind-space, though he does think it’s UC for every mind capable of action. This is just to say that moral arguments are themselves only applicable to a subset of minds in mind-space, namely (what he calls) finite minds. But that’s a pretty acceptable qualification, since it still means the UCMA is UC for everything to which morality is relevant.
I’m not even 90% sure of that, but I am entirely certain that the LNC is true: suppose I were to come across evidence to the effect that the LNC is false. In the case where the LNC is false, the evidence against it is also evidence for it. In fact, if the LNC is false, the LNC is provable, since anything is provable from a contradiction. So if it’s true, it’s true, and if it’s false, it’s true. So it’s true. This isn’t entirely uncontroversial; there is Graham Priest, after all.
You say you’re not positive that you know how logic works, and then you go on to make an argument using logic for how you’re certain about one specific logical proposition. If you’re just confused and wrong, full stop, about how logic works, then you can’t be sure of any specific piece of logic; you may just have an incomplete or outright flawed understanding. It’s unlikely, but not certain.
Also, you seem unduly concerned with pointing out that your arguments are not new. It’s not anti-productive, but neither is it particularly productive. Don’t take this as a criticism or argument, more of an observation that you might find relevant (or not).
The Categorical Imperative, in particular, is nonsense, in at least two ways. First, I don’t follow it, and have no incentive to do so. It basically says “always cooperate on the prisoner’s dilemma,” which is a terrible strategy (I want to cooperate iff my opponent will cooperate iff I cooperate). It’s hardly universally compelling, since it carries neither a carrot nor a stick which could entice me to follow it. Second, an arbitrary agent need not care what other minds do. I could, easily, prefer that a) I maximize paperclips but b) all other agents maximize magnets. These are not instrumental goals; my real and salient terminal preferences are over the algorithms implemented, not the outcomes (in this case). I should break the CI since what I want to do and what I want others to do are different.
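To make the “terrible strategy” claim concrete, here is a minimal Python sketch with the standard prisoner’s-dilemma payoffs (the strategy names are made up for illustration), comparing unconditional cooperation against the conditional policy the parenthetical describes.

```python
# Standard PD payoffs for (my_move, their_move), from my point of view:
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(opponent_will_cooperate: bool) -> str:
    # Cooperate no matter what -- the commenter's reading of the CI.
    return "C"

def conditional(opponent_will_cooperate: bool) -> str:
    # "Cooperate iff my opponent will cooperate (iff I cooperate)."
    return "C" if opponent_will_cooperate else "D"

for strategy in (always_cooperate, conditional):
    vs_cooperator = PAYOFF[(strategy(True), "C")]
    vs_defector = PAYOFF[(strategy(False), "D")]
    print(strategy.__name__, vs_cooperator, vs_defector)

# always_cooperate scores 3 against a cooperator but 0 against a defector;
# conditional scores 3 against a cooperator and 1 against a defector.
# Conditional cooperation never does worse, which is the sense in which
# unconditional cooperation is being called a terrible strategy.
```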
Also, “should” statements are always descriptive, never prescriptive (as a consequence of what “should” means). You can’t propose a useful argument of the sort that says I should do X as a prescription. Rather, you have to say that my preferences imply that I would prefer to do X. “Should” is a description of preferences. What would it even mean to say that I should do X, but that it wouldn’t make me happier or fulfill any other of my preferences, and that I in fact will not do it? The word becomes entirely useless except as an invective.
I don’t really want to go into extreme detail on the issues with Kantian ethics; I’m relatively familiar with it after a friend of mine wrote a high school thesis on Kant, but it’s full of elementary mistakes. If you still think it’s got legs to stand on, I recommend reading some more of the sequences. Note that human morality is written nowhere except in our brains. I’m tapping out, I think.
Okay, fair enough. You’ve indulged me quite a ways with the whole UCMA thing, and we finished our discussion of EY’s sequence argument, so thanks for the discussion. I’ve spent some years studying Kant’s ethical theory though, so (largely for my own enjoyment) I’d like to address some of your criticisms of the CI in case curiosity provokes you to read on. If not, again, thanks.
I don’t really want to go into extreme detail on the issues with Kantian ethics; I’m relatively familiar with it after a friend of mine wrote a high school thesis on Kant, but it’s full of elementary mistakes.
This conclusion should set off alarm bells: if I told you I’d found a bunch of elementary mistakes in the sequences, having never read them but having discussed them with an acquaintance, you would bid me caution.
First, I don’t follow it, and have no incentive to do so.
The issue of incentive is one that Kant really struggles with, and much of his writing on ethics following the publication of the Groundwork for the Metaphysics of Morals (where the CI is introduced) is concerned with this problem. So while on the one hand, you’re correct to think that this is a problem for Kant, it’s also a problem he spent a lot of time thinking about himself. I just can’t do it any justice here, but very roughly Kant thinks that in order to rationally pursue happiness, you have to pursue happiness in such a way that you are deserving of it, and only by being morally good can you deserve happiness. This sounds very unconvincing as read, but Kant’s view on this is both sophisticated and shifting. I don’t know that he felt he ever had a great solution, and he died writing a book on the importance of our sense of aesthetics and its relation to morality.
It basically says “always cooperate on the prisoner’s dilemma,”
The CI is not a decision theory, nor is a decision theory a moral theory. It’s important not to confuse the two. If you gave Kant the prisoner’s dilemma, he would tell you to always defect, because you should always be honest. You would be annoyed, because he’s mucking around with irrelevant features of the setup, and he would point out to you that the CI is a moral theory and that the details of the setup matter. The CI says nothing consistent or interesting about the prisoner’s dilemma, nor should it.
I could, easily, prefer that a) I maximize paperclips but b) all other agents maximize magnets.
You could, and that’s how preferences work. So there could be no universal hypothetical imperative. But the categorical imperative doesn’t involve reference to preferences. If you take yourself to have a reason to X, which makes no reference to preferences (terminal or otherwise), you at the same time take any arbitrary reasoner to have a reason to X. Suppose, for comparison, that a set of minds in mind-space happened to (against whatever odds) have exactly the same evidence as you for a proposition. You couldn’t coherently believe that you had reason to believe the proposition, but that they did not. Reasons don’t differentiate between reasoners that way.
You may think imperatives always make reference to preferences, but this is an argument you’d have to have with Kant. It’s not a priori obvious or anything, so it’s not enough to state it and say ‘Kant is wrong’.
I should break the CI since what I want to do and what I want others to do are different.
The CI is not the claim that everyone should do what you want to do. The CI is the demand (essentially) that you act on reasons. The structure of reasons (like the fact that reasons don’t discriminate between reasoners) gives you the whole ‘universal’ bit.