Be very, very cautious assigning probability 1 to the proposition that you even understand what the Law of Contradiction means. How confident are you that logic works like you think it works; that you’re not just spouting gibberish even though it seems from the inside to make sense? If you’d just had a major concussion, with severe but temporary brain damage, would you notice? Are you sure? After such damage you might claim that “if bananas then clocks” was true with certainty 1, and feel from the inside like you were making sense. Don’t just dismiss minds you can’t empathize with (meaning minds which you can’t model by tweaking simple parameters of your self-model) as not having subjective experiences that look, to them, exactly like yours do to you. You already know you’re running on corrupted hardware; you can’t be perfectly confident that it’s not malfunctioning, and if you don’t know that then you can’t assign probability 1 to anything (on pain of being unable to update later).
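To make that last parenthetical concrete, here is the standard Bayesian calculation (my gloss, not part of the original comment) showing why a credence of exactly 1 can never move: for any evidence $E$ with $P(E \mid H) > 0$,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \neg H) \cdot 0} = 1.$$

Once you set $P(H) = 1$, no observation, however damning, can lower it; the update rule simply has nothing to work with.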
Again, though, you’ve defined the subspace of minds which have to be persuaded in a way which defines precisely which statements are UCAs. If you can draw useful inferences on that set of statements then go for it, but I don’t think you can. Particularly worth noting is that there’s no way any “should” statement can be a UCA because I can have any preferences I want and still fit the definition, but “should” statements always engage with preferences.
How confident are you that logic works like you think it works; that you’re not just spouting gibberish even though it seems from the inside to make sense?
I’m not even 90% sure of that, but I am entirely certain that the LNC is true: suppose I were to come across evidence to the effect that the LNC is false. But in the case where the LNC is false, the evidence against it is also evidence for it. In fact, if the LNC is false, the LNC is provable, since anything is provable from a contradiction. So if it’s true, it’s true, and if it’s false, it’s true. So it’s true. This isn’t entirely uncontroversial; there is Graham Priest, after all.
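A minimal sketch of the explosion step being appealed to here, written out in Lean 4 (the example statements are mine, purely illustrative): from a contradiction about P, anything follows, including the LNC for P itself.

```lean
-- Ex falso / explosion: a contradiction proves any proposition Q.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2

-- In particular, a contradiction proves the law of non-contradiction for P.
example (P : Prop) (h : P ∧ ¬P) : ¬(P ∧ ¬P) :=
  absurd h.1 h.2
```

This is of course the classical/intuitionistic story; a paraconsistent logician like Priest rejects exactly this inference.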
Particularly worth noting is that there’s no way any “should” statement can be a UCA because I can have any preferences I want and still fit the definition, but “should” statements always engage with preferences.
I’ll channel Kant here, because he’s the best UCMAist I know. He would say that almost all ‘should’ statements involve preferences, but not all. Most ‘should’ statements are hypothetical: If you want X, do Y. But one, he says, isn’t; it’s categorical: Do Y. But there’s nothing about ‘should’ statements which a priori requires the input of preferences. It just happens that most of them (all but one, in fact) do.
Now, Kant actually doesn’t think the UCMA is UC for every mind in mind-space, though he does think it’s UC for every mind capable of action. This is just to say that moral arguments are themselves only applicable to a subset of minds in mind-space, namely (what he calls) finite minds. But that’s a pretty acceptable qualification, since it still means the UCMA is UC for everything to which morality is relevant.
I’m not even 90% sure of that, but I am entirely certain that the LNC is true: suppose I were to come across evidence to the effect that the LNC is false. But in the case where the LNC is false, the evidence against it is also evidence for it. In fact, if the LNC is false, the LNC is provable, since anything is provable from a contradiction. So if its true, it’s true, and if it’s false, it’s true. So it’s true. This isn’t entirely uncontroversial, there is Graham Priest after all.
You say you’re not positive that you know how logic works, and then you go on to make an argument using logic for how you’re certain about one specific logical proposition. If you’re just confused and wrong, full stop, about how logic works then you can’t be sure of any specific piece of logic; you may just have an incomplete or outright flawed understanding. It’s unlikely, but not certain.
Also, you seem unduly concerned with pointing out that your arguments are not new. It’s not counterproductive, but neither is it particularly productive. Don’t take this as a criticism or argument, more of an observation that you might find relevant (or not).
The Categorical Imperative, in particular, is nonsense, in at least two ways. First, I don’t follow it, and have no incentive to do so. It basically says “always cooperate on the prisoner’s dilemma,” which is a terrible strategy (I want to cooperate iff my opponent will cooperate iff I cooperate). It’s hardly universally compelling, since it carries neither a carrot nor a stick which could entice me to follow it. Second, an arbitrary agent need not care what other minds do. I could, easily, prefer that a) I maximize paperclips but b) all other agents maximize magnets. These are not instrumental goals; my real and salient terminal preferences are over the algorithms implemented, not the outcomes (in this case). I should break the CI since what I want to do and what I want others to do are different.
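A quick sketch of the strategic point in the parenthesis, using standard illustrative prisoner’s-dilemma payoffs (the numbers and helper names below are mine, not from the discussion): unconditional cooperation gets exploited by a defector, while cooperating only when the opponent’s cooperation is conditional on yours does not.

```python
# One-shot prisoner's dilemma with standard ordinal payoffs (illustrative values).
# Payoff to me given (my_move, their_move).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def always_cooperate(their_move_if_i_cooperate, their_move_if_i_defect):
    """'Always cooperate', regardless of how the opponent responds."""
    return "C"

def conditional_cooperate(their_move_if_i_cooperate, their_move_if_i_defect):
    """Cooperate iff the opponent cooperates iff I cooperate."""
    if their_move_if_i_cooperate == "C" and their_move_if_i_defect == "D":
        return "C"
    return "D"

# Against an unconditional defector:
print(PAYOFF[(always_cooperate("D", "D"), "D")])       # 0 (exploited)
print(PAYOFF[(conditional_cooperate("D", "D"), "D")])  # 1 (mutual defection)

# Against a mirror that cooperates exactly when I do:
print(PAYOFF[(always_cooperate("C", "D"), "C")])       # 3 (mutual cooperation)
print(PAYOFF[(conditional_cooperate("C", "D"), "C")])  # 3 (mutual cooperation)
```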
Also, “should” statements are always descriptive, never prescriptive (as a consequence of what “should” means). You can’t propose a useful argument of the sort that says I should do X as a prescription. Rather, you have to say that my preferences imply that I would prefer to do X. “Should” is a description of preferences. What would it even mean to say that I should do X, but that it wouldn’t make me happier or fulfill any other of my preferences, and that I in fact will not do it? The word becomes entirely useless except as invective.
I don’t really want to go into extreme detail on the issues with Kantian ethics; I’m relatively familiar with it after a friend of mine wrote a high school thesis on Kant, but it’s full of elementary mistakes. If you still think it’s got legs to stand on, I recommend reading some more of the sequences. Note that human morality is written nowhere except in our brains. I’m tapping out, I think.
Okay, fair enough. You’ve indulged me quite a ways with the whole UCMA thing, and we finished our discussion of EY’s sequence argument, so thanks for the discussion. I’ve spent some years studying Kant’s ethical theory though, so (largely for my own enjoyment) I’d like to address some of your criticisms of the CI in case curiosity provokes you to read on. If not, again, thanks.
I don’t really want to go into extreme detail on the issues with Kantian ethics; I’m relatively familiar with it after a friend of mine wrote a high school thesis on Kant, but it’s full of elementary mistakes.
This conclusion should set off alarm bells: if I told you I’d found a bunch of elementary mistakes in the sequences, having never read them but having discussed them with an acquaintance, you would bid me caution.
First, I don’t follow it, and have no incentive to do so.
The issue of incentive is one that Kant really struggles with, and much of his writing on ethics following the publication of the Groundwork for the Metaphysics of Morals (where the CI is introduced) is concerned with this problem. So while on the one hand, you’re correct to think that this is a problem for Kant, it’s also a problem he spent a lot of time thinking about himself. I just can’t do it any justice here, but very roughly Kant thinks that in order to rationally pursue happiness, you have to pursue happiness in such a way that you are deserving of it, and only by being morally good can you deserve happiness. This sounds very unconvincing as read, but Kant’s view on this is both sophisticated and shifting. I don’t know that he felt he ever had a great solution, and he died writing a book on the importance of our sense of aesthetics and its relation to morality.
It basically says “always cooperate on the prisoner’s dilemma,”
The CI is not a decision theory, nor is a decision theory a moral theory. It’s important not to confuse the two. If you gave Kant the prisoner’s dilemma, he would tell you to always defect, because you should always be honest. You would be annoyed, because he’s mucking around with irrelevant features of the setup, and he would point out to you that the CI is a moral theory and that the details of the setup matter. The CI says nothing consistent or interesting about the prisoner’s dilemma, nor should it.
I could, easily, prefer that a) I maximize paperclips but b) all other agents maximize magnets.
You could, and that’s how preferences work. So there could be no universal hypothetical imperative. But the categorical imperative doesn’t involve reference to preferences. If you take yourself to have a reason to X, which makes no reference to preferences (terminal or otherwise), you at the same time take any arbitrary reasoner to have a reason to X. Suppose, for comparison, that a set of minds in mind-space happened to (against whatever odds) have exactly the same evidence as you for a proposition. You couldn’t coherently believe that you had reason to believe the proposition, but that they did not. Reasons don’t differentiate between reasoners that way.
You may think imperatives always make reference to preferences, but this is an argument you’d have to have with Kant. It’s not a priori obvious or anything, so it’s not enough to state it and say ‘Kant is wrong’.
I should break the CI since what I want to do and what I want others to do are different.
The CI is not the claim that everyone should do what you want to do. The CI is the demand (essentially) that you act on reasons. The structure of reasons (like the fact that reasons don’t discriminate between reasoners) gives you the whole ‘universal’ bit.