I think the relevant implication from the thought experiment is that thinking a bunch about metaethics and so on will in practice change your values; the pill itself is not very realistic, but thinking can make people smarter and will cause value changes. I would agree Land is overconfident (I think orthogonal and diagonal are both wrong models).
I think the relevant implication from the thought experiment is that thinking a bunch about metaethics and so on will in practice change your values
I don’t think that’s necessarily true. For example, some people think about metaethics and decide that anti-realism is correct and that they should just keep their current values. I think that’s overconfident, but it does show that we don’t know whether correct thinking about metaethics necessarily leads one to change one’s values. (Under some other metaethical possibilities the same is also true.)
Also, even if it is possible to steelman Land in a way that eliminates the flaws in his argument, I’d rather spend my time reading philosophers who are more careful and do more thinking (or are better at it) before confidently declaring a conclusion. I do appreciate you giving an overview of his ideas, as it’s good to be familiar with that part of the current philosophical landscape (apparently Land is a fairly prominent philosopher with an extensive Wikipedia page).
I’m trying to understand where the source of disagreement lies, since I don’t really see much “overconfidence”; i.e., I don’t see much of a probabilistic claim at all. Let me know if one of these suggestions points somewhere close to the right direction:
The texts cited were mostly a response to the putative inevitability of orthogonalism. Once that was (I think effectively) dispatched, one might consider that part of the argument closed.
After that, one could excuse him for being less rigorous and having more fun with the rest; the goal there was not to debate but to allow the reader to experience what something akin to will-to-think would be like (I’m aware this is frowned upon in some circles);
The crux of the matter, imo, is not that thinking a lot about metaethics changes your values. Rather, it is that an increase in intelligence does; namely, it changes them in the direction of greater appreciation for complexity and desire for thinking, and this change takes forms unintelligible to those one rung below. Of course, here the argument is either inductive/empirical or kinda neoplatonic. I will spare you the latter version, but the former would look something like:
- Imagine a fairly uncontroversial intelligence-sorted line-up, going:
thermostat → mosquito → rat(🐭) → chimp → median human → rat(Ω)
- Notice how intelligence grows together with the desire for more complexity, with curiosity, and ultimately with the drive towards increasing intelligence per se; and notice also how morality evolves to accommodate those drives (one really wouldn’t want those on the left of wherever one stands to impose their moral code on those on the right).
While I agree these sorts of arguments don’t cut it for a typical post-analytical, LessWrong-type debate, I still think that, at the very least, Occam’s razor should slash strongly in their favor, unless there’s some implicit counterargument I missed.
(As for the opportunity cost of deepening your familiarity with the subject matter, you might be right. The style of philosophy Land adopts is very different from the one appreciated around here (it is indeed often a target for snark), and while I think there’s much of interest on that side of the continental split, the effort required to overcome the aesthetic shift, weighted by the chance of such a shift completing, might still not make it worth it.)