Thanks for writing this!
I remain unconvinced. I agree with most of your points, and I think most of my disagreement stems from modeling my mind, the world, and/or ‘dark techniques’ in a different way than you do. I’d be happy to get together and try to converge sometime.
I do have one direct disagreement with the text, which is somewhat indicative of my more general disagreements.

Your instrumental rationality will always be limited by your epistemic rationality.
In my experience, many rationalists are motivation-limited, not accuracy-limited. I have met many people who are smarter than I am, who think faster than I do, who are better epistemic rationalists than I am—and who suffer greatly from akrasia or other stamina issues.
I seem to be quite good at achieving my goals. I am by no means convinced that this is due to some excess of willpower: my successes could alternatively be attributed to chance, genetics, self-delusion, or other factors. Even conditioned upon the assumption that my ability to avoid akrasia is a large part of my success, I am not convinced that my motivational techniques are the source of this ability.
However, I do see many “light-side” epistemic rationalists suffering from more akrasia than I do. In the real world, I am not convinced that epistemic rationality is enough. As such, I am cautious about removing motivational techniques in the name of the light.
(I also am under the impression that I can use my motivational techniques in such a way as to avoid many of the adverse effects you mention, which gets back to us modeling things differently. This is, of course, exactly what my brain would tell me, and the objection should largely be disregarded until we have a chance to converge.)
There is, of course, some degree to which the above argument only indicates my ability to find self-protecting arguments that I myself find convincing. This topic is somewhat emotionally laden for me, so next time I find a few spare hours I will spend them strongly considering whether I am wrong. However, after cursory examination, I don’t expect any particular update.
Which particular motivation techniques do you use?
There are many. I was particularly referring to the ones I discussed in the dark arts post, to which the above post is a followup.
Okay, I didn’t remember that you were the person who wrote that post.
It seems somewhat absurd to say that your ability to achieve goals is limited by the thingspace cluster we refer to as epistemic rationality. After all, caring too much about epistemic rationality leads to needing things like this.
Epistemic rationality seems like it should be something that you care about when it matters to care about, and don’t care about when it doesn’t matter. Like any other investment, the capital you invest should be proportional to the rate of return you believe you will get. Similarly, you should always be willing to sacrifice some epistemic cleanliness if it means winning. You can clean up the dark nasty corners of your mind on top of your pile of utility.
I think the point Brienne made is that seemingly small tradeoffs of epistemic accuracy for instrumental power actually cost much more than you might expect. You can’t pay a little epistemic accuracy for a lot of instrumental power, because epistemic rationality requires that you leave yourself no outs. If you sanction even one tiny exception, you lose the benefits of purity that you didn’t even know were available.
You’re definitely paying for epistemic rationality with instrumental power if you spend all of your time contemplating metaethics so that you have a pure epistemic notion of what your goals are.
Humans start with effectively zero rationality. At some point, it becomes less winning to spend time gaining epistemic rationality than to spend time effecting your goals.
So, it seems like you can trade potential epistemic rationality for instrumental power by spending your time effecting change rather than becoming epistemically pure.
To respond to some of your later points:
Take a programming language like OCaml. OCaml supports mutable and immutable state, and you could write an analysis over OCaml that would indeed barf and die on the first instance of mutation. Mutable state does make it incredibly difficult, sometimes to the point of impossibility, to use conventional analyses on modern computers to prove facts about programs.
But this doesn’t mean that a single ref cell destroys the benefits of purity in an OCaml program. To a human reading the program (which is really the most important use case), a single ref cell can be the best way to solve a problem, and the language’s module system can easily abstract it away so that the interface as a whole is pure.
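As a rough sketch of what I mean (a toy example of my own, not anything from the post; the module and function names are made up for illustration): memoization is the standard case where a mutable table hides behind an interface that looks pure to the caller, same input, same output every time, even though state is mutated underneath.

```ocaml
(* Toy sketch: a memoizer whose signature exposes no mutation. The hash
   table is the single piece of mutable state, and it never escapes the
   module, so callers can reason about [memoize f] as if it were [f]. *)
module Memo : sig
  val memoize : ('a -> 'b) -> ('a -> 'b)
end = struct
  let memoize f =
    let cache = Hashtbl.create 16 in       (* the hidden mutable state *)
    fun x ->
      match Hashtbl.find_opt cache x with
      | Some y -> y                        (* cache hit: no recomputation *)
      | None ->
          let y = f x in
          Hashtbl.add cache x y;           (* mutation, invisible outside *)
          y
end

let () =
  let square = Memo.memoize (fun n -> n * n) in
  (* The second call returns the cached result; observable behaviour is
     identical to calling the pure function twice. *)
  Printf.printf "%d %d\n" (square 6) (square 6)
```

The signature tells the reader “treat this as a plain function,” which is the sense in which I claim a lone ref cell (or hash table) doesn’t have to poison the whole program.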
Similarly, I agree that it’s important to strive for perfection in all cases. But striving for perfection doesn’t mean taking every available sacrifice for perfection. I can strive for epistemic perfection while still choosing locally not to improve my epistemic state. An AI might have a strict total ordering of terminal goals, but a human never will. So as a human, I can simultaneously strive for epistemic perfection and instrumental usefulness.
In any case, I still think there’s a limit beyond which the return on investment in epistemic rationality diminishes into nothingness, and I think that limit is much closer than most Less Wrongers think, primarily because what matters most isn’t absolute rationality, but relative rationality in your particular social setting. You only need to be more able to win than everyone you compete with; becoming more able to win without actually winning is not only a waste of time, but actively harmful. It’s better to win two battles than to waste time overpreparing. Overfocusing on epistemic rationality ignores the opportunity cost of neglecting to use your arts for something outside themselves.
What is that “purity” you’re talking about? I didn’t realize humans could achieve epistemic perfection.
Keep in mind here that I’m steelmanning someone else’s argument, perhaps improperly. I don’t want to put words in anyone else’s mouth. That said, I used the term ‘purity’ in loose analogy to a ‘pure’ programming language, wherein one exception is sufficient to remove much of the possible gains.
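To make that analogy concrete (a hypothetical sketch of my own, not from anyone’s post): in a pure fragment, “let x = f () in (x, x)” and “(f (), f ())” are interchangeable, and that substitutability is much of where the gains come from. A single side effect in f breaks the equivalence everywhere f might appear.

```ocaml
(* Hypothetical illustration: one impure exception is enough to break the
   equational reasoning that purity buys. *)
let pure_f () = 2 * 21

let counter = ref 0
let impure_f () = incr counter; !counter

let () =
  (* With the pure function, binding once and calling twice agree. *)
  let x = pure_f () in
  assert ((x, x) = (pure_f (), pure_f ()));
  (* With the impure function, the "same" rewrite changes the result,
     so the reader (or compiler) can no longer substitute freely. *)
  let y = impure_f () in
  assert ((y, y) <> (impure_f (), impure_f ()))
```

The point isn’t that the impure version is wrong, only that one exception forfeits a guarantee the reader could otherwise rely on globally.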
Continuing the steelmanning, however, I’d say that while no human can achieve epistemic perfection, there’s a large class of epistemic failures that you only recognize if you’re striving for perfection. Striving for purity, not purity itself, is what gets you the gains.
So8res, you’re completely accurate in your interpretation of my argument. I’m going to read some more of your previous posts before responding much to your first comment here.
Yes, as Eliezer put it somewhat dramatically here:

If you once tell a lie, the truth is ever after your enemy.
To expand on this in context: as long as you are striving for the truth, any evidence you come across helps you, but once you choose to believe a lie, you must forever avoid disconfirming evidence.
You’ve drawn an important distinction, between believing a lie and telling one. Your formulation is correct, but Eliezer’s is wrong.
Telling a lie has its own problems, as I discuss here.
Yes, it’s pretty much impossible to tell a lie without hurting other people, or at least interfering with them; that’s the point of lying, after all. But right now we’re talking about the harm one does to oneself by lying; I submit that there needn’t be any.
One distinction, which may or may not matter but which many discussions fail to mention at all, is the distinction between telling a lie and maintaining it (keeping the secret). Many of the epistemic arguments seem to disappear if you’ve previously made it clear that you might lie, if you intend to tell the truth a few weeks down the line, and if, when pressed or questioned, you confess and tell the actual truth rather than try to cover it with further lies.
Edit: also, have some kind of oath and special circumstance under which you will in fact never lie, but precommit to using it only for important things, or give it a cost in some way, so you won’t be pressed to give it for everything.
Did you even read the comment I linked to? Its whole point was about the harm you do to yourself and your cause by lying.
I think you and fezziwig aren’t disagreeing. You’re saying as an empirical matter that lying can (and maybe often does) harm the liar. He’s just saying that it doesn’t necessarily harm the liar, and indeed it may well be that lies are often a net benefit. These are compatible claims.
You’ve drawn an important distinction, between believing a lie and telling one. Right now we’re talking about lying to ourselves so the difference isn’t very great, but be very careful with that quote in general.
I can already predict, though, that much of my response will include material from here and here.
Could you give some examples?
I am not sure which class you’re talking about… again, can you provide some examples?