Those who aspire to perfection
A short reply to the Book of Eliezer and a comment on the Book of Luke.
No one wants to save the world. You must thoroughly research this. Those who think they truly want to save the world are, in reality, just horribly afraid of the consequences of not saving the world. And that is a world of difference.
Eliezer, you know that ridiculously strong aversion to lost purposes and sphexishness that you have? [1] Sometimes, very rarely, other people have that too. And most often it is a double-negative aversion. I am sure you know as much as very nearly anyone what it feels like to work from the inside of a triple-negative motivation system by default, for fear of being as evil and imperfect as every other human in history, among other less noble fears. You quickly learn to go meta to escape the apparently impossible double binds—if going meta isn’t itself choosing a side—but by constantly moving vertically you never practice pushing to the left or to the right, or choosing which responsibility to sacrifice in the first place. And even if you could, why would you want to be evil?
And for this rare kind of person, telling them to stop obsessing over prudence or to just make marginal contributions immediately gets pattern-matched to that age-old adage: “The solution is easy: just shut up and be evil.” Luckily, it is this kind of person we can make the most use of when it comes to the big crunch time—if we’re not already in it.
[1] We do not yet know how to teach this skill, and no one can be a truly aspiring rationalist without it, even if they can still aspire to perfection. That does mean I believe there are like maybe five truly aspiring rationalists in this community, a larger set of falsely aspiring rationalists, a further much larger set of truly aspiring aspiring “rationalists”, and a further much much larger set of falsely aspiring aspiring “rationalists”. (3, 30, 300, 3000, say.) I don’t think anyone thinks about this nearly enough, because no one has any affordance—no affordance to not not-think about it—especially not when they’re thinking fuzzy happy thoughts about creating aspiring rationalists or becoming a rationalist.
This reads like a personal journal entry. I can’t tell to what degree I’m missing necessary context, to what degree you’re being puckish, and to what degree this just isn’t communication.
It looks like you feel strongly about something going wrong here; if you can, I’d appreciate you taking the time to state it comprehensibly.
It looks to me, rather, like Will Newsome continues to think that he can be cleverer than anyone else on the meta level without really understanding any of the relevant object-level topics. I continue to disagree.
Could you say that in English?
Probably not, but I’ll try to restate the message and motivation:
“I notice that wanting to do something is psychologically very different from aversion to not doing something. I have observed that attraction to saving far-mode people and the like, if taken very seriously, is often the result of the latter. I observe and assert that the type of mind that does this is a disproportionately important mind to influence with “rationalist” or SingInst memes. This is the type of mind that truly groks Eliezer’s aversion to lost purposes. I theorize that this type of mind is sometimes formed by being around an abundance of double binds, though I am unwilling to put forth evidence strongly favoring this hypothesis. I think it is important to make a good impression on that type of mind and to avoid negatively reinforcing the anti-anti-virtuous behaviors associated with it, especially as it is a type of mind that is generally oversensitive to negative reinforcement and could become completely paralyzed. I notice that we specifically do not know how to create the skill of avoiding lost purposes, which also makes it important to avoid negatively influencing those who already happen to have the skill. I have created this post to further the agenda of setting up a culture that doesn’t repel, and perhaps even attracts, this type of mind.
As a related side note, I notice that the skill of avoiding lost purposes is very important and wish to express some distress that no apparent effort has been put into addressing the problem. I assert that most “aspiring rationalists” do not seem to even aspire to attain this fundamental skill of rationality, and thus cannot actually be aspiring to rationality, even if they aspire to aspire to what they think is rationality. I thus implicitly claim that I would be able to tell if they were averse to lost purposes, but am unwilling to give evidence of this. I choose to be deliberately misleading about my confidence in this judgment to provoke interesting people to reply in indignation.”
From a Singularity perspective, the importance of rationality evangelism is way overrated. There is still a tendency to mix up rationality and intelligence, as if becoming more rational will produce radically superior problem-solving skills. But if we’re talking about how to solve a problem like Friendly AI design, then what you need above all are people with high intelligence and relevant knowledge. “Aversion to lost purposes”, whatever that is, might be a trait of talented idealistic personalities who get distressed by dead hopes and organizational dysfunction, but some people learn early that that is normality, and their own progress is all the more streamlined for not fighting these facts of life.
In my opinion, the main source of the morale needed to sustain an effort like FAI research, in the midst of general indifference and incomprehension, is simply a sense among the protagonists that they are capable of solving the problem or of otherwise making a difference, and that derives in turn from a sense of one’s own abilities. If the objective is to solve the most difficult problems, and not just to improve the general quality of problem-solving in society, then rationality evangelism is a rather indiscriminate approach.
Agreed that rationality evangelism might be overrated; the important thing is spreading the Friendliness-might-be-important memes far, and apparently SingInst is using “rationality” as one of its memetic weapons of choice. I personally am not suggesting this memetic strategy is a well-thought-out one. “Aversion to lost purposes” doesn’t at all mean getting distressed because this world isn’t the should-world; it means the thing that Eliezer talks about in his post “Lost Purposes”.
How much effort has been put into teaching an aversion to lost purposes? What has been tried and what have the failures looked like?
Moreover, given what’s being said here, teaching an aversion may be the wrong tack. I suspect it’s more motivating to get strong, positive feedback when your efforts align with your goals. It’s hard to state the positive condition clearly; it’s far easier to point at instances of lost purposes and disapprove than to point at clear goal-oriented behavior and approve. It might be useful to learn, though.
We must thoroughly research this. :j
Exactly, it’s tricky. I don’t know if anyone else will find this funny, but here’s a conversation I had recently:
I recognize this mental state! I don’t know if that’s hilarious or terrifying. :/
This actually got me thinking, though… I’m working on a top level comment now.
Your posts and comments would be much improved by trying to actually communicate, rather than playing around with oblique LW references to produce gibberish not worth the effort of decoding (even for those who understand the underlying ideas), or outright trolling.
I don’t like the whole “Book of Eliezer/Book of Luke” bit. And while I do appreciate the veiled Musashi reference, I think it too detracts from the (very important) message of the post.
Honestly, I’m also not really sure why this is a post rather than a reply or even a PM/email.
Thanks for the straightforward critique! Also, I am surprised that at least someone thinks the message is very important, and I notice that I have more positive affect towards Less Wrong as a result. (The Musashi reference is actually more of a veiled reference to Eliezer’s “Lost Purposes”, the part where he says “(I wish I lived in an era where I could just tell my readers they have to thoroughly research something, without giving insult.)”. That says a lot about my style of communication, I suppose...)
To be frank, anyone who doesn’t understand that the core of rationality is actually being more effective at making correct predictions is not only not gaining rationality from LW but may actually be becoming dangerous by attaining an increased ability to make clever arguments. I haven’t interacted with the community enough to determine whether a significant number of “aspiring rationalists” lack this understanding, but if they do it is absolutely critical that this be rectified.
If you’re saying that not many people (anywhere) are really trying all-out to save the world, or to do any one thing, I agree. I don’t understand your description of (a particular kind of) avoiding self-knowledge. I don’t understand being afraid of not wanting to save the world.
Could you expand on what you mean by the difference between aspiring rationalists and aspiring aspiring “rationalists”? I don’t think we have many people who say things like “I aspire to be an aspiring rationalist”, either truly or falsely.
Perhaps this.
Something like that sounds plausible, but wouldn’t many of those who were trying to try also falsely claim that they were actually trying? If so, what separates these “aspiring aspirings” from the “false aspirings”?
Also this.
They don’t say that, but it’s what they do. (As you noted, this is different from falsely aspiring, which is not doing much of anything.) That’s my assertion-without-evidence, anyway.
Let me try again: Group 1 understands the Void and chases it, Group 2 understands the Void and doesn’t chase it, Group 3 doesn’t understand the Void but chases something vaguely like it, Group 4 doesn’t understand the Void and doesn’t chase something vaguely like it. “Understand” is intentionally vague here and doesn’t imply a full understanding, but I assert that there’s a somewhat sharp and meaningful discontinuity.
What’s the Void? Maybe I don’t know about it.
It’s the twelfth virtue.
Thanks. So it’s a good thing. Something like empiricism or truth and being purposeful and honest in meeting your goals rather than getting distracted.
Why is it called ‘the Void’?
When I read the Virtues a long time ago, I thought it meant the concept of ‘quality’ I read about in Zen and the Art of Motorcycle Maintenance. But now I don’t suppose such a Platonic concept could possibly have been intended.
This comment moved here.
Hm. I guess I should make this a discussion post, if I want anyone to read it… :/
Does that mean people for whom rationalism is a near-terminal goal that cannot become a lost purpose? Do you use “rationalism” somewhat like the way Charlie Sheen might use “winning”, as “rationalism” is often used here?
If yes and no, then to what end rationalism?
If yes and yes, then you value that for someone who, in relation to various things, wants them, that they have as a cherished thing achieving their own wants? Is people having such a nearly-terminal goal a correspondingly deep value of yours, or is it more instrumental? Either way, is coming to value that one of the smaller changes I could make to turn my values towards consistency (and how much more change than coming to consciously value it would I have to do if it is not emergent from my existing values)? If so, at what level would I be valuing that, presumably the same as you do, no? It isn’t enough to have a passing devotion to wanting that, that which I want, I should get it?
If this is unclear or badly off-target, let it indicate the magnitude of my confusion as to what you meant.
This comes to mind.
Eliezer obviously wouldn’t be telling them to shut up and be evil; he’d be intending to tell that to the person he’d infer he was talking to, and if this “rare person” couldn’t learn about Eliezer’s actual intent by inferring the message Eliezer had intended to communicate to whoever Eliezer ought to have thought he was talking to, the person would be rarely dense.
So that part of Eliezer’s message is not flawed, and I’m not sure why you thought it needed addressing.
This assumes I’m reading this post correctly, something I’m not confident of.
Maybe in some way, but not in the way you’re interpreting it… I emphasize the importance of noticing lost purposes, which is central to both epistemic and instrumental rationality. Elsewhere in this thread I rewrote the post without the cool links, if you’re interested in figuring out what I originally meant. I apologize for the vagueness.
As for your second critique, I’m not claiming that Eliezer’s message is particularly flawed, just suggesting an improvement over current norms of which Eliezer’s original message could be taken as partially representative, even if it makes perfect sense in context. That is, Eliezer’s message isn’t really important to the point of the post and can be ignored.
The very first factor in the very first chapter of The Art of War is about the importance of synchronous goals between agents and those they represent. It is instrumental in preserving the state. It is also instrumental in preserving the state (sic).
Even so,
A metaphor.
The iron is hot, some feel fear.
You aren’t, though.
You’re expressing belief in a possible downside of current practice. We can say, unconditionally and flatly, that it is a downside, if real, without it being right to minimize that downside. To your credit, you also argue that effects on the average influenced person are less valuable than is generally thought, which, if true, would be a step towards indicating that a change in policy would be good.
But beyond that, you don’t articulate what would be a superior policy, and you have a lot of intermediary conclusions to establish to make a robust criticism.
Correct, I was imprecise. I’m listing a downside and listing nonobvious considerations that make it more of a downside than might be assumed.
(Apparently posts get moved to Discussion if they get downvoted enough. Cool.)