Just another Bay Area Singularitarian Transhumanist Libertarian Rationalist Polyamor-ish coder & math nerd. My career focuses on competitive governance; personally I’m very into personal development (“Inward & upward”); lately I’ve gotten super into cultivation novels because I want to continuously self-improve until my power has grown to where I can challenge the very heavens to protect humanity.
patrissimo
Love almost all of this. I worry that (3) is making the common rationalist mistake of basing a strategy on the type of person you wish you were rather than the type you are. (Striding toward Unhappiness, we might call it).
So, you wish that your passion for a cause were more strongly correlated with the utilitarian benefit of that cause, so you game the instinct to work on what feels good with small gifts while putting most of your effort toward what you think is optimal. But if the result is working on something you aren’t as passionate and excited about, you may work less effectively, burn out on helping the world, or just be miserable. Your taste for a cause is what it is, not what you want it to be. It matters whether you feel good about what you do.
(4) compensates for this to some degree—you will tend to find reasons to value & love whatever you do, so to some extent you can pick a cause first and fall in love with it later. But this doesn’t always work, and it can result in demotivated team members who demoralize others. A passionate & excited team is a high-performing team.
Anything can be mapped to tropes, but not all tropes are the same. It matters what tropes your life, mission, or organization is mapped to! To skillfully navigate the world (I guess the LW term is “to win”) you must know what tropes are being mapped onto you, and what tropes your brain sees your identity as fitting into. That way you can manipulate others’ perception of you (what stories are they telling about you? How are they telling those stories? Do those stories gain you status and resources?), as well as make sure you aren’t fooling yourself.
“So part of winning is being able to deal with human susceptibility to think in stories.”
Exactly! It is especially relevant if you are trying to grow a following around an idea, which SIAI is. Winning requires wearing your Slytherin hat sometimes, and an effective Slytherin will manipulate the stories that they tell and the stories that are told about them.
Actually, I will comment (for the purposes of authenticity and from the belief that being more transparent about my motivations will increase mutual truth-finding) that while I’m not arguing “against” SIAI, this post is to some degree emerging from me exploring the question of SIAI’s organizational instrumental rationality. I have the impression from a variety of angles/sources that it’s pretty bad. Since I care about SIAI’s success, it’s one of the things I think about in the background—why, and how you could be more effective.
First, I’m not claiming a connection between truth and tropism, but this idea that everything is equally tropish seems wrong. Not everyone has the role of a protagonist fighting for humanity against a great inhuman evil that only they foresee, and struggling to gather allies and resources before time runs out. Yet Eliezer has that role.
Second, even though tropes apply to everyone’s lives to some degree, it matters which tropes they are. For example, someone who sees themselves as a fundamentally misunderstood genius who deserves much more than society has given them is also living a trope—but it’s a very different trope with very different results. Identifying the tropes you are living is useful—it helps in your personal branding, can teach you lessons about strategies for achieving your goal, and may show you pitfalls.
For example, I live a very similar trope set to Eliezer’s, which is why I notice it, and it poses many challenges to being effective, because it’s tempting (as Nick alluded to above) to play the role rather than do the work.
This is so common as to be an adage: “Never attribute to malice that which is adequately explained by stupidity.” (http://en.wikipedia.org/wiki/Hanlon's_razor)
I can see how for your audience, the story-like qualities would be a minus. On the other hand, I think the story bias has to do with how people cognitively process information and arguments. If you can’t tell your mission & strategy as a story, it’s a lot harder to get across your ideas, whatever your audience.
The battle was meant to be metaphorical—the battle to ensure that AI is Friendly rather than Unfriendly. And I didn’t say anything about hostile humans—the problem is indifferent humans not giving you resources.
Also, I’m not arguing against SIAI, I just find it amusing how well the futurist sector maps onto a story outline—various protagonists passionate about fighting some great evil that others don’t see, trying to build alliances and grow resources before time runs out. You can squiggle, but that’s who you are. Instrumental rationality means figuring out how to make the best positive use of it and avoid letting it bias you.
The danger of living a story—Singularity Tropes
Write several pieces of analysis code, ideally in different languages, and check that the results are the same? Even better, have someone else replicate your analysis code. That way you have a somewhat independent source of confirmation.
Also, use practices like tons of unit testing, which minimize the chance of bugs in your code. All this must be done before you see the results, of course.
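For example, here is a minimal sketch of the cross-check idea (the statistic, data, and function names are made up for illustration): compute the same quantity with two independent implementations and assert that they agree, before ever looking at the real results.

```python
# Hypothetical cross-check: verify a hand-rolled statistic against an
# independent implementation (numpy's) before trusting it on real data.
import numpy as np

def variance_by_hand(xs):
    """Population variance written out directly from the definition."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def test_variance_implementations_agree():
    rng = np.random.default_rng(0)
    xs = rng.normal(size=1000)
    # Two independent implementations should agree to within rounding error.
    assert abs(variance_by_hand(xs) - np.var(xs)) < 1e-10
```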
Is this confirmation bias really that bad in practice? Scientists get credit for upsetting the previous consensus, so this may make potentially disruptive research happen slightly less often. But the consensus will still get changed eventually, by someone who either doesn’t question the surprising result, or questions it but thoroughly reviews their code and stands by it. So evidence for change will come slightly less often than it could, but the changes will still be correct. Doesn’t seem like a big deal.
Science got the charge on an electron right, even after Millikan’s mistake.
And this argument has what to do with my personal decision to vote?
My choice does not determine the choices of others who believe like me, unless I’m a lot more popular than I think I am.
After saying voting is irrational, the next step for someone who truly cares about political change is to figure out the maximum political change they can get for their limited resources—what’s the most efficient way to translate time or dollars into change. I believe the returns to various strategies vary by many orders of magnitude.
So ordinary people doing the stupid obvious thing (voting, collecting signatures, etc.) might easily have 1/1000th the impact per unit time of someone who just works an extra 5 hours a week and donates the money to a carefully chosen advocacy organization. If these rationals are > 0.1% of the population, they have the greater impact. And convincing someone to become one of these anti-voting rationals increases their personal impact 1000 times as much as convincing someone to vote does.
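A quick back-of-the-envelope version of that arithmetic (using the illustrative numbers above, which are assumptions, not measurements):

```python
# Back-of-the-envelope check of the claim above, using its own assumed numbers.
ordinary_impact = 1.0        # impact per person doing the obvious thing (arbitrary units)
rational_multiplier = 1000   # assumed efficiency edge from earning and donating instead
rational_fraction = 0.001    # the 0.1% threshold mentioned above

ordinary_total = (1 - rational_fraction) * ordinary_impact
rational_total = rational_fraction * rational_multiplier * ordinary_impact

# Prints 0.999 vs 1.0: at 0.1%, the "rationals" already match everyone else combined.
print(ordinary_total, rational_total)
```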
Wow, so it is accurate for the same reason as The Wire (based on a study of reality), that’s awesome.
This is my worldview as well.
Or just post about it on Facebook without doing it, and thus get all but 1 vote of the benefit with almost none of the cost.
Such a system would appeal to the type of voter who doesn’t vote because he is rationally ignorant and/or calculates it isn’t worth his time (b/c he is not a utilitarian and doesn’t count common benefit as a reason to vote), and who wouldn’t donate to a political party because he knows there are far more efficient ways to transform money into changing the expected future of the world. He can see why the system is an efficiency gain using the same tools with which he sees that voting and donating to political parties are a waste of his time and money.
This creates somewhat of a problem for the proposal, if it is only appreciated by those who can’t benefit from it...
Wow, I expect this kind of naivete from normal people, but not from LWers. This is exactly the sort of bias-influenced human behavior that LW should be teaching you to understand. It’s more Hansonian than Yudkowskian, but still.
Politics is not about policy. Donors are signaling affiliation. No one will use this service.
Vote. Down.
But politics is not about policy. Political donors want to signal their affiliation with their tribes, not spend their money efficiently to change the world. It’s a modern potlatch. Otherwise they’d be giving to something else in the first place—it’s extraordinarily unlikely that giving to the Democrats or Republicans is the maximal way to impact the world, so anyone who is doing it obviously doesn’t have the goal of efficient charity.
I predict that such an organization would capture < 0.5% of donors by # and < 0.1% by $. That’s a pretty small market—a few hundred thousand.
Doesn’t your chance of swaying an election depend on how close it is? If your favored candidate is way ahead or way behind, then changing a few thousand votes doesn’t matter. Whereas charity always has some marginal effect.
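To make the closeness point concrete, here is a toy model (my own illustration, with made-up numbers): if each of a million other voters independently favors candidate A with probability p, the chance that one extra vote breaks an exact tie collapses as soon as p moves away from 0.5.

```python
# Toy pivotality model: probability that n other voters split exactly evenly,
# assuming each votes for candidate A independently with probability p.
# (Assumes n is even; the numbers here are illustrative only.)
from math import lgamma, log, exp

def pivot_probability(n_voters: int, p: float) -> float:
    """P(exact tie among n_voters), computed in log space to avoid overflow."""
    half = n_voters // 2
    log_prob = (lgamma(n_voters + 1) - 2 * lgamma(half + 1)
                + half * log(p) + half * log(1 - p))
    return exp(log_prob)

for p in (0.50, 0.51, 0.55):
    print(f"p = {p:.2f}: P(pivotal) ~ {pivot_probability(1_000_000, p):.3e}")
# p = 0.50 gives roughly 8e-4; p = 0.51 is already ~1e-90; p = 0.55 underflows to 0.
```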
Also, the value of influencing an election depends on the difference between the candidates, which not only may be small but may be difficult to predict, both because of reneging on campaign promises and because of specialization—one candidate may do better in a recession, the other in a war. If you pick the wrong guy, your money has a negative effect. All of these reduce the effect of spending on votes.
So I think there are a lot of reasons why the effects of spending on elections are diminished compared to spending on charity. But I have a lot of reasons to want to think that, so my opinion should be taken as a summary of one side rather than a balanced evaluation :).
I think there is a significant bias to overestimate the impact of who wins the Presidential election on policy. Look at how many of the Bush policies were continued by Obama. Normally that’s used as a condemnation of Obama, but I think it’s much better interpreted as evidence that the guy at the top doesn’t matter that much—whoever wins is subject to almost the same set of pressures from interest groups, constraints based on who has what powers & goals, etc, which has a huge effect on policy.
In the tribe, you saw everyone, so you saw everyone with political influence. In the modern world, you only see a few politicians, and so you assume that’s where the influence is, but you don’t see the millions of unelected bureaucrats, and they also have power.
To build such tools, don’t we need to know what techniques help increase rationality?
I suspect that tools built in the course of a directed rationality practice will be much more useful than those we come up with in advance.
That said, a website specifically structured to share rationality practices, as I discussed in my Shiny Distraction post, would be very useful. I could contribute content, but wouldn’t have time to contribute code.
I think this is missing the primary advice of “work on instrumental rationality.” The art of accomplishing goals is useful for the goal of saving the world—and still useful if you change your goal later! (say, to destroying the world, or moving to a new one :) )
So while this is a great list of ways to be instrumentally rational specifically for philanthropy, I think the general tools of instrumental rationality are useful too (like: have concrete goals, hypothesize how to achieve them, try methods, evaluate them and change based on results, find mentors who have succeeded at what you are trying to do, make sure you talk to people who think differently from you, be conscious about where to spend limited willpower...)