would any of the big names in tech have a chance at being elected president of the US?
No.
As far as maximizing altruistic impact goes, would it be a good idea for them to become president? … Do these people care about maximizing altruistic impact?
The impact of what? The whole of US policy? That’s an unrealistic goal. Besides, I don’t know if any of them is particularly altruistic. They have so much money that giving away large (in absolute terms) chunks of it has zero marginal impact on their lives, but that’s a different thing.
I also find it… ironic that Bill Gates is missing from your list.
What other “sane” people have enough reputation in the public eye to have a chance at acquiring a lot of political power?
You’re thinking technocracy, and that’s not necessarily a good idea. What you want above all in a political leader is that his value system be aligned to yours. If it is not, the fact that he is effective at reaching his goals becomes a threat, not a benefit.
It is also the case that the US political process is set up to filter away the sane people. Would anyone sane really want a team of competent and malicious lawyers and investigators to go through his entire life with a fine-toothed comb looking for any dirt (or for what can be made to look like it)?
P.S. You should distinguish between actually running for the Presidency and “let’s pretend I’m running for President because it will be fun and I’m an attention whore anyway”.
Gates is the most credible “high tech” presidential candidate I can think of. How do the EA folks rate his charitable efforts?
You’re thinking technocracy, and that’s not necessarily a good idea. What you want above all in a political leader is that his value system be aligned to yours. If it is not, the fact that he is effective at reaching his goals becomes a threat, not a benefit.
Only if the goals are actually opposed to yours.
Anyway, I think most politicians’ goals are vaguely similar in certain respects—almost all think that economic growth is good, for instance.
Nope, because for many resources the game is zero-sum.
Anyway, I think most politicians’ goals are vaguely similar in certain respects
So, taking a look at the twentieth century, for example, do you think that the value systems of politicians (or, by extension, political elites) can be safely ignored? 8-0
No, I don’t think that value systems can be ignored; I’m saying that ability to implement might be more important.
For instance, suppose you highly value environmentalism, but the party that puts environmentalism as its #1 priority wants to stop nuclear power (as is typical of environmentalists). If you believe that nuclear power is the best clean, reliable option we currently have the technology for, then you might vote for a party that has environmentalism lower down its list of priorities (and no one wants the environment to be polluted) but has greater expertise.
I’m saying that ability to implement might be more important.
Depends on the degree of mismatch we are talking about, but generally speaking, no, I still think that similar values are MUCH more important than the capability to execute.
Your example, by the way, is not about expertise; it’s about a value mismatch (you highly value nuclear power and the Green party highly disvalues it).
I recommend these posts:
http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/
http://lesswrong.com/lw/le/lost_purposes/
I am aware of these posts. Can you be more direct?
I expect there are particular factual statements about the world that pro-nuclear environmentalists and anti-nuclear environmentalists would disagree about. (E.g. “When all things are said and done, the expected effect of more nuclear power on the environment is (positive|negative).”) If this is true, it seems to me that being pro- or anti-nuclear is probably an instrumental goal, not a terminal one.
Could be, or could not be—the original example is quite bare-bones and we can read different interpretations into it. But in any case that seems irrelevant: we are not talking about the difference between instrumental and terminal goals; we are talking about the choice between two agents/proxies one of which has a closer value system and the other is more effective at achieving his goals.
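A toy sketch may make that tradeoff concrete. Everything in it is assumed for illustration (the linear form, the numbers, and the names capability and alignment are mine, not anything stated above), but it captures why effectiveness amplifies misalignment just as readily as it amplifies alignment.

```python
# Toy model, purely illustrative: score a leader by the expected benefit to you.
# "alignment" is in [-1, 1]; negative means their goals work against yours.
# "capability" is in [0, 1]; how reliably they turn their goals into outcomes.

def value_to_you(capability: float, alignment: float) -> float:
    """Expected benefit under the assumed linear model: capability scales alignment."""
    return capability * alignment

# A capable but opposed leader does more damage than an inept one,
# while a less capable but well-aligned one is still a net benefit.
print(round(value_to_you(capability=0.9, alignment=-0.3), 2))  # -0.27: effective and opposed, a threat
print(round(value_to_you(capability=0.2, alignment=-0.3), 2))  # -0.06: ineffective and opposed, limited damage
print(round(value_to_you(capability=0.5, alignment=0.8), 2))   # 0.4: aligned, even if less capable, a benefit
```

On this model the sign of the alignment term dominates: raising capability only helps when alignment is already positive, which is the sense in which an effective but misaligned leader is a threat rather than a benefit.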
we are talking about the choice between two agents/proxies one of which has a closer value system and the other is more effective at achieving his goals.
If the two environmentalists had a debate about this subject, each could start the debate by saying they want to do whatever is best for the environment. And then each could present a series of facts suggesting that nuclear power either is or is not good for the environment—a factual disagreement about what the right instrumental goal is for achieving the terminal goal of helping the environment.
If you think anti-nuclear environmentalists hold the absence of nuclear plants as a terminal value, imagine what would happen if one were convinced of the factual belief that nuclear power is actually good for the environment. If your model is correct, they would continue to be anti-nuclear environmentalists, because that’s their terminal goal (while acknowledging that nuclear power is actually the best option for the environment). But we have counterexamples like Stewart Brand, who switched from anti-nuclear to pro-nuclear after doing research and having his beliefs change.
Actual human beings’ goals don’t divide neatly into instrumental and terminal, and actual humans can be inconsistent. So you can have someone who has instrumental goals (that can be changed with evidence showing that they don’t meet a terminal goal), terminal goals (which cannot), and in-between goals like nuclear power that are harder to change than the former category, but easier to change than the latter.
in-between goals like nuclear power that are harder to change than the former category, but easier to change than the latter
Yep, this is kinda one of the things LW specializes in—helping people become better at changing their minds regarding things they are stubbornly wrong about.
I agree that human beings’ goals don’t divide neatly into instrumental and terminal. This is just a model we use. I think Lumifer is using the model in a way that’s harmful—labeling stubborn incorrect beliefs as “terminal goals” amounts to throwing up your hands and saying it’s impossible to help people become better at changing their minds. Based on what I’ve seen, this isn’t the case—although it’s difficult, it is possible to help people become better at changing their minds, and accomplishing this is highly valuable.
If the two environmentalists had a debate about this subject
This is not what this subthread is about. It started with me saying
What you want above all in a political leader is that his value system be aligned to yours. If it is not, the fact that he is effective at reaching his goals becomes a threat, not a benefit.
and skeptical_lurker pointing out that
Only if the goals are actually opposed to yours.
and me continuing with
I still think that similar values are MUCH more important than the capability to execute.
I don’t see how trying to tease apart terminal and instrumental goals is relevant to this issue. I also think that in practice many theoretically-instrumental goals are, in fact, terminal. Stewart Brand changed his mind, but a great many more people didn’t, and I am willing to argue that for at least some and probably many of them the opposition to nuclear power effectively became a terminal goal (along the lines of “when you forget your goal, you redouble your efforts”).
So there are two models I can have of politicians who advocate policies different from mine. The first is that we have different terminal goals—even though our models of the world are quite similar, in the sense that we agree about which policies would create which outcomes, we differ on which outcomes we prefer to create. The second is that we have different beliefs—for example, you think raising the minimum wage would be beneficial on net for the working class, whereas I think it’s likely to increase unemployment.
These two models suggest different strategies for people who have political disagreements. The first model suggests all-out war: take down the people who have different values from you at any cost through rhetoric, dirty tactics, etc. The second model suggests trying to improve your rationality and their rationality so your beliefs are less stubborn, you can see the world more accurately, and you can better achieve your collective values.
I also think that in practice many theoretically-instrumental goals are, in fact, terminal.
I don’t think this is the right model for something like an incorrect belief that nuclear power is bad. I think it’s more accurate to say that someone has a visceral disgust for nuclear power, or all their friends think nuclear power is bad, or whatever. Labeling incorrect beliefs as terminal goals basically makes them into black boxes where investigating how the incorrect belief formed is a waste of time. The advantage of investigating how the incorrect belief formed is that we can learn how to prevent incorrect beliefs from forming in ourselves and others. That’s basically the project of this site.
I agree with hg00: I meant that environmentalism is a goal and nuclear power is a means to an end, not a value in itself.
In which sense is environmentalism a goal? I tend to think of it as a religion, but let’s be charitable and call it a set of (often inconsistent) preferences. For example, some people prefer not to live near a nuclear plant. How is it a goal?
Well, to a large extent you are right and much of environmentalism is quasi-religious. However, being charitable, consider the specific goal of maintaining an environment suitable for human habitation. Having a government that knows whether nuclear or coal power is more dangerous is very important here.
I also find it… ironic that Bill Gates is missing from your list.
I thought about him, but it seems “too obvious”(?). Like I’d think that it’s sort of clear that he has a solid chance at running and winning if he wanted to. But he doesn’t, so I take that as evidence that he doesn’t want to. Although I didn’t think much about it, and it very well may be bad reasoning.
it’s sort of clear that he has a solid chance at running and winning if he wanted to.
Really? Have you asked any “regular” people—cashiers in a Walmart, car mechanics, secretaries—whether they think Bill “Why isn’t my computer doing what I want?” Gates would make a good POTUS…?
But the ironic part actually has to do with Gates demonstrating much more altruism than the other names on your list.
I think you’re right; my previous comment was just my original thought process.