So, it seems like most people here are really smart. And a lot of us, I’m betting, were identified as smart when we were children and got complimented on it a lot.
I guess I am the big exception here. I completely failed at school. I once took an IQ test administered by the local employment agency and received a score so low that the official didn’t want to tell me exactly how I scored, so as not to discourage me (he said it was well below average, but that I shouldn’t worry about it and should just try my best).
Just a few days ago I finished reading my first non-fiction book, at age 27 (which will hopefully be the first of hundreds to come).
Okay, here is the first stupid question: I once read that some people don’t vote because they believe that their influence on the outcome isn’t worth the time it takes to vote (deciding whom to vote for, etc.). Other reasons include the perceived inability to judge which candidate would be better. That line of reasoning seems even more relevant when it comes to existential risk charities. Not only might your impact turn out to be negligible, but it also seems even harder to judge which charity is best. Do people who contribute money to existential risk charities also vote in presidential elections?
Second stupid question: There is a lot of talk about ethics on Less Wrong. I still don’t understand why people talk about ethics rather than just about what they want. Whatever morality is or is not, shouldn’t it be implied by what we want and the laws of thought?
Third stupid question: I still don’t get how expected utility maximization doesn’t lead to the destruction of complex values. Even if your utility function is complex, some goals will yield more utility than others without hitting diminishing marginal returns. Bodily sensations like happiness, for example, don’t seem to run into diminishing returns. I don’t see how you can avoid world-states where wireheading is the favored outcome. Anecdotal evidence for this is the behavior of people on Less Wrong with respect to saving the world, a goal that currently outweighs most other human values because of its enormous expected utility. I don’t see why the same wouldn’t hold for wireheading, or for other narrow goals like leaving the universe or hacking the matrix, at the expense of satisfying all human values, including things like signaling games or procrastination. If you are willing to contribute your money to an existential risk charity now, even given the low probability of its success, then why wouldn’t you do the same after the singularity: contribute the computational resources you would otherwise use until the end of the universe to the FAI, so that it can figure out how to create a pocket universe or travel back in time and gain more resources to support many more human beings?
Whatever morality is or is not, shouldn’t it be implied by what we want and the laws of thought?
This is basically the EY/lukeprog school of thought on metaethics, isn’t it? Your preferences, delicately extrapolated to better match the laws of logic, probability theory and (advanced) decision theory, are the ideal form of what reductionists mean when they talk about morality.
Now, not everyone on LW agrees with this contention, which is why ethics is a perennial topic of discussion here.
This is basically the EY/lukeprog school of thought on metaethics, isn’t it?
If so, I’ve overestimated EY’s agreement with my take on it. I see both the preferences of extrapolated-me and actual-me as effects of partly common causes, some (in my case) or all (in his case) reflecting my good. What extrapolated-me seeks is not good because he seeks it, but because (for example) it promotes deep personal relationships, or fun, or autonomy. These are the not-so-strange attractors (dumb question: does chaos theory literally apply here?) that explain the evolution of my values with increasing knowledge and experience.
I think I remember EY saying something along the same lines, so maybe we don’t differ.
This sounds exactly like what EY believes. Even the language is similar, which is nontrivial due to the difficulty of expressing this idea clearly in standard English. Did you start believing this after reading the metaethics sequence?
No, but maybe we were inspired by some of the same sources. I think it was David Zimmerman’s dissertation which got me started thinking along these lines.
Well, the concepts get messy, but I think we’re speaking of the same thing. It’s the bit of data in volition-space to which my current brain is a sort of pointer, but as it happens there are a lot of criteria that correspond to it; it’s not a random point in volition-space, and most other human brains point to fairly similar bits, etc.
I once read that some people don’t vote because they believe that their influence on the outcome isn’t worth the time it takes to vote (deciding whom to vote for, etc.). Other reasons include the perceived inability to judge which candidate would be better. That line of reasoning seems even more relevant when it comes to existential risk charities. Not only might your impact turn out to be negligible, but it also seems even harder to judge which charity is best. Do people who contribute money to existential risk charities also vote in presidential elections?
The obvious difference between voting in an election and giving money to the best charity is that voting is zero-sum. If you vote for Candidate A and it turns out that Candidate B was a better candidate (by your standards, whatever they are), then your vote actually had a negative impact. But if you give money to Charity A and it turns out Charity B was slightly more efficient, you’ve still had a dramatically bigger impact than if you spent it on yourself.
Even if you have no idea which charity is better, the only cases in which you would be justified in not donating to either are: a) there’s a relatively cheap way to figure out which is better, in which case you should do that first (see the Value of Information stuff), or
b) you think that giving money to charity is likely enough to be counterproductive that the expected value is negative. Which seems plausible for some forms of African aid, possible for FAI, and demonstrably false for “charity in general.”
It’s also worth noting that the expected value of donating to a good charity is a lot higher than the expected value of voting, since the vast majority of people don’t direct their giving thoughtfully and there’s a lot of low hanging fruit. (GiveWell has plenty of articles on this).
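To make the expected-value comparison concrete, here is a minimal sketch in Python. The numbers are entirely made up for illustration; only the structure of the argument matters. The point is that a vote for the worse candidate has negative value (zero-sum), while the “wrong” charity is merely somewhat less good than the right one.

```python
# Toy expected-value comparison of voting vs. donating.
# All probabilities and impact numbers below are hypothetical.

def expected_value(outcomes):
    """Sum of probability * impact over the possible outcomes."""
    return sum(p * impact for p, impact in outcomes)

# Voting is zero-sum: backing the worse candidate has negative impact.
vote = expected_value([
    (0.5, +1.0),   # you happened to pick the better candidate
    (0.5, -1.0),   # you happened to pick the worse one
])

# Donating is positive-sum: the less efficient charity is merely less good,
# not harmful (setting aside the actively counterproductive case b above).
donation = expected_value([
    (0.5, +10.0),  # you picked the more efficient charity
    (0.5, +8.0),   # you picked the slightly less efficient one
])

print(f"expected value of a vote:     {vote:+.1f}")
print(f"expected value of a donation: {donation:+.1f}")
```

Under total uncertainty about which option is better, the vote’s expected value collapses toward zero while the donation’s stays large, which is also why cheap information about which charity is the better one is worth getting before you give.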
Second stupid question: There is a lot of talk about ethics on lesswrong. I still don’t understand why people talk about ethics and not just about what they want. Whatever morality is or is not, shouldn’t it be implied by what we want and the laws of thought?
Yes, it should. That’s what people are talking about, for the most part, when they talk about ethics. Note that even though ethics is (probably) implied by what we want, it isn’t equal to what we want, so it’s worth having a separate word to distinguish between what we should want if we were better informed, etc., and what we actually want right now. This strikes me as so obvious that I think I might be missing the point of your question. Do you want to clarify?
Third stupid question: I still don’t get how expected utility maximization doesn’t lead to the destruction of complex values. Even if your utility function is complex, some goals will yield more utility than others without hitting diminishing marginal returns. Bodily sensations like happiness, for example, don’t seem to run into diminishing returns.
Well, since I value all that complex stuff, happiness has negative marginal returns as soon as it starts to interfere with my ability to have novelty, challenge, etc. I would rather be generally happier, but I would not rather be a wirehead, so somewhere between my current happiness state and wireheading, the return on happiness turns negative (assuming for a moment that my preferences now are a good guide to my extrapolated preferences). If your utility function is complex, and you value preserving all of its components, then maximizing one aspect can’t maximize your utility.
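To illustrate that last point, here is a toy sketch in Python of a complex utility function. The weights and the crowding-out model are hypothetical, chosen only to show the shape of the argument: once optimizing happiness starts eating into novelty and challenge, total utility peaks well short of wireheading.

```python
# Toy complex utility function: happiness, novelty, and challenge all matter,
# but time and attention are shared, so maxing happiness crowds out the rest.
# Weights and the crowding-out terms are made up for illustration.

def utility(happiness):
    """Total utility as a function of how hard we optimize happiness (0..1)."""
    novelty = 1.0 - 0.9 * happiness ** 2    # wireheading leaves little room for novelty...
    challenge = 1.0 - 0.9 * happiness ** 2  # ...or for challenge
    return 1.0 * happiness + 1.0 * novelty + 1.0 * challenge

# Search over happiness levels from 0.00 to 1.00 for the overall optimum.
best_utility, best_happiness = max((utility(h / 100), h / 100) for h in range(101))
print(f"utility peaks at happiness = {best_happiness:.2f}, not at 1.0 (wireheading)")
```

In this sketch the single-component maximum (happiness = 1.0, i.e. wireheading) is strictly worse than the interior optimum, which is the sense in which maximizing one aspect of a complex utility function can’t maximize the whole thing.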
As for the second part of your question: I hadn’t thought of that. I’ll let my smarter post-Singularity self evaluate my options and make the best decision it can, and if the utility-maximizing choice is to devote all resources to trying to beat entropy or something, then that’s what I’ll do. My current instinct, though, is that preserving existing lives is more important than creating new ones, so I don’t particularly care to gather as many resources as possible in order to create as many humans as possible. I also don’t really understand what you are trying to get at. Is this an argument from consequences against x-risk prevention? Or are you arguing that utility maximization in general is bad?
These aren’t stupid questions, by the way; they’re relevant and thought-provoking, and the fact that you did extremely poorly on an IQ test is some of the strongest evidence I’ve encountered that IQ tests don’t matter.
That’s a good book.