Threads like that make me want to apply Bayes' theorem to something.
You start with probability 0.03 that Eliezer is a sociopath, the baseline. Then you do Bayesian updates on answers to questions like: Does he ascribe grandiose importance to himself, or is his self-image generally modest and in line with his actual accomplishments? Are his plans in line with his qualifications and prior accomplishments, or are they grandiose? Is he talking people into giving him money as a source of income? Is he known to do very expensive altruistic stuff whose cost exceeds any self-interested payoff, or not? Did he claim to be an ideally moral being? And so on. You do updates based on the likelihood of each answer for sociopaths and for normal people. Now, I'm not saying he is anything; all I am saying is that I can't help doing such updates: first via fast pattern matching by the neural network, then, if I find the issue significant enough, explicitly with a calculator if I want to double-check.
edit: I think it will be better to change the wording here, as different people understand that word differently. Let's say we are evaluating whether the utility function includes other people to any significant extent, in the presence of communication noise and misunderstandings. Consider that some people are prone to being Pascal-wagered, so a utility function that doesn't include other people leads to attempts to Pascal-wager others, i.e. grandiose plans. On the AI work being charitable, I don't believe it, to be honest. One has to study and get into Google (or the like) if one wants the best shot at influencing the morality of future AI. I think that's the direction in which everyone genuinely interested in saving mankind and genuinely worried about the AI has gravitated. If one wants to make an impact by talking, one needs to first gain some status among the cool guys, and that means making some really impressive working accomplishments.
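As a concrete illustration of the update procedure described above, here is a minimal sketch in the odds form of Bayes' theorem. Every likelihood ratio in it is an invented placeholder rather than a measured rate, and, as discussed further down the thread, multiplying them together silently assumes the cues are conditionally independent.

```python
# A minimal sketch of the odds-form update described above. Every likelihood
# ratio below is an invented placeholder, not a measured rate, and multiplying
# them together assumes the cues are conditionally independent (see below).

def update(prior_prob, likelihood_ratios):
    """Multiply prior odds by each cue's likelihood ratio; return a probability."""
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# 0.03 baseline prior; each ratio is P(cue | sociopath) / P(cue | not sociopath).
cues = {
    "grandiose self-importance": 5.0,               # assumed
    "plans out of line with accomplishments": 4.0,  # assumed
    "solicits money as income": 3.0,                # assumed
}
print(f"posterior = {update(0.03, cues.values()):.2f}")  # ~0.65 with these numbers
```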
It seems you are talking about high-functioning psychopaths, rather than psychopaths according to the diagnostic DSM-IV criteria, so the prior should be different from 0.03. Assuming a high-functioning psychopath is necessarily a psychopath, it seems the prior should be far lower than 0.03, at least judging from the criteria:
A) There is a pervasive pattern of disregard for and violation of the rights of others occurring since age 15 years, as indicated by three or more of the following:
1. failure to conform to social norms with respect to lawful behaviors, as indicated by repeatedly performing acts that are grounds for arrest;
2. deception, as indicated by repeatedly lying, use of aliases, or conning others for personal profit or pleasure;
3. impulsivity or failure to plan ahead;
4. irritability and aggressiveness, as indicated by repeated physical fights or assaults;
5. reckless disregard for safety of self or others;
6. consistent irresponsibility, as indicated by repeated failure to sustain consistent work behavior or honor financial obligations;
7. lack of remorse, as indicated by being indifferent to or rationalizing having hurt, mistreated, or stolen from another.
B) The individual is at least age 18 years.
C) There is evidence of conduct disorder with onset before age 15 years.
D) The occurrence of antisocial behavior is not exclusively during the course of schizophrenia or a manic episode.
He is a high-IQ individual, though, and that is rare on its own. There are smart people who pretty much maximize only their personal utility.
Be aware that conditional independence of these features (the Naive Bayes assumption) does not hold.
They are not independent: sociopathy (or, to a lesser degree, narcissism) is a common cause.
I am talking about conditional independence. Let us assume the answer to your first two questions is yes, and now you have a posterior of 0.1 that he is a sociopath. Next you want to update on the third claim, "Is he talking people into giving him money as a source of income?". You have to estimate the fraction of people for whom the third claim is true, and you have to do it for two groups. But the two groups are not sociopaths versus non-sociopaths; rather, they are sociopaths for whom the first two claims are true versus non-sociopaths for whom the first two claims are true. You don't have any data that would help you estimate these numbers.
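A sketch of that failure mode on invented numbers: when the two cues are nearly redundant within each class, the naive product of likelihoods overstates the evidence relative to an update on their joint probabilities.

```python
# Invented numbers illustrating the double-counting problem: two cues that are
# nearly redundant within each class. Updating on them as if independent
# overshoots the posterior computed from their joint probabilities.

prior = 0.03

p_cue1 = {"socio": 0.80, "normal": 0.10}   # assumed P(cue1 | class)
p_cue2 = {"socio": 0.80, "normal": 0.10}   # assumed P(cue2 | class)
p_both = {"socio": 0.75, "normal": 0.09}   # assumed P(cue1 & cue2 | class);
                                           # far above the independent product

def posterior(p_e_socio, p_e_normal):
    """P(socio | evidence) given the evidence probability under each class."""
    num = p_e_socio * prior
    return num / (num + p_e_normal * (1.0 - prior))

naive = posterior(p_cue1["socio"] * p_cue2["socio"],
                  p_cue1["normal"] * p_cue2["normal"])
exact = posterior(p_both["socio"], p_both["normal"])
print(f"naive: {naive:.2f}, exact: {exact:.2f}")  # naive: 0.66, exact: 0.20
```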
The 'sociopath' label is not a well-identified brain lesion; it is a predictor of behaviours. The label is used to decrease computational overhead by quantizing the quality (and to reduce communication overhead). One could in principle do without this label and directly predict the likelihood of an unethical self-serving act from previously observed behaviour; that is better in principle, but more computationally expensive, and may result in a much higher failure rate.
This exchange is, by the way, why I do not think much of 'rationality' as presented here. It is incredibly important to be able to identify sociopaths; if your decision theory does not permit you to identify sociopaths because you strive for a rigour you cannot reach, then you will be taken advantage of.
I think that's an overreaction... It's not that you can't do the math; it's that you have to be very clear about what numbers go where, and understand which of them you have to estimate and which can be objectively measured.
People do it selectively, though. When someone takes an IQ test and gets a high score, you assume that person has high IQ, for instance; you don't postulate the existence of 'low-IQ people who solved the first two problems on the test', who would then be more likely to solve the other, different problems while having 'low IQ', and ultimately score high while having 'low IQ'.
To explain the issue here in intuitive terms: let’s say we have the hypothesis that Alice owns a cat, and we start with the prior probability of a person owning a cat (let’s say 1 in 20), and then update on the evidence: she recently moved from an apartment building that doesn’t allow cats to one that does (3 times more likely if she has a cat than if she doesn’t), she regularly goes to a pet store now (7 times more likely if she has a cat than if she doesn’t), and when she goes out there’s white hair on her jacket sleeves (5 times more likely if she has a cat than if she doesn’t). Putting all of these together by Bayes’ Rule, we end up 85% confident she has a cat, but in fact we’re wrong: she has a dog. And thinking about it in retrospect, we shouldn’t have gotten 85% certainty of cat ownership. How did we get so confident in a wrong conclusion?
It's because, while each of those likelihoods is valid in isolation, they're not independent: there is a big chunk of people who move to pet-friendly apartments and go to pet stores regularly and have pet hair on their sleeves, and not all of them are cat owners. Those people are called pet owners in general, but even if we didn't know that, a good Bayesian would have kept tabs on the cross-correlations and noted that the straightforward estimate would thereby be invalid.
EDITED TO ADD: So the difference between that and the IQ test example is that you don't expect there to be an exceptional number of people who get the first two questions right and then do poorly on the rest of the test. The analogue there would be that, even though ability to solve mathematical problems correlates with ability to solve language problems, you should only count that correlation once. If a person does well on a slate of math problems, that's evidence they'll do well on language problems, but doing well on a second math test doesn't count as strong additional evidence that they'll do well on word problems. (That is, there are sharply diminishing returns.)
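For what it's worth, the 85% figure in the cat example does check out under the stated (and, as just argued, invalid) independence assumption; a quick check in the odds form:

```python
# Verifying the cat example's arithmetic (numbers from the paragraph above).
prior_odds = (1 / 20) / (19 / 20)          # 1-in-20 prior as odds, i.e. 1:19
posterior_odds = prior_odds * 3 * 7 * 5    # the three likelihood ratios
p = posterior_odds / (1 + posterior_odds)
print(f"{p:.0%}")  # ~85%, as stated; still wrong, because the cues overlap
```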
The cat is defined as something other than a combination of the owner's traits; that is the difference between the cat and IQ or any other psychological measure. If we were to say 'pet', the formula would have worked; better still if we had a purely black-box classifier of people into those who have the bunch of traits versus those who don't, regardless of the cause (a pet, a cat, a weird fetish for pet-related stuff).
It is, however, the case that narcissism does match sociopathy, to the point that the difference between the two is not very well defined. Anyhow, we can restate the problem and consider it a guess at the properties of the utility function, at the cost of extra verbiage.
The analogy to the math problems is good, but what we are compensating for is miscommunication, status gaming, and the like, by normal people.
I would suggest, actually, not the Bayesian approach, but a statistical prediction rule or a trained neural network.
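For readers unfamiliar with the term, a statistical prediction rule can be as simple as a unit-weighted checklist. A minimal sketch, with the cues and threshold invented purely for illustration (a real SPR would fit them to outcome data):

```python
# A minimal statistical prediction rule: count how many binary cues are present
# and flag when the count crosses a threshold. Cues and threshold are invented
# placeholders; a real SPR would calibrate them against outcome data.

def spr_flag(cues, threshold=3):
    """Return True when at least `threshold` cues are present."""
    return sum(bool(v) for v in cues.values()) >= threshold

observed = {
    "grandiose self-importance": True,
    "grandiose plans": True,
    "solicits money as income": True,
    "verified costly altruism": False,
}
print(spr_flag(observed))  # True with these made-up observations
```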
Given the asymptotic efficiency of the Bayes decision rule in a broad range of settings, those alternatives would give equivalent or less accurate classifications if enough training data (and computational power) were available. If this argument is not familiar, you might want to consult Chapter 2 of The Elements of Statistical Learning.
I don’t think you understood DanielVarga’s point. He’s saying that the numbers available for some of those features already have an unknown amount of the other features factored in. In other words, if you update on each feature separately, you’ll end up double-counting an unknown amount of the data. (Hopefully this explanation is reasonably accurate.)
http://en.wikipedia.org/wiki/Conditional_independence
I did understand his point. The issue is that psychological traits are defined as whatever is behind the correlation, whatever that may be: brain lesion A, or brain lesion B, or a weird childhood, or the like. They are very broad and are defined to include the 'other features'.
It is probably better to drop the word 'sociopath' and just say 'selfish', but then it is not immediately apparent why, e.g., arrogance not backed by achievements is predictive of selfishness, even though it very much is, as it is a case of a false signal of capability.
I don’t think it matters how it is defined… One still shouldn’t double count the evidence.
You can eliminate the evidence that you consider double-counted, for example grandiose self-worth and grandiose plans. Those need to both be present, because grandiose self-worth without grandiose plans would just indicate some sort of miscommunication (and the self-worth metric is more subjective); each alone is a much poorer indicator than the two combined.
In any case, accurate estimation of anything of this kind is very difficult. In general, one just adopts a strategy such that sociopaths would not have a sufficient selfish payoff for cheating it; altruism is a far cheaper signal for non-selfish agents. In very simple terms: if you give someone $3 for donating $4 to a well-verified charity, those who value $4 in charity above $1 in pocket will accept the deal. You just ensure that there is no selfish gain in the transactions, and you're fine. If you don't adopt an anti-cheat strategy, you will be found and exploited with very high confidence: unlike in the iterated prisoner's dilemma, cheaters get to choose whom they play with, and get to send signals that make easily-cheated agents play with them. A bad strategy is far more likely to be exploited than any conservative estimate would suggest.
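A toy version of that screening deal, with the utility weights invented for illustration: the deal costs the donor $1 net out of pocket and moves $4 to the charity, so it only attracts agents who weight the charity's $4 above their own $1.

```python
# Toy screening mechanism from the comment above: pay $3 to anyone who donates
# $4 to a verified charity. All utility weights are invented for illustration.

def accepts(self_weight, other_weight, payment=3.0, donation=4.0):
    """Accept iff the deal's utility beats keeping the money.

    Net effect of the deal: pocket loses (donation - payment),
    the charity gains the full donation.
    """
    return other_weight * donation > self_weight * (donation - payment)

print(accepts(self_weight=1.0, other_weight=0.5))  # mildly altruistic: True
print(accepts(self_weight=1.0, other_weight=0.0))  # purely selfish: False
```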
On the AI work being charitable, I don't believe it, to be honest. One has to study and get into Google (or the like) if one wants the best shot at influencing the morality of future AI. I think that's the direction in which everyone genuinely interested in saving mankind and genuinely worried about the AI has gravitated.
Can you name one person working in AI, commercial or academic, whose career is centered on the issue of AI safety? Whose actual research agenda (and not just what they say in interviews) even acknowledges that artificial intelligence is potentially the end of the human race, just as human intelligence was the end of many other species?
I noticed in a HN comment Eliezer claimed to have gotten a vasectomy; I wonder if that’s consistent or inconsistent with sociopathy? I can come up with plausible stories either way.
Given the shortage of sperm donors [*], that strikes me as possibly foolish if IQ is significantly heritable.
[*] I know there is in the UK—is there in California?
My basic conclusion is that it’s not worth trying because there is an apparent over-supply in the US and you’re highly unlikely to get any offspring.
Whereas in the UK, if your sperm is good then you’re pretty much certain to.
(I recently donated sperm in the UK. They’re ridiculously grateful. That at age 18 any offspring are allowed to know who the donor is seems to have been, in itself, enough to tremendously reduce the donation rate. So if you’re in the UK, male, smart and healthy, donate sperm and be sure to spread your genes.)
Really? Pardon me if I’m wrong, but I was under the impression that you were in your 30s or 40s, which in the US would damage your chances pretty badly. Perhaps I should amend my essay if it’s really that easy in the UK, because the difficulty of donating is the main problem (the next 2 problems are estimating the marginal increase in IQ by one donating, and then estimating the value of said marginal increase in IQ). Do you get notified when some of your sperm actually gets used or is it blind?
I'm 45, and started this donation cycle at 44. The limit in the UK is 40-45 depending on the clinic. I went to KCH; that link has all the tl;dr you could ever use on the general subject.
I thought I said this in email before … the UK typically has ~500 people a year wanting sperm, but only ~300 donors’ worth of sperm. So donate and it will be used if they can use it.
They don’t notify, but I can inquire about it later and find out if it’s been used. This will definitely not be for at least six months. The sperm may be kept and used up to about 10 years, I think.
My incentive for this was that I wanted more children but my loved one doesn't (having had two others before). The process is sort of laborious and long-winded, and I didn't get paid. (Some reimbursement is possible, but it's strictly limited to travel expenses, and I have a monthly train ticket anyway, so I didn't bother asking.) Basically it's me doing something that feels to me like I've spread my genes and is a small social good; when I said this was my reason for donating, they said that's the usual case amongst donors (many of whom are gay men who want children but are, obviously, quite unlikely to have them in the usual sort of long-term relationship with a woman).
Things different to your notes: The physical testing is of the sperm itself and a blood test, there was no physical examination. The personal background and “why are you doing this?” ethical talks were two ~1hr chats and were by far the most laborious part of the process. There is no signed contract to provide sperm. I was adopted, so only know a little about my family history (my birth mother got in touch with us a while ago and so I have a whole extra family I know something about); but what little I do know was fine. Once they’re ready for donations, the main burdensome aspect is appointments and travel time; in my case, whenever I couldn’t make it they had no problems rescheduling.
Under UK law, a sperm donor who goes through the HFEA-sanctioned process has no parental rights or responsibilities. However, since the interests of the hypothetical child are considered the most important thing, said child has the right, at age 18, to find out the biological father's name and provided contact details. (The father has no right to contact the child.) This single thing, unfortunately, appears to have been enough to scare a lot of donors off; hence the shortage.
Other thing to note: I tried donating in 2010 and, despite proven fertility (my daughter), my sperm wasn’t healthy enough to survive freezing. Then I stopped carrying a microwave transmitter right next to my testicles (i.e., I switch off my phone’s radio when it’s in a trouser pocket) and by a year later it was apparently much better. Did I mention they’re really keen for donors, enough so they’re willing to try people again?
I see, that’s remarkably different from everything I’ve found about US donating. Thanks for summarizing it.
Could you estimate your total time, soup to nuts, travel time and research included? I’m guessing perhaps 10-20 hours.
Research … a few hours. Say three. Email exchange: not much. Visits: 2.5 hours travel time each journey (KCH is on the other side of London from E17), which was one two-hour appointment for “why are you doing this?”, blood test and test sperm donation, a one-hour “are you absolutely OK with the ethical details of this?” (which leads me to think that people donating then changing their mind, which you can do any time until the donation is actually used, is a major pain in the backside for them), and four visits so far for actual donations (about 15 min each). Total including travel, which was most of it: 22 hours, if I’ve counted correctly.
Note that Melinda Gates fits the same criteria about as well.
She did expensive altruistic stuff that cost more than the expected self-interested payoff, though; actions that are more expensive to fake than the gain from faking them are a very strong predictor of non-psychopathy. The distinction between a psychopath who is genuinely altruistic and a non-psychopath is like that between a philosophical zombie and a human.
Eliezer either picked a much less lucrative career than he could have gotten with the same hours and enjoyment because he wanted to be altruistic, or I’m mistaken about career prospects for good programmers, or he’s a dirty rotten conscious liar about his ability to program.
People don't gain the ability to program out of empty air... everyone able to program has a long list of various working projects they trained on. In any case, programming is real work: it is annoying, it takes training, it takes education, and it slaps your ego on the nose just about every time you hit compile after writing any interesting code. And newbies are grossly mistaken about their abilities. You can't trust anyone to measure their own skills accurately, let alone report them.
Are you claiming (a non-negligible probability) that Eliezer would be a worse programmer if he’d decided to take up programming instead of AI research (perhaps because he would have worked on boring projects and given up?), or that he isn’t competent enough to get hired as a programmer now?
Is it clear that he would have gotten the same enjoyment out of a career as a programmer?