That decision would be met with disbelief by my surroundings.
Do you mean that people around you would not believe you were donating? Or would not think your cause a good one? Or would tell you that large donations are strange or a bad idea?
I’d really be interested to know, since I’ve recently started writing on the topic. Hardly anyone is willing to say why they don’t give.
For me:
1) The funds I have I might need later, and I’m not willing to take chances on my future.
2) Uncertainty as to whether the money would do much good.
3) Selfishness; my own well-being feels more important than a cause that might save everyone.
4) Potential benefits from SIAI are far, and there’s no societal pressure to donate money.
5) I like money. Giving money hurts. It’s supposed to go in my direction, dammit.
I do anticipate that if I were to attain a sufficient amount of money (i.e. enough to feel that I don’t need to worry about money ever again), I would donate, though probably not exclusively to SIAI.
Edit: Looking at those five points, an idea occurs to me. What if there were a post exclusively for people to comment saying that they donated (possibly specifying the amount), and receive karma in return? Strange as it is, karma might be more motivating (near) for some people than the prospect of saving humanity (far!). Just like the promise of extra Harry Potter fan-fiction.
We did that once.
The results were weird.
It looks as though about half of your objections have to do with SIAI and the other half have to do with charitable donation in general. In my opinion, there are very strong arguments for donating to charity even if the best available charity is much less effective than SI claims to be. If you find these arguments persuasive, you could separate your charitable giving into two phases: during the first phase, you would establish some sort of foundation and begin gradually redirecting your income/assets to it. During the second phase, you could attempt to figure out what the optimal charity was and donate to it.
I’ve found that breaking a difficult task into phases like this helps me a lot. For example, I try to keep the brainstorming of rules for myself to follow and their actual implementation as separate phases.
Hardly anyone is willing to say why they don’t give.
My brain tells me that the reason I’ve only given a tiny fraction of my wealth so far is that there is still valuable information to be learned regarding whether SIAI is the best charity to donate to.
I feel fairly sure that I haven’t yet acquired/processed all of the relevant information
I feel fairly sure that the value of that information is still high
I feel that I’m not choosing to acquire that information as quickly as I could be
I’m not sure what will happen when the value of information dips below the value of acting promptly—whether I’ll start giving massively or move on to the next excuse, i.e. I’m not sure I trust my future self.
What have you done to seek info as to which charity is best? Or what do you plan to do in the next few weeks?
Recently I’ve been in touch with Nick Beckstead from Giving What We Can, who told me at the Singularity Summit that the organization was starting a project to research unconventional (but potentially higher-impact) charities, including x-risk mitigation. I may be able to help him with that project somehow, but right now I’m not quite sure how.
I’m also starting a Toronto LW singularity discussion group next week—I don’t know how much interest in optimal philanthropy there is in Toronto, but I’m hoping that I can at least improve my understanding of this particular issue, as well as have a meatspace community I can rely on to understand and support what I’m trying to do.
This is definitely an above-average week in terms of sensible things done, though.
Some more background: I have a long history of not getting much done (except in my paid job) and have been depressed (on and off) since realising that x-risk was such a major issue and that optimal philanthropy was what I really wanted to be doing.
What have you done? I would really like to hear how various SI members came to believe what they believe now.
How did you learn about risks from AI? Were you evaluating charities when you learned about existential risks? What did you do next, read all the available material on AGI research?
I can’t imagine how someone could possibly be as convinced as the average SI member without first becoming an expert in AI and complexity theory.
I ran across the Wikipedia article about the technological singularity when I was still in high school, maybe around 2004. From there, I found Staring into the Singularity, SL4, the Finnish Transhumanist Association, and others.
My opinions about the Singularity have been drifting back and forth, with the initial enthusiasm and optimism being replaced by pessimism and a feeling of impending doom. I’ve been reading various things on and off, and have participated in a number of online discussions. Mostly my relatively high certainty is based on the simple fact that people have been unable to refute the core claims, and so have I. I don’t think it’s just confirmation bias either, because plenty of my other opinions have changed over the same time period, and because I’ve on many occasions been unwilling to believe in the issue but then had no choice but to admit the facts.
Which paper or post outlines those core claims? I am not sure what they are.
I find it very hard to pinpoint when and how I changed my mind about what. I’d be interested to hear some examples to compare my own opinion on those issues, thanks.
I’ve on many occasions been unwilling to believe in the issue but then had no choice but to admit the facts.
What do you mean by that? What does it mean for you to believe in the issue? What facts? Personally I don’t see how anyone could possibly justify not believing that risks from AI are a possibility. At the same time I think that some people are much more confident than the evidence allows them to be. Or I am missing something.
The SI is an important institution doing very important work, and it deserves much more monetary support and attention than it currently gets. The same is true for the FHI and for existential risk research. But that’s all there is to it. The fanaticism and the portrayal of its members as world saviours, e.g. “I feel like humanity’s future is in good hands”, really make me sick.
Which paper or post outlines those core claims? I am not sure what they are.
Mostly just:
AGI might be created within my lifetime
When AGI is created, it will eventually take control of humanity’s future
It will be very hard to create AGI in such a way that it won’t destroy almost everything that we hold valuable
I find it very hard to pinpoint when and how I changed my mind about what. I’d be interested to hear some examples to compare my own opinion on those issues, thanks.
Off the top of my head:
I stopped being religious (since then I’ve alternated between various degrees of “religion is idiotic” and “religion is actually kinda reasonable”)
I think it was around this time that I did a pretty quick heel-turn from being a strong supporter of the current copyright system to wanting to see the whole system drastically reformed (I’ve been refining and changing my exact opinions on the subject since then)
I used to be very strongly socialist (in the Scandinavian sense) and thought libertarians were pretty much crazy; since then I’ve come to see that they do have a lot of good points
I used to be very frustrated by people behaving in seemingly stupid and irrational ways; these days I’m a lot less frustrated, since I’ve come to see the method in the madness. (E.g. realizing the reason for some of the (self-)signaling behavior outlined in this post makes me a lot more understanding of people engaging in it.)
What do you mean by that? What does it mean for you to believe in the issue? What facts?
Things like:
Thinking that “oh, this value problem can’t really be that hard, I’m sure it’ll be solved” and then realizing that no, the value problem really is quite hard.
Thinking that “well, maybe there’s no hard takeoff, Moore’s law levels off and society will gradually adapt to control the AGIs” and then realizing that even if there were no hard takeoff at first, it would only be a matter of time before the AGIs broke free of human control. Things might be fine and under control for thirty years, say, and just when everyone is getting complacent, some computing breakthrough suddenly lets the AGIs run ten times as fast and then humans are out of the loop.
Thinking that “well, even if AGIs are going to break free of human control, at least we can play various AGIs against each other” and then realizing that this will only get humans caught in the crossfire; various human factions fighting each other hasn’t allowed the chimpanzees to play us against each other very well.
I became more convinced this was important work after talking to Anna Salamon. After talking to her and other computer scientists, I came to believe that a singularity is somewhat likely, and that it would be easy to screw up, with disastrous consequences.
But evaluating a charity doesn’t just mean deciding whether they’re working on an important problem. It also means evaluating their chance of success. If you think SIAI has no chance of success, or is sure to succeed given the funding they already have, there’s no point in donating. I have no idea how likely it is that they’ll succeed, and don’t know how to get such information. Holden Karnofsky’s writing on estimate error is relevant here.
If you think SIAI has no chance of success, or is sure to succeed given the funding they already have, there’s no point in donating.
I agree, a very important point.
I became more convinced this was important work after talking to Anna Salamon.
I have read very little from her when it comes to issues concerning SI’s main objective. Most of her posts seem to be about basic rationality.
She tried to start a webcam conversation with me once, but my spoken English was just too bad and slow to have a conversation about such topics.
And even if I talked to her, she could tell me a lot and I would be unable to judge whether what she says is more than internally consistent, or whether there is any connection to actual reality. I am simply not an AGI expert, very far from it. The best I can do so far is judge her output relative to what others have to say.
I’m also far from an expert in this field—I didn’t study anything technical, and didn’t have many friends who did, either. At the time I spoke to Anna, I wasn’t sure how to judge whether a singularity was even possible. At her suggestion, I asked some non-LW computer scientists (her further suggestion was to walk into office hours of a math or CS department at a university, which I haven’t done). They thought a singularity was fairly likely, and obviously hadn’t thought about any dangers associated with it. From reading Eliezer’s writings I’m convinced that a carelessly made AI could be disastrous. So from those points, I’m willing to believe that most computer scientists, if they succeeded in making an AI, would accidentally make an unfriendly one. Which makes me think SIAI’s cause is a good one.
But after reading GiveWell’s interview with SIAI, I don’t think they’re the best choice for my donation, especially since they say they don’t have immediate plans for more funding at this time. I’ll probably go with GiveWell’s top pick once they come out with their new ratings.
The question of how much information is enough is difficult for me, too. My plan right now is to give a good chunk of money every six months or so to whichever charity I think best at that time. That way I stay in the habit of giving (reducing chances that my future self will fail to start) and it gives me a deadline so that I actually do some research.
Because GiveWell says I can do better.
Sorry, I didn’t mean “give to SIAI.” I meant give to whatever cause you think best. I agree that GiveWell is a good tool.
I don’t care enough about myself and I am not really an altruist either.
From your initial post in this thread, I doubt that your true rejection is “I don’t care about anything.”
Hardly anyone is willing to say why they don’t give.
In my case, reducing existential risks isn’t high on my priority list. I don’t claim to be an altruist. I donate blood, but this has the advantage from my point of view that it is bounded, and visible, and local in both time and space.
I understand why visibility is an advantage, and possibly boundedness. What is better about local?
I’m guessing that reciprocity is more likely to work locally. If nothing else, it is spread across a smaller population. (I should add: Locality isn’t cleanly orthogonal to visibility as a criterion. I’d guess that they have a considerable correlation.)
Do you mean that people around you would not believe you were donating? Or would not think your cause a good one? Or would tell you that large donations are strange or a bad idea?
They would think that I have gone mad and would probably be mad at me as a result.
For the last ~12 years, most of my money has gone to donations or tuition. During this time I’ve maintained good relationships with friends and family, met and married a man with an outlook similar to mine, enjoyed lots of inexpensive pleasures, and generally had a good life. People thinking I’m mad or being mad at me has not been a problem. I blog on the topic.