Mostly my relatively high certainty is based on the simple fact that people have been unable to refute the core claims, and so have I.
Which paper or post outlines those core claims? I am not sure what they are.
...because plenty of my other opinions have changed over the same time period...
I find it very hard to pinpoint when and how I changed my mind about what. I’d be interested to hear some examples so I can compare them with my own opinions on those issues, thanks.
I’ve on many occasions been unwilling to believe in the issue but then had no choice but to admit the facts.
What do you mean by that? What does it mean for you to believe in the issue? What facts? Personally I don’t see how anyone could possibly justify not believing that risks from AI are a possibility. At the same time I think that some people are much more confident than the evidence allows them to be. Or maybe I am missing something.
SI is an important institution doing very important work that deserves much more monetary support and attention than it currently gets. The same is true for the FHI and existential risk research. But that’s all there is to it. The fanaticism and the portrayal of them as world saviours, e.g. “I feel like humanity’s future is in good hands”, really make me sick.
Which paper or post outlines those core claims? I am not sure what they are.
Mostly just:
AGI might be created within my lifetime
When AGI is created, it will eventually take control of humanity’s future
It will be very hard to create AGI in such a way that it won’t destroy almost everything that we hold valuable
I find it very hard to pinpoint when and how I changed my mind about what. I’d be interested to hear some examples so I can compare them with my own opinions on those issues, thanks.
Off the top of my head:
I stopped being religious (since then I’ve alternated between various degrees of “religion is idiotic” and “religion is actually kinda reasonable”)
I think it was around this time that I did a pretty quick heel-turn from being a strong supporter of the current copyright system to wanting to see the whole system drastically reformed (been refining and changing my exact opinions on the subject since then)
I used to be very strongly socialist (in the Scandinavian sense) and thought libertarians were pretty much crazy, since then I’ve come to see that they do have a lot of good points
I used to be very frustrated by people behaving in seemingly stupid and irrational ways; these days I’m a lot less frustrated, since I’ve come to see the method in the madness. (E.g. realizing the reason for some of the (self-)signaling behavior outlined in this post makes me a lot more understanding of people engaging in it.)
What do you mean by that? What does it mean for you to believe in the issue? What facts?
Things like:
Thinking that “oh, this value problem can’t really be that hard, I’m sure it’ll be solved” and then realizing that no, the value problem really is quite hard.
Thinking that “well, maybe there’s no hard takeoff, Moore’s law levels off and society will gradually adapt to control the AGIs” and then realizing that even if there were no hard takeoff at first, it would only be a matter of time before the AGIs broke free of human control. Things might be fine and under control for thirty years, say, and just when everyone is getting complacent, some computing breakthrough suddenly lets the AGIs run ten times as fast and then humans are out of the loop.
Thinking that “well, even if AGIs are going to break free of human control, at least we can play various AGIs against each other” and then realizing that this will only get humans caught in the crossfire; various human factions fighting each other hasn’t allowed the chimpanzees to play us against each other very well.
I became more convinced this was important work after talking to Anna Salamon. After talking to her and other computer scientists, I came to believe that a singularity is somewhat likely, and that it would be easy to screw up with disastrous consequences.
But evaluating a charity doesn’t just mean deciding whether they’re working on an important problem. It also means evaluating their chance of success. If you think SIAI has no chance of success, or is sure to succeed given the funding they already have, there’s no point in donating. I have no idea how likely it is that they’ll succeed, and don’t know how to get such information. Holden Karnofsky’s writing on estimate error is relevant here.
If you think SIAI has no chance of success, or is sure to succeed given the funding they already have, there’s no point in donating.
I agree, a very important point.
I became more convinced this was important work after talking to Anna Salamon.
I have read very little from her when it comes to issues concerning SI’s main objective. Most of her posts seem to be about basic rationality.
She once tried to start a webcam conversation with me, but my spoken English was too poor and slow to have a conversation about such topics.
And even if I talked to her, she could tell me a lot, but I would be unable to judge whether what she says is more than internally consistent, whether there is any connection to actual reality. I am simply not an AGI expert, very far from it. The best I can do so far is judge her output relative to what others have to say.
I’m also far from an expert in this field—I didn’t study anything technical, and didn’t have many friends who did, either. At the time I spoke to Anna, I wasn’t sure how to judge whether a singularity was even possible. At her suggestion, I asked some non-LW computer scientists (her further suggestion was to walk into office hours of a math or CS department at a university, which I haven’t done). They thought a singularity was fairly likely, and obviously hadn’t thought about any dangers associated with it. From reading Eliezer’s writings I’m convinced that a carelessly made AI could be disastrous. So from those points, I’m willing to believe that most computer scientists, if they succeeded in making an AI, would accidentally make an unfriendly one. Which makes me think SIAI’s cause is a good one.
But after reading GiveWell’s interview with SIAI, I don’t think they’re the best choice for my donation, especially since they say they don’t have immediate plans for more funding at this time. I’ll probably go with GiveWell’s top pick once they come out with their new ratings.