As I’ve said elsewhere:
(a) There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created. I have not seen anybody present a coherent argument that AGI is likely to be developed before any other existential risk hits us.
(b) Even if AGI deserves top priority, there’s still the important question of how to go about working toward an FAI. As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).
(c) Even if AGI is near, there are still serious issues of accountability and transparency connected with SIAI. How do we know that they’re making a careful effort to use donations in an optimal way? As things stand, I believe that it would be better to start an organization which exhibits high transparency and accountability, fund that, and let SIAI fold. I might change my mind on this point if SIAI decided to strive toward transparency and accountability.
I really agree with both (a) and (b) (although I do not care about (c)). I am glad to see other people around here who think both of these things.
Re: “There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created.”
The humans are going to be obliterated soon?!?
Alas, you don’t present your supporting reasoning.
No, no, I’m not at all confident that humans will be obliterated soon. But why, for example, is it more likely that humans will go extinct due to AGI than that humans will go extinct due to a large-scale nuclear war? It could be that AGI deserves top priority, but I haven’t seen a good argument for why.
I think AGI wiping out humanity is far more likely than nuclear war doing so (it’s hard to kill everyone with a nuclear war) but even if I didn’t, I’d still want to work on the issue which is getting the least attention, since the marginal contribution I can make is greater.
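To make the “least attention” point a bit more concrete, here is a toy sketch; the log-shaped curve and all the numbers below are made up purely for illustration, not anyone’s actual estimates. If risk reduction has diminishing returns in total funding, a marginal dollar buys more in a neglected area even when the crowded area addresses a larger risk.

```python
import math

# Toy model of diminishing returns: risk reduced grows like a log of total
# funding, so the marginal dollar is worth more where funding is scarce.
# All parameters are hypothetical placeholders for illustration only.

def risk_reduction(total_funding, scale, importance):
    """Concave (log-shaped) toy curve of total risk reduced vs. funding."""
    return importance * math.log1p(total_funding / scale)

def marginal_value(current_funding, scale, importance, dollar=1.0):
    """Extra risk reduced by adding one more dollar at the current funding level."""
    return (risk_reduction(current_funding + dollar, scale, importance)
            - risk_reduction(current_funding, scale, importance))

# Hypothetical: cause A is 10x as "important" but already gets 1000x the funding.
well_funded = marginal_value(current_funding=1e9, scale=1e7, importance=10.0)
neglected = marginal_value(current_funding=1e6, scale=1e7, importance=1.0)

print(f"marginal dollar, well-funded cause: {well_funded:.2e}")
print(f"marginal dollar, neglected cause:   {neglected:.2e}")
```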
Yes, I actually agree with you about nuclear war (and did before I mentioned it!) - I should have picked a better example. How about existential risk from asteroid strikes?
Several points:
(1) Nuclear war could still cause astronomical waste of the form that I discuss here.
(2) Are you sure that the marginal contribution that you can make to the issue which is getting the least attention is the greatest? The issues getting the least attention may be getting little attention precisely because people know that there’s nothing that can be done about them.
(3) If you satisfactorily address my point (a), points (b) and (c) will remain.
p(asteroid strike/year) is pretty low. Most are not too worried.
The question is whether at present it’s possible to lower existential risk more by funding and advocating FAI research than by funding and advocating an asteroid strike prevention program. Despite the low probability of an asteroid strike, I don’t think that the answer to this question is obvious.
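To make the comparison concrete, here is the sort of back-of-envelope calculation I have in mind. Every number below is a hypothetical placeholder, not an estimate I’m defending; the point is only that the answer turns on quantities (the probability of each catastrophe, the chance that marginal funding actually averts it, and the program’s cost) about which reasonable people disagree.

```python
# Toy comparison of expected existential-risk reduction per marginal dollar.
# All inputs are hypothetical placeholders chosen for illustration only.

def risk_reduced_per_dollar(p_catastrophe, p_funding_averts_it, cost_dollars):
    """Expected reduction in existential risk bought per dollar of funding."""
    return p_catastrophe * p_funding_averts_it / cost_dollars

# Asteroid strike prevention: a very unlikely catastrophe, but a cheap,
# well-understood program with a good chance of working if it is ever needed.
asteroid = risk_reduced_per_dollar(
    p_catastrophe=1e-6,        # chance per century of a civilization-ending strike
    p_funding_averts_it=0.5,   # chance the tracking/deflection program averts it
    cost_dollars=3e8,          # "a few hundred million dollars"
)

# FAI research: arguably a much larger risk, but far less certainty that
# marginal funding changes the outcome.
fai = risk_reduced_per_dollar(
    p_catastrophe=1e-2,        # chance per century of catastrophe from unfriendly AGI
    p_funding_averts_it=1e-4,  # chance marginal research changes the outcome
    cost_dollars=3e8,
)

print(f"asteroid program: {asteroid:.2e} expected risk reduced per dollar")
print(f"FAI research:     {fai:.2e} expected risk reduced per dollar")
```

With these particular placeholders the two come out within a factor of a few of each other, which is exactly why I don’t think the answer is obvious; plug in your own numbers and the ranking can flip either way.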
I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines—and so we should allocate resources to their development. Inevitably, that will include consideration of safety features. We can already see some damage when today’s companies decide to duke it out—and today’s companies are not very powerful compared to what is coming. The situation seems relatively pressing and urgent.
that=asteroids?
If yes, I highly doubt we need machines significantly more intelligent than existing military technology adapted for the purpose.
That would hardly be a way to “get out of the current vulnerable position as soon as possible”.
I agree that friendly intelligent machines would be a great asset in assuaging future existential risk.
My current position is that at present, devoting resources to developing safe intelligent machines is so unlikely to substantially increase the probability that we’ll actually develop them that funding and advocating an asteroid strike prevention program is likely to reduce existential risk more than funding and advocating FAI research is.
I may be wrong, but I would require a careful argument for the opposite position before changing my mind.
Asteroid strikes are very unlikely—so beating them is a really low standard, which, IMO, machine intelligence projects clear with ease. Funding the area sensibly would help make it happen—by most accounts. Detailed justification is beyond the scope of this comment, though.
Assuming that an asteroid strike prevention program costs no more than a few hundred million dollars, I don’t think that it’s easy to do better at assuaging existential risk than by funding such a program (though it may be possible). I intend to explain why I think it’s so hard to lower existential risk through funding FAI research later on (not sure when, but within a few months).
I’d be interested in hearing your detailed justification. Maybe you can write a string of top-level posts at some point.
Considering the larger problem statement (technically understanding what we value, as opposed to actually building an AGI with those values), what do you see as distinguishing a situation where we are ready to consider the problem from one where we are not? How can one come to such a conclusion without actually considering the problem?
I think that understanding what we value is very important. I’m not convinced that developing a technical understanding of what we value is the most important thing right now.
I imagine that for some people, working on developing a technical understanding of what we value is the best thing that they could be doing. Different people have different strengths, and this leads to the utilitarian thing to do varying from person to person.
I don’t believe that the best thing for me to do is to study human values. I also don’t believe that at the margin, funding researchers who study human values is the best use of money.
Of course, my thinking on these matters is subject to change with incoming information. But if what I think you’re saying is true, I’d need to see a more detailed argument than the one that you’ve offered so far to be convinced.
If you’d like to correspond by email about these things, I’d be happy to say more about my thinking. Feel free to PM me with your email address.
I didn’t ask about perceived importance (that has already taken feasibility into account); I asked about your belief that it’s not a productive enterprise (that is, the feasibility component of importance, considered alone), that we are not ready to work efficiently on the problem yet.
If you believe that we are not ready now, but believe that we must work on the problem eventually, you need to have some notion of the conditions under which it would become productive to work on the problem.
And that’s my question: what are those conditions, or how can one figure them out without actually attempting to study the problem (by proxy of a small team devoted to professionally studying the problem; I’m not yet arguing for starting a program on the scale of what’s expended on the study of string theory)?
I think that research of the type that you describe is productive. Unless I’ve erred, my statements above are statements about the relative efficacy of funding research of the type that you describe rather than suggestions that research of the type that you describe has no value.
I personally still feel the way that I did in June despite having read Fake Fake Utility Functions, etc. I don’t think it’s very likely that we will eventually have to do research of the type that you describe to ensure an ideal outcome. Relatedly, I believe that at the margin, at the moment, funding other projects has higher expected value than funding research of the type that you describe. But I may be wrong and don’t have an argument against your position. I think that this is something that reasonable people can disagree on. I have no problem with you funding, engaging in, and advocating research of the type that you describe.
You and I may have a difference which cannot be rationally resolved in a timely fashion, on account of the information that we have access to being in a form that makes it difficult or impossible to share. Having different people fund different projects according to their differing beliefs about the world serves as a sort of real-world approximation to Bayesian averaging over all people’s beliefs and then funding what should be funded based on that average.
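To spell out what I mean by that approximation, here is a toy sketch; the donors, projects, and numbers are invented purely for illustration. If each donor funds whichever project their own beliefs favour, the aggregate allocation roughly tracks what a single donor would choose after averaging everyone’s beliefs.

```python
# Toy illustration: individual donors funding their own top pick vs. funding
# in proportion to belief-averaged estimates. All numbers are made up.

# Hypothetical per-dollar risk-reduction estimates from three donors
# for two projects.
estimates = {
    "alice": {"fai_research": 3e-12, "asteroid_program": 1e-12},
    "bob":   {"fai_research": 0.5e-12, "asteroid_program": 2e-12},
    "carol": {"fai_research": 1e-12, "asteroid_program": 1e-12},
}
budget_per_donor = 100.0  # dollars each donor gives
projects = ["fai_research", "asteroid_program"]

# Allocation 1: everyone funds the project they personally rate highest.
individual = {p: 0.0 for p in projects}
for beliefs in estimates.values():
    best = max(beliefs, key=beliefs.get)
    individual[best] += budget_per_donor

# Allocation 2: average the estimates, then split the pooled budget
# in proportion to the averaged per-dollar estimates.
avg = {p: sum(b[p] for b in estimates.values()) / len(estimates) for p in projects}
total_avg = sum(avg.values())
pooled = budget_per_donor * len(estimates)
averaged = {p: pooled * avg[p] / total_avg for p in projects}

print("each donor funds their own favourite:", individual)
print("fund in proportion to averaged beliefs:", averaged)
```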
So, anyway, I think you’ve given satisfactory answers to how you feel about questions (a) and (b) raised in my comment. I remain curious how you feel about point (c).
I did answer (c) before: any reasonable effort in that direction should start with trying to get SIAI itself to change or justify the way it behaves.
Yes, I agree with you. I didn’t remember that you had answered this question before. Incidentally, I did correspond with Michael Vassar. More on this to follow later.
p(machine intelligence) is going up annually—while p(nuclear holocaust) has been going down for a long time now. Neither is likely to obliterate civilisation—but machine intelligence could nonetheless be disruptive.
My comment was specifically about the importance of FAI irrespective of existential risks, AGI or not. If we manage to survive at all, this is what we must succeed at. It also prevents all existential risks on completion, where theoretically possible.
Okay, we had this back-and-forth before; I didn’t understand you then, but now I do. I guess I was being dense before. Anyway, the probability of current action leading to FAI might still be sufficiently small that it makes sense to focus on other existential risks for the moment. And my other points remain.
This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes, they are deciding whether to take a specific cause seriously. If you already contributed everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI would do more good. But that’s not the question usually posed.
Re: “As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).”
Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all. SIAI doesn’t work on building AGI right now, no no no. We need understanding, not robots. Like this post, say.
I agree that in general people should be more concerned about existential risk and that it’s worthwhile to promote general awareness of existential risk.
But there is a zero-sum aspect to philanthropic efforts. See the GiveWell blog entry titled Denying The Choice.
More to the point, I think that one of the major factors keeping people away from studying existential risk is that many of the people who are interested in existential risk (including Eliezer) have low credibility on account of expressing confident, apparently sensationalist claims without supporting them with careful, well-reasoned arguments. I’m seriously concerned about this issue.
If Eliezer can’t explain why it’s pretty obvious to him that AGI will be developed within the next century, then he should explicitly say something like “I believe that AGI will be developed over the next 100 years, but it’s hard for me to express why, so it’s understandable that people don’t believe me” or “I’m uncertain as to whether or not AGI will be developed over the next 100 years.”
When he makes unsupported claims that sound like the sort of thing that somebody would say just to get attention, he’s actively damaging the cause of existential risk.
Re: “AGI will be developed over the next 100 years”
I list various estimates, from those interested enough in the issue to bother giving probability density functions, at the bottom of:
http://alife.co.uk/essays/how_long_before_superintelligence/
Thanks, I’ll check this out when I get a chance. I don’t know whether I’ll agree with your conclusions, but it looks like you’ve at least attempted to answer one of my main questions concerning the feasibility of SIAI’s approach.
Those surveys suffer from selection bias. Nick Bostrom is going to try to get a similar survey instrument administered to a less-selected AI audience. There was also a poll at the AI@50 conference.
http://www.engagingexperience.com/2006/07/ai50_first_poll.html
If the raw data was ever published, that might be of some interest.
Any chance of piggybacking questions relevant to Maes-Garreau on that survey? As you point out on that page, better stats are badly needed.
And indeed, I suggested to SIAI folk that all public record predictions of AI timelines be collected for that purpose, and such a project is underway.
Hm, I had not heard about that. SIAI doesn’t seem to do a very good job of publicizing its projects or perhaps doesn’t do a good job of finishing and releasing them.
It just started this month, at the same time as Summit preparation.
Re: “Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all.”
The marginal benefit of making machines smarter seems large—e.g. see automobile safety applications: http://www.youtube.com/watch?v=I4EY9_mOvO8
I don’t really see that situation changing much anytime soon—there will probably be such marginal benefits for a long time to come.