First, that anyone would attempt to implement FAI with any definition similar to that of SIAI seems highly unlikely, regardless of safety concern.
I don’t think the argument is very strong that people who are smart enough to create an AGI that can take over the universe in a matter of hours could be dumb enough not to recognize the dangers posed by such an AGI. To strengthen that argument you would either have to show that the people working for SIAI are vastly more intelligent than most AGI researchers, in which case they would be more likely to build the first AGI themselves, or that creating an AGI capable of explosive recursive self-improvement demands much less intelligence and insight than is necessary to recognize the risks from AI.
Second, that Eliezer would be upset if someone got it right before he did seems obviously absurd.
I deem that to be very unlikely as well. But given the scope of the project, and human nature, the possibility should be taken into account. The same goes for the possibility that it’s a giant scam. Because if that were the case, even at a probability as low as 0.1%, valuable resources would be wasted that could instead be used to mitigate other existential risks, or would end up serving someone’s selfish motives.
It would be very easy to dispel any such doubts: all he would have to do is publish a technical paper that survives peer review, thereby substantiating his claims and proving that he is qualified.
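To make the expected-value step behind that 0.1% figure explicit (a minimal sketch; the dollar amount is purely hypothetical): if p = 0.001 is the probability that the project is a scam and C is the amount of resources committed, the expected waste is p × C. For a hypothetical C of $500,000 that comes to $500, which then has to be weighed against what the same resources could be expected to accomplish if directed at other existential risks.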
The same goes for the possibility that it’s a giant scam. Because if that were the case, even at a probability as low as 0.1%, valuable resources would be wasted that could instead be used to mitigate other existential risks, or would end up serving someone’s selfish motives.
You say that as if it’s worse than other ways that money could go to waste.
(There are good game-theoretic reasons to act sort of as if you thought that, but they should be made explicit, and explicit consideration of them probably wasn’t what motivated your statement.)
It would be very easy to dispel any such doubts: all he would have to do is publish a technical paper that survives peer review, thereby substantiating his claims and proving that he is qualified.
You seem to have the idea that this is all about Eliezer Yudkowsky. In actual fact, he wasn’t at the meeting where we came up with the model I described in this article, he’s influential but doesn’t control SIAI, and the existential risk issue is bigger than SIAI and a lot bigger than any one person. Most of the people involved think AI risk is important based on their own reasoning, not based on trusting Eliezer. Personally, I don’t really care whether he’s qualified, because I consider myself qualified enough to judge his arguments (or anonymous arguments) directly. What may be throwing you off is that he’s extremely visible—he’s the public face of SIAI to a lot of people—because he’s a prolific writer, and because he optimizes his writing to get lots of people to read it.
Journals are actually very bad for getting read by non-specialists, and Eliezer has specialized his writing for presenting to smart laymen rather than academics. Nevertheless, other authors have written and published papers about AI risk. The issue at hand right now is getting into prestigious machine learning and computer science journals, rather than philosophy journals, so that the right specialists will read them. That’s much more difficult, because the editors of those journals think of them as having narrow topics that don’t include philosophy or futurism.
Most of the people involved think AI risk is important based on their own reasoning, not based on trusting Eliezer. Personally, I don’t really care whether he’s qualified, because I consider myself qualified enough to judge his arguments (or anonymous arguments) directly.
As someone who is still acquiring a basic education, I have to rely on some amount of intuition and on trust in peer review. Here I give a lot of weight to actual, provable success, to recognition, and to substantial evidence in the form of a real-world demonstration of intelligence and skill.
The Less Wrong sequences and upvotes from unknown and anonymous strangers are not enough to prove the expertise and intelligence that I consider necessary to lend sufficient support to such extraordinary ideas as the possibility of risks from artificial general intelligences undergoing explosive recursive self-improvement. At least they are not enough to justify disregarding other risks that have been deemed important by a worldwide network of professionals with a track record of previous achievements.
I do not intend to be derogatory, but who are you, and why would I trust your judgement or that of other people on Less Wrong? This is a blog on the Internet, created by an organisation whose few papers lack much in the way of mathematical rigor and technical detail.
What may be throwing you off is that he’s extremely visible—he’s the public face of SIAI to a lot of people...
What is bothering me is that I haven’t seen much evidence that he is qualified and intelligent enough for me to simply believe him. People don’t even believe Roger Penrose when he makes extraordinary claims outside his realm of expertise. And Roger Penrose has achieved a lot more than Eliezer Yudkowsky and has demonstrated his superior intellect.
The Less Wrong sequences and upvotes from unknown and anonymous strangers are not enough to prove the expertise and intelligence that I consider necessary to lend sufficient support to such extraordinary ideas as the possibility of risks from artificial general intelligences undergoing explosive recursive self-improvement. At least they are not enough to justify disregarding other risks that have been deemed important by a worldwide network of professionals with a track record of previous achievements.
Machine intelligence will be an enormous deal. As for the experts who think global warming is the important issue of the day—they are not doing that out of genuine concern about the future. That is all about funding and marketing, not the welfare of the planet. It is easier to put together grant applications in that area. Easier to build a popular movement. Easier to win votes. Environmentalist concerns signal greenness, which many link to goodness. Showing that you care even for trees, whales and the whole planet shows you have a big heart.
The fact that global warming is actually irrelevant fluff is a side issue.
I don’t think the argument is very strong that people who are smart enough to create an AGI that can take over the universe in a matter of hours could be dumb enough not to recognize the dangers posed by such an AGI. To strengthen that argument you would either have to show that the people working for SIAI are vastly more intelligent than most AGI researchers, in which case they would be more likely to build the first AGI themselves, or that creating an AGI capable of explosive recursive self-improvement demands much less intelligence and insight than is necessary to recognize the risks from AI.
One of the major messages which I think you should be picking up from the sequences is that it takes more than just intelligence to consistently separate good ideas from bad ones.
I don’t think the argument is very strong that people who are smart enough to create an AGI that can take over the universe in a matter of hours could be dumb enough not to recognize the dangers posed by such an AGI. To strengthen that argument you would either have to show that the people working for SIAI are vastly more intelligent than most AGI researchers, in which case they would be more likely to build the first AGI themselves, or that creating an AGI capable of explosive recursive self-improvement demands much less intelligence and insight than is necessary to recognize the risks from AI.
I have some scepticism about that point as well. We do have some relevant history relating to engineering disasters. There have been lots of engineering projects in history, and we know roughly how many people died in accidents or as a result of negligence, or were screwed over in other ways.
Engineers do sometimes fail. The Titanic. The Tacoma Narrows Bridge Collapse.
Then there are all the people killed by cars and in coal mines. Society wants the benefits, and individuals pay the price. In terms of the number of deaths, this effect seems much more significant to me than accidents.
However, I think that engineers have a reasonable record. In a historical engineering project with lives at stake, one would certainly not expect failure—or claim that failure is “the default case”. The case goes the other way: high technology and machines cause humans to thrive.
Of course, reference class forecasting has limitations, but we should at least attempt to learn from this type of data.
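As a minimal sketch of what such a reference class forecast might look like (the counts below are purely hypothetical placeholders, not real data): if, say, 5 out of 1,000 comparable safety-critical engineering projects had ended in catastrophic failure, the outside-view base rate would be 0.5%, and one would then adjust that figure for the ways an AGI project differs from past engineering, such as the possibility of recursive self-improvement.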
I don’t think the argument is very strong that people who are smart enough to create an AGI that can take over the universe in a matter of hours could be dumb enough not to recognize the dangers posed by such an AGI. To strengthen that argument you would either have to show that the people working for SIAI are vastly more intelligent than most AGI researchers, in which case they would be more likely to build the first AGI themselves, or that creating an AGI capable of explosive recursive self-improvement demands much less intelligence and insight than is necessary to recognize the risks from AI.
I don’t consider this a response to my point. My point was that “concern for safety” is not well correlated with “ability to perform safely”. It’s very likely that many or all AGI researchers are aware of “risks” regarding the outcomes of their research. However, I consider it very unlikely that they will think deeply enough about the topic to come up with, or even start on, solutions such as Friendliness.
It would be very easy to dispel any such doubts: all he would have to do is publish a technical paper that survives peer review, thereby substantiating his claims and proving that he is qualified.
Why do these discussions constantly come down to the same people debating the same points? Because, as you said, there are no published technical papers such as those promised by last year’s donation drive. SIAI is operating internally and not revising their public information. Do you believe their thoughts have failed to change, on any detail, in the time since initial publication?
It’s very likely that many or all AGI researchers...very unlikely that they will...
If I had an extraordinary idea related to a field of expertise that I am not part of, and I didn’t even know whether my idea made sense, I would humbly ask some of the experts to review it before claiming that I know something that they don’t.
Has this happened? All I know of are derogatory comments about mainstream AGI research, academia, and peer review in general.
If it has happened, it seems that the idea was not received positively. Does that mean that the idea is bogus? No. Does that mean that you should be particularly confident in your idea? No. It means that you should reassess it and gather, or wait for, more evidence before telling everyone that the world is going to end, creating a whole movement around it, asking for money, and advising people to neglect any other ideas because everyone else is below your epistemic level.
Why do these discussions constantly come down to the same people debating the same points?
Because nobody other than a school dropout like me cares to take a critical look at those points, points that haven’t been addressed enough to generate the slightest academic interest.
What is bothering me is that I haven’t seen much evidence that he is qualified and intelligent enough for me to simply believe him. People don’t even believe Roger Penrose when he makes extraordinary claims outside his realm of expertise. And Roger Penrose has achieved a lot more than Eliezer Yudkowsky and has demonstrated his superior intellect.
Rightly so.
Penrose has made a much bigger fool of himself in public—if that is what you mean.
IMO, a Yudkowsky is worth 10 Penroses—at least.
The fact that global warming is actually irrelevant fluff is a side issue.
I give global warming as my number one example in my Bad Causes video.
Because nobody other than a school dropout like me cares to take a critical look at those points, points that haven’t been addressed enough to generate the slightest academic interest.
Did you see the coverage in recent versions of “AI: A Modern Approach”? Peter Norvig is an actual expert in artificial intelligence. The End of The World As We Know It even gets a mention!
Cool, I admit I have been wrong there and hereby retract that point.