Yes, I said that I believe that even sub-human level AI poses an existential risk. At the same time I am highly skeptical of FOOM. So why don’t I agree with Eliezer outright anyway? Because the risks from AI that I consider possible are not something you can solve by inventing provable “friendliness”. How are you going to make a sophisticated monitoring system friendly? Why would people want to make it friendly? How are you going to make a virus with sub-human level general intelligence friendly? Why would one do that? Risks from AI are a broad category that needs meta-solutions involving preemptive political and security measures. You need to make sure that the first intelligent surveillance systems are employed transparently and democratically, so that everyone can monitor the world for the various risks ahead. We need a global immune system that makes sure that no one, anywhere, gets ahead of everyone else.
Have you taken your own survey and published the results somewhere? Or is it only for AI researchers? It seems like there are a great many hidden assumptions on all sides, which make these discussions go off the rails very quickly. Some kind of basic survey with standard probability estimates might easily show where views differ.
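Purely to illustrate what such a survey might look like, here is a minimal sketch; the questions and numbers are hypothetical placeholders, not anyone’s actual estimates.

```python
# A minimal sketch of the kind of survey suggested above: a fixed set of
# probability questions answered by each participant, so that disagreements
# become visible as numbers rather than hidden assumptions.
# All questions and estimates below are hypothetical placeholders.

questions = [
    "P(human-level AGI is built before 2100)",
    "P(an AGI rapidly self-improves far beyond human level | AGI is built)",
    "P(human extinction | such a rapidly self-improving AGI)",
]

respondent_a = {questions[0]: 0.70, questions[1]: 0.50, questions[2]: 0.30}
respondent_b = {questions[0]: 0.60, questions[1]: 0.05, questions[2]: 0.90}

for q in questions:
    gap = abs(respondent_a[q] - respondent_b[q])
    print(f"{q}\n  A: {respondent_a[q]:.2f}  B: {respondent_b[q]:.2f}  disagreement: {gap:.2f}")
```

Even a toy version like this makes it visible whether two people disagree about the headline risk or about which step in the argument carries the uncertainty.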
So why don’t I agree with Eliezer outright anyway? Because the risks from AI that I consider possible are not something you can solve by inventing provable “friendliness”.
I agree that friendliness is a long shot. If you know of a better solution, please let me know.
How are you going to make a sophisticated monitoring system friendly?
By developing a theory of friendliness and implementing it in software.
Why would people want to make it friendly?
Because unfriendly things are bad.
How are you going to make a virus with sub-human level general intelligence friendly?
By developing a theory of friendliness and implementing it in software.
Why would one do that?
Because unfriendly things are bad.
Risks from AI are a broad category that needs meta-solutions involving preemptive political and security measures. You need to make sure that the first intelligent surveillance systems are employed transparently and democratically, so that everyone can monitor the world for the various risks ahead. We need a global immune system that makes sure that no one, anywhere, gets ahead of everyone else.
Sounds like a job for CEV and a friendly AI to run it on.
Have you taken your own survey and published the results somewhere?
Yes, I have done so. But I don’t trust my ability to make correct probability estimates, don’t trust the overall arguments and methods and don’t know how to integrate that uncertainty into my estimates. It is all too vague.
There sure are a lot of convincing arguments in favor of risks from AI. But do arguments suffice? Nobody is an expert when it comes to intelligence. Even worse, I don’t think anybody knows much about artificial general intelligence.
My problem is that I fear that some convincing blog posts are simply not enough. Just imagine that all there was to climate change was someone with a blog who had never studied the climate but instead wrote some essays about how it might be physically possible for humans to cause global warming. Not only that, but the same person then goes on to make further inferences based on the implications of those speculations. Am I going to tell everyone to stop emitting CO2 because of that? Hardly! Or imagine that all there was to the possibility of asteroid strikes was someone who argued that there might be big chunks of rock out there that might fall down on our heads and kill us all, based inductively on the fact that the Earth and the Moon are also big rocks. Would I be willing to launch a billion-dollar asteroid deflection program based solely on such speculations? I don’t think so. Luckily, in both cases, we got a lot more than some convincing arguments in support of those risks.
Another example: If there were no studies of the safety of high energy physics experiments, then I might assign a 20% chance of a powerful particle accelerator destroying the universe, based on some convincing arguments put forth on a blog by someone who had never studied high energy physics. We know that such an estimate would be wrong by many orders of magnitude. Yet the reason for being wrong would largely be a result of my inability to make correct probability estimates, of vagueness, or of a failure of the methods I employed. The reason for being wrong by many orders of magnitude would have nothing to do with the arguments in favor of the risks, as they might very well be sound given my epistemic state and the prevalent uncertainty.
In summary: I believe that mere arguments in favor of one risk do not suffice to justify neglecting other risks that are supported by other kinds of evidence. I believe that the logical implications of sound arguments should not reach out indefinitely and thereby outweigh other risks whose implications are fortified by empirical evidence. Sound arguments, predictions, speculations and their logical implications are enough to demand further attention and research, but not much more.
I agree that friendliness is a long shot. If you know of a better solution, please let me know.
If there were a risk that might kill us with a probability of .7 and another risk with .1, while our chance to solve the first one was .0001 and the second one .1, which one should we focus on?
Why do I feel like there’s massively more evidence than “a few blog posts”? I must be counting information I’ve gained from other studies, like those on human history, and lumping it all under “what intelligent agents can accomplish”. I’m likely counting fictional evidence as well; I feel sort of like an early 20th-century sci-fi buff must have felt about rockets to the moon. Another large part of being convinced falls under a lack of counterarguments—rather, there are plenty out there, just none whose authors seem to have put much thought into the matter.
At any rate, I’m not asking for the entire world to throw down their asteroid detection schemes or their climate mitigation strategies; that’s not politically feasible, regardless of risk probabilities. I’m just asking them to increase the size of the pie by a few million, maybe as little as one billion total, to add research about AI, and to spend more money on the whole gamut of existential risk reduction as a cohesive topic of great importance.
What would you tell the first climate scientist to examine global warming, or the first to predict asteroid strikes, other than “do more research, and get others to do research as well”?
What would you tell the first climate scientist to examine global warming, or the first to predict asteroid strikes, other than “do more research, and get others to do research as well”?
I have no problem with a billion dollars spent on friendly AI research. But that doesn’t mean that I agree that the SIAI needs a billion dollars right now, or that I agree that the current evidence is enough to tell people to stop researching cancer therapies or creating educational videos about basic algebra. I don’t think we know enough about risks from AI to justify such advice. I also don’t think that we should all become expected utility maximizers, because we don’t know enough about economics, game theory, and decision theory, and especially about human nature and the nature of discovery.
Why do I feel like there’s massively more evidence than “a few blog posts”?
Maybe because there is massively more evidence and I don’t know about it, don’t understand it, or haven’t taken it into account, or because I am simply biased. I am not saying that you are wrong and I am right.
...those on human history, and lumping it all under “what intelligent agents can accomplish”.
Shortly after human flight was invented, we reached the moon. Yet human flight is not as sophisticated as bird or insect flight; it is much less efficient, and we have never reached other stars. Therefore, what I get out of this is that shortly after we invent artificial general intelligence we might reach human-level intelligence, and in some areas superhuman intelligence. But that doesn’t mean that it will be particularly fast or efficient, or that it will be able to take over the world shortly afterwards. Artificial general intelligence is already an inference from what we currently believe to be true; going a step further and drawing additional inferences from those speculations, e.g. explosive recursive self-improvement, is in my opinion a very shaky business. We have no idea about the nature of discovery, or whether intelligence (whatever that is) is even all that instrumental or quickly hits diminishing returns.
In principle we could build antimatter weapons capable of destroying worlds, but in practice it is much harder to accomplish. The same seems to be the case for intelligence. It is not intelligence in and of itself that allows humans to accomplish great feats. Someone like Einstein was lucky to be born into the right circumstances; the time was ripe for great discoveries.
Another large part of being convinced falls under a lack of counterarguments—rather, there are plenty out there, just none whose authors seem to have put much thought into the matter.
Prediction: The world is going to end.
Got any counterarguments I couldn’t easily dismiss?
Most of the superficially disjunctive lines of reasoning about risks from AI derive their appeal from their inherent vagueness. It’s not as though no assumptions need to be true to get “an artificial general intelligence that can undergo explosive recursive self-improvement and turn all matter in the universe into paperclips”. That is actually a pretty complex prediction.
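To make the “complex prediction” point concrete, here is a toy sketch: if the scenario is a conjunction of several assumptions that must all hold (and assuming they are roughly independent), its probability is bounded by the product of the individual probabilities. The listed assumptions and numbers are illustrative placeholders, not actual estimates.

```python
# Illustrative only: if a scenario is a conjunction of N assumptions that must
# all be true, even fairly generous per-assumption probabilities shrink quickly
# when multiplied. The numbers are placeholders, not estimates of anything.

assumptions = {
    "AGI is built at all": 0.9,
    "it can recursively self-improve": 0.8,
    "self-improvement is explosively fast": 0.7,
    "its goals are stable under self-modification": 0.8,
    "it acquires decisive real-world capability": 0.7,
}

p_conjunction = 1.0
for p in assumptions.values():
    p_conjunction *= p

print(f"Probability of the full conjunction: {p_conjunction:.3f}")  # ~0.28 here
```

Whether the scenario really is conjunctive rather than disjunctive is of course itself part of the disagreement.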
There are various different scenarios regarding the possibility and consequences of artificial general intelligence. I just don’t see why the one put forth by the SIAI is more likely to be true than others. Why, for example, would intelligence be a single principle that, once discovered, allows us to grow superhuman intelligence overnight? Why are we going to invent artificial general intelligence quickly, rather than having to painstakingly optimize our expert systems over many centuries? Why would intelligence be effectively applicable to intelligence itself, rather than demanding the discovery of unknown unknowns through sheer luck or the pursuit of treatments for rare diseases in cute kittens? Why would general intelligence be at all efficient compared to expert systems? Maybe general intelligence demands a tradeoff between plasticity and goal-stability. I can think of dozens of possibilities within minutes, none of them leading to existential risk scenarios.
I have no problem with a billion dollars spent on friendly AI research. But that doesn’t mean that I agree that the SIAI needs a billion dollars right now, or that I agree that the current evidence is enough to tell people to stop researching cancer therapies or creating educational videos about basic algebra. I don’t think we know enough about risks from AI to justify such advice. I also don’t think that we should all become expected utility maximizers, because we don’t know enough about economics, game theory, and decision theory, and especially about human nature and the nature of discovery.
This is the part I’d like to focus on. Restating that position from my understanding, you are unconvinced that SIAI is important to fund, and you will not pay them to convince you, and it would be perfectly fine for other people to fund them, and you will be following the area to see if they provide convincing things in the future. Is that a fair characterization?
...you are unconvinced that SIAI is important to fund, and you will not pay them to convince you, and it would be perfectly fine for other people to fund them, and you will be following the area to see if they provide convincing things in the future. Is that a fair characterization?
Almost. I think it is important that the SIAI continues to receive at least as much as it did last year. If the SIAI’s sustainability were at stake I would contribute money; I just don’t know how much. I would probably devote some time to thinking about the whole issue, more thoroughly than I have until now. Which also hints at a general problem: I think many people lack the initial incentive necessary to take the whole topic seriously in the first place, seriously enough to invest the time and resources required to analyze the available data sufficiently.
I recently hinted at some problems that need to be addressed in order to convince me that the SIAI needs more money. I am currently waiting for the “exciting developments” that were mentioned in the subsequent comment thread to take place.
Another problem is the secretive approach the SIAI seems to subscribe to. I am not convinced that a secretive approach is the right thing to do. I also don’t have enough confidence to just take their word for it if they say that they are making progress. They have to figure out how to convince people that actual progress is being made, or at least attempted, without revealing too much detail. They also have to explain whether they expect Eliezer Yudkowsky to be able to solve friendly AI on his own, or otherwise how they are going to guarantee the “friendliness” of future employees.
I think this thread started by timtyler is more representative of the opinion of most people (if they knew about the SIAI) than that of those members of lesswrong who are already sold. People here seem overly confident in what they are told without asking for further evidence. Not that I care about the AI box experiment; even prison guards can be persuaded by human prisoners to let them out of jail. But as timtyler said, the secretive approach employed by the SIAI, “don’t ask, don’t tell”, isn’t going to convince many people any time soon. I doubt actual researchers would just trust the SIAI if it claimed to have proved something without providing any evidence to support the claim.
Shortly after human flight was invented, we reached the moon. Yet human flight is not as sophisticated as bird or insect flight; it is much less efficient, and we have never reached other stars.
How do you mean? Human planes are faster and can transport freight better. They can even self-pilot with modern AI software. The biggest weaknesses would seem to be a lack of self-reproduction and self-repair, but those aren’t really part of flight.
How do you mean? Human planes are faster and can transport freight better.
Energy efficiency and maneuverability. I suppose a dragonfly would have been a better example. We never really went straight from no artificial flight to flight that is generally superior to bird or insect flight. All we got were expert flight systems, not general flight systems. Even if we were handed the design for a perfect artificial dragonfly, minus the design for its flight, we wouldn’t be able to build a dragonfly that could take over the world of dragonflies, all else being equal, by means of superior flight characteristics.
Where are your figures for energy efficiency? (Recalling that the comparison should be for the same speed, or energy per kilogram transported per kilometer given the optimal speed tradeoff.)
A Harpy Eagle can lift more than three-quarters of its body weight, while the Boeing 747 Large Cargo Freighter has a maximum take-off weight of almost double its operating empty weight. I suspect that insects can do better. But my whole point is that we never reached artificial flight that is strongly above the level of natural flight. An eagle can, after all, catch its cargo under various circumstances, such as on the slope of a mountain or from under the surface of the water, thanks to its superior maneuverability.
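For what it’s worth, one standard way to make the efficiency comparison asked about above precise is the cost of transport, i.e. energy per kilogram per unit distance at a given speed. The sketch below only shows how that metric is computed; the two example inputs are invented placeholders and imply nothing about real birds or aircraft.

```python
# Cost of transport: a speed-aware way to compare "energy per kilogram
# transported per kilometre" across very different fliers.
# The example inputs below are invented placeholders, not measured figures.

def cost_of_transport(power_watts: float, mass_kg: float, speed_m_per_s: float) -> float:
    """Energy spent per kilogram per metre of travel, in J/(kg*m)."""
    return power_watts / (mass_kg * speed_m_per_s)

# Hypothetical flier A: 5 W to move 0.3 kg at 10 m/s.
# Hypothetical flier B: 60 MW to move 300,000 kg at 250 m/s.
print(cost_of_transport(5.0, 0.3, 10.0))          # ~1.67 J/(kg*m)
print(cost_of_transport(60e6, 300_000.0, 250.0))  # 0.8 J/(kg*m)
```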
If there were a risk that might kill us with a probability of .7 and another risk with .1, while our chance to solve the first one was .0001 and the second one .1, which one should we focus on?
To solve this problem we need to know more. As it stands, the marginal effect of investment in these problems on the probability of their being solved is unknown—as is the temporal relationship between the problems. Do they arise at the same time? Is there going to be time to concentrate on the second problem after solving the first one? And so on.
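As a purely illustrative sketch of that point: which risk deserves the marginal dollar depends not just on P(risk) and P(solvable) but on how much additional funding actually moves the probability of a solution. Every number and the response curve below are made up.

```python
# Illustrative sketch of the tradeoff discussed above: the expected risk
# reduction from extra funding depends on how much that funding shifts the
# probability of a solution. All numbers and the response curve are made up.

def p_solved(base_p: float, extra_millions: float, saturation: float) -> float:
    """Toy diminishing-returns model of solve probability vs. extra funding."""
    return base_p + (saturation - base_p) * (1 - (1 + extra_millions) ** -1)

def expected_risk_reduction(p_risk: float, base_p: float, saturation: float,
                            extra_millions: float) -> float:
    return p_risk * (p_solved(base_p, extra_millions, saturation) - base_p)

# Risk 1: probability 0.7, almost unsolvable (0.0001); assume funding can at best double that.
# Risk 2: probability 0.1, solvable with probability 0.1; assume funding can raise that to 0.5.
for extra in (0.0, 10.0, 100.0):
    r1 = expected_risk_reduction(0.7, 0.0001, 0.0002, extra)
    r2 = expected_risk_reduction(0.1, 0.1, 0.5, extra)
    print(f"extra ${extra:>5.0f}M  risk 1 reduction: {r1:.6f}   risk 2 reduction: {r2:.6f}")
```

Under these made-up numbers the second risk dominates the marginal dollar, but a different response curve could flip the conclusion, which is exactly why the marginal effect matters.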
I don’t trust my ability to make correct probability estimates, don’t trust the overall arguments and methods and don’t know how to integrate that uncertainty into my estimates. It is all too vague.
Essentially, uncertainty → wider confidence intervals and less certainty (i.e. fewer extreme probability estimates).
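One toy way to cash that out: treat the probability your model spits out as only part of the story, mix it with an ignorance prior weighted by how much you trust the model, and extreme estimates get pulled toward the middle. The trust weights and the 0.5 ignorance prior below are arbitrary placeholders.

```python
# A toy illustration of "uncertainty -> fewer extreme probability estimates":
# mix the estimate your model gives with an ignorance prior, weighted by how
# much you trust the model. Trust weights and the prior are arbitrary placeholders.

def tempered_estimate(model_probability: float, trust_in_model: float,
                      ignorance_prior: float = 0.5) -> float:
    return trust_in_model * model_probability + (1 - trust_in_model) * ignorance_prior

print(tempered_estimate(0.99, trust_in_model=0.9))  # 0.941 - still fairly extreme
print(tempered_estimate(0.99, trust_in_model=0.2))  # 0.598 - barely better than a coin flip
```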
Risks from AI are a broad category that needs meta-solutions involving preemptive political and security measures. You need to make sure that the first intelligent surveillance systems are employed transparently and democratically, so that everyone can monitor the world for the various risks ahead. We need a global immune system that makes sure that no one, anywhere, gets ahead of everyone else.
Please convince me that your Roboto Protocol could work. I don’t observe politics ever producing results like the ones you seem to require.