though the safety of AGI is indeed an important issue, currently we don’t know enough about the subject to make any sure conclusion. Higher safety can only be achieved by more research on all related topics, rather than by pursuing approaches that have no solid scientific foundation.
As for your suggestion that “Higher safety can only be achieved by more research on all related topics,” I wonder if you think that is true of all subjects, or only in AGI. For example, should mankind vigorously pursue research on how to make Ron Fouchier’s alteration of the H5N1 bird flu virus even more dangerous and deadly to humans, because “higher safety can only be achieved by more research on all related topics”? (I’m not trying to broadly compare AGI capabilities research to supervirus research; I’m just trying to understand the nature of your rejection of my recommendation for mankind to decelerate AGI capabilities research and accelerate AGI safety research.)
One might then ask “Well, what safety research can we do if we don’t know what AGI architecture will succeed first?” My answer is that much of the research in this outline of open problems doesn’t require us to know which AGI architecture will succeed first, for example the problem of representing human values coherently.
For example, should mankind vigorously pursue research on how to make Ron Fouchier’s alteration of the H5N1 bird flu virus even more dangerous and deadly to humans, because “higher safety can only be achieved by more research on all related topics”?
Yeah, I remember reading this argument and thinking that it does not hold water. The flu virus is a well-researched subject. It may yet hold some surprises, sure, but we think that we know quite a bit about it. We know enough to tell what is dangerous and what is not. AGI research is nowhere near this stage. My comparison would be someone screaming at Dmitri Ivanovsky in 1892 “do not research viruses until you know that this research is safe!”.
My answer is that much of the research in this outline of open problems doesn’t require us to know which AGI architecture will succeed first, for example the problem of representing human values coherently.
Do other AI researchers agree with your list of open problems worth researching? If you asked Dr. Wang about it, what was his reaction?
My comparison would be someone screaming at Dmitri Ivanovsky in 1892 “do not research viruses until you know that this research is safe!”.
I want to second that. Also, when reading through this (and sensing the, probably imagined, tension of both parties trying to stay polite), the viral point was the first one that triggered the “this is clearly an attack!” emotion in my head. I was feeling sad about that, and had hoped that Luke would find another ingenious example.
For example, should mankind vigorously pursue research on how to make Ron Fouchier’s alteration of the H5N1 bird flu virus even more dangerous and deadly to humans, because “higher safety can only be achieved by more research on all related topics”?
That’s not really anyone’s proposal. Humans will probably just continue full-steam-ahead on machine intelligence research. There will be Luddite-like factions hissing and throwing things, but civilisation is used to that. What we may see is governments with the technology selfishly attempting to stem its spread, in a manner somewhat resembling the NSA crypto-wars.
For example, should mankind vigorously pursue research on how to make Ron Fouchier’s alteration of the H5N1 bird flu virus even more dangerous and deadly to humans...
Trivially speaking, I would say “yes”.
More specifically, though, I would of course be very much against developing increasingly more dangerous viral biotechnologies. However, I would also be very much in favor of advancing our understanding of biology in general and viruses in particular. Doing so will enable us to cure many diseases and bioengineer our bodies (or anything else we want to engineer) to highly precise specifications; unfortunately, such scientific understanding will also allow us to create new viruses, if we choose to do so. Similarly, the discovery of fire allowed us to cook our food as well as set fire to our neighbours. Overall, I think we still came out ahead.
I think there is something wrong with your fire analogy. The thing is that you cannot accidentally or purposefully burn all the people in the world or the vast majority of them by setting fire to them, but with a virus like the one Luke is talking about you can kill most people.
Yes, both a knife and an atomic bomb can kill 100,000 people. It is just way easier to do it with the atomic bomb. That is why everybody can have a knife but only a handful of people can “have” an atomic bomb. Imagine what the risks would be if we gave virtually everybody who was interested all the instructions on how to build a weapon 100 times more dangerous than an atomic bomb (like a highly contagious deadly virus).
The thing is that you cannot accidentally or purposefully burn all the people in the world or the vast majority of them by setting fire to them...
Actually, you could, if your world consists of just you and your tribe, and you start a forest fire by accident (or on purpose).
Yes, both a knife and an atomic bomb can kill 100,000 people. It is just way easier to do it with the atomic bomb. That is why everybody can have a knife but only a handful of people can “have” an atomic bomb.
Once again, I think you are conflating science with technology. I am 100% on board with not giving out atomic bombs for free to anyone who asks for one. However, this does not mean that we should prohibit the study of atomic theory; and, in fact, atomic theory is taught in high school nowadays.
When Luke says, “we should decelerate AI research”, he’s not saying, “let’s make sure people don’t start building AIs in their garages using well-known technologies”. Rather, he’s saying, “we currently have no idea how to build an AI, or whether it’s even possible, or what principles might be involved, but let’s make sure no one figures this out for a long time”. This is similar to saying, “these atomic theory and quantum physics things seem like they might lead to all kinds of fascinating discoveries, but let’s put a lid on them until we can figure out how to make the world safe from nuclear annihilation”. This is a noble sentiment, but, IMO, a misguided one. I am typing these words on a device that’s powered by quantum physics, after all.
His main agenda and desired conclusion regarding social policy are represented in the summary there, but the main point made in his discussion is “Adaptive! Adaptive! Adaptive!”, where by ‘adaptive’ he refers to his conception of an AI that changes its terminal goals based on education.
Pei calls these “original goals” and “derived goals”. The “original goals” don’t change, but they may not stay “dominant” for long in Pei’s proposed system.
I think his main point is in the summary:
though the safety of AGI is indeed an important issue, currently we don’t know enough about the subject to make any sure conclusion. Higher safety can only be achieved by more research on all related topics, rather than by pursuing approaches that have no solid scientific foundation.
Not sure how you can effectively argue with this.
Far from considering the argument irrefutable, I found it to be superficial and essentially fallacious reasoning. The core of the argument is the claim that ‘more research on all related topics is good’, while failing to include the necessary ceteris paribus clause and ignoring the details of the specific instance which suggest that all else is not, in fact, equal.
Specifically, we are considering a situation where there is one area of research (capability) whose completion will approximately guarantee that the technology created will be implemented shortly after (especially given Wang’s assumption that such research should be done through empirical experimentation). The second area of research (how to ensure desirable behavior of an AI) is one that does not need to be complete in order for the first to be implemented. If both technologies need to have been developed by the time the first is implemented in order for that implementation to be safe, then the second must be completed at the same time as, or earlier than, the technological capability required to implement the first.
rather than by pursuing approaches that have no solid scientific foundation.
(And this part just translates to “I’m the cool one, not you”. The usual considerations on how much weight to place on various kinds of status and reputation of an individual or group apply.)
Not sure how you can effectively argue with this.
How ’bout the way I argued with it?
I was feeling sad about that, and had hoped that Luke would find another ingenious example.
Well, bioengineered viruses are on the list of existential threats...
And there aren’t naturally occurring AIs scampering around killing millions of people… It’s a poor analogy.
“Natural AI” is an oxymoron. There are lots of NIs (natural intelligences) scampering around killing millions of people.
And we’re only a little over a hundred years into virus research, much less on intelligence. Give it another hundred.
Wouldn’t a “naturally occurring AI” be an “intelligence” like humans?
What we may see is governments with the technology selfishly attempting to stem its spread, in a manner somewhat resembling the NSA crypto-wars.
This seems topical:
http://www.nature.com/news/controversial-research-good-science-bad-science-1.10511