However, since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks, especially from advanced future technologies such as synthetic biology and molecular nanotechnology.
Here’s a solution to all this. I call this revolutionary new philosophy....
Acting Like Adults
Here’s how it works. We don’t create a new technology which poses an existential risk until we’ve credibly figured out how to make the last one safe.
So, in practice, it looks like this. End all funding for AI, synthetic biology, molecular nanotechnology, etc. until we figure out how to liberate ourselves from the existential-risk technology of 1945: nuclear weapons.
The super-sophisticated, high-end, intellectual-elite, philosophically elegant methodology involved here is called...
Common Sense
If our teenage son wants us to buy him a car, we might respond by saying, “Show me that you won’t crash this moped first.” Prove that you’re ready.
The fact that all of this has to be explained, and once explained it will be universally ignored, demonstrates that...

We ain’t ready.
Also, I don’t think that any of these conclusions or recommendations are simple or common sense.
Though some of them may seem simple in hindsight, just as a math problem seems simple after one has seen the solution.
The reason I wrote this post was that I was very confused about the subject. If I thought there was a simple answer, I wouldn’t have written the post, or I would have written a much shorter one.
Here is a quote from my research proposal:
“Given that the development of AGI could both increase or decrease existential risk, it is not clear when it should be developed.”
And a quote from the person reviewing my proposal:
“I took a look at your final project proposal. I’m somewhat concerned that your project as currently proposed is intractable, especially for a short research project.”
Not only was the project not simple; the reviewer thought it was almost impossible to make progress on, given the number of factors at play.
What you’re suggesting sounds like differential technological development or the precautionary principle:

“Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.”
The problem with this policy is the unilateralist’s curse, which says that a single optimistic actor could develop a technology. Technologies such as AI have substantial benefits and risks; the balance is uncertain, and the net benefit is perceived differently by different actors. For a technology not to be developed, all actors would have to agree not to develop it, which would require significant coordination. In the post I describe several factors, such as war, that might affect the level of global coordination, and I argue that it might be wise to slow down AI development by a few years or decades if coordination can’t be achieved, since I think AI risk is higher than other risks.
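To make the coordination problem concrete, here is a minimal Monte Carlo sketch of the unilateralist’s curse in Python, with all numbers hypothetical and chosen purely for illustration. Each actor forms an honest but noisy estimate of the technology’s net value; development happens if any one estimate comes out positive, while restraint requires unanimity.

```python
import random

# Toy model of the unilateralist's curse (after Bostrom, Douglas & Sandberg).
# All numbers are hypothetical and chosen only for illustration.
#
# N actors each form an independent, noisy estimate of the net value of
# developing a risky technology. The true value is slightly negative.
# The technology gets developed if ANY one actor judges it positive:
# development needs one optimist, restraint needs unanimity.

random.seed(0)

TRUE_VALUE = -1.0   # assumed true net value (development is net-harmful)
NOISE_SD = 2.0      # spread of each actor's honest estimation error
TRIALS = 100_000    # Monte Carlo runs per group size

for n_actors in (1, 2, 5, 10, 50):
    developed = 0
    for _ in range(TRIALS):
        # The most optimistic estimate decides the outcome.
        if any(random.gauss(TRUE_VALUE, NOISE_SD) > 0 for _ in range(n_actors)):
            developed += 1
    print(f"{n_actors:3d} actors -> technology developed in "
          f"{developed / TRIALS:.1%} of runs")
```

In this toy setup a single actor develops the harmful technology in roughly a third of runs, while fifty independent actors develop it essentially every time, even though no actor is biased. That is why restraint requires coordination among all actors, not just most of them.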
Yes, agreed, what you refer to is indeed a huge obstacle.
From years of writing on this I’ve discovered another obstacle. Whenever this subject comes up, almost everyone who joins the conversation focuses almost exclusively on obstacles and theories about why such change isn’t possible, and...
The conversation almost never gets to the point of folks rolling up their sleeves to look for solutions.
I don’t have a big pile of solutions to put on the table either. All I really have is the insight that overcoming these challenges isn’t optional.
In my judgement there is little chance of such fundamental change to our relationship with unlimited technological progress within the current cultural status quo. However, given the vast scale of the forces being released into the world, there would seem to be an unprecedented possibility of revolutionary change to that status quo.
As an example, imagine even a limited nuclear exchange between Pakistan and India. More people could die in a few minutes than died in all of WWII. The media would feed on the carnage for a long time, relentlessly pumping unspeakable horror imagery into every home in the world with a TV.
Consider, for instance, how all the stories about floods, fires, heat waves, etc. are editing our relationship with climate change. It’s no longer such an abstract issue to us; it’s increasingly becoming real, hitting us where we really live, in the emotional realm.