So long as we’re talking about AI, we’re not talking about the knowledge explosion that created AI, or about all the other technology-based existential risks coming our way.
Endlessly talking about AI is like walking around our house mopping up puddles, one after another, every time it rains. The more effective and rational approach is to get up on the roof and fix the hole where the water is coming in, that is, to deal with the problem at its source.
This year everybody is talking about AI. Next year it will be some other new threat. Soon after that, another. And then more threats, bigger and bigger, coming faster and faster.
It’s the simplest thing. If we were working at the end of a shipping line in an Amazon warehouse, and the line kept sending us new products to package, faster and faster and faster, without limit...
What’s probably going to happen?
If we don’t turn our attention to the firehose of knowledge which is generating all the threats, there’s really no point in talking about AI.
I think AI misalignment is uniquely situated among these threats because it greatly multiplies the knowledge explosion effect you’re talking about. It’s also one of the few catastrophic risks that plausibly threatens total human extinction. And if AI goes well, it could be used to address many of the other threats you mention, as well as unforeseen ones to come.
Would it be sensible to assume that all technologies with the potential for crashing civilization have already been invented?
If the development of knowledge feeds back on itself...
And if this means the knowledge explosion will continue to accelerate...
And if there is no known end to such a process...
Then, while no one can predict exactly what new threats will emerge when, it seems safe to propose that they will.
I’m 70, so I don’t worry too much about how as-yet-unknown future threats might affect me personally; I don’t have a lot of future left. Someone 50 years younger probably should worry, considering how many new technologies have emerged over the last 50 years, and how new threats are likely to emerge at an even faster rate than before.
A knowledge explosion itself—to the extent that that is happening—seems like it could be a great thing. So for what it’s worth, my guess would be that it does make sense to focus on mitigating the specific threats it creates (insofar as it does) so that we get the benefits too.
A knowledge explosion itself—to the extent that that is happening—seems like it could be a great thing.
It’s certainly true that many benefits will continue to flow from the knowledge explosion, no doubt about it.
The 20th century is a good real-world example of the overall picture.
TONS of benefits from the knowledge explosion, and...
Now a single human being can destroy civilization in just minutes.
This pattern illustrates the challenge presented by the knowledge explosion. As the scale of the emerging powers grows, the room for error shrinks, and we are increasingly in a situation where one bad day can erase the very many benefits the knowledge explosion has delivered.
In 1945 we saw the emergence of what is arguably the first existential threat technology. To this day, we still have no idea how to overcome that threat.
And now in the 21st century we are adding more existential threats to the pile. And we don’t really know how to manage those threats either.
And the 21st century is just getting underway. With each new threat we add to the pile, the odds of our being able to defeat each and every existential threat (required for survival) go down.
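To make the arithmetic behind that last point concrete, here is a minimal sketch of my own (the independence assumption and the numbers are purely illustrative, not anything claimed above): if survival requires getting past every threat, and each threat is survived with some fixed probability, the overall odds are the product of the per-threat odds, and that product shrinks as threats pile up.
```python
# Toy model only: assumes each existential threat is survived
# independently with the same probability. Real threats are neither
# independent nor equally likely, so treat this as an illustration.

def survival_odds(per_threat_survival: float, num_threats: int) -> float:
    """Probability of surviving every one of num_threats threats,
    given the chance of surviving each individual threat."""
    return per_threat_survival ** num_threats

# Even with a 95% chance of handling any single threat,
# the odds of handling all of them fall quickly as threats accumulate.
for n in (1, 5, 10, 20):
    print(f"{n:>2} threats -> {survival_odds(0.95, n):.2f} chance of surviving them all")
```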
Footnote: I’m using “existential threat” to refer to a possible collapse of civilization, not human extinction, which seems quite unlikely short of an astronomical event.