Would it be sensible to assume that all technologies with the potential for crashing civilization have already been invented?
If the development of knowledge feeds back on itself...
And if this means the knowledge explosion will continue to accelerate (a toy model of this feedback is sketched below)...
And if there is no known end to such a process...
Then, while no one can predict exactly what new threats will emerge when, it seems safe to propose that they will.
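To make the acceleration premise concrete, here is a minimal sketch, assuming (purely for illustration) that new knowledge is produced at a rate proportional to the knowledge already accumulated:

\[
\frac{dK}{dt} = kK \quad\Longrightarrow\quad K(t) = K_0 e^{kt},
\]

so the production rate \(dK/dt = kK_0e^{kt}\) itself grows without bound: each generation faces more new capabilities, arriving faster, than the one before. Nothing hinges on this exact functional form; any self-reinforcing growth law gives the same qualitative conclusion.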
I’m 70, and so don’t worry too much about how as-yet-unknown future threats might affect me personally, as I don’t have a lot of future left. Someone 50 years younger probably should worry, given how many new technologies have emerged over the last 50 years, and given that new threats are likely to emerge at a faster rate than before.
A knowledge explosion itself—to the extent that that is happening—seems like it could be a great thing. So for what it’s worth, my guess would be that it does make sense to focus on mitigating the specific threats it creates (insofar as it does) so that we get the benefits too.
“A knowledge explosion itself—to the extent that that is happening—seems like it could be a great thing.”
It’s certainly true that many benefits will continue to flow from the knowledge explosion, no doubt about it.
The 20th century is a good real-world example of the overall picture.
TONS of benefits from the knowledge explosion, and...
Now a single human being can destroy civilization in just minutes.
This pattern illustrates the challenge presented by the knowledge explosion: as the scale of the emerging powers grows, the room for error shrinks, and we are ever more in a situation where one bad day can erase the very many benefits the knowledge explosion has delivered.
In 1945, with the first nuclear weapons, we saw the emergence of what is arguably the first existential-threat technology. To this day, we still have no idea how to overcome that threat.
And now in the 21st century we are adding more existential threats to the pile. And we don’t really know how to manage those threats either.
And the 21st century is just getting underway. With each new threat added to the pile, the odds of our defeating each and every existential threat (which survival requires) go down.
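A back-of-the-envelope way to see the compounding, assuming (as an illustration only) that the threats are independent and that civilization must survive each one: if threat \(i\) is survived with probability \(p_i\), then overall survival requires surviving all \(n\) of them at once, and

\[
P(\text{survive all } n) \;=\; \prod_{i=1}^{n} p_i \;\le\; (1-\epsilon)^n \;\longrightarrow\; 0
\quad \text{as } n \to \infty, \text{ if every } p_i \le 1-\epsilon.
\]

Even with optimistic per-threat odds, the product erodes quickly: at \(p_i = 0.99\) per threat, ten threats give \(0.99^{10} \approx 0.90\), and fifty give \(0.99^{50} \approx 0.61\). Correlated threats, or ongoing rather than one-shot exposure, change the numbers but not the direction.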
Footnote: I’m using “existential threat” to refer to a possible collapse of civilization, not human extinction, which seems quite unlikely short of an astronomical event.