Exchange from my Facebook between Robin Hanson and myself:
Robin Hanson “Will” is WAY too strong a claim.
Jeffrey Ladish The key assumption is that tech development will continue in key areas, like computing and biotech. I grant that if this assumption is false, the conclusion does not follow.
Jeffrey Ladish On short-medium (<100-500 years) timescales, I could see scenarios where tech development does not reach “black marble” levels of danger. I’d be quite surprised if on long time scales (1k-100k years) we did not reach that level of development. This is why I feel okay making the strong claim, though I am also working on a post about why this might be wrong.
Robin Hanson You are assuming something much stronger than merely that tech improves.
Jeffrey Ladish However, I think we may have different cruxes here. I think you may believe that there can be fast tech development (i.e. Age of Em), without centralized coordination of some sort (I think of markets as kinds of decentralized coordination), without extinction.
Jeffrey Ladish I’m assuming that if tech improves, humans will discover some autopoietic process that will result in human extinction. This could be an intelligence explosion, it could be synthetic biotech (“green goo”), it could be some kind of vacuum decay, etc. I recognize this is a strong claim.
Robin Hanson Jeffrey, a strong assumption quite out of line with our prior experience with tech.
Jeffrey Ladish That’s right.
Jeffrey Ladish Not out of line with our prior experience of evolution though.
Robin Hanson Species tend to improve, but they don’t tend to destroy themselves via one such improvement.
Jeffrey Ladish They do tend to destroy themselves via many improvements. Specialists evolve then go extinct. Though I think humans are different because we can engineer new species / technologies / processes. I’m pointing at reference classes like biotic replacement events: https://eukaryotewritesblog.com/2017/08/14/evolutionary-innovation/
Jeffrey Ladish I’m working on a longform argument about this, will look forward to your criticism / feedback on it.
Robin Hanson The risk of increasing specialization creating more fragility is not at all what you are talking about in the above discussion.
Jeffrey Ladish Yes, that was sort of a pedantic point. I do think it’s related but not very directly. But the second point, about the biotic replacement reference class, is the main one.