OK, fair. I didn’t actually read the post in detail. There’s a good chance Hanson assigns >1% chance of AI killing everyone, if you don’t include Ems. But two points:
1) Hanson’s view of Ems results in vast numbers of very human-like minds continuing to exist for a long subjective period of time. That’s not really an x-risk, though Hanson does think it plausible that biological humans may suffer greatly in the transition. He doesn’t give a detailed picture of what happens afterward, beyond some speculation about colonizing the sun, etc. Still, there could be humans hanging around in the Age of Em. To me, Age of Em paints a picture that makes OP’s question seem poorly phrased. If someone believed solving alignment would result in all humans being uploaded and then gradually becoming transhuman entities, would that qualify as a >1% chance of human extinction? I think most here would say no.
2) Working on capabilities doesn’t seem to be nearly as big an issue in a Hansonian worldview as it would be in, e.g., Yudkowsky’s, or even Christiano’s. So pointing to Hanson still seems worthwhile, especially as he engaged heavily with the early AI alignment people.
I claim that Hanson assigns >1% probability to Yudkowsky’s scenario, in which AI comes first and destroys all value, and also >1% to a scenario in which Ems come first and things unfold in a way that a lot of people would describe as killing everyone, including the Ems. The second is not directly relevant to the question about AI, but it suggests he is sanguine about analogous AI scenarios, i.e., soft-takeoff scenarios not covered by Yudkowsky.
Yes, during the two years of wall-clock time, the Ems exist for a thousand subjective years. Is that so long? This is not “longtermism.” And yes, you should probably count the Ems as humans, so if they kill all the biological humans, they don’t “kill everyone.” But after this period they are outcompeted by something more alien. Does that count as killing everyone?
Working on capabilities isn’t a problem in his mainline scenario, but the question was not about the mainline; it was about tail events. If Ems are guaranteed to come first, then you could punt alignment to their millennium of work. But if it’s not guaranteed who comes first, and AI is worse than Ems, then working on AI capabilities could cause AI to come first. Or maybe not: maybe one is so much easier than the other that nothing here is decision-relevant.
Yes, Hanson sees value drift as inevitable. The Ems will be outcompeted by something better adapted, which he thinks we should see some value in. He thinks it’s parochial to dislike the Ems evolving under Malthusian pressures. Maybe, but it’s important not to confuse the factual questions with the moral questions: “It’s OK because there’s no risk of X” is different from “X is OK, actually.” Yes, he talks about the Dreamtime. Part of that is the delusion that we can steer the future more than Malthusian forces will allow. But part of it is that, because we are not yet under strict competition, we have excess resources that we can use to steer the future, if only a little.
I think this is a good summary of Hanson’s views, and your answer is correct as pertains to the question that was actually asked. That said, on my reading Hanson counts as a skeptic of the need for more AI-safety researchers on the margin. And I think he’d be skeptical of the marginal person claiming a large impact from working on AI capabilities, relative to most counterfactuals. I’m not sure we disagree there, but I’m going to tap out anyway.