I remain interested in what a detailed scenario forecast from you looks like. A big disagreement I think we have is in how society will react to various choices, and I think laying this out could make that clearer. (As far as what a scenario forecast from my perspective looks like, I think @Daniel Kokotajlo is working on one which is pretty close to my perspective and generally has the SOTA stuff here.)
I’m not entirely opposed to doing a scenario forecasting exercise, but I’m also unsure if it’s the most effective approach for clarifying our disagreements. In fact, to some extent, I see this kind of exercise—where we create detailed scenarios to illustrate potential futures—as being tied to a specific perspective on futurism that I consciously try to distance myself from.
When I think about the future, I don’t see it as a series of clear, predictable paths. Instead, I envision it as a cloud of uncertainty—a wide array of possibilities that becomes increasingly difficult to map or define the further into the future I try to look.
This is fundamentally different from the idea that the future is a singular, fixed trajectory that we can anticipate with confidence. Because of this, I find scenario forecasting less meaningful and even misleading as it extends further into the future. It risks creating the false impression that I am confident in a specific model of what is likely to happen, when in reality, I see the future as inherently uncertain and difficult to pin down.
The point of a scenario forecast (IMO) is less that you expect clear, predictable paths and more that:
Humans often do better at understanding and thinking about something if there is a specific story to discuss, so the tradeoffs of committing to one story can be worth it.
Sometimes scenario forecasting indicates a case where your previous views were missing a clearly very important consideration or were assuming something implausible.
(See also Daniel’s sibling comment.)
My biggest disagreements with you are probably a mix of:
We have disagreements about how society will react to AI (and how AI will react to society) given a realistic development arc (especially in short timelines) that imply that your vision of the future seems implausible to me. Perhaps the easiest way to get through all of these disagreements is for you to concretely describe what you expect might happen. As an example, I have a view like “it will be hard for power to very quickly transition from humans to AIs without some sort of hard takeover, especially given dynamics around alignment and training AIs on imitation (and sandbagging)”, but I think this is tied up with “when I think about the story for how a non-hard-takeover quick transition would go, it doesn’t seem to make sense to me”, and thus if you told the story from your perspective it would be easier to point at the disagreement in your ontology/worldview.
(Less importantly?) We have various technical disagreements about how AI takeoff and misalignment will practically work that I don’t think will be addressed by scenario forecasting. (E.g., I think software only singularity is more likely than you do, and think that worst case scheming is more likely.)
E.g., I think software only singularity is more likely than you do, and think that worst case scheming is more likely
By “software only singularity” do you mean a scenario where all humans are killed before singularity, a scenario where all humans merge with software (uploading) or something else entirely?
Software only singularity is a singularity driven by just AI R&D on a basically fixed hardware base. As in, can you reach a singularity using only a fixed datacenter (with no additional compute over time), just by improving algorithms? See also here.
This term isn’t directly about the outcomes that follow from such a singularity.
You can get a singularity via hardware+software where the AIs are also accelerating the hardware supply chain such that you can use more FLOP to train AIs and you can run more copies. (Analogously to the hyperexponential progress throughout human history seemingly driven by higher population sizes, see here.)
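The feedback loop behind a software-only singularity (fixed compute, but each algorithmic improvement speeds up the AI R&D that finds the next one) can be sketched as a toy model. This is purely illustrative: the functional form and all parameter values below are my assumptions, not claims made in the thread.

```python
# Toy model of a "software-only singularity": hardware is fixed, but each
# algorithmic improvement multiplies software efficiency A, and research
# speed itself scales with A. All parameters are illustrative assumptions.

def simulate(steps=40, gain=1.1, feedback=0.4):
    """Simulate `steps` algorithmic improvements.

    gain:     efficiency multiplier per improvement (assumed constant).
    feedback: how strongly current efficiency accelerates further research
              (feedback > 0 means progress compounds on itself).
    Returns a list of (calendar_time, efficiency) pairs.
    """
    A = 1.0          # software efficiency, in units of the starting level
    t = 0.0          # cumulative calendar time
    history = []
    for _ in range(steps):
        t += 1.0 / (A ** feedback)  # each improvement arrives faster as A grows
        A *= gain                   # the improvement itself
        history.append((t, A))
    return history

hist = simulate()
# With feedback > 0, the time between improvements shrinks geometrically,
# so cumulative calendar time converges even as efficiency A explodes --
# the signature of a finite-time "singularity" on a fixed hardware base.
# With feedback = 0, the same improvements take linear time: no singularity.
```

The contrast with the hardware+software path above is that here the only growing input is the algorithm quality itself; nothing in the model adds FLOP or copies over time.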
I don’t think that’s a crux between us. I love scenario forecasting, but I don’t think of the future as a series of clear, predictable paths; I envision it as a wide array of uncertain possibilities that becomes increasingly difficult to map or define the further into the future I look. I definitely don’t think we can anticipate the future with confidence.