The bulk of my p(doom), certainly >50%, comes from a pattern we're already used to, call it institutional incentives, being instantiated with AI help toward an end where, e.g., there's effectively a competing-with-humanity nonhuman ~institution, perhaps guided by a few remaining humans. This scenario doesn't depend strictly on anything about AI: solving any so-called alignment problem for AIs without also solving war/altruism/disease completely (in a leak-free way, not just partially) means we get what I'd call "doom", i.e. worlds where malthusian-hells-or-worse are locked in.
If not for AI, I don't think we'd have any shot at solving something so ambitious; what would get me below 50% is serious progress on the hard problem of something-around-as-good-as-CEV-is-supposed-to-be: something able to ensure it actually gets used to effectively-irreversibly reinforce that all beings ~have a non-torturous time, enough fuel, enough matter, enough room, enough agency, enough freedom, enough actualization.
If you solve something about AI-alignment-to-current-strong-agents right now, it will on net get used primarily as a weapon to reinforce the power of existing superagents-not-aligned-with-their-components (name one organization of people whose aggregate behavior durably-cares about anyone inside it, even its most powerful authority figures, in the face of incentives, in a way that would remain durable if you handed it a corrigible super-AI). If you get corrigibility and hand it to human orgs, those orgs are misaligned with most-of-humanity-and-most-reasonable-AIs, and they end up handing control to an AI anyway because it's easier.
E.g., near term, merely making the AI nice doesn't prevent companies from using it to absorb >99% of jobs; and if at some point it's better to put a (corrigible) AI in charge of your company, what social feedback pattern guarantees you'll use it prosocially, the way "people work for money, and that money buys your product only if you provide them something worth it" used to?
It seems to me that the easiest path to good outcomes from where we are is for the rising tide of AI to make humans more able to share-care-protect across existing org boundaries, in the face of current world-stress-induced incentives. Most of the threat already doesn't come from current-gen AI; the reason anyone would build the dangerous AI is incentives like these, and corrigibility wouldn't change those incentives.