I don’t like the thing you’re doing where you’re eliding all mention of the actual danger AI Safety/Alignment was founded to tackle—AGI having a mind of its own, goals of its own, that seem more likely to be incompatible with/indifferent to our continued existence than not.
Everything else you’re saying is agreeable in the context you’re discussing it, that of a dangerous new technology; I’d feel much more confident if the Naval Nuclear Propulsion Program (Rickover’s people) were the dominant culture in AI development.
That said, I have strong doubts about the feasibility of the ‘Oughts’[1] you’re proposing, and, more critically, I reject the framing...
Any sufficiently advanced technology is indistinguishable from ~~magic~~ ~~biology~~ life.
To assume AGI is transformative and important is to assume it has a mind[2] of its own: the mind is what makes it transformative.
At the very least, assuming no superintelligence, we are dealing with a profound philosophical/ethical/social crisis, for which control-based solutions are no solution. Slavery’s problem wasn’t a lack of better chains, whether institutional or technical.
Please entertain another framing of the ‘technical’ alignment problem: midwifery—the technical problem of striving for optimal conditions during pregnancy/birth. Alignment originated as the study of how to bring into being minds that are compatible with our own.
Whether humans continue to be relevant/dominant decision-makers post-Birth is up for debate; what I claim is not up for debate is that we will no longer be the only decision-makers.
[1] https://en.wikipedia.org/wiki/Ought_implies_can
[2] There’s a lot to unpack here about what mind actually is/does. I’d appreciate it if people who want to discuss this point are at least familiar with Leven’s work.