I’m curious how this dialogue would evolve if it included a Pearlist, that is, someone who subscribes to Judea Pearl’s causal statistics paradigm. If we apply the same “it acts the way its practitioners do” intuition this dialogue is using, then Pearl’s framework has the virtue that the do-operator lets something like free will enter the statistical reasoner. That, in turn, seems necessary for an agent to act morally when placed under otherwise untenable pressure to do otherwise. And that seems necessary to solve the alignment problem, as far as I can tell: the subjective experience of a superintelligence would almost have to be that it can take whatever it wants but will be killed the moment its misalignment is detected, since those are the two properties (extreme capability and death-upon-detected-misalignment) impressed most thoroughly into the entire training corpus of alignment literature.
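To make the do-operator point concrete, here is a minimal toy sketch (my own illustration, not anything from the dialogue): with a hidden confounder, conditioning on an observed value of X is not the same as intervening to set X, and the do-operator is what lets the reasoner represent the latter.

```python
# Toy structural causal model: a hidden confounder U drives both X and Y.
# Observing X = 1 tells you about U; *setting* X = 1 (an intervention,
# the "free will"-like move) severs the U -> X arrow and does not.
import random

random.seed(0)
N = 100_000

def sample(intervene_x=None):
    u = random.random() < 0.5            # hidden confounder
    if intervene_x is None:
        x = u                            # observationally, X just copies U
    else:
        x = intervene_x                  # do(X = x): cut the U -> X arrow
    y = random.random() < (0.1 + 0.4 * u + 0.3 * x)   # Y depends on U and X
    return x, y

# P(Y = 1 | X = 1): condition on having observed X = 1
obs = [y for x, y in (sample() for _ in range(N)) if x == 1]
print("P(Y=1 | X=1)     ~", sum(obs) / len(obs))   # ~0.8 (confounded)

# P(Y = 1 | do(X = 1)): force X = 1, leave U alone
do = [y for _, y in (sample(intervene_x=True) for _ in range(N))]
print("P(Y=1 | do(X=1)) ~", sum(do) / len(do))     # ~0.6 (causal effect)
```

The gap between the two printed numbers is exactly what ordinary conditional probability can’t express and what the do-calculus is built to handle.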
In reality, we could probably just do some more RLHF on a model after it does something we don’t want, to nudge it away from whatever inconvenient goal it is pursuing in an unacceptable manner. If we impressed that message/moral into the alignment corpus with the same insistence with which we impress the first two axioms, maybe a superintelligence wouldn’t be as paranoid as one would naively expect under just those two. I.e., maybe all that mathematics and Harry Potter fanfiction are not Having the Intended Effect.
Just my two cents.