Like I said, that part is tricky to formalize. But, ultimately, it’s an individual choice on the part of the model (and, indirectly, the agent being modeled). I can’t formalize what counts as a valid continuation today, let alone in all future societies. So, leave it up to the agents in question.
As for the racism thing: yeah, so? You would rather we encode our own morality into our machine, so that it will ignore aspects of people’s personality we don’t like? I suppose you could insist that the models behave as though they had access to the entire factual database of the AI (so, at least, they couldn’t be racist simply out of factual inaccuracy), but that might be tricky to implement.
I think you use the words “valid continuation” to refer to a confused concept. That’s why it seems hard to formalize. There is no English sentence that successfully refers to the concept of valid continuation, because it is a confused concept.
If you propose to literally ask models “is this a valid continuation of you?” and simulate them sitting in a room with the future model, then you’ve got to think about how the models will react to those almost-meaningless words. You might as well ask them “is this a wakalix?”.
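To make that concrete: the proposal amounts to something like the toy sketch below. Every name in it is hypothetical (there is no real `simulate` function or agent model); the point is only that the entire difficulty hides inside the unformalized English question handed to the simulated agent.

```python
# Toy sketch of the "ask the model" proposal under discussion.
# Everything here is hypothetical: `simulate`, the agent models,
# and the question string are stand-ins, not a real API.

def approves_continuation(agent_model, candidate_future, simulate):
    """Ask a simulated agent whether a candidate future version
    counts as a 'valid continuation' of itself."""
    # Put the modeled agent "in a room" with the candidate and
    # pose the question in plain English.
    answer = simulate(
        agent=agent_model,
        environment={"other_party": candidate_future},
        question="Is this a valid continuation of you?",
    )
    # Whatever the simulated agent says is taken at face value.
    # This is where the 'wakalix' objection bites: the result
    # depends entirely on how the agent interprets the phrase.
    return answer.strip().lower().startswith("yes")
```

Nothing in the sketch does any work beyond relaying the question; the phrase itself carries the whole burden, which is the objection.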
Coming back to the racism point: which scenario are you affirming? I’m trying to understand your intention here. Would a racist get to veto a nonracist future version of themself?