Yes, I quite agree that a slow inevitable change is just about as bad as a quick inevitable change. But a slow change which can be intervened against and halted is much less bad than a fast change which could theoretically be intervened against but you likely would miss the chance.
Like, if someone were to strap a bomb to me and say, “This will go off in X minutes” I’d much rather that the X be thousands of minutes rather than 5. Having thousands of minutes to defuse the bomb is a much better scenario for me.
Value drift is the kind of thing that naturally happens gradually and in an unclear way. It’s hard to intervene against without novel coordination tech/institutions, especially if it leaves people unworried and the tech/institutions remain undeveloped.
This seems very similar to not worrying about AGI because it’s believed to be far away, systematically failing to consider the consequences of its arrival whenever that happens, and so not working on solutions. Then the consequences only start to become visible as it gets closer, when it’s too late to develop solutions or to put in place institutions that would stop its premature development. As if anything about the way it gets closer substantially informs the shape of the consequences and couldn’t have been imagined well in advance. Except fire alarms for value drift might be even less well-defined than for AGI.