His claim is that we should expect any random evolved agent to mostly care about long-run power.
I meant that any system which mostly cares about long-run power won’t be selected out. I don’t really have a strong view about whether other systems that don’t care about long-run power will end up persisting, especially earlier on (e.g. in human evolution). I was just trying to argue against a claim about what gets selected out.
My language was a bit sloppy here.
(If evolutionary pressures continue forever, then ultimately you’d expect all systems to act very similarly to ones that only care about long-run power, though other motivations could explain that behavior. So, at least from a behavioral perspective, I do expect that you ultimately get systems which at least act like they are optimizing for long-run power. I wasn’t really trying to make an argument about this, though.)
Then shouldn’t such systems (which can surely recognize this argument) just take care of short-term survival instrumentally? Maybe you’re making a claim about irrationality being likely, or a claim that systems that care about long-run benefit act in apparently myopic ways.
(Note that historically it was much harder to maintain value stability/lock-in than it will be for AIs.)
He might or might not, but if he doesn’t he’s less likely to end up controlling the solar system and/or lightcone.
I’m not going to engage in detail FYI.