> The strange new values that were satisfied were those of the AI systems, but the entire outcome only happened because people like Bob chose it knowingly (let’s say). Bob liked it more than the long glorious human future where his business was less good.
I think a relevant consideration here is that Bob doesn’t actually have the ability to choose between these two futures. Rather, his choice is between a world where his business succeeds but AI takes over later, and a world where his business fails but AI takes over anyway (because other people will use AI even if he doesn’t). It’s a coordination problem, not a straightforward preference between futures: Bob might actually prefer to sign a contract forbidding the use of AI if he knew that everybody else would be in on it. I suspect that this would be the position of most people who actually thought AI would eventually take over, and that most people who would oppose such a contract would not think AI takeover is likely (perhaps via self-deception driven by their local incentives, which in some ways is similar to just not valuing the future).