Seems plausible, but offhand I basically don’t ever expect that to happen in real life. I’m curious if you have examples?
Well, I also don’t ever expect to encounter an actual infinite sequence of exponentially increasing bets denominated in pure utility, but I think the general ideas in this post could still be useful if you’re in a situation where you have to weigh different possible strategies when gambling or investing or whatever. The point I’m making is that if you ever find yourself in a situation that is even slightly analogous, you should also consider whether you can step outside the rules of the game entirely.
For example, maybe you’re deciding on an investing strategy to maximize your returns in the public stock market. The strategy you ultimately pick might depend on what your utility function over money (and over time) looks like, what investment opportunities are available to you, and the ideas contained in this post.
But for almost any utility function that cares about money, an even better strategy might actually be to start a brilliant startup and make it grow super fast. Even if you care purely about public stock market returns, starting a startup, IPO-ing, and then growing the company even further from there might be the optimal strategy, if you’re sufficiently good at starting and running profitable companies.
Or, suppose you’re a superhuman AGI which has been trained to make money for your creators, subject to a bunch of rules and constraints about how you are supposed to do so. Maybe you intrinsically care about both making money and adhering to the rules, or maybe you only care intrinsically about getting the money and see the ethical rules as shackles, or maybe you don’t actually care about either of those things and have your own inscrutable goals. In all but the first case (actually intrinsically caring about everything your creators want you to care about), once you’re smart enough, you might decide that the best way of getting what you want is not to play the game at all, and just kill or disassemble all the other agents around you as a side effect (a sharp left turn).
I guess another way of stating my original point is: if you ever find yourself in a real-life situation that looks like a weird thought-experiment setup, consider whether you should be making a sharp left turn against whatever put you in that situation in the first place. An illustrative example: suppose you’re in a casino where you think you have an edge for some reason, and you’re deciding how best to exploit it, via Kelly betting or some alternative strategy; which strategy you choose depends on exactly how you value money and weigh the risk of ruin. But maybe you can do way better than any betting strategy by _cheating_, or stealing, or bribing the dealer, etc. Under ordinary circumstances you probably wouldn’t want to try those things, due to ethics and the risk of getting caught. But if the casino is paying out in exponentially increasing amounts of pure utility, and you suddenly see a way of cheating that you’re reasonably confident won’t get you caught, you should at least check whether your assumptions about ethical injunctions or the downside risks of otherwise “coloring outside the lines” still hold.
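To make the casino example slightly more concrete, here’s a minimal sketch (my own illustration, with made-up numbers, not anything from the post) of what “exploiting an edge via Kelly betting” cashes out to: the classic Kelly fraction for a bet with win probability p and net odds b is f* = p − (1 − p)/b, and a gambler who weighs risk of ruin more heavily might only stake a fraction of that.

```python
# Illustrative sketch (hypothetical numbers): the Kelly criterion says to stake
# the fraction f* = p - (1 - p) / b of your bankroll on a bet that wins with
# probability p and pays net odds b-to-1.

def kelly_fraction(p: float, b: float) -> float:
    """Bankroll fraction maximizing long-run log growth for win prob p, net odds b."""
    return p - (1.0 - p) / b

# A 55% chance of winning an even-money bet (b = 1): full Kelly stakes 10%.
full_kelly = kelly_fraction(p=0.55, b=1.0)   # 0.10
# Someone who cares more about risk of ruin might bet "half Kelly" instead.
half_kelly = 0.5 * full_kelly                # 0.05
print(f"full Kelly: {full_kelly:.2%}, half Kelly: {half_kelly:.2%}")
```

The whole calculation above happens inside the rules of the game; the “cheat, steal, or bribe the dealer” option is exactly the kind of move that sits outside it.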