We currently live in a world full of double-or-nothing gambles on resources. Bet it all on black. Invest it all in risky options. Go on a space mission with a 99% chance of death, but a 1% chance of reaching Jupiter, which has about 318 times the mass-energy of Earth, and none of those pesky humans who keep trying to eat your resources. Challenge one such pesky human to a duel.
Make these bets over and over again and your chance of total failure (i.e. death) approaches 100%. When convex agents appear in real life, they do this, and very quickly die. For these agents, that is all part of the plan. Their death is worth it for a fraction of a percent chance of getting a ton of resources.
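To make the arithmetic concrete, here is a minimal worked example, assuming a toy convex utility $u(x) = x^2$ (the specific function is illustrative, not anything the text above commits to). For a fair double-or-nothing bet on a stake $x$:

$$\mathbb{E}[u] = \tfrac{1}{2}\,u(2x) + \tfrac{1}{2}\,u(0) = 2x^2 > x^2 = u(x),$$

so the agent takes the bet, and by the same calculation takes all $n$ of them in a row. Its survival probability after $n$ bets is $2^{-n} \to 0$, while its expected utility, $2^{-n}\,(2^n x)^2 = 2^n x^2$, grows without bound. Almost-certain death, one astronomically valued branch, and the expectation says bet.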
But we, as concave agents, don’t really care about the worlds convex agents are optimizing for. We might as well be in completely logically disconnected worlds. Convex agents feel the same about us, since most of their utility is concentrated on those tiny-probability worlds where a bunch of their bets pay off in a row (for most value functions, that means we die). And they feel even more strongly about each other.
This serves as a selection argument for why agents we see in real life (including ourselves) tend to be concave (with some notable exceptions). The convex ones take a bunch of double-or-nothing bets in a row, and, in almost all worlds, eventually land on “nothing”.
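The selection effect is easy to check in simulation. Below is a minimal sketch, assuming fair 50/50 double-or-nothing bets and a population that accepts every one of them; the population size and bet count are made-up parameters for illustration:

```python
import random

# Sketch: a population of convex agents, each repeatedly offered a fair
# double-or-nothing bet on its entire resource stock. With u(x) = x^2,
# E[u] = 0.5 * (2x)^2 = 2x^2 > x^2 = u(x), so each agent accepts every bet.
N_AGENTS = 100_000  # made-up population size
N_BETS = 30         # made-up number of sequential bets

survivors = 0
for _ in range(N_AGENTS):
    wealth = 1.0
    for _ in range(N_BETS):
        # Fair coin: double the stake or lose everything.
        wealth = 2 * wealth if random.random() < 0.5 else 0.0
        if wealth == 0.0:
            break
    if wealth > 0.0:
        survivors += 1

# Expected survivors: N_AGENTS * 0.5**N_BETS ~= 0.0001, essentially zero.
print(f"convex survivors after {N_BETS} bets: {survivors} / {N_AGENTS}")
```

In almost every run, none of the hundred thousand agents survive thirty bets; the rare worlds where one does are exactly the worlds their utility was pointed at.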
Convex agents are practically invisible.