First of all, it’s not obligatory for an optimisation process to trample everything we value into the mud.
In an intensely competitive environment, it is obligatory. Any resources spent not optimizing along the fitness axis necessarily make the entity more likely to lose.
Which implies to me that the only way out is to compete so thoroughly, and end up so rich, that we can act like we’re not in competition and can afford to waste effort on values other than survival/expansion.
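A toy way to see that first claim (purely an illustrative Python sketch; the 10% growth rate, the 20% diversion, and the fixed carrying capacity are numbers I’ve invented, not anything argued above): a lineage that diverts any constant fraction of its intake away from reproduction sees its population share driven towards zero.

    # Toy replicator dynamics: two lineages compete under a fixed carrying capacity.
    # Lineage A puts all of its intake into reproduction; lineage B diverts a fixed
    # fraction to "other values". All numbers here are made up for illustration.

    def b_share(generations, diverted_fraction=0.2, growth=0.1):
        """Return lineage B's population share after the given number of generations."""
        a, b = 1.0, 1.0                                    # start with equal populations
        for _ in range(generations):
            a *= 1.0 + growth                              # A reinvests everything in growth
            b *= 1.0 + growth * (1.0 - diverted_fraction)  # B reinvests only part of it
            total = a + b                                  # fixed carrying capacity:
            a, b = a / total, b / total                    # only relative share matters
        return b

    for gens in (10, 100, 1000):
        print(f"after {gens:4d} generations, B's share is {b_share(gens):.4f}")

Run as written it prints roughly 0.45, 0.14 and effectively 0.00: a 20% diversion barely matters over ten generations and is fatal over a thousand, which is the sense in which breathing space has to be paid for.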
If the optimum for any political party is to produce 80% fluff and 20% substance, then optimisation pressure will push them towards it. (Fun little observation: it seems to me that parties spend their money roughly 80/20 fluff to substance, but individual party members spend their time more like 20/80.)
Unfortunately, that kind of breathing-space-at-the-optimum seems a lot more likely for political parties than for humanity as a whole.
Is it? Products can be dangerous, but they can’t instantly kill their purchaser; it’s at least conceivable that this, plus increased information, would restrict how bad various products could get. It’s not clear that there are no fences on the various slopes.
What I had in mind were the two largest traps: societies which maintained breathing space being overrun by societies which ruthlessly optimized to overrun other societies, and our entire planet being overrun by more efficient extraterrestrial intelligences which ruthlessly optimized for the ability to expand through the universe.
I agree that for more mundane cases like dangerous consumer products and political parties, there’ll probably be some “fences on the various slopes”. But they will be cold comfort indeed if we get wiped out by Malthusian limit-embracing aliens in a century’s time!
I take your point.
But it occurs to me that ruthlessly efficient societies need to be highly coordinated societies, which may push in other directions; I wonder if there’s something worth digging into there...
Another hopeful thought: we might escape being eaten for an unexpectedly long time because evolution is stupid. It might consistently program organic life to maximize for proxies of reproductive success like social status, long life, or ready access to food, rather than the ability to tile the universe with copies of itself.
This in no way implies humanity is safe forever; evolution would almost surely blunder into creating a copy-maximizing species eventually, by sheer random accident if nothing else. But humanity’s window of safety might be millions or billions or trillions of years rather than millennia.