I think we don’t have the same way of imagining what a “random alternative” would be like. For example, I don’t imagine that a random alternative would be the kind generated by a monkey randomly typing on a keyboard in a reasonable amount of time. Or even the kind generated by an inexperienced child or teenager. I imagine whoever had the chance to enact an alternative would be more likely to understand how not to “break society by mistake” than a randomly chosen person in the population.
I might be totally off with this analogy, but you seem to me like my aunt, who’s afraid her computer is broken every time an unexpected window pops up in her browser. She sees her computer as something beyond comprehension, where changing even the tiniest thing could cause irredeemable damage. In reality, an experienced computer user makes plenty of changes that would frighten her, but are safe. And her computer is full of useless stuff that was auto-installed alongside other software and slows it down.
I endorse Taran’s comment that’s a sibling of this one. Most startups fail, even though they are generally run by smart, hardworking people who have spotted something that could genuinely be better.
Let’s run with your computer software analogy. Ever worked on the insides of a large “mature” software system? It’s common for those to be full of cruft and mess and things no one quite understands and unexpected interactions, such that small changes really can cause severe damage. It’s also notorious that trying to do a wholesale rewrite of such a system is usually a bad move.
The situation there is similar to the one with startups, and indeed is sometimes literally the same situation. Eventually your big old crufty legacy-software system will likely get replaced by something smaller and simpler that does the job well enough and is easier for its developers to work on. (That replacement will probably be made by a startup.) But any particular attempt to replace it, your own included, is likely to fail.
I think there’s a difference, because legacy software doesn’t develop itself the way a bureaucracy does. It isn’t made up of actors trying to accumulate more power for themselves.
I agree with Taran’s comment as well. I may have underestimated how likely an attempt at replacing the current system is to fail. I just think the danger of letting the situation rot is underestimated too. The world is moving on, fast. To keep the software analogy: we’re keeping the same legacy software, but demanding that it handle new use cases every year. That’s not sustainable. I’m open to third options.
The original startup analogy might be a useful intuition pump here. Most attempts to displace entrenched incumbents fail, even when those incumbents aren’t good and ultimately are displaced. The challengers aren’t random in the monkeys-using-keyboard sense, but if you sample the space of challengers you will probably pick a loser. This is especially true of the challengers who don’t have a concrete, specific thesis of what their competitors are doing wrong and how they’ll improve on it—without that, VCs mostly won’t even talk to you.
But this isn’t a general argument against startups, just an argument against your ability to figure out in advance which ones will work. The standard solution, which I expect will apply to transhumanism as to everything else, is to try lots of different things, compare them, and keep the winners. If you are upstream of that process, deciding which projects to fund, then you are out of luck: you are going to fund a bunch of losers, and you can’t do anything about it.
If you can’t do that, the other common strategy is to generate a detailed model of both the problem space and your proposed improvement, and use those models to iterate in hypothesis space instead of in real life. Sometimes this is relatively straightforward: if you want the slaves to be free, you can issue a proclamation that frees them and have high confidence that they won’t be slaves afterward (though note that the real plan was much more detailed than that, and didn’t really work out as expected). Other times it looks straightforward but isn’t: sparrows are pests, but you can’t improve your rice yields by getting rid of them. Here, to me the plan does not even look straightforward: the Pentagon does a lot of different things and some of them are existentially important to keep around. If we draw one sample from the space of possible successors, as Cummings suggests, I don’t think we’ll get what we want.