This does sound nice in theory: organically align the incentives instead of exerting control by passing laws or using external punishment/reward systems. In practice, though, you end up dealing with a lot of chameleon leeches, imitators who mimic your TrustyCar startup with their own SureDrive startup that games the review system, scams the buyers, and then disappears. After a short time it becomes impossible to tell who the honest one is, since every player is incentivised to signal honesty, and so no one can be trusted. Eliezer discussed this in Inadequate Equilibria. Still, the strategy of aligning incentives and reducing control is definitely worth keeping in mind, and it is important to consciously budget for any deviations from it.
Chameleon leeches are a small problem; consumers routinely pay attention to the size and longevity of the seller for durable goods like cars. It may be difficult to gain trust initially, but if this actually works it’ll go far. The bigger problem is that you’re taking on liability for something that YOUR vendors don’t stand behind: you’re buying used cars at auction based on whatever minimal inspection you get, and selling them with a deeper warranty than any existing seller offers.
But those object-level failures are actually SUCCESSES of the main point: Unilateral or GTFO!
The way to discover if something is workable is NOT to implement it by force of law, but by just trying it with resources you control. If it doesn’t work, you’ve learned a valuable lesson about your beliefs. If it does work, you’ve been personally successful and have a solid base to start thinking about how to scale your insight to the rest of humanity.
Skin in the game, liability for failure, recognition of risks—all are terms for what’s missing in the vast majority of social media discussions (including LessWrong) about how to fix an apparent current failing of societal equilibria.