I think the confidence game framework is pointing at a useful thing—an important red flag to pay attention to. But I think your examples tend to blur it together with an orthogonal axis: “is the thing exploitative?”
I think basically all things worth doing that involve 2 or more people are something of a confidence game (and quickly scale up how confidence-game-like they are as they become more ambitious).
Silicon Valley Startup Culture has a lot of exploitation going on, and maybe this isn’t coincidence, but some things that seem confidence-game-like (or maybe, “seem like the thing Ray cares about right now that inspired him to write this post, which may or may not be what Ben is pointing at”) include:
1. Honest businesses started with their own funding, trying to build a good product, pay fair wages, and treat their neighbors well. Their chance of success is still probably less than 50%, and succeeding usually requires at least the founders to all believe (or alieve) that it’s worth the effort to give it their all. I currently believe maintaining that alief on most human hardware requires some degree of overconfidence. (link to Julia Galef’s piece on this debate)
2. Secular Solstice was almost literally this—a holiday is only real if people believe it’s real. The first 2-3 years were “on credit”—I got people to believe in it enough to give it a chance. Now it’s become sufficiently real that people starting to celebrate it in a new location are earnestly buying into an “existing real holiday” that has grown beyond any one person’s control.
3. Quakers, Less Wrong, or Academia—even when your goal is just thinking about things together, you still need to convince people to actually do it, and to put effort into it. When the “product” is thinking, certain forms of selling-yourself or maintaining-the-vision that would work for a business no longer work (i.e. you can’t lie), but it’s still possible to accidentally kill the thing if a critical mass of participants say too many true-but-pessimistic (or plausible-but-pessimistic) things at too early a stage and create an atmosphere where putting in the work doesn’t feel worthwhile.
I have been persuaded that “have a community that never compromises on the pursuit of truth” is really valuable, and trumps the sorts of concerns that instrumental truthseekers tend to bring up. But I am an instrumental truthseeker, and I don’t think those concerns actually go away even if we’re all committed to the truth, and I think navigating what “committing to truth” even means is still a confusing question.
(I know this is all something you’re still in the process of mulling over, and last I checked you were trying to think these things through on your own before getting tangled up in social-consensus-pressure stuff. At some point, when you’ve had time to sort through them, I’m very interested in talking it all through in more detail.)
On honest businesses, I’d expect successful ones to involve overconfidence on average because of winner’s curse. But I’d also expect people who found successful businesses to correctly reject most ideas they consider, and overconfidence to cause them to select the wrong business plan. It’s possible that both are true, and that founders switch to overconfidence once they’re committed, perhaps as part of a Red Queen’s race where founders who don’t exaggerate their business’s prospects don’t attract employees, investors, customers, etc.
As for the motivation of the founders themselves, though: if success requires overconfidence, that seems to imply either that these businesses actually weren’t worth it in expectation for their founders (in which case founders are harmed, not helped, by overconfidence), or that there’s some failure of truthseeking where people with nominally calibrated beliefs irrationally avoid positive-EV ventures.
Eliezer seems like something of a counterexample here—if I recall correctly, his prior estimate that HPMoR would do what it did (recollected after the fact) was something like 10%. He just tries lots of things like that, when they’re EV-positive, and some of them work.
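To make that concrete, here is a toy expected-value calculation (a minimal sketch: the payoff numbers are invented for illustration, and only the ~10% success rate is borrowed from the HPMoR anecdote above), showing how a venture that usually fails can still be worth attempting at calibrated confidence:

```python
# Toy numbers, invented for illustration: a calibrated founder who expects
# to fail 90% of the time can still rationally go ahead, because expected
# value depends on the payoffs as well as the probabilities.
p_success = 0.10               # calibrated estimate, not overconfidence
payoff_if_success = 2_000_000  # hypothetical upside if the venture works
payoff_if_failure = -100_000   # hypothetical sunk time/money if it folds

ev = p_success * payoff_if_success + (1 - p_success) * payoff_if_failure
print(f"expected value: {ev:+,.0f}")  # +110,000 despite a 90% failure rate
```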
Secular Solstice seems basically honest. Speculative bubbles cause harm in part because they tend to be based on distorted narratives about underlying facts, such that participating in the bubble implies actually having the facts wrong (e.g. that some type of business is very profitable). With a process that plays out entirely at a single level of social reality (the holiday’s real iff enough people declare it real), there’s not some underlying thing being distorted.
Something else that may be confusing here is that in our language, predictions and performative statements of intent often sound identical. Strunk and White’s discussion of the difference between “will” and “shall” is a clear example of an attempt to promote distinguishing the two. Again, there could be incentives pointing toward exaggeration (and toward recasting statements of intent as predictions).
EA and LessWrong seem like examples of things that succeeded at becoming movements despite a lot of skepticism expressed early on. On the other hand, a substantial amount of the promotion for these has been promotion of still-untested vaporware, and I’m not sure I uniformly regret this.
Definitely agree that a good businessperson needs to be epistemically sound enough to pick good plans. I think the idealized honest businessperson is something like Nate Soares, separating their ability to feel conviction from their ability to think honestly. But I think that’s beyond most people (in practice, if not in theory). And I think most business ventures, even good ones you have reason for confidence in, would still have a less than 50% success rate. (I think the “switch to overconfidence once you commit” strategy is probably good for most people.)
(I do think you can select business ventures that would generate value even if the business ends up folding within a few years, and that an ecosystem that pushed more toward that than toward random pie-in-the-sky with VC exploitation thrown in is probably better. But because of the competitive nature of business, it’s hard for anything to end up with a greater than 50% success rate.)
[mental flag: I notice that I’ve made a prediction that’s worth me spending another hour or two thinking about, fleshing out the underlying model of, and/or researching]
> Eliezer...
Oddly enough, I actually have a memory of Eliezer saying both the thing you just referenced and also the opposite (i.e., that he tries a lot of things and you don’t see the ones that don’t succeed, but also that he had a pretty good idea of what he was doing with HPMoR, and that while it succeeded better than he expected, he had a pretty good model of how fanfiction and memespreading worked and did expect it to work “at all”, or something).
Your recollection about Eliezer seems right to me.
Also I guess you’re right on idealized businessperson vs nearly every actual one. But it’s worth noting that “X is very rare” is more than enough reason for X to be rare among successes.
[Note: This comment seems pretty pedantic in retrospect. Posting anyway to gauge reception, and because I’d still prefer clarity.]
> On honest businesses, I’d expect successful ones to involve overconfidence on average because of winner’s curse.
I’m having trouble understanding this application of winner’s curse.
Are you saying something like the following:
1. People put in more resources and generally try harder when they estimate a higher chance of success. (Analogous to people bidding more in an auction when they estimate a higher value.)
2. These actions increase the chance of success, so overconfident people are overrepresented among successes.
3. This overrepresentation holds even if the “true chance of success” is the main factor; founders’ overconfidence just needs to shift the distribution of successes a bit for “successful ones to involve overconfidence on average”.
First, this seems weird to me because I got the impression that you were arguing against overconfidence being useful.
Second, are you implying that successful businesses have on average “overpaid” for their successes in effort/resources? That is central to my understanding of winner’s curse, but maybe not yours.
Sorry if I’m totally missing your point.
You get winner’s curse even if results are totally random. If you have an unbiased estimator with a random error term and select only the highest estimate in your sample, the expected error of that selected estimate is positive (i.e. you probably overestimated it).
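Here is a minimal simulation of that claim (the parameters are arbitrary assumptions): every estimate is unbiased, yet the one you get by selecting the maximum is systematically too high.

```python
import random

# Sketch of the winner's-curse selection effect. Every observer sees an
# unbiased, noisy estimate of the same true value; we then keep only the
# highest estimate in each sample, mirroring "select only the highest
# estimate in your sample" above.
TRUE_VALUE = 0.0    # identical for everyone, so all variation is noise
NOISE_SD = 1.0
N_ESTIMATES = 10
N_TRIALS = 100_000

total_error = 0.0
for _ in range(N_TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_ESTIMATES)]
    total_error += max(estimates) - TRUE_VALUE

# Each estimate individually has expected error 0, but the selected maximum
# does not: with 10 draws from N(0, 1) its expected error is about +1.54.
print(f"mean error of the selected estimate: {total_error / N_TRIALS:+.3f}")
```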