Say you are a strong believer in and advocate for the Silicon Valley startup tech culture, but you want to be able to pass an Ideological Turing Test to show that you are not irrational or biased. In other words, you need to write some essays along the lines of “Startups are Dumb” or “Why You Should Stay at Your Big Company Job”. What kind of arguments would you use?
This comment got 6+ responses, but none that actually attempted to answer the question. My goal of Socratically prompting contrarian thinking, without being explicitly contrarian myself, apparently failed. So here is my version:
Most startups are gimmicky and derivative, even or especially the ones that get funded.
Working for a startup is like buying a lottery ticket: a small chance of a big payoff. But since humans are by nature risk-averse, this is a bad strategy from a utility standpoint.
Startups typically do not create new technology; instead they create new technology-dependent business models.
Even if startups are a good idea in theory, currently they are massively overhyped, so on the margin people should be encouraged to avoid them.
Early startup employees (not founders) don’t make more than large company employees.
The vast majority of value from startups comes from the top 1% of firms, like Facebook, Amazon, Google, Microsoft, and Apple. All of those firms were founded by young white men, most in their early twenties. VCs are driven by the goal of funding the next Facebook, and they know about the demographic skew, even if they don’t talk about it. So if you don’t fit the profile of a megahit founder, you probably won’t get much attention from the VC world.
There is a group of people (called VCs) whose livelihood depends on having a supply of bright young people who want to jump into the startup world. These people act as professional activists in favor of startup culture. This would be fine, except there is no countervailing force of professional critics. This creates a bias in our collective evaluation of the culture.
Argument thread!
You should probably stay at your big company job because the people who are currently startup founders are self-selected for, on average, different things than you’re selecting yourself for by trying to jump on a popular trend, and so their success is only a weak predictor of your success.
Startups often cash out by generating hype and getting bought for ridiculous amounts of money by a big company. But they are very, very often, in more sober analysis, not worth this money. From a societal perspective this is bad because it’s not properly aligning incentives with wealth creation, and from a new-entrant perspective this is bad because you likely fail if the bubble pops before you can sell.
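The lottery-ticket argument above can be made concrete with a toy expected-utility calculation. This is only a sketch: the payoff sizes, the 1% success probability, and the log-utility assumption are all made up for illustration, not claims about real startup outcomes.

```python
import math

# Toy expected-utility comparison: steady big-company salary vs. a startup
# "lottery ticket". All figures below are hypothetical, chosen only to
# illustrate how risk aversion can flip the comparison.

big_co_income = 200_000                 # guaranteed annual income

startup_outcomes = [
    (0.01, 30_000_000),                 # 1% chance: equity pays off big
    (0.99, 120_000),                    # 99% chance: below-market salary
]

expected_value = sum(p * x for p, x in startup_outcomes)

def utility(x):
    # Log utility: a standard model of a risk-averse agent.
    return math.log(x)

expected_utility = sum(p * utility(x) for p, x in startup_outcomes)

print(expected_value > big_co_income)              # True: more expected dollars
print(expected_utility > utility(big_co_income))   # False: less expected utility
```

With these numbers the startup has more than twice the expected dollars of the big-company job, yet a log-utility agent still prefers the sure salary, which is the sense in which the lottery ticket is “a bad strategy from a utility standpoint.”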
Likely because an ITT was called for, but no questions for the ITT were provided.
Both of those arguments (the “technology-dependent business models” point and the “VCs as professional activists” point) seem to me like failing the Ideological Turing Test. I would have a hard time thinking that the average person who works at a big company would make those arguments.
You never explained what you mean by “startup culture,” nor “good.”
One can infer something from your arguments. But different arguments definitely appeal to different definitions of “good.” In particular: good for the founder, good for the startup employee, good for the VC, and good for society.
There is no reason to believe that it should be good for all of them. In particular, the belief that equity is valuable to startup employees is good for founders and VCs, but if it is false, it is bad for the employees who hold it. If startups are good for society, it may even be good for society for the employees to be deceived. And if startups are good for society, it may be largely win-win for startups to be considered virtuous and for everyone involved in them to receive status. Isn’t that the kind of thing “culture” does, rather than promulgating specific beliefs?
By “startup culture” you seem to mean anything that promotes startups. Do these form a natural category? If they are all VC propaganda, then I guess that’s a natural category, but it probably isn’t a coherent culture. Perhaps there is a pro-startup culture that confabulates specific claims when asked. But are the details actually motivating people, or is it really the amorphous sense of virtue or status?
Sometimes I see people using “startup culture” in a completely different way. They endorse the claim that startups are good for society, but condemn the current culture as unproductive.
What exactly is the thesis in question? “Startup culture is a valuable piece of a large economy”, for example, is not the same thing as “I should go and create a startup, it’s gonna be great!”.
Not to disagree with this exercise, but I think that the name ITT is overused and should not be applied here. Why not just ask “What are some good arguments against startups?” If you want a LW buzzword for this exercise, how about hypothetical apostasy or premortem?
I think that ITT should be reserved for the narrow situation where there is a specific set of opponents and you want to prove that you are paying attention to their arguments. Even when the conventional wisdom is correct, it is quite common that the majority has no idea what the minority is saying and falsely claims to have rebutted their arguments. ITT is a way of testing this.
That’s a different question.
A good argument against startups might be that VC as an asset class doesn’t outperform the stock market. On the other hand, it’s unlikely that the average person working at a big company would make that argument, so arguing it would fail the Ideological Turing Test.
The question seems like it has more levels of indirection in it than necessary. I mean, to pass an ITT is to behave/speak/write just like someone with the views you’re pretending to have. So how is “Say you believe X and want to pass an ITT by arguing not-X. What would you say?” different from “Say you believe not-X and want to defend it. What would you say?” or, even, just “What are the best arguments for not-X?”?
Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X. So this should be impossible, except by deliberately including arguments that are, to the best of your knowledge, flawed. I might be able to imitate a homeopath, but I can’t imitate a rational, educated, homeopath, because if I thought there was such a thing I would be a homeopath.
Yes, a lot of people extol the virtues of doing this. But a lot of people aren’t rational, and don’t believe X on the basis of arguments in the first place. If so, then producing good arguments against X is logically possible, and may even be helpful.
(There’s another possibility: where you are weighing things and the other side weighs them differently from you. But that’s technically just a subcase—you still think the other side’s weights are incorrect—and I still couldn’t use it to imitate a creationist or flat-earther.)
Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X.

Huh? You are proposing a very stark, black-and-white, all-or-nothing position. Recall that for a rationalist a belief has a probability associated with it. It doesn’t have to be anywhere near 1. Moreover, a rationalist can “believe” (say, with probability > 90%) something against which good arguments exist. It just so happens that the arguments pro are better and more numerous than the arguments con. That does not mean that the arguments con are not good or do not exist.
And, of course, you should not think yourself omniscient. One of the benefits of steelmanning is that it acquaints you with the counterarguments. Would you know what they are if you didn’t look?
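The probabilistic point can be sketched numerically. In the toy Bayesian update below, every likelihood ratio is invented for illustration; the point is only that a belief can end up above 90% even after granting two genuinely good counterarguments, because the arguments pro carry more total weight.

```python
import math

# Toy Bayesian update: each argument is summarized as a likelihood ratio
# (how strongly it favors X). All the ratios below are invented.
log_odds = 0.0  # prior: even odds on X

arguments = [
    ("strong argument for X",        20.0),
    ("another argument for X",        8.0),
    ("good argument against X",   1 / 4.0),  # genuinely good, just outweighed
    ("another argument against X", 1 / 3.0),
]

for _name, likelihood_ratio in arguments:
    log_odds += math.log(likelihood_ratio)

posterior = 1.0 / (1.0 + math.exp(-log_odds))
print(round(posterior, 3))  # → 0.93: belief held despite good counterarguments
```

Here the con arguments shift the odds by a factor of 12 against X, yet the believer rationally stays above 90%, since the pro arguments shift them by a factor of 160 the other way.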
I might be able to imitate a homeopath, but I can’t imitate a rational, educated, homeopath, because if I thought there was such a thing I would be a homeopath.

Great point!
I guess the point of ITT is that even when you disagree with your opponents, you have the ability to see their (wrong) model of the world exactly as they have it, as opposed to a strawman.
For example, if your opponent believes that 2+2=5, you pass the ITT by saying “2+2=5”, but you fail it by saying “2+2=7”. From your perspective, both results are “equally wrong”, but from their perspective, the former is correct, while the latter is plainly wrong.
In other words, the goal of the ITT isn’t to develop a “different, but equally correct” map of the territory (because if you believed in the correctness of the opponent’s map, it would become your map too), but to develop a correct map of your opponent’s map (as opposed to an incorrect map of your opponent’s map).
So, on some level, while you pass an ITT, you know you are saying something false or misleading; even if just by taking correct arguments and assigning incorrect weights to them. But the goal isn’t to derive a correct “alternative truth”; it is to have a good model of your opponent’s mind.
No good arguments, or the weight of the arguments for X is greater than the weight of the arguments against X?
You know, I did mention weighing arguments in my post.
No, http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
In high-level debating, e.g. at the world debating championships, the participants are generally able to give good arguments for both sides of every issue.
(Not that I know a thing about the subject, but are you sure this angle is exactly how an “unbiased re: startups” person would think about it? Why not something more like, “Startups are simply irrelevant, if we get down to it”?)