While responsiveness is doubtless a valid test of some sort of intellectual aliveness and ability, it could easily take hours or days to integrate real, substantive new information; in ten minutes all one may be able to do is perform responsiveness.
I get what you’re saying. I also think there’s something to be said for how much you can learn about someone very quickly; I recall Oli writing a post proposing to test the hypothesis that we should be able to estimate IQ from a short conversation with someone. I agree there are many important things missing from the above test. But note that Altman’s explicit belief is that he does not know what the next company should look like, what area it should be in, or what problem it should solve, so I think he mostly thinks that good people need to be very reactive to their environment, which is the sort of thing you could check quickly.
In other words, the people who best succeed at Y Combinator’s screening process are exactly the people you’d expect to score highest at Desire To Pass Tests.
It was you over here who pointed out that such people, who have the slack required to be actually moral, would perhaps make the best decision makers. I agree it’s sad, but the fact that it’s politically convenient for some people doesn’t prove it’s false.
ALTMAN: I think it’s correlated with successful founders. It’s fun to have numbers that go up and to the right.
I think it’s fundamentally hard to distinguish TDTPT from being good at optimisation: both involve pushing a metric really hard, and just because something sounds like the former doesn’t mean it isn’t the latter. I think surface features can’t be the decider here.
Paul Graham cheerfully acknowledged that, by instilling message discipline, “we help the bad founders look indistinguishable from the good ones.”
When I was doing the pre-reading for this post, no line shouted out at me more as “Benquo is going to point at this in his post” than this one.
Treatment Effects
I don’t know quite what this section is about. I agree that insofar as money doesn’t correlate with human values, Altman’s work will produce goodharting. I agree some of the metrics they work on are fake. But I think that optimising hard for metrics, in disregard of many social expectations, is important for real work; people often don’t do anything because they’re optimising for being socially stable at the cost of everything else. I agree that ramen profitability is a pretty grim psychological tool for doing this, but it’s not obvious to me that people shouldn’t put everything on the line more often than they do, to get to grips with real risk.
Graham has a natural affinity for production-based strategies which allowed him to acquire various kinds of capital. He blinds himself to the existence of adversarial strategies, so he’s able to authentically claim to think that e.g. mean people fail—he just forgets about Jeff Bezos, Larry Ellison, Steve Jobs, and Travis Kalanick because they are too anomalous in his model, and don’t feel to him like central cases of success.
I think there’s something to this. I remember reading Graham as a teenager and thinking “This is how optimal companies work”, and then learning that not all organisations are just “bad startups”: there’s a lot of variation and much more complicated things going on, especially of the adversarial kind, which YC doesn’t really talk about publicly except to say “Ignore it”. Graham does somehow seem very blind to things that aren’t production-based startups, and doesn’t talk about them well. Actually, I think this is the best paragraph in your essay.
Here’s the thing, though. Graham knows he’s doing the wrong thing. He confessed in “Black Swan Farming” that even though doing the right thing would work out better for him in the long run, he just isn’t getting enough positive feedback, so it’s psychologically intolerable:
I’m surprised you didn’t include his other quote on this. I’m having a hard time finding it; I think I read it while doing the required reading for this essay. I distinctly recall him saying that one of the reasons he can’t let the supermajority of companies fail to get funding on Demo Day is that it would demotivate them all and stop people applying to YC. Alas, I can’t find the quote.
--
Doing the required reading was great, the TDTPT essay was brilliant, and I got a bunch of your taste from reading it all myself and then only afterwards seeing which parts you pulled out. I think I wouldn’t have noticed the connection you make in “A High Health Score is Better Than Health” myself, and I am glad you phrased that idea about Graham’s blindspot so precisely, which I’ll have to think about more.
Just to state it aloud, the main hypothesis that occurred to me when doing the pre-reading and thinking about why YC would be a scam (to use a term from the comments below), is that after you get in:
They ensure that you have massive (artificial) growth by hooking you into their massive network of successful companies.
They destroy information for other funders on demo day by making the good companies look exactly as good as the bad.
This is overall a system that gains a lot of power and resources while wasting everyone else’s time and attention, exerting major costs on the world globally. “Apex predator” sounds like an appropriate term.
Altman’s belief is explicitly that he does not know what the next company should look like, what area it should be in, and what problem it should solve, and so I think he mostly thinks that good people need to be very reactive to their environment
I expect this to be good at many things—he’s probably not wrong that Napoleon would make a good YC founder by his standards—but I expect it’s not the mindset that can develop substantive new tech. And sure enough, it seems like substantive new tech is mostly not being developed.