Those people don’t get substantial equity in most businesses in the world. They generally get paid a salary and benefits in exchange for their work, and that’s about it.
Eli Tyre
I don’t think that’s a valid inference.
Ok. So I haven’t thought through these proposals in much detail, and I don’t claim any confident take, but my first response is “holy fuck, that’s a lot of complexity. It really seems like there will be some flaw in our control scheme that we don’t notice, if we’re stacking a bunch of clever ideas like this one on top of each other.”
This is not at all to be taken as a disparagement of the authors. I salute them for their contribution. We should definitely explore ideas like these, and test them, and use the best ideas we have at AGI time.
But my intuitive first order response is “fuck.”
But he helped found OpenAI, and recently founded another AI company.
I think Elon’s strategy of “telling the world not to build AGI, and then going to start another AGI company himself” is much less dumb / ethically fraught than people often credit.
Thinking about this post for a bit shifted my view of Elon Musk a bit. He gets flack for calling for an AI pause, and then going and starting an AGI lab, and I now think that’s unfair.
I think his overall strategic takes are harmful, but I do credit him with being basically the only would-be AGI-builder who seems to me to be engaged in a reformative hypocrisy strategy. For one thing, it sounds like he went out of his way to try to get AI regulated (talking to congress, talking to the governors), and supported SB-1047.
I think it’s actually not that unreasonable to shout “Yo! This is dangerous! This should be regulated, and controlled democratically!”, see that that’s not happening, and then go and try to do it in a way that you think is better. That seems like possibly an example of “follower-conditional leadership”: taking real action to shift to the better equilibrium, failing, and then going back to the dominant strategy given the inadequate equilibrium that exists.
Obviously he has different beliefs than I do, and than my culture does, about what is required for a good outcome. I think he’s still causing vast harms, but I think he doesn’t deserve the eye-roll for founding another AGI lab after calling for everyone to stop.
You may be right. Maybe the top talent wouldn’t have gotten on board with that mission, and so it wouldn’t have gotten top talent.
I bet Ilya would have been in for that mission, and I think a surprisingly large number of other top researchers might have been in for it as well. Obviously we’ll never know. And I think if the founders are committed to a mission, and they reaffirm their commitment in every meeting, they can go surprisingly far in making it the culture of an org.
Also, Sam Altman is a pretty impressive guy. I wonder what would have happened if he had decided to try to stop humanity from building AGI, instead of trying to be the one to do it instead of Google.
Absolutely true.
But also Altman’s actions since are very clearly counter to the spirit of that email. I could imagine a version of this plan, executed with earnestness and attempted cooperativeness, that wasn’t nearly as harmful (though still pretty bad, probably).
Part of the problem is that “we should build it first, before the less trustworthy” is a meme that universalizes terribly.
Part of the problem is that Sam Altman was not actually sincere in the execution of that sentiment, regardless of how sincere his original intentions were.
I predict this won’t work as well as you hope because you’ll be fighting the circadian effect that partially influences your cognitive performance.
Also, some ways to maximize your sleep quality are to exercise very intensely and/or to sauna the day before.
It’s possible no one tried literally “recreate OkC”, but I think dating startups are very oversubscribed by founders, relative to interest from VCs.
If this is true, it’s somewhat cruxy for me.
I’m still disappointed that no one cared enough to solve this problem without VC funding.
I only skimmed the retrospective now, but it seems mostly to be detailing problems that stymied their ability to find traction.
Right. But they were not relentlessly focused on solving this problem.
I straight up don’t believe that the problems outlined can’t be surmounted, especially if you’re going for a cashflow business instead of an exit.
But I think you’re trying to draw an update that’s something like “tech startups should be doing an unbiased search through viable valuable businesses, but they’re clearly not”, or maybe, “tech startups are supposed to be able to solve a large fraction of our problems, but if they can’t solve this, then that’s not true”, and I don’t think either of these conclusions seems that licensed from the dating data point.
Neither of those, exactly.
I’m claiming that the narrative around the startup scene is that they are virtuous engines of [humane] value creation (often counter to a reactionary narrative that “big tech” is largely about exploitation and extraction). It’s about “changing the world” (for the better).
This opportunity seems like a place where one could have traded meaningfully large personal financial EV for enormous amounts of humane value. Apparently no founder wanted to take that trade. Because I would expect there to be variation in how much founders are motivated by money vs. making a mark on the world vs. creating value vs. other stuff, the fact that (to my knowledge) no founder went for it is evidence about the motivations of the whole founder class. The number of founders who are more interested in creating something that helps a lot of people than they are in making a lot of money (even if they’re interested in both) is apparently very small.
Now, maybe startups actually do create lots of humane value, even if they’re created by founders and VCs motivated by profit. The motivations of the founders are only indirect evidence about the effects of startups.
But the tech scene is not motivated to optimize for this at all?? That sure does update me about how much the narrative is true vs. propaganda.
Now if I’m wrong and old OkCupid was only drastically better for me and my unusually high verbal intelligence friends, and it’s not actually better than the existing offerings for the vast majority of people, that’s a crux for me.
You mention manifold.love, but also mention it’s in maintenance mode – I think because the type of business you want people to build does not in fact work.
From their retrospective:
Manifold.Love is going into maintenance mode while we focus on our core product. We hope to return with improvements once we have more bandwidth; we’re still stoked on the idea of a prediction market-based dating app!
It sounds less like they found it didn’t work, and more like they have other priorities and aren’t (currently) relentlessly pursuing this one.
I didn’t say Silicon Valley is bad. I said that the narrative about Silicon Valley is largely propaganda, which can be true independently of how good or bad it is, in absolute terms, or relative to the rest of the world.
Yep. I’m aware, and strongly in support.
But it took this long (and even now, isn’t being done by a traditional tech founder). This project doesn’t feel like it ameliorates my point.
The fact that there’s a sex recession is pretty suggestive that Tinder and the endless stream of Tinder clones don’t serve people very well.
Even if you don’t assess potential romantic partners by reading their essays, like I do, OkC’s match percentage meant that you could easily filter out 95% of the pool to people who are more likely to be compatible with you, along whatever metrics of compatibility you care about.
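For context, OkCupid publicly described how that match percentage worked: each user rates how important each question is to them and which answers they’d accept from a partner; your “satisfaction” with someone is the importance-weighted fraction of your questions they answered acceptably, and the match percentage is roughly the geometric mean of the two satisfactions. A minimal sketch (the point values below are illustrative stand-ins, not OkC’s real weights):

```python
from math import sqrt

# Illustrative importance weights; OkC's actual point values differed.
WEIGHTS = {"irrelevant": 0, "a_little": 1, "somewhat": 10, "very": 50}

def satisfaction(my_prefs, their_answers):
    """My satisfaction with their answers: earned points / possible points."""
    earned = possible = 0
    for q, (acceptable, importance) in my_prefs.items():
        w = WEIGHTS[importance]
        possible += w
        if their_answers.get(q) in acceptable:
            earned += w
    return earned / possible if possible else 0.0

def match_percent(a_prefs, a_answers, b_prefs, b_answers):
    """Geometric mean of each side's satisfaction with the other."""
    s_ab = satisfaction(a_prefs, b_answers)  # A's satisfaction with B
    s_ba = satisfaction(b_prefs, a_answers)  # B's satisfaction with A
    return 100 * sqrt(s_ab * s_ba)

# Tiny hypothetical example: two users, two questions.
a_prefs = {"smoking": ({"never"}, "very"), "kids": ({"want", "maybe"}, "somewhat")}
a_answers = {"smoking": "never", "kids": "want"}
b_prefs = {"smoking": ({"never"}, "somewhat"), "kids": ({"want"}, "a_little")}
b_answers = {"smoking": "never", "kids": "no"}

print(round(match_percent(a_prefs, a_answers, b_prefs, b_answers), 1))  # → 91.3
```

The geometric mean is why filtering on match percentage works as a two-sided compatibility cut: one side being dissatisfied drags the score down sharply, so a high percentage requires both people to clear each other’s bar.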
That no one rebuilt old OkCupid updates me a lot about how much the startup world actually makes the world better
The prevailing ideology of San Francisco, Silicon Valley, and the broader tech world, is that startups are an engine (maybe even the engine) that drives progress towards a future that’s better than the past, by creating new products that add value to people’s lives. I now think this is true in a limited way. Software is eating the world, and lots of bureaucracy is being replaced by automation which is generally cheaper, faster, and a better UX. But I now think that this narrative is largely propaganda.
That it’s been 8 years since Match bought and ruined OkCupid, and no one in the whole tech ecosystem has stepped up to make a dating app even as good as old OkC, is a huge black mark against the whole SV ideology of technology changing the world for the better.
Finding a partner is such a huge, real, pain point for millions of people. The existing solutions are so bad and extractive. A good solution has already been demonstrated. And yet not a single competent founder wanted to solve that problem for planet earth, instead of doing something else, that (arguably) would have been more profitable. At minimum, someone could have forgone venture funding and built this as a cashflow business.
It’s true that this is a market that depends on economies of scale, because the quality of your product is proportional to the size of your matching pool. But I don’t buy that this is insurmountable. Just like with any startup, you start by serving a niche market really well, and then expand outward from there. (The first niche I would try for is by building an amazing match-making experience for female grad students at a particular top university. If you create a great experience for the women, the men will come, and I’d rather build an initial product for relatively smart customers. But there are dozens of niches one could try for.)
But it seems like no one tried to recreate OkC, much less create something better, until the manifold team built manifold.love (currently in maintenance mode)? Not that no one succeeded: to my knowledge, no one else even tried. Possibly Luna counts, but I’ve heard through the grapevine that they spent substantial effort running giant parties, compared to actually developing and launching their product, from which I infer that they were not very serious. I’ve been looking for good dating apps. I think if a serious founder were trying seriously, I would have heard about it.
Thousands of founders a year, and no one?! That’s such a massive failure, for almost a decade, that it suggests to me that the SV ideology of building things that make people’s lives better is broadly propaganda. The best founders might be relentlessly resourceful, but a tiny fraction of them seem to be motivated by creating value for the world, or this low hanging fruit wouldn’t have been left hanging for so long.
This is of course in addition to the long list of big tech companies who exploit their network-effect monopoly power to extract value from their users (often creating negative societal externalities in the process), more than creating value for them. But it’s a weaker update that there are some tech companies that do ethically dubious stuff, compared to the stronger update that there was no startup that took on this obvious, underserved, human problem.
My guess is that the tech world is a silo of competence (because competence is financially rewarded), but operates from an ideology with major distortions / blindspots that are disconnected from commonsense reasoning about what’s Good, e.g. following profit incentives, and excitement about doing big things (independent of whether those big things have humane or inhumane impacts), off a cliff.
my current guess is that the superior access to large datasets of big institutions gives them too much of an advantage for me to compete with, and I’m not comfortable with joining one.
Very much a side note, but the way you phrased this suggests that you might have ethical concerns? Is that right? If so, what are they?
Advice that looked good: buy semis (TSMC, NVDA, ASML, TSMC vol in particular)
Advice that looked okay: buy bigtech
Advice that looked less good: short long bonds
None of the above was advice, remember! It was...entertainment or something?
Note that all of this happened before the scaling hypothesis was really formulated, much less made obvious.
We now know, with the benefit of hindsight, that developing AI and its precursors is extremely compute intensive, which means capital intensive. There was some reason to guess this might be true at the time, but it wasn’t a foregone conclusion; it was still an open question whether the key to AGI would be mostly some technical innovation that hadn’t been developed yet.