I mostly agree with you… but:
1. You created a false dichotomy where budgeting excludes investing, then said you can’t ever make money with budgeting because, by definition, anything in your budget cannot make money. But spending on a house instead of rent, for example, clearly violates that assumption.
2. Budgeting can include time as well as money, and that matters because, among other things, income isn’t a fixed quantity over time. People can use time to drastically change their income, for example by starting a business or taking night classes to get a more lucrative job.
3. The last point conflates two questions, because yes, inheriting or having a trust fund is already being rich (though inheritances and trust funds are not the same thing!). However, most second- and third-generation nouveau riche folks do, in fact, spend down and waste the inherited fortune once they are old enough to actually inherit instead of living on a managed trust fund.
You interpreted this as defending boots theory, which wasn’t my intent. I said that there was a real phenomenon, not that boots theory is correct.
And sure, renting can come out ahead in some cases, but that doesn’t imply it’s always better, or that upper middle class people generally come out ahead in practice; even where you could end up ahead renting, the money saved is often spent instead of invested.
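To make that concrete, here is a toy comparison with entirely made-up numbers (a hypothetical $500/month rent-versus-own difference and a 7% annual return); the renter only pulls ahead if that difference is actually invested.

```python
# Toy comparison, made-up numbers, not financial advice: renting only "comes out
# ahead" if the monthly savings relative to owning are invested rather than spent.
def future_value(monthly_contribution, annual_return, years):
    balance, r = 0.0, annual_return / 12
    for _ in range(years * 12):
        balance = balance * (1 + r) + monthly_contribution
    return balance

savings = 500                               # hypothetical monthly rent-vs-own difference
invested = future_value(savings, 0.07, 30)  # renter who invests the difference every month
spent = 0.0                                 # renter who spends it has nothing compounding
print(f"invested: ${invested:,.0f}   spent: ${spent:,.0f}")
```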
Also, I think the claimed non-sequitur isn’t one—lots of passive income routes don’t require much financial investment, and many that do, like starting a company, can be financed with loans. The point is that people choose to invest their limited time and money in ways that do not build wealth. (Which isn’t a criticism—there are plenty of other, better, goals in life.)
Agreed
I mentioned that there should be much more impressive behavior if they were that smart...
A counterargument is that it takes culture to build cumulative knowledge to build wealth to create cognitive tools that work well enough to do obviously impressive things. And 50,000 individuals distributed globally isn’t enough to build that culture.
There is still a real phenomenon where people spend a lot buying poor-quality things instead of longer-lasting, higher-quality ones. At the extreme, this is paying rent instead of buying and building equity, or buying consumable goods instead of investments, or working jobs instead of building passive income; those are all ways of using money that don’t build up generational wealth.
I strongly second a number of the recommendations made here about who to reach out to and where to look for more information. If you’re looking for somewhere to donate, the Long-Term Future Fund is an underfunded and very effective funding mechanism. (If you’d like more control, you could engage with the Survival and Flourishing Fund, which has a complex process to make recommendations.)
My headcanon for the animals was that early on, they released viruses that genetically modified non-human animals in ways that don’t violate the pact.
I didn’t think the pact could have been as broad as “the terrestrial Earth will be left unmodified,” because the causal impact of their actions certainly changed things. I assumed it was something like “AIs and AI-created technologies may not do anything that interferes with humans’ actions on Earth, or harms humans in any way.” But genetic engineering instructions sent from outside the Earth, presumably pre-collapse, didn’t qualify, because they didn’t affect humans directly: they made animals affect humans, which was parsed as similar to the environment’s impact on humans rather than as an AI technology.
Yes, except that as soon as AI can replace the other sources of friction, we’ll have a fairly explosive takeoff; he thinks these sources of friction will stay forever, while I think they are only current barriers. The engine for radical takeoff isn’t going to be traditional processes adopting the models in individual roles; it will be new business models developed to take advantage of the technology.
Much like early TV was just video of people putting on plays, and it took time for people to realize the potential. Once they did, they didn’t make plays that were better suited for TV; they did something that actually used the medium well. And what using AI well would mean, in terms of business implications, is cutting out human delays, inputs, and required oversight. Which is worrying for several reasons!
I mostly agree, but “the reference class of gamers who put forth enough effort to beat the game” is still necessarily truncated by omitting any who nonetheless failed to complete it, and is likely also omitting gamers embarrassed by how long it took them.
Meanwhile, the average human can beat the entirety of Red in just 26 hours, and with substantially less thought per hour.
I mostly agree with the post, but this number is absolutely bullshit. What you could more honestly claim, given the link, is that the average completion time among hardcore gamers who both finished the game and then entered their time into this type of website is 26 hours. That’s an insanely different claim. In fact, I would be shocked if even 50% of people who have played a Pokemon game have completed it at all, much less done so in under a week of playtime.
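To make the truncation point concrete, here is a toy simulation with entirely invented numbers (nothing here is real Pokemon data): conditioning on finishing, and then on bothering to report a time, pulls the measured average well below the true one.

```python
# Toy simulation, invented numbers: averaging only over players who finished AND
# self-reported their time understates how long the game takes the typical player.
import random
random.seed(0)

true_hours = [random.lognormvariate(3.6, 0.6) for _ in range(100_000)]  # hypothetical time-to-beat

# Players facing a longer slog are less likely to ever finish...
finished = [t for t in true_hours if random.random() < min(1.0, 30 / t)]
# ...and finishers embarrassed by a long time are less likely to report it.
reported = [t for t in finished if t <= 35 or random.random() < 0.3]

print(f"true mean:     {sum(true_hours) / len(true_hours):.1f} h")
print(f"reported mean: {sum(reported) / len(reported):.1f} h")
```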
I’m not sure we’d see this as starkly if people can change roles and shift between job types, but haven’t we already seen firms engage in large rounds of layoffs and then follow up by not hiring as many coders over the past couple of years?
Lemonade is doing something like what you describe in insurance. I suspect other examples exist. But most market segments, even in “pure” software, don’t revolve around only the software product, so it takes longer for it to become obvious when better products emerge.
My understanding of the situation, from speaking to people who write code at normal firms, and to their management, is that this is all about the theory of constraints. As a simplified example: if you previously needed one business analyst, one QA tester, and one programmer for a day each to do a task, and the programmer’s efficiency doubles, or quintuples, the impact on output is zero, because the firm isn’t set up to go much faster.
Firms need to rebuild their processes around this to take advantage, and that’s only starting to happen, and only at some firms.
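A minimal sketch of that bottleneck arithmetic, with made-up stage rates: a serial pipeline’s throughput is set by its slowest stage, so a faster programmer alone doesn’t change output.

```python
# Toy model, invented numbers: a pipeline's throughput is the minimum of its stage rates.
def tasks_per_day(stage_rates):
    return min(stage_rates.values())

before = {"business analyst": 1.0, "QA tester": 1.0, "programmer": 1.0}
after = {"business analyst": 1.0, "QA tester": 1.0, "programmer": 5.0}  # 5x faster coding

print(tasks_per_day(before))  # 1.0 task/day
print(tasks_per_day(after))   # still 1.0 task/day; the constraint just moved elsewhere
```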
This seems reasonable, though the efficacy of the learning method is unclear to me.
But:
with a heavily-reinforced constraint that the author vectors are identical for documents which have the same author
This seems wrong. To pick on myself: my peer-reviewed papers, my Substack, my LessWrong posts, my 1990s blog posts, and my Twitter feed are all substantively different in ways that I think the author vector should capture.
There’s a critical (and interesting) question about how you generate the latent space of authors, and/or how it is inferred from the text. Did you have thoughts on how this would be done?
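For concreteness, one standard way something like this could be done (this is my guess at an approach, not necessarily what the post intends) is a learned embedding table keyed by author ID, trained jointly with the text model, so the latent space emerges from gradient descent rather than being specified up front:

```python
# Illustrative sketch only, not the post's method: author vectors as a learned
# embedding table, trained jointly with a small conditional language model.
import torch
import torch.nn as nn

class AuthorConditionedLM(nn.Module):
    def __init__(self, vocab_size=50_000, n_authors=10_000, d_model=512):
        super().__init__()
        self.author_emb = nn.Embedding(n_authors, d_model)  # one shared vector per author ID
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, author_ids, tokens):
        # Condition generation on the author vector via the initial hidden state.
        h0 = self.author_emb(author_ids).unsqueeze(0)  # (1, batch, d_model)
        x = self.token_emb(tokens)                     # (batch, seq, d_model)
        out, _ = self.rnn(x, h0)
        return self.head(out)                          # next-token logits

# Training on next-token prediction pushes all of an author's documents toward one shared
# vector; a vector for a new author could be inferred by freezing the model and optimizing
# a fresh embedding against their text.
```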
That is completely fair, and I was being uncharitable (which is evidently what happens when I post before I have my coffee, apologies.)
I do worry that we’re not being clear enough that we don’t have solutions for this worryingly near-term problem, and think that there’s far too little public recognition that this is a hard or even unsolvable problem.
it could be just as easily used that way once there’s a reason to worry about actual alignment of goal-directed agents
This seems to assume that we solve various Goodhart’s law and deception problems.
Assuming that timelines are exogenous, I would completely agree—but they are not.
The load-bearing assumption here seems to be that we won’t make unaligned superintelligent systems with current methods soon enough for it to matter.
This seems false, and at the very least should be argued explicitly.
8 hours of clock time for an expert seems likely to be enough to do anything humans can do; people rarely work productively in longer chunks than that, and as long as we assume models are capable of task breakdown and planning (which seems like a nontrivial issue, but an easier one than the scaling itself), that should allow them to parallelize and serialize chunks to do larger human-type tasks.
But it’s unclear that alignment can be solved by humans at all, and even if it can, there is of course no reason to think these capabilities would scale as well for alignment as they do for capabilities and self-improvement, so this is not at all reassuring to me.