Yes, it’s true that beginning almost any form of exercise will deliver most of the benefits compared to what an optimal routine would! But your post is all about trying to be as time-efficient as possible (e.g. you also discussed the potential to use drop sets to go even beyond failure!).
For the vast majority of people reading this post—if their goal is to get the greatest possible benefit from resistance training in the shortest amount of time—the biggest mistake they’re making right now is not making their sets difficult enough. You’re right that you don’t need to go to failure to get most of the benefits, but if time efficiency is the goal, spending the extra 15 seconds to add those two final pre-failure reps to a set is the first thing I’d recommend.
(This comment is directed more at the rest of the audience who I think are likely to take the wrong lessons from this post, rather than OP who I don’t doubt knows what they are talking about and has found success following their own advice)
[For supersets] the main downside being that the overall greater work packed in to a smaller amount of time being somewhat more motivationally taxing.
At least for me personally, this is an overwhelmingly important consideration! There’s no way in hell I could go to failure on both ends of a superset at once without throwing up.
(I just go 1 or two reps shy of failure and don’t worry too much about perfect weight selection to hit the same rep ranges all the time). Studies have found even 2 sets to failure once a week to be sufficient for maintenance if you are super strapped for time. I normally do this twice a week.
This might work for OP, but I don’t think it’s good advice for most readers (80% of whom I’d bet have never actually pushed a weight lifting set to failure and will be mistaken about where their failure point actually is).
If you aren’t tracking your lifts exactly, recording them each time, and forcing yourself to do more than the previous time every time you return to the gym, it’s very hard to know for sure that you were actually 2 reps shy of failure.
We can experience a lot of discomfort/panic well before the point at which we actually fail (for most weight exercises, if you’re not involuntarily making loud yelling noises during the last couple of reps, you’re probably not anywhere close to true failure).
If you’re reading this, and you don’t already have a book/app where you write down exactly how much weight you lifted and are progressively increasing that, do this first!
I hear you that teenagers spending hours computing hundreds of analytic derivatives or writing a bunch of essays is a pretty sad waste of time… But if incentives shifted so that time instead got spent perfecting a StarCraft build order against the AI a hundred times, or grinding for years to pull off the carpetless star in SM64, that might be one of the few ways to make the time spent even more pointless… (And for most people the latter is no more fun than the former.)
You might be right that the concept only applies to specific subcultures (in my case, educated relatively well-off Australians).
Maybe another test could be—can you think of someone you’ve met in the past who a critic might describe as “rude/loud/obnoxious” but despite this, they seem to draw in lots of friends and you have a lot of fun whenever you hang out with them?
Inspired by: Failures in Kindness
Maybe an analogy which seems closer to the “real world” situation—let’s say you and someone like Sam Altman both tried to start new companies. How much more time and starting capital do you think you’d need to have a better shot of success than him?
Out of interest—if you had total control over OpenAI—what would you want them to do?
I think OP is correct about cultural learning being the most important factor in explaining the large difference in intelligence between homo sapiens and other animals.
In the early chapters of The Secret of Our Success, the book examines studies comparing the performance of young humans and young chimps on various cognitive tasks. The book argues that across a broad array of cognitive tests, 4-year-old humans do not perform significantly better than 4-year-old chimps on average, except in cases where the task can be solved by imitating others (human children crushed the chimps when this was the case).
The book makes a very compelling argument that our species is uniquely prone to imitating others (even in the absence of causal models about why the behaviour we’re imitating is useful), and that even very young humans have innate instincts for picking up on signals of prestige/competence in others and preferentially imitating those high-prestige people. Imo the arguments put forward in this book make cultural learning look like a much stronger theory than the Machiavellian intelligence hypothesis (although what actually happened at a lower level of abstraction probably includes aspects of both).
If we expect there will be lots of intermediate steps—does this really change the analysis much?
How will we know once we’ve reached the point where there aren’t many intermediate steps left before crossing a critical threshold? How do you expect everyone’s behaviour to change once we do get close?
It doesn’t make sense to use the particular consumer’s preferences to estimate the cruelty cost. If that’s how we define the cruelty cost, then the buyer should already be taking it into account when making their purchasing decision, so it’s not an externality.
The externality comes from the animals themselves having interests which the consumers aren’t considering.
(This is addressed both to concerned_dad and to “John” who I hope will read this comment).
Hey John, I’m Xavier—hope you don’t mind me giving this unsolicited advice, but I’d love to share my take on your situation, along with some personal anecdotes about myself and my friends (who I suspect had a lot in common with you when we were your age) which I think you might find useful. I bet you’re often frustrated by adults thinking they know better than you, especially when most of the time they’re clearly not as sharp as you are and don’t seem to be thinking as deeply as you do about things—I’m 30, and my IQ/raw cognitive horsepower is probably a little below that of the average user of this site—so I’ll do my best not to fall into that same archetype.
First off John I think it’s really fucking cool that you’re so interested in EA/LW especially at such a young age—and this makes me think you’re probably really smart, ambitious, overflowing with potential, and have a huge amount of love and concern for your fellow sentient beings. Concerned dad: on balance, the fact that your son has a passionate interest in rationality and effective altruism is probably a really great thing—he’s really lucky to have a father who loves him and cares about him as much as you clearly do. Your son’s interest in rationality+effective altruism suggests you’ve helped produce a young man who’s extremely intelligent and has excellent values (even though right now they’re being expressed in a concerning way which makes you understandably worried). I’m sure you care deeply about John leading a happy life in which he enjoys wonderful relationships, an illustrious career, and makes great contributions to society. In the long run, engaging with this community can offer huge support in service of that goal.
Before I say anything about stimulants/hallucinogens on the object level John, there’s a point I want to make on the meta level—something which I failed to properly appreciate until a full 9 years after I read the sequences:
Meta Level
Imagine two people, Sam and Eric, a pair of friends who are both pretty smart 15-year-olds. Sam and Eric are about equally clever and share similar interests and opinions on most things, with the only major personality difference being that Sam has a bit more of an iconoclastic leaning and is very willing to act on surprising conclusions when provided with strong rational arguments in favor of them, while Eric is a bit more timid about doing things which his parents/culture deem strange and has a higher respect for Chesterton fences.
One night Sam and Eric go to a party together filled with a bunch of slightly older university undergraduates who all seem really cool and smart, and they end up in a fascinating philosophical conversation, which could be about a great many different things.
Perhaps Sam and Eric ended up chatting with a group of socialists. People in the circle made a lot of really compelling points about the issues with wealth inequality—Sam and Eric both learn some really shocking facts about the level of wealth inequality in our society, and they hear contrasting anecdotes from people who came from families of wildly different levels of wealth, whose stories make it clear that the disparity of advantage, opportunity and dignity they each experienced growing up was deeply unfair. Now I’m sure you, John, are already way too smart to fall for this (as may be the case for every example I’m about to give), but let’s imagine that Eric and Sam have never read something like I, Pencil, so they don’t yet have a good level of industrial literacy or a deep grasp of how impossible it is to coordinate a modern society without the help of price signals. Thus, the pair walk away from this conversation knowing many compelling arguments in favor of communism—and the only reason they have so far to doubt becoming communists is a vague sense of “this seems extreme, and responsible adults usually say this is a really bad idea…”.
After this conversation, Eric updates a little more in favor of wealth redistribution, and Sam becomes a full-on Marxist. Ten years from now, which of the pair do you think will be doing better in life?
But maybe the conversation wasn’t about communism! Maybe it was all about how, even though most people think the purpose of school is to educate children, actually there’s very little evidence that getting kids to study more has much of an effect on lifetime income, and the real purpose of school is basically publicly funded babysitting. Upon realizing this, Eric updates a little bit towards not stressing too much about his English grades, but still gets out of bed and heads to school with the rest of his peers every day—while Sam totally gives up on caring about school, starts sleeping in, skips class almost every day, and plays a shitload of DOTA2 alone in his room instead. If this was all you knew about Sam and Eric, ten years from now, which of the two would you expect to be doing better in their career/relationships?

Or perhaps the conversation is all about climate change—Sam and Eric are both exposed to a group of people who are very knowledgeable about ecology and climate systems, and who are all extremely worried about global warming. Eric updates slightly towards taking this issue seriously and supporting continued investment in clean energy technology, while Sam makes a massive update towards believing that the world in 30 years is likely to be nearly inhospitable, resolves never to tie his money up in a retirement savings account, and commits to never having children due to their CO2 footprint. Again, 10 years from now, I think Eric’s reluctance to make powerful updates away from what’s “normal” will leave him in a better position than Sam.
Or maybe the conversation was about traditional norms surrounding marriage/monogamy. Sam and Eric are both in great relationships, but now, for the first time, are exposed to a new and exciting perspective which asks questions like
“Why should one person have the right to tell another person who she/he can and can’t sleep with?”
“If I love my girlfriend, why shouldn’t I feel happy for her when she’s enjoying another partner rather than jealous?”
“Think about the beautiful feelings we experience being with our current partners, imagine how amazing it would be to multiply that feeling by many similar concurrent romantic relationships!”
Eric hears all this, finds it pretty interesting/compelling, but decides that the whole polyamory thing still feels a bit unusual, marries his wonderful childhood sweetheart anyway, and they buy a beautiful house together. Sam, on the other hand, makes a strong update in favor of polyamory—he convinces his girlfriend that they ought to try an open relationship, and then ends up experiencing a horrific amount of jealousy/rage when his girlfriend starts a new relationship with one of his other friends, eventually leading to immense suffering and the loss of many previously great relationships.
Maybe they chatted about decentralized finance, and while Eric still kept 80% of his money in a diversified index fund, Sam got really into liquidity pooling+yield farming inflationary crypto tokens while hedging against price fluctuations using perpetual futures on FTX.
Maybe it was a chat about having an attractive physique—Eric starts exercising a little extra and eating a bit less junk food, whilst Sam completely stops eating his parents’ cooking, orders a shitload of pre-workout formula from overseas with a possibly dishonest ingredients list, starts hitting the gym 5 times a week, obsessively measures his arms with a tape measure, feels ashamed that he’s not as big as Chris Hemsworth, and sets 3am alarms so that he can wake up in the middle of the night to force more blended chicken breast down his throat.
Maybe it’s a chat about how group living actually makes a lot of sense and enables lots of economies of scale/gains from trade. Eric resolves to try out a 4-person group house when he moves out of his parents’ place, whilst Sam convinces a heap of friends to move out and start a 12-person house next month (which is predictably filthy and overrun with interpersonal drama, and eventually dissolves with everyone leaving on less-than-friendly terms).
Maybe they thought deeply about whether money really makes you happy beyond a certain level or not, and then upon reflection, Eric did a Google summer internship anyway while Sam didn’t bother to apply.
Or maybe the conversation was about one of countless other topics where thinking too much for yourself can be extremely dangerous! Especially for a sixteen year old.
I know it’s unfair for me to only write stories where Eric wins and Sam loses—and there are definitely some occasions where that’s not true! Sometimes Eric does waste time studying for a test that doesn’t matter, maybe Eric would have got better results in the gym if he’d started on creatine sooner, maybe he should have taken Sam’s advice to bet more money on Biden winning the 2024 US election—but when Eric messes up by following the cultural wisdom too closely, it’s never a total disaster. In the worst case, Eric still ends up moderately happy and moderately successful, but when Sam makes a mistake in the opposite direction, the downsides can be catastrophic.
Every single one of those anecdotes maps directly onto a real thing that’s actually happened to me or my partner or one of our LW-adjacent friends between the ages of 15 and 30.
John, just because you are smarter and better able to argue than the vast majority of people living within a culture, that doesn’t mean you’re smarter than the aggregated package of norms, taboos and cultural practices which has evolved around you (even if most of the time nobody can clearly justify them). If you haven’t read The Secret of Our Success yet, you should definitely check it out! It makes this point in ruthlessly convincing fashion.
The midwit meme format is popular for a reason—the world is filled with intellectual traps for smart people to fall into when they’re not wise enough to give the appropriate credit to “common sense” over their own reasoning on every single question.

Object Level
When faced with a situation similar to yours, what do we think Sam/Eric might each do?
Eric would perhaps start taking 100-300mg of caffeine each day (setting strict upper limits on usage), or even start cautiously experimenting with chewing a couple milligrams worth of nicotine gum on days when he has heaps of study to do.
Sam, on the other hand, might google the diagnostic criteria for ADHD and lie to a psychiatrist in order to obtain an illegitimate Adderall prescription.
I know this is only anecdotal, but I’ve witnessed this exact situation play out multiple times among my close friends, and each time dexamphetamine use has come just a little before disastrous outcomes (which I can’t prove are linked to drug abuse, but it’s very plausible).

Once you’re 18 years old your dad has no right to control your behaviour, but nonetheless, the support he’s able to offer could still be hugely valuable to you for decades to come, so I’m sure there is a massive space of mutually beneficial agreements you could come to involving you promising not to start using illegal/prescription drugs.
John and Concerned Dad, I’d love to chat more about this with either of you (and offer an un-anonymised version of literally all these anecdotes), so please feel free to send me a private message!
For the final bet (or the induction base for a finite sequence), one cannot pick an amount without knowing the zero-point on the utility curve.
I’m a little confused about what you mean, sorry.
What’s wrong with this example?:
It’s time for the final bet: I have $100 and my utility is $\log(\text{wealth})$. I have the opportunity to bet on a coin which lands heads with probability $p = 0.75$, at even odds.
If I bet $x$ dollars on heads, then my expected utility is $0.75\log(100+x) + 0.25\log(100-x)$, which is maximized when $x = 50$.
So I decide to bet 50 dollars.
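To spell out the maximization step (a minimal sketch of the first-order condition, assuming the even-odds, $p = 0.75$ setup above with starting wealth $W = 100$):

$$
U(x) = p\log(W+x) + (1-p)\log(W-x), \qquad
U'(x) = \frac{p}{W+x} - \frac{1-p}{W-x} = 0
\;\Longrightarrow\; x^{*} = (2p-1)\,W,
$$

so with $p = 0.75$ and $W = 100$ the optimum is $x^{*} = 50$, and rescaling the utility function by a constant doesn’t change this.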
What am I missing here?
As far as I can tell, the fact that you only ever control a very small proportion of the total wealth in the universe isn’t something we need to consider here.
No matter what your wealth is, someone with log utility will treat a prospect of doubling their money to be exactly as good as it would be bad to have their wealth cut in half, right?
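To make that concrete (this is just the basic log identity, so nothing here depends on the particular wealth level $W$):

$$
\log(2W) - \log(W) = \log 2 = \log(W) - \log\!\left(\tfrac{W}{2}\right),
$$

i.e. the utility gained from doubling exactly cancels the utility lost from halving, whatever $W$ is.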
Thanks heaps for the post man, I really enjoyed it! While I was reading it felt like you were taking a bunch of half-baked vague ideas out of my own head, cleaning them up, and giving some much clearer, more developed versions of those ideas back to me :)
Okay, I agree. Thanks :)
Thanks for the response!
Input/output: I agree that the unnatural input/output channel is just as much a problem for the ‘intended’ model as for the models harbouring consequentialists, but I understood your original argument as relying on there being a strong asymmetry where the models containing consequentialists aren’t substantially penalised by the unnaturalness of their input/output channels. An asymmetry like this seems necessary because specifying the input channel accounts for pretty much all of the complexity in the intended model.
Computational constraints: I’m not convinced that the necessary calculations the consequentialists would have to make aren’t very expensive (from their point of view). They don’t merely need to predict the continuation of our bit sequence—they have to run simulations of all kinds of possible universes to work out which ones they care about and where in the multiverse Solomonoff inductors are being used to make momentous decisions, and then they perhaps need to simulate their own universe to work out which plausible input/output channels they want to target. If they do all this, then all they get in return is a pretty measly influence over our beliefs (since they’re competing with many other daemons in approximately similar universes who have opposing values). I think there’s a good chance these consequentialists might instead elect to devote their computational resources to realising other things they desire (like simulating happy copies of themselves or something).
Weak arguments against the universal prior being malign
Thanks for your comment, I think I’m a little confused about what it would mean to actually satisfy this assumption.
It seems to me that many current algorithms, for example a Rainbow DQN agent, would satisfy assumption 3? But like I said, I’m super confused about anything resembling questions about self-awareness/naturalisation.
Sorry for the late response! I didn’t realise I had comments :)
In this proposal we go with (2): The AI does whatever it thinks the handlers will reward it for.
I agree this isn’t as good as giving the agents an actually safe reward function, but if our assumptions are satisfied then this approval-maximising behaviour might still result in the human designers getting what they actually want.
What I think you’re saying (please correct me if I misunderstood) is that an agent aiming to do whatever its designers reward it for will be incentivised to do undesirable things to us (like wiring up our brains to machines which make us want to press the reward button all the time).
It’s true that the agents will try to take these kinds of nefarious actions if they think they can get away with it. But in this setup the agent knows that it can’t get away with tricking the humans like this, since its ancestors already warned the humans that a future agent might try this, and the humans prepared appropriately.
I think an important consideration being overlooked is how competently a centralised project would actually be managed.
In one of your charts, you suggest that worlds where there is a single project will make progress faster due to “speedup from compute amalgamation”. This is not necessarily true. It’s very possible that different teams would make progress at very different rates even if given identical compute resources.
At a boots-on-the-ground level, the speed of progress an AI project makes will be influenced by thousands of tiny decisions about how to:
Manage people
Collect training data
Prioritize research directions
Debug training runs
Decide who to hire
Assess people’s performance and decide who should be promoted to more influential positions
Manage code quality/technical debt
Design+run evals
Transfer knowledge between teams
Retain key personnel
Document findings
Decide what internal tools to use/build
Handle data pipeline bottlenecks
Coordinate between engineers/researchers/infrastructure teams
Make sure operations run smoothly
The list goes on!
Even seemingly minor decisions like coding standards, meeting structures and reporting processes might compound over time to create massive differences in research velocity. A poorly run organization with 10x the budget might make substantially less progress than a well-run one.
If there was only one major AI project underway it would probably be managed less well than the overall best-run project selected from a diverse set of competing companies.
Unlike with the Manhattan Project, there are already sufficiently strong commercial incentives for private companies to focus on the problem, it’s not yet clear exactly how the first AGI system will work, and capital markets today are more mature and capable of funding projects at much larger scales. My gut feeling is that if AI were fully consolidated tomorrow, this would be more likely to slow things down than speed them up.