I’m really noticing how the best life improvements come from purchasing or building better infrastructure, rather than trying permutations of the same set of things and expecting different results. (Much of this results from having more money, granting an expanded sense of possibility to buying useful things.)
The guiding question is, “What upgrades would make my life easier?” In contrast with the question that is more typically asked: “How do I achieve this hard thing?”
It seems like part of what makes this not just immediately obvious is that I feel a sense of resistance (that I don’t really identify with). Part of that is a sense of… naughtiness? Like we’re supposed to signal how hardworking we are. For me this relates to this fear I have that if I get too powerful, I will break away from others (e.g. skipping restaurants for a Soylent Guzzler Helmet, metaphorically) as I re-engineer my life and thereby invite conflict. There’s something like a fear that buying or engaging in nicer things would be an affront to my internalized model of my parents?
The infrastructure guideline relates closely to the observation that to a first approximation we are stimulus-response machines reacting to our environment, and that the best way to improve is to actually change your environment, rather than continuing to throw resources past the point of diminishing marginal returns in adaptation to the current environment. And for the same reasons, the implications can scare me, for it may imply leaving the old environment behind, and it may even imply that the larger the environmental change you make, the more variance you have for a good or bad update to your life. That would mean we should strive for large positive environmental shifts, while minimizing the risk of bad ones.
(This also gives me a small update towards going to Mars being more useful for x-risk, although I may need to still propagate a larger update in the other direction away from space marketing.)
Of course, most of one’s upgrades should be tiny and within one’s comfort zone. What the portfolio of small vs huge changes one should make in one’s life is an open question to me, because while it makes sense to be mostly conservative with one’s allocation of one’s life resources, I suspect that fear brings people to justify the static zone of safety they’ve created with their current structure, preventing them from seeking out better states of being that involve jettisoning sunk costs that they identify with. Better coordination infrastructure could make such changes easier if people don’t have to risk as much social conflict.
I find the question, “What would change my mind?”, to be quite powerful, psychotherapeutic even. AKA “singlecruxing”. It cuts right through to seeking disconfirmation of one’s model, and can make the model more explicit, legible, object. It’s proactively seeking out the data rather than trying to reduce the feeling of avoidant deflection associated with shielding a beloved notion from assault. Seems like it comports well with the OODA loop as well. Taken from Raemon’s “Keeping Beliefs Cruxy”.
I am curious how others ask this question of themselves. What follows is me practicing the question.
What would change my mind about the existence of the moon? Here are some hypotheses:
I would look up in the sky every few hours for several days and nights and see that it’s not there.
I see over a dozen posts on my Facebook feed talking about how it turns out it was just a cardboard cutout and SpaceX accidentally tore a hole in it. They show convincing video of the accident and footage of people reacting such as leaders of the world convening to discuss it.
Multiple friends are very concerned about my belief in this luminous, reflective rocky body. They suggest I go see a doctor or the government will throw me in the lunatics’ asylum. The doctor prescribes me a pill and I no longer believe.
It turns out I was deluded and now I’m relieved to be sane.
It turns out they have brainwashed me and now I’m relieved to be sane.
I am hit over the head with a rock which permanently damages my ability to form lunar concepts. Or it outright kills me. I think this Goodharts (is that the closest term I’m looking for?) the question, but it’s interesting to know what bad/nonepistemic/out-of-context reasons would make me stop believing in a thing.
These anticipations were System 2 generated and I’m still uncertain to what extent I can imagine them actually happening and changing my mind. It’s probably sane and functional that the mind doesn’t just let you update on anything you imagine, though I also hear the apocryphal saying that the mind 80% believes whatever you imagine is real.
An interesting second exercise you might apply here is taking note of what other beliefs in your network would have to change (you sort of touch on this here). If you find out the moon isn’t real, you’ve found out something very important about your entire epistemic state. This indeed makes updating on it harder or more interesting, at least.
You bring to mind a visual of the Power of a Mind as this dense directed cyclic graph of beliefs where updates propagate in one fluid circuit at the speed of thought.
I wonder what formalized measures of [agency, updateability, connectedness, coherence, epistemic unity, whatever sounds related to this general idea] are put forth by different theories (schools of psychotherapy, predictive processing, Buddhism, Bayesian epistemology, sales training manuals, military strategy, machine learning, neuroscience...) related to the mind and how much consilience there is between them. Do we already know how to rigorously describe peak mental functioning?
Why does CHAI exclude people who don’t have a near-perfect GPA? This doesn’t seem like a good way to maximize the amount of alignment work being done. A high GPA won’t save the world; in fact it selects for obedience to authority and years of status competition, leaving people in poor mental health for the work and decreasing the total amount of cognitive resources being thrown at the problem.
(Hypothesis 1: “Yes, this is first-order bad but the second-order effect is we have one institutionally prestigious organization, and we need to say we have selective GPA in order to fit in and retain that prestige.” [Translator’s Note: “We must work with evil in order to do good.” (The evil being colleges and grades and most of the economic system.)])
(Hypothesis 2: “GPA is the most convenient way we found to select for intelligence and conscientiousness, and those are the traits we need the most.”)
(Hypothesis 3: “The university just literally requires us to do this or we’ll be shut down.”)
What was the most valuable habit you had during the past decade?
What is the most valuable habit you could inculcate or strengthen over the next decade?
(Habit here broadly construed as: “a specific activity that lasts anywhere from a few seconds to half an hour or more.” Example: playing golf each morning. Better example: practicing your driving swing at 6:00am for 30 minutes (but you can give much more detail than that!). Bad example: poorly operationalized, vague statements like “being more friendly”.)
I doubt the premise of “the one thing” book. Just looking at their example, Bill Gates: if he’d had only one skill, computer programming, he would never have gotten rich. (The actual developers of MS-DOS did not get nearly as rich as Bill Gates.) Instead it was a combination of understanding software, being at the right place at the right moment with the right connections, abusing the monopoly and winning the legal battles, etc. So at the very least, it was software development skills plus business skills, the latter much more important than the former.
(To see a real software development demigod, look at Donald Knuth. Famous among the people who care about the craft, but nowhere as rich or generally famous as Bill Gates.)
I would expect similar stories to be mostly post-hoc fairy tales. You do a dozen things; you succeed or get lucky at one and fail at the rest; you get famous for the one thing and everything else is forgotten; a few years later self-improvement gurus write books using you as an example of laser-sharp focus and whatever their current hypothesis is of the magic that creates success. “Just choose one thing, and if you make your choice well, you can ignore everything else and your life will still become a success” is wishful thinking.
Some people get successful by iteration. They do X, and fail. Then they switch to Y, which has enough similarity with X that they have a comparative advantage against people who do Y from scratch, but they fail again. Then they switch to Z, which again has some similarity with Y… and finally they succeed. The switch to Y may happen after decades of trying something else.
Some people get successful by following their dream for decades, but it takes a long time until that dream starts making a profit (some artists only get world-wide recognition after they die), so they need a day job. They probably also need some skills to do the day job well.
To answer your question directly, recent useful habits are exercising and cooking.
(I also exercised before, but that was whatever random thing came to mind at the moment, e.g. only push-ups; the recent habit is a sequence of pull-ups, one-legged squats, push-ups, and leg raises. I also cooked before, but I recently switched to mostly vegetarian meals and found a subset that my family is happy to eat. I also cook more frequently and remember some of the frequent recipes, so at the shop I can spontaneously decide what to cook today and buy the ingredients on the spot, and I can easily multi-task while cooking.)
My next decade will mostly be focused on teaching habits to my kids, because what they do also has a big impact on my daily life. The less they need me to micromanage them, the more time I have for everything else.
How might a person develop INCREDIBLY low time preference? (They value their future selves in decades to a century nearly as much as they value their current selves?)
Who are people who have this, or have acquired this, and how did they do it?
Do these concepts make sense, or might I be misunderstanding something? Tabooing/decomposing them: what is happening cognitively, experientially, when a human mind does this thing?
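One way to make “incredibly low time preference” concrete is standard exponential discounting: valuing your self a century out at, say, 90% of your present self pins down an implied annual discount rate. A minimal sketch, where the 0.9 valuation and the 100-year horizon are illustrative assumptions, not figures from the question:

```python
# Under exponential discounting, value_at_t = d ** t for annual discount
# factor d. So if you want d ** years == target_value, then
# d = target_value ** (1 / years).

def implied_annual_discount_factor(target_value: float, years: float) -> float:
    """Annual discount factor d such that d**years == target_value."""
    return target_value ** (1.0 / years)

# Assumption: value your self 100 years out at 90% of your present self.
d = implied_annual_discount_factor(0.9, 100)
annual_rate = 1 - d  # implied per-year discounting

print(round(d, 5), round(annual_rate * 100, 3))
```

The point of the arithmetic: “nearly as much across a century” corresponds to an annual discount rate of roughly a tenth of a percent, far below the several-percent rates typically inferred from human behavior, which gives a quantitative target for what the question is asking for.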
You might be interested in Tad James’ work on Timeline Therapy. It’s not exactly what I would call epistemically rigorous, but I don’t know any other source that has done as much in-depth modelling of how people represent time, and of how to change people’s representations of time.
I am very interested in practicing steelmanning/Ideological Turing Test with people of any skill level. I have only done it once conversationally and it felt great. I’m sure we can find things to disagree about. You can book a call here.
I’ve mentioned previously that I’ve been digging into a pocket of human knowledge in pursuit of explanations for the success of the traditional Chinese businessman. The hope I have is that some of these explanations are directly applicable to my practice.
Here’s my current bet: I think one can get better at trial and error, and that the body of work around instrumental rationality holds some clues as to how you can get better.
I’ve argued that the successful Chinese businessmen are probably the ones who are better at trial and error than the lousier ones; I posited that perhaps they needed fewer cycles to learn the right lessons to make their businesses work.
I think the body of research around instrumental rationality tells us how they do so. I’m thankful that Jonathan Baron has written a fairly good overview of the field with the fourth edition of Thinking and Deciding. And I think both Ray Dalio’s and Nassim Nicholas Taleb’s writings have explored the implications of some of these ideas. If I were to summarise the rough thrust of these books:
Don’t do trial and error where error is catastrophic.
Don’t repeat the same trials over and over again (aka don’t repeat the same mistakes over and over again).
Increase the number of trials you can do in your life. Decrease the length and cost of each trial.
In fields with optionality (i.e. your downside is capped but your upside is large), the more trials you take, and the cheaper each trial is, the more likely you’ll eventually win. Or, as Taleb says: “randomness is good when you have optionality.”
Write down your lessons and approaches from your previous successful trials, so you may generalise them to more situations (Principles, chapter 5)
Systematically identify the factor that gives positive evidence, and vary that to maximise the expected size of the impact (Thinking and Deciding, chapter 7)
Actively look for disconfirming evidence when you’ve found an approach that seems to work. (Thinking and Deciding, chapter 7, Principles, chapter 3).
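The “many cheap trials under optionality” point above can be sketched numerically. Assuming independent trials with a fixed per-trial success probability, the chance of at least one success is 1 − (1 − p)^n, so cheap trials buy success probability quickly at first and with diminishing returns later. A minimal sketch; the probability p = 0.01, the payoff, and the cost figures are illustrative assumptions, not numbers from the text:

```python
# Illustrative: why many cheap trials win when downside is capped.
# Assumptions (not from the text): trials are independent, each succeeds
# with probability p, a success pays `payoff`, and each trial costs `cost`
# (the capped downside).

def p_at_least_one_success(p: float, n: int) -> float:
    """Probability that at least one of n independent trials succeeds."""
    return 1 - (1 - p) ** n

def expected_profit(p: float, n: int, payoff: float, cost: float) -> float:
    """Expected profit if at least one success pays `payoff` and every trial costs `cost`."""
    return payoff * p_at_least_one_success(p, n) - cost * n

p = 0.01  # assumed per-trial success probability
for n in (10, 100, 500):
    print(n, round(p_at_least_one_success(p, n), 3))
```

With p = 0.01, ten trials give roughly a 10% chance of a win, a hundred give about 63%, and five hundred over 99%; whether the extra trials are worth it depends on how cheap each one is, which is exactly why driving down trial cost matters.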
Don’t do trial and error where error is catastrophic.
Wearing a mask in a pandemic. Not putting ALL of your money on a roulette wheel. Not balancing on a tightrope without a net between two skyscrapers unless you have extensive training. Not posting about controversial things without much upside. Not posting photos of meat you cooked to Instagram if you want to have good acclaim in 200 years when eating meat is outlawed. Not building AI because it’s cool. Not falling in love with people who don’t reciprocate.
The unknown unknown risk that hasn’t been considered yet. Not having enough slack dedicated to detecting this.
Don’t repeat the same trials over and over again (aka don’t repeat the same mistakes over and over again).
If you’ve gone on OkCupid for the past 7 years and still haven’t got a date from it, maybe try a different strategy. If messaging potential tenants on a 3rd-party site doesn’t work, try texting them. If asking questions on Yahoo Answers doesn’t get good answers, try a different site.
Increase the number of trials you can do in your life. Decrease the length and cost of each trial.
Talk to 10x the number of people; message using templates and/or simple one-liners. Invest with Other People’s Money if asymmetric upside. Write something for 5 minutes using Most Dangerous Writing App then post to 5 subreddits. Posting ideas on Twitter instead of Facebook, rationality content on LessWrong Shortform instead of longform. Yoda Timers. If running for the purpose of a runner’s high mood boost, try running 5 times that day as fast as possible. Optimizing standard processes for speed.
In fields with optionality (i.e. your downside is capped but your upside is large), the more trials you take, and the cheaper each trial is, the more likely you’ll eventually win. Or, as Taleb says: “randomness is good when you have optionality.”
Posting content to 10x the people, 10x faster, generally has huge upside (YMMV). Programming something useful, open-sourcing it, and sharing it.
Write down your lessons and approaches from your previous successful trials, so you may generalise them to more situations (Principles, chapter 5)
Roam is good for this, perhaps SuperMemo. Posting things to social media and coming up with examples of the rules is also a good way of learning content. cough
Systematically identify the factor that gives positive evidence, and vary that to maximise the expected size of the impact (Thinking and Deciding, chapter 7)
Did messaging or posting to X different places work? Try 2X, 5X, etc. 1 to N after successfully going 0 to 1.
Actively look for disconfirming evidence when you’ve found an approach that seems to work. (Thinking and Deciding, chapter 7, Principles, chapter 3).
Stating assumptions strongly and clearly so they are disconfirmable, then setting a Yoda Timer to seek counter-examples of the generalization.
Do humans actually need breaks from working, physiologically? How much of this is a cultural construct? And if it is, can those assumptions be changed? Could a person be trained to enjoyably have 100-hour workweeks? (Assume, if the book Deep Work is correct that you have at most 4 hours of highly productive work in a domain, that my putative powerhuman is working on 2-4 different skill domains that synergize.)
I never have a productive six-hour unbroken stretch of work, but my partner will occasionally have 6-hour bursts of very productive coding where he stays in the zone and doesn’t notice time passing. He basically looks up and realizes it’s night and everyone else had dinner hours ago. But the rest of the time he works normal hours with a more standard-to-loose level of concentration.
The idea of “work” is a cultural construct. If you want to move past the cultural connotations you would need to define what you are talking about.
It’s worth noting that Deep Work talks about the ability of humans to perform deep work. A lot of useful work isn’t deep work in the sense Cal Newport uses the term; if I remember right, he sees large parts of management as not being Deep Work.
We do know that some people suffer burnout from work stress. It seems that the higher workload in Japan results in higher burnout rates than in countries with lower workloads.
Beware over-generalization and https://wiki.lesswrong.com/wiki/Typical_mind_fallacy. There’s a LOT of variation in human capabilities and preferences (including preferences about productivity vs rest). Some people do have 100-hour workweeks (I did for a while, when I was self-employed).
Try it, see how it works for you. If you’re in a position of leadership over others, give them room to find what works best for them.
Physiologically, the body can keep going for a long time if it is healthy and well maintained.
The body needs good nutrition (“real food”, adequate water), physical activity to maintain function and enough sleep.
Meet these conditions and your workers should be able to keep on going, but what’s your definition of “work”?
Agrarian-based work (for some) is already every day, all year. As many hours as can be worked, are worked.
Anyone who enjoys their work, who has a drive to do it, can manage 15 hours a day.
Sitting at a desk, staring at a screen, drinking a lot of coffee and eating processed food … not so good physiologically.
I am finding the phrasing “could a person be trained” is a little concerning… Who’s asking?!
An ambitious junior vice-president in an ethically dubious multinational corporation?
A pragmatic intergalactic manager wondering whether Earthlings have any use beyond fodder?
“Trained to enjoy” - I’d probably research altering the human brain to achieve that. Possibly fry a couple of appropriate synapses or find the right combination of chemicals.
Unless you start the ‘training’ at an early age—if you don’t know any different, then a 100-hour workweek is just how it is.
We live in a world with large incentives to teach yourself to do something like this, so either it is too hard for a single person to come up with on their own or it is possible to find people that have done it.
Some military studies might fit what you’re looking for.
There’s a higher-tier incentive point, where a) upper management and b) independent artists/thinkers/etc. want to get more productive work out of people or themselves. The decision of whether to pay someone by the hour is partly about what you think will produce more output (where paying by the hour might be bad because it leads people to be stingy with their time, when what they need is open space to think).
But Google still puts a lot of optimization into providing lunch, exercise, and campuses that cause people to incidentally bump into each other and have conversations, as a way to squeeze extra value out of people in a given day and over the course of a given year. (And company management generally prefers legibility where possible, so if it were possible to get the benefits in a more measurable and compensate-able way, they’d have tried to do so.)
Not necessarily. Not every productive hour is made equal. If you trade part of your creativity for more productive hours, then it might not be a worthwhile trade.
This might seem like a nitpick but it matters. The idea of chasing productive hours leads to bad ideas like the Uberman sleep schedule that do sound seductive to rationalists.
It’s a reasoning error to equate “Is X possible to do” with “Is X possible to do without paying any price”.
The implication I meant was “if it’s possible to keep working hours of the same productivity.” If you lose creativity and your profession is creative, the hours are no longer productive.
I’m specifically arguing the original point: “there are huge incentives to try to be more productive.” If, as Dony was originally asking, it were possible to just get into a mental state where you could work productively (including creatively) indefinitely, people would have found it.
A normal programmer is working productively in the sense most people see “working productively”. The proverbial 10x programmer on the other hand is much more productive even with the same number of productive working hours.
The thesis of Deep Work is that for knowledge workers there’s a much higher payoff to increasing the quality of your work than the quantity.
One of the assumptions you seem to be making is that time not spent working is not productively used. Creative ideas often come when there’s a bit of distance from the work, such as while showering. Working 100 hours per week means that this distance is never really achieved, and the benefits that come when your brain can process the problem in the background, while your conscious mind doesn’t interfere, don’t materialize.
I think I remember from a Y Combinator source that they would tell founders who tried to work 100 hours that they needed to get better at prioritizing, rather than using that much work as the solution to their challenges.
It’s easier to discover that you are working at the wrong thing when you have breaks in between that give you distance that allows reflection.
I think you think I’m making assumptions I’m not. I agree with all these points – this is why the world looks the way it does. I’m saying, if there weren’t the sorts of limits that you’re describing, we’d observe people working more, more often.
If, as Dony was originally asking, it were possible to just get into a mental state where you could work productively (including creatively) indefinitely, people would have found it.
Perhaps not indefinitely, but I do think there are people like this already? Some people are much more productive than others, even at similar intelligence levels. The simplest explanation is that these people have discovered a way to be productive for many hours in a day.
Personally, I know it’s at least possible to be productive for a long time (say 10 hours with a few breaks). I also think professional gamers are typically productive for this much most days.
I think the main issue is that it’s difficult to transfer insights and motivation to other people.
The short answer is “it turns out making use of an assistant is a surprisingly high-skill task, which requires a fair amount of initial investment to understand which sort of things are easy to outsource and which are not, and how to effectively outsource them.”
“You’ve never experienced bliss, and so you’re frantically trying to patch everything up and pin it all together and screw the universe up so that it’s fixed.”—Alan Watts
You would probably break away from some, connect with some new ones, and reconnect with some that you lost in the past.
Nod.
Some relevant previous thoughts over at Strategies for Personal Growth.
Won’t somebody think of the grad students!
See: The One Thing
Somewhat related: In which ways have you self-improved that made you feel bad for not having done it earlier?
I doubt the premise of “the one thing” book. Just looking at their example—Bill Gates—if he’d only have one skill, the computer programming, he would never get rich. (The actual developers of MS DOS did not get nearly as rich as Bill Gates.) Instead it was a combination of understanding software, being at the right moment at the right place with the right connections, abusing the monopoly and winning the legal battles, etc. So at the very least, it was software development skills and the business skills; the latter much more important than the former.
(To see a real software development demigod, look at Donald Knuth. Famous among the people who care about the craft, but nowhere as rich or generally famous as Bill Gates.)
I would expect similar stories to be mostly post-hoc fairy tales. You do dozen things; you succeed or get lucky at one and fail at the rest; you get famous for the one thing and everything else is forgotten; a few years later self-improvement gurus write books using you as an example of a laser-sharp focus and whatever is their current hypothesis of the magic that creates success. “Just choose one thing, and if you make your choice well, you can ignore everything else and your life will still become a success” is wishful thinking.
Some people get successful by iteration. They do X, and fail. Then they switch to Y, which has enough similarity with X that they have a comparative advantage over people who do Y from scratch, but they fail again. Then they switch to Z, which again has some similarity with Y… and finally they succeed. The switch to Y may happen after decades of trying something else.
Some people get successful by following their dream for decades, but it takes a long time until that dream starts making a profit (some artists only get worldwide recognition after they die), so they need a day job. They probably also need some skills to do that day job well.
To answer your question directly, recent useful habits are exercising and cooking.
(I also exercised before, but that was whatever random thing came to mind at the moment, e.g. only push-ups; the recent habit is a sequence of pull-ups, one-legged squats, push-ups, and leg raises. I also cooked before, but I recently switched to mostly vegetarian meals and found a subset that my family is happy to eat. I also cook more frequently and remember some of the frequent recipes, so at the shop I can spontaneously decide what to cook today and buy the ingredients on the spot, and I can easily multi-task while cooking.)
My next decade will mostly be focused on teaching habits to my kids, because what they do also has a big impact on my daily life. The less they need me to micromanage them, the more time I have for everything else.
When I notice something that’s in the way of achieving my goals, I look up ways other people have solved it.
Oftentimes using the 3 books technique: https://www.lesswrong.com/posts/oPEWyxJjRo4oKHzMu/the-3-books-technique-for-learning-a-new-skilll
For the next ten years:
Something about noticing trauma and healing with love.
How might a person develop INCREDIBLY low time preference? (They value their future selves in decades to a century nearly as much as they value their current selves?)
Who are people who have this, or have acquired this, and how did they do it?
Do these concepts make sense or might they be misunderstanding something? Tabooing/decomposing them, what is happening cognitively, experientially, when a human mind does this thing?
What would a literature review say?
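As a concrete toy model (my own illustration, not something the question presupposes): economists often formalize time preference as exponential discounting, where a future self t years away gets weight d^t for some annual discount factor d. “Incredibly low time preference” then corresponds to d extremely close to 1, which makes the century-scale difference vivid:

```python
# Toy model only: exponential discounting as one formalization of "time preference".
# A person with annual discount factor d weights their self t years out at d**t
# relative to their present self.

def discounted_value(d: float, years: int) -> float:
    """Relative weight placed on a future self `years` away."""
    return d ** years

for d in (0.90, 0.99, 0.999):
    print(f"d={d}: 10y={discounted_value(d, 10):.3f}, "
          f"50y={discounted_value(d, 50):.3f}, "
          f"100y={discounted_value(d, 100):.5f}")
```

Even d = 0.99 leaves your 100-years-out self worth only about a third of your present self, while d = 0.999 keeps it above 90%, so the question is really asking how a mind could sustain a discount factor that close to 1 (and whether human discounting, which empirically looks more hyperbolic than exponential, can be reshaped that way at all).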
You might be interested in Tad James’ work on Timeline Therapy. It’s not exactly what I would call epistemically rigorous, but I don’t know any other source that has done as much in-depth modelling of how people represent time, and work on how to change people’s representations of time.
I wrote this before it was cool!
https://twitter.com/DonyChristie/status/1372299482235277320
I am very interested in practicing steelmanning/Ideological Turing Test with people of any skill level. I have only done it once conversationally and it felt great. I’m sure we can find things to disagree about. You can book a call here.
https://commoncog.com/blog/chinese-businessmen-superstition-doesnt-count/
Wearing a mask in a pandemic. Not putting ALL of your money on a roulette wheel. Not balancing on a tightrope without a net between two skyscrapers unless you have extensive training. Not posting about controversial things without much upside. Not posting photos of meat you cooked to Instagram if you want to have good acclaim in 200 years when eating meat is outlawed. Not building AI because it’s cool. Falling in love with people who don’t reciprocate.
The unknown unknown risk that hasn’t been considered yet. Not having enough slack dedicated to detecting this.
If you’ve gone on OkCupid for the past 7 years and still haven’t got a date from it, maybe try a different strategy. If messaging potential tenants on a 3rd-party site doesn’t work, try texting them. If asking questions on Yahoo Answers doesn’t get good answers, try a different site.
Talk to 10x the number of people; message using templates and/or simple one-liners. Invest with Other People’s Money if asymmetric upside. Write something for 5 minutes using Most Dangerous Writing App then post to 5 subreddits. Posting ideas on Twitter instead of Facebook, rationality content on LessWrong Shortform instead of longform. Yoda Timers. If running for the purpose of a runner’s high mood boost, try running 5 times that day as fast as possible. Optimizing standard processes for speed.
Posting content to 10x the people 10x faster generally has huge upside (YMMV). Programming something useful, open-sourcing it, and sharing it.
Roam is good for this, perhaps SuperMemo. Posting things to social media and coming up with examples of the rules is also a good way of learning content. cough
Did messaging or posting to X different places work? Try 2X, 5X, etc. 1 to N after successfully going 0 to 1.
Stating assumptions strongly and clearly so they are disconfirmable, then setting a Yoda Timer to seek counter-examples of the generalization.
Do humans actually need breaks from working, physiologically? How much of this is a cultural construct? And if it is one, can those assumptions be changed? Could a person be trained to enjoyably have 100-hour workweeks? (Assume, if the book Deep Work is correct that you get at most 4 hours of highly productive work in a single domain, that my putative powerhuman is working on 2–4 different skill domains that synergize.)
I looked into this and found Deep Work’s backing disappointing (although 4 hours isn’t disproven either).
I never have a productive six-hour unbroken stretch of work, but my partner will occasionally have 6-hour bursts of very productive coding where he stays in the zone and doesn’t notice time passing. He basically looks up and realizes it’s night and everyone else had dinner hours ago. But the rest of the time he works normal hours with a more standard-to-loose level of concentration.
Unclear, but see Zvi’s Slack sequence for some good reasons why we should act as though we need breaks, even if we technically don’t.
The idea of “work” is a cultural construct. If you want to move past the cultural connotations you would need to define what you are talking about.
It’s worth noting that Deep Work talks about the ability of humans to perform deep work. A lot of useful work isn’t deep work in the sense Cal Newport uses the term. If I remember right, he sees large parts of management as not being Deep Work.
We do know that some people suffer burnout from work stress. It seems that the higher workload in Japan results in higher burnout rates than in countries with lower workloads.
Beware over-generalization and https://wiki.lesswrong.com/wiki/Typical_mind_fallacy. There’s a LOT of variation in human capabilities and preferences (including preferences about productivity vs rest). Some people do have 100-hour workweeks (I did for a while, when I was self-employed).
Try it, see how it works for you. If you’re in a position of leadership over others, give them room to find what works best for them.
Physiologically, the body can keep going for a long time if it is healthy and well maintained.
The body needs good nutrition (“real food”, adequate water), physical activity to maintain function and enough sleep.
Meet these conditions and your workers should be able to keep on going, but what’s your definition of “work”?
Agrarian work (for some) already runs every day, all year round. As many hours as can be worked are worked.
Anyone who enjoys their work, who has a drive to do it, can manage 15 hours a day.
Sitting at a desk, staring at a screen, drinking a lot of coffee and eating processed food … not so good physiologically.
I am finding the phrasing “could a person be trained” is a little concerning… Who’s asking?!
An ambitious junior vice-president in an ethically dubious multinational corporation?
A pragmatic intergalactic manager wondering whether Earthlings have any use beyond fodder?
“Trained to enjoy” - I’d probably research altering the human brain to achieve that. Possibly fry a couple of appropriate synapses or find the right combination of chemicals.
Unless you start the ‘training’ at an early age—if you don’t know any different, then a 100-hour work week is just how it is.
We live in a world with large incentives to teach yourself to do something like this, so either it is too hard for a single person to come up with on their own or it is possible to find people that have done it.
Some military studies might fit what you’re looking for.
Where do you see the huge incentives? Most high-status work isn’t paid by the hour.
There’s a higher tier incentive point, where a) upper management, and b) independent artists/thinkers/etc want to get more productive work out of people or themselves. The decision of whether to pay someone by the hour is partly about what you think will produce more output (where paying by the hour might be bad because it leads people to be stingy with their time, when what they need is open space to think)
But Google still puts a lot of optimization into providing lunch, exercise, and campuses that cause people to incidentally bump into each other and have conversations, as a way to squeeze extra value out of people in a given day and over the course of a given year. (And company management generally prefers legibility where possible, so if it were possible to get the benefits in a more measurable and compensate-able way, they’d have tried to do so.)
Thinkers and artists care about intellectual output. They don’t care about the numbers of hours they work.
Google purposefully doesn’t let people sleep at their office which is a policy that prevents people from working 100 hours per week.
Agreed. But if there was a way to work more productive hours, they would.
Not necessarily. Not every productive hour is made equal. If you trade away part of your creativity for more productive hours, it might not be a worthwhile trade.
This might seem like a nitpick but it matters. The idea of chasing productive hours leads to bad ideas like the Uberman sleep schedule that do sound seductive to rationalists.
It’s a reasoning error to equate “Is X possible to do?” with “Is X possible to do without paying any price?”
The implication I meant was “if it’s possible to keep adding working hours at the same productivity.” If you lose creativity and your profession is creative, the hours are no longer productive.
I’m specifically arguing the original point: “there are huge incentives to try to be more productive.” If, as Dony was originally asking, it were possible to just get into a mental state where you could work productively (including creatively) indefinitely, people would have found it.
A normal programmer is working productively in the sense most people see “working productively”. The proverbial 10x programmer on the other hand is much more productive even with the same number of productive working hours.
The thesis of Deep Work is that, for knowledge workers, there’s a much higher payoff to increasing the quality of your work than the quantity.
One of the assumptions you seem to be making is that time not spent working is not productively used. Creative ideas often come when there’s a bit of distance from the work, like while showering. Working 100 hours per week means that this distance is never really achieved, and the benefits that arise when your brain can process the problem in the background, without your conscious mind interfering, don’t materialize.
I think I remember from a YCombinator source that they would tell founders who tried to work 100 hours that they needed to get better at prioritizing, rather than using that much work as the solution to their challenges.
It’s easier to discover that you are working at the wrong thing when you have breaks in between that give you distance that allows reflection.
I think you think I’m making assumptions I’m not. I agree with all these points – this is why the world looks the way it does. I’m saying, if there weren’t the sorts of limits that you’re describing, we’d observe people working more, more often.
Perhaps not indefinitely, but I do think there are people like this already? There are some people who are much more productive than others, even at similar intelligence levels. The simplest explanation is that these people have simply discovered a way to be productive for many hours in a day.
Personally, I know it’s at least possible to be productive for a long time (say 10 hours with a few breaks). I also think professional gamers are typically productive for this much most days.
I think the main issue is that it’s difficult to transfer insights and motivation to other people.
Has the “AI Safety”-washing grift accelerated? https://www.lakera.ai/momentum
Are you referring to “AI safety” consultants who will certify the safety of their clients’ AI projects?
Math camps are not sufficient to solve civilization’s problems.
Warning: TVTropes links
When should I outsource something I’m bad at vs leveling up at that skill?
How would you instruct a virtual assistant to help you with scheduling your day/week/etc?
The short answer is “it turns out making use of an assistant is a surprisingly high-skill task, which requires a fair amount of initial investment to understand which sort of things are easy to outsource and which are not, and how to effectively outsource them.”
Sure thing. What would you recommend for learning management?
(I count that as an answer to my other recent question too.)
“You’ve never experienced bliss, and so you’re frantically trying to patch everything up and pin it all together and screw the universe up so that it’s fixed.”—Alan Watts