I listened to David Goggins’ account of Navy SEAL training last year. They encourage you to push yourself so hard that you are at genuine risk of death or permanent disability. The first two times Goggins tried to get through, he failed out because of injuries, even though he was willing to — and did — run many miles on literally broken legs. He only made it through the third time because hell week got cut short after someone in his cohort DIED (from participating in some kind of swimming exercise while very sick with pneumonia).
I actually found the book incredibly inspiring, though it did not make me think anyone should model themselves after the Navy SEALs in particular. I also don’t think someone should run 100 miles in 24 hours with zero training and continue despite the fact that their legs are breaking and they’re shitting and pissing blood while they run, which is another thing that Goggins did.
One training exercise in the book that seemed more reasonable to me (more like an exercise and less like abject torture) was an orienteering-type thing (for, I think, the Army Rangers?), where the terrain was treacherous and unfamiliar and the weather dangerously cold at night. I think it’s a good test of rationality to put yourself in a genuinely high-stakes situation like that — as long as one of the choices you’re allowed to make is to call for help if you are genuinely afraid for your life. That was an option in the Rangers’ orienteering challenge; my point is that the thing that’s bad about SEAL hell week is that you’re considered a pussy if you quit, even if it’s out of genuine and reasonable fear for your life.
The book overall is about the idea that your limits are fake, and humans can accomplish things that seem like they should be physically impossible as long as they just don’t give up. I think that’s a concept we could work with.
I think there are quite a few rationalists who challenge themselves to do fairly hard things, like founding a successful startup, putting together a large conference on short notice at the age of 18, or publishing a good post on rationality every day for a month, things kind of like that. I think I’ve challenged myself a lot more than I would have if I weren’t in the rationalist community, but I don’t think I’ve ever tried to do something that I felt was impossible. (I think a precious few rationalists have faced the impossible — probably Holden and Eliezer, to name any at all — but they’re very much the exception rather than the rule.)
Here are some things that feel impossible:
Write something as groundbreaking as the Sequences, starting in one week (that’s your planning period) and posting every day for at least a year
Cause the public collective consciousness and ~all significant intellectuals in the US to take x-risk (and especially AI x-risk) seriously, within the year
Note that I very much do not suggest that people throw themselves at this task!
Make a novel discovery in particle physics (or a similar well-established field that you’ve never studied before), within six months
Without piggybacking on any existing space exploration project, put a spacecraft of your own design / owned by you on the moon within five years
Found a new country that gets recognized by the UN
And here are some things where I can see a path to accomplishing them, but where that path feels incredibly hard and scary — these examples are specific to me:
Become fluent in Mandarin, both speaking/listening AND reading/writing, in the next three months
I have a lifetime of failure to learn Mandarin behind me, including one academic year when I really, actually tried; also, Mandarin is just really fucking hard
Run a marathon within the next year
I have a chronic leg injury that makes running essentially impossible; that feels insurmountable, but in reality it probably is not
Make a million dollars in the next six months just via investing/betting
I am a very risk-averse person and was raised to fear the stock market
Permanently fix my depression and anxiety
It’s probably not impossible but jeeeeeeeeeeeeeeezzzz
Found and run a company, like, one with actual employees and investors and a goal of growth (not just a one-person LLC, that’s cheating)
This just sounds awful in every way; I hate dealing with people and money and feel super unqualified for all of this
Again, these will be different for different people. I think Eliezer’s quest to lose weight qualifies somewhere around here. I think things in this class are probably better candidates for serious rationality training exercises than the first list, though maybe that’s wrong.
Anyway the goal is not to teach object-level skills, but to cause people to change their outlook on tasks that seem impossible. I think that’s one really important skill for rationalists/EAs to have, though not the only important skill. In any given quest you will probably learn additional useful object-level skills.
So idk those are some thoughts on one aspect of the thing. Didn’t properly feel like an answer so here it is as a comment instead.
Become fluent in Mandarin, both speaking/listening AND reading/writing, in the next three months
I have a lifetime of failure to learn Mandarin behind me, including one academic year when I really, actually tried; also, Mandarin is just really fucking hard
I wrote software that’s designed for this specific application. It’s basically homebrew Anki with the brakes removed, hooked up to a tokenizer, a dictionary, machine translation, and a text-to-speech API. The system is unpolished, but it is in a usable state. (I use it every day.) The whole thing is a web app, so it requires no technical knowledge to use. I’m looking for beta users in case anyone wants to try something “incredibly hard”. (There’s a rough sketch of the general shape of the pipeline after this exchange.)
I’m very interested, but only if I don’t have to pay for it, since I have literally no money. I’ve been thinking of learning Mandarin.
Specifically for Mandarin, or other languages as well?
Specifically for Mandarin, but I can add additional major languages just by writing a tokenizer for them. I’m working on a new system built around GPT-3 that I hope to launch August 14th. The new system should be able to support any major language right out of the box. (I don’t know if I can meet this ship date. The schedule is extremely ambitious. Moreover, OpenAI might reject the use case on the grounds that it is too free-form.) It’ll also be orders of magnitude more expensive to use. Right now, I’m estimating $6 per hour.
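For concreteness, here is a minimal sketch of the general kind of pipeline described above: tokenize a sentence, look words up in a dictionary, schedule the unknown ones for review, and synthesize audio. This is not the actual system; jieba, gTTS, the toy dictionary, and the crude doubling scheduler are all stand-ins, and a real version would also plug in machine translation and a proper spaced-repetition algorithm.

```python
# Sketch of a sentence-mining loop for Mandarin (stand-in libraries, not the real system).
import datetime
import jieba                 # pip install jieba  (tokenizer stand-in)
from gtts import gTTS        # pip install gTTS   (text-to-speech stand-in)

# Toy dictionary; a real system would load something like CC-CEDICT or call a dictionary API.
DICTIONARY = {
    "我": "I / me",
    "想": "to want / to think",
    "学": "to learn / to study",
    "中文": "Chinese (language)",
}

class Card:
    """One review card per word, with a crude spaced-repetition interval."""
    def __init__(self, word, gloss):
        self.word = word
        self.gloss = gloss
        self.interval_days = 1
        self.due = datetime.date.today()

    def review(self, remembered: bool):
        # Bare-bones scheduling: double the interval on success, reset on failure.
        self.interval_days = self.interval_days * 2 if remembered else 1
        self.due = datetime.date.today() + datetime.timedelta(days=self.interval_days)

def mine_sentence(sentence: str, deck: dict) -> dict:
    """Tokenize a sentence and add any new words to the deck."""
    for word in jieba.cut(sentence):
        if word.strip() and word not in deck:
            gloss = DICTIONARY.get(word, "(unknown; send to machine translation)")
            deck[word] = Card(word, gloss)
    return deck

if __name__ == "__main__":
    deck = {}
    mine_sentence("我想学中文", deck)
    for word, card in deck.items():
        print(word, "->", card.gloss)
    # Generate audio for the whole sentence so it can also be drilled by ear.
    gTTS("我想学中文", lang="zh-CN").save("sentence.mp3")
```

The GPT-3-based version mentioned above would presumably replace the per-language tokenizer and dictionary lookups with model calls, which is roughly why the cost would be per hour of use rather than effectively free.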
Found a new country that gets recognized by the UN
Given currently available crypto technology, I have the impression that there’s a window right now for funding states, but I’m uncertain whether talking about the how-to is a good idea, given that it possibly gives AGIs more power.