Comprehensive; I think it has the makings of a good resource, though it needs some polish. I'd imagine this would be much more useful to someone new to the ideas presented if most bullet points linked out to papers/pages for further reading.
One thing I'd like to see added is spreading the memes of reason- and evidence-based consequentialist decision making (particularly including large-scale and long-term consequences) at all levels. It may be entirely accepted here, but the vast majority of humans don't actually think that way. It's something of a prerequisite for getting much momentum behind the other, more direct, goals you've laid out.
Make it less and less acceptable to be partisan/tribal in a Moloch-fueling way in the public sphere (starting with our corner of it, spreading opportunistically).
Grow EA, so there’s funding for high-impact causes like some of the projects listed, and caring about solving problems is normalized.
Pick up potentially high-impact people, train them, and give them a support network of people with an explicit goal of fixing the world, as CFAR does, to create the kind of people who can staff the projects.
In a few places, particularly in A1, you drift into general "things that would be good/cool" rather than staying focused on things applicable to countering extinction risk. Maybe there is a link that I'm missing, but other than bringing in more resources, I'm not sure what risk "Planetary mining", for example, helps counter.
I'd advise against giving dates. AI timelines in particular could plausibly be much quicker or much slower than your suggestions, and that would have massive knock-on effects. False confidence on specifics is not a good impression to give; maybe generalize them a bit?
“Negotiation with the simulators or prey for help”
pray?
I am now working on a large explanation text, which will be 40-50 pages long and will include links. Maybe I will add the links inside the PDF.
I don't think I should go into all the details of decision theory and EA. I just put "rationality".
Picking potential world saviours, educating them, and providing them with all our support seems like a good idea, but we probably don't have time. I will think more about it.
Planetary mining was a recent addition, addressed to people who think that Peak Oil and Peak Everything are the main risk. Personally, I don't believe space mining would be useful without nanotech.
The point about dates is really important. Maybe I should use vaguer dates, like the beginning, middle, and second half of the 21st century? What other ways are there to say it more vaguely?
I upvoted your post, and in general I think that downvoting without explanation is not a good thing on LW.
“Pray” corrected.
Once the explanation text exists, linking to the appropriate section of it (which in turn would link out to primary sources) would probably be better than linking to primary sources directly.
Compressing this to "rationality" is reasonable, though most readers would not understand it at a glance. If you're trying to keep it very streamlined, having this as a set of pointers makes sense, though perhaps alongside rationality it would be good to have a pointer more clearly directed at "make wanting to fix the future something which is widely accepted", rather than rationality's usual meaning of being effective. I'd also think it more appropriate for the A3 stream than A2, for what I have in mind at least.
I'd think creating world saviors from scratch would not be a viable option under some AI timelines, but getting good at picking up promising people in or just leaving university who have the right ethical streak, and putting them in a network full of the memes of EA/x-risk reduction, could plausibly produce a turnaround from "person who is smart and will probably get a good job at some mildly evil corporation" to "person dedicated to trying to fix major problems / person in an earning-to-give career to fund interventions / person working towards top jobs to gain leverage to fix things from the inside" on the order of months, with an acceptable rate of success (even a few percent changing life trajectory would be more than enough to pay back the investment of running that network, in terms of x-risk reduction).
Perhaps classifying things in terms of what should be the focus right now versus things that need more steps before they become viable projects would be more useful than attempting to give dates at all? Vague dates are better, but thinking about it more, I'm not sure even wide ranges really solve the problem; our ability to forecast several very important things is highly limited. I'm not sure about a good set of labels for this, but perhaps something like:
Immediate (aka: things which we could/are just working on right now)
Near future (single digit years? things which need some foundations, but are within sight)
Mid-term (unsure when we’ll get there, may vary significantly from topic to topic, can get a rough idea of what will likely need doing but we can’t get into the details until previous layers of tech/organization are ready)
Distant (getting much harder to forecast, major goals and projects which need large unpredictable tech advances and/or significant social changes before they’re accessible)
Outcomes (ways things could end up, when one or more of the previous projects goes through)
Again, I'm not sure about these words, but using labels which point more to the number of steps and the difficulty of forecasting seems like a thing worth exploring.
And thank you. I tend to take downvotes as very strong negative reinforcement; it helps that you find my post somewhat useful.
Thank you for the inspiring comment. Yes, anonymous downvoting makes me feel as if I have a secret enemy in the woods(( The idea of creating "world saviours" from bright students is more realistic, and effective altruists and LW have done a lot in this direction. Rationality should also be elaborated, and the suggestion about classifying dates is inspiring.
I'm very, very suspicious of the idea of creating "world saviours". In the Abrahamic tradition, world saviours are expected to sweep the Earth clean of bad men with fire and sword. Yes, nice things are promised after that :-/
I'm curious why this was downvoted.
One person downvoted it, which means it could be anything from “I don’t like spelling corrections” to “I disagree about not giving dates”.
In general, if only one person downvotes, it is best not to ask. I don’t see anything worth downvoting in your post myself, although I wouldn’t upvote it, because it reads to me more like an attempt at compressing many applause lights into one comment without paying attention to any one than an attempt at genuine suggestions for improvement. (It’s a little -too- Less Wrongian.)