Once the explanation text exists, linking to the appropriate section of it (which in turn would link out to primary sources) would probably be better than linking directly to primary sources.
Compressing to “rationality” is reasonable, though most readers would not understand it at a glance. If you’re trying to keep it very streamlined, just having this as a lot of pointers makes sense, though perhaps alongside rationality it’d be good to have a pointer more clearly directed at “make wanting to fix the future a thing which is widely accepted”, rather than rationality’s usual meaning of being effective. I’d also think it more appropriate for the A3 stream than A2, for what I have in mind at least.
I’d think creating world saviors from scratch would not be a viable option on some AI timelines. But getting good at picking up promising people in (or leaving) uni who have the right ethical streak, and putting them in a network full of the memes of EA/x-risk reduction, could plausibly give a turnaround from “person who is smart and will probably get some good job in some mildly evil corporation” to “person dedicated to trying to fix major problems / person in an earning-to-give career to fund interventions / person working towards top jobs to gain leverage to fix things from the inside” on the order of months, with an acceptable rate of success. Even a few percent changing life trajectory would be more than enough to pay back the investment of running that network in terms of x-risk reduction.
Perhaps classifying things in terms of what should be the focus right now versus things that need more steps before they become viable projects would be more useful than attempting to give dates at all? Vague dates are better, but thinking about it more, I’m not sure even wide ranges really solve the problem: our ability to forecast several very important things is highly limited. I’m not sure about a good set of labels for this, but perhaps something like:
Immediate (things which we could be, or already are, working on right now)
Near future (single-digit years? things which need some foundations but are within sight)
Mid-term (unsure when we’ll get there, and it may vary significantly from topic to topic; we can get a rough idea of what will likely need doing, but we can’t get into the details until previous layers of tech/organization are ready)
Distant (much harder to forecast; major goals and projects which need large, unpredictable tech advances and/or significant social changes before they’re accessible)
Outcomes (ways things could end up when one or more of the previous projects goes through)
Again, I’m not sure about these exact words, but using labels which point more to the number of steps involved and the difficulty of forecasting seems like a thing to explore.
And thank you. I tend to take downvotes as very strong negative reinforcement; it helps that you find my post somewhat useful.
Thank you for the inspiring comment. Yes, anonymous downvoting makes me feel as if I have a secret enemy in the woods(( The idea of creating “world saviours” from bright students is more realistic, and effective altruists and LW have done a lot in this direction. Rationality also should be elaborated on, and the suggestion about date classification is inspiring.
I’m very, very suspicious of the idea of creating “world saviours”. In the Abrahamic tradition, world saviours are expected to sweep the Earth clean of bad men with fire and sword. Yes, nice things are promised after that :-/