Of course. Whenever someone says they want to do something impossibly hard, the proper response is to dismiss them. Either they agree with you, and you made the right call, or they disagree with you, and you’ve cemented their resolve.
But JoshuaZ is right that I think the wording is ridiculous. “Save the world” is nothing but applause lights. If it were “see a positive singularity” we would at least have gone from 0th order to 1st order. If it were “make life extension available sooner” we’d have progressed from 1st order to 2nd order. If it were “make lab-grown organs available sooner” we’d have progressed from 2nd order to 3rd order. If someone comes to me with the last desire, I’d tell them to move to Wake Forest and become friends with Anthony Atala. If someone comes to me with the first desire, I pat them on the head.
Even Norman Borlaug didn’t “save the world.” He might have personally extended lifespans by billions of human-years, but that’s not close, even on a log scale. And if you want to be a second Norman Borlaug, trying to save the world seems like a poor way to approach that goal, because it focuses you on the wrong questions. Borlaug wanted to improve the world: to make hungry people full by creating wheat varieties that were disease-resistant. He had the altruistic impulse, but he was facing a problem worked out to third order. The altruistic impulse is a great thing, but if you don’t have a third order problem yet, keep looking.
And when you ask for career advice, where you are in that process is relevant information. If you’re at 2nd order, your interests will already be narrow enough that most advice will be inapplicable to your situation. The 2nd order person in the particular track I’ve outlined already knows that they will strongly benefit from getting a medical degree in America, England, or China (there may be a few other countries on this list; this isn’t my specialty), or that they should earn a bunch of money while getting familiar with efforts already underway in those countries. If you’re at 3rd order, you’re already at the point where there are a handful of appropriate opportunities, and you’re better off looking for those opportunities specifically than you are getting generic advice.
If you’re at 0th order? Then you need to spend time cultivating your interests. If you go to Silicon Valley and your only interest is “I want to get rich!” you won’t get very far. While that may be the underlying interest of everyone there, the fact that it’s so common means that it conveys very little information. The way for an individual to tackle a 0th order problem is to find a 3rd order problem, uncover some 4th order problems while investigating that problem, and solve those.

EDIT: I fleshed this out further in this post.
But JoshuaZ is right that I think the wording is ridiculous. “Save the world” is nothing but applause lights.
You’re reading way too much into that single line. I wanted to express the sentiment of “I want to be as effective as possible in doing good”, and there was a recent post covering that topic which happened to be named “how to save the world”, so I linked to it. If that post hadn’t been there, I might have said something like “I want to do something meaningful with my life”. I was also assuming “saving the world” and other similar expressions to be standard LW jargon for “doing as much good as possible”.
As for my actual goals… Ideally I’d like to help avert a negative singularity, though since I don’t have very high hopes of that actually being possible, I also give the goals of “just have fun” and “help people in the short term” considerable weight, and am undecided as to how much effort I’ll end up spending explicitly on singularity matters. But to the degree that I do end up trying to help with the singularity, the three main approaches I’ve been playing with are:
Just make money and donate that to SIAI.
Help influence academia to become more aware of these issues.
Become well-known enough (via e.g. writing, politics) among normal people that I can help spread singularity-related ideas and hopefully get more people to take them seriously.
These are obviously not mutually exclusive, and indeed, one of the reasons I’m playing around with the idea of “freelance academia” is that it allows me to do some of the academic work without the commitment that e.g. getting a PhD would involve (as I’m not yet sure whether the academic approach is the one that I’d find the most rewarding). All three also have, to varying extents, an intrinsic appeal beyond just the singularity aspect: I wouldn’t mind having a bit more money, intellectual work is rewarding by itself, and so is writing and having a lot of people care about your opinions.
As for the details of the academic career path, the “help avoid a negative singularity” aspect of it currently mainly involves helping write up the ideas about the singularity into concise, well-sourced papers that people can be pointed to. (Here is one example of such work; an improved, full-length version of that paper is in the works.) Beyond that, maybe with time I can come up with original insights of my own to contribute to the field, as well as build a reputation and lend those singularity-related ideas more credibility by producing well-regarded papers in non-singularity-related fields that I happen to be interested in.
I believe that Vaniver is making a distinction between “improve” and “save” in saying that any given individual is unlikely to have a large-scale enough impact to be described as “saving” the world, but that many people can improve the world. This point may have some validity, although Norman Borlaug may be a relevant counterexample to show that it isn’t completely impossible.
That would only be relevant if Kaj had said “I expect to save the world” instead of “Ideally, I’d like to save the world”. I read the latter as specifying something like “all existential risks are averted and the world gets much more awesome” as an optimization target, not as something that he wants to (let alone expects to be able to) do completely and singlehandedly. And as an optimization target, it makes good sense. Why aim for imperfection? The target is the measure of utility, not a proposed action or plan on its own. (Possibly relevant: Trying to Try.)
(One thing I see about that paragraph that could be legitimately disputed is the jump from specifying the optimization target to “One way to do that involves contributing academic research, which raises the question of what’s the most effective way of doing that” without establishing that academic research is itself the best way (or at least a good way) for a very smart person to optimize the aforementioned goal. That itself would be an interesting discussion, but I think in this post it is taken as an assumption. (See also this comment.))
I read the latter as specifying something like “all existential risks are averted and the world gets much more awesome” as an optimization target, not as something that he wants to (let alone expects to be able to) do completely and singlehandedly.
This comment strikes me as rather confrontational, and also as offering advice based on a misguided understanding of my motives.
I have very little clue of what you’re trying to say.
There is no “singlehandedly”; individual decisions control the actions of many people.
Indeed it is.
Agreed that it is too confrontational.