Gosh, returning to jsteinhart’s comment, everything should add up to normality. If you feel that you’re being led by abstract reasoning in directions that consistently feel wrong, then there’s probably something wrong with the reasoning. My own interest in existential risk reduction is that when I experience a sublime moment, I want people to be able to have more of them for a long time. If all there were was a counterintuitive abstract argument, I would think about other things.
If you feel that you’re being led by abstract reasoning in directions that consistently feel wrong, then there’s probably something wrong with the reasoning.
Yup, my confidence in the reasoning here on LW, and in my own ability to judge it, is very low. The main reason for this is described in your post above: taken to its logical extreme, you end up doing seemingly crazy stuff like trying to stop people from creating baby universes rather than solving friendly AI.
I don’t know how to deal with this. Where do I draw the line? What are the upper and lower bounds? Are risks from AI above or below the threshold of uncertainty that I had better ignore, given my own uncertainty and the uncertainty in the meta-level reasoning involved?
I am too uneducated and probably not smart enough to figure this out, yet I face the problems that people much more educated and intelligent than I am have devised.
If a line of reasoning is leading you to do something crazy, then that line of reasoning is probably incorrect. I think that is where you should draw the line. If the reasoning is actually correct, then as you learn more, your intuitions will fall in line with it, and it will no longer seem crazy.
In this case, I think your intuition correctly diagnoses the conclusion as crazy. Well-educated or not, the fact that you can tell the difference speaks well of you, although I think you are causing yourself far too much anxiety by worrying about whether you should accept the conclusion after all. As I said, learning more will decrease the inferential distance you have to traverse in such arguments, and you will be better able to judge whether they are valid.
That being said, I still reject these sorts of existential risk arguments, mostly on intuition; I am also unwilling to do things with high probabilities of failure, no matter how good the outcome would be in the event of success.
ETA: To clarify, I think existential risk reduction is a worthwhile goal, but I am uncomfortable with arguments advocating specific ways to reduce risk that rely on very abstract or low-probability scenarios.
The main reason for this is described in your post above: taken to its logical extreme, you end up doing seemingly crazy stuff like trying to stop people from creating baby universes rather than solving friendly AI.
There are many arguments in this thread that this extreme isn’t even correct given the questionable premises; have you read them? Regardless, it really is important to be psychologically realistic, even if you feel you “should” be out there debating with AI researchers or something. Leading a psychologically healthy life makes it much less likely that you’ll have completely burnt yourself out ten years down the line, when things might be more important, and it also sends a good signal to other people that you can work towards bettering the world without being some seemingly religiously devout super nerd. One XiXiDu is good; two XiXiDus are a lot better, especially if they can cooperate, and especially if those two XiXiDus can convince more XiXiDus to be a little more reflective and a little less wasteful. Even if the singularity stuff ends up being total bullshit, or if something with more “should”-ness shows up, folks like you can always pivot and make the world a better place using some other strategy. That’s the benefit of keeping a healthy mind.
[Edit] I share your discomfort, but this is more a matter of the uncertainty intrinsic to the world we live in than a matter of education or intelligence. At some point a leap of faith is required.