Consider more carefully your ranking of preferences, and expand your horizons quite a bit. There’re lots of ways to improve the predicted future state of humanity that are less direct, but possibly more effective, than this particular topic.
I care about the current and future state of humanity
That’s sweet of you. I’m glad.
so I think it’s good to work on existential or global catastrophic risk
That’s a pretty big jump. I’ll grant that human existential risk is important, but why is working directly on it your best contribution? Perhaps you’d do a lot more good with a slight reduction in shipping costs or tiny improvements in the safety or enjoyment of some consumer product. In the likely case that your marginal contribution to x-risk doesn’t save the world, a small improvement for a large number of people does vastly more good.
Regardless of whether you focus on x-risk or something else valuable, the fact that you won’t consider leaving Kagoshima is an indication that you aren’t as fully committed as you claim. IMO, that’s ok: we all have personal desires that we put ahead of the rest of the world. But you should acknowledge it and include it in your calculations.
In the likely case that your marginal contribution to x-risk doesn’t save the world
So you think that other people could contribute much more to x-risk, so I should go into areas where I can have a lot of impact? Otherwise, if everyone says “I’ll only have a small impact on x-risk. I’ll do something else.”, nobody would work on x-risk. Are you trying to get a better justification for work on x-risk out of me? At the moment I only have this: x-risk is pretty important, because we don’t want to go extinct (I don’t want humanity to go extinct or into some worse state than today). Not many people are working on x-risk. Therefore I do work on x-risk, so that there are more people working on it. Now you will tell me that I should start using numbers.
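Purely for illustration, here is the kind of back-of-envelope sketch that “using numbers” might produce; every probability and impact figure below is an invented placeholder, not an estimate either of us actually endorses:

```python
# Back-of-envelope expected-value comparison; every number here is an
# invented placeholder, not an estimate anyone in this exchange endorses.

# Option A: work directly on x-risk mitigation.
p_marginal_success = 1e-8     # assumed chance my marginal contribution averts catastrophe
value_if_averted = 8e9 * 50   # assumed stake: ~8 billion people x ~50 QALYs each
ev_xrisk = p_marginal_success * value_if_averted

# Option B: a small, near-certain improvement for many people.
p_improvement = 0.9           # assumed chance the improvement ships and works
people_reached = 10_000_000   # assumed number of people affected
qaly_gain_each = 0.001        # assumed tiny per-person benefit, in QALYs
ev_near_term = p_improvement * people_reached * qaly_gain_each

print(f"EV of direct x-risk work:      {ev_xrisk:,.0f} QALYs")
print(f"EV of small broad improvement: {ev_near_term:,.0f} QALYs")
# Which option comes out ahead depends entirely on the inputs; the real
# disagreement is about those inputs, not about the arithmetic.
```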
the fact that you won’t consider leaving Kagoshima is an indication that you aren’t as fully committed as you claim
What did I claim about my degree of commitment? And yes, I know that I would be more effective at improving the state of humanity if I didn’t have certain preferences about family and such.
Anyway, thanks for pushing me towards quantitative reasoning.
So you think that other people could contribute much more to x-risk
“marginal” in that sentence was meant literally—the additional contribution to the cause that you’re considering. Actually, I think there’s not much room for anybody to contribute large amounts to x-risk mitigation. Most people (and since I know nothing of you, I put you in that class) will do more good for humanity by working at something that improves near-term situations than by working on theoretical and unlikely problems.
So you think there’s not much we can do about x-risk? What makes you think that? Or, alternatively, if you think that only a few people can do much good in x-risk mitigation, what properties enable them to do that?
Oh, and why do you consider AI safety a “theoretical [or] unlikely” problem?
I think that most individuals can’t do much more about x-risk as a full-time pursuit than they can as aware and interested civilians.
I also think that unfriendly AI Foom is a small part of the disaster space, compared to the current volume of unfriendly natural intelligence we face. An increase in the destructive power of small (or not-so-small) groups of humans seems 20-1000x more likely (and I generally lean toward the higher end of that range) to filter us than a single AI entity, or a small number of them, becoming powerful enough to do so.
So it would be better to work on computer security? Or on education, so that we raise fewer unfriendly natural intelligences?
Also, AI safety research benefits AI research in general, and AI research in general benefits humanity. Or are those, again, only marginal contributions?
Or on healthcare or architecture or garbage collection or any of the billion things humans do for each other.
Some thought to far-mode issues is worthwhile, and you might be able to contribute a bit as a funder or hobbyist, but for most people, including most rationalists, it shouldn’t be their primary drive.
Perhaps you would also do more good by working on a slight increase in shipping costs.
Quite. Whatever you consider an improvement to be. Just don’t completely discount small, likely improvements in favor of large (existential) unlikely ones.