My concern with donating to SIAI in particular has been that I don’t have a clear understanding of how to measure potential success. I agree with the general cause, and if my dollars can realistically and provably help create Friendly AI, that would be great, but it’s not clear to me that Friendly AI is actually possible, or that the SIAI project is the best way to go about it. And frankly, the math and other facts involved are beyond my ability to really understand.
This concern was cemented when I read the recent interview between SIAI (Eliezer in particular? Not sure) and Holden, in which Holden asked, “What do you have to say to people who are deciding whether to donate? How much effect would their dollars actually have, and isn’t this just Pascal’s Mugging?” The answer was essentially, “Right now we aren’t necessarily looking to expand, and it doesn’t necessarily make sense to donate here if you aren’t involved already. Others may invoke Pascal’s Mugging on our behalf, but we don’t advocate that argument.”
I’m also biased in favor of human-centric (as opposed to trans/posthuman) solutions to current global problems. This is mostly because, well, I’m human and I like it that way. I feel like if humanity can’t solve its own problems with its current level of intelligence, it’s because we were too lazy, not because we weren’t smart enough. This may change over the next decades as transhuman ideals become more concrete and less weird and scary to me. I don’t really defend this position; it’s based entirely on irrational pride, but I haven’t quite found enough reasoning to abandon it.
For now, the question I’m left with is: what OTHER existential risks are out there, how cost-effective are they to fix, and do we have an adequate metric to judge our success?
The edited volume Global Catastrophic Risks addresses this question. It’s far more extensive than Nick Bostrom’s initial Existential Risks paper and provides a list of further reading after each chapter.
Here are some of the covered risks:
Astrophysical processes such as the stellar lifecycle
Human evolution
Super-volcanism
Comets and asteroids
Supernovae, gamma-ray bursts, solar flares, and cosmic rays
Climate change
Plagues and pandemics
Artificial Intelligence
Physics disasters
Social collapse
Nuclear war
Biotechnology
Nanotechnology
Totalitarianism
The book also has many chapters discussing the analysis of risk, risks and insurance, prophecies of doom in popular narratives, cognitive biases relating to risk, selection effects, and public policy.
What are “physics disasters”?
Breakdown of the vacuum state, conversion of matter into strangelets, mini black holes, and other things people fear from a particle accelerator like the LHC. It boils down to, “Physics is weird, and we might find some way of killing ourselves by messing with it.”
What is the risk from Human Evolution? Maybe I should just buy the book...
It’s well-written, though depressing, if you take “only black holes will remain in 10^45 years” as depressing news.
Evolution is not a forward-looking algorithm, so humans could evolve in dangerous, retrograde ways, losing what we currently consider valuable about ourselves, or the species itself could go extinct should it become too dependent on current conditions.
“I’m also biased in favor of human-centric (as opposed to trans/posthuman) solutions to current global problems. This is mostly because, well, I’m human and I like it that way. I feel like if humanity can’t solve its own problems with its current level of intelligence, it’s because we were too lazy, not because we weren’t smart enough.”

I’m curious to unpack this a bit. I have a couple of conflicting interpretations of what you might be getting at here; could you clarify?
At first, it sounded to me as if you were saying that you consider intelligence increase to be “transhuman”, but laziness reduction (diligence increase?) not to be “transhuman”. Which made me wonder: why the distinction?
Then, I thought you might be saying that laziness/diligence is morally significant to you, while intelligence increase is not morally significant. In other words, if humanity fails because we are lazy, we deserved to fail.
Am I totally misreading you? I suspect I am, at least in one of the above interpretations.
I haven’t fully unpacked this value/bias for myself yet, and I’m pretty sure at least part of it is inconsistent with my other values.
I’m not necessarily morally opposed to artificial (e.g. drug-based or cybernetic) intelligence OR diligence enhancements. But I would be disappointed if it turned out that humanity NEEDED such enhancements in order to fix its own problems.
I believe that diligence is something that can be taught, without changing anything fundamental about human nature.