In line with my remarks under “Mistake #6,” I plan on gradually developing the background behind my thinking in a sequence of postings. This will give me the chance to provide others with appropriate context and refine my internal model according to the feedback that I get so that I can arrive at a more informed view.
Recurring to an earlier comment that I’ve made, I think that there’s an issue of human inability to assign numerical probabilities which is especially pronounced when one is talking about small and unstable probabilities. So I’m not sure how valuable it would be for me to attempt to give a numerical estimate. But I’ll think about answering your question after making some more postings.
I feel like you still haven’t understood the main criticism of your posts. You have acknowledged every mistake except for having an incorrect conclusion. All the thousands of words you’ve written avoid confronting the main point, which is whether people should donate to SIAI. To answer this, we need four numbers:
1. The marginal effect that donating a dollar to SIAI has on the probabilities of friendly AI being developed, and of human extinction
2. The utilities of friendly AI and of human extinction
3. The utility of the marginal next-best use of money.
We don’t need exact numbers, but we emphatically do need orders of magnitude. If you get the order of magnitude of any one of 1-3 wrong, then your conclusion is shot. The problem is, estimating orders of magnitude is a hard skill; it can be learned, but it is not widespread. And if you don’t have that skill, you cannot reason correctly about the topic.
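To make the structure of that comparison concrete, here is a minimal sketch in Python. Every number in it is an arbitrary placeholder of mine, not an estimate anyone in this thread has offered; the point is only that the verdict turns on the orders of magnitude of items 1-3.

```python
# A minimal sketch of the comparison described above. All values are
# arbitrary placeholders, chosen only to show how the conclusion depends
# on the orders of magnitude of items 1-3.

# (1) Assumed marginal effect of one donated dollar on the two probabilities.
dP_friendly_ai = 1e-13     # placeholder change in P(friendly AI) per dollar
dP_extinction = -1e-13     # placeholder change in P(extinction) per dollar

# (2) Assumed utilities of the two outcomes, in arbitrary common units.
U_friendly_ai = 1e10
U_extinction = -1e10

# (3) Assumed utility of the marginal next-best use of a dollar, same units.
U_next_best = 1e-3

def expected_gain(dp_fai, dp_ext):
    """Expected utility change from donating the marginal dollar."""
    return dp_fai * U_friendly_ai + dp_ext * U_extinction

# With these placeholders, donating beats the next-best use of the dollar...
print(expected_gain(dP_friendly_ai, dP_extinction) > U_next_best)            # True

# ...but an error of one order of magnitude in item (1) reverses the verdict.
print(expected_gain(dP_friendly_ai / 10, dP_extinction / 10) > U_next_best)  # False
```

The same reversal happens if item (2) or (3) is off by a factor of ten instead, which is the sense in which each of the three estimates is load-bearing.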
So far, you have given exactly one order of magnitude estimate, and it was shot down as ridiculous by multiple qualified people. Since then, you have consistently refused to give any numbers whatsoever. The logical conclusion is that, like most people, you lack the order of magnitude estimation skill. And unfortunately, that means that you cannot speak credibly on questions where order of magnitude estimation is required.
The tone of the last paragraph seems uncalled for. I doubt that a unitary “order of magnitude estimation skill” is the key variable here. To put a predictive spin on this, I doubt that you’d find a very strong correlation between results in a Fermi calculation contest and estimates of the above probabilities among elite hard sciences PhD students.
When someone says they’re rethinking an estimate and don’t want to give a number right now, I think that’s respectable in the same way as someone who’s considering a problem and refuses to propose solutions too soon. There’s an anchoring effect that kicks in when you put down a number.
From my private communications with multifolaterose, I believe he’s acting in good faith by refusing to assign a number, for essentially that reason.
The link to Human inability to assign numerical probabilities, and how far into the future he deferred the request, gave me the impression that it was a matter of not wanting to assign a number at all, not merely of deferring it until later. Thank you for pointing out the more charitable interpretation; you seem to have some evidence that I don’t.
Orthonormal correctly understands where I’m coming from. I feel that I have very poor information on the matter at hand and want to collect a lot more information before evaluating the cost-effectiveness of donating to SIAI relative to other charities. I fully appreciate your point that in the end it’s necessary to make quantitative comparisons and plan on doing so after learning more.
I’ll also say that I agree with rwallace’s comment that rather than giving an estimate of the probability at hand, it’s both easier and sufficient to give an estimate of
The relative magnitudes of the marginal effects of spending a dollar on X vs Y.
All the thousands of words you’ve written avoid confronting the main point, which is whether people should donate to SIAI.
I agree that my most recent post does not address the question of whether people should donate to SIAI.
So far, you have given exactly one order of magnitude estimate, and it was shot down as ridiculous by multiple qualified people. Since then, you have consistently refused to give any numbers whatsoever. The logical conclusion is that, like most people, you lack the order of magnitude estimation skill. And unfortunately, that means that you cannot speak credibly on questions where order of magnitude estimation is required.
There are many ways in which I could respond here, but I’m not sure how to respond because I’m not sure what your intent is. Is your goal to learn more from me, to teach me something new, to discredit me in the eyes of others, or something else?
Actually, my goal was to get you to give some numbers, to test whether you’ve really updated in response to criticism or are just signalling that you have. I had threshold values in mind, and associated interpretations. Unfortunately, it doesn’t seem to have worked (I put you on the defensive instead), so the test is inconclusive.
Setting aside for the moment the other questions surrounding this topic, and addressing just your main point in this comment:
The fact of the matter is that we do not have the data to meaningfully estimate numbers like this, not even to an order of magnitude, not even to ten orders of magnitude, and it is best to admit this.
Fortunately, we don’t need an order of magnitude to make meaningful decisions. What we really need to know, or at least try to guess with better than random accuracy, is:
The sign (as opposed to magnitude) of the marginal effect of spending a dollar on X.
The relative magnitudes of the marginal effects of spending a dollar on X vs Y.
Both of these are easier to at least coherently reason about than absolute magnitudes.
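Here is a toy sketch, with invented quantities, of why those two pieces of information can suffice: any unknown common scale factor in the absolute marginal effects cancels out of an X-versus-Y comparison.

```python
# Toy illustration with invented quantities: the X-vs-Y choice consults only
# the signs of the marginal effects and their relative magnitude, never the
# absolute scale, which is modelled here as an unknown positive factor k.

def marginal_effect(relative_strength, k):
    """Hypothetical absolute marginal effect of a dollar: an unknown scale k
    times a relative strength that we can at least try to reason about."""
    return k * relative_strength

def prefer_x_over_y(sign_x, sign_y, relative_x, relative_y):
    """Decide using only the signs and the relative magnitudes of the effects."""
    if sign_x > 0 and sign_y <= 0:
        return True                 # X helps and Y does not
    if sign_y > 0 and sign_x <= 0:
        return False                # Y helps and X does not
    if sign_x <= 0 and sign_y <= 0:
        return False                # neither helps; spend the dollar elsewhere
    return relative_x > relative_y  # both help: only their ratio matters

# The ranking by absolute effect agrees with the sign-and-ratio decision for
# every positive k, so no absolute magnitude estimate is needed.
for k in (1e-15, 1e-6, 1.0):
    assert (marginal_effect(3.0, k) > marginal_effect(1.0, k)) == \
        prefer_x_over_y(1, 1, 3.0, 1.0)
```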
The marginal effect that donating a dollar to SIAI has on the probabilities of friendly AI being developed, and of human extinction.
P(eventual human extinction) looks enormous—since the future will be engineered. It depends on exactly what you mean, though. For example, is it still “extinction” if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?
Also, what is a “friendly AI”? Say a future machine intelligence looks back on history—and tries to decide whether what happened was “friendly”. Is there some decision process they could use to determine this? If so, what is it?
At any rate, the whole analysis here seems misconceived. The “extinction of all humans” could be awful—or wonderful—depending on the circumstances and on the perspective of the observer. Values are not really objective facts that can be estimated and agreed upon.
For example, is it still “extinction” if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?
Or if all humans have voluntarily [1] changed into things we can’t imagine?
[1] I sloppily assume that choice hasn’t changed too much.
All the thousands of words you’ve written avoid confronting the main point, which is whether people should donate to SIAI. To answer this, we need four numbers:
It sounds as though you are assuming that the aim of “people” is to SAVE THE WORLD.
Do you really think that?!? Have you thought that through?!?
A cursory analysis—from the perspective of basic biology—predicts that most humans can be reasonably expected to be interested in sex, fashion, food, money and status—and concerned with THE END OF THE WORLD—not so much. That seems pretty consistent with the actual interests of most people.
So: are you talking about some tiny subset of all humans? If so, which tiny subset, and what are their presumed goals—since that matters.
A cursory analysis—from the perspective of basic biology—predicts that most humans can be reasonably expected to be interested in sex, fashion, food, money and status—and concerned with THE END OF THE WORLD—not so much. That seems pretty consistent with the actual interests of most people.
People can’t have sex, eat food, follow fashion, get money, or raise their status if the world ends. Unless you absolutely refuse to say that a person wants anything at all beyond what they know they want and say they want and frequently think about wanting, it’s a trivial inference that most people do not want the world to end, and, given the other things they want, should want to help prevent the world from ending if they can.
Human wants were shaped by evolution. The world has not ended yet—so THE END OF THE WORLD is probably a rather abstract concept for many humans. If you look at 2012, Armageddon and other movies, people are obviously interested in it a bit. Indeed, the concept of THE END OF THE WORLD probably acts as a relatively novel superstimulus to the paranoia circuitry of vulnerable humans—so some people may care about it a lot.
However, if you “follow the money” you will quickly see that lipstick is widely considered to be much more important.
I’m confused by this response. Did I say something to imply that humans can only have one aim at a time? I do think that almost all humans would agree that the world being saved is better than the world not being saved, but of course that competes for money and attention with all other goals, both altruistic and selfish. I happen to think that people ought to weight saving the world highly, but I didn’t say that in the post you’re replying to, I don’t think that people actually do weight saving the world highly, and I didn’t say that I think people do weight saving the world highly. All I said was that it’s important to compute order of magnitude figures before drawing conclusions about existential risk.
If people don’t value preventing THE END OF THE WORLD highly, then they have no reason for donating to organisations which are purportedly trying to prevent DOOM.
Since some people seem to think that preventing THE END OF THE WORLD is very important—while a great many others barely seem to think twice about the issue—any attempt to obtain public agreement on these utilities seems to itself be doomed.
I remember the majority of people in the US being afraid of nuclear war w/ the USSR. This was a rational fear, although I guess the actual reason most held it was their susceptibility to propaganda and mass hysteria.
This suggests to me that there’s a difficulty getting people to care about a particular risk until some critical mass is reached, after which the fear may even become excessive.