Why not just substitute donations to the Singularity Institute?
Because, given my current educational background, I am not able to judge the following claims (among others), and I therefore perceive it as unreasonable to put all my eggs in one basket:
Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable, not just at Chess or Go).
Advanced real-world molecular nanotechnology (the grey goo kind that the above could use to mess things up).
The likelihood of exponential growth versus a slow development over many centuries.
That it is worth spending most of my resources on a future whose likelihood I cannot judge.
That Eliezer Yudkowsky (SIAI) is the right and only person who should be working to mitigate the above.
What do you expect me to do? Just believe you? Like I believed so many things in the past that made sense but turned out to be wrong? And besides, my psychological condition wouldn’t allow me to devote all my resources to the SIAI without ever going to the movies or the like. The thought alone makes me reluctant to give anything at all.
ETA
Do you have an explanation for the fact that you are the only semi-popular person who has figured all this out? The only person who is aware of something that might shatter the utility of the universe, if not the multiverse? Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI?
I’m talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all this for no particular reason. Rather, they tell me that there are too many open questions to worry about the possibilities depicted on this site instead of other near-term risks that might very well wipe us out.
Why aren’t Eric Drexler, Gary Drescher or other AI researchers like Marvin Minsky worried enough to signal their support for your movement?
You may be forced to make a judgement under uncertainty.
My judgement of and attitude towards a situation are necessarily as diffuse as my knowledge of its underlying circumstances and of the reasoning involved. Therefore I perceive it as unreasonable to put all my eggs in one basket.
The state of affairs regarding the SIAI, its underlying rationale and its rules of operation is not sufficiently clear to me to give it top priority.
Many of the arguments on this site involve a few propositions and the use of probability to legitimize action in case those propositions turn out to be accurate. But so much here is uncertain that I am unable to judge the nested probability estimates involved. I’m already unable to judge the likelihood of something like the existential risk of exponentially evolving superhuman AI compared to the likelihood that we are living in a simulated reality. And even if you tell me, am I to believe the data you base those estimates on?
Maybe after a few years of study I’ll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I’d have some fun.
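To illustrate what I mean by being unable to judge nested probability estimates, here is a toy sketch in Python. Every number in it is invented purely for illustration; the point is only how quickly chained uncertainties compound.

```python
import random

# Invented example: a conclusion that depends on four propositions, each of
# which I can only bound very loosely, e.g. "somewhere between 10% and 90%".
links = [(0.1, 0.9), (0.2, 0.8), (0.3, 0.9), (0.1, 0.7)]

def sample_conclusion():
    # Simplifying assumptions: the propositions are independent, and the
    # conclusion requires all of them to hold.
    p = 1.0
    for lo, hi in links:
        p *= random.uniform(lo, hi)
    return p

samples = sorted(sample_conclusion() for _ in range(100_000))
print("median estimate:", samples[len(samples) // 2])
print("5th to 95th percentile:", samples[5_000], "to", samples[95_000])
# The plausible range for the conclusion comes out very wide, which is the
# sense in which I cannot judge such nested estimates, let alone the data
# they are based on.
```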
You ask a lot of good questions in these two comments. Some of them are still open questions in my mind.
Keep reading the Less Wrong sequences. The fact that you used this phrase, when it nakedly exposes reasoning that is a direct, obvious violation of expected utility maximization (with any external goal, that is, rather than psychological goals), tells me that rather than trying to write new material for you, I should advise you to keep reading what’s already been written, until it no longer seems at all plausible to you that citing Charles Stross’s disbelief is a good argument for remaining as a bystander, any more than it will seem remotely plausible to you that “all your eggs in one basket” is a consideration that should guide expected-utility-maximizing personal philanthropy (for amounts less than a million dollars, say).
And of course I was not arguing that you should give up movie tickets for SIAI. It is exactly this psychological backlash that was causing me to be sharp about the alleged “cryonics vs. SIAI” tradeoff in the first place.
The fact that you used this phrase when it nakedly exposes reasoning that is a direct, obvious violation of expected utility maximization...

What I meant to say by using that phrase is that, given my current knowledge, I cannot expect the promised utility payoff that would justify making the SIAI my top priority. I donate to the SIAI, but I also spend considerable resources on maximizing utility in the present. Enjoying life, so to speak, is a safety net in case the question I cannot currently judge, whether there will be a positive payoff, is eventually answered in the negative.
...until it no longer seems at all plausible to you that citing Charles Stross’s disbelief is a good argument for remaining as a bystander...

I believe hard-SF authors certainly know a lot more than I do, so far, about the related topics. I could have picked Greg Egan. That’s beside the point, though: it’s not just Stross or Egan, it’s everyone versus you and some unknown followers. What about the other Bayesians out there? Are they simply not as literate in the math as you are, or do they somehow teach their own methods of reasoning and decision making but not use them?
Having read the sequences, I’m still unsure where “a million dollars” comes from. Why not diversify when you have less money than that?
It is an estimate of the amount you would have to donate to the most marginally effective charity in order to decrease its marginal effectiveness below that of the formerly second most marginally effective charity.
I can see following that for charities with high-probability results; I would certainly support that with respect to deciding whether to give to an African food charity versus an Asian food charity, for instance. But for something like existential risk, if there are two charities that I believe each have a 1% chance of working and an arbitrarily high, roughly equal payoff, then it seems I should want both invested in. I might pick one and then hope someone else picks the other, but it seems equivalent if not better to just give equal money to both, to hedge my bets.
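To check that intuition with a toy calculation: the numbers below are made up, and the sketch assumes both that I am risk-neutral about the outcome and that a donation of my size shifts each charity’s success probability linearly.

```python
# Hypothetical figures: two existential-risk charities with roughly equal,
# very large payoffs and the same assumed marginal effect per dollar donated.
donation = 1_000.0        # what I have to give
payoff = 1e12             # assumed value of success, the same for both
dp_per_dollar = 1e-13     # assumed increase in success probability per dollar

# Everything into charity A:
ev_concentrated = donation * dp_per_dollar * payoff

# Split evenly between A and B:
ev_split = 2 * (donation / 2) * dp_per_dollar * payoff

print(ev_concentrated, ev_split)  # equal in expectation (about 100 either way)
# Splitting only reduces the variance of my own contribution's effect; for a
# risk-neutral altruist that is no reason to diversify. It starts to matter
# only once my money is large enough to change which charity has the higher
# marginal effect per dollar.
```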
Okay, I suppose I could actually pay attention to what everybody else is doing, and just give all my money to the underrepresented one until it stops being underrepresented.
This is exactly what I’m having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimates that support and reinforce each other. I’m not sure what you call this in English, but in German I’d call it a castle in the air.
And before you start downvoting this comment and telling me to learn about Solomonoff induction etc., I know that what I’m saying may simply be due to a lack of education. But that’s exactly what I’m arguing about here. And I bet that many who support the SIAI cannot explain the reasoning that led them to support the SIAI in the first place, or at least cannot substantiate their estimates with any evidence other than a coherent internal logic of mutually supporting probability estimates.
The figure “a million dollars” doesn’t matter. The reasoning in this particular case is pretty simple. Assuming that you actually care about the future and not your personal self-esteem (the knowledge of personally having contributed to a good outcome), there is no reason why putting all your personal eggs in one basket should matter at all. You wouldn’t want humanity to put all its eggs in one basket, but your doing so would only change that if you were the only person putting eggs into a particular basket. There may be a particular distribution of eggs that is optimal, but unless you think the distribution of everyone else’s eggs is already optimal, you shouldn’t distribute your personal eggs the same way; you should put them in the basket that is most underrepresented (measured by marginal utility, not by the ratio of actual to theoretically optimal allocation or any such nonsense) so as to move humanity’s overall allocation closer to optimal. Unless, that is, you have so many eggs that the most underrepresented basket stops being the most underrepresented (= “a million dollars”).
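Here is a minimal sketch of that rule, with two hypothetical baskets and invented diminishing-returns curves; the “million dollars” is simply the point at which the best basket stops being the best.

```python
# Invented diminishing-returns curves: marginal utility per dollar as a
# function of how much a basket has already been funded by everyone.
def marginal_utility(basket, funded_so_far):
    base = {"A": 10.0, "B": 6.0}[basket]        # assumed initial effectiveness
    return base / (1.0 + funded_so_far / 1e6)   # assumed diminishing returns

def allocate(budget, step=10_000):
    funded = {"A": 0.0, "B": 0.0}
    while budget >= step:
        # Each chunk goes to whichever basket currently has the highest
        # marginal utility per dollar, i.e. the most underrepresented one.
        best = max(funded, key=lambda b: marginal_utility(b, funded[b]))
        funded[best] += step
        budget -= step
    return funded

print(allocate(500_000))    # a small donor: every dollar goes to basket A
print(allocate(5_000_000))  # a large donor: A is funded until its marginal
                            # utility falls to B's, and only then does B get any
```

Under these invented numbers, anyone giving less than roughly $670,000 should put everything into basket A; that threshold is what plays the role of the “million dollars” above.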
This might be sound reasoning. In this particular case you’ve made up a number and more or less based it on some idea of optimal egg allocation. That is all very well, but it is not exactly what I meant by using that phrase or by the comment you replied to, and it wasn’t my original intention when replying to EY.
I can follow much of the reasoning and many of the arguments on this site. But I’m currently unable to judge how much credence to give them overall. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?
I’m concerned that the LW community, however consistently, is updating on fictional evidence. My questions in the original comment were meant to inquire into the basic principles, the foundation on which all the sound-seeming argumentation rests. That is, are you creating models to treat subsequent models, or are the propositions based on fact?
An example here is the treatment and use of MWI, and the conclusions, arguments and further estimates based on it. No doubt MWI is the only consistent non-magical interpretation of quantum mechanics. But that is all it is: an interpretation, a logically consistent deduction. Or should I rather call it an induction, since the inference seems to be of greater generality than the premises, at least within the LW community? But that is beside the point. The problem here is that such conclusions, though at best weak evidence, are widely used as a basis for further speculations and estimates.
What I’m trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt: debt in the form of other kinds of evidence. Not that it is a false hypothesis, or that it is not even wrong, but you cannot base a whole movement and a huge framework of further inference and supporting argumentation on such premises, on ideas that are themselves not on firm ground.
Now you might argue that it’s all about probability estimates. Someone else might argue that reports from the past do NOT need to provide evidence about what will occur in the future. But the gist of the matter is that a coherent and consistent framework of sound-seeming argumentation based on unsupported inference is nothing more than its description implies: it is fiction. Imagination allows for endless possibilities, while scientific evidence at least provides hints of what might be possible and what impossible. Science gives you the ability to assess your data, and the experience of seeing a prediction realized is a justification that carries such a hint. Every hint that empirical criticism provides gives you new information to build on, not because it bears truth value but because it gives you an idea of where to go next, an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.
And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW rests on a merely consistent internal logic or on something sufficiently grounded in empirical criticism to firmly substantiate the strong calls for action proclaimed on this site.
I cannot fault this reasoning. From everything I have read in your comments, this seems to be the right conclusion for you to draw given what you know. Taking the word of a somewhat non-mainstream community would be intellectually reckless. For my part, there are some claims on LW that I do not feel capable of reaching a strong conclusion on, even accounting for respect for expert opinion.
Now I’m curious: here you have referred to “LW” thinking in general, but we can obviously also consider LW conclusions on specific topics. Of all the positions that LW has a consensus on (and that are not nearly universally accepted by educated people), are there any particular ones that you are confident of either confirming or denying? For example, “cryonics is worth a shot” seems far easier to judge than the conclusions about quantum mechanics and decision theory.