So be it first noted that everyone who complains about trying to trade off cryonics against charity, instead of movie tickets or heart transplants for old people, is absolutely correct about cryonics being unfairly discriminated against.
That said, reading through these comments, I’m a bit disturbed that no one followed the principle of using the Least Convenient Possible World / strongest argument you can reconstruct from the corpse. Why are you accepting the original poster’s premise of competing with African aid? Why not just substitute donations to the Singularity Institute?
So I know that, obviously, and yet I go around advocating people sign up for cryonics. Why? Because I’m selfish? No. Because I’m under the impression that a dollar spent on cryonics is marginally as useful as a dollar spent on the Singularity Institute? No.
Because I don’t think that money spent on cryonics actually comes out of the pocket of the Singularity Institute? Yes. Obviously. I mean, a bit of deduction would tell you that I had to believe that.
Money spent on life insurance and annual membership in a cryonics organization rapidly fades into the background of recurring expenses, just like car insurance. To the extent it substituted for anything, it would tend to substitute for a house that costs $300/year less on the mortgage, or for retirement savings, or for something else that doesn’t accomplish nearly as much good as cryonics.
There are maybe two or three people in the entire world who spend only the bare possible minimum on themselves, and contribute everything else to a rationally effective charity. They have an excuse for not signing up. No one else does.
And if you do sign up for cryonics, that contributes to a general frame of mind of “Wait, there are clever solutions to all the world’s problems, this planet I’m living in doesn’t make any sense, it’s okay to do something that other people aren’t doing, I’m part of the community of people who are part of the future, and that’s why I’m going to donate to SIAI.” It’s a gateway drug; it’s part of the ongoing lifestyle of someone with one foot in the future, staring back at a mad world and doing what they can to save it.
The basic fact about rational charity is that people do not start out with fixed resources for charity and allocate them optimally. What matters is the variance in the tiny little percentage of their income that people give to rationally effective charity in the first place. And if I had to place my bets on empirical outcomes, I would bet that, by fostering a sense of guilt and “ugh” around rational charity, this blog post decreased that percentage among its readers more than it actually moved any dollars to an effective charity (i.e., SIAI; who is anyone kidding with this talk about development aid?).
And finally, with all that said: if we actually did forget about the Singularity and the expected future value of the galaxy, took the original post at face value, considered the gap between a planet with slightly more developed poor countries and a planet signed up for cryonics, and asked what marginal impact you can have on each relative to existing resources, then clearly you should be signing up for cryonics. I am tempted to add a sharp “Duh” to the end of this statement.
But of course, the actual impact of cryonics, just like the actual impact of development aid, in any rational utilitarian calculation, is simply its impact on the future of the galaxies, i.e., its impact on existential risk. Do I think that impact is net negative? Obviously not.
The world is full of poor people who genuinely cannot afford to sign up for cryonics. Whether they spend whatever pittance may be left to them above bare subsistence on charity or on rum is irrelevant.
The world also contains many people like me who can afford to eat and live in a decent apartment, but who can’t afford health insurance. I’m not so convinced I should be thinking about cryonics at this point either.
Short version:
It’s not that cryonics is one of the best ways that you can spend money; it’s that cryonics is one of the best ways that you can spend money on yourself. Since almost everyone who is likely to read this spends a fair amount of money on themselves, almost everyone who is likely to read this would be well-served by signing up for cryonics instead.
Short but not true. Cryonics is one of the ways that, in the self-directed part of your life, you can pretend to be part of a smarter civilization and be the sort of sane person who also fights existential risk in the other-directed part of their life. Anyone who spends money on movie tickets does not get to claim that they have no self-directed component to their life.
I don’t think I’m suggesting that people don’t have a self-directed component to their lives, though I suppose there could be some true “charity monks” out there. I’d be surprised, though, since I wouldn’t count even someone like Peter Singer as being without self-directed elements in his life. I only left the potential exception there because I think there is a chance that someone reading the post will not have sufficient funds to purchase the life insurance necessary for cryonic preservation.
I guess I agree that only the people you specify, those who spend only the bare minimum on themselves and contribute everything else to a rationally effective charity, can be said to have made consistently rational decisions when it comes to allocating money between benefiting themselves and benefiting others (at least among those who know something about the issues). I don’t think this implies that all but these people should sign up for cryonics. General point: [Your actions cannot be described as motivated by a coherent utility function unless you do A] does not imply [you ought to do A].
Simple example: Tom cares about the welfare of others as much as his own, but biases lead him to consistently act as if he cared about his welfare 1,000 times as much as the welfare of others. Tom could overcome these biases, but he has not in the past. In a moment when he is unaffected by these biases, Tom sacrifices his life to save the lives of 900 other people.
[All that said, I take your point that it may be rational for you to advocate signing up for cryonics, since cryonics money and charity money may not be substitutes.]
Are you suggesting that cryonics advocacy is in any sense an efficient use of time to reduce x-risk? I’d like to believe that since I spend time on it myself, but it seems suspiciously convenient.
If you are opening the scope to the entire world it would seem fair to extend the excuse to all those who don’t even have the bare possible minimum for themselves and also don’t live within 100 km of anyone who understands cryonics.
Agreed; your correction is accepted.
As to why I don’t just substitute donations to the Singularity Institute: given my current educational background, I am not able to judge the following claims (among others), and I therefore perceive it as unreasonable to put all my eggs in one basket:
Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go.)
Advanced real-world molecular nanotechnology (the grey goo kind the above could use to mess things up.)
The likelihood of exponential growth versus a slow development over many centuries.
That it is worth spending most of what I have on a future whose likelihood I cannot judge.
That Eliezer Yudkowsky (SIAI) is the right and only person who should be working to mitigate the above risks.
What do you expect me to do? Just believe you? Like so much I believed in the past that made sense but turned out to be wrong? And besides, my psychological condition wouldn’t allow me to devote all my resources to the SIAI without ever going to the movies or the like. The thought makes me reluctant to give anything at all.
ETA
Do you have an explanation for the fact that you are the only semi-popular person who has figured all this out? The only person aware of something that might shatter the utility of the universe, if not the multiverse? Why aren’t people like Vernor Vinge, Charles Stross or Ray Kurzweil running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI?
I’m talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all this for no particular reason. Rather they tell me that there are too many open questions to worry about the possibilities depicted on this site rather than other near-term risks that might very well wipe us out.
Why aren’t Eric Drexler, Gary Drescher or other AI researchers like Marvin Minsky worried to the extent that they signal their support for your movement?
You may be forced to make a judgement under uncertainty.
My judgement of and attitude towards a situation are necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. Therefore I perceive it as unreasonable to put all my eggs in one basket.
The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is not sufficiently clear to me to give it top priority.
Many of the arguments on this site involve a few propositions and the use of probability to justify action in case those propositions are accurate. Here so much is uncertain that I’m not able to judge the nested probability estimates. I’m already unable to judge the likelihood of something like the existential risk of exponentially evolving superhuman AI compared to the likelihood that we are living in a simulated reality. And even if you tell me, am I to believe the data you base those estimates on?
Maybe after a few years of study I’ll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I’d have some fun.
You ask a lot of good questions in these two comments. Some of them are still open questions in my mind.
Keep reading the Less Wrong sequences. The fact that you used this phrase, when it nakedly exposes reasoning that is a direct, obvious violation of expected utility maximization (with any external goal, that is, rather than psychological goals), tells me that rather than trying to write new material for you, I should advise you to keep reading what’s already been written, until it no longer seems at all plausible to you that citing Charles Stross’s disbelief is a good argument for remaining a bystander, any more than it will seem remotely plausible to you that “all your eggs in one basket” is a consideration that should guide expected-utility-maximizing personal philanthropy (for amounts less than a million dollars, say).
And of course I was not arguing that you should give up movie tickets for SIAI. It is exactly this psychological backlash that was causing me to be sharp about the alleged “cryonics vs. SIAI” tradeoff in the first place.
What I meant by using that phrase is that, given my current knowledge, I cannot expect the promised utility payoff that would justify making the SIAI a top priority. I donate to the SIAI but also spend considerable resources on maximizing utility in the present. Enjoying life, so to speak, is my safety net in case my inability to judge the probability of a positive payoff is eventually resolved in the negative.
I believe hard-SF authors certainly know a lot more than I do, so far, about the related topics. I could have picked Greg Egan. That’s beside the point though; it’s not just Stross or Egan but everyone versus you and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as you in the maths, or do they somehow teach, but not use, their own methods of reasoning and decision making?
Having read the sequences, I’m still unsure where “a million dollars” comes from. Why not diversify when you have less money than that?
It is an estimate of the amount you would have to donate to the most marginally effective charity before its marginal effectiveness drops below that of the previously second-most marginally effective charity.
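To make that estimate concrete, here is a minimal sketch of the idea with made-up diminishing-returns curves; the functions, numbers, and the resulting threshold are purely illustrative and are not estimates for any real charity:

```python
# Hypothetical marginal effectiveness curves (utility per extra dollar),
# as a function of how much the charity has already received from you.
def marginal_effectiveness_top(donated):
    return 10.0 / (1.0 + donated / 1e6)   # diminishes as the charity gets funded

def marginal_effectiveness_runner_up(donated):
    return 4.0                            # roughly flat at a small donor's scale

def crossover_donation(step=1000.0, cap=1e8):
    """How much you could give to the top charity before its marginal
    effectiveness drops below the runner-up's; below this amount,
    splitting your donation buys nothing."""
    given = 0.0
    while given < cap and marginal_effectiveness_top(given) > marginal_effectiveness_runner_up(0.0):
        given += step
    return given

print(crossover_donation())  # ~1.5 million dollars with these made-up curves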
I can see following that for charities with high-probability results; I would certainly support that with respect to deciding whether to give to an African food charity versus an Asian food charity, for instance. But for something like existential risk, if there are two charities that I believe each have a 1% chance of working and an arbitrarily high, roughly equal payoff, then it seems I should want both invested in. I might pick one and then hope someone else picks the other, but it seems equivalent if not better to just give equal money to both, to hedge my bets.
Okay, I suppose I could actually pay attention to what everybody else is doing, and just give all my money to the underrepresented one until it stops being underrepresented.
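For what it’s worth, here is a minimal sketch of why “hedging” does not raise the expected payoff for a small, risk-neutral donor. The payoff, probabilities, and per-dollar effects are all invented numbers, and the two payoffs are simply treated as additive:

```python
V = 1e12                   # hypothetical payoff if an effort succeeds
base_p = 0.01              # baseline success probability of each charity (as in the comment)
gain_per_dollar_a = 2e-9   # hypothetical marginal probability gain per dollar, charity A
gain_per_dollar_b = 1e-9   # hypothetical marginal probability gain per dollar, charity B
budget = 1000.0

def expected_payoff(to_a):
    """Expected payoff from giving to_a to A and the rest to B."""
    to_b = budget - to_a
    p_a = base_p + gain_per_dollar_a * to_a
    p_b = base_p + gain_per_dollar_b * to_b
    return V * p_a + V * p_b

print(expected_payoff(budget))       # everything to A
print(expected_payoff(budget / 2))   # 50/50 split: strictly lower expected payoff
```

Splitting only reduces the variance in which project you personally backed; if what you care about is the outcome rather than your own portfolio, the marginal dollar should keep going to whichever charity has the higher marginal expected value, which is the “pay attention to what everybody else is doing” point above.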
This is exactly what I’m having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimates that support and reinforce each other. I’m not sure what you call this in English, but in German I’d call it a castle in the air.
And before you start downvoting this comment and telling me to learn about Solomonoff induction etc., I know that what I’m saying may simply be due to a lack of education. But that’s what I’m arguing about here. And I bet that many who support the SIAI cannot explain the reasoning that led them to support the SIAI in the first place, or at least cannot substantiate the estimates with any evidence other than a coherent internal logic of reciprocally supporting probability estimates.
The figure “a million dollars” doesn’t matter. The reasoning in this particular case is pretty simple. Assuming that you actually care about the future and not your personal self-esteem (the knowledge of having personally contributed to a good outcome), there is no reason why putting all your personal eggs in one basket should matter at all. You wouldn’t want humanity to put all its eggs in one basket, but the only way you could change that would be if you were the only person putting eggs into a particular basket. There may be a particular distribution of eggs that is optimal, but unless you think the distribution of everyone else’s eggs is already optimal, you shouldn’t distribute your personal eggs the same way; you should put them in the basket that is most underrepresented (measured by marginal utility, not by the ratio of actual allocation to theoretically optimal allocation or any such nonsense), so as to move humanity’s overall allocation closer to optimal. Unless, that is, you have so many eggs that the most underrepresented basket stops being the most underrepresented (= “a million dollars”).
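Here is a minimal sketch of that greedy “most underrepresented basket” rule; the baskets, the existing communal allocation, and the diminishing-returns curves are all invented for illustration and refer to no real organizations:

```python
def marginal_utility(basket, already_allocated):
    # Hypothetical diminishing returns: utility per extra dollar falls
    # as a basket fills up, with a different weight for each basket.
    weights = {"basket_a": 5.0, "basket_b": 3.0, "basket_c": 1.0}
    return weights[basket] / (1.0 + already_allocated / 1e6)

def allocate_my_eggs(my_budget, communal_allocation, step=100.0):
    """Send each marginal dollar to whichever basket currently has the
    highest marginal utility, given everyone else's (fixed) allocation."""
    mine = {basket: 0.0 for basket in communal_allocation}
    remaining = my_budget
    while remaining > 0:
        best = max(
            communal_allocation,
            key=lambda b: marginal_utility(b, communal_allocation[b] + mine[b]),
        )
        mine[best] += step
        remaining -= step
    return mine

communal = {"basket_a": 2_000_000.0, "basket_b": 100_000.0, "basket_c": 500_000.0}
print(allocate_my_eggs(10_000.0, communal))
# With these made-up numbers every one of "my" dollars lands in basket_b,
# because it stays the most underrepresented basket (by marginal utility)
# for the whole, relatively small, personal budget.
```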
This might be sound reasoning. In this particular case you’ve made up a number and more or less based it on some idea of optimal egg allocation. That is all very well, but it is not exactly what I meant by using that phrase or by the comment you replied to, and it wasn’t my original intention when replying to EY.
I can follow much of the reasoning and many of the arguments on this site. But I’m currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?
I’m concerned that the LW community, however consistently, is updating on fictional evidence. My questions in the original comment were meant to inquire into the basic principles, the foundation on which the otherwise sound argumentation rests. That is, are you creating models to treat subsequent models, or are the propositions based on fact?
An example here is the treatment and use of MWI, and the conclusions, arguments and further estimates based on it. No doubt MWI is the only consistent non-magical interpretation of quantum mechanics. But that is all it is: an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least within the LW community? But that’s beside the point. The problem here is that such conclusions, which are at best weak evidence, are widely used as a basis for further speculations and estimates.
What I’m trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supporting argumentation on such premises, ideas that are themselves not based on firm ground.
Now you might argue that it’s all about probability estimates. Someone else might argue that reports from the past need NOT provide evidence about what will occur in the future. But the gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies: it is fiction. Imagination allows for endless possibilities, while scientific evidence at least provides hints of what might be possible and what impossible. Science only has to provide the ability to assess your data. The experience of its realization is a justification that bears a hint. Any hint that empirical criticism provides gives you new information on which you can build. Not because it bears truth value, but because it gives you an idea of where you want to go, an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.
And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic, or whether it is sufficiently grounded in empirical criticism to firmly substantiate the strong arguments for action that are proclaimed on this site.
I cannot fault this reasoning. From everything I have read in your comments, this seems to be the right conclusion for you to draw given what you know. Taking the word of a somewhat non-mainstream community would be intellectually reckless. For my part, there are some claims on LW that I do not feel capable of reaching a strong conclusion on, even accounting for respect for expert opinions.
Now I’m curious. Here you have referred to “LW” thinking in general, while we can obviously also consider LW conclusions on specific topics. Of all the positions that LW has a consensus on (and that are not nearly universally accepted among educated people), are there any particular topics you are confident in either confirming or denying? For example, “cryonics is worth a shot” seems far easier to judge than conclusions about quantum mechanics and decision theory.
And yes, it seems like my post may have done more harm than good. I was not anticipating such negative reactions. What I said seems to have been construed in ways that were totally unexpected to me and which are largely unrelated to the points that I was trying to make. I take responsibility for the outcome.
Thanks for the response. I’m presently in Europe without steady internet access but look forward to writing back. My thoughts on these matters are rather detailed/intricate.
For now I’ll just say that I think that because people have such strong irrational biases against cryonics, advocacy of cryonics may (unfairly!) lower the credibility of the rationalist movement among people who it would be good to draw in to the rationalist movement. I think (but am not sure) that this factor makes cryonics advocacy substantially less fruitful than it may appear.
As a disclaimer, I’ll say that I’m quite favorably impressed by you. My post may have come across as critical of …
Ordinary people don’t want to sign up for cryonics, while they do want to go to movies and get heart transplants. So if multifoliaterose tells people, “Instead of signing up for cryonics, send money to Africa,” he’s much more likely to be successful than if he tells people, “Instead of going to the movies, send money to Africa.”
So yes, if you want to call this “unfair discrimination,” you can, but his whole point is to get people to engage in certain charities, and it seems he is just using a more effective means rather than a less effective one.
I’m saying he’ll get them to do neither.
Easy way for multi to provide an iota of evidence that what he’s doing is effective: Find at least one person who says they canceled a cryo subscription and started sending an exactly corresponding amount of money to the Singularity Institute. If you just start sending an equal amount of money to the Singularity Institute, without canceling the cryo, then it doesn’t count as evidence in his favor, of course; and that is just what I would recommend anyone feeling guilty actually do. And if anyone actually sends the money to Africa instead, I am entirely unimpressed, and I suggest that they go outside and look up at the night sky for a while and remember what this is actually about.
Even less than signing up for cryonics do most people want to murder their children. Do you expect that telling them “Instead of murdering your children, send aid to Africa (or SIAI)” will increase the amount they send to Africa/SIAI?
That isn’t relevant because murdering your children doesn’t cost money.
I think it does, since you’ll probably want to buy weapons, hire an assassin, hire a lawyer, etc. But you can change the example to “Send money to al-Qaeda” if you prefer.
I’m willing to bet that the number of LW readers seriously considering cryonics greatly outweighs the number seriously considering murdering their kids OR funding al-Qaeda. For the general population this might not be so, but for a Less Wrong post it seems more than reasonable to contrast with cryonics rather than terrorism.