Having read the sequences, I’m still unsure where “a million dollars” comes from. Why not diversify when you have less money than that?
It is an estimate of the amount you would have to donate to the most marginally effective charity to decrease its marginal effectiveness below that of the previously second most marginally effective charity.
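To make that concrete, here is a toy sketch in Python; the utility curves and dollar amounts are invented for illustration and are not anyone’s actual estimates. With diminishing returns, the “million dollars” figure is simply the donation size at which the top charity’s marginal effectiveness crosses below the runner-up’s.

```python
# Toy sketch: find the donation size at which the top charity's marginal
# effectiveness drops below the runner-up's. All curves and numbers are made up.

def marginal_utility_a(donated):
    # Diminishing returns: each extra dollar to charity A helps less.
    return 10.0 / (1.0 + donated / 250_000)

def marginal_utility_b(donated):
    return 6.0 / (1.0 + donated / 500_000)

def crossover_donation(step=1_000):
    """Keep giving to charity A until its marginal utility falls below
    charity B's (B receives nothing from us in this sketch)."""
    donated = 0
    while marginal_utility_a(donated) > marginal_utility_b(0):
        donated += step
    return donated

print(crossover_donation())  # ~167,000 with these made-up curves
```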
I can see following that for charities with high-probability results; I would certainly support that with respect to deciding whether to give to an African food charity versus an Asian food charity, for instance. But for something like existential risk, if there are two charities that I believe each have a 1% chance of working and an arbitrarily high, roughly equal payoff, then it seems I should want both invested in. I might pick one and then hope someone else picks the other, but it seems equivalent if not better to just give equal money to both, to hedge my bets.
Okay, I suppose I could actually pay attention to what everybody else is doing, and just give all my money to the underrepresented one until it stops being underrepresented.
This is exactly what I’m having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimates that support and reinforce each other. I’m not sure what you call this in English, but in German I’d call it a castle in the air.
And before you start downvoting this comment and telling me to learn about Solomonoff induction etc., I know that what I’m saying may simply be due to a lack of education. But that’s what I’m arguing about here. And I bet that many who support the SIAI cannot explain the reasoning that led them to support the SIAI in the first place, or at least cannot substantiate their estimates with any kind of evidence other than the coherent internal logic of reciprocally supporting probability estimates.
The figure “a million dollars” doesn’t matter. The reasoning in this particular case is pretty simple. Assuming that you actually care about the future and not your personal self-esteem (the knowledge of having personally contributed to a good outcome), there is no reason why putting all your personal eggs in one basket should matter at all. You wouldn’t want humanity to put all its eggs in one basket, but the only way that would change is if you were the only person putting eggs into a particular basket. There may be a particular distribution of eggs that is optimal, but unless you think the distribution of everyone else’s eggs is already optimal, you shouldn’t distribute your personal eggs the same way; you should put them in the basket that is most underrepresented (measured by marginal utility, not by the ratio of actual allocation to theoretically optimal allocation or any such nonsense), so as to move humanity’s overall allocation closer to optimal. Unless, that is, you have so many eggs that the most underrepresented basket stops being the most underrepresented (= “million dollars”).
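As a minimal sketch of that egg-allocation argument (again with entirely made-up funding levels and utility curves): a donor who cares only about the overall outcome puts each marginal dollar wherever the current marginal utility is highest, given everyone else’s allocations, so a small donor’s budget ends up concentrated in the single most underrepresented basket.

```python
# Toy sketch of the "most underrepresented basket" argument.
# Existing funding levels and utility curves are invented for illustration.

def marginal_utility(existing_funding, scale):
    # Diminishing returns: marginal utility falls as total funding grows.
    return 1.0 / (1.0 + existing_funding / scale)

# (existing funding from everyone else, diminishing-returns scale)
baskets = {
    "charity_A": {"funding": 2_000_000, "scale": 1_000_000},
    "charity_B": {"funding": 100_000, "scale": 1_000_000},   # underrepresented
}

def allocate(my_budget, step=10_000):
    """Greedily give each chunk of money to the basket whose *current*
    marginal utility is highest, updating total funding as we go."""
    my_gifts = {name: 0 for name in baskets}
    for _ in range(my_budget // step):
        best = max(baskets, key=lambda n: marginal_utility(
            baskets[n]["funding"], baskets[n]["scale"]))
        baskets[best]["funding"] += step
        my_gifts[best] += step
    return my_gifts

# A small donor's entire budget lands in the underrepresented basket.
print(allocate(50_000))  # {'charity_A': 0, 'charity_B': 50000}
```

With these made-up numbers, money would only start spilling over into charity_A after roughly another 1.9 million had gone to charity_B; that spill-over point is the role the “million dollars” threshold plays in the comment above.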
This might be sound reasoning. In this particular case you’ve made up a number and more or less based it on some idea of optimal egg allocation. That is all very well, but it was not exactly what I meant by using that phrase or by the comment you replied to, and it wasn’t my original intention when replying to EY.
I can follow much of the reasoning and arguments on this site. But I’m currently unable to judge how much credence they deserve overall. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?
I’m concerned that the LW community, however consistently, is updating on fictional evidence. My questions in the original comment were meant to probe the basic principles, the foundation upon which the otherwise sound argumentation is built. That is, are you creating models to treat subsequent models, or are the propositions based on fact?
An example here is the treatment and use of MWI, and the conclusions, arguments and further estimates based on it. No doubt MWI is the only consistent non-magical interpretation of quantum mechanics. But that’s all it is: an interpretation. A logically consistent deduction. Or should I rather call it an induction, since the inference seems to be of greater generality than the premises, at least within the LW community? But that’s beside the point. The problem here is that such conclusions are widely taken as evidence, however weak, on which to base further speculations and estimates.
What I’m trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supporting argumentation on such premises, ideas that are themselves not based on firm ground.
Now you might argue that it’s all about probability estimates. Someone else might argue that reports from the past do NOT need to provide evidence about what will occur in the future. But the gist of the matter is that a coherent and consistent framework of sound argumentation built on unsupported inference is nothing more than its description implies: it is fiction. Scientific evidence at least provides hints. Imagination allows for endless possibilities, while scientific evidence at least hints at what might be possible and what impossible. Science only has to provide the ability to assess your data. The experience of its realization is a justification that carries a hint. Any hint that empirical criticism provides gives you new information to build on; not because it bears truth value, but because it gives you an idea of where you want to go, an opportunity to try something. There is that which apparently fails or contradicts itself, and that which seems to work and is consistent.
And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic or something sufficiently grounded in empirical criticism to firmly substantiate the strong calls for action proclaimed on this site.
I cannot fault this reasoning. From everything I have read in your comments, this seems to be the right conclusion for you to draw given what you know. Taking the word of a somewhat non-mainstream community would be intellectually reckless. For my part, there are some claims on LW that I do not feel capable of reaching a strong conclusion on, even accounting for respect for expert opinions.
Now I’m curious: here you have referred to “LW” thinking in general, while we can obviously also consider LW conclusions on specific topics. Of all the positions that LW has a consensus on (and that are not nearly universally accepted by all educated people), are there any particular topics you are confident in either confirming or denying? For example, “cryonics is worth a shot” seems far easier to judge than conclusions about quantum mechanics and decision theory.