I have been convinced of this premise and now I would like to devote my life to that cause.
Beliefs are probabilistic. If your beliefs are strong enough that you want to work for one of the related causes championed by the slightly different ideologies of this group, that can be a perfectly fine thing. You don’t have to pretend your convictions are stronger than they are, and you won’t be deemed irrational for working hard despite a lack of certainty.
Recall:
“Many psychological experiments were conducted in the late 1950s and early 1960s in which subjects were asked to predict the outcome of an event that had a random component but yet had base-rate predictability—for example, subjects were asked to predict whether the next card the experimenter turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random.
In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate.
What subjects tended to do instead, however, was match probabilities—that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate, because the subjects are correct 70% of the time when the blue card occurs (which happens with probability .70) and 30% of the time when the red card occurs (which happens with probability .30); (.70 × .70) + (.30 × .30) = .58...
Even if subjects think they’ve come up with a hypothesis, they don’t have to actually bet on that prediction in order to test their hypothesis. They can say, “Now if this hypothesis is correct, the next card will be red”—and then just bet on blue...
It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.
It is a counterintuitive idea that the optimal strategy is to behave lawfully, even in an environment that has random elements...
In the dilemma of the blue and red cards, our partial knowledge tells us—on each and every round—that the best bet is blue. This advice of our partial knowledge is the same on each and every round. If 30% of the time we go against our partial knowledge and bet on red instead, then we will do worse thereby—because now we’re being outright stupid, betting on what we know is the less probable outcome.
If you bet on red every round, you would do as badly as you could possibly do; you would be 100% stupid. If you bet on red 30% of the time, faced with 30% red cards, then you’re making yourself 30% stupid. Given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.
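To make the arithmetic in the quote concrete, here is a small simulation—my own sketch, not part of the quoted discussion—comparing the two strategies over many random draws, using the 70/30 numbers from the quote:

```python
import random

random.seed(0)
TRIALS = 100_000
P_BLUE = 0.7

always_blue_hits = 0
matching_hits = 0

for _ in range(TRIALS):
    # Each card is independently blue with probability 0.7.
    card = "blue" if random.random() < P_BLUE else "red"

    # Strategy 1: always bet on the more common color.
    if card == "blue":
        always_blue_hits += 1

    # Strategy 2: probability matching -- bet blue 70% of the time,
    # red 30% of the time, as the subjects in the experiments did.
    guess = "blue" if random.random() < P_BLUE else "red"
    if guess == card:
        matching_hits += 1

print(f"Always bet blue:      {always_blue_hits / TRIALS:.3f}")  # ~0.700
print(f"Probability matching: {matching_hits / TRIALS:.3f}")     # ~0.580
```

Over 100,000 trials the empirical success rates come out near .70 and .58, matching the quoted calculation.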
Yes, of course I’m very well aware that beliefs are probabilistic in nature and that FAI is no exception. “Conviction” wasn’t meant to imply absolute certainty on my part, but merely the recognition that creating a FAI seems very likely achievable in principle (although I’m much less certain about the time-frame), and that a FAI is a far more efficient way to get people what they need and want than a million independent half-assed fixing attempts, one for each problem we face, by not-so-clever human intelligences, myself included.
How your quote relates to what you wrote is rather misty to me. I guess you’re assuming more knowledge of the different viewpoints in the LW community on my part than I actually have right now. I interpreted what you were saying as: there are different viewpoints on whether neuroscience is useful for FAI, but most people here think it’s not → so by analogy I’m following the strategy of guessing red cards when blue ones are much more common. Yet you yourself wouldn’t call that strategy a waste of time, and there’s still a lower but reasonable chance that neuroscience could actually contribute to FAI after all?
Is that what you meant to imply? If not, then I have no clue how the quote relates to what you wrote.
OK, I can’t think of a clear way to say this, so bear with me. There is something logically true that runs in two directions, as modus ponens/modus tollens do, and I will first describe an inverted version of my argument. That argument would apply to someone whose thinking is the inverse of yours. Then I will explain why I have never in fact directed it at such people. As the last step, I will flip the argument around so that it no longer has implications for the person in the inverse of your position, but instead has implications for you: implications opposite to those it had for the hypothetical inversely situated person.
If you are unsure about what is best, there is still a best action, and it will very often resemble strenuous effort towards a single goal. Two related but distinct concepts leading to this conclusion are the expected value of information and the principle that “… the optimal strategy is to behave lawfully, even in an environment that has random elements.”
So we see that, even at less than infinite certainty that something ought to be done, it may still be that the thing ought to be done with full focus.
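To illustrate this with a toy model of my own (the numbers deliberately echo the card example; the linear-payoff setup is an assumption, not anything from the discussion above): suppose cause A is the right one to work on with probability .7 and cause B with probability .3, effort pays off only when spent on the right cause, and payoff is linear in effort. Then, exactly as with the cards, splitting your effort 70/30 is dominated by going all-in on A:

```python
# Toy model: work hard under uncertainty. Cause A is the right one to
# work on with probability 0.7, cause B with probability 0.3; effort
# pays off only if spent on the right cause, linearly in effort.
P_A_RIGHT = 0.7
P_B_RIGHT = 1 - P_A_RIGHT

def expected_payoff(effort_on_a: float) -> float:
    """Expected payoff when one unit of effort is split between A and B."""
    return P_A_RIGHT * effort_on_a + P_B_RIGHT * (1 - effort_on_a)

print(expected_payoff(1.0))  # full focus on A:        ~0.70
print(expected_payoff(0.7))  # "probability matching": ~0.58
print(expected_payoff(0.0))  # full focus on B:        ~0.30
```

The caveat is the linearity assumption: with strongly diminishing returns, hedging across causes can beat full focus, but the qualitative point about lawful behavior under uncertainty stands.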
This is a true principle, but it can lead to cultishness. It is a bad idea to present this true argument to people, because once people act with total effort for a cause, they tend to subconsciously violate the principle of behaving lawfully: they conclude that their total effort is reasonable only if they believe totally, then tell themselves that their actions are justified, and consequently that their total belief is justified.
This would be bad. Cultishness is a characteristic failure of human groups, so the argument that people should support the group beyond what they intuitively think is right would lead to net negative consequences.
It is also true that internally motivated actions are generally more fruitful than externally motivated ones, so I would let people decide to give their time and money at their own pace.
I deploy this line of argument only because you have already said that you have decided you are motivated to work hard for certain things. Assuming you decided at the optimal level of belief, that level of belief isn’t so high, so you shouldn’t feel threatened by doubts, obligated to pretend to near certainty, or drawn into similar cultish behaviors.
Just as the uncertain should commit themselves to working hard—though one shouldn’t say so, lest cultishness increase—those working hard should remember that an epistemic state under which their actions are perfectly justified is one of considerable doubt and uncertainty.
So say, if you will, that until the evidence militates otherwise, you have devoted your life to a cause. That carries with it no obligation, at any level, to say that you have been “convinced” of any premise to an extreme degree.