Yes, of course I’m very well aware that beliefs are probabilistic in nature and that FAI is no exception. “Conviction” didn’t mean to imply absolute certainty on my part, but merely the recognition that creating a FAI seems very likely achievable in principle (although I’m much less certain about the time-frame), and that a FAI is a far more efficient way to get people what they need and want than a million independent half-assed attempts to fix each and every problem we face by not-so-clever human intelligences, myself included.
How your quote relates to what you wrote is rather misty to me. I guess you’re assuming more knowledge of the different viewpoints in the LW community on my part than I actually have right now. I interpreted what you were saying as: there are different viewpoints on whether neuroscience is useful for FAI, but most people here think it’s not → so by analogy I’m following the strategy of continuing to guess red cards instead of blue ones (which are much more common) - yet you yourself wouldn’t say that strategy is a waste of time, and there’s still a lower yet reasonable chance that neuroscience could actually contribute to FAI after all?
Is that what you meant to imply? If not, then I have no clue how the quote relates to what you wrote.
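(In case it helps pin down the card analogy, here is a quick sketch of the arithmetic behind it. The 70/30 blue/red split is my own illustrative assumption, not a figure from the thread.)

```python
import random

# Assumed 70/30 blue/red deck, purely for illustration.
P_BLUE = 0.7
TRIALS = 100_000

random.seed(0)
cards = ["blue" if random.random() < P_BLUE else "red" for _ in range(TRIALS)]

# Strategy 1: always guess the majority color ("behave lawfully").
always_blue = sum(card == "blue" for card in cards) / TRIALS

# Strategy 2: probability matching, i.e. guess blue 70% of the time
# and red 30% of the time, mirroring the deck's frequencies.
matching = sum(
    card == ("blue" if random.random() < P_BLUE else "red") for card in cards
) / TRIALS

print(f"always guess blue:    {always_blue:.3f}")  # ~0.70
print(f"probability matching: {matching:.3f}")     # ~0.7*0.7 + 0.3*0.3 = 0.58
```

The point being that mixing in red guesses is strictly worse in expectation, even though red cards do keep coming up.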
OK, I can’t think of a clear way to say this, so bear with me. There is something logically true that cuts in two directions, the way modus ponens/modus tollens do, and I will first describe an inverted version of my argument. That version would apply to someone whose thinking is the inverse of yours. Then I will explain why I have never in fact directed that argument at such people. As the last step, I will flip the argument around so that it no longer has implications for the person in the inverse of your position, but instead has implications for you, opposite to those it had for the hypothetical inversely situated person.
If you are unsure about what is best, there is still a best action, and it will very often resemble strenuous effort towards a single goal. Two related but distinct concepts leading to this conclusion are the expected Value of Information and the principle that “… the optimal strategy is to behave lawfully, even in an environment that has random elements.”
So we see that at less than infinite certainty that something ought to be done, it may still be that it ought to be done with full focus.
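(A toy calculation, with numbers I made up purely for illustration, of how full focus can be optimal at well under total certainty: with linear payoffs, the expected-value-maximizing allocation puts all effort on the likelier cause even at 60% confidence.)

```python
# Toy model: one unit of effort split between cause A and cause B.
# The credence and payoff below are illustrative assumptions, nothing more.
p_a_right = 0.6          # credence that cause A is the one that matters
payoff_per_effort = 100  # value per unit of effort spent on the right cause

def expected_value(effort_on_a: float) -> float:
    """Expected value of splitting one unit of effort between A and B."""
    effort_on_b = 1.0 - effort_on_a
    return (p_a_right * payoff_per_effort * effort_on_a
            + (1.0 - p_a_right) * payoff_per_effort * effort_on_b)

for split in (0.0, 0.5, 1.0):
    print(f"effort on A = {split:.1f} -> EV = {expected_value(split):.1f}")
# effort on A = 0.0 -> EV = 40.0
# effort on A = 0.5 -> EV = 50.0
# effort on A = 1.0 -> EV = 60.0
```

So a 60% credence already licenses 100% effort; nothing about the optimal action requires pretending to more than 60% belief.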
This is a true principle, but it can lead to cultishness. It is a bad idea to present this true argument to people, because once people act with total effort for a cause, they tend to subconsciously violate the principle of behaving lawfully: they conclude that their total effort is reasonable only if they believe totally, then tell themselves that their actions are justified, and consequently that they have justified total belief.
This would be bad. Cultishness is a characteristic of human groups, so an argument that people should support a group beyond what they intuitively think is warranted would tend to produce net negative consequences.
It is also true that actions from internal motivations are generally more fruitful than externally motivated ones, so I would let people decide to give their time and money at their own pace.
I deploy this line of argument because you have already said that you have decided you are motivated to work hard for certain things. Assuming you decided at the optimal level of belief, that level of belief isn’t so high, and so you shouldn’t feel threatened by doubts, obligated to pretend to near certainty, or drawn into similar cultish behaviors.
Just as the uncertain should commit themselves to working hard—though one shouldn’t say so, lest cultishness increase—those working hard should remember that an epistemic state under which their actions are perfectly justified is one of considerable doubt and uncertainty.
So say, if you will, that until the evidence militates otherwise, you have devoted your life to a cause. That carries with it no obligation, at any level, to say that you have been “convinced” of any premise to an extreme degree.