Where would you place the typical MIRI donor here?
As long as we all agree on all logical consequences of a reasonable set of beliefs that make bridges stay up and planes fly, so to speak, I am not sure it is useful or polite to insist on anything else.
MIRI’s mission to build an FAI is a good way to think about this. Given a singleton, an all-powerful machine dictator, would you want it to be like any of the people you described? If some of those people would be better leaders than others, then why wouldn’t you, to a lesser extent, insist on them becoming more like someone whom you would readily empower to rule you?
Personally, I wouldn’t feel comfortable entrusting unlimited power to any of the people you describe. Nor would I trust any MIRI staff, or myself. All seem flawed in more or less subtle ways.
Regarding logical consequences, concepts such as acausal trade might very well be logical consequences of a reasonable set of beliefs that make bridges stay up, and planes fly. Yet what makes LessWrong partly awful is that all logical consequences are taken seriously. I do insist on somehow discounting these consequences, because it is unworkable, and dangerously distracting, to worry about possibilities such as a simulation shutdown. In other words, I wouldn’t trust an FAI that would give money to a Pascalian mugger, or even one that took basilisks seriously.
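For concreteness, here is a minimal sketch of the expected-value arithmetic that makes a Pascalian mugger dangerous to a naive expected-utility maximizer; the probability, payoff, and cost below are invented purely for illustration, not anyone’s actual estimates.

```python
# Invented numbers, purely to show the shape of the problem.
p_threat_is_real = 1e-20      # vanishingly small credence that the mugger's story is true
promised_payoff  = 1e30       # astronomically large utility the mugger promises to deliver
cost_of_paying   = 5          # what the mugger demands up front

expected_value_of_paying = p_threat_is_real * promised_payoff - cost_of_paying
print(expected_value_of_paying)   # ~1e10 > 0, so a naive maximizer hands over the money
```

However small the credence, the mugger can always name a payoff large enough to dominate it, which is the sense in which worrying about every such possibility becomes unworkable without some form of discounting.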
I think you are going on a tangent. We are talking about beliefs, not values. I think we can all generally agree on a reasonable set of things we all think are bad, and we should insist people agree to respect those things. But why should we shun Will or Albert if they have a reasonable ethical system?
Regarding logical consequences, concepts such as acausal trade might very well be logical consequences of a reasonable set of beliefs that make bridges stay up, and planes fly.
Sorry, but no. In order for acausal trade, basilisks, etc. to logically follow from the “reasonable set of things describing modern empirical science + math”, it would have to be the case that every model (in the model-theoretic sense, that is, a universe we construct) consistent with the latter also contains the former. That just isn’t so.
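To make “logically follows” versus “is consistent with” concrete, here is a toy propositional sketch; the atoms S and B are invented purely for illustration.

```python
from itertools import product

# Toy illustration: treat a "model" as a truth assignment over two invented atoms.
#   S = "the reasonable set of beliefs (science + math) holds"
#   B = "basilisk / acausal-trade scenarios are real"
assignments = [dict(zip(("S", "B"), vals)) for vals in product((True, False), repeat=2)]
models_of_S = [m for m in assignments if m["S"]]   # the models of the theory {S}

entailed   = all(m["B"] for m in models_of_S)      # does B hold in EVERY model of {S}?
consistent = any(m["B"] for m in models_of_S)      # does B hold in SOME model of {S}?

print(entailed)    # False: B does not logically follow from {S}
print(consistent)  # True:  B is merely consistent with {S}
```

Entailment requires B to hold in every model of the theory; consistency only requires that it hold in some model, which is a far weaker condition.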
We should take seriously all the logical consequences we can compute from the things we know. The entire trouble with basilisks et al. is precisely that they don’t logically follow, but are taken seriously anyway. Concentrating on only one untestable possibility out of a great many is precisely what my call for tolerance of views on untestable things is meant to combat. A culture that agrees only on what we can test, and lets your mind wander about other matters, will be resistant to things like basilisks, simply because most members of such a culture will believe something else and give you other convincing possibilities (and you will be unable to choose, since they are all untestable anyway).
I think we can all generally agree on a reasonable set of things we all think are bad, and we should insist people agree to respect those things. But why should we shun Will or Albert if they have a reasonable ethical system?
[...]
We should take seriously all the logical consequences we can compute from the things we know. The entire trouble with basilisks et al. is precisely that they don’t logically follow, but are taken seriously anyway.
I am not sure I understand you here. Should we shun people who believe that the most probable model consistent with “a reasonable set of things describing modern empirical science + math” contains basilisks etc.? Or should we respect them, and be content with the possibility that their worldview might spread, and eventually dominate a certain influential subset of humanity?
What reasonable ethical system do you have in mind which could prevent people from taking dangerous actions if they believe Pascal’s mugging, or basilisks, to be a logical consequence that is to be taken seriously?
A culture that agrees only on what we can test, and lets your mind wander about other matters, will be resistant to things like basilisks, simply because most members of such a culture will believe something else and give you other convincing possibilities (and you will be unable to choose, since they are all untestable anyway).
Suppose there exists a highly effective model, which contains basilisks, but which is consistent with “a reasonable set of things describing modern empirical science + math”. What if this diverse culture were threatened by the propagation of this model?
Or should we respect them, and be content with the possibility that their worldview might spread, and eventually dominate a certain influential subset of humanity?
What if this diverse culture were threatened by the propagation of this model?
“Consistent” is a much lower bar to meet than “logically must follow.” Jehovah and your green alien Bob are also consistent. Sensible religions are generally consistent.
I call for the spread of the culture of tolerance rather than the culture of religious war. History shows that the culture of tolerance will serve your goals better here. You can always find a bogeyman as an excuse to knock heads, be it Scientology, Wahhabi Islam, Communism, or whatever. But will that help you?
I think we can all generally agree on a reasonable set of things we all think are bad, and we should insist people agree to respect those things.
We can? That certainly doesn’t seem to be so.
Also, can you step back a hundred years or so and repeat that? :-)