Ah, I see. You were trying to defend two contradictory positions, and I did not notice when you switched between them. (This is one reason why I find it’s often a bad idea to try to defend an idea that you have abandoned, by the way; it leads to confusion.)
How could someone forget such simple principles as “love one another” in their pursuit of more knowledge?
That is actually quite possible. Step one is that a person seeks more knowledge, and finds it. That’s fine, so far. Step two is that the person realises that they are a lot more knowledgeable than anyone else; that’s fine as well, but it can be like standing on the edge of a cliff. Step three is that the person becomes arrogant. They see most other people as a distraction, as sort of sub-human. This is where things start to go wrong. Step four is when the person decides that he knows, better than they do, what the best thing for everyone else to do is. And if they won’t do it, then he’ll make them do it.
Before long, you could very well have a person who, while he admits that it’s important to love your fellow-man in theory, in practice thinks that the best thing to do is to start the Spanish Inquisition. The fact that the Spanish Inquisition ever existed, started by people who professed “love one another” as a core tenet of their faith, shows that this can happen...
Those are good examples. Though I guess whether this is possible depends on your definition of “forget”. Speaking of the Spanish Inquisition, I am of the opinion that the Inquisitors did not forget their core tenets but that further knowledge (however flawed) gave them new means to interpret the original tenets. You could suggest that this re-interpretation was exactly what Jesus wanted to keep people from doing, of course. The question I ask Christians, then, is “What knowledge is acceptable and how should it be attained when God doesn’t encourage the utilization of all knowledge?” This would certainly be an important question for theists to answer, and may be relatively simple. I can already guess a few possible answers.
Though I guess whether this is possible depends on your definition of “forget”.
I’m assuming “to act as though ignorant of the principle in question”.
The question I ask Christians, then, is “What knowledge is acceptable and how should it be attained when God doesn’t encourage the utilization of all knowledge?”
I don’t think it’s the knowledge that’s dangerous, in itself. I think it’s the arrogance. Or the sophisticated argument that starts with principles X and Y and leads to actions that directly contradict principle X.
For example, consider the following principles:
Principle 1: Love thy neighbour as thyself.
Principle 2: Anyone who does not profess the faith will be tortured terribly in Hell after death, beyond anything mortals can do.
That’s enough to lead to the Inquisition, by this route:
Looking at Principle 2, I do not wish myself, or those that I love, to enter Hell. Considering Principle 1, I must try to save everyone from that fate, by any means possible. I must therefore attempt to convert everyone to the faith.
(Consideration of various means snipped for brevity)
Yet there may be some people who refuse to convert, even in the face of all these arguments. In such a case, would torture be acceptable? If a person who is not tortured does not repent, then he is doomed to what is worse than a mere few months, even a mere few years of torture; he is doomed to an eternity of torture. If a person is tortured into repentance, then he is saved an eternity of torture—a net gain for the victim. If he is tortured and does not repent, then he experiences an eternity of torture in any case—in that case, he is at least no worse off. So a tortured victim is at worst no worse off, and at best a good deal better off, than a man who does not repent. However, care must be taken to ensure that the victim does not die under torture before he has repented.
Better yet, the mere rumour of torture may lead some to repent more swiftly. Thus, judicious use of torture becomes a moral imperative.
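To make the comparison explicit, here is a minimal sketch of the payoff reasoning exactly as stated above. It is only a toy Python model; the numbers and names are placeholders of my own, and only their ordering matters.

```python
# Toy model of the victim-only comparison made in the argument above.
# The numbers are arbitrary placeholders; only their ordering matters.
ETERNAL_TORMENT = float("-inf")   # "an eternity of torture"
FINITE_TORTURE = -1_000           # "a mere few months, even a mere few years"
SAVED = 0                         # repentance averts Hell

def victim_outcome(tortured: bool, repents: bool) -> float:
    """Payoff to the victim under the argument's own assumptions."""
    earthly = FINITE_TORTURE if tortured else 0
    eternal = SAVED if repents else ETERNAL_TORMENT
    return earthly + eternal

print(victim_outcome(tortured=True, repents=True))    # -1000: finite loss, eternity saved
print(victim_outcome(tortured=True, repents=False))   # -inf: no worse than the case below
print(victim_outcome(tortured=False, repents=False))  # -inf: doomed anyway
```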
(As an exercise, incidentally, can you spot the flaw in that chain of reasoning?)
And then you have the Inquisitors, and fear and terror and sharp knives in dark rooms...
Step four is when the person decides that he knows, better than they do, what the best thing for everyone else to do is. And if they won’t do it, then he’ll make them do it.
It’s worth noting that if the person successfully “found knowledge”, they are in fact correct (unless it was irrelevant knowledge, I guess.)
Historical evidence suggests that people quite often get to step four before correctly finding knowledge. The Spanish Inquisition is a shining example. Or communism: as originally conceived, it was supposed to be a utopian paradise where everyone does what work is necessary, and enjoys fair benefits therefrom.
I suspect that a common failure mode is that one fails to take into account that many people are doing that which they are doing because they are quite happy to do it. They’ve smoothed out any sharp corners in their lifestyle that they could manage to smooth out, and see little benefit in changing to a new lifestyle, with new and unexpected sharp corners that will need smoothing.
I would therefore recommend being very, very cautious about assuming that one has successfully found sufficient knowledge.
I agree there’s a common failure mode here—I’d be inclined to say it’s simple overconfidence, and maybe overestimating your rationality relative to everyone else.
Still, if they’re successful...
Even then, I’d most likely object to their attempts to dictate the actions of others; because of the common failure mode, my heuristic is to assign a very strong prior to the hypothesis that they are unsuccessful. Also, trying to use force has some fairly substantial negative effects; any positive effects of their proposed behaviour change would have to be significant to overcome that.
However, if they are willing to try to change the actions of others through simple persuasion without resorting to force, then I would not object. And if their proposed course of action is significantly better, then I would expect persuasion to work in at least some cases; and then these cases can be used as evidence for the proposed course of action working.
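To put rough numbers on that heuristic, here is a sketch in Python of how successful cases of persuasion could gradually outweigh a strong prior that the would-be reformer has not actually found the relevant knowledge. Every number here is made up purely for illustration.

```python
# Toy illustration with assumed numbers: start with a strong prior that the
# reformer has NOT found the relevant knowledge, then update on each case
# where persuasion alone worked.
prior = 0.05   # assumed prior probability that they really found knowledge
lr = 3.0       # assumed likelihood ratio: persuasion succeeds 3x as often if they are right

def update(p: float, likelihood_ratio: float) -> float:
    """One Bayesian update, expressed in odds form."""
    odds = p / (1 - p)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = prior
for case in range(1, 6):
    p = update(p, lr)
    print(f"after persuaded case {case}: P(they found knowledge) = {p:.2f}")
# With these made-up numbers, five successes move the estimate from 0.05 to about 0.93.
```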
To be fair, we may have different interventions in mind here. I would also expect someone who genuinely found knowledge to use “soft force”, but maybe that’s just wishful thinking.
However, if forcing people to do things really helps, I’m all for intervention. Addicts, for example.
I was thinking armies, secret police, so on and so forth, forcing an entire country to one’s will.
Hmmm. I hadn’t thought of addicts. You make a good point.
I think I might need to re-evaluate my heuristics on this point.
It’s worth noting that if the person successfully “found knowledge”, they are in fact correct (unless it was irrelevant knowledge, I guess.)
This can never be put into practice. A person can try to find knowledge, but there is nothing they can do to determine whether they have successfully found knowledge—any such attempts collapse into part of trying to find knowledge. There is no way of getting to a meta-level from which you can judge whether your efforts bore fruit. The ladder has no rungs.
raises eyebrows
You’re saying it’s impossible for any evidence to change your estimate of whether something will help people?
No, just that while you can try harder to find knowledge, there isn’t a separate meta-level at which seeing whether you really have knowledge is a different activity.
If you can receive information that provides strong Bayesian evidence that your belief is true, how is there “nothing they can do to determine whether they have successfully found knowledge”?
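For concreteness, here is a minimal worked example of what a single strong piece of Bayesian evidence does to a belief. The numbers are made up purely for illustration.

```python
# Toy Bayesian update: how much one strong observation can shift a belief.
# All numbers here are assumptions chosen purely for illustration.
prior = 0.5             # P(hypothesis) before seeing the evidence
p_e_given_h = 0.9       # P(evidence | hypothesis true)
p_e_given_not_h = 0.05  # P(evidence | hypothesis false)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(round(posterior, 3))  # ~0.947: one strong observation moves 0.5 to about 0.95
```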