Consequentialism is morally correct, but virtue ethics is what’s most effective, and deontology is what the virtuous person would use.
Consequentialism is right because it isn't really a claim about morality: as a description, people do things for the sake of consequences. (But it might also be wrong as a description where people don't do things for a reason at all, as with habit.)
B: Why do you play chess?
A: To have fun. And to beat you.
If this were true, then simple belief in consequentialism would imply reflective belief in virtue ethics.
Truth aside, there are issues with the implication part. Will people reach the conclusion? There are a lot of math problems whose answers are consequences of the properties of numbers. Does that mean you'll know the answer at some point before you die? You might be able to pick out a given one and find the answer before you die, if you take the time to solve it. Ethics, though, doesn't seem to have the same guarantees, especially not around the correctness of general theories.
However, you can justifiably trust a probability distribution whose description includes running an accurate prime factorization algorithm.
That’s not a probability distribution, that’s a flowchart that terminates in “Yes” and “No”.
> Truth aside, there are issues with the implication part. Will people reach the conclusion? [...]
This is part of why there could be a lot of different formalizations of the simple/reflective distinction. Do you require only that an argument exists, or do you require that the agent recognizes the argument, or something in-between? (Example of a useful in-between definition: we require only that the argument would be recognized if it were made—this is useful from the perspective of someone who cares mostly about whether an agent can be convinced.)
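To make that menu of definitions concrete, here is a toy sketch; every name and predicate below is my own illustrative assumption, not anything defined in the thread:

```python
# Three candidate formalizations of "reflectively believes".
# All names here are illustrative assumptions, not from the thread.

from dataclasses import dataclass, field

# Stand-in registry: which arguments exist for which claims.
ARGUMENTS_FOR = {"the claim": {"the argument"}}

@dataclass
class Agent:
    recognized: set = field(default_factory=set)       # arguments already met and accepted
    would_recognize: set = field(default_factory=set)  # arguments it would accept if shown

def reflective_weak(agent: Agent, claim: str) -> bool:
    """An argument merely exists, whether or not the agent ever meets it."""
    return bool(ARGUMENTS_FOR.get(claim))

def reflective_strong(agent: Agent, claim: str) -> bool:
    """The agent has actually recognized some argument for the claim."""
    return bool(ARGUMENTS_FOR.get(claim, set()) & agent.recognized)

def reflective_mid(agent: Agent, claim: str) -> bool:
    """In between: the agent would recognize the argument if it were made.
    The useful notion if you mostly care whether the agent can be convinced."""
    return bool(ARGUMENTS_FOR.get(claim, set()) & agent.would_recognize)

a = Agent(would_recognize={"the argument"})
print(reflective_weak(a, "the claim"))    # True: an argument exists
print(reflective_strong(a, "the claim"))  # False: never actually encountered
print(reflective_mid(a, "the claim"))     # True: would accept it if shown
```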
> That’s not a probability distribution, that’s a flowchart that terminates in “Yes” and “No”.
Properly, I should have distinguished between probability distributions and descriptions of probability distributions. But the point stands: an agent can prefer to run a program and use its output in place of its own beliefs.
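As a minimal sketch of that point (the factoring routine and the agent's prior are my illustrative assumptions, not anything from the thread): before running anything, the agent's belief that a number is prime is a vague prior; conditional on trusting the program, the distribution collapses to a point mass on the program's output, which is exactly the flowchart that terminates in "Yes" and "No".

```python
# A minimal sketch: an agent that replaces its own vague belief
# with the output of a program it trusts. All names are illustrative.

def trial_division(n: int) -> list:
    """Prime factorization of n >= 2 by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def unaided_belief_is_prime(n: int) -> float:
    """Before running anything, the agent is simply unsure."""
    return 0.5

def reflective_belief_is_prime(n: int) -> float:
    """Defer to the algorithm. Conditional on trusting it, the
    'distribution' degenerates to a point mass on its output:
    the flowchart that terminates in Yes (1.0) or No (0.0)."""
    return 1.0 if trial_division(n) == [n] else 0.0

print(unaided_belief_is_prime(91))     # 0.5, the vague prior
print(reflective_belief_is_prime(91))  # 0.0, since 91 = 7 * 13
```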
> Do you require only that an argument exists, or do you require that the agent recognizes the argument, or something in-between?
The second one, I think. The epiphany is sometimes characterized by frustration: 'why didn't I think of that sooner?'
The optimal chess game (assuming it's unique) might proceed from the rules, but we might never know it. Even if I have the algorithm (say, in pseudocode):
If I don't have it in code, I might not run it.
If I have it in code but don't have the compute (or sufficiently efficient techniques), I might not find out what happens when I run it for long enough.
If I have the code and the compute, then it's just a matter of running it.* But do I get around to it? (A sketch of the algorithm half of this follows.)
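To show how little of the difficulty lives in the algorithm itself, here is a hedged sketch: exhaustive game-tree search (negamax), with tic-tac-toe standing in for chess as my own substitution. The chess version has the same shape, but its tree is far too large to ever run to completion, so having the code is not the same as ever knowing the optimal game.

```python
# Exhaustive game-tree search (negamax) computing the value of optimal
# play. Tic-tac-toe is a stand-in for chess: same algorithm, tractable tree.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board: str) -> str:
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return ""

@lru_cache(maxsize=None)
def negamax(board: str, player: str) -> int:
    """Value of the position for the side to move: +1 win, 0 draw, -1 loss."""
    opponent = "O" if player == "X" else "X"
    if winner(board) == opponent:
        return -1                  # the previous move already won
    if "." not in board:
        return 0                   # full board, no winner: draw
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i + 1:]
            best = max(best, -negamax(child, opponent))
    return best

# Actually running it settles the question: tic-tac-toe is a draw.
print(negamax("." * 9, "X"))  # 0
```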
Understanding implication isn't usually as simple as I made it out to be above. People can work hard on a problem and not find the answer, for a lot of reasons, even if they have everything they need to know to solve it: they also have a lot of other information, and before they have the answer, they don't know what is, and what isn't, relevant.
In other words, where implication is trivial and fast, reflection may be trivial and fast. If not...
The proof I never find does not move me.
*After getting the right version of the programming language downloaded and working properly, just to do this one thing.