Newton’s theory of gravity has flaws, but it’s still a good idea and can be used in plenty of cases.
The amount of goodness approach has no objective way to determine the sizes of the amounts, so it leads to subjective bias instead of objective knowledge, and it creates unresolvable disagreements between people.
There’s nothing bad about two people with different priors coming to different conclusions. It creates an intellectual climate where a lot of different ideas get explored. Most breakthrough ideas have plenty of flaws at their birth and need to go through a lot of refinement to become valuable.
All solutions are equal because they all solve the problem.
If my problem is that I want to have a successful job interview, then I don’t have a binary outcome. I want to get the job earning as much money as possible, and modeling the salary with a scalar makes much more sense than making binary judgments.
Furthermore, anytime I want to maximize the probability of an outcome, I also care about a scalar. Why do you think that probabilities shouldn’t be central in epistemology?
Newton’s theory of gravity has flaws, but it’s still a good idea and can be used in plenty of cases.
No, it can’t. It can only be used in situations where it happens to agree with reality. That’s not the same as the theory being correct.
The amount of goodness approach has no objective way to determine the sizes of the amounts, so it leads to subjective bias instead of objective knowledge, and it creates unresolvable disagreements between people.
There’s nothing bad about two people with different priors coming to different conclusions. It creates an intellectual climate where a lot of different ideas get explored. Most breakthrough ideas have plenty of flaws at their birth and need to go through a lot of refinement to become valuable.
You have misunderstood the problem. The problem is not that people come to different conclusions. Rather, the problem is that people are assigning scores to ideas completely arbitrarily. Since there is no objective reality underlying their scoring, there is no rational way for any two people to come to agreement on scores.
All solutions are equal because they all solve the problem.
If my problem is that I want to have a successful job interview, then I don’t have a binary outcome. I want to get the job earning as much money as possible, and modeling the salary with a scalar makes much more sense than making binary judgments.
Making a judgement about whether to take a job is a yes or no judgement. Making a decision about whether to say X during a job interview is a yes or no judgement. That doesn’t prevent you from modelling salary with a scalar. If you judge that you should always take the job that earns you the most money, then if job A pays more than job B, you will say yes to A and no to B.
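To make that concrete, here’s a minimal sketch – the offers, numbers, and function names are hypothetical, just for illustration – of how a scalar salary model still ends in yes/no judgements:

```python
# Minimal sketch (hypothetical offers and decision rule): the salary is
# modelled as a scalar, but the judgement about each offer stays binary.

offers = {"job_a": 95_000, "job_b": 80_000}  # scalar salary model

def decide(offers):
    """Accept the highest-paying offer: a yes/no judgement per job."""
    best = max(offers, key=offers.get)
    return {job: job == best for job in offers}

print(decide(offers))  # {'job_a': True, 'job_b': False}
```

The scalar only feeds the comparison; the output for each offer is still a yes or a no.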
Furthermore, anytime I want to maximize the probability of an outcome, I also care about a scalar. Why do you think that probabilities shouldn’t be central in epistemology?
An idea either solves a problem or it doesn’t.
There is no way to assign probabilities to ideas. Theories such as quantum mechanics assign probabilities to events, e.g. the radioactive decay of an atom. Assigning a probability to a theory makes no sense, since there is no rule for assigning probabilities in the absence of an explanatory theory.
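For contrast, here’s a sketch of a theory assigning a probability to an event rather than to itself; the isotope and half-life (carbon-14, roughly 5730 years) are just a familiar example, not anything from the article:

```python
import math

# Sketch: the decay law P(decay within t) = 1 - exp(-lambda * t),
# with lambda = ln(2) / half_life, gives the probability that a single
# atom has decayed by time t. Carbon-14's ~5730 year half-life is used
# purely as a familiar example.

half_life = 5730.0                       # years
decay_constant = math.log(2) / half_life

def p_decay(t_years):
    """Probability that one atom has decayed within t_years."""
    return 1.0 - math.exp(-decay_constant * t_years)

print(p_decay(5730.0))  # ~0.5: a 50% chance after one half-life
```

The probability attaches to the event (the decay); nothing in the calculation assigns a probability to the theory that produced it.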
Newton’s theory of gravity has flaws, but it’s still a good idea and can be used in plenty of cases.
Is this intended to contradict something in the article?
There’s nothing bad about two people with different priors coming to different conclusions.
People often disagree, and that’s no problem, but if there’s no possible way to agree – if everything is just arbitrary – then you have a problem.
If my problem is that I want to have a successful job interview
That’s not a well-defined problem.
Furthermore, anytime I want to maximize the probability of an outcome, I also care about a scalar. Why do you think that probabilities shouldn’t be central in epistemology?
Maximizing a single metric has a binary outcome: either you did the thing which maximizes it or you didn’t.
This essay proposes an improvement to epistemology:
http://fallibleideas.com/essays/yes-no-argument