Sure, but why will they disagree? If I say there is a 60% chance of x and you say no, it is more like 70%, then I can ask you why you think it's 10% more likely. I know many will say “it's just a feeling” but what gives that feeling? If you ask enough questions, I am confident one can drill down to the reasoning behind the feeling of discomfort at a given estimate. Another benefit of WL is that it should help people get better at recognizing and understanding their subconscious feelings so they can be properly evaluated and corrected. If you do not agree, it would be really interesting to hear your thoughts on this. Thanks
If I say there is a 60% chance of x and you say no, it is more like 70%, then I can ask you why you think it's 10% more likely. I know many will say “it's just a feeling” but what gives that feeling? If you ask enough questions, I am confident one can drill down to the reasoning behind the feeling of discomfort at a given estimate.
Yes, but that is not where the problems stop; it is where they get really bad. Object-level disagreements can perhaps be solved by people who agree on an epistemology. But people aren't in complete agreement about epistemology. And there is no agreed meta-epistemology to solve epistemological disputes; that's done with the same epistemology as before. And that circularity means we should expect people to inhabit isolated, self-sufficient philosophical systems.
benefit of WL is that it should help people get better at recognizing and understanding their subconscious feelings so they can be properly evaluated and corrected.
Corrected by whose definition of correct? Do you not see that you are assuming you will suddenly be able to solve the foundational problems that philosophers have been wrestling with for millennia?
If you do not agree, it would be really interesting to hear your thoughts on this. Thanks
From the correct perspective, it is more extraordinary that anyone agrees.
Correct by whose definition? In a consistent reality that is possible to make sense of, one would expect evolved beings to start coming to the same conclusions.
Corrected by whose definition of correct?
From this question I assume you are getting at our inability to know things and the idea that what is correct for one may not be for another. That is a big discussion, but let me say that I premise this on the idea that a true skeptic realizes we cannot know anything for sure, and that is a great base to start building our knowledge of the world from. That vastly simplifies the world and allows us to build it up again from some very basic axioms. If it is the case that your reality is fundamentally different from mine, we should learn this as we go. Remember that there is actually only one reality—that of the viewers.
Do you not see that you are assuming you will suddenly be able to solve the foundational problems that philosophers have been wrestling with for millennia?
There were many issues wrestled with for millennia that were suddenly solved. Why should this be any different? You could ask me the opposite question, of course, but that attitude is not the one taken by any human who ever discovered something worthwhile. Our chances of success may be tiny, but they are better than zero, which is what they would be if no one tries. Ugh… I feel like I am writing inspirational greeting-card quotes, but the point still stands!
Object-level disagreements can perhaps be solved by people who agree on an epistemology. But people aren't in complete agreement about epistemology. And there is no agreed meta-epistemology to solve epistemological disputes; that's done with the same epistemology as before.
Are there any resources you would recommend for me, as a beginner, to learn about the different views, or better yet, a comparison of all of them?
Correct by whose definition? In a consistent reality that is possible to make sense of, one would expect evolved beings to start coming to the same conclusions.
I wouldn't necessarily expect that, for the reasons given. You have given a contrary opinion, not a counter-argument.
From this question I assume you are getting at our inability to know things and the idea that what is correct for one may not be for another. That is a big discussion, but let me say that I premise this on the idea that a true skeptic realizes we cannot know anything for sure, and that is a great base to start building our knowledge of the world from.
I don’t see how it addresses the circularity problem.
That vastly simplifies the world and allows us to build it up again from some very basic axioms.
Or that. Is everyone going to start from the same axioms?
If it is the case that your reality is fundamentally different from mine, we should learn this as we go. Remember that there is actually only one reality—that of the viewers.
The existence of a single reality isn’t enough to guarantee convergence of beliefs for the reasons given.
Do you not see that you are assuming you will suddenly be able to solve the foundational problems that philosophers have been wrestling with for millennia?
There were many issues wrestled with for millennia that were suddenly solved. Why should this be any different?
That doesn't make sense. The fact that something was settled eventually doesn't mean that your problems are probably going to be settled at a time convenient for you.
You could ask me the opposite question, of course, but that attitude is not the one taken by any human who ever discovered something worthwhile. Our chances of success may be tiny, but they are better than zero, which is what they would be if no one tries. Ugh… I feel like I am writing inspirational greeting-card quotes, but the point still stands!
Yes, I feel that you are talking in vague but positive generalities.
Yes, I feel that you are talking in vague but positive generalities.
First, on a side note, what do you mean by “but positive”? As in idealistic?
Excuse my vagueness. I think it comes from trying to cover too much at once. I am going to pick one fundamental idea I have and see your response, because if you update my opinion on this, it will cover many of the other issues you raised.
I wrote a small post (www.wikilogicfoundation.org/351-2/) on what I view as the starting point for building knowledge. In summary, it says our only knowledge is that of our thoughts and the inputs that influence them. It is in a similar vein to “I think, therefore I am” (although maybe it should be “thoughts, therefore thoughts are” to keep the pedants happy). I did not mention it in the article, but if we try to break it down like this, we can see that our only purpose is to satisfy our urges. For example, if we experience a God telling us we should worship them and be ‘good’ to be rewarded, we have no reason to do this unless we want to satisfy our urge to be rewarded. So no matter our beliefs, we all have the same core drive—to satisfy our internal demands. The next question is whether these are best satisfied cooperatively or competitively.
However, I imagine you have a lot of objections thus far, so I will stop to see what you have to say about that. Feel free to link me to anything relevant explaining alternative points of view if you think a post will take too long.
What I mean by “vague but positive” is that you keep saying there is no problem, but not saying why.
I wrote a small post (www.wikilogicfoundation.org/351-2/) on what I view as the starting point for building knowledge. In summary, it says our only knowledge is that of our thoughts and the inputs that influence them.
That’s a standard starting point. I am not seeing anything that dissolves the standard problems.
So no matter our beliefs, we all have the same core drive—to satisfy our internal demands.
We all have the same meta-desire, whilst having completely different object-level desires. How is that helping?
I don’t agree. If you’re right, we can do it right here and now, since we do not agree, which means that we are giving different probabilities of your project working—in particular, I say the probability of your project being successful is very close to zero. You presumably think it has some reasonable probability.
I think the probability is close to zero because trying to “drill down” to force agreement between people results in fights, not in agreement. But to address your argument directly: each person will end up saying “it is just a feeling” or some equivalent; in other words, they will each support their own position by reasons which are effective for them but not for the other person. You could argue that in this case they should each adopt a mean value for the probability, or something like that, but neither will do so.
And since I have given my answer, why do you think there is a reasonable probability that your project will succeed?
I think the probability is close to zero because trying to “drill down” to force agreement between people results in fights, not in agreement.
We are not in agreement here! Do you think it's possible to discuss this and have one or both of us change our initial stance, or will that attempt merely result in a fight? Note, I am sure it is possible for it to end in a fight, but I do not think it's a foregone conclusion. On the contrary, I think most worthwhile points of view were formed by hearing one or more opposing views on the topic.
they will each support their own position by reasons which are effective for them but not for the other person
Why must that be the case? On a shallow level it may seem so, but I think if you delve deep enough you can find a best-case solution. Can you give an example where two people must fundamentally disagree? I suspect any example you come up with will have a “lower level” solution where they will find that disagreeing is not in their best interest.
I recognize that the hidden premise in my thinking that agreement is always possible stems from the idea that we are all trying to reach a certain goal and a truer map of reality helps us get there, and that cooperation is the best long-term strategy.
I agree that we are not in agreement. And I do think that if we continue to respond to each other indefinitely, or until we agree, it will probably result in a fight. I admit that is not guaranteed, and there have been times when people I disagreed with changed their minds, and times when I did, and times when both of us did. But those cases have been in the minority.
“We are all trying to reach a certain goal and a truer map of reality helps us get there...” The problem is that people are interested in different goals and a truer map of reality is not always helpful, depending on the goal. For example, most of the people I know in real life accept false religious doctrines. One of their main goals is fitting in with the other people who accept those doctrines. Accepting a truer map of reality would not contribute to that goal, but would hinder it. I want the truth for its own sake, so I do not accept those doctrines. But they cannot agree with me, because they are interested in a different goal, and their goal would not be helped by the truth, but hindered.
I find it more likely that the times it degenerates into a fight are due to a lack of ability on the part of one of the debaters. The alternative is to believe that people like ourselves are somehow special. It is anecdotal, but I used to be incredibly stubborn until I met some good teachers and mentors. Now I think the burden of proof lies on the claim that, despite our apparent similarities, a large portion of humans are incapable of being reasoned with, no matter how good the teacher or delivery.
Of course, I expect some people physically cannot reason due to brain damage or whatever. But that is a far smaller group than what I imagine you are suggesting.
I would claim their main goal is not fitting in but achieving happiness, which they do by fitting in (albeit this may not be the optimal path). And I claim this is your goal as well. If you can accept that premise, we again have to ask whether you are special in some way for valuing the truth so highly. Do you not aim to be happy? I think you and I also have the same core goal; we just realize that it's easier to navigate to happiness with a map that closely matches reality.
Everybody benefits from a good map. That is why a good teacher can convert bull-headed people like I used to be, by starting with providing tools for mapping reality, such as education in fallacies and biases. When packaged in an easy-to-digest manner, tools that help improve reality maps are so useful that very few will reject them, just as very few people reject learning how to add and subtract.
It is anecdotal, but I used to be incredibly stubborn until I met some good teachers and mentors.
I guess when you say stubborn, you mean that you tried to be independent and didn't listen to other people. That's not the issue with the person who's religious, because most of his friends are religious.
Now I think the burden of proof lies on the claim that, despite our apparent similarities, a large portion of humans are incapable of being reasoned with, no matter how good the teacher or delivery.
A good teacher who teaches well can get a lot of people to adopt a specific belief, but that doesn't necessarily mean that the students arrive at the belief through “reasoning”. If the teacher taught a different belief about the concept, he would also get that across.
Now I think the burden of proof lies on the claim that, despite our apparent similarities, a large portion of humans are incapable of being reasoned with, no matter how good the teacher or delivery.
What evidence do you have that education in fallacies or biases helps people think better?
There seem to be many people who want to believe that's true, but as far as I know the decision-science literature doesn't support that belief.
You seem to be proposing a simplistic theory of goals, much like the simplistic theory of goals that leads Eliezer to the mistaken conclusion that AI will want to take over the world.
In particular, happiness is not one unified thing that everyone is aiming at and that is the same for them and for me. If I admit that I do what I do in order to be happy, then a big part of that happiness would be “knowing the truth,” while for them, that would be only a small part, or no part at all (although perhaps “claiming to possess the truth” would be a part of it for them—but it is really not the same thing to value claiming to possess the truth and to value the truth).
Additionally, using “happiness” as it is typically used, I am in fact less happy on account of valuing the truth more, and there is no guarantee that this will ever be otherwise.