There is no “why”. If there were, the assumptions wouldn’t be called “assumptions”. If you want to have a basis for believing anything, you have to start from your foundations and build up. If those foundations are themselves supported, then by definition they are not foundations, and the “real” foundations must lie further down the chain. Your only choices are to pick suitable axioms on which to base your epistemology, or to become trapped in infinite regression, moving further and further down the chain of implications trying to find where it stops, which in practice means you’ll sit there and think forever, becoming like unto a rock.
The chain won’t stop. Not unless you artificially terminate it.
So it’s Ok to use non rationalist assumptions?
I haven’t the slightest idea what you mean by “non rationalist” (or “Ok” for that matter), but I’m going to tentatively go with “yes”, if we’re taking “non rationalist” to mean “not in accordance with the approach generally advocated on LessWrong and related blogs” and “Ok” to mean “technically allowed”. If you mean something different by “non rationalist” you’re going to have to specify it, and if by “Ok” you mean “advisable to do so in everyday life”, then heck no. All in all, I’m not really sure what your point is, here.
Your guesses are about right.
The significance is that if rationalists respond to sceptical challenges by assuming what they can’t prove, then they are in the same position as reformed epistemology. That is, they can’t say why their axioms are rational, and can’t say why theists are irrational, because theists who follow RE likewise take the existence of God as something they assume because they can’t prove it: rationalism becomes a label with little meaning.
So you’re saying that taking a few background axioms that are pretty much required to reason… is equivalent to theism.
I think you may benefit from reading The Fallacy of Grey, as well as The Relativity of Wrong.
The axioms of rationality are required to reason towards positive conclusions about a real world. They are not a minimal set, because sceptics have a smaller set, which can do less.
Most people probably aren’t satisfied with the sort of “less” that universal skepticism can do.
Also, some axioms are required to reason, period. Let’s say I refuse to take ~(A ∧ ~A) as an axiom. What now? (And don’t bring up paraconsistent logic, please—it’s silly.)
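(For what it’s worth, the point that non-contradiction is baked into ordinary reasoning can be made formal: once you accept the rules of a standard proof system, ~(A ∧ ~A) is a one-line theorem. A sketch in Lean 4, purely as an illustration:

```lean
-- Non-contradiction: no proposition holds together with its negation.
-- In Lean, ¬A is definitionally A → False, so the proof simply applies
-- the negation hna : ¬A to the witness ha : A.
theorem non_contradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun ⟨ha, hna⟩ => hna ha
```

Of course, this only pushes the question back to why one accepts the proof system’s rules in the first place, which is exactly the regress under discussion.)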
Rational axioms do less than theistic axioms, and a lot of people aren’t happy with that “less” either.
Not in terms of reasoning “towards positive conclusions about a real world”, they don’t.
Most of whom are theists trying to advance an agenda. “Rational” axioms, on the other hand, are required in order to have an agenda at all.
From the sceptic’s perspective, rationalists are advancing the agenda that there is a knowable external world.
No. They do less in terms of the soul and things like that, which theists care about, and rationalists don’t.
Meanwhile, sceptics don’t care about the external world.
So everything comes down to epistemology, and epistemology comes down to values. Is that a problem?
And yet strangely enough, I have yet to see a self-proclaimed “skeptic” die of starvation due to not eating.
EDIT: Actually, now that I think about it, this could very easily be a selection effect. We observe no minds that behave this way, not because such minds can’t exist, but because such minds very quickly cease to exist.
They have answers to that objection, just as rationalists have answers to theists’ objections.
If there is no why, is any set of axioms better than any other? Could one be just as justified believing that, say, what actually happened is the opposite of what one’s memories say?
(Note: I’m going to address your questions in reverse order, as the second one is easier to answer by far. I’ll go into more detail on why the first one is so hard to answer below.)
Certainly, if you decide to ignore probability theory, Occam’s Razor, and a whole host of other things. It’s not advisable, but it’s possible if you choose your axioms that way. If you decide to live your life under such an assumption, be sure to tell me how it turns out.
At this point, I’d say you’re maybe a bit confused about the meaning of the word “better”. For something to be “better” requires a criterion by which to judge that something; you can’t just use the word “better” in a vacuum and expect the other person to be able to immediately answer you. In most contexts, this isn’t a problem because both participants generally understand and have a single accepted definition of “better”, but since you’re advocating throwing out pretty much everything, you’re going to need to define (or better yet, Taboo) “better” before I can answer your main question about a certain set of axioms being better than any other.
Why would one need to ignore probability theory and Occam’s Razor? Believing that the world is stagnant, that the memories one is currently thinking of are false, and that the memory of having more memories is false seems to be a simple explanation of the universe.
By better, I mean “more likely to result in true beliefs.” Or if you want to taboo true, “more likely to result in beliefs that accurately predict percepts.”
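(That second formulation can at least be made operational: score each set of beliefs by the log-probability it assigned to the percepts that actually occurred. A minimal sketch, with invented numbers purely for illustration — comparing a system that trusts memory against one that believes the opposite of what its memories say:

```python
import math

def log_score(predicted_probs, percepts):
    """Sum of log-probabilities a belief system assigned to what actually
    happened. predicted_probs[i] is the probability the system gave to
    percept i occurring; percepts[i] is whether it actually occurred."""
    return sum(math.log(p if seen else 1.0 - p)
               for p, seen in zip(predicted_probs, percepts))

# Hypothetical percept stream: the sun rose on five consecutive mornings.
percepts = [True] * 5

trusts_memory  = [0.99] * 5  # "past sunrises are evidence of future ones"
inverts_memory = [0.01] * 5  # "what happened is the opposite of my memories"

print(log_score(trusts_memory, percepts))   # close to 0 (good predictions)
print(log_score(inverts_memory, percepts))  # strongly negative (bad ones)
```

Under this scoring rule, “better” just means a higher total score on the percepts one actually experiences.)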
If I were to point out that my memories say that making some assumptions tend to lead to better perception predictions (and presumably yours also), would you accept that?
Are you actually proposing a new paradigm that you think results in systematically “better” (using your definition) beliefs? Or are you just saying that you don’t see that the paradigm of accepting these assumptions is better at a glance, and would like a more rigorous take on it? (Either is fine, I’d just respond differently depending on what you’re actually saying.)
I’d only believe it if you gave evidence to support it.
The latter. What gave you the suggestion that I was proposing an improved paradigm?
You seemed to think that not taking some assumptions could lead to better beliefs, and it wasn’t clear to me how strong your “could” was.
You seem to accept induction, so I’ll refer you to http://lesswrong.com/lw/gyf/you_only_need_faith_in_two_things/
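(The linked article’s claim that a non-super-exponentially-small prior on induction is enough can be illustrated with a toy Bayesian update. The numbers here are invented for the example: a 10⁻⁹ prior on “induction works”, against a rival hypothesis that gave each confirming observation probability 1/2:

```python
def posterior_induction(n_confirmations, prior=1e-9):
    """Posterior probability that induction works, after n observations it
    predicted perfectly, versus a rival hypothesis that assigned each
    observation probability 1/2."""
    p_data_given_induction = 1.0  # induction predicted every observation
    p_data_given_chance = 0.5 ** n_confirmations
    evidence = (prior * p_data_given_induction
                + (1 - prior) * p_data_given_chance)
    return prior * p_data_given_induction / evidence

print(posterior_induction(0))   # starts at the tiny prior
print(posterior_induction(60))  # near-certainty after 60 confirmations
```

The point is only that any non-negligible prior gets overwhelmed by accumulating confirmations; it says nothing about percepts or reasoning skills, which is the gap noted in the reply below.)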
Though the linked article stated that one only needs to believe that induction has a non-super-exponentially small chance of working and that a single large ordinal is well-ordered, it didn’t really justify this. It said nothing about why belief in one’s percepts and reasoning skills is needed.
Not in the sense that I have in mind.
Unfortunately, this still doesn’t solve the problem. You’re trying to doubt everything, even logic itself. What makes you think the concept of “truth” is even meaningful?