One doesn’t need to assume an objective reality if one wants to be agentic. One can believe that 1) stuff you do influences your prosperity, and 2) it is possible to select for more prosperous influences.
The use of the concept of “effective” is a bit wonky there, and the word seems to carry a lot of the meaning. As far as I remember, an “effective method” is a measure of what a computer or mathematician is able to unambiguously specify. I find it hard to imagine how one would fairly judge a method to be ineffective.
Just because you need to have a starting point doesn’t mean that your approach needs to be axiomatic.
It is unclear why planetary consciousness would be desirable. If you admit that you can’t know, to any great degree, what happens on the other side of the planet, you don’t have to rely on unreliable data media. Typically your life happens here and not there. And even if “there” is relevant to your life, it usually has an intermediary through which it affects things “here”.
I’m very grateful that you bring up these points. Sorry for the long response, but I like your comment and would like to write down some thoughts on each part of it.
One doesn’t need to assume an objective reality if one wants to be agentic. One can believe that 1) stuff you do influences your prosperity, and 2) it is possible to select for more prosperous influences.
First of all, I think choosing the term “objective” in my post was too strong and not quite well defined. (My post also seems at risk of circular reasoning, because it somehow tries to argue for rationality using rationality.) I really should have thought more about this paragraph. You proposed an alternative to the assumption of an objective reality. While your alternative still requires the assumption that there are some “real” rules determining which of one’s actions cause which effects on my sensations, today or in the future, and thus some form of reality, this reality could indeed be purely subjective, in the sense that other “sentient beings” (if there are any) might not experience the same “reality”, the same rules.
The use of the concept of “effective” is a bit wonky there, and the word seems to carry a lot of the meaning. As far as I remember, an “effective method” is a measure of what a computer or mathematician is able to unambiguously specify. I find it hard to imagine how one would fairly judge a method to be ineffective.
What I mean by effectiveness is a “measure of completeness”: if some method for obtaining knowledge does not obtain any knowledge at all, it is not effective at all; if it were able to derive all true statements about the world, it would be maximally effective. Logic is a tool that just puts existing statements together and yields new statements that are guaranteed to be true, given that the hypotheses were correct. So I’d argue that not having logic in one’s toolbox is never an advantage with respect to effectiveness.
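To make that concrete, here is a tiny sketch (my own toy example, with made-up facts and rules, not anything from the original post) of the sense in which inference only passes truth forward: whatever modus ponens adds to the pool is true whenever the premises were.

```python
# Toy forward chaining with modus ponens: starting from premises assumed true,
# the only statements ever added are consequences of rules whose antecedents
# already hold, so they are true whenever the premises are.

facts = {"it_rains"}                              # premises taken as true
rules = [("it_rains", "street_is_wet"),           # "if it rains, the street is wet"
         ("street_is_wet", "shoes_get_dirty")]    # "if the street is wet, shoes get dirty"

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)                 # apply modus ponens
            changed = True

print(facts)  # {'it_rains', 'street_is_wet', 'shoes_get_dirty'}
```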
Just because you need to have a starting point doesn’t mean that your approach needs to be axiomatic.
This is not clear to me. What do you think is the difference between an axiom and a starting point in epistemology?
It is unclear why planetary consciousness would be desirable. If you admit that you can’t know, to any great degree, what happens on the other side of the planet, you don’t have to rely on unreliable data media. Typically your life happens here and not there. And even if “there” is relevant to your life, it usually has an intermediary through which it affects things “here”.
This is also a very good point, and I’ll try to clarify. Consider an asteroid that is going to collide with Earth. At some point in the future we will know about the asteroid’s existence, even if only shortly before the collision, depending on how deadly it is. But it can be hard to know the asteroid’s position (or even its existence) far in advance, although that would be much more useful.
So, in a nutshell, I’m also interested in parts of reality that do not yet strongly interact with my environment but that might interact with it in the future. (Another reason might be ethical: we should know when, somewhere in the world, someone commits genocide, so that we can use our influence to do something about it.)
So maybe the problem is a lag in feedback, or hidden complexity in the causal chain between one’s own actions and the feedback, and this complexity requires one to have a deep understanding of something one cannot observe directly.
With effectiveness, my doubt is that your definition misses some kinds of knowledge, and that logic might be less than effective in the grander scheme of things. For example, the knowledge of how to ride a bike is hard to bring into the scope of logic; in that respect logic is incomplete, i.e. it leaves some knowledge out. There is the issue of Mary’s room and whether colour experience counts as knowledge: we can grant her all the maths textbooks and science books, but we can still doubt whether we have captured all knowledge. Even in the context of “effective method”, Turing suspected that mathematicians use a kind of “insight”, that coming up with a proof is a different kind of process than following a proof. A universal Turing machine captures “effective method”, which encompasses all of the formal mathematics a person could write down. But doubt still lingers over whether that is all of the interesting kinds of processes.
One could also be worried about a method of knowing that encapsulates logic. Divine revelation could be posited to give vast amounts of knowledge, maybe enough that further knowledge-production work ceases to be viable. There is also the “trivial theory of arithmetic”, where we just assume all arithmetic truths as axioms. In such a system there are no theorems; there is only a check of whether or not a thing is an axiom. Such a system could be all-encompassing and avoid the use of logical inference.
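To illustrate the “trivial theory of arithmetic” point, here is a rough sketch (my own toy code, restricted to closed statements simple enough to evaluate directly; the helper names are made up): theoremhood collapses into a bare membership check, and no inference rule is ever applied.

```python
# Toy "trivial theory of arithmetic": the axiom set is the set of all true
# arithmetic sentences, so proving something reduces to checking whether it
# is an axiom. Restricted here to closed statements Python can evaluate.

def is_axiom(statement: str) -> bool:
    """Membership test for the axiom set, i.e. is the sentence true?"""
    return bool(eval(statement, {"__builtins__": {}}))

def is_theorem(statement: str) -> bool:
    # A theorem just *is* an axiom in this system; nothing gets derived.
    return is_axiom(statement)

print(is_theorem("2 + 2 == 4"))   # True: it is an axiom
print(is_theorem("2 + 2 == 5"))   # False: not an axiom, hence not a theorem
```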
A starting point is a bit undefined; an axiomatic approach is far more defined. Sure, we don’t have a super-certain “boot system” for how we get going. But it doesn’t have the characteristics of an axiomatic system. In the axiomatic style you can go “Assume X. Is it the case that X?” and you can definitely answer “yes, X is the case”. If you tried to shoehorn reliance on the senses into axiomatic terms, it would go something like “Assume X. Now it turns out that X isn’t the case”, which is nonsense in proof terms. Sure, there is the appeal to absurdity: “Entertain the subthought: [Assume X. X leads to a contradiction.] Because the subthought is contradictory, the axiom set can’t all be true at the same time. Therefore not-X.” But when our sensory expectations are violated, they are not appeals to absurdity; it is more a trial and error of “Guess X. If X, then Y is a prosperous choice. The experience of Y is very unprosperous. Regard X as a bad guess.” A purely axiomatic approach will always refer back to the starting definitions to resolve issues of truth. We don’t need to guess our axioms because we assume them true, which in effect means we define them to be true. “Assume all Xs are Y.” “Well, what if I find an X that isn’t Y?” “Then it is not an X; therefore you can’t find an X that isn’t Y.”
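Roughly, the trial-and-error side of that contrast could be sketched like this (a toy loop of my own; observe_outcome is a made-up stand-in for whatever experience follows from acting on a guess): nothing is resolved by appealing back to starting definitions, guesses are simply kept or discarded according to how acting on them turns out.

```python
# Toy "guess, test, revise" loop: guesses are not assumed true; they are
# dropped when the experience that follows from acting on them is bad.

import random

def observe_outcome(action: str) -> float:
    """Made-up world: how 'prosperous' acting on a guess turned out."""
    return random.uniform(-1.0, 1.0)

guesses = ["X1", "X2", "X3"]          # candidate guesses about the world
kept = []
for guess in guesses:
    outcome = observe_outcome(f"act as if {guess}")
    if outcome < 0:                   # experience was unprosperous
        continue                      # regard the guess as bad and drop it
    kept.append(guess)                # otherwise keep it, provisionally

print("guesses still entertained:", kept)
```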
I get that getting hit by an asteroid would be my business. But knowing what half of China is going to have for lunch tomorrow really isn’t; I am fine not knowing that, and I am fine not having control over it: they can have their culinary autonomy. When you scan for impact asteroids, you would not generally scan all things in the same way, but focus on paths and locations that could contain dangerous objects, which means giving more scrutiny to some and less to others. There is also the issue of balancing the prediction horizon over several threats. Do you want to spend time getting an additional decade of advance warning on a collision-course asteroid, or do you want another decade of advance warning on a climate disaster? Just because you can fret about or control something doesn’t mean you should. And integrating garbage can be more dangerous than acknowledging that you don’t know.
Regarding logic and methods of knowing, I agree that logic might not be the only useful way of producing knowledge, but why shouldn’t you have it in your toolbox? I’m just trying to argue that there’s no reason for anyone to neglect logical arguments if they yield new knowledge.
I agree that “prior” is a vastly better word choice than “axiom” because it allows us to refine the prior later.
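As a toy illustration of what “refining later” means (the numbers are made up): a prior gets moved by evidence via Bayes’ rule, whereas an axiom would simply stay fixed no matter what is observed.

```python
# Minimal Bayesian update: the prior is a revisable starting point, not an axiom.

prior = 0.5                  # prior probability that hypothesis H is true
p_e_given_h = 0.9            # assumed likelihood of the evidence if H is true
p_e_given_not_h = 0.2        # assumed likelihood of the evidence if H is false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(round(posterior, 3))   # ~0.818: the starting point has been refined by evidence
```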
The “planetary consciousness” thing also appears to me to be a misunderstanding: I don’t mean to propose that every piece of information about the world should be retrieved and processed, just as, even in my immediate environment, what my neighbour does in his house is none of my business.