Further, since the criterion for knowing what is true is itself unreliably known, we must be choosing that criterion on some basis other than truth, and so should instead view that prior criterion as coming from usefulness to some purpose we have.
Doesn’t this have the standard issue with philosophical pragmatism, i.e. that knowing what is useful requires knowing about reality? (In other words, reducing questions of truth to questions of usefulness reduces one question to a different one that is no easier)
Certainly, ontologies must be selected partially based on criteria other than correspondence with reality (such as analytic tractability), but for these ontologies to be useful in modeling reality, they must be selected based on a pre-ontological epistemology, not only a pre-ontological telos.
In general I’d say postrationality does necessarily make use of the pre-ontological, what I’d call the ontic: things in themselves, or things-as-things. I think this is why most serious postrationalist-identifying folks I know practice some form of meditation: we need a way to get in touch with reality with as little ontology as possible, and meditation includes well-explored techniques for doing just that. Because you are right: ultimately we end up grounding things in what we don’t and can’t know (and maybe can’t even experience), which I call metaphysical speculation, and which points at the same thing as Kierkegaard’s “leap of faith” (although without Kierkegaard’s Christian bias). Much of the challenge we face is balancing the uncertainty from speculation against the pragmatic need to get on with life.
I guess what I am getting at is: Kierkegaard’s pre-ontology doesn’t selectively choose an ontology that has high correspondence with reality, so he has a weak pre-ontological epistemology. It is possible to have a better pre-ontological epistemology than Kierkegaard’s. Meditation probably helps, as do the principles discussed in this post on problem formulation. (To the extent that I take pre-ontology/meta-ontology seriously, I guess I might be a postrationalist according to some definitions)
A specific example of a pre-ontological epistemology is a “guess-and-check-and-refine” procedure, where you get acquainted with the phenomenon of interest, come up with some different ontologies for it, check these ontologies based on factors like correspondence with (your experience of) the phenomenon and internal coherence, and refine them when they have problems and it’s possible to improve them. This has some similarities to Solomonoff induction though obviously there are important differences. Even in the absence of perfect knowledge of anything and without resolving philosophical skepticism, this procedure selectively chooses ontologies that have higher correspondence with reality.
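As a loose illustration only (the code and all its names and numbers are my own invention, not the commenter’s), the “guess-and-check-and-refine” loop might be sketched in Python with toy “ontologies” that are just candidate slopes for a model y = a·x:

```python
# Toy sketch of "guess-and-check-and-refine" over candidate models y = a*x.
# Everything here is an illustrative invention, not from the original comment.

def mismatch(a, observations):
    """'Correspondence with (your experience of) the phenomenon':
    mean squared error of the model y = a*x against observations."""
    return sum((a * x - y) ** 2 for x, y in observations) / len(observations)

def guess_check_refine(guesses, observations, rounds=20, step=0.1):
    # Guess: start from whichever initial candidate checks out best.
    best = min(guesses, key=lambda a: mismatch(a, observations))
    # Refine: repeatedly keep a nearby variant whenever it fits better.
    for _ in range(rounds):
        for candidate in (best - step, best + step):
            if mismatch(candidate, observations) < mismatch(best, observations):
                best = candidate
    return best

observations = [(1, 2.1), (2, 3.9), (3, 6.2)]             # roughly y = 2x
print(guess_check_refine([0.0, 1.0, 5.0], observations))  # ends up near 2.0
```

The point is only structural: the selection criterion inside the loop is fit with the phenomenon (plus, in richer versions, coherence or simplicity), not usefulness to a purpose, even though the resulting model happens to be useful.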
I guess you could describe this as “selecting an ontology based on how useful it is according to your telos” but this seems like a misleading description; the specific criteria used aren’t directly about usefulness, and relate to usefulness largely through being proxies for truth.
It’s quite possible that we don’t disagree on any of these points and I’m just taking issue with your description.
That might well be the case. I don’t have much of an answer about how to address the ontic directly without ontology, and I view learning to engage it more fully as a key aspect of the zen practice I engage in. But zen also pushes you away from using language about these topics, so while I may be getting more in touch with the ontic myself, I’m not developing a skill to communicate about it, largely because the two are viewed as being in conflict: learning to talk about it obscures the ability to get in direct contact with it. This seems to be a limitation of the techniques I’m using, but I’m not (yet) in a position to say either that it’s a necessary limitation or that we can go beyond it.
Telos/purpose/usefulness/will is my best way of talking about what I might describe as the impersonal animating force of the universe that exists prior to our understanding of it, but I agree something is lost when I try to nail it down into language: talking about usefulness to a purpose puts it in the language of measurement. That said, I think you are right that truth is often so instrumentally important to any purpose that it ends up dominating our concerns, such that rationality practice is dramatically more effective at creating the world we desire than almost anything else. Hence I try to at least occasionally emphasize that metarationality seeks to realize the limitations of rationality so that we can grapple with them, while also not forgetting how useful rationality is!
Mod Note: this comment seems more confrontational than it needs to be. (A couple of other comments in the thread also seem like they probably cross the line. I haven’t had time to process everything and form a clear opinion, but wanted to make at least a brief note)
(this is not a comment one way or another on the overall conversation)
Added: It seems the comment I replied to has been deleted.
We don’t have the option to trade off between truth and usefulness, because we don’t have a means of establishing truth, in the sense of correspondence to reality, separately from usefulness, in the sense of predictive accuracy. If you are a typical scientific philosopher, you will treat usefulness, or predictive power, as a substitute for correspondence to reality without understanding how it could work.
Aren’t there lots of false beliefs that are compatible with good predictions and we know are false? E.g. the chocolate cake hypothesis.
Lots compared to what? How do you compare that number to the number of predictively adequate models which are false for unknown reasons?
I’m pretty confused at this point. You started with a fairly universal statement: “we don’t have a means of establishing truth, in the sense of correspondence to reality, separately from usefulness, in the sense of predictive accuracy”. I named a counterexample: the chocolate cake hypothesis. This invalidates the universal claim, unless I’m misinterpreting something here.
It’s transparently obvious that there are lots of hypotheses similar to the chocolate cake hypothesis (it could be a vanilla cake, or a cherry cake, or...). I’m not making any relative statement about how many of these there are compared to anything else.
Then let me restate my point as ‘we don’t have a general means...’
Ok. What do you think of cartography? Is mapping out a new territory using tools like measurement and spatial representation a process that does not establish truth separately from predictive accuracy?
It seems wrong (and perhaps a form of scientism) to frame cartography in terms of predictive accuracy: while the resulting maps do end up having high predictive accuracy, the mapmaking process does not involve predictive accuracy directly, only observation and recording; predictive accuracy is a side effect of the fact that cartography produces accurate maps.
This actually seems like a pretty general phenomenon: predictive accuracy can’t be an input into your epistemic process, since predictions are about the future. Retrodictions (i.e. “predictions” of past events) can go into your epistemic process, but usefulness is more about predictive ability than retrodictive ability.
It is a process that does not establish truth separately from predictive accuracy. You can have 100% predictively accurate cartography in a simulation.
“observation and recording”

Observation of what? Having a perception “as if” of something doesn’t tell you what the ultimate reality is.
I don’t know what you mean by “separate from” at this point and it’s probably not worth continuing discussion until that’s clearer. (In what sense can anything be separate from anything else in a causally connected universe?)
I mean observation in the conventional sense (getting visual input and recognizing e.g. objects in it), which in humans requires a working visual cortex. Obviously cartography doesn’t resolve philosophical skepticism and I’m not claiming that it does, only that it works in producing accurate representations of the territory given assumptions that are true of the universe we inhabit.
So the orthogonality thesis is a priori false?
But that is exactly what I am talking about!
Yes, an agent’s goals aren’t causally or probabilistically independent of its intelligence, though perhaps a weaker claim such as “almost any combination is possible” is true.
EDIT: re philosophical skepticism: okay, so how does bringing in predictive accuracy help? That doesn’t resolve philosophical skepticism either (see: no free lunch theorems).
Even if the universal claim that complete orthogonality is impossible is true (I notice in passing that it is argued for with a claim about how the world works, so that you are assuming scepticism has been resolved in order to resolve scepticism), the correlation between prediction and correspondence could still be 0.0001%.
Predictive accuracy doesn’t help with philosophical scepticism. It is nonetheless worth pursuing because it has practical benefits.
To say that a map exists is to propose that it has substance within the Territory: it exists within the concept of being fathomable and describable. In that sense a map is in the territory too.
Rationality: the map is not the territory.
Postrationality: “the map is not the territory” is a category error. The NotTerritory.map is in a broader territory that postrationality has stopped pretending doesn’t exist. That broader territory has a map that is the territory.
It’s much worse than that. We don’t have the option of selecting an ontology by its correspondence to reality, separately from its usefulness, because we don’t have a direct test for correspondence, only the assumption that predictive power and simplicity somehow add up to an indirect test.