Philosophy is a social/intellectual process taking place in the world. If you understand the world, you understand how philosophy proceeds.
What if I’m mainly interested in how philosophical reasoning ideally ought to work? (Similar to how decision theory studies how decision making normatively should work, not how it actually works in people.) Of course if we have little idea how real-world philosophical reasoning works, understanding that first would probably help a lot, but that’s not the ultimate goal, at least not for me, for both intellectual and AI reasons.
The latter because humans do a lot of bad philosophy and often can’t recognize good philosophy. (See popularity of two-boxing among professional philosophers.) I want a theory of ideal/normative philosophical reasoning so we can build AI that improves upon human philosophy, and in a way that convinces many people (because they believe the theory is right) to trust the AI’s philosophical reasoning.
This leads to a view where philosophy is one of many types of discourse/understanding that mutually shape one another (a non-foundationalist view). This is perhaps disappointing if you wanted ultimate foundations in some simple framework.
Sure ultimate foundations in some simple framework would be nice but I’ll take whatever I can get. How would you flesh out the non-foundationalist view?
Most thought is currently not foundationalist, but perhaps a foundational re-orientation could be found by understanding the current state of non-foundational thought.
I don’t understand this sentence at all. Please explain more?
What if I’m mainly interested in how philosophical reasoning ideally ought to work?
My view would suggest: develop a philosophical view of normativity and apply that view to the practice of philosophy itself. For example, if it is in general unethical to lie, then it is also unethical to lie about philosophy. Philosophical practice being normative would lead to some outcomes being favored over others. (It seems like a problem that you need philosophy to have a theory of normativity, a theory of normativity to do meta-philosophy, and meta-philosophy to do better philosophy, but earlier versions of each theory can be used to make later versions of the others, in a bootstrapping process like with compilers.)
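As a toy illustration of that bootstrapping loop (all names and numbers below are hypothetical, a sketch of the compiler analogy rather than a model of actual philosophy): each theory’s next version is built using the current version of the theory it depends on, the way a new compiler is compiled by the old one.

```python
# A minimal bootstrapping sketch (hypothetical quantities): each "theory" is
# reduced to a quality score in [0, 1], and each new stage of a theory is
# produced using the previous stage of the theory it depends on.

def improve(own, supporting):
    # Assumption for the sketch: a better supporting theory closes more of
    # the gap to a perfect (1.0) theory; 0.5 is an arbitrary rate.
    return own + 0.5 * supporting * (1.0 - own)

# Crude stage-0 versions of each theory.
normativity, philosophy, metaphilosophy = 0.1, 0.1, 0.1

for stage in range(10):
    # Mirror the circular dependence described above: normativity leans on
    # philosophy, philosophy on meta-philosophy, meta-philosophy on normativity.
    normativity, philosophy, metaphilosophy = (
        improve(normativity, philosophy),
        improve(philosophy, metaphilosophy),
        improve(metaphilosophy, normativity),
    )
    print(stage, round(normativity, 3), round(philosophy, 3),
          round(metaphilosophy, 3))

# The scores climb toward 1.0 even though no theory has an independent
# foundation: the circle is productive rather than vicious.
```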
I mean normativity to include ethics, aesthetics, teleology, etc. Developing a theory of teleology in general would allow applying that theory to philosophy (taken as a system/practice/etc.). It would be strange to have a normative theory for philosophical practice that is distinct from the one for other practices, since philosophical practice is a subset of practice in general; philosophical normativity is a specialized variant of general normativity, analogous to normativity about other areas of study. The normative theory is mostly derived from cases other than cases of normative philosophizing, since most activity that normativity could apply to is not philosophizing.
How would you flesh out the non-foundationalist view?
That would amount to describing my views about things in general, which would take a long time. The original comment was meant to indicate what is non-foundationalist about this view.
I don’t understand this sentence at all. Please explain more?
Imagine a subjective credit system. A bunch of people think other people are helpful/unhelpful to them. Maybe they help support helpful people, and so people who are more helpful to helpful people (etc.) succeed more. It’s subjective; there’s no foundation where there’s some terminal goal and other things are instrumental to it.
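A toy version of this (the matrix entries are invented for illustration): let each person rate how helpful each other person is to them, then repeatedly channel support toward people whom well-supported people find helpful. This is essentially power iteration toward eigenvector centrality, and it stabilizes without any terminal goal in the picture.

```python
import numpy as np

# Hypothetical ratings: helpful[i][j] = how helpful person j is to person i.
# The numbers are purely subjective assessments; nothing grounds them.
helpful = np.array([
    [0.0, 0.8, 0.1],
    [0.5, 0.0, 0.9],
    [0.4, 0.2, 0.0],
])

# "People who are more helpful to helpful people (etc.) succeed more":
# iterate credit through the network until it settles (power iteration,
# converging to the principal eigenvector of the rating matrix).
credit = np.ones(3) / 3
for _ in range(100):
    credit = helpful.T @ credit  # j earns credit from i in proportion to i's credit
    credit /= credit.sum()       # keep credit as relative shares

print(credit)  # a stable standing for each person, with no foundation beneath it
```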
An intersubjective credit system would be the outcome of something like Pareto-optimal bargaining between the people, which would lead to a unified utility function, which would imply some goals being terminal and others being instrumental.
Speculatively, it’s possible to create an intersubjective credit system (implying a common currency) given a subjective credit system.
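A minimal sketch of that bargaining step (the policies, utilities, and zero disagreement point are all invented for illustration): Nash bargaining picks the Pareto-optimal outcome maximizing the product of gains, and at that outcome the same choice also maximizes a single weighted sum of the individual utilities, which is the sense in which bargaining yields one unified utility function.

```python
# Hypothetical setup: three candidate policies and two people, each with a
# subjective utility per policy; the disagreement payoff is 0 for both.
policies = {
    "A": (3.0, 1.0),
    "B": (2.5, 2.5),
    "C": (1.0, 3.0),
}

# Nash bargaining: choose the policy maximizing the product of gains over
# the disagreement point. The result is Pareto-optimal.
choice = max(policies, key=lambda p: policies[p][0] * policies[p][1])

# At the solution (u1, u2), the Nash product's supporting hyperplane gives
# weights (u2, u1), so the same policy maximizes one combined utility
# u2*x1 + u1*x2 -- a unified function with its own implied ranking.
u1, u2 = policies[choice]
unified = {p: u2 * x1 + u1 * x2 for p, (x1, x2) in policies.items()}

print(choice)                         # "B"
print(max(unified, key=unified.get))  # also "B"
```

(The weighted-sum reading relies on the feasible set being convex at the solution; it holds for these particular numbers.)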
This might apply at multiple levels. Perhaps individual agents seem to have terminal goals because different parts of their mind create subjective credit systems, which then get transformed into an objective credit system in a way that prevents money pumps and the other usual consequences of not being a VNM agent.
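For concreteness, here is the standard money-pump story behind that parenthetical, with made-up items and a made-up fee: an agent whose preferences cycle will pay for each trade “up” the cycle and end up holding what it started with, minus money.

```python
# Money-pump illustration (hypothetical items and fee): preferences cycle,
# A < B, B < C, C < A, so each offered trade looks like an improvement.
prefers = {("A", "B"): "B", ("B", "C"): "C", ("C", "A"): "A"}
FEE = 1.0

holding, money = "A", 10.0
for offered in ["B", "C", "A"]:
    if prefers.get((holding, offered)) == offered:
        holding, money = offered, money - FEE  # pays a fee for each "upgrade"

print(holding, money)  # "A" 7.0 -- same item as at the start, strictly poorer
```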
I’m speculating that a certain kind of circular-seeming discourse, where area A is explained in terms of area B and vice versa, might be in some way analogous to a subjective credit network, and there might be some transformation of it that puts foundations on everything, analogous to founding an intersubjective credit network in terminal goals. Some things that look like circular reasoning can be made valid and others can’t. The cases I’m considering are ones where your theory of normativity depends on your theory of philosophy, your theory of philosophy depends on your theory of meta-philosophy, and your theory of meta-philosophy depends on your theory of normativity, which seems kind of like a subjective credit system.
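One way to make “some circular-seeming reasoning can be made valid and some can’t” concrete, as a loose numerical analogy rather than a claim about actual theories: two quantities defined in terms of each other are unproblematic when the mutual dependence is a contraction that settles to a unique fixed point, and vicious when it amplifies and never settles.

```python
# A loose numerical analogy for valid vs. vicious circularity (the update
# rules are invented): each quantity is defined in terms of the other.

def settle(update, a=1.0, b=1.0, steps=50):
    for _ in range(steps):
        a, b = update(a, b)
    return a, b

# Valid circle: each side depends on the other through a contraction, so
# the circular definitions jointly pin down a unique answer.
print(settle(lambda a, b: (0.5 * b + 1.0, 0.5 * a + 1.0)))  # -> (2.0, 2.0)

# Vicious circle: the dependence amplifies instead of damping, so the
# "definitions" never determine a value at all.
print(settle(lambda a, b: (2.0 * b + 1.0, 2.0 * a + 1.0)))  # blows up
```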
Sorry if this is confusing (it’s confusing to me too).
Quick question about the qualifier ‘philosophical’ in your question “What if I’m mainly interested in how philosophical reasoning ideally ought to work?”
Are you suggesting that ‘philosophical’ reasoning differs in an essential way from other kinds of reasoning, because of the subject matter that qualifies it? Are you more or less inclined to views like Kant’s ‘Critique of Pure Reason,’ where the nature of philosophical subjects puts limits on the ability to reason about them?
I wrote a post about my current guesses at what distinguishes philosophical from other kinds of reasoning. Let me know if that doesn’t answer your question.