Hmmm. Bas van Fraassen’s The Scientific Image takes the side of the epistemologists on scientific questions. I take Kant to be an advocate for the epistemologists in his Critique of Pure Reason, though he makes some effort to be a compromiser. Rae Langton argues that the compromises in Kant are genuinely important, and so advocates a role for both epistemology and ontology, in her Kantian Humility. Heidegger seemed to want to make ontology primary, but I can’t really recommend anything he wrote. It’s difficult to know exactly what to recommend, because this issue is thoroughly entangled with a host of other issues, and any discussion of it is heavily colored (and perhaps heavily distorted) by whichever other issues are also on the table. Still, those are a few possibilities which come to mind.
When focusing on an issue such as the friendliness of an FAI, do you think that’s in the domain of epistemology or ontology?
I feel like it’s more epistemological, but then I tend to think everything is. Perhaps it is another symptom of my biases, but I think it more likely that trying to build an AI will help clarify questions about ontology vs. epistemology than that anything in our present knowledge of ontology vs. epistemology will help in devising strategies for building an AI.
Cyc calls itself an ontology. Doesn’t any AI need such an ontology to reason about the world?
Well, this would be an example of one of the projects that I think may teach us something. But if you are speaking of “an ontology,” rather than just “ontology,” you may be talking about some theory of relativized ontologies, but more likely you’re not speaking about ontology in the same way as those who prioritize it over epistemology. Those who make epistemology primary still talk about things; they just disagree with the ontologists about complicated aspects of our relationship to the things and what our talk about the things means.
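To make the engineering sense of “an ontology” concrete: it is an artifact, roughly a concept hierarchy plus typed assertions. Here is a minimal sketch, loosely in the spirit of Cyc’s isa/genls distinction; the concept names and facts are invented for illustration, not actual Cyc content.

```python
# A toy "ontology" in the engineering sense: a concept hierarchy plus
# typed assertions, loosely in the spirit of Cyc's isa/genls distinction.
# The concepts and facts here are invented for illustration.
subclass_of = {           # genls-style links: class to class
    "Dog": "Mammal",
    "Mammal": "Animal",
}
instance_of = {           # isa-style links: individual to class
    "fido": "Dog",
}

def is_a(individual, target_class):
    """Walk the subclass chain to answer queries like 'is fido an Animal?'."""
    cls = instance_of.get(individual)
    while cls is not None:
        if cls == target_class:
            return True
        cls = subclass_of.get(cls)
    return False

print(is_a("fido", "Animal"))  # True
```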
I’m not sure. Barry Smith, who leads the Basic Formal Ontology used in medical informatics, writes in his paper “Against Fantology” sentences like:

“It underlies the logical atomism of Bertrand Russell, including the central thesis according to which all form is logical form – a thesis which, be it noted, leaves no room for a discipline of formal ontology as something separate from formal logic.”
Bayesianism as described by Yvain seems a bit like what Barry Smith describes as spreadsheet ontology, with probability values instead of logical true/false values.
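For concreteness, here is a minimal sketch of that analogy, with made-up predicates and numbers: the “fantological” spreadsheet fills its cells with truth values, and the Bayesian variant fills the very same cells with probabilities, leaving the structure of the model untouched.

```python
# Toy illustration of the "spreadsheet ontology" analogy (hypothetical
# predicates; not Barry Smith's or Yvain's actual formalism).
# A fantological world-model: atomic predications F(a) with truth values.
fantology = {
    ("Red", "apple1"): True,
    ("Heavy", "apple1"): False,
    ("Red", "car1"): True,
}

# The Bayesian variant fills the same cells with probabilities instead
# of booleans; the *structure* of the model is unchanged.
bayesian = {
    ("Red", "apple1"): 0.95,
    ("Heavy", "apple1"): 0.10,
    ("Red", "car1"): 0.70,
}

def credence(model, predicate, individual):
    """Read one cell of the spreadsheet: how strongly the model
    asserts predicate(individual)."""
    value = model.get((predicate, individual))
    return float(value) if value is not None else 0.5  # toy ignorance prior

print(credence(fantology, "Red", "apple1"))  # 1.0
print(credence(bayesian, "Red", "apple1"))   # 0.95
```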
Even if ontological questions can’t be settled in a way that decides which ontology is more correct than another, it seems to me that you have to pick one ontology to use for your AGI. Different choices of how you structure that ontology will have a substantial effect on the way the AGI reasons.
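As a toy illustration of that last point (an invented example, not BFO itself): encode the same clinical observation once as a quality of a continuant and once as an occurrent with a time interval, and different questions become answerable.

```python
# A minimal sketch (invented example) of how the choice of ontology
# shapes what an agent can infer. The same observation is encoded two ways.

# Structuring 1: fever as a quality inhering in a continuant (the patient).
# Good for "what is true of the patient now?", silent about history.
patient_qualities = {"patient7": {"has_fever": True}}

# Structuring 2: fever as an occurrent (a process with a time interval).
# Supports temporal queries the first structuring cannot even express.
events = [
    {"type": "FeverEpisode", "participant": "patient7",
     "start": "2011-03-01", "end": None},  # None means ongoing
]

def feverish_now(patient):
    # Answerable under either structuring, but by different routes.
    if patient_qualities.get(patient, {}).get("has_fever"):
        return True
    return any(e["type"] == "FeverEpisode" and e["participant"] == patient
               and e["end"] is None for e in events)

def fever_onset(patient):
    # Only the event-based structuring can answer this at all.
    for e in events:
        if e["type"] == "FeverEpisode" and e["participant"] == patient:
            return e["start"]
    return None

print(feverish_now("patient7"))  # True
print(fever_onset("patient7"))   # 2011-03-01
```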