One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms, agents, or objects), as well as the mental systems it has (e.g. whether it holds true/false beliefs or probabilistic beliefs), might affect both how capable it is and which values it can comprehend.
I think the remarks about goals being ontologically associated are absolutely spot on. Goals, and any “values” distinguishing among the possible future goals in the agent’s goal space, are built around that agent’s perceived (actually, inhabited is a better word) ontology.
For example, the professional ontology of a Wall Street financial analyst includes the objects that he or she interacts with (options, stocks, futures, dividends), along with the laws and infrastructure associated with the conceptual “deductive closure” of that ontology.
Clearly, “final” (teleological and moral) principles involving approach and avoidance judgments, say, those concerning insider trading (and the negative consequences at a practical level, if not the sheer unethicality, of running afoul of the laws and rules of governance for trading those objects), are only defined within an ontological universe of discourse that contains those financial objects and the network of laws and valuations that define, and are defined by, those objects.
Smarter beings, and even we ourselves as our culture evolves and grows more complex generation after generation, acquire new ontologies and gradually retire others. Identity theft mediated by surreptitiously seeding laptops in Starbucks with keystroke-logging viruses is “theft” and is unethical. But in 1510 BCE the ontological stage on which this could play out trivially did not exist, and thus its ethical valence would have been undefined, even unintelligible.
That is why, if we can solve the friendliness problem, it will have to be by some means that gives new minds the capacity to develop robust ethical meta-intuition that can be recruited creatively, on the fly, as these beings encounter new situations calling on them to make new ethical judgments.
I happen to be a kind of meta-ethical realist, just as I am something of a mathematical platonist, but in my position this is crossed with a type of constructivist metaethics, apparently like the one subscribed to by John Danaher in his blog (after I followed the link and read it). At least, his position sounds similar to mine, although the constructivist part of my theory is supplemented with a “weak” quasi-platonist thread that I am trying to derive from some more fundamental meta-ontological principles (work in progress on that).