One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend. For instance, an agent capable of a more detailed model of the world might tend to perceive more useful ways to interact with the world, and so be more intelligent. It should also be able to represent preferences which wouldn’t have made sense in a simpler model.
This is totally right as well. We live inside our ontologies. I think one of the most distinctive, and important, features of acting, successfully aware minds (I won’t call them “intelligences” because of what I say further down in this message) is this capacity to mint new ontologies as needed, and to do so well and successfully.
“Successfully” means the ontological additions are useful, somewhat durable constructs, “cognitively penetrable” to our kind of mind; that they help us flourish and give a viable foundation for action that “works”; and that they do not back us into a local maximum or minimum.
By that I mean this: “successful” minting of ontological entities enables us to mint additional ones that also “work”.
Ontologies create us as much as we create them, and this creative process is, I think, a key feature of “successful” viable minds.
Indeed, I think this capacity to mint new ontologies, and do it well, is largely orthogonal to the two that Bostrom mentions, i.e.
1) means-end reasoning (what Bostrom might otherwise call intelligence)
2) final or teleological selection of goals from the goal space,
and to my way of thinking…
3) minting of ontological entities “successfully” and well.
In fact, in a sense, I would put my third one in first position, ahead of means-end reasoning, if I were to rank them by dependence. Even though they are orthogonal, in that they vary independently, you have to be able to mint ontologies before means-end reasoning has anything to work on. And in that sense, Katja’s suggestion that ontologies can confer more power and growth potential (for more successful sentience to come) is something I think is quite right.
But I think all three are pretty self-evidently largely orthogonal, with some qualifications that have been mentioned for Bostrom’s original two.
One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend.
I think the remarks about goals being ontologically associated are absolutely spot on. Goals, and any “values” distinguishing among the possible future goals in the agent’s goal space, are built around that agent’s perceived (actually, inhabited is a better word) ontology.
For example, the professional ontology of a Wall Street financial analyst includes the objects he or she interacts with (options, stocks, futures, dividends), along with the laws and infrastructure associated with the conceptual “deductive closure” of that ontology.
Clearly, “final” (teleological and moral) principles involving approach and avoidance judgments, say, concerning insider trading (and the negative practical consequences, if not the outright unethicality, of running afoul of the laws and rules of governance for trading those objects), are only defined within an ontological universe of discourse that contains those financial objects and the network of laws and valuations that define, and are defined by, those objects.
Smarter beings, or even we ourselves, as our culture grows more complex generation after generation, acquire new ontologies and gradually retire others. Identity theft mediated by surreptitiously seeding laptops in Starbucks with keystroke-logging viruses is “theft” and is unethical. But in 1510 BCE, the ontological stage on which this could be played out did not exist, and thus the ethical valence would have been undefined, even unintelligible.
That is why, if we can solve the friendliness problem, it will have to be by some means that gives new minds the capacity to develop robust ethical meta-intuition that can be recruited creatively, on the fly, as these beings encounter new situations calling upon them to make new ethical judgments.
I happen to be a version of meta-ethical realist, much as I am something of a mathematical platonist, but in my position this is crossed with a type of constructivist metaethics, apparently like that subscribed to by John Danaher in his blog (after I followed the link and read it). At least, his position sounds similar to mine, although the constructivist part of my theory is supplemented with a “weak” quasi-platonist thread that I am trying to derive from some more fundamental meta-ontological principles (work in progress on that).