If this is supposed to be a description of how actual human brains work, I guess we naturally don’t have any “useful metrics we want to optimize for”. Instead we are driven by various impulses, which historically appeared by random mutations, and if they happened to contribute to human survival and reproduction, they were preserved and promoted by natural selection. At this moment, the impulses that sometimes make us (want to) optimize for some useful metrics are a part of that set. But they are just one among many desires, not some essential building block of the human brain.
There is some problem even with having seemingly finite goals. For example, if the machine has a probabilistic model of the world and you ask it to make 100 paperclips, there is a potential risk (depending on the specific architecture) that the machine would recognize that it doesn’t have literally 100% certainty of having already created 100 paperclips, and will try to optimize for making this certainty as high as possible (destroying humanity as a side effect). For example, the machine may think “maybe humans are messing with my memory and visual output to make me falsely believe that I have 100 paperclips, when in reality maybe I have none; I guess it would be safer to kill them all”. So maybe the goal should instead be something like “make 100 paperclips with probability at least 99%”, but… you know, the general idea is that there may be some unnoticed way in which the supposedly finite goal might spawn an infinite subtask.
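To make that concrete, here is a toy sketch (entirely my own illustration, with a made-up certainty model and hypothetical function names, not anyone’s actual agent design): an agent whose goal is to maximize P(100 paperclips exist) always prefers to spend more resources on extra certainty, while an agent told to stop at a 99% threshold halts after a finite amount of effort.

```python
# Toy sketch (my own illustration): a probability-maximizer vs. a 99% satisficer.
# The "certainty model" below is made up purely to show the shape of the problem.

def certainty_after(extra_resources_spent):
    # Hypothetical model: each extra unit of resources halves the remaining doubt.
    return 1.0 - 0.5 ** (1 + extra_resources_spent)

def maximizer_keeps_going(current_spend):
    # Spending more always raises P(success) a little, so the maximizer always prefers it.
    return certainty_after(current_spend + 1) > certainty_after(current_spend)

def satisficer_keeps_going(current_spend, threshold=0.99):
    # The satisficer stops as soon as the threshold is met.
    return certainty_after(current_spend) < threshold

spend = 0
while satisficer_keeps_going(spend):
    spend += 1

print("satisficer stops after", spend, "units of effort")           # finite
print("maximizer still wants more?", maximizer_keeps_going(spend))  # True, forever
```

Of course the threshold version only looks safe here because this toy world contains nothing else; the worry above is exactly that a real goal specification might still hide an unbounded subtask somewhere.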
Otherwise… this seems like a nice high-level view of things, but the devil is in the details. You could write thousands of scientific papers merely on how to correctly implement things like “picture of the world”, “concept of a cat”, etc. That is, the heavy work is hidden behind these seemingly innocent words.
Thank you for your reply!

For a long time, the way ANNs work kinda made sense to me, and seemed to map nicely onto my (shallow) understanding of how the human brain works. But I could never imagine how values/drives/desires could be implemented in terms of an ANN.
The idea that you can just quantify something you want as a metric, feed it as an input, and see whether the output gets closer to what we want is new to me. It was a little epiphany that seems to make sense, so it prompted me to write this post.
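To spell out what I mean, here is a minimal toy example of my own (a one-layer “network” in plain numpy, with made-up data, not any particular framework): “what we want” is encoded as a target, the metric is a single number comparing the output to that target, and learning is just nudging the weights until the metric improves.

```python
# Minimal sketch (my own toy example) of "quantify what you want as a metric
# and nudge the network until its output scores better on that metric".
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))              # observations
target = x @ np.array([1.0, -2.0, 0.5])    # the thing we want the output to match

w = np.zeros(3)                            # weights of a one-layer "network"

def metric(w):
    # The metric that quantifies "how far is the output from what we want".
    return np.mean((x @ w - target) ** 2)

for step in range(200):
    grad = 2 * x.T @ (x @ w - target) / len(x)  # direction that improves the metric
    w -= 0.1 * grad                             # nudge the weights that way

print("metric before:", metric(np.zeros(3)), "after:", metric(w))
```

A real ANN has many layers and a fancier optimizer, but as far as I understand it, the “values” part really is just this: some number computed from the output that we push in the direction we prefer.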
Evolutionarily, I guess the human/animal utility function would be something like “How many copies of myself have I made? Let’s maximize that.” But from the subjective perspective, it’s probably more like “Am I receiving pleasure from the reward system my brain happened to develop?”
For sure there are a bunch of different impulses/drives, but they are all just little rewards for transforming the current state of the world into the one our brain prefers, right? Maybe they appeared randomly, but if you were to design one intentionally, is that how you would go about it?
Learning
Get inputs from eyes/ears.
Recognize patterns, make predictions.
Compare predictions to how things turned out, update the beliefs, improve the model of the world.
Repeat.
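Here is a rough sketch of that loop in code (my own toy stand-ins: a fake sensory signal and a one-number “model of the world”, nothing like a real architecture):

```python
# Rough sketch (toy stand-ins only) of the learning loop above:
# get input, predict, compare, update the model of the world, repeat.
import random

world_model = {"expected_signal": 0.0}   # stand-in for the learned model of the world

def get_input():
    # Stand-in for "get inputs from eyes/ears": a noisy signal around 1.0.
    return 1.0 + random.gauss(0, 0.1)

def predict(model):
    # Stand-in for "recognize patterns, make predictions".
    return model["expected_signal"]

def update(model, prediction, observation, lr=0.05):
    # "Compare predictions to how things turned out, update the beliefs."
    error = observation - prediction
    model["expected_signal"] += lr * error

for _ in range(1000):                    # "Repeat."
    observation = get_input()
    prediction = predict(world_model)
    update(world_model, prediction, observation)

print(world_model)                       # the model ends up expecting roughly 1.0
```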
General intelligence taking actions towards its values
Perceive the difference between the state of the world and the state I want.
Use the model of the world that I’ve learned to predict the outcomes of possible actions.
If I predict that applying an action to the world will lead to reward, take that action.
See how it turned out, update the model, repeat.
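And a similarly rough sketch of the action loop (again with made-up numbers and a trivial “model of the world” that just says how much each action is expected to change things):

```python
# Rough sketch (toy numbers only) of acting towards a desired state:
# compare the world to what I want, predict outcomes of candidate actions,
# take the best one, see how it went, update the model, repeat.
world_state = 0.0
desired_state = 10.0
model_effect = {"increase": 0.8, "decrease": -0.8, "wait": 0.0}  # learned effect of each action

def predicted_reward(action):
    # Reward = how much closer the predicted next state is to the state I want.
    predicted_next = world_state + model_effect[action]
    return abs(desired_state - world_state) - abs(desired_state - predicted_next)

def apply_action(action):
    # The real world roughly (but not exactly) matches the model here.
    return world_state + model_effect[action] * 0.9

for _ in range(30):
    action = max(model_effect, key=predicted_reward)   # pick the most promising action
    new_state = apply_action(action)
    observed_effect = new_state - world_state
    model_effect[action] += 0.1 * (observed_effect - model_effect[action])  # update the model
    world_state = new_state

print(round(world_state, 2))   # ends up near the desired state
```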
I agree that specific goals can also have unintended consequences. It just occurred to me that this kind of problem would be much easier to solve than trying to align the abstract values, and the outcome is the same—we get what we want.
Oh, and I totally agree that there’s probably a ton of complexity when it comes to the implementation. But it would be pretty cool to figure out at least the general idea of what intelligence and consciousness are, what things we need to implement, and how they fit together.
In real life, the problem with metrics is that if you don’t get them exactly right (which is difficult), you can easily end up with something useless, often even actively harmful.
And yet, metrics often are useful in real life. You generally want to measure things. You need to know how much money you have, and it is better to know in detail the structure of your income and expenses. If you want to e.g. exercise regularly or stop eating chocolate, keeping a log of which days you exercised or avoided the chocolate is often a good first step.
Thus we find ourselves in a paradox: we need good metrics, but we must remember that they are mere approximations of reality, lest we start optimizing for the metrics at the expense of the real things. (Good advice for a human, not very useful for constructing the AI.)
Evolutionarily, I guess the human/animal utility function would be something like “How many copies of myself have I made? Let’s maximize that.” But from the subjective perspective, it’s probably more like “Am I receiving pleasure from the reward system my brain happened to develop?”
Yes, the “utility” of evolution is not the same as that of the evolved human.
For sure there are a bunch of different impulses/drives, but they are all just little rewards for transforming the current state of the world into the one our brain prefers, right?
Sometimes following your impulse can make you unhappy and still on average increase your fitness; jealousy, for example. (Jealous people are made less happy by the idea that their partners might be cheating on them. But feeling this discomfort and guarding one’s partner increases reproductive fitness on average.) I mean, yes, finding out that despite your suspicions your partner does not cheat on you makes you happier (or less unhappy) than finding out that they actually do. But not worrying about the possibility would make you even happier. Instinctively, humans are not even happiness maximizers.