I can’t speak for anyone else, but I expect that a sufficiently well designed intelligence, faced with hard choices, makes them. If an intelligence is designed in such a way that, when faced with hard choices, it fails to make them (as happens to humans a lot), I consider that a design failure.
And yes, I expect that it makes them in such a way as to maximize the expected value of its choice… that is, so as to do, insofar as possible, what is worth doing and to pursue what is worth pursuing. Which presumes that at any given moment it will at least have a working belief about what is worth doing and worth pursuing.
If an intelligence is designed in such a way that it can’t make a choice because it doesn’t know what it’s trying to achieve by choosing (that is, it doesn’t know what it values), I again consider that a design failure. (Again, this happens to humans a lot.)
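For concreteness, here’s a minimal sketch of that decision rule in Python. The actions, outcome probabilities, and values below are placeholder assumptions, not a claim about where a real agent would get them; the point is only that the rule always returns a choice:

```python
# A toy sketch of "maximize the expected value of the choice": pick
# whichever action has the highest probability-weighted value under the
# agent's current beliefs. All numbers here are illustrative placeholders.

def expected_value(action, beliefs, value):
    """Sum of value(outcome), weighted by the believed probability of each outcome."""
    return sum(p * value(outcome) for outcome, p in beliefs[action].items())

def choose(actions, beliefs, value):
    """Always returns *some* action, even when the options are hard to compare."""
    return max(actions, key=lambda a: expected_value(a, beliefs, value))

# Hypothetical beliefs about two hard options.
beliefs = {
    "option_a": {"good": 0.6, "bad": 0.4},
    "option_b": {"good": 0.5, "bad": 0.5},
}
values = {"good": 1.0, "bad": -1.0}

print(choose(list(beliefs), beliefs, values.get))  # -> option_a
```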
The level of executive function required of normal people to function in modern society is astonishingly high by historical standards. It’s not surprising that people have a lot of “above my pay grade” reactions to difficult decisions, and that decision-making ability is highly variable among people.
100% agreed.
I have an enormous amount of sympathy for us humans, who are required to make these kinds of decisions with nothing but our brains. My sympathy increased radically during the period of my life when, due to traumatic brain injury, my level of executive function was highly impaired and ordering lunch became an “above my pay grade” decision. We really do astonishingly well, for what we are.
But none of that changes my belief that we aren’t especially well designed for making hard choices.
It’s also not surprising that people can’t fly across the Atlantic Ocean. But I expect a sufficiently well designed aircraft to do so.
It’s interesting that we view those who do make the tough decisions as virtuous, e.g. the commander in a war movie (I’m thinking of Bill Adama). We recognize that it is a hard but valuable thing to do!
Could you elaborate on this?
Sure. For much of human history, the basic decision-making unit has been the household, rather than the individual, and household sizes have decreased significantly over time. With the “three generations under one roof” model, individuals could heed the sage wisdom of someone who had lived several times as long as they had when making important decisions like what career to follow or who to marry, and in many cases the social pressure to conform to the wishes of the elders was significant. As well, many people were considered property, and so didn’t need to make decisions that would alter the course of their lives, because someone else made those decisions for them. Serfs rarely needed to make complicated financial decisions. Limited mobility made deciding where to live easier.
Now, individuals (of both sexes!) are expected to decide who to marry and what job to pursue, mostly on their own. The replacements for the apprentice system (high school and college) provide little structure compared to traditional apprenticeships. Individuals are expected to negotiate many complicated financial transactions for themselves and to be stewards of property.
(This is a good thing in general, but it is worth remembering that it’s a great thing for people who are good at being executives and mediocre to bad for people who aren’t. As well, varying family types have existed for a long time, which may have had an impact on the development of societies and selected for different traits.)
A common problem that faces humans is that they often have to choose between two different things that they value (such as freedom vs. equality), without an obvious way to make a numerical comparison between the two. How many freeons equal one egaliton? It’s certainly inconvenient, but the complexity of value is a fundamentally human feature.
It seems to me that it will be very hard to come up with utility functions for fAI that capture all the things that humans find valuable in life. The topologies of the two systems don’t match up.
Is this a design failure? I’m not so sure. I’m not sold on the desirability of having an easily computable value function.
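One way to see the force of the “freeons per egaliton” problem: if you force the comparison with a weighted sum, the decision you get out is determined by the exchange rate you assumed going in. A toy sketch in Python, with entirely made-up weights:

```python
# Toy illustration: collapsing freedom and equality into one number with a
# weighted sum. The "exchange rate" (freeons per egaliton) is made up, and
# the chosen policy flips depending on which made-up rate you pick.

options = {
    "policy_a": {"freedom": 0.9, "equality": 0.3},
    "policy_b": {"freedom": 0.4, "equality": 0.8},
}

def scalarize(scores, freeons_per_egaliton):
    """Force the two values onto a single scale."""
    return scores["freedom"] + freeons_per_egaliton * scores["equality"]

for rate in (0.5, 2.0):  # two equally arbitrary exchange rates
    best = max(options, key=lambda o: scalarize(options[o], rate))
    print(f"rate {rate}: choose {best}")
# rate 0.5: choose policy_a
# rate 2.0: choose policy_b
```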
I would agree that we’re often in positions where we’re forced to choose between two things that we value and we just don’t know how to make that choice.
Sometimes, as you say, it’s because we don’t know how to compare the two. (Talk of numerical comparison is, I think, beside the point.) Sometimes it’s because we can’t accept giving up something of value, even in exchange for something of greater value. Sometimes it’s for other reasons.
I would agree that coming up with a way to evaluate possible states of the world that takes into account all of the things humans value is very difficult. This is true whether the evaluation is by means of a utility function for fAI or via some other means. It’s a hard problem.
I would agree that replacing the hard-to-compute value function(s) I actually have with some other value function(s) that are easier to compute is not desirable.
Building an automated system that can compute the hard-to-compute value function(s) I actually have more reliably than my brain can—for example, a system that can evaluate various possible states of the world and predict which ones would actually make me satisfied and fulfilled to live in, and be right more often than I am—sounds pretty desirable to me. I have no more desire to make that calculation with my brain, given better alternatives, than I have to calculate square roots of seven-digit numbers with it.
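As a minimal sketch of the kind of tool I mean, assuming (and it is a large assumption) that a model of my existing preferences could be fit to my own judgments, the system would rank candidate states of the world by predicted satisfaction rather than replacing my values with something simpler:

```python
# A sketch of the tool described above: rank candidate world-states by a
# model of the values I already have. The predictor below is a stand-in
# with invented weights; fitting one that out-predicts my own brain is the
# hard part, and nothing here claims to solve it.

def predicted_satisfaction(state):
    """Stand-in for a learned model of how satisfying I'd find living in `state`."""
    weights = {"health": 0.5, "autonomy": 0.3, "community": 0.2}  # hypothetical
    return sum(weights[k] * state.get(k, 0.0) for k in weights)

def rank_states(candidates):
    """Best predicted state first; I consult the ranking instead of grinding it out myself."""
    return sorted(candidates, key=predicted_satisfaction, reverse=True)

candidates = [
    {"health": 0.9, "autonomy": 0.2, "community": 0.7},
    {"health": 0.6, "autonomy": 0.9, "community": 0.5},
]
print(rank_states(candidates)[0])
```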
Upvoted for use of the phrase “How many freeons equal one egaliton?”