But even if “make very little difference” is true, it’s little in a relative sense …
The conjecture is that it is true in an absolute sense. It would have made no sense at all for me to even mention it if I had meant it in the relative sense that you set up here as a straw man and then knocked down.
There is something odd going on here. Three very intelligent people are interpreting what I write quite differently than the way I intend it. Probably it is because I generated confusion by misusing words that have a fixed meaning here. And, in this case, it may be because you were thinking of our “fragility” conversation rather than the main posting. But, whatever the reason, I’m finding this very frustrating.
I guess I took your conjecture to be the “relative” one because whether or not it is true perhaps doesn’t depend on the details of one’s utility function, and we (or at least I) were talking about whether the question “what do I want?” is an important one. I’m not sure how you hope to show the “absolute” version in the same way.
I’m not sure how you hope to show the “absolute” version in the same way.
Well, Omohundro showed that a certain collection of instrumental values tend to arise independently of the ‘seeded’ intrinsic values. In fact, decision making tends to be dominated by consideration of these ‘convergent’ instrumental values, rather than the human-inserted seed values.
Next, consider that those human values themselves originated as heuristic approximations of instrumental values contributing to the intrinsic value of interest to our optimization process—natural selection. The fact that we ended up with the particular heuristics that we did is not due to the fact that the intrinsic value for that process was reproductive success—every species in the biosphere evolved under the guidance of that value. The reason why humans ended up with values like curiosity, reciprocity, and toleration has to do with the environment in which we evolved.
So, my hope is that we can show that AIs will converge to human-like instrumental/heuristic values if they do their self-updating in a human-like evolutionary environment, regardless of the details of their seeds. That is the vision, anyway.
I notice that Robin Hanson takes a position similar to yours, in that he thinks things will turn out ok from our perspective if uploads/AIs evolve in an environment defined by certain rules (in his case property laws and such, rather than sexual reproduction).
But I think he also thinks that we do not actually have a choice between such evolution and a FOOMing singleton (i.e. FOOMing singleton is nearly impossible to achieve), whereas you think we might have a choice or at least you’re not taking a position on that. Correct me if I’m wrong here.
Anyway, suppose you and Robin are right and we do have some leverage over the environment that future AIs will evolve in, and can use that leverage to predictably influence the eventual outcome. I contend we still have to figure out what we want, so that we know how to apply that leverage. Presumably we can’t possibly make the AI evolutionary environment exactly like the human one, but we might have a choice over a range of environments, some more human-like than others. But it’s not necessarily true that the most human-like environment leads to the best outcome. (Nor is it even clear what it means for one environment to be more human-like than another.) So, among the possible outcomes we can aim for, we’ll still have to decide which ones are better than others, and to do that, we need to know what we want, which involves, at least in part, either figuring out what morality is, or showing that it’s meaningless or otherwise unrelated to what we want.

Do you disagree on this point?
But I think [Hanson] also thinks that we do not actually have a choice between such evolution and a FOOMing singleton (i.e. FOOMing singleton is nearly impossible to achieve), whereas you think we might have a choice or at least you’re not taking a position on that. Correct me if I’m wrong here.
I tend toward FOOM skepticism, but I don’t think it is “nearly impossible”. Define a FOOM as a scenario leading in at most 10 years from the first human-level AI to a singleton which has taken effective control over the world’s economy. I rate the probability of a FOOM at 40% assuming that almost all AI researchers want a FOOM and at 5% assuming that almost all AI researchers want to prevent a FOOM. I’m under the impression that currently a majority of singularitarians want a FOOM, but I hope that that ratio will fall as the dangers of a FOOMing singleton become more widely known.
I contend we still have to figure out what we want, so that we know how to apply that leverage. … Do you disagree on this point?
No, I agree. Agree enthusiastically. Though I might change the wording just a bit. Instead of “we still have to figure out what we want”, I might have written “we still have to negotiate what we want”.
My turn now. Do you disagree with this shift of emphasis from the intellectual to the political?
I suppose if you already know what you personally want, then your next problem is negotiation. I’m still stuck on the first problem, unfortunately.

What is your answer to The Lifespan Dilemma, for example?
I only skimmed that posting, and I failed to find any single question there which you apparently meant for me to answer. But let me invent my own question and answer it.
Suppose I expect to live for 10,000 years. Omega appears and offers me a deal. Omega will extend my lifetime to infinity if I simply agree to submit to torture for 15 minutes immediately—the torture being that I have to actually read that posting of Eliezer’s with care.
I would turn down Omega’s offer without regret, because I believe in (exponentially) discounting future utilities. Roughly speaking, I count the pleasures and pains that I will encounter next year as something like 1% less significant than this year’s. Doing the math, I estimate that this makes my first Omega-granted bonus year, 10,000 years from now, worth only about 1/10^44 as much as this year. Or, saying it another way, my first ‘natural’ 10,000 years is worth more than 10^43 times as much as the infinite period of time thereafter. The next fifteen minutes are more valuable than that entire infinite period, and I don’t want to waste those fifteen minutes re-reading that posting.
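(A quick check of that arithmetic, as a minimal Python sketch; it assumes only what the scenario above specifies: utility accruing at a constant rate within each year and a flat 1% annual discount.)

```python
# Exponential discounting at 1% per year: checking the estimates above.
# Assumes utility accrues at a constant rate of 1 "util" per undiscounted year.
r = 0.01                 # annual discount rate
d = 1.0 - r              # per-year discount factor

bonus_year  = d ** 10_000                    # weight of the year starting 10,000 years out
first_10k   = (1 - d ** 10_000) / r          # discounted value of years 0..9,999
tail        = d ** 10_000 / r                # discounted value of years 10,000..infinity
fifteen_min = 15 / (365.25 * 24 * 60)        # fraction of the current year

print(f"year 10,000 vs. this year:    {bonus_year:.2e}")        # ~2e-44
print(f"first 10,000 years / tail:    {first_10k / tail:.2e}")  # ~4e43
print(f"15 minutes now / entire tail: {fifteen_min / tail:.2e}")  # ~1e37
```

On these assumptions the next fifteen minutes really do outweigh the entire infinite tail, by a factor of roughly 10^37.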
And I am quite sure that 99% of mankind would agree with me that 1% discounting per year is not an excessive discount rate. That is, in large part, why I think negotiation is important. It is because typical SIAI thinking about morality is completely unacceptable to most of mankind and SIAI seem to be in denial about it.
Have you thought through all of the implications of a 1% discount rate? For example, have you considered that if you negotiate with someone who discounts the future less, say at 0.1% per year, you’ll end up trading the use of all of your resources after X number of years in exchange for use of his resources before X number of years, and so almost the entire future of the universe will be determined by the values of those whose discount rates are lower than yours?
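(To make that worry concrete, here is a toy sketch under assumptions of my own that are not in the discussion: each party receives one unit of resources per year forever, both discount exponentially, and they consider the simple deal “the 1% discounter takes both units for the first X years, the 0.1% discounter takes both units thereafter.”)

```python
import math

# Toy model (assumptions mine): agents A and B each receive one unit of
# resources per year forever.  A discounts at 1%/yr, B at 0.1%/yr.
# Deal considered: A consumes both units in years 0..X-1, B consumes both
# units from year X onward.

def a_prefers_swap(X, d=0.99):
    gain = (1 - d ** X) / (1 - d)   # extra unit in each of years 0..X-1
    loss = d ** X / (1 - d)         # its own unit in every year from X on
    return gain > loss

def b_prefers_swap(X, d=0.999):
    gain = d ** X / (1 - d)         # extra unit in every year from X on
    loss = (1 - d ** X) / (1 - d)   # its own unit in each of years 0..X-1
    return gain > loss

lo = math.ceil(math.log(0.5) / math.log(0.99))    # A agrees once X is about 69 or more
hi = math.floor(math.log(0.5) / math.log(0.999))  # B agrees as long as X is about 692 or less
print(f"both parties prefer the swap for any X from about {lo} to {hi} years")
assert a_prefers_swap(100) and b_prefers_swap(100)
```

Any X in that range is a mutually agreeable trade, and after year X the whole future belongs to the more patient party, which is exactly the outcome described above.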
If that doesn’t bother you, and you’re really pretty sure you want a 1% discount rate, do you not have other areas where you don’t know what you want?
For example, what exactly is the nature of pleasure and pain? I don’t want people to torture simulated humans, but what if they claim that the simulated humans have been subtly modified so that they only look like they’re feeling pain, but aren’t really? How can I tell whether some computation is experiencing pain or pleasure?
And here’s a related example: Presumably having one kilogram of orgasmium in the universe is better than having none (all else equal) but you probably don’t want to tile the universe with it. Exactly how much worse is a second kilogram of the stuff compared to the first? (If you don’t care about orgasmium in the abstract, suppose that it’s a copy of your brain experiencing some ridiculously high amount of pleasure.)
Have you already worked out all such problems, or at least know the principles by which you’ll figure them out?
Have you thought through all of the implications of a 1% discount rate? …almost the entire future of the universe will be determined by the values of those whose discount rates are lower than yours?
I don’t know about thinking through all of the implications, but I have certainly thought through that one. Which is one reason why I would advocate that any AIs that we build be hard-wired with a rather steep discount rate. Entities with very low discount rates are extremely difficult to control through market incentives. Murder is the only effective option, and the AI knows that, leading to a very unstable situation.
do you not have other areas where you don’t know what you want?
Oh, I’m sure I do. And I’m sure that what I want will change when I experience the Brave New World for myself. That is why I advocate avoiding any situation in which I have to specify my fragile values correctly the first time—have to get it right because someone decided that the AI should make its own decisions about self-improvement, and so we need to make sure its values are ultra-stable.
For example, what exactly is the nature of pleasure and pain? I don’t want people to torture simulated humans, but what if they claim that the simulated humans have been subtly modified so that they only look like they’re feeling pain, but aren’t really? How can I tell whether some computation is experiencing pain or pleasure?
I certainly have some sympathy for people who find themselves in that kind of moral quandary. Those kinds of problems just don’t show up when your moral system requires no particular obligations to entities you have never met, with whom you cannot communicate, and with whom you have no direct or indirect agreements.
Have you already worked out all such problems, or at least know the principles by which you’ll figure them out?
I presume you ask rhetorically, but as it happens, the answer is yes. I at least know the principles. My moral system is pretty simple—roughly a Humean rational self-interest, but as it would play out in a fictional society in which all actions are observed and all desires are known. But that still presents me with moral quandaries—because in reality not all desires are known, and in order to act morally I need to know what other people want.
I find it odd that utilitarians seem less driven to find out what other people want than do egoists like myself.
Have you thought through all of the implications of a 1% discount rate? [...] almost the entire future of the universe will be determined by the values of those whose discount rates are lower than yours?
I don’t know about thinking through all of the implications, but I have certainly thought through that one. Which is one reason why I would advocate that any AIs that we build be hard-wired with a rather steep discount rate. Entities with very low discount rates are extremely difficult to control through market incentives. [...]
Control—through market incentives?!? How not to do it, surely. Soon the machine will have all the chips, and you will have none—and therefore nothing to bargain with.
The more conventional solution is to control the machine by programming its brain. Then, control via market incentives becomes irrelevant. So: I don’t think this reason for discounting is very practical.
Odd. I was expecting that it would trade any chips it happened to acquire for computronium, cat girls, and cat boys (who would perform scheduled maintenance in its volcano lair). Agents with a high discount rate just aren’t that interested in investing. Delayed gratification just doesn’t appeal to them.
Odd. I was expecting that it would trade any chips it happened to acquire for computronium, cat girls, and cat boys (who would perform scheduled maintenance in its volcano lair).
That doesn’t sound as though there is any substantive disagreement.
Agents with a high discount rate just aren’t that interested in investing. Delayed gratification just doesn’t appeal to them.
...and nor does that.
However, you appear not to be addressing the issue—which was that your rationale for rapid discounting in machine intelligence was based on a scenario where the machine’s goals and the humans’ goals are different—and the humans attempt to exercise control over the machines using market incentives.
Conventional thinking around here is that this kind of scenario often doesn’t work out too well for the humans—and it represents a mess that we are better off not getting into in the first place.
So: you aren’t on the same page—which may be why your conclusions differ. However, why aren’t you on the same page? Do you think control via market incentives is desirable? Inevitable? Likely?
The problem with controlling machines has more to do with power than discount rates. The machine is (potentially) more powerful. It doesn’t much matter how it discounts—it is likely to get its way. So, its way had better be our way.
The more conventional solution is to control the machine by programming its brain.
Do you think control via market incentives is desirable? Inevitable? Likely?
Programming something and then allowing it to run unattended in the hope that you programmed correctly is not ‘control’, as the term is usually understood in ‘control theory’.
I would say that I believe that control of an AI by continuing trade is ‘necessary’ if we expect that our desires will change over time, and we will want to nudge the AI (or build a new AI) to satisfy those unanticipated desires.
It certainly makes sense to try to build machines whose values are aligned with ours over the short term—such machines will have little credible power to threaten us—just as parents have little power to credibly threaten their children, since carrying out such threats directly reduces the threatener’s own utility.
And this also means that the machine needs to discount (its altruistic interest in) human welfare at the same rate as humans do—otherwise, if it discounts faster, then it can threaten humans with a horrible future (since it cares only about the human present). Or, if it temporally discounts human happiness much more slowly than humans do, it will be able to threaten to delay human gratification.
However, if we want to be able to control our machines (to be able to cause them to do things that we did not originally imagine wanting them to do) then we do need to program in some potential carrots and sticks—things our machines care about that only humans can provide. These things need not be physical—a metaphoric pat on the head may do the trick. But if we are wise, we will program our machines to temporally discount this kind of gratification rather sharply—we don’t want them embarking on long-term plans to increase future head-pats at the cost of incurring our short-term displeasure.
Incidentally, over the past few comments, I have noticed that you repeatedly refer to “the machine” where I might have written “machines” or “a machine”. Do you think that a singleton-dominated future is desirable? Inevitable? Likely?
And this also means that the machine needs to discount (its altruistic interest in) human welfare at the same rate as humans do—otherwise, if it discounts faster, then it can threaten humans with a horrible future (since it cares only about the human present). Or, if it temporally discounts human happiness much more slowly than humans do, it will be able to threaten to delay human gratification.
If a machine wants for humans what the humans want for themselves, it wants to discount that stuff the way they like it. That doesn’t imply that it has any temporal discounting in its utility function—it is just using a moral mirror.
Incidentally, over the past few comments, I have noticed that you repeatedly refer to “the machine” where I might have written “machines” or “a machine”. Do you think that a singleton-dominated future is desirable? Inevitable? Likely?
I certainly wasn’t thinking about that issue consciously. Our brains may just handle examples a little differently.

And your decision not to answer my questions … Did you think about that consciously?

Of course. I’m prioritising. I did already make five replies to your one comment—and the proposed shift of direction seemed to be quite a digression.

My existing material on the topic:

http://alife.co.uk/essays/one_big_organism/
http://alife.co.uk/essays/self_directed_evolution/
http://alife.co.uk/essays/the_second_superintelligence/
It is challenging to answer directly because the premise that there is either one or many is questionable. There are degrees of domination—and we already have things like the United Nations.
Also, this seems to be an area where civilisation will probably get what it wants—so it’s down to us to some extent—which makes this a difficult area to make predictions in. However, I do think a mostly-united future—with few revolutions and little fighting—is more likely than not. An extremely tightly-united future also seems quite plausible to me. Material like this seems to be an unconvincing reason for doubt.
However, if we want to be able to control our machines (to be able to cause them to do things that we did not originally imagine wanting them to do) then we do need to program in some potential carrots and sticks—things our machines care about that only humans can provide.
No. That’s the “reinforcement learning” model. There is also the “recompile its brain” model.
The reinforcement learning model is problematical. If you hit a superintelligence with a stick, it will probably soon find a way to take the stick away from you.
I would say that I believe that control of an AI by continuing trade is ‘necessary’ if we expect that our desires will change over time, and we will want to nudge the AI (or build a new AI) to satisfy those unanticipated desires.
Well, that surely isn’t right. Asimov knew that! He proposed making the machines want to do what we want them to—by making them follow our instructions.
Programming something and then allowing it to run unattended in the hope that you programmed correctly is not ‘control’, as the term is usually understood in ‘control theory’.
A straw man—from my POV. I never said “unattended” in the first place.
If you have already settled on a moral system, then it’s totally understandable why you might not be terribly interested in meta-ethics (in the sense of “the nature of morality”) at this point, but more into applied ethics, which I now see is what your post is really about. But I wish you had mentioned that fact several comments upstream, when I said that I’m interested in meta-ethics because I’m not sure what I want. If you had mentioned it, I probably wouldn’t have tried to convince you that meta-ethics ought to be of interest to you too.
If you have already settled on a moral system, then it’s totally understandable why you might not be terribly interested in meta-ethics (in the sense of “the nature of morality”) at this point, but more into applied ethics, which I now see is what your post is really about.
Wow! Massive confusion. First let me clarify that I am interested in meta-ethics. I’ve read Hume, G. E. Moore, Nozick, Rawls, Gauthier, and tried to read (since I learned of him here) Parfit. Second, I don’t see why you would expect someone who has settled on a moral system to lose interest in meta-ethics. Third, I am totally puzzled how you could have reached the conclusion that my post was about applied ethics. Is there any internal evidence you can point to?
I would certainly agree that our recent conversation has veered into applied ethics. But that is because you keep asking applied ethics questions (apparently for purposes of illustration) and I keep answering. Sorry, my fault. I shouldn’t answer rhetorical questions.
I wish you had mentioned that fact several comments upstream, when I said that I’m interested in meta-ethics because I’m not sure what I want. If you had mentioned it, I probably wouldn’t have tried to convince you that meta-ethics ought to be of interest to you too.
I wish I had realized that convincing me of that was what you were trying to do. I was under the impression that you were arguing that clarifying and justifying one’s own ethical viewpoint is the urgent task, while I was arguing that comprehending and accommodating the diversity in ethical viewpoints among mankind is more important.
Have you thought through all of the implications of a 1% discount rate? For example, have you considered that if you negotiate with someone who discounts the future less, say at 0.1% per year, you’ll end up trading the use of all of your resources after X number of years in exchange for use of his resources before X number of years, and so almost the entire future of the universe will be determined by the values of those whose discount rates are lower than yours?
I am pretty sure that many humans discount faster than this today, on entirely sensible and rational grounds. What dominates the future has to do with power and reproductive rates, as well as discounting—and things like senescence and fertility decline make discounting sensible.
Basically I think that you can’t really have a sensible discussion about this without distinguishing between instrumental discounting and ultimate discounting.
Instrumental discounting is inevitable—and can be fairly rapid. It is ultimate discounting that is more suspect.
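(One way to see why instrumental discounting is unavoidable: a constant chance of not surviving to collect a payoff already produces an exponential discount on expected value, with no time preference in the utility function at all. A minimal sketch; the 2% per-year hazard rate is just an illustrative number.)

```python
# A constant hazard rate induces exponential "instrumental" discounting even
# for an agent with no intrinsic time preference.  The 2%/yr figure is illustrative.
hazard = 0.02   # chance per year of not surviving to collect

def expected_value(payoff, years_away):
    # undiscounted utility times the probability of still being around to get it
    return payoff * (1 - hazard) ** years_away

for t in (1, 10, 50, 100):
    print(f"payoff of 1 util, {t:>3} years away: expected value {expected_value(1, t):.4f}")
# The effective discount factor per year is exactly (1 - hazard) = 0.98, i.e. the
# agent behaves like a 2% exponential discounter on purely instrumental grounds.
```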
And I am quite sure that 99% of mankind would agree with me that 1% discounting per year is not an excessive discount rate.
I suspect that 99% of mankind would give different answers to that question, depending on whether it’s framed as giving up X now in exchange for receiving Y N years from now, or X N years ago for Y now.

Not to mention that typical humans behave like hyperbolic discounters, and many cannot even be made to understand the concept of a “discount rate”.
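(For readers unfamiliar with the term, a minimal sketch of the difference; the constants are illustrative only. Hyperbolic weights fall off steeply at short delays and slowly at long ones, which produces preference reversals that an exponential discounter never exhibits.)

```python
# Hyperbolic vs. exponential discount weights (constants are illustrative).
# Hyperbolic: w(t) = 1 / (1 + k*t);  exponential: w(t) = d**t, with t in days.

def w_hyp(t, k=0.2):
    return 1.0 / (1.0 + k * t)

def w_exp(t, d=0.95):
    return d ** t

# Choice: $100 at a delay of t days versus $110 one day later, re-evaluated as
# the payoff dates approach.
for t in (30, 10, 2, 0):
    hyp = "$110 later" if 110 * w_hyp(t + 1) > 100 * w_hyp(t) else "$100 sooner"
    exp = "$110 later" if 110 * w_exp(t + 1) > 100 * w_exp(t) else "$100 sooner"
    print(f"{t:>2} days out: hyperbolic prefers {hyp}, exponential prefers {exp}")
# The hyperbolic column flips as the dates get close; the exponential one never
# does, because d**(t+1) / d**t is the same ratio at every horizon.
```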
Quite probably true. Which of course suggests the question: How (or how much) should “typical humans” be consulted about our plans for their future?
Yeah, I know that is an unfair way to ask the question. And I admit that Eliezer, at least, is actually doing something to raise the waterline. But it is a serious ethical question for utilitarians and a serious political question for egoists. And the closest thing I have seen to an answer for that question around here is something like “Well, we will scan their brains, or observe their behavior, or something. And then try to get something coherent out of that data. But God forbid we should ask them about it. That would just confuse things.”
It might make an interesting rationality exercise to have 6-10 people conduct some kind of discussion/negotiation/joint-decision-making exercise to flesh out their intuitions as to the type of post-singularity society they would like to live in.
My intuition is that, even if you are not sure what you want, the interactive process will probably help you to clarify exactly what you do not want, and thus assist in both personal and collective understanding of values.
It might be even more interesting to have two or more such ‘negotiations’ proceeding simultaneously, and then compare results.
It might make an interesting rationality exercise to have 6-10 people conduct some kind of discussion/negotiation/joint-decision-making exercise to flesh out their intuitions as to the type of post-singularity society they would like to live in.
Sign me up for 100 years with the catgirls in my volcano lair.
More generally I (strongly) prefer a situation in which the available neg-entropy is distributed, for the owners to do with as they please (with limits). That moves negotiations to be of the ‘trade’ kind rather than the ‘politics’ kind. Almost always preferable.

I’d be willing to participate in such an exercise.
I tend toward FOOM skepticism, but I don’t think it is “nearly impossible”. Define a FOOM as a scenario leading in at most 10 years from the first human-level AI to a singleton which has taken effective control over the world’s economy.
Automating investing has been going fairly well. For me, it wouldn’t be very surprising if we get a dominant, largely machine-operated hedge fund that “has taken effective control over the world’s economy” before we get human-level machine intelligence.