Yes, but my claim was that IF we had such a clear notion of value, then most of the problems on this site would be solved (by this site I mean, for example, what the popular canons are based around as interesting problems). I think you have simply agreed with me.
When you say “problem X is easily solved by Y” it can mean either (1) “problem X is easily solved, and here is how: Y!” or (2) “if only we had Y, then problem X would easily be solved”.
Generally #1 is the more interesting statement, which is why I thought you might be saying it. (That, plus the fact that you refer to “my proposal”, which does rather suggest that you think you have an actual solution, not merely a solution conditional on another hard problem.) It transpires that you’re saying #2. OK. In that case I think I have three comments.
First: yes, given a notion of value that captures what we care about and is sufficiently precise, many of the problems people here worry about become much easier. Thus far, we agree.
Second: it is far from clear that any such notion actually exists, and as yet no one has come up with even a coherent proposal for figuring out what it might actually be. (Some of the old posts I pointed you at elsewhere argue that if there is one then it is probably very complicated and hard to reason about.)
Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI’s values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI—which may work very differently, and think very differently, from the one we start with—will do things we are happy about?
“When you say “problem X is easily solved by Y” it can mean either (1) “problem X is easily solved, and here is how: Y!” or (2) “if only we had Y, then problem X would easily be solved”.”
Yes, I am speaking to (2), and once we understand the value of it, I will explain why it is not insignificant.
“Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI’s values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI—which may work very differently, and think very differently, from the one we start with—will do things we are happy about?”
You would create the first AI to seek value, and then, knowing that it is getting smarter and smarter, it would tend towards seeing the value I propose and optimize itself in relation to what I am proposing, by your own admission of how the problem you are stating works.
it would tend towards seeing the value I propose and optimize itself in relation to what I am proposing
I am not sure which of two things you are saying.
Thing One: “We program the AI with a simple principle expressed as ‘seek value’. Any sufficiently smart thing programmed to do this will converge on the One True Value System, which when followed guarantees the best available outcomes, so if the AIs get smarter and smarter and they are programmed to ‘seek value’ then they will end up seeking the One True Value and everything will be OK.”
Thing Two: “We program the AI with a perhaps-complicated value system that expresses what really matters to us. We can then be confident that it will program its successors to use the same value system, and they will program their successors to use the same value system, etc. So provided we start out with a value system that produces good outcomes, everything will be OK.”
If you are saying Thing One, then I hope you intend to give us some concrete reason to believe that all sufficiently smart agents converge on a single value system. I personally find that very difficult to believe, and I know I’m not alone in this. (Specifically, Eliezer Yudkowsky, who founded the LW site, has written a bit about how he used to believe something very similar, changed his mind, and now thinks it’s obviously wrong. I don’t know the details of exactly what EY believed or what arguments convinced him he’d been wrong.)
If you are saying Thing Two, then I think you may be overoptimistic about the link between “System S follows values V” and “System S will make sure any new systems it creates also follow values V”. This is not a thing that reliably happens when S is a human being, and it’s not difficult to think of situations in which it’s not what you’d want to happen. (Perhaps S can predict the behaviour of its successor T, and figures out that it will get more V-aligned results if T’s values are something other than V. I’m not sure that this can be plausible when T is S’s smarter successor, but it’s not obvious to me that the possibility can be ruled out.)
I REALLY appreciate this dialogue. Yup, I am suggesting #1. It is observable reality that smart agents converge to value the same thing, yes, but that is the wrong way to say it. “Natural evolution will levate (aka create) the thing that all agents will converge to”; this is the correct perspective (or the more valuable perspective). Also, I should think that is obvious to most people here.
Eliezer Y will rethink this when he comes across what I am proposing.
Natural evolution will levate (aka create) the thing that all agents will converge to
This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things—e.g., us—that produce values.)
I should think that is obvious to most people here.
My guess is that you are wrong about that; in any case, it certainly isn’t obvious to me.
“This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things—e.g., us—that produce values.)”
I am saying that, in the sense of Newtonian versus quantum science, money naturally evolves as a thing that the collective group wants, and I am suggesting this phenomenon will spread to and drive AI. This is both natural and a rational conclusion, and something favorable that re-solves many paradoxes and difficult problems.
But money is not the correct word; it is an objective metric for value that is the key, because money can also be a poor standard for objective measurement.
Given the actual observed behaviour of markets (e.g., the affair of 2008), I see little grounds for hoping that their preferences will robustly track what humans actually care about, still less that they will do so robustly enough to answer the concerns of people who worry about AI value alignment.
Nash speaks to the crisis of 2008 and explains how it is the lack of an incorruptible standard basis for value that stops us from achieving such a useful market. You can’t target optimal spending for optimal caring though; I just want to be clear on that.
OK. And has Nash found an incorruptible standard basis for value? Or is this meant to emerge somehow from The Market, borne aloft no doubt by the Invisible Hand? So far, that doesn’t actually seem to be happening.
I’m afraid I don’t understand your last sentence.
The things I’ve seen about Nash’s “ideal money” proposal—which, full disclosure, I hadn’t heard of until today, so I make no guarantee to have seen enough—do not seem to suggest that Nash has in fact found an incorruptible standard basis for value. Would you care to say more?
Yup. And why do we ignore him? Firstly, you fully admit that you were, previous to my entry, ignorant of Nash’s life’s work, what he spoke and wrote of for 20 years, country to country. It is what he fled the US over when he was younger, to exchange his USD for the Swiss franc because it was of superior quality, after which the US Navy tracked him down and took him back in chains (this is accepted, not conspiracy).
Nash absolutely defined an incorruptible basis for valuation, and most people have it labeled as an “ICPI”, an industrial consumption price index. It is effectively an aggregate of stable prices across global commodities, and it can be said that if our money were pegged to it, it would be effectively perfectly stable over time. Of course it would need to be adjusted, which means it is politically corruptible, but Nash’s actual proposal solves for this too:
…my personal view is that a practical global money might most favorably evolve through the development first of a few regional currencies of truly good quality. And then the “integration” or “coordination” of those into a global currency would become just a technical problem. (Here I am thinking of a politically neutral form of a technological utility rather than of a money which might, for example, be used to exert pressures in a conflict situation comparable to “the cold war”.)
Our view is that if it is viewed scientifically and rationally (which is psychologically difficult!) that money should have the function of a standard of measurement and thus that it should become comparable to the watt or the hour or a degree of temperature.~All quotes are from Ideal Money
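To make the ICPI idea concrete, here is a minimal sketch of the kind of aggregate being described: a weighted index over a basket of commodity prices, scaled to 100 at a base period. The basket, weights, and prices below are hypothetical illustrations, not Nash’s actual construction.

# A minimal sketch of an ICPI-style aggregate: a weighted index over a basket
# of commodity prices, scaled so the base period equals 100. All numbers are
# hypothetical and only illustrate the arithmetic.

base_prices = {"crude_oil": 70.0, "copper": 9000.0, "wheat": 250.0}  # base-period price per unit
weights = {"crude_oil": 0.5, "copper": 0.3, "wheat": 0.2}            # basket weights, summing to 1

def icpi_index(current_prices):
    """Weighted average of price relatives (current / base), equal to 100 at the base period."""
    return 100.0 * sum(weights[c] * current_prices[c] / base_prices[c] for c in weights)

print(icpi_index(base_prices))                                            # 100.0
print(icpi_index({"crude_oil": 77.0, "copper": 9000.0, "wheat": 250.0}))  # 105.0 (oil +10% at weight 0.5)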
you fully admit that you were, previous to my entry, ignorant of Nash’s life’s work, what he spoke and wrote of for 20 years, country to country.
First of all, we will all do better without the hectoring tone. But yes, I was ignorant of this. There is scarcely any limit to the things I don’t know about. However, nothing I have read about Nash suggests to me that it’s correct to describe “ideal money” as “his life’s work”.
It is what he fled the US over when he was younger [...] after which the US Navy tracked him down and took him back in chains
You are not helping your case here. He was, at this point, suffering pretty badly from schizophrenia. And so far as I can tell, the reasons he himself gave for leaving the US were nothing to do with the quality of US and Swiss money.
industrial consumption price index
Let me see if I’ve understood this right. You want a currency pegged to some basket of goods (“global commodities”, as you put it), which you will call “ideal money”. You then want to convert everything to money according to the prices set by a perfectly efficient infinitely liquid market, even though no such market has ever existed and no market at all is ever likely to exist for many of the things people actually care about. And you think this is a suitable foundation for the values of an AI, as a response to people who worry about the values of an AI whose vastly superhuman intellect will enable it to transform our world beyond all recognition.
What exactly do you expect to happen to the values of those “global commodities” in the presence of such an AI?
it would need to be adjusted, which means it is politically corruptible [...]
(Yep.)
[...] but Nash’s actual proposal solves for this too:
But nothing in what you quote does any such thing.
Now it is important you attend to this thread, it’s quick, very quick: http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/
It may be important to you that I do so, but right now I have other priorities. Maybe tomorrow.
First of all, we will all do better without the hectoring tone. But yes, I was ignorant of this. There is scarcely any limit to the things I don’t know about. However, nothing I have read about Nash suggests to me that it’s correct to describe “ideal money” as “his life’s work”.
He lectured and wrote on the topic for the last 20 years of his life, and it is something he had been developing in his 30s.
You are not helping your case here. He was, at this point, suffering pretty badly from schizophrenia. And so far as I can tell, the reasons he himself gave for leaving the US were nothing to do with the quality of US and Swiss money.
Yes, he was running around saying the governments were colluding against the people and he was going to be their savior. In Ideal Money he explains how the Keynesian view of economics is comparable to Bolshevik communism. These are facts, and they show that he never abandoned his views when he was “schizophrenic”, and that those views are in fact based on rational thinking. And yes, it is by his own admission that this is why he fled the US and renounced his citizenship.
Let me see if I’ve understood this right. You want a currency pegged to some basket of goods (“global commodities”, as you put it), which you will call “ideal money”. You then want to convert everything to money according to the prices set by a perfectly efficient infinitely liquid market, even though no such market has ever existed and no market at all is ever likely to exist for many of the things people actually care about. And you think this is a suitable foundation for the values of an AI, as a response to people who worry about the values of an AI whose vastly superhuman intellect will enable it to transform our world beyond all recognition.
What exactly do you expect to happen to the values of those “global commodities” in the presence of such an AI?
Yup exactly, and we are to create an AI that bases its decisions on optimizing value in relation to procuring what would effectively be “ideal money”.
But nothing in what you quote does any such thing.
I don’t need to do anything to show Nash made such a proposal of a unit of value except quote him saying it is his intention. I don’t need to put the unit in your hand.
It may be important to you that I do so, but right now I have other priorities. Maybe tomorrow.
It’s simple and quick: your definition of ideal is not in line with the standard definition. Google it.
Yes, he was running around saying the governments were colluding against the people and he was going to be their savior. [...]
He was also running around saying that he was Pope John XXIII because 23 was his favourite prime number. And refusing academic positions which would have given him a much better platform for advocating currency reform (had he wanted to do that) on the basis that he was already scheduled to start working as Emperor of Antarctica. And saying he was communicating with aliens from outer space.
Of course that doesn’t mean that everything he did was done for crazy reasons. But it does mean that the fact that he said or did something at this time is not any sort of evidence that it makes sense.
Could you point me to more information about the reasons he gave for leaving the US and trying to renounce his citizenship? I had a look in Nasar’s book, which is the only thing about Nash I have on my shelves, and (1) there is no index entry for “ideal money” (the concept may crop up but not be indexed, of course) and (2) its account of Nash’s time in Geneva and Paris is rather vague about why he wanted to renounce his citizenship (and indeed about why he was brought back to the US).
Yup exactly
Let me then repeat my question. What do you expect to happen to the values of those “global commodities” in the presence of an AI whose capabilities are superhuman enough to make value alignment an urgent issue? Suppose the commodities include (say) gold, Intel CPUs and cars, and the AI finds an energy-efficient way to make gold, designs some novel kind of quantum computing device that does what the Intel chips do but a million times faster, and figures out quantum gravity and uses it to invent a teleportation machine that works via wormholes? How are prices based on a basket of gold, CPUs and cars going to remain stable in that kind of situation?
[EDITED to add:] Having read a bit more about Nash’s proposal, it looks as if he had in mind minerals rather than manufactured goods; so gold might be on the list but probably not CPUs or cars. The point stands, and indeed Nash explicitly said that gold on its own wasn’t a good choice because of possible fluctuations in availability. I suggest that if we are talking about scenarios of rapid technological change, anything may change availability rapidly; and if we’re talking about scenarios of rapid technological change driven by a super-capable AI, that availability change may be under the AI’s control. None of this is good if we’re trying to use “ideal money” thus defined as a basis for the AI’s values.
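As a toy numerical illustration of this concern (made-up weights and prices, using the same style of index as the sketch above): a large price or availability shock to a single basket component moves the whole index unless the basket is readjusted, and that readjustment is precisely the step that can be influenced.

# Toy illustration: a collapse in one component's price (say, gold becomes
# cheap to synthesize) shifts the whole index, even though nothing about
# what people actually care about has changed. Numbers are made up.

base_prices = {"gold": 2000.0, "copper": 9000.0, "wheat": 250.0}
weights = {"gold": 0.3, "copper": 0.4, "wheat": 0.3}

def index(prices):
    return 100.0 * sum(weights[c] * prices[c] / base_prices[c] for c in weights)

shocked = dict(base_prices, gold=200.0)  # gold price falls 90%
print(index(base_prices))  # 100.0
print(index(shocked))      # 73.0 -- the "stable" unit of value just moved by 27%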
(Of course it may be that no such drastic thing ever happens, either because fundamental physical laws prevent it or because we aren’t able to make an AI smart enough. But this is one of the situations people worried about AI value alignment are worried about, and the history of science and technology isn’t exactly short of new technologies that would have looked miraculous before the relevant discoveries were made.)
Or: suppose instead that the AI is superhumanly good at predicting and manipulating markets. (This strikes me as an extremely likely thing for people to try to make AIs do, and also a rather likely early step for a superintelligent but not yet superpowered AI trying to increase its influence.) How confident are you that the easiest way to achieve some goal expressed in terms of values cashed out in “ideal money” won’t be to manipulate the markets to change the correspondence between “ideal money” and other things in the world?
I don’t need to do anything [...] except quote him saying it is his intention.
But what you quoted him as saying he wanted was not (so far as I can tell) the same thing as you are now saying he wanted. We are agreed that Nash thought “ideal money” could be a universal means of valuing oil and bricks and computers. (With the caveat that I haven’t read anything like everything he said and wrote about this, and what I have read isn’t perfectly clear; so to some extent I’m taking your word for it.) But what I haven’t yet seen any sign of is that Nash thought “ideal money” could also be a universal means of valuing internal subjective things (e.g., contentment) or interpersonal things not readily turned into liquid markets (e.g., sincere declarations of love) or, in short, anything not readily traded on zero-overhead negligible-latency infinitely-liquid markets.
your definition of ideal is not in line with the standard definition.
I have (quite deliberately) not been assuming any particular definition of “ideal”; I have been taking “ideal money” as a term of art whose meaning I have attempted to infer from what you’ve said about it and what I’ve seen of Nash’s words. Of course I may have misunderstood, but not by using the wrong definition of “ideal” because I have not been assuming that “ideal money” = “money that is ideal in any more general sense”.
He was also running around saying that he was Pope John XXIII because 23 was his favourite prime number. And refusing academic positions which would have given him a much better platform for advocating currency reform (had he wanted to do that) on the basis that he was already scheduled to start working as Emperor of Antarctica. And saying he was communicating with aliens from outer space.
No, you aren’t going to tell Nash how he could have brought about Ideal Money. In regard to, for example, communicating with aliens, again you are being wholly ignorant. Consider this (from Ideal Money):
We of Terra could be taught how to have ideal monetary systems if wise and benevolent extraterrestrials were to take us in hand and administer our national money systems analogously to how the British recently administered the currency of Hong Kong.~Ideal Money
See? He has been “communicating” with aliens. He was using his brain to think beyond not just nations and continents but worlds. “What would it be like for outside observers?” Is he not allowed to ask these questions? Do we not find it useful to think about how extraterrestrials would have an effect on a certain problem like our currency systems? And you call this “crazy”? Why can’t Nash make theories based on civilizations external to ours without you calling him crazy?
See, he was being logical, but people like you can’t understand him.
Of course that doesn’t mean that everything he did was done for crazy reasons. But it does mean that the fact that he said or did something at this time is not any sort of evidence that it makes sense.
This is the most sick (ill) paragraph I have traversed in a long time. You have said: “Nash was saying crazy things, so he was sick; therefore the things he was saying were crazy, and so we have to take them with a grain of salt.”
Nash birthed modern complexity theory at that time and did many other amazing things while he was “sick”. He also recovered from his mental illness not because of medication but by willing it so. These are accepted points in his bio. He says he started to reject politically oriented thinking and return to a more logical basis (in other words, he realized that running around telling everyone he is a god wasn’t helping any argument).
Could you point me to more information about the reasons he gave for leaving the US and trying to renounce his citizenship? I had a look in Nasar’s book, which is the only thing about Nash I have on my shelves, and (1) there is no index entry for “ideal money” (the concept may crop up but not be indexed, of course) and (2) its account of Nash’s time in Geneva and Paris is rather vague about why he wanted to renounce his citizenship (and indeed about why he was brought back to the US).
“I emerged from a time of mental illness, or what is generally called mental illness...”
″...you could say I grew out of it.”
Those are the relevant quotes. Otherwise: https://www.youtube.com/watch?v=7Zb6_PZxxA0 (it starts at 12:40, but he explains about the francs at 13:27: “When I did become disturbed I changed my money into Swiss francs.”)
There is another interview where he explains that the US Navy took him back in chains; I can’t recall the video.
Let me then repeat my question. What do you expect to happen to the values of those “global commodities” in the presence of an AI whose capabilities are superhuman enough to make value alignment an urgent issue? Suppose the commodities include (say) gold, Intel CPUs and cars, and the AI finds an energy-efficient way to make gold, designs some novel kind of quantum computing device that does what the Intel chips do but a million times faster, and figures out quantum gravity and uses it to invent a teleportation machine that works via wormholes? How are prices based on a basket of gold, CPUs and cars going to remain stable in that kind of situation?
You are messing up (badly) the accepted definition of ideal. Nonetheless Nash deals with your concerns:
We can see that times could change, especially if a “miracle energy source” were found, and thus if a good ICPI is constructed, it should not be expected to be valid as initially defined for all eternity. It would instead be appropriate for it to be regularly readjusted depending on how the patterns of international trade would actually evolve.
Here, evidently, politicians in control of the authority behind standards could corrupt the continuity of a good standard, but depending on how things were fundamentally arranged, the probabilities of serious damage through political corruption might become as small as the probabilities that the values of the standard meter and kilogram will be corrupted through the actions of politicians.~Ideal Money
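For what it is worth, the kind of regular readjustment described in this quote is handled for real-world price indices by chain-linking: when the basket or weights change, the new segment is spliced onto the old one so the index stays continuous at the changeover date. A minimal sketch of that splice, with hypothetical numbers that are not from Nash’s text:

# Chain-linking sketch: splice a segment computed from a new basket onto the
# existing index so the series is continuous at the readjustment date.
# All baskets, weights, and prices here are hypothetical.

def weighted_index(prices, base_prices, weights):
    """Weighted average of price relatives (equals 1.0 at the segment's own base date)."""
    return sum(weights[c] * prices[c] / base_prices[c] for c in weights)

old_index_at_changeover = 104.0  # level of the old-basket index on the readjustment date

# New basket and weights defined at the changeover date...
new_base = {"copper": 9500.0, "wheat": 260.0, "shipping": 1500.0}
new_weights = {"copper": 0.4, "wheat": 0.3, "shipping": 0.3}

# ...and a later observation priced under the new basket (copper up 5%).
later = {"copper": 9975.0, "wheat": 260.0, "shipping": 1500.0}

# Scale the new segment by the old level at the changeover, so swapping the
# basket does not introduce an artificial jump in the index itself.
spliced = old_index_at_changeover * weighted_index(later, new_base, new_weights)
print(spliced)  # 104.0 * 1.02 = 106.08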
[EDITED to add:] Having read a bit more about Nash’s proposal, it looks as if he had in mind minerals rather than manufactured goods; so gold might be on the list but probably not CPUs or cars. The point stands, and indeed Nash explicitly said that gold on its own wasn’t a good choice because of possible fluctuations in availability. I suggest that if we are talking about scenarios of rapid technological change, anything may change availability rapidly; and if we’re talking about scenarios of rapid technological change driven by a super-capable AI, that availability change may be under the AI’s control. None of this is good if we’re trying to use “ideal money” thus defined as a basis for the AI’s values.
No, he gets more explicit though, and so something like CPUs etc. would be sort of reasonable (but I think it is probably better to look at the underlying commodities used for these things). For example:
Moreover, commodities with easily and reliably calculable prices are most suitable, and relatively stable prices are very desirable. Another basic cost that could be used would be a standard transportation cost, the cost of shipping a unit quantity of something over long international distances.
(Of course it may be that no such drastic thing ever happens, either because fundamental physical laws prevent it or because we aren’t able to make an AI smart enough. But this is one of the situations people worried about AI value alignment are worried about, and the history of science and technology isn’t exactly short of new technologies that would have looked miraculous before the relevant discoveries were made.)
Yes, we are simultaneously saying the super smart thing to do would be to have ideal money (i.e., money comparable to an optimally chosen basket of industrial commodity prices), while also worrying that a super smart entity wouldn’t support the smart action. It’s clear FUD.
Or: suppose instead that the AI is superhumanly good at predicting and manipulating markets. (This strikes me as an extremely likely thing for people to try to make AIs do, and also a rather likely early step for a superintelligent but not yet superpowered AI trying to increase its influence.) How confident are you that the easiest way to achieve some goal expressed in terms of values cashed out in “ideal money” won’t be to manipulate the markets to change the correspondence between “ideal money” and other things in the world?
Ideal money is not corruptible. Your definition of ideal is not accepted as standard.
But what you quoted him as saying he wanted was not (so far as I can tell) the same thing as you are now saying he wanted. We are agreed that Nash thought “ideal money” could be a universal means of valuing oil and bricks and computers. (With the caveat that I haven’t read anything like everything he said and wrote about this, and what I have read isn’t perfectly clear; so to some extent I’m taking your word for it.) But what I haven’t yet seen any sign of is that Nash thought “ideal money” could also be a universal means of valuing internal subjective things (e.g., contentment) or interpersonal things not readily turned into liquid markets (e.g., sincere declarations of love) or, in short, anything not readily traded on zero-overhead negligible-latency infinitely-liquid markets.
I might ask whether you think such things could be quantified WITHOUT a standard basis for value. I mean, it’s strawmanny. Nash has an incredible proposal with a very long and intricate argument, but you are stuck arguing MY extrapolation, without understanding the underlying base argument by Nash. Gotta walk first: “What IS Ideal Money?”
I have (quite deliberately) not been assuming any particular definition of “ideal”; I have been taking “ideal money” as a term of art whose meaning I have attempted to infer from what you’ve said about it and what I’ve seen of Nash’s words. Of course I may have misunderstood, but not by using the wrong definition of “ideal” because I have not been assuming that “ideal money” = “money that is ideal in any more general sense”.
Yes, this is a mistake, and it’s an amazing one to see from everyone. Thank you for at least partially addressing his work.