Ok. I skimmed it, and I think I understand your post well enough (if not, I’ll read deeper!). What I am introducing into the dialogue is a theoretical and conceptually stable unit of value. I am saying: let’s address the problems stated in your articles as if we don’t have the problem of defining our base unit, that is, as if it exists, is agreed upon, and is stable for all time.
So here is an example from one of your links:
“Why is alignment hard?
Why expect that this problem is hard? This is the real question. You might ordinarily expect that whoever has taken on the job of building an AI is just naturally going to try to point that in a relatively nice direction. They’re not going to make evil AI. They’re not cackling villains. Why expect that their attempts to align the AI would fail if they just did everything as obviously as possible?
Here’s a bit of a fable. It’s not intended to be the most likely outcome. I’m using it as a concrete example to explain some more abstract concepts later.
With that said: What if programmers build an artificial general intelligence to optimize for smiles? Smiles are good, right? Smiles happen when good things happen.”
Do we see how we can solve this problem now? We simply optimize the AI system for value, and everyone is happy.
If someone creates “bad” AI we could measure that, and use the measurement for a counter program.
If we had a satisfactory way of doing that then, yes, a large part of the problem would be solved. Unfortunately, that’s because a large part of the problem is that we don’t have a clear notion of “value” that (1) actually captures what humans care about and (2) is precise enough for us to have any prospect of communicating it accurately and reliably to an AI.
Yes, but my claim was that IF we had such a clear notion of value then most of the problems on this site would be solved (by this site I mean, for example, what the popular canons are based around as interesting problems). I think you have simply agreed with me.
When you say “problem X is easily solved by Y” it can mean either (1) “problem X is easily solved, and here is how: Y!” or (2) “if only we had Y, then problem X would easily be solved”.
Generally #1 is the more interesting statement, which is why I thought you might be saying it. (That, plus the fact that you refer to “my proposal”, which does rather suggest that you think you have an actual solution, not merely a solution conditional on another hard problem.) It transpires that you’re saying #2. OK. In that case I think I have three comments.
First: yes, given a notion of value that captures what we care about and is sufficiently precise, many of the problems people here worry about become much easier. Thus far, we agree.
Second: it is far from clear that any such notion actually exists, and as yet no one has come up with even a coherent proposal for figuring out what it might actually be. (Some of the old posts I pointed you at elsewhere argue that if there is one then it is probably very complicated and hard to reason about.)
Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI’s values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI—which may work very differently, and think very differently, from the one we start with—will do things we are happy about?
“When you say “problem X is easily solved by Y” it can mean either (1) “problem X is easily solved, and here is how: Y!” or (2) “if only we had Y, then problem X would easily be solved”.”
Yes, I am speaking to (2), and once we understand the value of it, I will explain why it is not insignificant.
“Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI’s values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI—which may work very differently, and think very differently, from the one we start with—will do things we are happy about?”
You would create the first AI to seek value, and then, knowing that it is getting smarter and smarter, it would tend towards seeing the value I propose and optimize itself in relation to what I am proposing, by your own admission of how the problem you are stating works.
it would tend towards seeing the value I propose and optimize itself in relation to what I am proposing
I am not sure which of two things you are saying.
Thing One: “We program the AI with a simple principle expressed as ‘seek value’. Any sufficiently smart thing programmed to do this will converge on the One True Value System, which when followed guarantees the best available outcomes, so if the AIs get smarter and smarter and they are programmed to ‘seek value’ then they will end up seeking the One True Value and everything will be OK.”
Thing Two: “We program the AI with a perhaps-complicated value system that expresses what really matters to us. We can then be confident that it will program its successors to use the same value system, and they will program their successors to use the same value system, etc. So provided we start out with a value system that produces good outcomes, everything will be OK.”
If you are saying Thing One, then I hope you intend to give us some concrete reason to believe that all sufficiently smart agents converge on a single value system. I personally find that very difficult to believe, and I know I’m not alone in this. (Specifically, Eliezer Yudkowsky, who founded the LW site, has written a bit about how he used to believe something very similar, changed his mind, and now thinks it’s obviously wrong. I don’t know the details of exactly what EY believed or what arguments convinced him he’d been wrong.)
If you are saying Thing Two, then I think you may be overoptimistic about the link between “System S follows values V” and “System S will make sure any new systems it creates also follow values V”. This is not a thing that reliably happens when S is a human being, and it’s not difficult to think of situations in which it’s not what you’d want to happen. (Perhaps S can predict the behaviour of its successor T, and figures out that it will get more V-aligned results if T’s values are something other than V. I’m not sure that this can be plausible when T is S’s smarter successor, but it’s not obvious to me that the possibility can be ruled out.)
I REALLY appreciate this dialogue. Yup, I am suggesting #1. It’s observable reality that smart agents converge to valuing the same thing, yes, but that is the wrong way to say it. “Natural evolution will levate (aka create) the thing that all agents will converge to”; this is the correct perspective (or the more valuable perspective). Also, I should think that is obvious to most people here.
Eliezer Y will rethink this when he comes across what I am proposing.
Natural evolution will levate (aka create) the thing that all agents will converge to
This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things—e.g., us—that produce values.)
I should think that is obvious to most people here.
My guess is that you are wrong about that; in any case, it certainly isn’t obvious to me.
“This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things—e.g., us—that produce values.)”
I am saying that, in a Newtonian-versus-quantum sense, money naturally evolves as a thing that the collective group wants, and I am suggesting this phenomenon will spread to and drive AI. This is both natural and a rational conclusion, and something favorable that resolves many paradoxes and difficult problems.
But money is not the correct word; it is an objective metric for value that is the key, because money can also be a poor standard for objective measurement.
Given the actual observed behaviour of markets (e.g., the affair of 2008), I see little grounds for hoping that their preferences will robustly track what humans actually care about, still less that they will do so robustly enough to answer the concerns of people who worry about AI value alignment.
Nash speaks to the crisis of 2008 and explains how it is the lack of an incorruptible standard basis for value that stops us from achieving such a useful market. You can’t target optimal spending for optimal caring though; I just want to be clear on that.
OK. And has Nash found an incorruptible standard basis for value? Or is this meant to emerge somehow from The Market, borne aloft no doubt by the Invisible Hand? So far, that doesn’t actually seem to be happening.
The things I’ve seen about Nash’s “ideal money” proposal—which, full disclosure, I hadn’t heard of until today, so I make no guarantee to have seen enough—do not seem to suggest that Nash has in fact found an incorruptible standard basis for value. Would you care to say more?
Yup. Firstly, you fully admit that you are, previous to my entry, ignorant of Nash’s life’s work. What he spoke of and wrote of for 20 years, country to country. It is what he fled the US about when he was younger, to exchange his USD for the Swiss franc because it was of superior quality, and in which the US Navy tracked him down and took him back in chains (this is accepted, not conspiracy).
Nash absolutely defined an incorruptible basis for valuation, and most people have it labeled as an “ICPI”, an industrial consumption price index. It is effectively an aggregate of stable prices across global commodities, and it can be said that if our money were pegged to it then it would be effectively perfectly stable over time. And of course it would need to be adjusted which means it is politically corruptible, but Nash’s actual proposal solves for this too:
…my personal view is that a practical global money might most favorably evolve through the development first of a few regional currencies of truly good quality. And then the “integration” or “coordination” of those into a global currency would become just a technical problem. (Here I am thinking of a politically neutral form of a technological utility rather than of a money which might, for example, be used to exert pressures in a conflict situation comparable to “the cold war”.)
Our view is that if it is viewed scientifically and rationally (which is psychologically difficult!) that money should have the function of a standard of measurement and thus that it should become comparable to the watt or the hour or a degree of temperature.~All quotes are from Ideal Money
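To make the ICPI idea described above a bit more concrete, here is a minimal sketch in Python of what such an index could look like. The basket contents, weights, and prices are hypothetical placeholders for illustration, not anything Nash actually specified.

```python
# Minimal sketch of an ICPI-style index: a weighted average of commodity
# prices used to judge how stable a currency is against the basket.
# Basket contents, weights, and prices below are hypothetical placeholders.

BASKET_WEIGHTS = {
    "crude_oil": 0.30,
    "copper": 0.25,
    "wheat": 0.25,
    "standard_shipping_unit": 0.20,  # Nash also mentions a standard transportation cost
}

def icpi(prices):
    """Weighted average of basket prices, quoted in some currency."""
    return sum(BASKET_WEIGHTS[k] * prices[k] for k in BASKET_WEIGHTS)

def index_drift(prices_then, prices_now):
    """Fractional change in the index: ~0 means the currency has held its
    value against the basket; positive means basket prices rose in that
    currency (i.e., the currency inflated relative to the index)."""
    before, after = icpi(prices_then), icpi(prices_now)
    return (after - before) / before

drift = index_drift(
    {"crude_oil": 70.0, "copper": 4.0, "wheat": 6.0, "standard_shipping_unit": 1500.0},
    {"crude_oil": 72.0, "copper": 4.1, "wheat": 6.1, "standard_shipping_unit": 1520.0},
)
print(f"index drift: {drift:.2%}")  # a currency pegged to the ICPI targets ~0 drift
```

On this reading, the “adjustment” issue above amounts to periodically revising the basket weights as trade patterns change, which is exactly where the political-corruption worry enters.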
you fully admit that you are, previous to my entry, ignorant of Nash’s life’s work. What he spoke of and wrote of for 20 years, country to country.
First of all, we will all do better without the hectoring tone. But yes, I was ignorant of this. There is scarcely any limit to the things I don’t know about. However, nothing I have read about Nash suggests to me that it’s correct to describe “ideal money” as “his life’s work”.
It is what he fled the US about when he was younger [...] and in which the US Navy tracked him down and took him back in chains
You are not helping your case here. He was, at this point, suffering pretty badly from schizophrenia. And so far as I can tell, the reasons he himself gave for leaving the US were nothing to do with the quality of US and Swiss money.
industrial consumption price index
Let me see if I’ve understood this right. You want a currency pegged to some basket of goods (“global commodities”, as you put it), which you will call “ideal money”. You then want to convert everything to money according to the prices set by a perfectly efficient infinitely liquid market, even though no such market has ever existed and no market at all is ever likely to exist for many of the things people actually care about. And you think this is a suitable foundation for the values of an AI, as a response to people who worry about the values of an AI whose vastly superhuman intellect will enable it to transform our world beyond all recognition.
What exactly do you expect to happen to the values of those “global commodities” in the presence of such an AI?
it would need to be adjusted which means it is politically corruptible [...]
(Yep.)
[...] but Nash’s actual proposal solves for this too:
But nothing in what you quote does any such thing.
Now it is important you attend to this thread
It may be important to you that I do so, but right now I have other priorities. Maybe tomorrow.
First of all, we will all do better without the hectoring tone. But yes, I was ignorant of this. There is scarcely any limit to the things I don’t know about. However, nothing I have read about Nash suggests to me that it’s correct to describe “ideal money” as “his life’s work”.
He lectured and wrote on the topic for the last 20 years of his life, and it is something he had been developing in his 30s.
You are not helping your case here. He was, at this point, suffering pretty badly from schizophrenia. And so far as I can tell, the reasons he himself gave for leaving the US were nothing to do with the quality of US and Swiss money.
Yes he was running around saying the governments are colluding against the people and he was going to be their savior. In Ideal Money he explains how the Keynesian view of economics is comparable to Bolshevik communism. These are facts, and they show that he never abandoned his views when he was “schizophrenic”, and that they are in fact based on rational thinking. And yes, it is his own admission that this is why he fled the US and renounced his citizenship.
Let me see if I’ve understood this right. You want a currency pegged to some basket of goods (“global commodities”, as you put it), which you will call “ideal money”. You then want to convert everything to money according to the prices set by a perfectly efficient infinitely liquid market, even though no such market has ever existed and no market at all is ever likely to exist for many of the things people actually care about. And you think this is a suitable foundation for the values of an AI, as a response to people who worry about the values of an AI whose vastly superhuman intellect will enable it to transform our world beyond all recognition.
What exactly do you expect to happen to the values of those “global commodities” in the presence of such an AI?
Yup exactly, and we are to create AI that bases its decisions on optimizing value in relation to procuring what would effectively be “ideal money”.
But nothing in what you quote does any such thing.
I don’t need to do anything to show Nash made such a proposal of a unit of value except quote him saying it is his intention. I don’t need to put the unit in your hand.
It may be important to you that I do so, but right now I have other priorities. Maybe tomorrow.
It’s simple and quick: your definition of ideal is not in line with the standard definition. Google it.
Yes he was running around saying the governments are colluding against the people and he was going to be their savior. [...]
He was also running around saying that he was Pope John XXIII because 23 was his favourite prime number. And refusing academic positions which would have given him a much better platform for advocating currency reform (had he wanted to do that) on the basis that he was already scheduled to start working as Emperor of Antarctica. And saying he was communicating with aliens from outer space.
Of course that doesn’t mean that everything he did was done for crazy reasons. But it does mean that the fact that he said or did something at this time is not any sort of evidence that it makes sense.
Could you point me to more information about the reasons he gave for leaving the US and trying to renounce his citizenship? I had a look in Nasar’s book, which is the only thing about Nash I have on my shelves, and (1) there is no index entry for “ideal money” (the concept may crop up but not be indexed, of course) and (2) its account of Nash’s time in Geneva and Paris is rather vague about why he wanted to renounce his citizenship (and indeed about why he was brought back to the US).
Yup exactly
Let me then repeat my question. What do you expect to happen to the values of those “global commodities” in the presence of an AI whose capabilities are superhuman enough to make value alignment an urgent issue? Suppose the commodities include (say) gold, Intel CPUs and cars, and the AI finds an energy-efficient way to make gold, designs some novel kind of quantum computing device that does what the Intel chips do but a million times faster, and figures out quantum gravity and uses it to invent a teleportation machine that works via wormholes? How are prices based on a basket of gold, CPUs and cars going to remain stable in that kind of situation?
[EDITED to add:] Having read a bit more about Nash’s proposal, it looks as if he had in mind minerals rather than manufactured goods; so gold might be on the list but probably not CPUs or cars. The point stands, and indeed Nash explicitly said that gold on its own wasn’t a good choice because of possible fluctuations in availability. I suggest that if we are talking about scenarios of rapid technological change, anything may change availability rapidly; and if we’re talking about scenarios of rapid technological change driven by a super-capable AI, that availability change may be under the AI’s control. None of this is good if we’re trying to use “ideal money” thus defined as a basis for the AI’s values.
(Of course it may be that no such drastic thing ever happens, either because fundamental physical laws prevent it or because we aren’t able to make an AI smart enough. But this is one of the situations people worried about AI value alignment are worried about, and the history of science and technology isn’t exactly short of new technologies that would have looked miraculous before the relevant discoveries were made.)
Or: suppose instead that the AI is superhumanly good at predicting and manipulating markets. (This strikes me as an extremely likely thing for people to try to make AIs do, and also a rather likely early step for a superintelligent but not yet superpowered AI trying to increase its influence.) How confident are you that the easiest way to achieve some goal expressed in terms of values cashed out in “ideal money” won’t be to manipulate the markets to change the correspondence between “ideal money” and other things in the world?
I don’t need to do anything [...] except quote him saying it is his intention.
But what you quoted him as saying he wanted was not (so far as I can tell) the same thing as you are now saying he wanted. We are agreed that Nash thought “ideal money” could be a universal means of valuing oil and bricks and computers. (With the caveat that I haven’t read anything like everything he said and wrote about this, and what I have read isn’t perfectly clear; so to some extent I’m taking your word for it.) But what I haven’t yet seen any sign of is that Nash thought “ideal money” could also be a universal means of valuing internal subjective things (e.g., contentment) or interpersonal things not readily turned into liquid markets (e.g., sincere declarations of love) or, in short, anything not readily traded on zero-overhead negligible-latency infinitely-liquid markets.
your definition of ideal is not in line with the standard definition.
I have (quite deliberately) not been assuming any particular definition of “ideal”; I have been taking “ideal money” as a term of art whose meaning I have attempted to infer from what you’ve said about it and what I’ve seen of Nash’s words. Of course I may have misunderstood, but not by using the wrong definition of “ideal” because I have not been assuming that “ideal money” = “money that is ideal in any more general sense”.
He was also running around saying that he was Pope John XXIII because 23 was his favourite prime number. And refusing academic positions which would have given him a much better platform for advocating currency reform (had he wanted to do that) on the basis that he was already scheduled to start working as Emperor of Antarctica. And saying he was communicating with aliens from outer space.
No, you aren’t going to tell Nash how he could have brought about Ideal Money. In regard to, for example, communicating with aliens, again you are being wholly ignorant. Consider this (from Ideal Money):
We of Terra could be taught how to have ideal monetary systems if wise and benevolent extraterrestrials were to take us in hand and administer our national money systems analogously to how the British recently administered the currency of Hong Kong.~Ideal Money
See? He has been “communicating” with aliens. He was using his brain to think beyond not just nations and continents but worlds. “What would it be like for outside observers?” Is he not allowed to ask these questions? Do we not find it useful to think about how extraterrestrials would have an effect on a certain problem like our currency systems? And you call this “crazy”? Why can’t Nash make theories based on civilizations external to ours without you calling him crazy?
See, he was being logical, but people like you can’t understand him.
Of course that doesn’t mean that everything he did was done for crazy reasons. But it does mean that the fact that he said or did something at this time is not any sort of evidence that it makes sense.
This is the most sick (ill) paragraph I have traversed in a long time. You have said: “Nash was saying crazy things, so he was sick, therefore the things he was saying were crazy, and so we have to take them with a grain of salt.”
Nash birthed modern complexity theory at that time and did many other amazing things when he was “sick”. He also recovered from his mental illness not because of medication but by willing it so. These are accepted points in his bio. He says he started to reject politically oriented thinking and return to a more logical basis (in other words, he realized that running around telling everyone he is a god isn’t helping any argument).
Could you point me to more information about the reasons he gave for leaving the US and trying to renounce his citizenship? I had a look in Nasar’s book, which is the only thing about Nash I have on my shelves, and (1) there is no index entry for “ideal money” (the concept may crop up but not be indexed, of course) and (2) its account of Nash’s time in Geneva and Paris is rather vague about why he wanted to renounce his citizenship (and indeed about why he was brought back to the US).
“I emerged from a time of mental illness, or what is generally called mental illness...”
There is another interview in which he explains that the US Navy took him back in chains; I can’t recall the video.
Let me then repeat my question. What do you expect to happen to the values of those “global commodities” in the presence of an AI whose capabilities are superhuman enough to make value alignment an urgent issue? Suppose the commodities include (say) gold, Intel CPUs and cars, and the AI finds an energy-efficient way to make gold, designs some novel kind of quantum computing device that does what the Intel chips do but a million times faster, and figures out quantum gravity and uses it to invent a teleportation machine that works via wormholes? How are prices based on a basket of gold, CPUs and cars going to remain stable in that kind of situation?
You are messing up (badly) the accepted definition of ideal. Nonetheless, Nash deals with your concerns:
We can see that times could change, especially if a “miracle energy source” were found, and thus if a good ICPI is constructed, it should not be expected to be valid as initially defined for all eternity. It would instead be appropriate for it to be regularly readjusted depending on how the patterns of international trade would actually evolve.
Here, evidently, politicians in control of the authority behind standards could corrupt the continuity of a good standard, but depending on how things were fundamentally arranged, the probabilities of serious damage through political corruption might become as small as the probabilities that the values of the standard meter and kilogram will be corrupted through the actions of politicians.~Ideal Money
[EDITED to add:] Having read a bit more about Nash’s proposal, it looks as if he had in mind minerals rather than manufactured goods; so gold might be on the list but probably not CPUs or cars. The point stands, and indeed Nash explicitly said that gold on its own wasn’t a good choice because of possible fluctuations in availability. I suggest that if we are talking about scenarios of rapid technological change, anything may change availability rapidly; and if we’re talking about scenarios of rapid technological change driven by a super-capable AI, that availability change may be under the AI’s control. None of this is good if we’re trying to use “ideal money” thus defined as a basis for the AI’s values.
No, he gets more explicit though, and so something like CPUs etc. would be sort of reasonable (but I think it’s probably better to look at the underlying commodities used for these things). For example:
Moreover, commodities with easily and reliably calculable prices are most suitable, and relatively stable prices are very desirable. Another basic cost that could be used would be a standard transportation cost, the cost of shipping a unit quantity of something over long international distances.
(Of course it may be that no such drastic thing ever happens, either because fundamental physical laws prevent it or because we aren’t able to make an AI smart enough. But this is one of the situations people worried about AI value alignment are worried about, and the history of science and technology isn’t exactly short of new technologies that would have looked miraculous before the relevant discoveries were made.)
Yes, we are simultaneously saying the super smart thing to do would be to have ideal money (i.e., money comparable to an optimally chosen basket of industrial commodity prices), while also worrying that a super smart entity wouldn’t support the smart action. It’s clear FUD.
Or: suppose instead that the AI is superhumanly good at predicting and manipulating markets. (This strikes me as an extremely likely thing for people to try to make AIs do, and also a rather likely early step for a superintelligent but not yet superpowered AI trying to increase its influence.) How confident are you that the easiest way to achieve some goal expressed in terms of values cashed out in “ideal money” won’t be to manipulate the markets to change the correspondence between “ideal money” and other things in the world?
Ideal money is not corruptible. Your definition of ideal is not accepted as standard.
But what you quoted him as saying he wanted was not (so far as I can tell) the same thing as you are now saying he wanted. We are agreed that Nash thought “ideal money” could be a universal means of valuing oil and bricks and computers. (With the caveat that I haven’t read anything like everything he said and wrote about this, and what I have read isn’t perfectly clear; so to some extent I’m taking your word for it.) But what I haven’t yet seen any sign of is that Nash thought “ideal money” could also be a universal means of valuing internal subjective things (e.g., contentment) or interpersonal things not readily turned into liquid markets (e.g., sincere declarations of love) or, in short, anything not readily traded on zero-overhead negligible-latency infinitely-liquid markets.
I might ask if you think such things could be quantified WITHOUT a standard basis for value? I mean, it’s a strawman. Nash has an incredible proposal with a very long and intricate argument, but you are stuck arguing MY extrapolation, without understanding the underlying base argument by Nash. Gotta walk first: “What IS Ideal Money?”
I have (quite deliberately) not been assuming any particular definition of “ideal”; I have been taking “ideal money” as a term of art whose meaning I have attempted to infer from what you’ve said about it and what I’ve seen of Nash’s words. Of course I may have misunderstood, but not by using the wrong definition of “ideal” because I have not been assuming that “ideal money” = “money that is ideal in any more general sense”.
Yes, this is a mistake, and it’s an amazing one to see from everyone. Thank you for at least partially addressing his work.
Another point—“What I am introducing into the dialogue is a theoretical and conceptually stable unit of value.”
Without a full and perfect system model, I argued that creating perfectly aligned metrics, like a unit of value, is impossible. (To be fair, I really argued that point in the follow-up piece: www.ribbonfarm.com/2016/09/29/soft-bias-of-underspecified-goals/ ) So if our model for human values is simplified in any way, it’s impossible to guarantee convergence to the same goal without a full and perfect systems model to test it against.
“If someone creates “bad” AI we could measure that, and use the measurement for a counter program.”
(I’m just going to address this point in this comment.) The space of potential bad programs is vast—and the opposite of a disastrous values misalignment is almost always a different values misalignment, not alignment.
In two dimensions, think of a misaligned wheel; it’s very unlikely to be exactly 180 degrees (or 90 degrees) away from proper alignment. Pointing the car in a relatively nice direction is better than pointing it straight at the highway divider wall—but even a slight misalignment will eventually lead to going off-road. And the worry is that we need to have a general solution before we allow the car to get to 55 MPH, much less 100+. But you argue that we can measure the misalignment. True! If we had a way to measure the angle between its alignment and the correct one, we could ignore the misaligned wheel angle and simply minimize the misalignment, which means the measure of divergence implicitly contains the correct alignment.
For an AI value function, the same is true. If we had a measure of misalignment, we could minimize it. The tricky part is that we don’t have such a metric, and any correct such metric would be implicitly equivalent to solving the original problem. Perhaps this is a fruitful avenue, since recasting the problem this way can help—and it’s similar to some of the approaches I’ve heard Dario Amodei mention regarding value alignment in machine learning systems. So it’s potentially a good insight, but insufficient on its own.
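As a toy illustration of that last point (the names and numbers here are invented for the sketch): if you hand me a correct misalignment measure for a one-dimensional “steering angle”, minimizing it mechanically recovers the correct angle, so writing the measure down already required knowing the answer.

```python
import math

TRUE_ALIGNMENT = 0.37  # the hypothetical "correct" steering angle, in radians

def misalignment(angle):
    """A correct misalignment metric: distance from the true alignment.
    Note that writing this down already required knowing TRUE_ALIGNMENT."""
    return abs(angle - TRUE_ALIGNMENT)

def minimize(metric, lo=-math.pi, hi=math.pi, steps=80):
    """Crude ternary search for the minimum of a unimodal function."""
    for _ in range(steps):
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        if metric(a) < metric(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

recovered = minimize(misalignment)
print(f"recovered alignment: {recovered:.4f}")  # ~0.3700, i.e. TRUE_ALIGNMENT
```

The AI-values version is the same trap at scale: a trustworthy “distance from correct values” function is already most of a solution.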
If someone creates “bad” AI then we may all be dead before we have the chance to “use the measurement for a counter program”. (Taking “AI” here to mean “terrifyingly superintelligent AI”, because that’s the scenario we’re particularly keen to defuse. If it turns out that that isn’t possible, or that it’s possible but takes centuries, then these problems are much less important.)
That’s sort of moot for two reasons. Firstly, what I have proposed would be the game-theoretically optimal approach to solving the problem of a super-terrible AI. There is no better approach against such a player. I would also suggest there is no other reasonable approach. And so this speaks to the speed in relation to other possible proposed solutions.
Now of course we are still being theoretical here, but it’s relevant to point that out.
The currently known means for finding game-theoretically optimal choices are, shall we say, impractical in this sort of situation. I mean, chess is game-theoretically trivial (in terms of the sort of game theory I take it you have in mind) -- but actually finding an optimal strategy involves vastly more computation than we have any means of deploying, and even finding a strategy good enough to play as well as the best human players took multiple decades of work by many smart people and a whole lot of Moore’s law.
Perhaps I’m not understanding your argument, though. Why does what you say make what I say “sort of moot”?
So let’s take poker for example. I have argued (let’s take it as an assumption, which should be fine) that poker players never have enough empirical evidence to know their own winrates. It’s always a guess, and since the game isn’t solved they are really guessing about whether they are profitable and how profitable they are. IF they had a standard basis for value then it could be arranged that players brute-force the solution to poker. That is to say, if players knew who was playing correctly then they would tend towards the correct player’s strategy.
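For what it’s worth, the winrate claim can be made quantitative with a back-of-the-envelope sketch. The per-100-hands standard deviation used here (90 big blinds) is an assumed round number in the usual ballpark, not a measured figure.

```python
import math

def hands_needed(margin_bb100, sd_bb100=90.0, z=1.96):
    """Hands required before a 95% confidence interval on an observed
    winrate (big blinds per 100 hands) shrinks to +/- margin_bb100.
    sd_bb100 is an assumed per-100-hands standard deviation."""
    blocks = (z * sd_bb100 / margin_bb100) ** 2  # number of 100-hand blocks
    return math.ceil(blocks) * 100

for margin in (5.0, 2.0, 1.0):
    print(f"+/- {margin} bb/100 needs roughly {hands_needed(margin):,} hands")
```

Even a full-time player takes a very long time to log millions of hands, which is the sense in which the winrate stays a guess.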
So there is an argument, to be explored, that the reason we can’t solve chess is because we are not using our biggest computer which is the entirety of our markets.
The reason your points are “moot” or not significant is because there is no theoretically possible “better” way of dealing with AI than having a stable metric of value.
This happens because objective value is perfectly tied to objective morality. That which we all value is that which we all feel is good.
there is an argument [...] that the reason we can’t solve chess is because we are not using our biggest computer which is the entirety of our markets.
“The entirety of our markets” do not have anywhere near enough computational power to solve chess. (At least, not unless someone comes up with a novel way of solving chess that’s much cleverer than anything currently known.)
That which we all value is that which we all feel is good.
It sounds as if this is meant to be shorthand for some sort of argument for your thesis (though I’m not sure exactly what thesis) but if so I am not optimistic about the prospects for the argument’s success given that “we all” don’t value or feel the same things as one another.
“The entirety of our markets” do not have anywhere near enough computational power to solve chess. (At least, not unless someone comes up with a novel way of solving chess that’s much cleverer than anything currently known.)
It is the opinion of some well established (and historical) economics philosophers that markets can determine the optimal distribution of our commodities. Such an endeavor requires computing power at least several orders of magnitude greater than that required to solve chess.
“It sounds as if this is meant to be shorthand for some sort of argument for your thesis (though I’m not sure exactly what thesis) but if so I am not optimistic about the prospects for the argument’s success given that “we all” don’t value or feel the same things as one another.”
You have stepped outside the premise again, which is a stable metric of value; this implies objectivity, which implies we all agree on the value of it. This is the premise.
It is the opinion of some well established [...] economics philosophers that markets can determine the optimal distribution of our commodities. [...]
Let me know when they get their Fields Medals (or perhaps, if it turns out that they’re right but that the ways in which markets do this are noncomputable, their Nobel prizes), and then we can discuss this further.
[...] we all agree on the value of it. This is the premise.
Oh. Then your premise is flatly wrong, since people in fact don’t all agree about value.
(In any case, “objective” doesn’t imply everyone agrees. Whether life on earth has been around for more than a million years is a matter of objective fact, but people manage to disagree about it.)
Well, I am speaking of Hayek, Nash, and Szabo (and Smith), and I don’t think medals make for a strong argument (especially vs the stated fellows).
“Oh. Then your premise is flatly wrong, since people in fact don’t all agree about value.”
By what definition and application of the word premise is it “wrong”? I am suggesting we take the premise as given, and I would like to speak of the implications. Calling it wrong is silly.
“(In any case, “objective” doesn’t imply everyone agrees. Whether life on earth has been around for more than a million years is a matter of objective fact, but people manage to disagree about it.)”
The nature of money is such that “everyone agrees”; that is how it becomes money, and it is therefore “objective”. But I am not yet speaking to that; I am speaking to the premise, which is a value metric that everyone DOES agree on.
I don’t think medals make for a strong argument (especially vs the stated fellows).
Maybe you are misunderstanding my argument, which isn’t “a bunch of clever people think differently, so Hayek et al must be wrong” but “if you are correctly describing what Hayek et al claim, and if they are right about that, then someone has found either an algorithm worthy of the Fields medal or a discovery of non-algorithmic physics worthy of a Nobel prize”.
I am suggesting we take the premise as given, and I would like to speak of the implications.
I am suggesting that if I take at face value what you say about the premise, then it is known to be false, and I am not very interested in taking as given something that is known to be false. (But very likely you do not actually mean to claim what on the face of it you seem to be claiming, namely that everyone actually agrees about what matters.)
The nature of money is such that “everyone agrees”; that is how it becomes money
I think this is exactly wrong. Prices (in a sufficiently free and sufficiently liquid market) tend to equalize, but not because everyone agrees but because when people disagree there are ways to get rich by noticing the fact, and when you do that the result is to move others closer to agreement.
In any case, this only works when you have markets with no transaction costs, and plenty of liquidity. There are many things for which no such markets exist or seem likely to exist. (Random example: I care whether and how dearly my wife loves me. No doubt I would pay, if need and opportunity arose, to have her love me more rather than less. But there is no market in my wife’s love, it’s hard to see how there ever could be, if you tried to make one it’s hard to see how it would actually help anything, and by trading in such a market I would gravely disturb the very thing the market was trying to price. This is not an observation about the fuzziness of the word “love”; essentially all of that would remain true if you operationalized it in terms of affectionate-sounding words, physical intimacy, kind deeds, and so forth.)
Yes, Nash will get the medals for Ideal Money; this is what I am suggesting.
I am not proposing something “false” as a premise. I am saying, assume an objective metric for value exists (and then let’s tend to the ramifications/implications). There is nothing false about that...
What I am saying about money, which you want to suggest is false, is that it is our most objective valuation metric. There is no more objective device for measuring value in this world.
The rest of what you are suggesting is a way of saying we don’t have free markets now, but if we continue to improve we will asymptotically approach it at the limits. Then you might agree that at the limits our money will be stable in the valuation sense and COULD be such a metric (but its value isn’t stable at the present time!).
In regard to your wife’s love, the market values it at a constant in relation to this theoretical notion; that your subjective valuation disagrees with the ultimate objective metric (remember, it’s a premise that doesn’t necessarily exist) doesn’t break the standard.
I am saying, assume an objective metric for value exists. [...] There is nothing false about that
If, in fact, no objective metric for value exists, then there is something false about it. If, less dramatically, your preferred candidate for an objective metric doesn’t exist (or, perhaps better, exists but doesn’t have the properties required of such a metric) and we have no good way of telling whether some other objective metric exists, then there’s something unsatisfactory about it even if not quite “false” (though in that case, indeed, it might be reasonable to say “let’s suppose there is, and see what follows”).
What I am saying about money [...] is that it is our most objective valuation metric.
Ah, now that’s a different claim altogether. Our most objective versus actually objective. Unfortunately, the latter is what we need.
a way of saying we don’t have free markets now, but if we continue to improve we will asymptotically approach it at the limits.
The first part, kinda but only kinda. The second, not so much. Markets can deviate from ideality in ways other than not being “free”. For instance, they can have transaction costs. Not only because of taxation, bid-offer spreads, and the like, but also (and I think unavoidably) because doing things takes effort. They can have granularity problems. (If I have a bunch of books, there is no mechanism by which I can sell half of one of them.) They can simply not exist. Hence, “only kinda”. And I see no reason whatever to expect markets to move inexorably towards perfect freedom, perfect liquidity, zero transaction costs, infinitely fine granularity, etc., etc., etc. Hence “not so much”.
I don’t understand your last paragraph at all. “The market values it at a constant in relation with this theoretical notion”—what theoretical notion? what does it mean to “value it at a constant”? It sounds as if you are saying that I may be wrong about how much I care how much my wife loves me, if “the market” disagrees; that sounds pretty ridiculous but I can’t tell how ridiculous until I understand how the market is supposedly valuing it, which at present I don’t.
If, in fact, no objective metric for value exists, then there is something false about it
I doubt it is accepted logic to suggest a premise is intrinsically false.
If, less dramatically, your preferred candidate for an objective metric doesn’t exist (or, perhaps better, exists but doesn’t have the properties required of such a metric) and we have no good way of telling whether some other objective metric exists, then there’s something unsatisfactory about it even if not quite “false” (though in that case, indeed, it might be reasonable to say “let’s suppose there is, and see what follows”).
Yes this. I will make it satisfactory, in a jiffy.
Ah, now that’s a different claim altogether. Our most objective versus actually objective. Unfortunately, the latter is what we need.
No, we need both. They are both useful, and I present both, in the context of what is useful (and therefore wanted).
The first part, kinda but only kinda. The second, not so much. Markets can deviate from ideality in ways other than not being “free”. For instance, they can have transaction costs. Not only because of taxation, bid-offer spreads, and the like, but also (and I think unavoidably) because doing things takes effort. They can have granularity problems. (If I have a bunch of books, there is no mechanism by which I can sell half of one of them.) They can simply not exist. Hence, “only kinda”. And I see no reason whatever to expect markets to move inexorably towards perfect freedom, perfect liquidity, zero transaction costs, infinitely fine granularity, etc., etc., etc. Hence “not so much”.
Yes, all these things I mean to say, as friction and inefficiency, would suggest it is not free, and you speak to all of Szabo’s articles and Nash’s works, which I am familiar with. But I also say this in a manner such as “provided we continue to evolve rationally” or “provided technology continues to evolve”. I don’t need to prove we WILL evolve rationally and our tech will not take a step back. I don’t need to prove that to show in this thought experiment what the end game is.
I don’t understand your last paragraph at all. “The market values it at a constant in relation with this theoretical notion”—what theoretical notion? what does it mean to “value it at a constant”? It sounds as if you are saying that I may be wrong about how much I care how much my wife loves me, if “the market” disagrees; that sounds pretty ridiculous but I can’t tell how ridiculous until I understand how the market is supposedly valuing it, which at present I don’t.
You aren’t expected to understand how we get to the conclusion, just that there is a basis for value, a unit of it, that everyone accepts. It doesn’t matter if a person disagrees; they still have to use it because the general society has deemed it “that thing”. And “that thing” that we all generally accept is actually called money. I am not saying anything that isn’t completely accepted by society.
Go to a store and try to pay with something other than money. Go try to pay your taxes in a random good. They aren’t accepted. It’s silly to argue you could do this.
I doubt it is accepted logic to suggest a premise is intrinsically false.
I’m not sure what your objection actually is. If someone comes along and says “I have a solution to the problems in the Middle East. Let us first of all suppose that Israel is located in Western Europe and that all Jews and Arabs have converted to Christianity” then it is perfectly in order to say no, those things aren’t actually true, and there’s little point discussing what would follow if they were. If you are seriously claiming that money provides a solution to what previously looked like difficult value-alignment problems because everyone agrees on how much money everything is worth, then this is about as obviously untrue as our hypothetical diplomat’s premise. I expect you aren’t actually saying quite that; perhaps at some point you will clarify just what you are saying.
all these things [...] as friction and inefficiency, would suggest it is not free
Many of them seem to me to have other obvious causes.
“provided we continue to evolve rationally” or “provided technology continues to evolve”
I don’t see much sign that humanity is “evolving rationally”, at least not if that’s meant to mean that we’re somehow approaching perfect rationality. (It’s not even clear what that means without infinite computational resources, which there’s also no reason to think we’re approaching; in fact, there are fundamental physical reasons to think we can’t be.)
You aren’t expected to understand how we get to the conclusion
If you are not interested in explaining how you reach your conclusions, then I am not interested in talking to you. Please let me know whether you are or not, and if not then I can stop wasting my time.
I am not saying anything that isn’t completely accepted by society.
You are doing a good job of giving the impression that you are. There is certainly nothing resembling a consensus across “society” that money answers all questions of value.
I’m not sure what your objection actually is. If someone comes along and says “I have a solution to the problems in the Middle East. Let us first of all suppose that Israel is located in Western Europe and that all Jews and Arabs have converted to Christianity” then it is perfectly in order to say no, those things aren’t actually true, and there’s little point discussing what would follow if they were. If you are seriously claiming that money provides a solution to what previously looked like difficult value-alignment problems because everyone agrees on how much money everything is worth, then this is about as obviously untrue as our hypothetical diplomat’s premise. I expect you aren’t actually saying quite that; perhaps at some point you will clarify just what you are saying.
Yes exactly. You want to say that because the premise is silly or not reality, it cannot be useful. That is wholly untrue, and I think I recall reading an article here about this. Can we not use premises that lead to useful conclusions that don’t rely on the premise? You have no basis for denying that we can. I know this. Can I ask you if we share the definition of ideal: http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/
I don’t see much sign that humanity is “evolving rationally”, at least not if that’s meant to mean that we’re somehow approaching perfect rationality. (It’s not even clear what that means without infinite computational resources, which there’s also no reason to think we’re approaching; in fact, there are fundamental physical reasons to think we can’t be.)
Yes, because you don’t know that our rationality is tied to the quality of our money in the Nashian sense; in other words, if our money is stable in relation to an objective metric for value, then we become (by definition of some objective truth) more rational. I can’t make this point, though, without Nash’s works.
If you are not interested in explaining how you reach your conclusions, then I am not interested in talking to you. Please let me know whether you are or not, and if not then I can stop wasting my time.
Yes I am in the process of it, and you might likely be near understanding, but it takes a moment to present and the mod took my legs out.
You are doing a good job of giving the impression that you are. There is certainly nothing resembling a consensus across “society” that money answers all questions of value.
No that is not what I said or how I said it. Money exists because we need to all agree on the value of something in order to have efficiency in the markets. To say “I don’t agree with the American dollar” doesn’t change that.
You want to say that because the premise is silly or not reality, it cannot be useful.
Not quite. It can be interesting and useful to consider counterfactual scenarios. But I think it’s important to be explicit about them being counterfactual. And, because you can scarcely ever change just one thing about the world, it’s also important to clarify how other things are (counterfactually) changing to accommodate the main change you have in mind.
So, in this case, if I understand you right what you’re actually saying is something like this. “Consider a world in which there is a universally agreed-upon currency that suffers no inflation or deflation, perhaps by being somehow pegged to a basket of other assets of fixed value; and that is immune to other defects X, Y, Z suffered by existing currencies. Suppose that in our hypothetical world there are markets that produce universally-agreed-upon prices for all goods without exception, including abstract ones like “understanding physics” and emotionally fraught ones like “getting on well with one’s parents” and so forth. Then, let us consider what would happen to problems of AI value alignment in such a world. I claim that most of these problems would go away; we could simply tell the AI to seek value as measured by this universally-agreed currency.”
That might make for an interesting discussion (though I think you will need to adjust your tone if you want many people to enjoy discussions with you). But if you try to start the same discussion by saying or implying that there is such a currency, you shouldn’t be surprised if many of the responses you get are mostly saying “oh no there isn’t”.
Even when you do make it clear that this is a counterfactual, you should expect some responses along similar lines. If what someone actually cares about is AI value alignment in the real world, or at least in plausible future real worlds, then a counterfactual like this will be interesting to them only in so far as it actually illuminates the issue in the real world. If the counterfactual world is too different from the real world, it may fail to do that. At the very least, you should be ready to explain the relevance of your counterfactual to the real world. (“We can bring about that world, and we should do so.” “We can make models of what such a currency would actually look like, and use those for value alignment.” “Considering this more convenient world will let us separate out other difficult issues around value alignment.” Or whatever.)
No that is not what I said or how I said it.
OK. But it looks to me as if something like the stronger claim I treated you as making is actually needed for “ideal money” to be any kind of solution to AI value alignment problems. And what you said before was definitely that we do all agree on money, but now you seem to have retreated to the weaker claim that we will or we might or we would in a suitably abstracted world or something.
Actually, it was visible to me too, but I didn’t see any particular need to introduce it to the discussion until such time as Flinter sees fit to do so. (I have seen a blog that I am pretty sure is Flinter’s, and a few other writings on similar topics that I’m pretty sure are also his.)
(My impression is that Flinter thinks something like bitcoin will serve his purposes, but not necessarily bitcoin itself as it now is.)
After painting the picture of what Ideal Money is, Nash explains the intrinsic difficulties of bringing it about. Then he comes up with the concept of “asymptotically ideal money”:
The idea seems paradoxical, but by speaking of “inflation targeting” these responsible officials are effectively CONFESSING…that it is indeed after all possible to control inflation by controlling the supply of money (as if by limiting the amount of individual “prints” that could be made of a work of art being produced as “prints”).~Ideal Money
M. Friedman acquired fame through teaching the linkage between the supply of money and, effectively, its value. In retrospect it seems as if elementary, but Friedman was as if a teacher who re-taught to American economists the classical concept of the “law of supply and demand”, this in connection with money.
Nash explains the parameters of gold in regard to why we have historically valued it; he is methodical, and he also explains gold’s weaknesses in this context.
It’s too difficult to cut to, because the nature of this problem is such that we all have incredible cognitive bias towards not understanding it or seeing it.
But it looks to me as if something like the stronger claim I treated you as making is actually needed for “ideal money” to be any kind of solution to AI value alignment problems.
I did not come here to specifically make claims in regard to AI. What does it mean to ignore Nash’s works, his argument, and the general concept of what Ideal Money is...and then to say that my delivery and argument is weak in regard to AI?
And what you said before was definitely that we do all agree on money, but now you seem to have retreated to the weaker claim that we will or we might or we would in a suitably abstracted world or something.
No you have not understood the nature of money. A money is chosen by the general market, it is propriety. This is what I mean to say in this regard, no more, no less. To tell me you don’t like money therefore not “everyone” uses it is petty and simply perpetuating conflict.
There is nothing to argue about in regard to pointing out that we converge on it, in the sense that we all socially agree to it. If you want to show that I am wrong by saying that you specifically don’t, or one or two people don’t, then you are not interested in dialogue; you are being petty and silly.
It means, in this context, “the first word of the technical term ‘ideal money’ which Flinter has been using, and which I am hoping at some point he will give us his actual definition of”.
If I start by saying there IS such a currency? What does “ideal” mean to you?
You began by saying this:
I would like to suggest, as a blanket observation and proposal, that most of these difficult problems described, especially on a site like this, are easily solvable with the introduction of an objective and ultra-stable metric for valuation.
which, as I said at the time, looks at least as much like “There is such a metric” as like “Let’s explore the consequences of having such a metric”. Then later you said “It converges on money” (not, e.g., “it and money converge on a single coherent metric of value”). Then when asked whether you were saying that Nash has actually found an incorruptible measure of value, you said yes.
I appreciate that when asked explicitly whether such a thing exists you say no. But you don’t seem to be taking any steps to avoid giving the impression that it’s already around.
I did not come here to specifically make claims in regard to AI.
Nope. But you introduced this whole business in the context of AI value alignment, and the possible relevance of your (interpretation of Nash’s) proposal to the Less Wrong community rests partly on its applicability to that sort of problem.
What does it mean to ignore Nash’s works, his argument, and the general concept of what Ideal Money is … and then to say that my delivery and argument is weak in regard to AI?
I’m here discussing this stuff with you. I am not (so far as I am aware) ignoring anything you say. What exactly is your objection? That I didn’t, as soon as you mentioned John Nash, go off and spend a week studying his thoughts on this matter before responding to you? I have read the Nash lecture you linked, and also his earlier paper on Ideal Money published in the Southern Economic Journal. What do you think I am ignoring, and why do you think I am ignoring it?
But your question is an odd one. It seems to be asking, more or less, “How dare you have interests and priorities that differ from mine?”. I hope it’s clear that that question isn’t actually the sort that deserves an answer.
No you have not understood the nature of money. A money is chosen by the general market, it is propriety.
I think I understand the nature of money OK, but I’m not sure I understand what you are saying about it. “A money”? Do you mean a currency, or do you mean a monetary valuation of a good, or something else? What is “the general market”, in a world where there are lots and lots of different markets, many of which use different currencies? In the language I speak, “propriety” mostly means “the quality of being proper” which seems obviously not to be your meaning. It also (much less commonly) means “ownership”, which seems a more likely meaning, but I’m not sure what it actually means to say “money is ownership”. Would you care to clarify?
This is what I mean to say in this regard, no more, no less.
It seems to me entirely different from your earlier statements to which I was replying. Perhaps everything will become clearer when you explain more carefully what you mean by “A money is chosen by the general market, it is propriety”.
To tell me you don’t like money therefore not “everyone” uses it [...]
Clearly our difficulties of communication run both ways. I have told you neither of those things. I like money a great deal, and while indeed not everyone uses it (there are, I think, some societies around that don’t use money) it’s close enough to universally used for most purposes. (Though not everyone uses the same money, of course.)
I genuinely don’t see how to get from anything I have said to “you don’t like money therefore not everyone uses it”.
There is nothing to argue about in regard to pointing out that we converge on it, in the sense that we all socially agree to it.
I think, again, some clarification is called for. When you spoke of “converging on money”, you surely didn’t just mean that (almost) everyone uses money. The claim I thought you were making, in context, was something like this: “If we imagine people getting smarter and more rational without limit, their value systems will necessarily converge to a particular limit, and that limit is money.” (Which, in turn, I take to mean something like this: to decide which of X and Y is better, compute their prices and compare numerically.) It wasn’t clear at the time what sort of “money” you meant, but you said explicitly that the results are knowable and had been found by John Nash. All of this goes much, much further than saying that we all use money, and further than saying that we have (or might in the future hope to have) a consistent set of prices for tradeable goods.
It would be very helpful if you would say clearly and explicitly what you mean by saying that values “converge on money”.
[...] you specifically [...] or one, or two people [...]
I mentioned my own attitudes not in order to say “I am a counterexample, therefore your universal generalization is false” but to say “I am a counterexample, and I see no reason to think I am vastly atypical, therefore your universal generalization is probably badly false”. I apologize if that wasn’t clear enough.
It means, in this context, “the first word of the technical term ‘ideal money’ which Flinter has been using, and which I am hoping at some point he will give us his actual definition of”.
Ideal, by the standard definition, implies that it is conceptual.
You began by saying this:
I would like to suggest, as a blanket observation and proposal, that most of these difficult problems described, especially on a site like this, are easily solvable with the introduction of an objective and ultra-stable metric for valuation.
which, as I said at the time, looks at least as much like “There is such a metric” as like “Let’s explore the consequences of having such a metric”. Then later you said “It converges on money” (not, e.g., “it and money converge on a single coherent metric of value”). Then when asked whether you were saying that Nash has actually found an incorruptible measure of value, you said yes.
Yes he did, and he explains it perfectly. And it’s a device I introduced into the dialogue and showed how it is to be properly used.
I appreciate that when asked explicitly whether such a thing exists you say no. But you don’t seem to be taking any steps to avoid giving the impression that it’s already around.
It’s conceptual in nature.
Nope. But you introduced this whole business in the context of AI value alignment, and the possible relevance of your (interpretation of Nash’s) proposal to the Less Wrong community rests partly on its applicability to that sort of problem.
Yup we’ll get to that.
I’m here discussing this stuff with you. I am not (so far as I am aware) ignoring anything you say. What exactly is your objection? That I didn’t, as soon as you mentioned John Nash, go off and spend a week studying his thoughts on this matter before responding to you? I have read the Nash lecture you linked, and also his earlier paper on Ideal Money published in the Southern Economic Journal. What do you think I am ignoring, and why do you think I am ignoring it?
Nope, those are past sentiments; my new one is that I appreciate the dialogue.
But your question is an odd one. It seems to be asking, more or less, “How dare you have interests and priorities that differ from mine?”. I hope it’s clear that that question isn’t actually the sort that deserves an answer.
Yes, but it’s a product of never actually entering sincere dialogue with intelligent players on the topic of Ideal Money, so I have to be sharp when we are not addressing it and are instead addressing a complex subject, AI, in relation to Ideal Money before understanding Ideal Money (which is FAR more difficult to understand than AI).
I think I understand the nature of money OK, but I’m not sure I understand what you are saying about it. “A money”? Do you mean a currency, or do you mean a monetary valuation of a good, or something else? What is “the general market”, in a world where there are lots and lots of different markets, many of which use different currencies? In the language I speak, “propriety” mostly means “the quality of being proper” which seems obviously not to be your meaning. It also (much less commonly) means “ownership”, which seems a more likely meaning, but I’m not sure what it actually means to say “money is ownership”. Would you care to clarify?
Why aren’t you using generally accepted definitions?
the state or quality of conforming to conventionally accepted standards of behavior or morals.
the details or rules of behavior conventionally considered to be correct.
the condition of being right, appropriate, or fitting.
Yes, money can mean many things, but if we think of the purpose of it and how and why it exists, it is effectively that thing which we all generally agree on. If one or two people play a different game, that doesn’t invalidate the money. Money serves a purpose that involves all of us supporting it through an unwritten social contract. There is nothing else that serves that purpose better. It is the nature of money.
It seems to me entirely different from your earlier statements to which I was replying. Perhaps everything will become clearer when you explain more carefully what you mean by “A money is chosen by the general market, it is propriety”.
Money is the generally accepted form of exchange. There is nothing here to investigate; it’s a simple statement.
Clearly our difficulties of communication run both ways. I have told you neither of those things. I like money a great deal, and while indeed not everyone uses it (there are, I think, some societies around that don’t use money) it’s close enough to universally used for most purposes. (Though not everyone uses the same money, of course.)
Yes.
I genuinely don’t see how to get from anything I have said to “you don’t like money therefore not everyone uses it”.
Money has the quality that it is levated by our collective need for an objective value metric. But if I say “our” and someone says “well you are wrong because not EVERYONE uses money” then I won’t engage with them because they are being dumb.
I think, again, some clarification is called for. When you spoke of “converging on money”, you surely didn’t just mean that (almost) everyone uses money. The claim I thought you were making, in context, was something like this: “If we imagine people getting smarter and more rational without limit, their value systems will necessarily converge to a particular limit, and that limit is money.” (Which, in turn, I take to mean something like this: to decide which of X and Y is better, compute their prices and compare numerically.) It wasn’t clear at the time what sort of “money” you meant, but you said explicitly that the results are knowable and had been found by John Nash. All of this goes much, much further than saying that we all use money, and further than saying that we have (or might in the future hope to have) a consistent set of prices for tradeable goods.
We all converge to money, and to the use of a single money; it is the nature of the universe. It is obvious money will bridge us with AI and help us interact. And yes, this convergence will be such that we will solve all complex problems with it, but we need it to be stable to begin to do that.
So in the future, you will do what money tells you. You won’t say, “I’m going to do something that doesn’t procure much money,” because that would be the irrational thing to do.
It would be very helpful if you would say clearly and explicitly what you mean by saying that values “converge on money”.
Does everyone believe in Christianity? Does everyone converge on it? Does everyone converge on their beliefs about the afterlife?
No, but the nature of money is such that it’s the one thing we all agree on. Again, telling me no we don’t just shows you are stupid. This is an obvious point, it is the purpose of money, and I’m not continuing on this path of dialogue because it’s asinine.
I mentioned my own attitudes not in order to say “I am a counterexample, therefore your universal generalization is false” but to say “I am a counterexample, and I see no reason to think I am vastly atypical, therefore your universal generalization is probably badly false”. I apologize if that wasn’t clear enough.
Yes you live in a reality in which you don’t acknowledge money, and I am supposed to believe that. You don’t use money, you don’t get paid in money, you don’t buy things with money, you don’t save money. And I am supposed to think you are intelligent for pretending this?
We all agree on money; it is the thing we all converge on. Here is the accepted definition of converge:
tend to meet at a point.
approximate in the sum of its terms toward a definite limit.
I think what you’re missing is that metrics are difficult—I’ve written about that point in a number of contexts; www.ribbonfarm.com/2016/06/09/goodharts-law-and-why-measurement-is-hard/
There are more specific metric / goal problems with AI; Eliezer wrote this https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ - and Dario Amodei has been working on it as well; https://openai.com/blog/faulty-reward-functions/ - and there is a lot more in this vein!
Ok. I skimmed it, and I think I understand your post well enough (if not I’ll read deeper!). What I am introducing into the dialogue is a theoretical and conceptually stable unit of value. I am saying, let’s address the problems stated in your articles as if we don’t have the problem of defining our base unit and that it exists and is agreed upon and it is stable for all time.
So here is an example from one of your links:
“Why is alignment hard?
Why expect that this problem is hard? This is the real question. You might ordinarily expect that whoever has taken on the job of building an AI is just naturally going to try to point that in a relatively nice direction. They’re not going to make evil AI. They’re not cackling villains. Why expect that their attempts to align the AI would fail if they just did everything as obviously as possible?
Here’s a bit of a fable. It’s not intended to be the most likely outcome. I’m using it as a concrete example to explain some more abstract concepts later.
With that said: What if programmers build an artificial general intelligence to optimize for smiles? Smiles are good, right? Smiles happen when good things happen.”
Do we see how we can solve this problem now? We simply optimize the AI system for value, and everyone is happy.
If someone creates “bad” AI we could measure that, and use the measurement for a counter program.
“Simply”?
If we had a satisfactory way of doing that then, yes, a large part of the problem would be solved. Unfortunately, that’s because a large part of the problem is that we don’t have a clear notion of “value” that (1) actually captures what humans care about and (2) is precise enough for us to have any prospect of communicating it accurately and reliably to an AI.
Yes but my claim was IF we had such a clear notion of value then most of the problems on this site would be solved (by this site I mean for example what popular cannons are based around as interesting problems). I think you have simply agreed with me.
When you say “problem X is easily solved by Y” it can mean either (1) “problem X is easily solved, and here is how: Y!” or (2) “if only we had Y, then problem X would easily be solved”.
Generally #1 is the more interesting statement, which is why I thought you might be saying it. (That, plus the fact that you refer to “my proposal”, which does rather suggest that you think you have an actual solution, not merely a solution conditional on another hard problem.) It transpires that you’re saying #2. OK. In that case I think I have three comments.
First: yes, given a notion of value that captures what we care about and is sufficiently precise, many of the problems people here worry about become much easier. Thus far, we agree.
Second: it is far from clear that any such notion actually exists, and as yet no one has come up with even a coherent proposal for figuring out what it might actually be. (Some of the old posts I pointed you at elsewhere argue that if there is one then it is probably very complicated and hard to reason about.)
Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI’s values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI—which may work very differently, and think very differently, from the one we start with—will do things we are happy about?
“When you say “problem X is easily solved by Y” it can mean either (1) “problem X is easily solved, and here is how: Y!” or (2) “if only we had Y, then problem X would easily be solved”.”
Yes, I am speaking to (2), and once we understand the value of it, I will explain why it is not insignificant.
“Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI’s values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI—which may work very differently, and think very differently, from the one we start with—will do things we are happy about?”
You would create the first AI to seek value, and then knowing that it is getting smarter and smarter, it would tend towards seeing the value I propose and optimize itself in relation to what I am proposing, by your own admission of how the problem you are stating works.
I am not sure which of two things you are saying.
Thing One: “We program the AI with a simple principle expressed as ‘seek value’. Any sufficiently smart thing programmed to do this will converge on the One True Value System, which when followed guarantees the best available outcomes, so if the AIs get smarter and smarter and they are programmed to ‘seek value’ then they will end up seeking the One True Value and everything will be OK.”
Thing Two: “We program the AI with a perhaps-complicated value system that expresses what really matters to us. We can then be confident that it will program its successors to use the same value system, and they will program their successors to use the same value system, etc. So provided we start out with a value system that produces good outcomes, everything will be OK.”
If you are saying Thing One, then I hope you intend to give us some concrete reason to believe that all sufficiently smart agents converge on a single value system. I personally find that very difficult to believe, and I know I’m not alone in this. (Specifically, Eliezer Yudkowsky, who founded the LW site, has written a bit about how he used to believe something very similar, changed his mind, and now thinks it’s obviously wrong. I don’t know the details of exactly what EY believed or what arguments convinced him he’d been wrong.)
If you are saying Thing Two, then I think you may be overoptimistic about the link between “System S follows values V” and “System S will make sure any new systems it creates also follow values V”. This is not a thing that reliably happens when S is a human being, and it’s not difficult to think of situations in which it’s not what you’d want to happen. (Perhaps S can predict the behaviour of its successor T, and figures out that it will get more V-aligned results if T’s values are something other than V. I’m not sure that this can be plausible when T is S’s smarter successor, but it’s not obvious to me that the possibility can be ruled out.)
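(A toy numerical sketch of the Thing Two worry, with every number assumed purely for illustration rather than drawn from anyone's proposal: if each hand-off from builder to successor preserves the builder's values only approximately, the small deviations compound across generations.)

```python
# Toy sketch (all numbers assumed): if each successor's values deviate slightly
# from its builder's, the deviation compounds across generations of self-modification.

def value_after_generations(initial_alignment: float = 1.0,
                            per_generation_fidelity: float = 0.99,
                            generations: int = 200) -> float:
    """Fraction of the original values still faithfully represented after N hand-offs."""
    alignment = initial_alignment
    for _ in range(generations):
        alignment *= per_generation_fidelity   # each hand-off loses a little
    return alignment

print(value_after_generations())   # ~0.13 after 200 hand-offs at 99% fidelity each
```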
I REALLY appreciate this dialogue. Yup, I am suggesting #1. It’s observable reality that smart agents converge to value the same thing, yes, but that is the wrong way to say it. “Natural evolution will levate (aka create) the thing that all agents will converge to”; this is the correct perspective (or the more valuable perspective). Also, I should think that is obvious to most people here.
Eliezer Y will rethink this when he comes across what I am proposing.
This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things—e.g., us—that produce values.)
My guess is that you are wrong about that; in any case, it certainly isn’t obvious to me.
“This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things—e.g., us—that produce values.)”
I am saying, in a Newtonian vs. quantum science sense, money naturally evolves as a thing that the collective group wants, and I am suggesting this phenomenon will spread to and drive AI. This is both a natural and a rational conclusion, and something favorable that re-solves many paradoxes and difficult problems.
But money is not the correct word; it is an objective metric for value that is the key, because money can also be a poor standard for objective measurement.
Given the actual observed behaviour of markets (e.g., the affair of 2008), I see little grounds for hoping that their preferences will robustly track what humans actually care about, still less that they will do so robustly enough to answer the concerns of people who worry about AI value alignment.
Nash speaks to the crisis of 2008 and explains how it is the lack of an incorruptible standard basis for value that stops us from achieving such a useful market. You can’t target optimal spending for optimal caring, though; I just want to be clear on that.
OK. And has Nash found an incorruptible standard basis for value? Or is this meant to emerge somehow from The Market, borne aloft no doubt by the Invisible Hand? So far, that doesn’t actually seem to be happening.
I’m afraid I don’t understand your last sentence.
Yes. And why do we ignore him?
The things I’ve seen about Nash’s “ideal money” proposal—which, full disclosure, I hadn’t heard of until today, so I make no guarantee to have seen enough—do not seem to suggest that Nash has in fact found an incorruptible standard basis for value. Would you care to say more?
Yup. Firstly, you fully admit that you were, previous to my entry, ignorant of Nash’s life’s work, what he spoke of and wrote about for 20 years, country to country. It is what he fled the US over when he was younger, to exchange his USD for the Swiss franc because it was of superior quality, and over which the US navy tracked him down and took him back in chains (this is accepted, not conspiracy).
Nash absolutely defined an incorruptible basis for valuation, and most people have it labeled as an “ICPI”, an industrial consumption price index. It is effectively an aggregate of stable prices across global commodities, and it can be said that if our money were pegged to it then it would be effectively perfectly stable over time. And of course it would need to be adjusted, which means it is politically corruptible, but Nash’s actual proposal solves for this too:
Ideal Money is an incorruptible basis for value.
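(As a rough illustrative sketch only: the basket, weights, and prices below are hypothetical, and Nash's proposal leaves the choice and periodic adjustment of the actual basket open. An ICPI-style index, and a currency pegged to it, might be computed along these lines.)

```python
# Hypothetical sketch of an ICPI-style commodity index and a peg to it.
# The commodities, weights, and prices below are made up for illustration;
# the actual basket would have to be chosen (and periodically adjusted).

BASE_WEIGHTS = {"copper": 0.4, "crude_oil": 0.35, "silver": 0.25}  # weights sum to 1

def icpi(prices: dict[str, float], weights: dict[str, float] = BASE_WEIGHTS) -> float:
    """Weighted aggregate of commodity prices (arbitrary index units)."""
    return sum(weights[c] * prices[c] for c in weights)

def pegged_unit_value(prices: dict[str, float], base_index: float) -> float:
    """How much one unit of the pegged currency buys relative to the base period.

    If commodity prices quoted in the currency rise, the index rises, and the
    issuer must tighten supply until the index returns to its base level.
    """
    return base_index / icpi(prices)

# Usage: establish a base period, then check the adjustment needed later.
base = icpi({"copper": 4.0, "crude_oil": 70.0, "silver": 25.0})
later = {"copper": 4.4, "crude_oil": 77.0, "silver": 27.5}   # ~10% price inflation
print(pegged_unit_value(later, base))  # ~0.91: the unit buys ~9% less, so the peg requires tightening until this returns to 1.0
```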
Now it is important you attend to this thread; it’s quick, very quick: http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/
First of all, we will all do better without the hectoring tone. But yes, I was ignorant of this. There is scarcely any limit to the things I don’t know about. However, nothing I have read about Nash suggests to me that it’s correct to describe “ideal money” as “his life’s work”.
You are not helping your case here. He was, at this point, suffering pretty badly from schizophrenia. And so far as I can tell, the reasons he himself gave for leaving the US were nothing to do with the quality of US and Swiss money.
Let me see if I’ve understood this right. You want a currency pegged to some basket of goods (“global commodities”, as you put it), which you will call “ideal money”. You then want to convert everything to money according to the prices set by a perfectly efficient infinitely liquid market, even though no such market has ever existed and no market at all is ever likely to exist for many of the things people actually care about. And you think this is a suitable foundation for the values of an AI, as a response to people who worry about the values of an AI whose vastly superhuman intellect will enable it to transform our world beyond all recognition.
What exactly do you expect to happen to the values of those “global commodities” in the presence of such an AI?
(Yep.)
But nothing in what you quote does any such thing.
It may be important to you that I do so, but right now I have other priorities. Maybe tomorrow.
He lectured and wrote on the topic for the last 20 years of his life, and it is something he had been developing in his 30s.
Yes, he was running around saying the governments are colluding against the people and he was going to be their savior. In Ideal Money he explains how the Keynesian view of economics is comparable to Bolshevik communism. These are facts, and they show that he never abandoned his views when he was “schizophrenic”, and that they are in fact based on rational thinking. And yes, it is by his own admission that this is why he fled the US and renounced his citizenship.
Yup, exactly, and we are to create AI that bases its decisions on optimizing value in relation to procuring what would effectively be “ideal money”.
I don’t need to do anything to show Nash made such a proposal of a unit of value except quote him saying it is his intention. I don’t need to put the unit in your hand.
It’s simple and quick: your definition of ideal is not in line with the standard definition. Google it.
He was also running around saying that he was Pope John XXIII because 23 was his favourite prime number. And refusing academic positions which would have given him a much better platform for advocating currency reform (had he wanted to do that) on the basis that he was already scheduled to start working as Emperor of Antarctica. And saying he was communicating with aliens from outer space.
Of course that doesn’t mean that everything he did was done for crazy reasons. But it does mean that the fact that he said or did something at this time is not any sort of evidence that it makes sense.
Could you point me to more information about the reasons he gave for leaving the US and trying to renounce his citizenship? I had a look in Nasar’s book, which is the only thing about Nash I have on my shelves, and (1) there is no index entry for “ideal money” (the concept may crop up but not be indexed, of course) and (2) its account of Nash’s time in Geneva and Paris is rather vague about why he wanted to renounce his citizenship (and indeed about why he was brought back to the US).
Let me then repeat my question. What do you expect to happen to the values of those “global commodities” in the presence of an AI whose capabilities are superhuman enough to make value alignment an urgent issue? Suppose the commodities include (say) gold, Intel CPUs and cars, and the AI finds an energy-efficient way to make gold, designs some novel kind of quantum computing device that does what the Intel chips do but a million times faster, and figures out quantum gravity and uses it to invent a teleportation machine that works via wormholes? How are prices based on a basket of gold, CPUs and cars going to remain stable in that kind of situation?
[EDITED to add:] Having read a bit more about Nash’s proposal, it looks as if he had in mind minerals rather than manufactured goods; so gold might be on the list but probably not CPUs or cars. The point stands, and indeed Nash explicitly said that gold on its own wasn’t a good choice because of possible fluctuations in availability. I suggest that if we are talking about scenarios of rapid technological change, anything may change availability rapidly; and if we’re talking about scenarios of rapid technological change driven by a super-capable AI, that availability change may be under the AI’s control. None of this is good if we’re trying to use “ideal money” thus defined as a basis for the AI’s values.
(Of course it may be that no such drastic thing ever happens, either because fundamental physical laws prevent it or because we aren’t able to make an AI smart enough. But this is one of the situations people worried about AI value alignment are worried about, and the history of science and technology isn’t exactly short of new technologies that would have looked miraculous before the relevant discoveries were made.)
Or: suppose instead that the AI is superhumanly good at predicting and manipulating markets. (This strikes me as an extremely likely thing for people to try to make AIs do, and also a rather likely early step for a superintelligent but not yet superpowered AI trying to increase its influence.) How confident are you that the easiest way to achieve some goal expressed in terms of values cashed out in “ideal money” won’t be to manipulate the markets to change the correspondence between “ideal money” and other things in the world?
But what you quoted him as saying he wanted was not (so far as I can tell) the same thing as you are now saying he wanted. We are agreed that Nash thought “ideal money” could be a universal means of valuing oil and bricks and computers. (With the caveat that I haven’t read anything like everything he said and wrote about this, and what I have read isn’t perfectly clear; so to some extent I’m taking your word for it.) But what I haven’t yet seen any sign of is that Nash thought “ideal money” could also be a universal means of valuing internal subjective things (e.g., contentment) or interpersonal things not readily turned into liquid markets (e.g., sincere declarations of love) or, in short, anything not readily traded on zero-overhead negligible-latency infinitely-liquid markets.
I have (quite deliberately) not been assuming any particular definition of “ideal”; I have been taking “ideal money” as a term of art whose meaning I have attempted to infer from what you’ve said about it and what I’ve seen of Nash’s words. Of course I may have misunderstood, but not by using the wrong definition of “ideal” because I have not been assuming that “ideal money” = “money that is ideal in any more general sense”.
No you aren’t going to tell Nash how he could have brought about Ideal Money. In regard to, for example, communicating with aliens, again you are being wholly ignorant. Consider this (from Ideal Money):
See? He has been “communicating” with aliens. He was using his brain to think beyond not just nations and continents but worlds. “What would it be like for outside observers?” Is he not allowed to ask these questions? Do we not find it useful to think about how extraterrestrials would have an effect on a certain problem like our currency systems? And you call this “crazy”? Why can’t Nash make theories based on civilizations external to ours without you calling him crazy?
See, he was being logical, but people like you can’t understand him.
This is the most sick (ill) paragraph I have traversed in a long time. You have said “Nash was saying crazy things, so he was sick, therefore the things he was saying were crazy, and so we have to take them with a grain of salt.”
Nash birthed modern complexity theory at that time and did many other amazing things when he was “sick”. He also recovered from his mental illness not because of medication but by willing himself so. These are accepted points in his bio. He says he started to reject politically orientated thinking and return to a more logical basis (in other words he realized running around telling everyone he is a god isn’t helping any argument).
“I emerged from a time of mental illness, or what is generally called mental illness...”
″...you could say I grew out of it.”
Those are relevant quotes otherwise.
https://www.youtube.com/watch?v=7Zb6_PZxxA0 It starts at 12:40, but he explains about the francs at 13:27: “When I did become disturbed I changed my money into Swiss francs.”
There is another interview where he explains that the US navy took him back in chains; I can’t recall the video.
You are messing up (badly) the accepted definition of ideal. Nonetheless Nash deals with your concerns:
No, he gets more explicit though, and so something like CPUs etc. would be sort of reasonable (but I think it’s probably better to look at the underlying commodities used for these things). For example:
Yes, we are simultaneously saying the super smart thing to do would be to have ideal money (i.e., money comparable to an optimally chosen basket of industrial commodity prices), while also worrying that a super smart entity wouldn’t support the smart action. It’s clear FUD.
Ideal money is not corruptible. Your definition of ideal is not accepted as standard.
I might ask if you think such things could be quantified WITHOUT a standard basis for value? I mean, it’s strawmanny. Nash has an incredible proposal with a very long and intricate argument, but you are stuck arguing MY extrapolation, without understanding the underlying base argument by Nash. Gotta walk first: “What IS Ideal Money?”
Yes, this is a mistake, and it’s an amazing one to see from everyone. Thank you for at least partially addressing his work.
Another point—“What I am introducing into the dialogue is a theoretical and conceptually stable unit of value.”
I argued that, without a full and perfect system model, creating perfectly aligned metrics, like a unit of value, is impossible. (To be fair, I really argued that point in the follow-up piece: www.ribbonfarm.com/2016/09/29/soft-bias-of-underspecified-goals/ ) So if our model for human values is simplified in any way, it’s impossible to guarantee convergence to the same goal without a full and perfect systems model to test it against.
“If someone creates “bad” AI we could measure that, and use the measurement for a counter program.”
(I’m just going to address this point in this comment.) The space of potential bad programs is vast—and the opposite of a disastrous values misalignment is almost always a different values misalignment, not alignment.
In two dimensions, think of a misaligned wheel; it’s very unlikely to be exactly 180 degrees (or 90 degrees) away from proper alignment. Pointing the car in a relatively nice direction is better than pointing it straight at the highway divider wall—but even a slight misalignment will eventually lead to going off-road. And the worry is that we need to have a general solution before we allow the car to get to 55 MPH, much less 100+. But you argue that we can measure the misalignment. True! If we had a way to measure the angle between its alignment and the correct one, we could ignore the misaligned wheel angle and simply minimize the misalignment, which means the measure of divergence implicitly contains the correct alignment.
For an AI value function, the same is true. If we had a measure of misalignment, we could minimize it. The tricky part is that we don’t have such a metric, and any correct such metric would be implicitly equivalent to solving the original problem. Perhaps this is a fruitful avenue, since recasting the problem this way can help—and it’s similar to some of the approaches I’ve heard Dario Amodei mention regarding value alignment in machine learning systems. So it’s potentially a good insight, but insufficient on its own.
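(A minimal sketch of that point, with a made-up three-dimensional "value vector" and target: once a misalignment score exists, minimizing it is routine, but writing the score down already presupposes knowing the correct target.)

```python
# Toy illustration: if we can score misalignment, minimizing it is easy;
# the hard part is that the scoring function itself encodes the "correct" values.
# The 3-dimensional value vector and its target are hypothetical.

TRUE_VALUES = [0.7, 0.2, 0.1]   # the "correct alignment" -- which is exactly what we don't have

def misalignment(candidate: list[float]) -> float:
    """Squared distance from the true values: writing this down presupposes knowing them."""
    return sum((c - t) ** 2 for c, t in zip(candidate, TRUE_VALUES))

def minimize(candidate: list[float], lr: float = 0.1, steps: int = 200) -> list[float]:
    """Plain gradient descent on the misalignment score."""
    for _ in range(steps):
        grads = [2 * (c - t) for c, t in zip(candidate, TRUE_VALUES)]
        candidate = [c - lr * g for c, g in zip(candidate, grads)]
    return candidate

print(minimize([0.0, 1.0, 0.0]))  # converges to TRUE_VALUES, but only because we already knew them
```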
If someone creates “bad” AI then we may all be dead before we have the chance to “use the measurement for a counter program”. (Taking “AI” here to mean “terrifyingly superintelligent AI”, because that’s the scenario we’re particularly keen to defuse. If it turns out that that isn’t possible, or that it’s possible but takes centuries, then these problems are much less important.)
That’s sort of moot for two reasons. Firstly, what I have proposed would be the game-theoretically optimal approach to solving the problem of a super “terrbad” AI. There is no better approach against such a player. I would also suggest there is no other reasonable approach. And so this speaks to the speed in relation to other possible proposed solutions.
Now of course we are still being theoretical here, but it’s relevant to point that out.
The currently known means for finding game-theoretically optimal choices are, shall we say, impractical in this sort of situation. I mean, chess is game-theoretically trivial (in terms of the sort of game theory I take it you have in mind) -- but actually finding an optimal strategy involves vastly more computation than we have any means of deploying, and even finding a strategy good enough to play as well as the best human players took multiple decades of work by many smart people and a whole lot of Moore’s law.
Perhaps I’m not understanding your argument, though. Why does what you say make what I say “sort of moot”?
So let’s take poker for example. I have argued (let’s take it as an assumption, which should be fine) that poker players never have enough empirical evidence to know their own winrates. It’s always a guess, and since the game isn’t solved they are really guessing about whether they are profitable and how profitable they are. IF they had a standard basis for value then it could be arranged that players brute-force the solution to poker. That is to say, if players knew who was playing correctly then they would tend towards the correct player’s strategy.
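(A rough sketch of the winrate point, assuming a typical per-100-hand standard deviation of about 90 big blinds, a figure that varies by game and is only an estimate: even very large samples leave wide uncertainty about whether a player is a winner at all.)

```python
# Rough sketch of the "you never know your winrate" point.
# The per-100-hand standard deviation of 90 bb is an assumed, typical figure.
import math

def winrate_confidence_interval(observed_bb_per_100: float, hands: int,
                                stdev_per_100: float = 90.0, z: float = 1.96):
    """95% confidence interval for a poker winrate measured in bb/100."""
    n_blocks = hands / 100
    stderr = stdev_per_100 / math.sqrt(n_blocks)
    return observed_bb_per_100 - z * stderr, observed_bb_per_100 + z * stderr

# Even after 100,000 hands, a measured 5 bb/100 winner can't rule out being a loser:
print(winrate_confidence_interval(5.0, 100_000))   # roughly (-0.6, 10.6)
```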
So there is an argument, to be explored, that the reason we can’t solve chess is that we are not using our biggest computer, which is the entirety of our markets.
The reason your points are “moot”, or not significant, is that there is no theoretically possible “better” way of dealing with AI than having a stable metric of value.
This happens because objective value is perfectly tied to objective morality. That which we all value is that which we all feel is good.
“The entirety of our markets” do not have anywhere near enough computational power to solve chess. (At least, not unless someone comes up with a novel way of solving chess that’s much cleverer than anything currently known.)
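(To put rough numbers on that, using order-of-magnitude estimates only: commonly cited ballparks for chess, and a deliberately generous guess at total global computing power, still leave a shortfall of many orders of magnitude.)

```python
# Back-of-envelope (rough, order-of-magnitude figures only): why "all the markets"
# can't brute-force chess. The estimates below are commonly cited ballparks, not exact.

legal_positions = 1e44          # upper-bound estimates are around 10^44 to 10^46
game_tree_size  = 1e120         # Shannon's classic game-tree estimate
world_compute   = 1e21          # generous guess at total global compute, ops/second
seconds_per_century = 3.15e9

ops_available = world_compute * seconds_per_century
print(f"ops in a century: {ops_available:.1e}")                          # ~3e30
print(f"shortfall vs positions: {legal_positions / ops_available:.1e}")  # ~10^13 too few
print(f"shortfall vs game tree: {game_tree_size / ops_available:.1e}")   # ~10^89 too few
```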
It sounds as if this is meant to be shorthand for some sort of argument for your thesis (though I’m not sure exactly what thesis) but if so I am not optimistic about the prospects for the argument’s success given that “we all” don’t value or feel the same things as one another.
It is the opinion of some well-established (and historical) economic philosophers that markets can determine the optimal distribution of our commodities. Such an endeavor requires computing power at least several orders of magnitude higher than that required to solve chess.
“It sounds as if this is meant to be shorthand for some sort of argument for your thesis (though I’m not sure exactly what thesis) but if so I am not optimistic about the prospects for the argument’s success given that “we all” don’t value or feel the same things as one another.”
You have stepped outside the premise again. The premise is a stable metric of value; this implies objectivity, which implies we all agree on the value of it. That is the premise.
Let me know when they get their Fields Medals (or perhaps, if it turns out that they’re right but that the ways in which markets do this are noncomputable) their Nobel prizes, and then we can discuss this further.
Oh. Then your premise is flatly wrong, since people in fact don’t all agree about value.
(In any case, “objective” doesn’t imply everyone agrees. Whether life on earth has been around for more than a million years is a matter of objective fact, but people manage to disagree about it.)
Well, I am speaking of Hayek, Nash, and Szabo (and Smith), and I don’t think medals make for a strong argument (especially vs. the stated fellows).
“Oh. Then your premise is flatly wrong, since people in fact don’t all agree about value.”
By what definition and application of the word premise, is it “wrong”? I am suggesting we take the premise as given, and I would like to speak of the implications. Calling it wrong is silly.
“(In any case, “objective” doesn’t imply everyone agrees. Whether life on earth has been around for more than a million years is a matter of objective fact, but people manage to disagree about it.)”
The nature of money is such that “everyone agrees”; that is how it becomes money, and it is therefore “objective”. But I am not yet speaking to that; I am speaking to the premise, which is a value metric that everyone DOES agree on.
Maybe you are misunderstanding my argument, which isn’t “a bunch of clever people think differently, so Hayek et al must be wrong” but “if you are correctly describing what Hayek et al claim, and if they are right about that, then someone has found either an algorithm worthy of the Fields medal or a discovery of non-algorithmic physics worthy of a Nobel prize”.
I am suggesting that if I take at face value what you say about the premise, then it is known to be false, and I am not very interesting in taking as given something that is known to be false. (But very likely you do not actually mean to claim what on the face of it you seem to be claiming, namely that everyone actually agrees about what matters.)
I think this is exactly wrong. Prices (in a sufficiently free and sufficiently liquid market) tend to equalize, but not because everyone agrees but because when people disagree there are ways to get rich by noticing the fact, and when you do that the result is to move others closer to agreement.
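(A toy sketch of that mechanism, with invented venues, prices, and an assumed price-impact factor: an arbitrageur who notices two venues disagreeing buys low and sells high, and the trading itself pulls the two quotes together, without anyone having "agreed" on the right price beforehand.)

```python
# Toy sketch of price equalization via arbitrage; all numbers are invented.

def arbitrage_step(price_a: float, price_b: float, impact: float = 0.3):
    """One round of buying on the cheap venue and selling on the dear one.

    `impact` is an assumed price-impact factor: how much each trade moves a venue's price.
    """
    if price_a == price_b:
        return price_a, price_b, 0.0
    low, high = sorted((price_a, price_b))
    profit = high - low                    # per unit traded, before costs
    low += impact * profit                 # buying pressure raises the low price
    high -= impact * profit                # selling pressure lowers the high price
    return (low, high, profit) if price_a < price_b else (high, low, profit)

a, b = 98.0, 104.0
for _ in range(5):
    a, b, p = arbitrage_step(a, b)
    print(f"venue A: {a:.2f}  venue B: {b:.2f}  arb profit/unit: {p:.2f}")
# The quotes converge even though no one ever "agreed" on the right price.
```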
In any case, this only works when you have markets with no transaction costs, and plenty of liquidity. There are many things for which no such markets exist or seem likely to exist. (Random example: I care whether and how dearly my wife loves me. No doubt I would pay, if need and opportunity arose, to have her love me more rather than less. But there is no market in my wife’s love, it’s hard to see how there ever could be, if you tried to make one it’s hard to see how it would actually help anything, and by trading in such a market I would gravely disturb the very thing the market was trying to price. This is not an observation about the fuzziness of the word “love”; essentially all of that would remain true if you operationalized it in terms of affectionate-sounding words, physical intimacy, kind deeds, and so forth.)
Yes, Nash will get the medals for Ideal Money; this is what I am suggesting.
I am not proposing something “false” as a premise. I am saying, assume an objective metric for value exists (and then let’s tend to the ramifications/implications). There is nothing false about that...
What I am saying about money, which you want to suggest is false, is that it is our most objective valuation metric. There is no more objective device for measuring value in this world.
The rest of what you are suggesting is a way of saying we don’t have free markets now, but if we continue to improve we will asymptotically approach them at the limit. Then you might agree that at the limit our money will be stable in the valuation sense and COULD be such a metric (but its value isn’t stable at the present time!).
In regard to your wife’s love, the market values it at a constant in relation to this theoretical notion; that your subjective valuation disagrees with the ultimate objective metric (remember, it’s a premise that doesn’t necessarily exist) doesn’t break the standard.
If, in fact, no objective metric for value exists, then there is something false about it. If, less dramatically, your preferred candidate for an objective metric doesn’t exist (or, perhaps better, exists but doesn’t have the properties required of such a metric) and we have no good way of telling whether some other objective metric exists, then there’s something unsatisfactory about it even if not quite “false” (though in that case, indeed, it might be reasonable to say “let’s suppose there is, and see what follows”).
Ah, now that’s a different claim altogether. Our most objective versus actually objective. Unfortunately, the latter is what we need.
The first part, kinda but only kinda. The second, not so much. Markets can deviate from ideality in ways other than not being “free”. For instance, they can have transaction costs. Not only because of taxation, bid-offer spreads, and the like, but also (and I think unavoidably) because doing things takes effort. They can have granularity problems. (If I have a bunch of books, there is no mechanism by which I can sell half of one of them.) They can simply not exist. Hence, “only kinda”. And I see no reason whatever to expect markets to move inexorably towards perfect freedom, perfect liquidity, zero transaction costs, infinitely fine granularity, etc., etc., etc. Hence “not so much”.
I don’t understand your last paragraph at all. “The market values it at a constant in relation with this theoretical notion”—what theoretical notion? what does it mean to “value it at a constant”? It sounds as if you are saying that I may be wrong about how much I care how much my wife loves me, if “the market” disagrees; that sounds pretty ridiculous but I can’t tell how ridiculous until I understand how the market is supposedly valuing it, which at present I don’t.
I doubt it is accepted logic to suggest a premise is intrinsically false.
If, less dramatically, your preferred candidate for an objective metric doesn’t exist (or, perhaps better, exists but doesn’t have the properties required of such a metric) and we have no good way of telling whether some other objective metric exists, then there’s something unsatisfactory about it even if not quite “false” (though in that case, indeed, it might be reasonable to say “let’s suppose there is, and see what follows”).
Yes this. I will make it satisfactory, in a jiffy.
No we need both. They are both useful, and I present both, in the context of what is useful (and therefore wanted).
Yes, all these things I mean to say, such as friction and inefficiency, would suggest it is not free, and you speak to all of Szabo’s articles and Nash’s works, which I am familiar with. But I also say this in a manner such as “provided we continue to evolve rationally” or “provided technology continues to evolve”. I don’t need to prove we WILL evolve rationally and that our tech will not take a step back. I don’t need to prove that to show in this thought experiment what the end game is.
You aren’t expected to understand how we get to the conclusion, just that there is a basis for value, a unit of it, that everyone accepts. It doesn’t matter if a person disagrees, they still have to use it because the general society has deemed it “that thing”. And “that thing” that we all generally accept is actually called money. I am not saying anything that isn’t completely accepted by society.
Go to a store and try to pay with something other than money. Go try to pay your taxes in a random good. They aren’t accepted. It’s silly to argue you could do this.
I’m not sure what your objection actually is. If someone comes along and says “I have a solution to the problems in the Middle East. Let us first of all suppose that Israel is located in Western Europe and that all Jews and Arabs have converted to Christianity” then it is perfectly in order to say no, those things aren’t actually true, and there’s little point discussing what would follow if they were. If you are seriously claiming that money provides a solution to what previously looked like difficult value-alignment problems because everyone agrees on how much money everything is worth, then this is about as obviously untrue as our hypothetical diplomat’s premise. I expect you aren’t actually saying quite that; perhaps at some point you will clarify just what you are saying.
Many of them seem to me to have other obvious causes.
I don’t see much sign that humanity is “evolving rationally”, at least not if that’s meant to mean that we’re somehow approaching perfect rationality. (It’s not even clear what that means without infinite computational resources, which there’s also no reason to think we’re approaching; in fact, there are fundamental physical reasons to think we can’t be.)
If you are not interested in explaining how you reach your conclusions, then I am not interested in talking to you. Please let me know whether you are or not, and if not then I can stop wasting my time.
You are doing a good job of giving the impression that you are. There is certainly nothing resembling a consensus across “society” that money answers all questions of value.
Yes, exactly. You want to say that because the premise is silly, or not reality, it cannot be useful. That is wholly untrue, and I think I recall reading an article here about this. Can we not use premises that lead to useful conclusions that don’t rely on the premise? You have no basis for denying that we can. I know this. Can I ask you if we share the definition of ideal: http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/
Yes, because you don’t know that our rationality is tied to the quality of our money in the Nashian sense; or, in other words, if our money is stable in relation to an objective metric for value then we become (by definition of some objective truth) more rational. I can’t make this point, though, without Nash’s works.
Yes I am in the process of it, and you might likely be near understanding, but it takes a moment to present and the mod took my legs out.
No that is not what I said or how I said it. Money exists because we need to all agree on the value of something in order to have efficiency in the markets. To say “I don’t agree with the American dollar” doesn’t change that.
Not quite. It can be interesting and useful to consider counterfactual scenarios. But I think it’s important to be explicit about them being counterfactual. And, because you can scarcely ever change just one thing about the world, it’s also important to clarify how other things are (counterfactually) changing to accommodate the main change you have in mind.
So, in this case, if I understand you right what you’re actually saying is something like this. “Consider a world in which there is a universally agreed-upon currency that suffers no inflation or deflation, perhaps by being somehow pegged to a basket of other assets of fixed value; and that is immune to other defects X, Y, Z suffered by existing currencies. Suppose that in our hypothetical world there are markets that produce universally-agreed-upon prices for all goods without exception, including abstract ones like “understanding physics” and emotionally fraught ones like “getting on well with one’s parents” and so forth. Then, let us consider what would happen to problems of AI value alignment in such a world. I claim that most of these problems would go away; we could simply tell the AI to seek value as measured by this universally-agreed currency.”
That might make for an interesting discussion (though I think you will need to adjust your tone if you want many people to enjoy discussions with you). But if you try to start the same discussion by saying or implying that there is such a currency, you shouldn’t be surprised if many of the responses you get are mostly saying “oh no there isn’t”.
Even when you do make it clear that this is a counterfactual, you should expect some responses along similar lines. If what someone actually cares about is AI value alignment in the real world, or at least in plausible future real worlds, then a counterfactual like this will be interesting to them only in so far as it actually illuminates the issue in the real world. If the counterfactual world is too different from the real world, it may fail to do that. At the very least, you should be ready to explain the relevance of your counterfactual to the real world. (“We can bring about that world, and we should do so.” “We can make models of what such a currency would actually look like, and use those for value alignment.” “Considering this more convenient world will let us separate out other difficult issues around value alignment.” Or whatever.)
OK. But it looks to me as if something like the stronger claim I treated you as making is actually needed for “ideal money” to be any kind of solution to AI value alignment problems. And what you said before was definitely that we do all agree on money, but now you seem to have retreated to the weaker claim that we will or we might or we would in a suitably abstracted world or something.
I have a suspicion that there is a word hanging above your discussion, visible to Flinter but not to you. It starts with “bit” and ends with “coin”.
Ideal Money is an enthymeme. But Nash speaks FAR beyond the advent of an international e-currency with a stably issued supply.
Actually, it was visible to me too, but I didn’t see any particular need to introduce it to the discussion until such time as Flinter sees fit to do so. (I have seen a blog that I am pretty sure is Flinter’s, and a few other writings on similar topics that I’m pretty sure are also his.)
(My impression is that Flinter thinks something like bitcoin will serve his purposes, but not necessarily bitcoin itself as it now is.)
After painting the picture of what Ideal Money is, Nash explains the intrinsic difficulties of bringing it about. Then he comes up with the concept of “asymptotically ideal money”:
Nash explains the parameters of gold in regard to why we have historically valued it; he is methodical, and he also explains gold’s weaknesses in this context.
I’m impatient and prefer to cut to the chase :-)
It’s too difficult to cut to, because the nature of this problem is such that we all have an incredible cognitive bias towards not understanding it or seeing it.
It means, in this context, “the first word of the technical term ‘ideal money’ which Flinter has been using, and which I am hoping at some point he will give us his actual definition of”.
You began by saying this:
which, as I said at the time, looks at least as much like “There is such a metric” as like “Let’s explore the consequences of having such a metric”. Then later you said “It converges on money” (not, e.g., “it and money converge on a single coherent metric of value”). Then when asked whether you were saying that Nash has actually found an incorruptible measure of value, you said yes.
I appreciate that when asked explicitly whether such a thing exists you say no. But you don’t seem to be taking any steps to avoid giving the impression that it’s already around.
Nope. But you introduced this whole business in the context of AI value alignment, and the possible relevance of your (interpretation of Nash’s) proposal to the Less Wrong community rests partly on its applicability to that sort of problem.
I’m here discussing this stuff with you. I am not (so far as I am aware) ignoring anything you say. What exactly is your objection? That I didn’t, as soon as you mentioned John Nash, go off and spend a week studying his thoughts on this matter before responding to you? I have read the Nash lecture you linked, and also his earlier paper on Ideal Money published in the Southern Economic Journal. What do you think I am ignoring, and why do you think I am ignoring it?
But your question is an odd one. It seems to be asking, more or less, “How dare you have interests and priorities that differ from mine?”. I hope it’s clear that that question isn’t actually the sort that deserves an answer.
I think I understand the nature of money OK, but I’m not sure I understand what you are saying about it. “A money”? Do you mean a currency, or do you mean a monetary valuation of a good, or something else? What is “the general market”, in a world where there are lots and lots of different markets, many of which use different currencies? In the language I speak, “propriety” mostly means “the quality of being proper” which seems obviously not to be your meaning. It also (much less commonly) means “ownership”, which seems a more likely meaning, but I’m not sure what it actually means to say “money is ownership”. Would you care to clarify?
It seems to me entirely different from your earlier statements to which I was replying. Perhaps everything will become clearer when you explain more carefully what you mean by “A money is chosen by the general market, it is propriety”.
Clearly our difficulties of communication run both ways. I have told you neither of those things. I like money a great deal, and while indeed not everyone uses it (there are, I think, some societies around that don’t use money) it’s close enough to universally used for most purposes. (Though not everyone uses the same money, of course.)
I genuinely don’t see how to get from anything I have said to “you don’t like money therefore not everyone uses it”.
I think, again, some clarification is called for. When you spoke of “converging on money”, you surely didn’t just mean that (almost) everyone uses money. The claim I thought you were making, in context, was something like this: “If we imagine people getting smarter and more rational without limit, their value systems will necessarily converge to a particular limit, and that limit is money.” (Which, in turn, I take to mean something like this: to decide which of X and Y is better, compute their prices and compare numerically.) It wasn’t clear at the time what sort of “money” you meant, but you said explicitly that the results are knowable and had been found by John Nash. All of this goes much, much further than saying that we all use money, and further than saying that we have (or might in the future hope to have) a consistent set of prices for tradeable goods.
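To make that reading concrete, here is a minimal sketch of the decision rule it would imply, assuming (purely for illustration) that a single agreed price function already existed; the option names, the prices, and the `better` helper are all invented for the example, not anything taken from Nash or from you:

```python
# Minimal sketch: reading "values converge on money" as "to decide which of
# X and Y is better, compute their prices and compare numerically".
# The price table below is invented purely for illustration.

from typing import Callable

Price = Callable[[str], float]  # maps an option to a price in some agreed unit


def better(x: str, y: str, price: Price) -> str:
    """Return the option with the higher price -- the decision rule implied
    by treating a single stable price as the measure of value."""
    return x if price(x) >= price(y) else y


# Hypothetical prices, chosen only so the example runs.
example_prices = {"option_X": 120.0, "option_Y": 95.0}

print(better("option_X", "option_Y", lambda option: example_prices[option]))
# -> option_X
```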
It would be very helpful if you would say clearly and explicitly what you mean by saying that values “converge on money”.
I mentioned my own attitudes not in order to say “I am a counterexample, therefore your universal generalization is false” but to say “I am a counterexample, and I see no reason to think I am vastly atypical, therefore your universal generalization is probably badly false”. I apologize if that wasn’t clear enough.
Ideal, by the standard definition, implies that it is conceptual.
Yes he did, and he explains it perfectly. And it’s a device; I introduced it into the dialogue and showed how it is to be properly used.
It’s conceptual in nature.
Yup, we’ll get to that.
Nope, those are past sentiments; my new one is that I appreciate the dialogue.
Yes, but it’s a product of never actually entering sincere dialogue with intelligent players on the topic of Ideal Money, so I have to be sharp when we are not addressing it and are instead addressing a complex subject, AI, in relation to Ideal Money before understanding Ideal Money (which is FAR more difficult to understand than AI).
Why aren’t you using generally accepted definitions?
Yes, money can mean many things, but if we think of its purpose and of how and why it exists, it is effectively that thing which we all generally agree on. If one or two people play a different game, that doesn’t invalidate the money. Money serves a purpose that involves all of us supporting it through an unwritten social contract. There is nothing else that serves that purpose better. It is the nature of money.
Money is the generally accepted form of exchange. There is nothing here to investigate; it’s a simple statement.
Yes.
Money has the quality that it is levated by our collective need for an objective value metric. But if I say “our” and someone says “well, you are wrong because not EVERYONE uses money”, then I won’t engage with them, because they are being dumb.
We all converge on money, and on the use of a single money; it is the nature of the universe. It is obvious that money will bridge us with AI and help us interact. And yes, this convergence will be such that we will solve all complex problems with it, but we need it to be stable before we can begin to do that.
So in the future, you will do what money tells you. You won’t say, “I’m going to do something that doesn’t procure much money,” because that would be the irrational thing to do.
Does everyone believe in Christianity? Does everyone converge on it? Does everyone converge on their beliefs in the after life?
No, but the nature of money is such that it’s the one thing we all agree on. Again, telling me “no we don’t” just shows you are stupid. This is an obvious point, it is the purpose of money, and I’m not continuing on this path of dialogue because it’s asinine.
Yes, you live in a reality in which you don’t acknowledge money, and I am supposed to believe that. You don’t use money, you don’t get paid in money, you don’t buy things with money, you don’t save money. And I am supposed to think you are intelligent for pretending this?
We all agree on money; it is the thing we all converge on. Here is the accepted definition of “converge”: