Aside: The nub of my problem with rationality-as-winning is this: I think it important that people believe what I say, so I strive for something close to Quakerism. That means I might lose in the short term on some of the things I care about. I really want a group of people that I can trust to be truth-seeking and also truth-saying. LW had an emphasis on that, and rationalists seem to be slipping away from it with “rationality is about winning”.
Throughout evolutionary history we have not seen creatures with better models reliably winning over creatures with worse models. Cheap and stupid sometimes, in some contexts, wins over expensive and smart.
If you lie, you do not get accurate arguments in return, as people will argue with your lies rather than with your truth. So do you tell the truth to get a better model, or do you lie to “win”?
There is the concept of energy returned on energy invested (EROI). I think the same concept applies to model building: if the cost of building models does not pay off in value, then model building is not winning.
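Roughly, the analogy I have in mind is a pair of ratios (a back-of-the-envelope sketch; I am using “VROVI” below to mean something like value returned on value invested):

\[
\mathrm{EROI} \;=\; \frac{\text{energy delivered}}{\text{energy spent to obtain it}},
\qquad
\mathrm{VROVI} \;\approx\; \frac{\text{value gained from acting on the model}}{\text{cost of building and maintaining the model}}.
\]

Model building only counts as “winning” when that second ratio comes out above 1 for the context you are actually in.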
For us, the educated, wealthy, and literate, with a vast amount of useful information very easily accessible, the value we can get for the expense paid in doing research is high (at least up to a certain stage, after which the VROVI is low because the low-hanging fruit has gone), so it makes sense to embody some of the virtues. Unless we start to look too weird because we cannot hide it, and people stop trusting or relating to us. But for a lot of humanity not connected to the internet, the probability of creating valuable models is low (you get spirits, ancestor worship, etc.), so they can win by not doing too much modelling, doing what they know, and surviving. So are we talking about “human rationality/winning” or “privileged people rationality/winning”?
I’m sorry I’ve not had enough time to put into this reply, but I think there is value in keeping the conversation flowing.
Well. We should probably distinguish between what rationality is about and what LW/rationalist communities are about. Rationality-the-mental-art is, I think, about “making optimal plays” at whatever you’re doing, which leads to winning (I prefer the former because it avoids the problem where you might only win probabilistically, which may mean you never actually win). But the community is definitely not based around “we’re each trying to win on our own and maximize our own utility functions” or anything like that. The community is interested in truth seeking and exploring rationality and how to apply it and all of those things.
Evolution doesn’t really apply. If some species could choose the way they want to evolve rationally over millions of years I expect they would clobber the competition at any goal they seek to achieve. Evolution is a big probabilistic lottery with no individuals playing it.
If you’re trying to win at “achieve X”, and lying is the best way to do that, then you can lie. If you’re trying to win at “achieve X while remaining morally upright, including not lying”, or whatever, then you don’t lie. Choosing to lie or not parameterizes the game. In either game, there’s a “best way to play”, and rationality is the technique for finding it.
Of course it’s true that model-building may not be the highest-return activity towards a particular goal. If you’re trying to make as much money as possible, you’ll probably benefit much more from starting a business and re-investing the profits ASAP while ignoring rationality-the-subject entirely. But doing so with rational approaches will still beat doing so without rational approaches. If you don’t have any particular goal, or you’re just generally trying to learn how to win more at things, or be generally more efficacious, then learning rationality abstractly is a good way to proceed.
“Spending time to learn rationality” certainly isn’t the best play towards most goals, but it appears to be a good one if you get high returns from it or if you have many long-term goals you don’t know how to work on. (That’s my feeling at least. I could be wrong, and someone who’s better at finding good strategies will end up doing better than me.)
In summary, “rationality is about winning” means that if you’re put in situations where you have goals, rational approaches tend to win. Statistically. Like, it might take a long time before rational approaches win, and there might not be enough time for it to happen. It’s the asymptotic behavior.
An example: if everyone cared a lot about chess, and your goal was to be the best at chess, you could get a lot of the way by playing a lot of chess. But if someone comes along who has also played a lot of games, they might start beating you. So you work to beat them, and they work to beat you, and you start training. Who eventually wins? Of course there are mental faculties, memory capabilities, maybe patience, emotional things at work. But the idea is, you’ll become the best player you can become, given enough time (theoretically), via being maximally rational. If there are other techniques that are better than rationality, well, rationality will eventually find them: the whole point is that finding the best techniques is precisely rational. It doesn’t mean you will win; there are cosmic forces against you. It means you’re optimizing your ability to win.
It’s analogous to how, if a religion managed to find actually convincing proof of a divine force in the universe, that force would immediately be the domain of science. There are no observable phenomena that aren’t the domain of science, so the only things that can remain religious are things you can’t possibly prove occur. Equivalently, it’s always rational to use the best strategy. So if you found a new strategy, that would become the rationalists’ choice too. So the rationalist will do at least as well as you, and if you’re not jumping to better strategies when they come along, the rationalist will win. (On average, over all time.)
“Well. We should probably distinguish between what rationality is about and what LW/rationalist communities are about.”
Rationalists aren’t about rationality? Back in 2007 I don’t think there was a split. Maybe we need to rename rationalists if “rationality is winning” is entrenched.
LWperson: I’m a rationalist; I really care about AI risk.
PersonWhohasReadSomeRationalityStuff: So you will lie to get whatever you want. Why should I think AI risk is as important as you say, and give you money?
LWPerson: Sigh...
“Rationality-the-mental-art is, I think, about “making optimal plays” at whatever you’re doing, which leads to winning (I prefer the former because it avoids the problem where you might only win probabilistically, which may mean you never actually win).”
I consider every mental or computational action a “play”, because it uses energy and can have a material impact on someone’s goals. So being more precise in your thinking or modelling is also a “play”, even before you make a play in the actual game.
“Evolution doesn’t really apply. If some species could choose the way they want to evolve rationally over millions of years I expect they would clobber the competition at any goal they seek to achieve. Evolution is a big probabilistic lottery with no individuals playing it.”
I think you missed my point about evolution.
Your version of rationality sounds a lot like fitness in evolution. We don’t know what it is, but it is whatever it is that survives (wins). So if we look at evolution, where the goal is survival, lots of creatures manage to survive while not having great modelling capability. This is because modelling is hard and expensive.
Fitness is also not a shared art. Ants telling birds how to be “fit” would not be a productive conversation.
I’ve run out of time again. I shall try and respond to the rest of your post later.
“I really want a group of people that I can trust to be truth-seeking and also truth-saying. LW had an emphasis on that, and rationalists seem to be slipping away from it with “rationality is about winning”.”
And I’m saying that LW is about rationality, and rationality is how you optimally do things, and truth-seeking is a side effect. The truth-seeking in the rationality community that you like is there because “a community about rationality” is naturally compelled to participate in truth-seeking: it’s useful and interesting to rationalists. But truth-seeking isn’t inherently what rationality is.
Rationality is conceptually related to fitness. That is, “making optimal plays” should be equivalent to maximizing fitness within one’s physical parameters. More rational creatures are going to be more fit than less rational ones, assuming no other tradeoffs.
It’s irrelevant that creatures survive without being rational. Evolution is a statistical phenomenon and has nothing to do with it. If they were more rational, they’d survive better. Hence rationality is related to fitness with all physical variables kept the same. If it cost them resources to be more rational, maybe they wouldn’t survive better, but that wouldn’t be keeping the physical variables the same so it’s not interesting to point that out.
If you took any organism on earth and replaced its brain with a perfectly rational circuit that used exactly the same resources, it would, I imagine, clobber other organisms of its type in ‘fitness’ by so incredibly much that it would dominate its carbon-brained equivalent to the point of extinction in two generations or less.
I didn’t know what “shared art” meant in the initial post, and I still don’t.
“I didn’t know what “shared art” meant in the initial post, and I still don’t.”
So the art of rationality is the set of techniques that we share to help each other “win” in our contexts. The thrust of my argument has been that I think rationality is a two-place word: you need a defined context to be able to talk about what “wins”. Why? Results like the no-free-lunch theorems. If you point me at AIXI as optimal, I’ll point out that it only says there is no better algorithm over all problems, and that this is consistent with there being lots of other equally bad algorithms.
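For reference, the no-free-lunch result I am gesturing at is, roughly, the form Wolpert and Macready give for search: averaged uniformly over all objective functions on a finite domain, any two (non-revisiting) search algorithms have identical expected performance,

\[
\sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right) \;=\; \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right),
\]

where \(a_1\) and \(a_2\) are the algorithms, \(m\) is the number of function evaluations, and \(d_m^{y}\) is the sequence of objective values observed. So “no better algorithm over all problems” is compatible with everything being equally good (or bad) once you average away the context, which is why I keep insisting on a defined context.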
“If you took any organism on earth and replaced its brain with a perfectly rational circuit that used exactly the same resources, it would, I imagine, clobber other organisms of its type in ‘fitness’ by so incredibly much that it would dominate its carbon-brained equivalent to the point of extinction in two generations or less.”
This would only be true by definition, which I don’t think is necessarily a mathematically sensible definition (all the problems in the world might have sufficient shared context).