Is there something fundamentally wrong with the Universe?
Yes. Obviously. Put on an economist’s glasses and look at it through the framework of mechanism design. The incentives it sets up for the agents embedded in it are abhorrent. Just off the top of my head: limited resources; physical offense often triumphing over physical defense; everything always sliding toward disorder as the system evolves (which also causes that offense > defense inequality); lack of built-in machinery for inviolable contracts, which makes non-defection hard to enforce; plus the local natural agent-generating processes (“evolution”) approaching intelligence from the bottom up and naturally selecting for the stupidest/most malfunctioning agents possible...
And this is what we get as the natural end product: dog eat dog, a war of all against all, a myriad of agents with disparate values ruthlessly competing with each other over finite scraps, unable to work together because they simultaneously lack any convenient coordination/enforcement mechanisms and are too stupid to coordinate voluntarily, and unable to opt out of the war and build their own safe garden.
Terrible design. Even we could do better. Even we do do better — when designing institutions and states, or virtual worlds.
If we were programming the universe from scratch, we’d do a better job: pick laws of physics that naturally incentivize niceness and civilization instead of cancer. Out of the box, with no need for the embedded agents to cobble together homemade solutions.
It makes more sense to me to assign fault at the level where the problem lies
Eh, not sure it makes sense to think in terms of blame here, though. Blame is ultimately about credit assignment, which is about identifying which parts of the system are performing well or poorly and how to fix them. And, well— I mean, sure, I’m down to take the fight to God and/or our Matrix Lords eventually. But in the meantime, it still makes sense to assign blame to the things we can affect at the current moment, instead of prematurely skipping to the end.
lack of built-in machinery for inviolable contracts, which makes non-defection hard to enforce
Off topic: if you change nothing else about the universe, an easy-to-use “magical” mechanism for inviolable contracts would be a dreadful thing. As soon as you have power of life or death over someone, you can pretty much force them into irrevocable slavery. I suppose we could imagine a “good” working society using that mechanism, but more probably almost all humans would be slaves, serving maybe a single small group of aristocrats.
You might want to add a “free of influence” condition to the contract system, but in a society that normalizes absolute power (such as many ancient monarchies), that becomes difficult to define.
If you suddenly introduce it in the middle of our universe’s execution, sure. The scenario I was considering is one where it exists from the beginning, with life evolving to take advantage of it from the get-go. In which case… well, it really depends on the specific evolutionary setup, but plausibly organisms would evolve to accept death rather than a bad deal in such situations (the way humans evolved, e.g., death-before-dishonor), and so most deals made would be net-positive.
I didn’t spend much time considering specific mechanisms, though; by all means, I can imagine it going perversely wrong too.
But in the meantime, it still makes sense to assign blame to the things we can affect at the current moment, instead of prematurely skipping to the end.
Thank you for your comment. Reading it, I want to ask something. Since you are aware that the design is terrible, that there are no safeguards, and that we have to use homemade solutions in a hostile environment: who are the humans who will make things better? And how do you argue that their chances of “winning” are somehow higher than those of the people who follow the natural incentives of the Universe?
You say that I am prematurely skipping, but that also implies that we will somehow reach the end, does it not? And so you say I should stay and be patient; but if we are just going to spiral into madness or chaos eventually, making a conscious choice early, instead of wasting time and resources staying on a sinking ship, seems just as valid a choice, does it not?
And how do you argue that their chances of “winning” are somehow higher than those of the people who follow the natural incentives of the Universe?
I mean, their chances, whatever they may be, would sure get worse if they stopped running credit/blame-assignment algorithms on the systems under their control in order to incrementally improve their efficiency and competitiveness, and instead sat around like rocks assigning it all to the Primal Mover, waiting to die?
You say that I am prematurely skipping, but that also implies that we will somehow reach the end, does it not?
Mm, that seems like a separate topic. See here for advice on “how to cope with living in a probably-doomed world”.
Personally, it doesn’t seem completely hopeless, and it would sure be sad if we could’ve made it, but lost because many of us decided it looks too hopeless, and so didn’t even try.
I mean, their chances, whatever they may be, would sure get worse if they stopped running credit/blame-assignment algorithms on the systems under their control in order to incrementally improve their efficiency and competitiveness, and instead sat around like rocks assigning it all to the Primal Mover, waiting to die?
Well, I do not dispute that the approach I chose could be seen as some kind of “giving up” mentality, but that also requires you to read that into it. But isn’t it also quite the leap to claim that assigning systematically too much responsibility to the people and systems around you will lead to an increase in effectiveness and competitiveness? On the contrary, whatever system you are working under would then function less precisely and correctly.

Yes, it takes a leap in abstraction and mentality, and precise thinking is dangerous for society at large, but that is a different discussion.

As I wrote in my approach to the question, it isn’t people that enable killing, it is the Universe. And humans, as a species, wouldn’t have had the option to kill, even if they wanted to, if the rules were different. At least, that is one possible way to frame it. There are others, but I chose this one.
You say that I am prematurely skipping, but that also implies that we will somehow reach the end, does it not?
Mm, that seems like a separate topic. See here for advice on “how to cope with living in a probably-doomed world”.
Personally, it doesn’t seem completely hopeless, and it would sure be sad if we could’ve made it, but lost because many of us decided it looks too hopeless, and so didn’t even try.
Hm, I just wonder how you view Efilism or Negative Utilitarianism, as mentioned in a comment below. It isn’t only a question of whether we can reach the end, but also whether we should try.
By saying ‘lost’, you imply that there is only one way to win. What about the argument that we should not gamble away the future of mankind on the probability that things will work out? That premise opens up the possibility of a different kind of win: for example, to acknowledge and agree that the risk is too high and this place too volatile, and therefore to willingly disable that possibility altogether. Is that option something you view as a complete loss?
But isn’t it also quite the leap to claim that assigning systematically too much responsibility to the people and systems around you will lead to an increase in effectiveness and competitiveness?
Oh, that depends on the mechanism by which you “assign responsibility”. I mean that in the fully abstract sense of tracking the causality from outcomes to the parts of the system that contributed to them, and adjusting the subsystems to improve the outcomes, which should by definition improve the system’s performance.
I don’t mean some specific implementation of this mechanism like “shame people for underperforming and moral failings”, if that’s what you’re reading into it — I mean the kinds of credit-assignment that would actually work in practice to improve performance.
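To make that abstract sense concrete, here’s a deliberately toy sketch (my own throwaway illustration, not a claim about any particular real system): a “system” with a few adjustable subsystems, a performance measure, and a credit-assignment loop that probes each subsystem’s contribution to the outcome and adjusts it accordingly.

```python
def outcome(params):
    """Toy stand-in for overall system performance (higher is better).
    Performance peaks when every subsystem sits at its target setting."""
    targets = [2.0, -1.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, targets))

def credit_and_adjust(params, step=0.1, eps=1e-3):
    """One round of credit assignment: estimate each subsystem's marginal
    contribution to the outcome, then nudge it in the improving direction."""
    base = outcome(params)
    adjusted = list(params)
    for i in range(len(params)):
        probe = list(params)
        probe[i] += eps
        contribution = (outcome(probe) - base) / eps  # credit/blame for subsystem i
        adjusted[i] += step * contribution
    return adjusted

params = [0.0, 0.0, 0.0]
for _ in range(200):
    params = credit_and_adjust(params)

# The adjusted subsystems converge toward their targets, i.e. overall
# performance improves, which never happens if no credit is assigned at all.
print([round(p, 3) for p in params], round(outcome(params), 6))
```

The specific update rule doesn’t matter; the point is just that routing outcome-information back to the adjustable parts is what “improving performance” consists of.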
It isn’t only a question of whether we can reach the end, but also whether we should try.
I think we’re either going to get an unqualified utopia, or go extinct, with very little probability on “an eternal hellscape”. So giving up right away can only decrease expected utility, never increase it: we won’t be turning away from a gamble of “X% chance of 3^^^3 utility, (100-X)% chance of −3^^^3 utility”, we’d be going from “Y% chance of 3^^^3 utility, (100-Y)% chance of 0 utility” straight to “100% chance of 0 utility”.
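Spelled out as expected values (this is just the comparison above restated; $U$ is shorthand for the 3^^^3-sized payoff):

$$\mathbb{E}[\text{keep trying}] = \frac{Y}{100}\,U + \frac{100-Y}{100}\cdot 0 = \frac{Y}{100}\,U \;>\; 0 = \mathbb{E}[\text{give up now}] \quad \text{for any } Y > 0.$$

Giving up could only come out ahead under the first gamble, where $\mathbb{E} = \frac{X}{100}\,U - \frac{100-X}{100}\,U$ turns negative once $X < 50$; but that is exactly the gamble I don’t think we’re facing.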
The only system I am aware of in which that is possible, as of now, would be my own body… Still, what counts as an “improvement to the system performance” is also a matter of conviction, values, and goals, or am I not understanding it correctly?
Since you believe that we are going to get an unqualified utopia, or go extinct, with very little probability on “an eternal hellscape”, how do you view people who disagree with you? When I look at both sides of that equation, one part of it can be different probabilities assigned to ‘terribleness’, but another might be a different threshold for where you draw the line on what chances are worth taking.
Because if you get a Pandora’s box, and by choosing to open it you have a 99% chance of a fantastic life for you and everything else, a 0.9% chance of simply perishing, and a 0.099…% chance of terrible torture for you and everything else—statistically speaking, it might seem quite safe to open it.
BUT, why would you open a box when there is even the slightest possibility of that outcome? Why not simply let it be? Why not wait until you reach a point where there is zero chance of an eternal hellscape? And if you aren’t sure that humanity will ever reach that point, is it so weird that people turn misanthropic? Not necessarily because they like hating others, but maybe because they completely and totally abhor the constant risk we would be running of things turning into an eternal hellscape, without seeing a way for that to change.
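To put rough numbers on that ($u$ and $h$ are placeholders I am making up for how good the fantastic outcome is and how bad the eternal torture is, with ‘simply perishing’ counted as zero, as in your utopia-or-extinction framing): opening the box is positive in expectation only while

$$\mathbb{E}[\text{open}] \;\approx\; 0.99\,u + 0.009\cdot 0 - 0.001\,h \;>\; 0, \qquad \text{i.e. } h \lesssim 990\,u.$$

So anyone who weighs the torture outcome roughly a thousand times more heavily than the utopia already refuses the box even on expected-value grounds; that is one way of making the ‘different threshold’ point above precise.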
Yes, you can argue that we can create an A.I. that can ‘fix’ that problem—but that is circular reasoning. As for the idea that an extremely volatile species, in a competitive, hostile environment, is going to ‘solve the problems once and for all’: history doesn’t really suggest we do a great job of that. If we can’t fix our problems before we create an A.I., it simply shouldn’t be made.
If you believe that human nature is the problem, then you keep harping on it until others take your concerns seriously and they are adequately addressed. That, of course, goes both ways. In that sense, to give up or resign isn’t right either. There are many ways to improve or fix a problem, not just one.
How would life even evolve in the first place with such a contract mechanism in place?
Have you thought through all the common thought experiments and methods described on LW before posting this?
From one perspective, nature does kind of incentivize cooperation in the long term. See The Goddess of Everything Else.