I mean, their chances, whatever they may be, would surely get worse if they stopped running credit/blame-assignment algorithms on the systems under their control in order to incrementally improve their efficiency and competitiveness, and instead sat around like rocks, assigning it all to the Primal Mover and waiting to die, wouldn't they?
Well, I don't deny that the approach I chose could be seen as some kind of “giving up” mentality, but that also requires you to read it into it. But isn't it also quite a leap to claim that systematically assigning too much responsibility to the people and systems around you will lead to an increase in effectiveness and competitiveness? If anything, whatever system you are working under would then function less precisely and correctly.

Yes, it takes a leap in abstraction and mentality, and precise thinking is dangerous for society at large, but that is a different discussion.

As I wrote in my approach to the question, it isn't people that enable killing, it is the Universe. And humans, as a species, wouldn't have had the option to kill, even if they wanted to, if the rules were different. At least, that is one possible way to frame it. There are others, but I chose this one.
You say that I am prematurely skipping, but that also implies that we will somehow reach the end, does it not?
Mm, that seems like a separate topic. See here for advice on “how to cope with living in a probably-doomed world”.
Personally, it doesn't seem completely hopeless to me, and it would sure be sad if we could've made it, but lost because many of us decided it looked too hopeless and so didn't even try.
Hm, I just wonder how you view Efilism or Negative Utilitarianism, as mentioned in a comment below. It isn't only a question of whether we can reach the end, but also whether we should try.
By saying ‘lost’, you imply that there is only one way to win. What about the argument that we should not gamble away the future of mankind on the probability that things will work out? That premise opens up the possibility of a different kind of win: for example, to acknowledge and agree that the risk is too high and this place too volatile, and therefore to willingly disable that possibility altogether. Is that option something you view as a complete loss?
But isn't it also quite a leap to claim that systematically assigning too much responsibility to the people and systems around you will lead to an increase in effectiveness and competitiveness?
Oh, that depends on the mechanism by which you “assign responsibility”. I mean it in the fully abstract sense of tracking the causality from outcomes back to the parts of the system that contributed to them, and adjusting those subsystems to improve the outcomes, which should by definition improve the system's performance.
I don't mean some specific implementation of this mechanism, like “shame people for underperformance and moral failings”, if that's what you're reading into it. I mean the kinds of credit assignment that would actually work in practice to improve performance.
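To make that abstract sense concrete, here is a minimal sketch (purely illustrative; every name in it is invented for the example, not taken from any real system):

```python
# Purely illustrative sketch of credit assignment in the abstract:
# trace each subsystem's contribution to an outcome, convert the
# outcome into credit/blame, and adjust the subsystems accordingly.

def assign_credit(contributions, outcome):
    """Distribute an outcome's value across subsystems in
    proportion to how much each contributed to it."""
    total = sum(contributions.values())
    return {name: outcome * share / total
            for name, share in contributions.items()}

def adjust(weights, credit, learning_rate=0.1):
    """Strengthen subsystems credited with good outcomes and
    dampen those blamed for bad ones."""
    return {name: weight + learning_rate * credit.get(name, 0.0)
            for name, weight in weights.items()}

weights = {"planning": 1.0, "execution": 1.0, "review": 1.0}
contributions = {"planning": 0.5, "execution": 0.3, "review": 0.2}

credit = assign_credit(contributions, outcome=-1.0)  # a bad outcome
weights = adjust(weights, credit)
print(weights)  # planning, the biggest contributor, is dampened the most
```

The shape of the loop is the whole point: trace contributions back from the outcome, convert the outcome into credit or blame, and adjust; “shaming people” is just one (bad) implementation of that loop.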
It isn't only a question of whether we can reach the end, but also whether we should try.
I think we're either going to get an unqualified utopia or go extinct, with very little probability on “an eternal hellscape”. So giving up right away can only decrease expected utility, never increase it: we wouldn't be turning away from a gamble of “X% chance of 3^^^3 utility, (100−X)% chance of −3^^^3 utility”; we'd be going from “Y% chance of 3^^^3 utility, (100−Y)% chance of 0 utility” straight to “100% chance of 0 utility”.
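For concreteness, a toy version of that comparison (3^^^3 is far too large to compute with, so a stand-in constant is used; all numbers are illustrative):

```python
# Toy comparison of the two gambles described above.
U = 10**100  # hypothetical stand-in for 3^^^3, chosen only for scale

def expected_utility(p_win, u_win, u_lose):
    return p_win * u_win + (1 - p_win) * u_lose

give_up = expected_utility(0.0, U, 0)  # "100% chance of 0 utility"
for Y in (0.01, 0.10, 0.50):
    # Any nonzero chance of utopia beats giving up, as long as the
    # alternative outcome is extinction (0) rather than a hellscape.
    print(Y, expected_utility(Y, U, 0) > give_up)  # True for all Y > 0
```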
Hello again,

The only system I am aware of in which that is possible, as of now, would be my own body… Still, what counts as an “improvement to the system performance” is also a matter of conviction, values and goals, or am I not understanding it correctly?
Since you believe that we are either going to get an unqualified utopia or go extinct, with very little probability on “an eternal hellscape”, how do you view people who disagree with you? When I look at both sides of that equation, one part of it can be different probabilities assigned to ‘terribleness’, but another might be a different threshold for where you draw the line on what chances are worth taking.
Because if you get a Pandora's box, and by choosing to open it you have a 99% chance of a fantastic life for you and everything else, a 0.9% chance of simply perishing, and a 0.0999…% (roughly 0.1%) chance of terrible torture for you and everything else, then statistically speaking it might seem quite safe to open it.
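As a hedged illustration of that “might seem quite safe”, and of how the worst case can flip it, a toy expected-value check with made-up utilities:

```python
# Toy expected-value check of the box described above. The utilities are
# invented; the point is that whether opening "seems safe" depends
# entirely on how negative the worst case is taken to be.
p = {"fantastic": 0.99, "perish": 0.009, "torture": 0.001}

def expected_value(u_torture):
    u = {"fantastic": 1.0, "perish": 0.0, "torture": u_torture}
    return sum(p[outcome] * u[outcome] for outcome in p)

print(expected_value(-100.0))     #  0.89: bounded worst case, looks safe
print(expected_value(-10_000.0))  # -9.01: a hellscape worst case dominates
```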
BUT, why would you open a box when there is even the slightest possibility of that outcome? Why not simply let it be? Why not wait until we reach a point where there is zero chance of an eternal hellscape? And if you aren't sure that humanity will ever reach that point, then is it so weird that people turn misanthropic? Not necessarily because they like hating others, but maybe because they completely and totally abhor the constant risk we would be running of things turning into an eternal hellscape, without seeing a way for that to change.
Yes, you can argue that we can create an A.I. that can ‘fix’ that problem, but that is circular reasoning: it assumes that an extremely volatile species, in a competitive, hostile environment, is going to ‘solve the problems once and for all’, and history doesn't really suggest we do a great job of that. If we can't fix our problems before we create an A.I., it simply shouldn't be made.
If you believe that human nature is the problem, then you harp on until others take your concerns seriously and they are adequately addressed. That, of course, goes both ways. In that sense, giving up or resigning isn't right either. There are many ways to improve or fix a problem, not just one.
Caerulea-Lawrence