But isn’t it also quite the leap to claim that systematically assigning too much responsibility to the people and systems around you will lead to an increase in effectiveness and competitiveness?
Oh, that depends on the mechanism by which you “assign responsibility”. I mean it in the fully abstract sense: tracking the causality from outcomes back to the parts of the system that contributed to them, and adjusting those subsystems to improve the outcomes, which should by definition improve the system’s performance.
I don’t mean some specific implementation of this mechanism like “shaming people for underperformance and moral failings”, if that’s what you’re reading into it. I mean the kinds of credit assignment that would actually work in practice to improve performance.
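To make the abstract version concrete, here is a minimal toy sketch of that loop, with everything in it (the subsystems, the outcome function, the update rule) hypothetical: estimate each part’s causal contribution by perturbing it in isolation, then adjust it in the direction that improved the outcome.

```python
import random

# Toy model: an outcome is produced jointly by several subsystems, each
# with one adjustable parameter. "Credit assignment" here means estimating
# each subsystem's causal contribution to the outcome, then nudging that
# subsystem in the direction that improves the outcome.

def outcome(params):
    # Hypothetical system: performance peaks when each parameter sits at
    # an ideal value the adjuster never gets to see directly.
    ideals = [0.3, 0.7, 0.5]
    return -sum((p - i) ** 2 for p, i in zip(params, ideals))

def assign_credit_and_adjust(params, eps=0.001, lr=0.4):
    base = outcome(params)
    adjusted = []
    for k, p in enumerate(params):
        # Perturb one subsystem at a time to estimate its marginal
        # contribution to the outcome (finite-difference credit).
        probe = params[:k] + [p + eps] + params[k + 1:]
        credit = (outcome(probe) - base) / eps
        adjusted.append(p + lr * credit)
    return adjusted

params = [random.random() for _ in range(3)]
for _ in range(100):
    params = assign_credit_and_adjust(params)
print([round(p, 2) for p in params])  # settles near [0.3, 0.7, 0.5]
```

The point is only the shape of the mechanism: outcome, attribution, adjustment. Nothing in it requires shame or blame.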
It isn’t only a question of whether we can reach the end, but also of whether we should try.
I think we’re either going to get an unqualified utopia or go extinct, with very little probability on “an eternal hellscape”. So giving up right away can only decrease expected utility, never increase it: we wouldn’t be turning away from a gamble of “X% chance of 3^^^3 utility, (100−X)% chance of −3^^^3 utility”; we’d be going from “Y% chance of 3^^^3 utility, (100−Y)% chance of 0 utility” straight to “100% chance of 0 utility”.
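To spell that out as expected values (writing U for the 3^^^3 payoff, with Y expressed as a probability in [0, 1]):

$$\mathbb{E}[\text{keep trying}] = Y \cdot U + (1 - Y) \cdot 0 = Y \cdot U \;\geq\; 0 = \mathbb{E}[\text{give up}],$$

with strict inequality whenever Y > 0. Giving up could only win if real probability mass sat on the −U outcome, which is exactly the “eternal hellscape” term I’m treating as negligible.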
Hello again,
The only system I am aware of in which that is possible, as of now, would be my own body… Still, what counts as an “improvement to the system’s performance” is also a matter of conviction, values, and goals, or am I not understanding it correctly?
Since you believe that we are going to get an unqualified utopia or go extinct, with very little probability on “an eternal hellscape”, how do you view people who disagree with you? When I look at both sides of that equation, one part can be different probabilities assigned to ‘terribleness’, but another might be a different threshold for where you draw the line on what chances are worth taking.
Because if you get a Pandora’s box, and by choosing to open it you have a 99% chance of a fantastic life for you and everything else, a 0.9% chance of simply perishing, and a 0.099…% chance of terrible torture for you and everything else, then statistically speaking it might seem quite safe to open it.
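Putting hypothetical utilities on that intuition (say +100 for the fantastic life, 0 for perishing, −1000 for the torture, and reading the last probability as roughly 0.1%):

$$\mathbb{E}[\text{open}] \approx 0.99 \cdot 100 + 0.009 \cdot 0 + 0.001 \cdot (-1000) = 99 - 1 = 98 > 0,$$

so as long as the torture outcome is only finitely bad, the box looks like a clearly good bet.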
BUT why would you open a box when there is even the slightest possibility of that outcome? Why not simply let it be? Why not wait until you reach a point where there is zero chance of an eternal hellscape?
And if you aren’t sure that humanity will reach that point, then is it so weird that people turn misanthropic? Not because they like hating others, necessarily, but maybe because they completely and totally abhor the constant risk we would be running of things turning into an eternal hellscape, without seeing a way for it to change.
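And that is where the same arithmetic flips: if the bad outcome is an eternal hellscape, worth something like −3^^^3 (in up-arrow notation, $-3\uparrow\uparrow\uparrow 3$) rather than any ordinary finite loss, then

$$\mathbb{E}[\text{open}] \approx 0.99 \cdot 100 + 0.001 \cdot (-3\uparrow\uparrow\uparrow 3) \ll 0,$$

and no finite upside rescues the bet; the disagreement really is about where you draw that threshold.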
Yes, you can argue that we can create an A.I. that can ‘fix’ that problem, but that is circular reasoning: it assumes that an extremely volatile species in a competitive, hostile environment is going to ‘solve the problems once and for all’, and history doesn’t really say that we do a great job of such things.
If we can’t fix our problems before we create an A.I., it simply shouldn’t be made.
If you believe that human nature is the problem, then you harp on it until others take your concerns seriously and they are adequately addressed. That, of course, goes both ways. In that sense, to give up or resign isn’t right either. There are many ways to improve or fix a problem, not just one.
Caerulea-Lawrence