Then which one of us is right just comes down to an appeal to force, doesn’t it? (and all those various kinds of meta-force).
I.e., if I can incarnate my Diabolus-1 Antifriendly AI and grow it up to conquer the universe before you can get your Friendly AI up to speed, I win.
This one is actually really subtle, and I forget the solution, and it’s in the metaethics sequence somewhere (look for pebblesorters), but the punchline is that the outcome sucks.
So yes, you and your Diabolus-1 “win”, but the outcome still sucks.
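(To make that punchline concrete, here is a minimal sketch; the outcomes, utility functions, and numbers are all invented for illustration, not anything from the sequence. Every optimizer "wins" relative to its own criterion, and that win says nothing about how the outcome scores under any other criterion.)

```python
# A minimal sketch, with made-up toy values: two optimizers, each of
# which "wins" by its own criterion, steering toward opposite outcomes.

outcomes = ["flourishing", "suffering", "paperclips"]

# Toy stand-in for human values (assumed numbers):
u_friendly = {"flourishing": 10, "suffering": -10, "paperclips": -5}
# Toy stand-in for "maximize suck": the mirror image (assumed numbers):
u_diabolus = {"flourishing": -10, "suffering": 10, "paperclips": -5}

# Each agent picks the argmax of its own utility function...
diabolus_pick = max(outcomes, key=lambda o: u_diabolus[o])

# ...so Diabolus "wins" (+10 by its own lights), but the very same
# outcome scores -10 under the other criterion: the win is real,
# and the outcome still sucks.
print(diabolus_pick, u_diabolus[diabolus_pick], u_friendly[diabolus_pick])
# suffering 10 -10
```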
Sure, but… okay, I’m going to go concrete here.
I suffered a lot of abuse as a child; as a result, sometimes my mind enters a state where its adopted optimization process is “maximize suck”. In this state, I tend to be MORE rational about my goals than I am when I’m in a more ‘positive’ state.
So I don’t have to stretch very far to imagine a situation where the outcome sucking—MAXIMALLY sucking—is the damned POINT. Because fuck you (and fuck me too).
So the outcome still sucks, if you are not maximizing actual awesomeness.
Not necessarily. Plenty of people think the Saw films are awesome. Plenty of people on 4chan think that posting flashing images to epileptic support boards is awesome, and pushing developmentally disabled children to commit suicide and then harassing their parents forever with pictures of the suicide is awesome.
They will, in fact, use “awesome” explicitly to describe what they’re doing.
I thought the first Saw film was awesome. It was a cool gory story about making the most of life. It’s fiction, so nobody actually got hurt and there is no secondary consideration of awesomeness there.
Some people think that the prospect of making disabled kids commit suicide is awesome; fewer people think that actually doing so is awesome. I don’t think that people who actually do so are awesome.
I think that’s a relatively standard use of “awesome”.
For much the same reasons that people can be mistaken about their own desires, people can be mistaken about what they would actually consider awesome if they were to engage in an accurate modeling of all the facts. E.g., people who post flashing images to epileptic boards or suicide pictures to bereaved parents are either 1) failing to truly envision the potential results of their actions, and consequently overvaluing the immediate minor awesomeness of the irony of the post or whatever vs. the distant, unseen, major anti-awesomeness of seizures/suicides, or 2) actually socio- or psychopaths. Given the infrequency of real sociopathy, it’s safe to assume a lot of the former happens, especially over the impersonal, empathy-sapping environment of the Internet.
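(As a toy illustration of that mis-modeling, with numbers that are pure assumptions on my part: the in-the-moment valuation simply drops the distant term.)

```python
# A toy model of "failing to truly envision the results", with invented
# numbers: the immediate payoff is small and vivid; the harm is large,
# distant, and (in the moment) not modeled at all.

immediate_irony = 2      # the "awesomeness" of the post itself (assumed)
distant_harm    = -500   # seizures/suicides (assumed)
p_harm          = 0.3    # chance the harm actually lands (assumed)

perceived = immediate_irony                          # distant term dropped
informed  = immediate_irony + p_harm * distant_harm  # all the facts modeled

print(perceived, informed)   # 2 vs -148.0
```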
The answer you are referring to is probably the utilitarian one: that you morally-should maximise everyone’s preferences, not just your own. But that’s already going well beyond the naive “awesomeness” theory presented above.
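(Structurally, the difference is just whose preferences enter the objective. A sketch, with entirely hypothetical preferences:)

```python
# The naive "awesomeness" rule vs. the utilitarian one, as two scoring
# functions over the same action. All preferences here are hypothetical.

def egoist_score(outcome, my_u):
    return my_u(outcome)                      # only my own "awesome" counts

def utilitarian_score(outcome, all_us):
    return sum(u(outcome) for u in all_us)    # everyone's preferences count

troll   = lambda o: 2 if o == "prank" else 0
victims = [lambda o: -50 if o == "prank" else 0 for _ in range(3)]

print(egoist_score("prank", troll))                   #    2: "awesome!"
print(utilitarian_score("prank", [troll] + victims))  # -148: not awesome
```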
This is assuming, of course, that “awesome” is subjective. Maybe if we had a universal scale of awesomeness...
You guys are thinking too hard about this.
Either don’t think about it, and maximize awesome (of which there is only one).
Or read the metaethics sequence, where you will realize that you need to maximize awesome (of which there is only one).
Look, here’s the problem with that whole line of reasoning, right in the joy of the merely good:
First, our esteemed host says:
Every time you say should, it includes an implicit criterion of choice; there is no should-ness that can be abstracted away from any criterion.
And then, he says:
But there are possible minds that implement any utility function, so you don’t get any advice there about what you should do.
But then suddenly, at the end, he flips around and says:
Look to the living child, successfully dragged off the train tracks. There you will find your justification. What ever should be more important than that?
Speaking as a living child who has been dragged ONTO train tracks, I don’t buy it. Domination and infliction of misery are just as “awesome” as altruism and sharing joy.
I guess the crux of my point is, if you don’t think that the weak are contemptible and deserve to suffer, what are you gonna do about it? Because just trying to convince people that it’s less “awesome” than giving them all free spaceships is going to get you ignored or shot, depending on how much the big boys think you’ll interfere with them inflicting suffering for the lulz.
This doesn’t seem to be true over the long haul—somehow, the average behavior of the big boys has become less cruel. Part of this is punishment, but even getting punishment into place takes convincing people, some of them high status, that letting the people in charge do what they please isn’t the most awesome alternative.
Alternatively, maybe the cruelest never get convinced; it’s just that people have been gradually solving the coordination problem for those who don’t want cruelty.
At least, at the levels most people operate at. Things tend to get better from the top down; for the bottom-dwellers, things are still pretty desperate and terrifying.
I agree, but I think your initial general claim of no hope for improvement was too strong.
Let me put it this way: I would be willing to bet my entire net worth, if it weren’t negative, that if some kind of uplifting Singularity happens, my broke ass gets left behind because I won’t have anything to invest into it and I won’t have any way to signal that I’m worth more than my raw materials.
You’re wrong.
By the point of the singularity no human has any instrumental value. Everything any human can do, a nanotech robot AI can do better. No one will be able to signal usefulness or have anything to invest; we will all be instrumentally worthless.
If the singularity goes well at all, though, humanity will get its shit together and save everyone anyways, because people are intrinsically valuable. There will be no concern for the cost of maintaining or uplifting people, because it will be trivially small next to the sheer power we would have, and the value of saving a friend.
Don’t assume that everyone else will stay uncaring, once they have the capacity to care. We would save you, along with everyone else.
Downvotes for being unreasonably dramatic.
Rather than downvoting, how about trying to explain why “caring” is a universal value to someone who’s never experienced “caring”? How about trying to explain why, in all the design-space of posthuman optimization processes, I should bet that the one that gets picked is the one where “caring” applies to my sorry ass?
We have enough resources to feed and shelter the world right now, and we don’t. So saying that “once we have the resources to care, we will” seems like the sort of BS that our esteemed host warns us about—the assumption that just because something is all-powerful and all-wise, it will be all-good.
I grew up worshipping a Calvinist dick of a deity, so pull the other one.
And another thing:
people are intrinsically valuable
It’s all well and good for YOU to claim that people are intrinsically valuable; you probably have enough resources to avoid getting spit on and lectured about “bootstraps” when you say it. Some of us aren’t so lucky.
If whatever it is that gets to do the deciding is evaluating people based on their use as raw materials, things have gone horribly wrong. In fact, that’s basically the exact definition of “horribly wrong” that seems to be in common use around here. As a corollary, there’s a lot that’s wrong with the current state of affairs.
By the point of the singularity no human has any instrumental value. ... humanity will get its shit together and save everyone anyways, because people are intrinsically valuable.
It’s a tricky point; I expect humans (in their present form) will also have insignificant terminal value, compared to other things that could be created. The question of whether the contemporary humans will remain in some form depends on how bad it is to discard the old humans, compared to how much value gets lost to inefficiency by keeping (and improving) the old humans. Given how much value could be created de novo, even a slight inefficiency might be more significant than any terminal value the present humanity contributes. (Discarding humans won’t be a wrong outcome if it happens, because it will only happen if it turns out to be a better outcome, assuming a FAI.)
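(The shape of that comparison, with every constant invented by me; the point is only that the terminal term can be swamped by a tiny inefficiency.)

```python
# A bare sketch of the keep-vs-discard comparison above. V_TOTAL,
# INEFFICIENCY, and TERMINAL_HUMANS are all assumed numbers.

V_TOTAL         = 1_000_000  # value creatable de novo (assumed)
INEFFICIENCY    = 0.001      # fraction lost by keeping/improving old humans
TERMINAL_HUMANS = 500        # terminal value the old humans contribute

keep    = V_TOTAL * (1 - INEFFICIENCY) + TERMINAL_HUMANS  # 999,500
discard = V_TOTAL                                         # 1,000,000

# Even a 0.1% inefficiency (1,000 units) swamps the 500-unit terminal
# term on these numbers, which is exactly the worry in the comment.
print("keep" if keep > discard else "discard")   # discard
```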
But there is no evidence that the real world works on the Awesomeness theory.
Even if we did [have a universal scale of awesomeness], something non-awesome would STILL be able to “win” if it had enough resources and was well-optimized. At which point, why isn’t its “non-awesome” idea more important than your idea of “awesome”?
(Yes, this is the old Is-Ought thing; I’m still not convinced that it’s a fallacy. I think I might be a nihilist at heart.)
If you taboo “important” you might discover you don’t know what you’re talking about.