FWIW, I don’t find it hard to imagine a world where automated tools that require fewer resources to maintain than I do are at least as good as I am at doing any job I can do.
Ah, see, for me that sort of world has human-level machine intelligence, which makes it really hard to make predictions about.
Yes, agreed that automated tools with human-level intelligence are implicit in the scenario.
I’m not quite sure what “predictions” you have in mind, though.
That was poorly phrased, sorry. I meant it’s difficult to reason about in general. Like, I expect futures with human-level machine intelligences to be really unstable and either turn into FAI heaven or uFAI hell rapidly. I also expect them to not be particularly resource constrained, such that the marginal effects of one human wireheading would be pretty much nil. But I hold all beliefs about this sort of future with very low confidence.
Confidence isn’t really the issue, here.
If I want to know how important the argument from social good is to my judgments about wireheading, one approach to teasing that out is to consider a hypothetical world in which there is no net social good to my not wireheading, and see how I judge wireheading in that world. One way to visualize such a hypothetical world is to assume that automated tools capable of doing everything I can do already exist, which is to say tools at least as “smart” as I am for some rough-and-ready definition of “smart”.
Yes, for such a world to be at all stable, I have to assume that such tools aren’t full AGIs in the sense LW uses the term—in particular, that they can’t self-improve any better than I can. Maybe that’s really unlikely, but I don’t find that this limits my ability to visualize it for purposes of the thought experiment.
For my own part, as I said in an earlier comment, I find that the argument from social good is rather compelling to me… at least, if I posit a world in which nothing I might do improves the world in any way, I feel much more comfortable about the decision to wirehead.
Agreed. If you’ll reread my comment a few levels above, I mention that the resource argument is an exception, in that I could see situations in which it applied (I find my welfare-recipient scenario much more likely than yours, but either way, same argument).
It’s primarily the “your friends will be happy for you” bit that I couldn’t imagine, but trying to imagine it made me think of worlds where I was evil.
I mean, I basically have to think of scenarios where it’d really be best for everybody if I committed suicide. The only difference between wireheading and suicide, with regard to the rest of the universe, is that suicides consume even fewer resources. Currently I think suicide is a bad choice for everyone, with the few obvious exceptions.
Well, you know your friends better than I do, obviously.
That said, if a friend of mine moved somewhere where I could no longer communicate with them, but I was confident that they were happy there, my inclination would be to be happy for them. Obviously that can be overridden by other factors, but again it’s not difficult to imagine.
It’s interesting that the social aspect is where most of the concern seems to be.
I have to wonder what situation would result in wireheading being permanent (no exceptions), without some kind of contact with the outside world as an option. If the economic motivation behind technology doesn’t change dramatically by the time wireheading becomes possible, it’d need to have commercial appeal. Even if a simulation tricks someone who wants to get out into believing they’ve gotten out, a pre-existing social network that notices them never coming out could still produce backlash that hurts the providers.
I know for me personally, I have so few social ties at present that I don’t see any reason not to wirehead. I can think of one person whom I might be unpleasantly surprised to discover had wireheaded, but that person seems like he’d only do that if things got so incredibly bad that humanity looked something like doomed. (Where “doomed” is… pretty broadly defined, I guess.) If the option to wirehead were given to me tomorrow, though, I might ask to wait a few months just to see whether I could maintain sufficient motivation to attempt to do anything with the real world.
I think the interesting discussion to be had here is exploring why my brain thinks of a wireheaded person as effectively dead, but yours thinks they’ve just moved to Antarctica.
I think it’s the permanence that makes most of the difference for me. And the fact that I can’t visit them even in principle, and the fact that they won’t be making any new friends. The fact that their social network will have zero links seems, for some reason, highly relevant.