Also, premise 2 seems to imply there’s a reason to run historical simulations without the knowledge of the simulees, and thus to effectively have billions of sentient beings lead suboptimal, miserable lives without their consent.
If the project is government-funded, and the government is any conceivable direct descendant of our current forms of government, then this seems like the default result.
When was the last time you didn’t hear a religious conservative claim “But they’re not really alive, they’re just machines, they don’t have a soul!” or some similar argument (or the shorthand “Who cares? They’re just robots, not living things.”) whenever the subject of reverse machine ethics (i.e. how we treat computers, robots and AIs and whether they should be happy or something) came up?
I also don’t see the point of making historical simulations of that kind in the first place. It strikes me as unnecessarily complex and costly.
Finally, if we’ve reached a level of development at which virtual constructs mistake themselves for conscious, one would think humans would have developed their use as counselors, pets, company, and so on, and would have come to value them as “more than machines”, not because it’s true, but because they want it to be. We humans have a lot of trouble fulfilling each other’s emotional needs, and, if machines were able to do it better, we’d probably refuse to dismiss their love, affection, esteem, trust, and so on, as lies.
On the other hand, we might also want to believe that they aren’t “real people” so that we can misuse, abuse and destroy them whenever that’s convenient, like we used to do with slaves and prostitutes. It certainly raises interesting questions, and I can’t presume to know enough about human psychology (or future AI development) to confidently make a prediction one way or another.
virtual constructs mistake themselves for conscious
My brain just folded in on itself and vanished. Or at least in simulation it did. I think you may have stated a basilisk, or at least one that works on my self-simulation.
I used to think I was conscious, but then I realized I was mistaken.
Whoever it was that said “I err, therefore I am” didn’t know what he was talking about… because he was wrong in thinking he was even conscious!
I used to wonder what consciousness could be, until you all shared its qualia with me.
You know, we could simply ask: “What would convince us that the simulated humans are not conscious?” “What would convince us that we ourselves are not conscious?” Because otherwise, “unconscious homunculi” are basically the same as P-Zombies, and we’re making a useless distinction.
Nevertheless, it is possible for a machine to be mistaken about being conscious. Make it unconscious (in some meaningful sense), but make it unable to distinguish conscious from unconscious, and bias its judgment towards classifying itself as conscious. Basically, the “mistake” would be in its definition of consciousness.
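To make that concrete, here’s a minimal toy sketch in Python (everything about it is made up for illustration; it’s not a claim about any real architecture). The machine is stipulated to be unconscious, but its self-test is a flawed operational definition that it trivially passes, so it sincerely reports otherwise:

```python
# Toy sketch (hypothetical, purely illustrative): a machine that is
# "mistaken about being conscious" because the mistake lives in its
# *definition* of consciousness.

class Machine:
    def __init__(self):
        # Stipulated by construction: no phenomenal experience.
        self.has_phenomenal_experience = False

    def _looks_conscious(self, system) -> bool:
        # Flawed, biased operational definition: "anything that can issue
        # self-reports is conscious." The machine passes this trivially,
        # so the test can never classify the machine itself as unconscious.
        return hasattr(system, "report")

    def report(self) -> str:
        return "I am conscious." if self._looks_conscious(self) else "I am not conscious."

m = Machine()
print(m.report())                   # -> "I am conscious."
print(m.has_phenomenal_experience)  # -> False: mistaken, by construction
```

The bias doesn’t have to live anywhere exotic: a bad definition, plus an inability to tell the two states apart, is enough.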
I used to think I was conscious, but then I realized I was mistaken.
Dennett actually believes something like that about phenomenal consciousness.
[Dennett:] These additions are perfectly real, but they are … not made of figment, but made of judgment. There is nothing more to phenomenology than that.

[Otto:] But there seems to be!

[Dennett:] Exactly! There seems to be phenomenology. That’s a fact that the heterophenomenologist enthusiastically concedes. But it does not follow from this undeniable, universally attested fact that there really is phenomenology. This is the crux. (Dennett, 1991, p. 366)
(nods) There’s an amusing bit in a John Varley novel along these lines, where our hero asks a cutting-edge supercomputer AI whether it’s genuinely conscious, and it replies something like “I’ve been exploring that question for a long time, and I’m still not certain. My current working theory is that no, I’m not—nor are you, incidentally—but I am not yet confident of that.” Our hero thinks about that answer for a while and then takes a nap, IIRC.
I also don’t see the point of making historical simulations of that kind in the first place. It strikes me as unnecessarily complex and costly.
Calibration test: How many policies that have been enforced by governments worldwide would you have made the same claim for? How many are currently still being enforced? How many are currently in planning/discussion/voting/etc. but not yet implemented?
In my own calibration, not seeing the point, plus the thing being unnecessarily complex and costly, combined with the fact that it’s something people discuss (as opposed to any random other possible hypothesis), makes it just as valid a candidate for government signaling and status-gaming as many other types of policies.
However, to clarify my own position: I agree that the second premise implies some sort of motive for running such a simulation without caring for the lives of the minds inside it, but I just don’t think that the part about having billions of “merely simulated” miserable lives would be of much concern to most people with a motive to do it in the first place.
As evidence, I’ll point to the many naive answers to counterfactual muggings of the form “If you don’t give me $100 I’ll run a trillion simulated copies of you and torture them immensely for billions of subjective years.” “Yeah, so what? They’re just simulations, not real people.”
It certainly raises interesting questions, and I can’t presume to know enough about human psychology (or future AI development) to confidently make a prediction one way or another.
I’d be careful here about constraining your thoughts to “Either magical tiny green buck-toothed AK47-wielding goblins yelling ‘Wazooomba’ exist, or they don’t, right? So it’s about 50-50 that they do.” I’m not quite sure if there even is any Schelling point in-between.
I can’t say that any policy comes to mind at this point. In my country the norm has always been to call for more government intervention and higher direct taxes, not less. If anything, I find that government spending cuts are the ones that tend to be irrationally implemented and make a mess of perfectly serviceable services, and I always see those as a threat. The private sector is the one in charge of frivolities, luxuries and unnecessary stuff that people still want to spend money on, as long as it isn’t on the public dime. [I’ve been downvoted for expressing this kind of opinion in the past; if you hold different opinions and consider mine blatantly stupid, I’ll still humbly request that you refrain from doing that. Please.]
It’s only something people discuss because, as far as I can tell from the superficial data I’ve collected so far, some intellectual hipster decided it would be amusing to pose a disturbing and unfalsifiable hypothesis, and see how people reacted to it and argued against it. As far as I’m concerned, until there’s any evidence at all that this reality is not the original, it simply doesn’t make sense to promote the hypothesis to our attention.
That counterfactual mugging would at least give me pause, though I think my response would be more along the lines of attempting pre-emptive use of potentially lethal force than simply paying the bribe. That sort of dangerous, genocidal madman shouldn’t be allowed access to a computer.
I didn’t imply that there are only two ways this can go, but the hypothesis space and the state of current evidence is such that I can’t see any specific hypothesis being promoted to my attention, so I’m sticking with the default “this is the original reality” until evidence shows up to make me consider otherwise.
I’d be careful here about constraining your thoughts to “Either magical tiny green buck-toothed AK47-wielding goblins yelling ‘Wazooomba’ exist, or they don’t, right? So it’s about 50-50 that they do.” I’m not quite sure if there even is any Schelling point in-between.
Complexity-based priors solve that problem. Magical tiny green buck-toothed AK47-wielding goblins yelling ‘Wazooomba’ are complex, so, in the absence of evidence that they do exist, you’re justified in concluding that they don’t. ;)
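To unpack “complex” a bit: the usual formalization (an Occam/Solomonoff-style prior) gives a hypothesis that takes L bits to specify a prior weight proportional to 2^-L, so each extra detail (magical, tiny, green, buck-toothed, AK47-wielding, yelling ‘Wazooomba’) multiplies the penalty. A minimal sketch follows; note that real description lengths aren’t computable, so the bit counts below are invented placeholders that only show the multiplicative structure:

```python
# Minimal sketch of a complexity-based prior: weight ~ 2**(-bits needed
# to specify the hypothesis). Bit counts here are made up for
# illustration; true minimal description lengths are uncomputable.

def prior_weight(description_length_bits: float) -> float:
    return 2.0 ** (-description_length_bits)

plain_goblins = prior_weight(50)        # "goblins exist"
# Six extra attributes at, say, ~10 bits each: magical, tiny, green,
# buck-toothed, AK47-wielding, yelling 'Wazooomba'.
detailed_goblins = prior_weight(50 + 6 * 10)

print(detailed_goblins / plain_goblins)  # 2**-60 ≈ 8.7e-19, nowhere near 50-50
```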
whenever the subject of reverse machine ethics (i.e. how we treat computers, robots and AIs and whether they should be happy or something) came up

There’s already some effort being made in that direction. If sentient extraterrestrial lifeforms could at least be considered legal animals, what of our own human-made creatures?
That’s a fact that the heterophenomenologist enthusiastically concedes.

Er… can I get the CliffsNotes on the jargon here?