To be clear—you’re saying that you would prefer that there not exist a single thing which takes negentropy and converts it into order (or whatever other general definition for ‘life’ you prefer), and may or may not have the possibility of evolving into something else more complicated, over nothing at all?
I’m thinking that the paperclipper counts as a life not worth living—an AI that wants to obsess about paperclips is about as repugnant to me as a cow that wants to be eaten. Which is to say, better than doing either of those without wanting it, but still pretty bad. Yes, I’m likely to have problems with a lot of genuinely friendly AIs.
I was assuming that both scenarios were for keeps. Certainly the paperclipper should be smart enough to ensure that; for the other, I guess I’ll assume you’re actually destroying the universe somehow.
It is a fair point, but do you mean that the paperclipper is wrong in its judgement that its life is worth living, or is it merely your judgement that, if you were the paperclipper, your life would not be worth living by your current standards? Remember that we assume no other life is possible in the universe anyway; this assumption makes things more interesting.
It’s my judgement that the paperclipper’s life is not worth living. By my standards, sure; objective morality makes no sense, so what other standards could I use?
The paperclipper’s own opinion matters to me, but not all that much.
Would you engage with a particular paperclipper in a discussion (plus observation etc.) to refine your views on whether its life is worth living? (We are straying away from a nominal AIXI-type definition of “the” paperclipper but I think your initial comment warrants that. Besides, even an AIXI agent depends on both terminal values and history.)
No, if I did so it’d hack my mind and convince me to make paperclips in my own universe. Assuming it couldn’t somehow use the communications channel to directly take over our universe.
I’m not quite sure what you’re asking here.
Oh well, I hadn’t thought of that. I was “asking” about the methodology for judging whether a life is worth living.
Whether or not I would enjoy living it, taking into account any mental changes I would be okay with.
For a paperclipper... yeah, no.
But you have banned most of the means of approximating the experience of living such a life, no? In the general case you wouldn’t be justified in your claim (where by “general case” I mean a situation where I strongly doubt you know the other entity, not the case of “the” paperclipper). Do you have a proof that having a single terminal value excludes having a rich structure of instrumental values? Or does the way you experience terminal values overwhelm the way you experience instrumental values?
Assuming that clippy (or the cow, which makes more sense) feels “enjoyment”, aren’t you just failing to model them properly?
It’s feeling enjoyment from things I dislike, and failing to pursue goals I do share. It has little value in my eyes.
Which is why I, who like chocolate icecream, categorically refuse to buy vanilla or strawberry for my friends.
Nice strawman you’ve got there. Pity if something were to... happen to it.
The precise tastes are mostly irrelevant, as you well know. Consider instead a scenario where your friend asks you to buy a dose of cocaine.
I stand by my reductio. What is the difference between clippy enjoying paperclips vs humans enjoying icecream, and me enjoying chocolate icecream vs you enjoying strawberry? Assuming none of them are doing things that give each other negative utility, such as clippy turning you into paperclips or me paying the icecream vendor to only purchase chocolate (more for me!)
That sounds as if scenario B precluded abiogenesis from happening ever again. After all, prebiotic Earth kind of was a thing which took negentropy and (eventually) converted it into order.
The question for B might then become: under which scenario is some sort of biogenesis more likely, one in which a paperclipper exists, or one in which it doesn’t? The former includes the paperclipper itself as potential fodder for evolution, but (as was just pointed out) there’s a chance the paperclipper might work to prevent it; the latter has it for neither fodder nor interference, leaving things to natural processes.
At what point in biogenesis/evolution/etc do you think the Great Filter does its filtering?