[ This is going to sound meaner than I intend. I like that you’re thinking about these things, and they’re sometimes fun to talk about. However, it requires some nuance, and it’s DARNED HARD to find analogies that highlight the salient features of people’s decisions and behaviors without being just silly toys that don’t relate except superficially. ]
Really? It’s your view that AI labs are working hard to install a button that will let a random person save himself and 1000 selected co-survivors? Are they also positioning a sniper on a nearby building?
I didn’t get ANYTHING about AI rights from your post—what features of the scenario would lead me to compare with AI as a moral patient or giving it/them any legal consideration?
Thanks for the quick reply!
It is my view that AI labs are building AGI that can do everything a powerful general intelligence can do, including executing a successful world-takeover plan, with or without causing human extinction.
If the first AGI is misaligned, I am scared it will want to execute such a plan, which would be like pressing the button. The scenario is most relevant when there is no aligned AGI yet that wants to protect us.
I see now I need to clarify: the random person / man in the scenario mostly represents the AGI itself (but also a misuse situation, where a non-elected person gives the command to an obedient AGI). So no, the AI labs are not working to give a fully random person this button, but they are working to give themselves this button (along with positive capabilities, of course). And wouldn’t an employee of an AI lab, whom the public did not elect, be a random person with unknown values relative to you or me?
The sniper represents our chance to switch it off, but only while we still can, before it has made secret copies. That window of opportunity is represented by the sniper only being able to shoot the man while he is in view of the window. Stuart Russell recently advocated for a kill switch on a general system in the Senate hearing, which can be found here on YouTube. That is Russell advocating for positioning the sniper.
what features of the scenario would lead me to compare with AI as a moral patient or giving it/them any legal consideration?
It is intuitive in modern culture for humans to see a random other human as a moral patient worthy of legal consideration. That is why I had a random man play the part of the AGI, since, in my experience of talking about this, it is unintuitive for many people to think of an AGI as a moral patient.
I wrote this in the post to clarify when the AGI would be a moral patient:
I have set this up to be relevant to the situation where the man in the room is analogous to an AGI (Artificial General Intelligence) that is thought to have feelings / consciousness and thus moral value.
Does any of this change your view of the whole thing?
[ Bowing out at this point. Feel free to respond/rebut, and I’ll likely read it, but I won’t respond further. ]
Does any of this change your view of the whole thing?
Not particularly. The analogy is too distant from anything I care about, and the connection to the parts of reality that you are concerned with is pretty tenuous. It feels mostly like you’re asserting a danger, and then talking about an unrelated movie.