Downvoted for trying to learn anything about the real world from a simple binary-choice fiction.
The right answer, of course, is to shoot the mastermind who has set all of this up and invented/installed that damned button, followed by hiring a sniper rather than letting you disassemble the button or just TALK to the person in the room.
Thank you for commenting this. Very useful to hear why someone downvotes! I made some edits to reflect that the real world is a lot more messy than a simple fiction, among other things. If you or others have more pointers as to why this post got downvoted, please share, I want to learn. The response really got me down.
It is my view that AI labs are working hard to install this damned button. And people working on or promoting open-source AGI want to install this button in every building whose occupants can afford the compute cost. Once an AGI has started its takeover / damaging plan, there would be no way to disassemble the button, because it could have secret copies running elsewhere, and we currently have no reliable way of turning off all relevant computers to be safe in that case. You ask why we can't just talk to the person in the room. My thinking was that talking to an AGI would not give us an advantage, since it would have a chance to manipulate us. The whole point of the post was to argue against giving an AI rights (like privacy) before we have strong alignment guarantees, and in favor of switching it off as soon as possible, even when it is thought to have moral value. What is your (or other readers') stance on that?
[ This is going to sound meaner than I intend. I like that you’re thinking about these things, and they’re sometimes fun to talk about. However, it requires some nuance, and it’s DARNED HARD to find analogies that highlight the salient features of people’s decisions and behaviors without being just silly toys that don’t relate except superficially. ]
Really? It’s your view that AI labs are working hard to install a button that will let a random person save himself and 1000 selected co-survivors? Are they also positioning a sniper on a nearby building?
I didn’t get ANYTHING about AI rights from your post—what features of the scenario would lead me to compare with AI as a moral patient or giving it/them any legal consideration?
Thanks for the quick reply!
It is my view that AI labs are building AGI which can do everything a powerful general intelligence can do, including executing a successful world takeover plan with or without causing human extinction.
If the first AGI is misaligned, I am scared it will want to execute such a plan, which would be like pressing the button. The scenario is most relevant when there is no aligned AGI yet that wants to protect us.
I see now that I need to clarify: the random person / man in the scenario mostly represents the AGI itself (but also a misuse situation, where a non-elected person gives the command to an obedient AGI). So no, the AI labs are not working to give a fully random person this button; they are working to give themselves this button (along with positive capabilities, of course). And wouldn’t an employee of an AI lab, whom the public did not elect, be a random person with unknown values relative to you or me?
The sniper represents our chance to switch it off, but only while we still can, before it has made secret copies. That window of opportunity is represented by only being able to shoot the man while he is in view of the window. Stuart Russell recently advocated for a kill-switch on general systems in a Senate hearing (available on YouTube). That is Russell advocating for positioning the sniper.
what features of the scenario would lead me to compare with AI as a moral patient or giving it/them any legal consideration?
In modern culture, it is intuitive for humans to see a random other human as a moral patient worthy of legal consideration. That is why I chose a random man to play the part of the AGI: in my experience, many people find it unintuitive to think of an AGI as a moral patient.
I wrote this in the post to clarify that it concerns the case where the AGI would be a moral patient:
I have set this up to be relevant to the situation where the man in the room is analogous to an AGI (Artificial General Intelligence) that is thought to have feelings / consciousness and thus moral value.
Does any of this change your view of the whole thing?
[ Bowing out at this point. Feel free to respond/rebut, and I’ll likely read it, but I won’t respond further. ]
Does any of this change your view of the whole thing?
Not particularly. The analogy is too distant from anything I care about, and the connection to the parts of reality that you are concerned with is pretty tenuous. It feels mostly like you’re asserting a danger, and then talking about an unrelated movie.