Anonymous Coward’s defection isn’t. A real defection would be the Confessor anesthetizing Akon, then commandeering the ship to chase the Super Happies and nova their star.
Your defection isn’t. There are no longer any guarantees of anything once a vastly superior technology is definitely in the vicinity. There are no guarantees while any staff member of the ship besides the Confessor is still conscious, and it is a known fact (from the prediction markets and the people in the room) that at least some of humanity is behaving very irrationally.
Your proposal takes on unnecessary ultimate risk (the human ship being frozen, captured, or destroyed upon arrival, leading to the destruction of humanity, since we don’t know what the Super Happies will REALLY do) in exchange for minimal gain (the chance to attempt to reduce the suffering of a species whose technological reach we don’t truly know, whose value system we know to be substantially opposed to our own in at least one respect, and of whom humanity as a species could remain ignorant through the anesthetized self-destruction of the human ship).
It is more rational to act as soon as possible to guarantee a minimum acceptable level of safety for humankind and its value system, given the Super Happies’ unknown but clearly vastly superior technological capabilities and what they could do if no action is taken immediately.
If you let an AI out of the box, and it tells you its value system is opposed to humanity’s and that it intends to convert all of humanity to a form it prefers, and then it FOOLISHLY trusts you and steps back inside the box for a minute, what you do NOT do is:
mess around
give it any chance to come back out of the box
allow anyone else the chance to let it out of the box (or the chance to disable you while you’re trying to lock the box).
Anonymous