> The paperclipper is a strawman.

The paperclipper is a commonly referred to example.

> Paperclippers would be at a powerful evolutionary/competitive disadvantage WRT non-paperclippers.

We are considering the case where one AGI gets built. There is no variation to apply selection pressure to.

> Even if you don’t believe this, I don’t see why you would think paperclippers would constitute the majority of all possible AIs.

I never said I did. This argument would be an actual straw man.

> Something that helps only with non-paperclippers would still be very useful.

The paperclipper is an example of the class of AIs with simplistic goals; the scenarios are similar for smiley-face maximizers and orgasmium maximizers. Most AIs that fail to be Friendly will not have “kill all humans” as an intrinsic goal, and the danger does not depend on them adopting it as an instrumental goal either: they are likely to kill us out of indifference, as a side effect of achieving their actual goals. Also consider near-miss AIs that create a dystopian future but don’t kill all humans.