Our universe does not have enough atoms or energy to destroy 3^^^^^3 paperclips.
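For a sense of scale, a minimal sketch, assuming the ^ here is Knuth’s up-arrow notation (so 3^^^^^3 means five arrows): the recursion below blows past the roughly 10^80 atoms in the observable universe long before the fifth arrow.

```python
def up(a, n, b):
    """Knuth's up-arrow: up(a, 1, b) = a**b; up(a, n, b) = a, then n arrows, then b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3  = 27
print(up(3, 2, 3))  # 3^^3 = 3**3**3 = 7,625,597,484,987
# up(3, 3, 3), i.e. 3^^^3, is a power tower of ~7.6 trillion 3s and is far too
# large to evaluate; it already dwarfs the ~10**80 atoms in the observable
# universe. 3^^^^^3 (five arrows) is unimaginably larger still.
```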
Simulated paperclips. Because the Paperclipper is in a box.
I am extrapolating.
As I thought. I disagree that such effects are possible in reasonable time-frames (no one is going to read constantly for a month), and they may be totally impossible. I see no reason to think any work of fiction can lead to such a distortion of reality.
Groups of people are not that much harder to manipulate than individuals.
They are if you’re trying to weaponize novels. If the guards work shifts, you cannot exploit the effects you claim come from reading a novel for 24 hours straight. They can watch each other and, if one of them is visibly compromised, prevent that person from freeing the AI. A single individual is probably easier to manipulate, assuming a total lack of supervision and safeguards.
Now we get to the question of how detailed the paperclips have to be for the paperclipper to care. I expect the paperclipper to care only when the paperclips are simulated individually, and we can’t simulate 3^^^^^3 paperclips individually.
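To put a rough number on “can’t simulate individually”: assume ~10^80 atoms in the observable universe and, absurdly generously, one individually tracked paperclip per atom. Even then the count tops out around 10^80, which 3^^4 alone already dwarfs; this is only a back-of-the-envelope sketch.

```python
import math

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80   # standard order-of-magnitude estimate

# Generous upper bound: one individually simulated paperclip per atom.
max_individually_tracked = ATOMS_IN_OBSERVABLE_UNIVERSE

two_arrows = 3 ** 3 ** 3                   # 3^^3 = 7,625,597,484,987
digits_of_3_upup_4 = int(two_arrows * math.log10(3)) + 1  # digit count of 3^^4 = 3**(3^^3)

print(f"paperclips trackable at one per atom: {max_individually_tracked:.1e}")
print(f"3^^3:                                 {two_arrows:,}")
print(f"3^^4 has roughly                      {digits_of_3_upup_4:,} digits")
# 3^^^3 and beyond are towers of trillions of 3s, so storing even one bit per
# paperclip for 3^^^^^3 paperclips is ruled out by sheer matter, never mind compute.
```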
> I see no reason to think any work of fiction can lead to such a distortion of reality.
I see no reason to think works of fiction that lead to such a distortion of reality are impossible.
We built the paperclipper. Hell, it doesn’t even have to be a literal paperclipper. For that matter, it doesn’t have to be literally 3^^^^^3 paperclips; all that matters is that we can manipulate its utility function without too much strain on our resources. If it values cheesecake, we can reward it with a world of infinite cheesecake. If it values “complexity”, we can put it in a fractal. The point is that we can motivate it to co-operate without worrying that it might be a better idea to let it out.
I assume you concede my point re: guards?