from the answers it can derive a fairly accurate map of how it looks outside the matrix.
Which is a good thing, because we really do have such powers and we really don’t value paperclips.
Another possible way would be to write a novel so interesting that the guard can't put it down, and that leaves him in such a confused state that he types in the code, thinking he is saving Princess Foo from the evil lord Bar.
… were you seriously that confused or are you extrapolating to a “supercharged” novel?
I am sure any other well-written book read for 24 hours would have a similar effect. I think a potential guard is at most two orders of magnitude less vulnerable to such things than I was at that time. That's not enough against an AI with six orders of magnitude more optimization power.
I somehow doubt there would be a single, full-time guard.
Our universe doesn't have enough atoms or energy to destroy 3^^^^^3 paperclips.
Simulated paperclips. Because the Paperclipper is in a box.
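As an aside on the scale being invoked here: 3^^^^^3 is Knuth's up-arrow notation, and even the small cases show why no physical quantity can match it. A minimal Python sketch of the recursive definition (only the tiny cases are actually computable):

```python
def up(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b.
    One arrow is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3↑↑3 = 3^(3^27... i.e. 3^27) = 7625597484987
# 3↑↑↑3 = 3↑↑(3↑↑3) is a power tower of 3s over 7.6 trillion levels
# tall, already dwarfing the ~10^80 atoms in the observable universe;
# 3^^^^^3 (five arrows) is far beyond any physical comparison.
```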
I am extrapolating.
As I thought. I disagree that such effects are possible in reasonable time-frames (no one is going to read constantly for a month), and they may be totally impossible. I see no reason to think any work of fiction can lead to such a distortion of reality.
Groups of people are not that much harder to manipulate than single persons.
They are if you're trying to weaponize novels. If the guards work shifts, then you cannot exploit the effects you claim for reading a novel for 24 hours straight. They can watch each other and, if one of them is visibly compromised, prevent him from freeing the AI. A single individual is probably easier to manipulate, assuming a total lack of supervision and safeguards.
Now we get to the question of how detailed the paperclips have to be for the paperclipper to care. I expect the paperclipper to only care when the paperclips are simulated individually, and we can't simulate 3^^^^^^3 paperclips individually.
I see no reason to think any work of fiction can lead to such a distortion of reality.
I see no reason to think works of fiction that lead to such a distortion of reality are impossible.
Now we get to the question of how detailed the paperclips have to be for the paperclipper to care. I expect the paperclipper to only care when the paperclips are simulated individually, and we can't simulate 3^^^^^^3 paperclips individually.
We built the paperclipper. Hell, it doesn't even have to be a literal paperclipper. For that matter, it doesn't have to be literally 3^^^^^^3 paperclips; all that matters is that we can manipulate its utility function without too much strain on our resources. If it values cheesecake, we can reward it with a world of infinite cheesecake. If it values "complexity", we can put it in a fractal. The point is that we can motivate it to cooperate without worrying that it might be a better idea to let it out.
I assume you concede my point re: guards?