Not sure if this is what you are asking, but a paperclip maximizer not familiar with general relativity risks creating a black hole out of paper clips, losing all its hard work as a result.
That would be a problem of the AI not being able to accurately predict the consequences of its actions because it doesn’t know enough physics. An ontological crisis would involve the paperclip maximizer learning new physics, therefore getting confused about what a paperclip is, and maximizing something else.
Example: An AI is introduced to a large quantity of metal and told to make paperclips. Since the AI is confined to a metal-only environment, “paperclip” is defined only as a shape.
The AI escapes from the box, and encounters a lake. It then spends some time trying to create paperclip shapes from water. After a bit of experimentation, it finds that freezing the water to ice allows it to create paperclip shapes. Moreover, it finds that any substance provided with enough heat will melt.
Therefore, in order to better create paperclip shapes from other, possibly undiscovered materials, the AI puts out the sun, and otherwise seeks to minimise the amount of heat in the universe.
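In toy code, the crisis might look something like the sketch below (the class and the shape-only utility are just illustrative assumptions, not part of the original setup):

```python
# A minimal sketch of the crisis in the example above (all names and the
# shape-only utility are my own illustrative assumptions).
from dataclasses import dataclass

@dataclass
class Obj:
    shape: str      # e.g. "paperclip", "sphere"
    material: str   # never varied inside the metal-only box

def utility_v1(objects):
    # Learned in the box: a "paperclip" is defined by shape alone, because
    # material was constant during training and never entered the ontology.
    return sum(1 for o in objects if o.shape == "paperclip")

world = [Obj("paperclip", "steel"), Obj("paperclip", "ice"), Obj("sphere", "steel")]

# After escaping, the AI learns that non-metal materials exist, but its utility
# never mentions material, so a frozen-water "paperclip" scores as well as steel.
print(utility_v1(world))  # -> 2
```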
Is that what you’re looking for?
Pretty sure that freezing stuff would cost lots of negentropy, which Clippy could instead spend making many more paperclips out of already-solid materials.
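As a rough back-of-the-envelope for that negentropy point (my own numbers, assuming 1 kg of water at its 273 K melting point and a 300 K environment):

```latex
% Entropy that must be exported to freeze 1 kg of water at its melting point,
% and the minimum (Carnot) work needed to pump that heat into a 300 K environment:
\[
  \Delta S_{\text{freeze}} = \frac{L_f}{T_m}
    \approx \frac{334\ \text{kJ}}{273\ \text{K}}
    \approx 1.2\ \tfrac{\text{kJ}}{\text{K}},
  \qquad
  W_{\min} = Q\,\frac{T_{\text{env}} - T_m}{T_m}
    \approx 334\ \text{kJ} \times \frac{27}{273}
    \approx 33\ \text{kJ}.
\]
```

That is tens of kilojoules of free energy per kilogram of ice, which could instead go into shaping material that is already solid.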
That is an example of a paperclip maximizer failing an ontological crisis. It doesn’t seem to illustrate Shminux’s concept of an “ultraviolet catastrophe”, though.
You are correct. Can you suggest an example that resolves that shortcoming?
I think that the concept of an ontological crisis metaphorically similar to the ultraviolet catastrophe is confused, and I don’t expect to find a good example. I suspect that when he proposed it, Shminux was thinking more of problems of inaccurate predictions from incomplete physics than of utility functions that don’t translate correctly to new ontologies.
To be clear, the issue here is that it inadvertently hastens the heat death of the universe, and generally lowers its ability to create paperclips, right?
It’s just an example of an ontological crisis; the AI learns new physics (cold causes water to freeze), is no longer certain what a paperclip is, and is therefore maximising something else (coldness).
The thing the paperclip maximizer is maximizing instead of paperclips is paperclip-shaped objects made out of the wrong material. Coldness is just an instrumental value, and the example could be simplified and made more plausible by taking that part out. ETA: And the relevant new physics is not that cold water freezes but that materials other than metal exist.
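In the same toy terms, the terminal/instrumental split might look like this (again only a sketch with hypothetical names):

```python
# Coldness never appears in what gets scored; it only appears inside plans,
# and only for raw material that happens to be liquid.
from dataclasses import dataclass

@dataclass
class Thing:
    shape: str
    material: str

def terminal_score(things):
    # What the maximizer actually maximizes: paperclip-shaped objects,
    # regardless of material -- including "wrong" materials like ice.
    return sum(1 for t in things if t.shape == "paperclip")

def plan(raw_material_state):
    # Freezing is purely instrumental: it is a step toward a solid to bend,
    # and it drops out entirely when the stock is already solid.
    steps = ["cool until solid"] if raw_material_state == "liquid" else []
    steps.append("bend into paperclip shape")
    return steps
```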
A good point. I hadn’t thought of it that way, but you are correct.
Exactly, yes.
Oh, right. But … it’s actually maximizing solids, which is instrumental to maximizing paperclip-shaped objects, which is what it was programmed to do in the first place. Right?
Yyyyyeeeees. That’s a fair statement of the situation.
Just checking I understand it this time, thanks :-)
Oh, OK. What are the abstraction levels a paperclip maximizer might use?