No, it is not hypothetical. If you build an AI with an unbounded utility function while human utility functions are (mostly) bounded, then you have built a (mostly) unfriendly AI: one that will be willing to sacrifice arbitrarily large amounts of current human utility in order to gain the resources to create a wonderful future for hypothetical future humans.
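For concreteness, here is a toy sketch (my own illustration; the utility functions and numbers are invented for the example, not taken from anything above) of why an unbounded utility function licenses arbitrarily large sacrifices, while a bounded one stops caring once the stakes saturate:

```python
# Assumptions: a one-shot gamble with probability p of a payoff, versus a
# sure status quo worth `current`. The "bounded" utility u(x) = x / (1 + x)
# saturates at 1; the "unbounded" utility u(x) = x grows without limit.

def bounded_u(x):
    return x / (1.0 + x)   # never exceeds 1

def unbounded_u(x):
    return x               # grows without bound

def accepts_gamble(u, current, p, payoff):
    """Does an expected-utility maximizer trade `current` for the gamble?"""
    return p * u(payoff) > u(current)

current, p = 1_000_000.0, 1e-9   # huge present stake, tiny success chance
for payoff in (1e12, 1e18, 1e24):
    print(payoff,
          accepts_gamble(bounded_u, current, p, payoff),
          accepts_gamble(unbounded_u, current, p, payoff))

# The bounded agent never accepts: p * u(payoff) <= p * 1 = 1e-9, far below
# u(current) ~ 1. The unbounded agent accepts as soon as p * payoff > current,
# so for ANY fixed probability p there is always a payoff large enough.
```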
That’s different, though. The hypothetical I was objecting to was humans having unbounded utility functions. I think that idea is a case of making things up.
FWIW, I stand by the idea that instrumental discounting makes the debate over ultimate discounting versus no ultimate discounting mostly a storm in a teacup. In practice, all agents do instrumental discounting, since the future is uncertain and difficult to directly influence.
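To make that concrete, here is a minimal sketch (my construction; the per-step survival probability is an assumed parameter) of how uncertainty alone reproduces exponential discounting, even for an agent whose utility function applies no terminal discount at all:

```python
# Assumption: each step, the agent's plan survives intact with probability
# `survival`; the agent itself applies NO terminal (ultimate) discount.

def expected_value(reward, steps, survival=0.95):
    """Undiscounted reward, weighted only by the chance the plan still holds."""
    return (survival ** steps) * reward

for t in (0, 10, 50, 100):
    print(t, round(expected_value(100.0, t), 2))
# prints: 0 100.0 / 10 59.87 / 50 7.69 / 100 0.59

# The survival weighting is formally identical to exponential discounting
# with factor 0.95, which is the sense in which instrumental discounting
# happens regardless of whether any ultimate discounting is built in.
```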
Any debate here should really be over whether ultimate discounting on a timescale of decades is desirable.