Re: no distinction whatever is made between “intelligent” and “sentient”.
That distinction seems irrelevant in this context. The paper is about self-improving systems, which would normally be fairly advanced, and so both intelligent and sentient.
Re: the few instances where an AI would change their utility function mentioned in the paper are certainly not exhaustive, I found the selection quite arbitrary.
How do you think these cases should be classified?
Re: The second flaw in the little abstract above was the positing of “drives”.
That is precisely the point of the paper: that a chess program, a paper-clip maximiser, and a share-price maximiser will share some fundamental and important traits and behaviours.
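To make the shared structure concrete, here is a minimal sketch (my own illustration, not from the paper): the same generic action-selection loop drives very different "agents" once you swap in a utility function. The toy world, actions, and utility functions below are all hypothetical.

```python
# Illustrative only: two "agents" that differ solely in their utility
# function, run through one and the same generic maximiser.

def best_action(actions, outcome, utility):
    """Pick the action whose predicted outcome scores highest under `utility`."""
    return max(actions, key=lambda a: utility(outcome(a)))

# Hypothetical toy world state.
state = {"paperclips": 0, "share_price": 100}

def outcome(action):
    """Predict the state resulting from an action, without mutating `state`."""
    new = dict(state)
    action(new)
    return new

def make_clips(s): s["paperclips"] += 1
def pump_stock(s): s["share_price"] += 5
def idle(s): pass

actions = [make_clips, pump_stock, idle]

# The only difference between the two agents:
clip_utility  = lambda s: s["paperclips"]
stock_utility = lambda s: s["share_price"]

print(best_action(actions, outcome, clip_utility).__name__)   # make_clips
print(best_action(actions, outcome, stock_utility).__name__)  # pump_stock
```

Everything outside the utility function is identical, which is why behaviours that follow from the maximising structure itself (rather than from the particular goal) show up across such different systems.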
Re: microeconomics applying to humans.
Humans aren't perfectly rational economic agents, but they approximate them. Of course microeconomics applies to humans.
Re: I see nothing of the vastness of mindspace in this paper.
The framework allows for arbitrary utility functions. What more do you want?