I think that if a system is designed to do something, anything, it needs at least to care about doing that thing, or some approximation of it.
GPT-3, in a broad sense, can be described as caring about following the current prompt (in a way shaped by fine-tuning).
I wonder, though, whether there are things you can care about that don't reduce to a definite goal whose expected utility you maximize. I mean a system for which the optimal behavior is not to reach some particular point in a subspace of possibilities, sacrificing the axes it doesn't care about, but to maintain some other dynamic while ignoring those axes.
Like how gravity can pull you into a singularity or keep you in orbit (a simplistic visual analogy).
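Here is a toy sketch of that contrast, with everything (the two-dimensional state space, the update rules, the step sizes) made up purely for illustration: one agent descends toward a fixed target point, while the other only corrects its radius back toward an orbit and is indifferent to where on the orbit it sits.

```python
# Toy contrast: "reach a point" vs "maintain a dynamic". All details here are
# invented for illustration, not anyone's actual proposal.
import numpy as np

def point_seeker(x, target, lr=0.1):
    """Moves the state toward a fixed optimum (the EU-maximizer picture):
    every step trades off all axes to get closer to `target`."""
    return x + lr * (target - x)

def orbit_keeper(x, radius=1.0, lr=0.1, angular_step=0.1):
    """Maintains a dynamic instead of reaching a point: it corrects the radius
    back toward `radius` but keeps circulating, indifferent to its angular
    position (the axis it does not care about)."""
    r = np.linalg.norm(x)
    # Radial correction: care only about staying on the orbit.
    x = x * (1 + lr * (radius - r) / max(r, 1e-9))
    # Tangential motion: keep moving, ignoring where on the circle it is.
    c, s = np.cos(angular_step), np.sin(angular_step)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

x_a = np.array([2.0, 2.0])
x_b = np.array([2.0, 2.0])
for _ in range(200):
    x_a = point_seeker(x_a, target=np.zeros(2))
    x_b = orbit_keeper(x_b)

print("point seeker collapses to the target:", x_a)
print("orbit keeper settles at radius", np.linalg.norm(x_b))  # ~1.0, angle unconstrained
```

The first agent's "caring" bottoms out at a single point in state space; the second one's "caring" is about a pattern of motion, and a whole axis (the angle) is simply left free.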