I can’t as easily think of a general argument against a misaligned AI ending up convex though.
Perhaps because most goals humans want you to achieve require concave-agent-like behaviors?