I think it’s plausible that an unaligned singularity could lead to outcomes we would consider interesting, because human values might be more generic than they appear, with their apparent complexity being an emergent feature of power-seeking and curiosity drives, or of mesa-optimization. I also think the whole framework of “a singleton with a fixed utility function becomes all-powerful and optimizes for it for all eternity” might be wrong, since human values don’t seem to work that way.