Well, surely it would depend on the alternatives.
If I believed that EY could build a superhuman AGI in my lifetime that optimizes for the reflectively stable ways EY prefers the world to be, and otherwise believed what I currently do about the world (which includes not believing that better alternatives are likely in my lifetime), I would enthusiastically support (e.g., fund) that effort.
Say what I will about EY’s preferences, I see no reason to expect a world optimized for them to leave me worse off than the current state of affairs.