Is anybody seriously arguing at this point that simple (trivial) goal systems will suffice for an AGI to work the way we want it to? Yet this is the straw man that EY keeps attacking. Even Hibbard had complex goals in mind when he meant to keep humans "happy", although he did not communicate this well.
Comment on Accelerating Future (this is a claim I haven't encountered before):