That seems fair. FWIW, my posts about modeling humans are motivated by “maybe people will make AGI using similar algorithms to the human brain, so we should try to assess how likely this research project is to succeed, and how soon, and more importantly figure out how to ensure a good future on the condition that it does in fact succeed”. I guess Kaj’s posts about modeling humans are motivated by “let’s understand humans as a window into what ‘human values’ mean and what we want from a post-AGI future etc., and also to better understand mental health.” Sorry if I’m putting words in their mouth. I guess you’d categorize both of these as “Advancing understanding of human cognition”. Anyway, nice post :-)