Very relevant article from the sequences: Detached Lever Fallacy.
Not saying you’re committing this fallacy, but it does explain some of the bigger problems with “raising an AI like a child” that you might not have thought of.
I completely made this mistake right up until the point I read that article.
Hardly dispositive. A utility function that says “learn and care what your parents care about” looks relatively simple on paper. And we know the minimum intelligence required is that of a human toddler.
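To illustrate what “simple on paper” could mean, here is a minimal, purely hypothetical Python sketch; every name in it is an assumption, and the actual difficulty is buried in the one unimplemented inference step:

```python
# Illustrative sketch only: a "learn and care what your parents care about"
# utility function written as if it were simple. All names are hypothetical.
from typing import Callable, Sequence

# Hypothetical type: an evaluation of world states, learned from the parents.
ParentValues = Callable[[object], float]

def infer_parent_values(observations: Sequence[object]) -> ParentValues:
    """The deceptively hard part: turn raw observations of the parents'
    behavior into an estimate of what they actually care about.
    Left unimplemented because specifying this is the whole problem."""
    raise NotImplementedError("this one line hides the entire difficulty")

def utility(world_state: object, observations: Sequence[object]) -> float:
    """'Simple on paper': score a world state by the inferred parent values."""
    parent_values = infer_parent_values(observations)
    return parent_values(world_state)
```

The two-line `utility` function looks simple only because `infer_parent_values` is unspecified, which is arguably the same gap the replies below are pointing at.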
Citation needed. That sounds extremely complex to specify.
relatively
I don’t think “learn and care about what your parents care about” is noticeably simpler than abstractly trying to determine what an arbitrary person cares about or CEV.