Hi Joshua, thanks for commenting!
There is an enormous difference between my claim and what 27chaos is saying. 27chaos is suggesting that “maximizing human happiness” is computationally intractable. However, this is clearly not the case, since humans themselves maximize human values, or at least do so to an extent that would be impressive in an AI. My claim is that designing an AGI (finding the right program in the space of all possible programs) is computationally intractable. Once designed, the AGI itself is efficient; otherwise it wouldn’t count as an AGI.
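To make the "search over program space" point concrete, here is a minimal illustrative sketch (my own toy example, not something from the thread): if candidate programs are encoded as bitstrings of length n, the search space has 2^n members, so naive exhaustive design-by-search becomes infeasible long before n reaches the size of any realistic program.

```python
from itertools import product

def exhaustive_search_size(n_bits: int) -> int:
    """Count all bitstrings of length n_bits, each a candidate 'program'."""
    return sum(1 for _ in product([0, 1], repeat=n_bits))

# Enumeration is fine for tiny n...
assert exhaustive_search_size(10) == 2**10  # 1,024 candidates

# ...but the count doubles with every added bit, so even a one-kilobyte
# program already has 2**8192 candidates, far beyond any feasible search.
for n in (10, 20, 40, 8192):
    print(f"n = {n:>5} bits -> 2**{n} candidate programs")
```

Of course, nobody proposes literal brute-force enumeration; the sketch only illustrates why the design problem, unlike running the finished AGI, is the part where intractability bites.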
Yes, I agree that what you are suggesting is different. I was pointing to the thread also because there’s further discussion in the thread beyond 27chaos’s initial point, and some of it is more relevant.
I see. I would be grateful if you could point to the specific comments you think are most relevant.
At the risk of sounding egotistical, this bit seems relevant.
Thanks, this is definitely relevant and thought-provoking!