Yeah, it looks like maybe the same argument just expressed very differently? Like, I think the “coherence implies goal-directedness” argument basically goes through if you just consider computational complexity, but I’m still not sure if you agree? (maybe I’m being way too vague)
Or maybe I want a stronger conclusion? I’d like to say something like “REAL, GENERAL intelligence” REQUIRES goal-directed behavior (given the physical limitations of the real world). It seems like maybe our disagreement (if there is one) is around how much departure from goal-directedness is feasible / desirable, and/or how much we expect such departures to affect performance (the trade-off also gets worse for more intelligent systems).
It seems likely the AI’s beliefs would be logically coherent whenever the corresponding human beliefs are logically coherent. This seems quite different from arguing that the AI has a goal.
Yeah, it’s definitely only an *analogy* (in my mind), but I find it pretty compelling *shrug*.