“you would need to value your time at $5,000/57 hrs=$88 per hour to break even”
Wait, this sounds like approximately the rate where you’d start setting up an office and getting a secretary or receptionist. Which makes me wonder… is that actually what a major function of secretaries has always been?
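For concreteness, here's that break-even arithmetic as a quick sanity check. The $5,000 cost and 57 hours are the figures from the quoted comment; the Python is just illustration:

```python
# Break-even hourly rate from the quoted comment's figures:
# a $5,000 cost amortized over 57 hours of time saved.
cost_usd = 5_000
hours_saved = 57

break_even_rate = cost_usd / hours_saved
print(f"${break_even_rate:.2f}/hour")  # -> $87.72/hour, i.e. roughly $88
```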
I very much agree with this. You’re not the only one! I’ve been thinking for a while that actually, AGI is here (by all previous definitions of AGI).
Furthermore, I want to suggest that the people who are saying we don’t yet have AGI will in fact never be satisfied by anything an AI does. The reason is this: an AI will never act like a human. By the time its abilities at basic human things like speaking and driving are up to human standards (this has already happened), its abilities in other areas, like playing computer games and calculating, will far exceed ours. Moreover, AIs don’t have desires that are anything like ours. So there will come a time when AIs can do all the things people do, but half the internet will still be saying, “But it doesn’t take children to the park, because it doesn’t have emotional intelligence, therefore it’s still not real AGI.” That is, because AI is not like us, there will always be some human activities that AI does not do; there will always be people who claim that this means AI cannot do those things; and they will therefore conclude that AGI has not been achieved.
The much more interesting position right now is to recognise, as the OP does, that AGI is already here; that AIs are still not very much like us; and to wonder what that means. The next generation of AIs will be obviously much smarter than us, and yet they still won’t make money in pizza shops, as one commenter above suggested. I’ll go out on a limb here and say that no AI will ever open a pizza shop. And in fact, that’s a stupid expectation to have of these fabulous aliens. It’s nothing more or less than saying: X doesn’t do what I would do, therefore X is wrong/not intelligent. It’s the most parochial approach to a new thing that you could possibly take.
A less parochial approach: these alien beings are now among us. Rather than keep complaining that they don’t do what we do, can we ask: what do they do?
As an example of where that takes us: they’re intelligent, but they don’t have intentions. Does that tell us that intentions, not intelligence, are really at the heart of human consciousness? AIs are intelligent, but they don’t feel pain. Are they morally salient? If not, does that imply that people are morally salient not because we’re smart, but because we hurt? And so on.