Do we all have the same definition of what AGI is? Do you mean being able to, um, mimic the things a human can do, or are you talking full-on Strong AI, sentient computers, etc.?
Like, if we’re talking The Singularity, we call it that because all bets are off past the event horizon.
Most of the discussion here seems to be talking about weak AI, or the road we’re on from what we have now (not even worthy of being called “AI”, IMHO; ML at least is a less overloaded term) to true AI, or the edge of that horizon line, as it were.
When you said “the same alignment issue happens with organizations, as well as within an individual with different goals and desires,” I was like “yes!”, but then you went on to say AGI is dissimilar, and I was like “no?”.
AGI as we’re talking about it here seems to be about abstractions, so if we come up with math that works for us, to prevent humans from doing Bad Stuff, wouldn’t those same checks and balances work for our programs too? At least we’d have an idea, right?
Or, maybe, we already have the idea, or at least the germination of one, as we somehow haven’t managed to destroy ourselves or the planet. Yet. 😝