It’s useful in thinking about the nature of AGI. Ultimately, something that isn’t limited to messy biology and a small skull is going to outpace us pretty quickly in all domains. We might get lucky if an early one fucks up a takeover attempt and that makes us suddenly alert enough to avoid a second one; but that seems moderately unlikely.
I think you’re right that humans have something that could be termed a wisdom advantage, and maybe this is what you meant: we’ve had millions to billions of years of evolution (depending on where the relevant mechanisms started) shaping us to avoid things that might get us killed. That could be termed wisdom. AGI is not evolved but designed and trained, so it might have some nasty blind spots. Current AI certainly does.
We have a fine-tuned intuitive sense of danger that prevents us from doing things that could get us killed (at least the dangers our intuition can grapple with; the Darwin Award for bungee jumping with a cable is an example of the kind intuition doesn’t handle well). AGI does not.
That could be replaced with careful logic; as you say, thinking longer and harder can substitute for a lot.
As for engineering, that’s partly based on math, but math isn’t the majority of the job. There’s a lot of logic of materials, chemistry, etc., depending on what you’re engineering. It’s systematic thought, but also creative thought. I’m scoring this as roughly a push between early AGI and humans. Current AIs can do math very well, but just like we do: by using an external tool. So it’s not really better integrated.
Some of our intuitions for danger also apply to abstract situations like engineering, so we’ve got that advantage.
Again, more thought can substitute for talent.
I don’t think this wisdom advantage holds for the reasons you give. Your central argument for human advantage in wisdom:
“The only way to evaluate long-term wisdom is to let the agent make a decision, wait years, and see if the agent’s goals have advanced”
mostly applies to AGI as well. We don’t learn wisdom from age; we learn it from paying serious attention to the stories and lessons of those who have tried and succeeded or failed.
We do get better at this with age, but only in small part from trying and failing or succeeding at particular strategies ourselves. We have a vast library of failures and successes available in others’ stories, and as we age we accumulate more of them and get better at using them. The AGI can do that too, faster in some regards, but missing some others.
I agree that engineering and inventing do not look like AI’s strong suit, currently. Today’s best generative AI seem to be good only at memorization, repetitive tasks, and making words rhyme. They seem equally poor at engineering and wisdom. But it’s possible this will change in the future.
Same time
I still think that the first AGI won’t exceed humans at engineering and wisdom at the exact same time. From first principles, there aren’t very strong reasons why it should cross both thresholds simultaneously (unless progress comes as one sudden jump).
Engineering vs. mental math analogy
Yes, engineering is a lot more than just math. I was trying to say that engineering was “analogous” to mental math.
The analogy is that humans are bad at mental math because evolution did not prioritize making us good at it: prehistoric humans didn’t need to add large numbers.
The human brain has tens of billions of neurons, which can fire up to a hundred times a second. Some people estimate the brain has more computing power than a computer with a quadrillion FLOPS (i.e. 1,000,000,000,000,000 numerical calculations per second, using 32-bit numbers).
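For concreteness, here is the kind of back-of-envelope arithmetic behind such estimates (a minimal sketch; the neuron, synapse, and firing-rate figures below are illustrative assumptions, and published estimates vary by orders of magnitude):

```python
# Back-of-envelope estimate of the brain's raw compute.
# All three numbers are assumptions for illustration; real estimates
# of synapse counts and firing rates vary by orders of magnitude.
neurons = 86e9             # roughly 86 billion neurons
synapses_per_neuron = 1e3  # assumed order-of-magnitude synapse count
avg_firing_rate_hz = 10    # assumed average sustained firing rate

# Treat each synaptic event as roughly one numerical operation.
ops_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"{ops_per_second:.1e} ops/sec")  # 8.6e+14, i.e. ~a quadrillion
```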
With this much computing power, we’re still very bad at mental math, and can’t do 3141593 + 2718282 in our heads. Even with a lot of practice, we still struggle and get it wrong. This is because evolution did not prioritize mental math, so our attempts at “simulating the addition algorithm” are astronomically inefficient.
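To make vivid what we are simulating so inefficiently, here is the grade-school addition algorithm for that sum; a computer executes it trivially, while we must hold every carry in working memory (a sketch for illustration; the function name is my own):

```python
def add_by_digits(a: str, b: str) -> str:
    """Grade-school addition: sum digit pairs right to left, carrying."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal length
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))  # the digit we "write down"
        carry = total // 10             # the carry we hold in our head
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_digits("3141593", "2718282"))  # 5859875
```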
Likewise, I argue that evolution did not prioritize engineering ability either. How good a prehistoric spear you make depends on your trial and error with rock-chipping techniques, not on whether your engineering ability could design a rocket ship. Tools were very useful back then, but a tool only needed to be invented once and could be copied afterwards. An individual very smart at inventing tools might accomplish nothing if all practical prehistoric tools of the era were already invented. There isn’t very much selection pressure for engineering ability.
Maybe humans are actually as inefficient at engineering as we are at mental math. We just don’t know it, because all the other animals around are even worse at engineering than we are. Maybe it turns out the laws of physics and mechanics are extremely easygoing, such that even awful engineers like humans can eventually build industry and technology. My guess is that human engineering is not quite as inefficient as our mental math, but it’s still quite inefficient.
Learning wisdom
Oh thank you for pointing out that wisdom can be learned through other people’s decisions. That is a very good point.
I agree the AGI might have advantages and disadvantages here. The advantage is, as you say, it can think much longer.
The disadvantage is that you still need a decent amount of intuitive wisdom deep down, in order to acquire learned wisdom from other people’s experiences.
What I mean is, learning about other people’s experiences doesn’t always produce wisdom. My guess is there are notorious sampling biases in what experiences other people share: people mostly spread the most interesting stories, the ones where something unexpected happens.
Humans also tend to spread stories which confirm their beliefs (political beliefs, beliefs about themselves, etc.), avoid spreading stories which contradict their beliefs, and unconsciously twist or omit important details. People who unknowingly fall into echo chambers might feel like they’re building up “wisdom” from other people’s experiences, but still end up with a completely wrong model of the world.
I think the process of gaining wisdom from observing others levels off eventually. If someone not very wise spent decades learning about others’ stories, he or she might end up one standard deviation wiser, but not far wiser, and might not be wiser about new, unfamiliar questions. Lots of people know everything about history, business history, etc., but still lack the wisdom to realize AI risk is worth working on.
Thinking a lot longer might not lead to a very big advantage.
I applaud floating ideas like this.
Of course I don’t know any of this for sure :/
Sorry for long reply I got carried away :)