I agree that engineering and inventing do not look like AI's strong suit right now. Today's best generative AI models seem good mainly at memorization, repetitive tasks, and making words rhyme. They seem equally poor at engineering and at wisdom. But it's possible this will change in the future.
Same time
I still think the first AGI won't exceed humans at engineering and at wisdom at the exact same time. From first principles, there's no strong reason for those two thresholds to be crossed simultaneously (unless capability arrives in one sudden jump).
Engineering vs. mental math analogy
Yes, engineering is a lot more than just math. I was trying to say that engineering was “analogous” to mental math.
The analogy is this: humans are bad at mental math because evolution never prioritized making us good at it, since prehistoric humans didn't need to add large numbers.
The human brain has tens of billions of neurons, each able to fire up to a hundred times a second. Some estimates put the brain's computing power above a quadrillion FLOPS (i.e. 1,000,000,000,000,000 numerical calculations per second on 32-bit numbers).
With this much computing power, we’re still very bad at mental math, and can’t do 3141593 + 2718282 in our heads. Even with a lot of practice, we still struggle and get it wrong. This is because evolution did not prioritize mental math, so our attempts at “simulating the addition algorithm” are astronomically inefficient.
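To make the back-of-envelope arithmetic concrete, here's a rough sketch; the neuron count, synapse count, and average firing rate are round-number assumptions I'm plugging in, not precise measurements:

```python
# Back-of-envelope estimate of the brain's raw compute,
# using round-number assumptions (not precise measurements).
neurons = 8.6e10             # ~86 billion neurons (common textbook figure)
synapses_per_neuron = 1e4    # ~10,000 synapses per neuron (rough assumption)
avg_firing_rate_hz = 1.0     # ~1 spike/s on average (assumed; peak is ~100 Hz)

# Treat each synaptic event as roughly one numerical operation.
ops_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"brain: ~{ops_per_second:.0e} ops/s")  # ~9e+14, about a quadrillion

# Meanwhile, the sum we struggle with mentally is a single operation:
print(3141593 + 2718282)  # 5859875
```

On those (very rough) assumptions, the hardware has around 10^15 operations per second available, yet the algorithm it runs for a seven-digit sum is slow and error-prone.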
Likewise, I argue that evolution did not prioritize engineering ability either. How good a prehistoric spear you make depends on trial and error with rock-chipping techniques, not on whether you have the kind of engineering ability that could design a rocket ship. Tools were very useful back then, but a tool only needed to be invented once and could be copied afterwards. An individual brilliant at inventing tools might accomplish nothing extra if every practical tool of the era had already been invented. So there wasn't much selection pressure for engineering ability.
Maybe humans are actually as inefficient at engineering as we are at mental math, and we just don't know it, because all the other animals around are even worse at engineering than we are. Maybe the laws of physics and mechanics turn out to be extremely forgiving, such that even awful engineers like humans can eventually build industry and technology. My guess is that human engineering is not quite as inefficient as our mental math, but it's still quite inefficient.
Learning wisdom
Oh, thank you for pointing out that wisdom can be learned from other people's decisions. That is a very good point.
I agree the AGI might have advantages and disadvantages here. The advantage is, as you say, it can think much longer.
The disadvantage is that you still need a decent amount of intuitive wisdom deep down in order to acquire learned wisdom from other people's experiences.
What I mean is, learning about other people's experiences doesn't always produce wisdom. My guess is there are serious sampling biases in which experiences people share: people mostly spread the most interesting stories, the ones where something unexpected happened.
Humans also tend to spread stories which confirm their beliefs (political beliefs, beliefs about themselves, etc.), avoid spreading stories which contradict their beliefs, and unconsciously twist or omit important details. People who unknowingly fall into echo chambers might feel like they’re building up “wisdom” from other people’s experiences, but still end up with a completely wrong model of the world.
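To illustrate the sampling-bias point, here's a toy simulation (every number in it is made up purely for illustration): suppose ventures succeed 10% of the time, but successes get shared far more often than failures, so an observer learning only from shared stories ends up with a badly skewed picture.

```python
import random

random.seed(0)

# Toy model of story-sharing bias; every number here is a made-up assumption.
TRUE_SUCCESS_RATE = 0.10  # 10% of ventures actually succeed
P_SHARE_SUCCESS = 0.90    # successes make good stories, so they get shared
P_SHARE_FAILURE = 0.10    # failures are quietly forgotten

shared_stories = []
for _ in range(100_000):
    succeeded = random.random() < TRUE_SUCCESS_RATE
    p_share = P_SHARE_SUCCESS if succeeded else P_SHARE_FAILURE
    if random.random() < p_share:
        shared_stories.append(succeeded)

observed = sum(shared_stories) / len(shared_stories)
print(f"true success rate:          {TRUE_SUCCESS_RATE:.0%}")
print(f"success rate among stories: {observed:.0%}")  # roughly 50%
```

A listener who naively averages the stories they hear would conclude that ventures succeed about half the time, five times the true rate, while feeling like they're accumulating wisdom.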
I also think the process of gaining wisdom by observing others levels off eventually. If someone not very wise spent decades learning from others' stories, he or she might end up one standard deviation wiser, but not far wiser, and might not be wiser at all about new, unfamiliar questions. Lots of people know everything about history, business history, etc., but still lack the wisdom to realize AI risk is worth working on.
So thinking a lot longer might not translate into a very big advantage.
Of course I don’t know any of this for sure :/
Sorry for the long reply, I got carried away :)