The first AGI may be a good engineer but bad strategist
AGI may have an advantage in engineering, but humans may have an advantage in strategy and wisdom.
AGI disadvantage in wisdom
Wisdom and strategy are much harder to evaluate than engineering ability. The only way to evaluate long-term wisdom is to let the agent make a decision, wait years, and see whether the agent’s goals have advanced. Evolution and natural selection had hundreds of thousands of years to optimize human wisdom, and improving wisdom is a high priority for evolution. AGI labs do not have hundreds of thousands of years, so AGI might lack wisdom.
AI algorithms produce black boxes which their creators do not understand, but which somehow work at getting a desired result. Generally speaking, this only works when the desired result can be evaluated. We cannot evaluate an AI’s long-term wisdom (beyond correcting mistakes which fall below the human level).
Human disadvantage in engineering
Evolution and natural selection did not give humans very good mental math ability, because adding large numbers didn’t help our prehistoric ancestors. Likewise, engineering ability only helped our prehistoric ancestors a little bit. Making spears is very helpful for survival, but a spear only needed to be invented once and could be copied afterwards. If you want better spears, having the engineering ability to design a jumbo jet will not help you very much. You’re better off relying on trial and error with your rock-chipping techniques and testing out the spears you make.
Therefore, the first AGI built *might be* very good at engineering, but bad at wisdom and strategy.
Caveat
It’s possible that even if the AGI’s intuitive wisdom and strategy are not superhuman, its actual decisions may be superhuman simply because it can think for much longer about everything it might come to regret, and has less ego-driven overconfidence.
Potential implications
AGI takeover
Even if the first AGI built is poor at wisdom and strategy, that doesn’t mean we’re safe from AI takeover. A second AGI built by the first AGI might be much better at wisdom and strategy, and it might be misaligned due to unwise mistakes by the first AGI.
The first AGI itself isn’t necessarily safe either. Poor wisdom and strategy do not mean you can’t take over the world. If you can engineer self-replicating machines, even a chatbot-like level of strategizing might be enough.
It does mean that AGI control methods have a higher chance of working, contradicting the assumption that control is far less useful than alignment.
If we’re very lucky, a controlled AGI with superhuman engineering may even help us invent alignment ideas.
Self-replicating nanobots
If the AGI is really good at engineering, it may be able to make self-replicating nanobots.
Self-replicating nanobots are dangerous because they can be weaponized, or they can accidentally go out of control and spread in a grey goo scenario.
Hierarchical mutation prevention
My idea is that self-replicating nanobots should never replicate their “DNA,” i.e. their self-replication instructions. Instead, each nanobot can only “download” these instructions from a higher-level nanobot. I’m not sure if this idea is new. I wrote a post on this.
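To make the rule concrete, here is a minimal Python sketch of how such a hierarchy might work. Everything in it is a hypothetical illustration made up for this post (the class, the toy “DNA” string, and the random corruption model), not a real nanobot design:

```python
import random

# Toy model of hierarchical mutation prevention (illustrative only).
# A nanobot never copies its own replication instructions into its offspring.
# Instead, every new bot downloads a fresh copy from a fixed higher-level bot,
# so copy errors are bounded by the depth of the hierarchy rather than
# compounding with every generation of replication.

MASTER_INSTRUCTIONS = "BUILD ARM; BUILD SENSOR; HALT"  # stand-in "DNA"

def noisy_copy(text: str, error_rate: float = 0.01) -> str:
    """Copy with a small per-character corruption chance (a stand-in for mutation)."""
    return "".join("?" if random.random() < error_rate else c for c in text)

class Nanobot:
    def __init__(self, level: int, source: "Nanobot | None"):
        self.level = level
        self.source = source  # the higher-level bot it downloads instructions from
        self.instructions = (MASTER_INSTRUCTIONS if source is None
                             else noisy_copy(source.instructions))

    def replicate(self) -> "Nanobot":
        # Crucial rule: the offspring downloads from this bot's *source*
        # (one level up), never from this bot's own, possibly corrupted copy.
        if self.source is None:
            return Nanobot(level=1, source=self)
        return Nanobot(level=self.level, source=self.source)

root = Nanobot(level=0, source=None)       # holds the master instructions
worker = root.replicate()                  # level-1 bot, downloaded from root
swarm = [worker.replicate() for _ in range(5)]

# Every worker stays exactly one noisy copy away from the master,
# no matter how many replication generations have passed.
for bot in swarm:
    print(bot.level, bot.instructions)
```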
I applaud floating ideas like this.
It’s useful in thinking about the nature of AGI. Ultimately, something that isn’t limited to messy biology and a small skull is going to outpace us pretty quickly in all domains. We might get lucky if an early one fucks up a takeover attempt and that makes us suddenly alert enough to avoid a second one; but that seems moderately unlikely.
I think you’re right that humans have something that could be termed a wisdom advantage, and maybe this is what you meant: we’ve been evolved for millions to billions of years (depending on where the relevant mechanisms started) to avoid things that might get us killed. That could be termed wisdom. AGI is not evolved but designed and trained, so it might have some nasty blind spots. Current AI certainly does.
We have a fine-tuned intuitive sense of danger that prevents us from doing things that could get us killed (at least for the dangers our intuition can grapple with; the Darwin Award for bungee jumping with a cable is an example of something intuition doesn’t handle well). AGI does not.
That gap could be filled with careful logic; as you say, thinking longer and harder can substitute for a lot.
As for engineering, that’s partly based on math, but math isn’t the majority of the job. There’s a lot of reasoning about materials, chemistry, etc., depending on what you’re engineering. It’s systematic thought, but also creative thought. I’m scoring this as roughly a push between early AGI and humans. Currently they can do math very well, but just like we do: by using an external tool. So it’s not really better integrated.
Some of our intuitions for danger also apply to abstract situations like engineering, so we’ve got that advantage.
Again, more thought can substitute for talent.
I don’t think the wisdom advantage holds for the reasons you give. Your central argument for a human advantage in wisdom mostly applies to AGI as well. We don’t learn wisdom from age; we learn it from paying serious attention to the stories and lessons of those who have tried and succeeded or failed.
We do get better at it with age, but that’s only in small part from trying and failing or succeeding at particular strategies ourselves. We have a vast library of failures and successes available in others’ stories. We learn more of them, and get better at using them, as we age. An AGI can do that too, faster in some regards, though missing out in others.
I agree that engineering and inventing do not look like AI’s strong spot, currently. Today’s best generative AI only seems to be good at memorization, repetitive tasks, and making words rhyme. It seems equally poor at engineering and wisdom. But it’s possible this will change in the future.
Same time
I still think that the first AGI won’t come to exceed humans at engineering and at wisdom at the exact same time. From first principles, there aren’t very strong reasons why the two thresholds should be crossed simultaneously (unless progress comes as one sudden jump).
Engineering vs. mental math analogy
Yes, engineering is a lot more than just math. I was trying to say that engineering was “analogous” to mental math.
The analogy is that humans are bad at mental math because evolution did not prioritize making us good at mental math: prehistoric humans didn’t need to add large numbers.
The human brain has tens of billions of neurons, which can fire up to a hundred times a second. Some people estimate the brain has more computing power than a computer with a quadrillion FLOPS (i.e. 1,000,000,000,000,000 numerical calculations per second, using 32-bit numbers).
With this much computing power, we’re still very bad at mental math, and can’t do 3141593 + 2718282 in our heads. Even with a lot of practice, we still struggle and get it wrong. This is because evolution did not prioritize mental math, so our attempts at “simulating the addition algorithm” are astronomically inefficient.
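As a toy illustration of how lopsided that is, here is a small Python sketch. The quadrillion-FLOPS figure is just the rough estimate quoted above, and the 30-second mental-arithmetic time is a made-up ballpark, so treat the output as an order-of-magnitude cartoon rather than a measurement:

```python
# The addition the text asks us to do in our heads: trivial for hardware.
print(3141593 + 2718282)  # -> 5859875, essentially one machine operation

# Hedged estimate of the brain's raw compute from the paragraph above.
brain_flops_estimate = 1e15      # ~a quadrillion operations per second (rough estimate)
mental_add_seconds = 30          # hypothetical time to add two 7-digit numbers mentally

# Nominal operations the brain's hardware could have performed in that time,
# all spent producing (often incorrectly) a single addition.
nominal_ops = brain_flops_estimate * mental_add_seconds
print(f"~{nominal_ops:.0e} nominal ops per one mental addition")
```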
Likewise, I argue that evolution did not prioritize engineering ability either. How good a spear a prehistoric human makes depends on trial and error with rock-chipping techniques, not on whether their engineering ability could design a rocket ship. Tools were very useful back then, but a tool only needed to be invented once and could be copied afterwards. An individual very smart at inventing tools might accomplish nothing if all the practical prehistoric tools of the era were already invented. There isn’t very much selection pressure for engineering ability.
Maybe humans are actually as inefficient at engineering as we are at mental math. We just don’t know it, because all the other animals around are even worse at engineering than us. Maybe it turns out the laws of physics and mechanics are extremely forgiving, such that even awful engineers like humans can eventually build industry and technology. My guess is that human engineering is not quite as inefficient as our mental math, but it’s still quite inefficient.
Learning wisdom
Oh thank you for pointing out that wisdom can be learned through other people’s decisions. That is a very good point.
I agree the AGI might have advantages and disadvantages here. The advantage is, as you say, it can think much longer.
The disadvantage is that you still need a decent amount of intuitive wisdom deep down, in order to acquire learned wisdom from other people’s experiences.
What I mean is, learning about other people’s experiences doesn’t always produce wisdom. My guess is there are notorious sampling biases in which experiences other people share. People only spread the most interesting stories, when something unexpected happens.
Humans also tend to spread stories which confirm their beliefs (political beliefs, beliefs about themselves, etc.), avoid spreading stories which contradict their beliefs, and unconsciously twist or omit important details. People who unknowingly fall into echo chambers might feel like they’re building up “wisdom” from other people’s experiences, but still end up with a completely wrong model of the world.
I think the process of gaining wisdom from observing others actually levels off eventually. If someone not very wise spent decades learning about others’ stories, he or she might end up one standard deviation wiser, but not far wiser, and might not be wiser about new, unfamiliar questions. Lots of people know everything about history, business history, etc., but still lack the wisdom to realize AI risk is worth working on.
Thinking a lot longer might not lead to a very big advantage.
Of course I don’t know any of this for sure :/
Sorry for the long reply, I got carried away :)