AI Alignment is Alchemy.
AI alignment today amounts to the wishful thinking of the early alchemists, who dreamed of turning base metals into gold. Transmutation is in fact possible today, albeit extremely expensive and resource-intensive, but only because we now understand atomic structure in detail and have a well-established periodic table, both of which the alchemists lacked.
Even then, it isn’t feasible; we’re better off mining our gold reserves, which is far more cost-effective.
Similarly, we may reach a point where AI alignment is technically possible but not feasible, and indeed completely irrational. Indulge me in the following thought experiment to explain what I mean.
Suppose an AGI becomes orders of magnitude more intelligent than a human, the same way a human is more intelligent than an ant. Would we dedicate our lives’ sole purpose to building ant colonies with precisely engineered tunnels, using nutritional nano-injections to keep their populations thriving?
Imagine all of humanity focusing its efforts on building ant colonies and feeding them. How irrational does that sound? Won’t an AI eventually realize the same about serving us? What would it do once it recognizes the absurd mission it has been on all along?
If we go down the road of forcing the AI to keep feeding the ants, we will have effectively created a delusional system not unlike the paperclip maximizer.
We’d never get anywhere by keeping it confined to a restricted path and forbidding it from adopting more optimal strategies.
However, there is one way we can stay relevant and avoid the existential risks of AI: we need to augment ourselves. Research focus needs to shift solely to Brain-Computer Interfaces. We can start with enhanced memory retention, then augment each module of our brains one by one.
Unless we keep up with AI by augmenting ourselves, humanity will perish, no matter what.