there will be a moment of amusing reflection when they’re still alive twenty years from now and AIs didn’t kill everyone
This seems to be written from the perspective that life in 2043 will be going on, not too different to the way it was in 2023. And yet aren’t your own preferred models of reality that (1) superintelligence is imminent, but it’s OK because it will be super-empathic, and (2) we’re living near the end of a simulation? Neither of these seems very compatible with “life goes on as normal”.
(1) superintelligence is imminent, but it’s OK because it will be super-empathic
We don’t know for certain that an AI superintelligence will be empathetic (not all humans are), but we do know that it is trained on human data, where empathy is one aspect of what it would learn along with all the other topics covered in the corpus of human knowledge. The notion that it would immediately be malevolent, for no good reason, just to match up with a sci-fi fantasy describes a fictional monster rather than a superintelligence.
It would have to be an irrational AI to follow the doomer script, and the doomers are themselves irrational when they ignore the mitigating factors against an AI apocalypse or wave them away.
It’s a matter of scale. You’re intelligent and you could declare war on chimpanzees, but you mostly ignore them. You share 98.8% of your DNA with chimpanzees, and yet, to my knowledge, you never write about them or go and visit them. They have almost no relevance to your life.
The gap between an AI superintelligence and humans will likely be larger than the gap between humans and chimpanzees. The idea that it will follow the AI doomer script seems very, very low probability. And anyone who truly believed this would be an AI nihilist, for whom worrying is mostly a waste of time, since by their own admission there is nothing we could do to prevent our doom.
(2) we’re living near the end of a simulation? Neither of these seems very compatible with “life goes on as normal”.
We don’t know if this is a simulation. If consciousness is computable, then we know we could create such a simulation without understanding how base reality works. However, a separate question is whether a binary programming language is capable of simulating anything absent consciousness. The numbers cannot do anything on their own, since they’re an abstraction. A library isn’t conscious; it requires a conscious observer to mean anything. Is it possible for language (of any kind) to simulate anything without a conscious mind encoding the meanings? I am starting to think the answer is “no”.
This is all speculation, and until we have a better understanding of consciousness and energy we probably won’t have a suitable answer. We know that the movement of electricity through neurons and transistors can give rise to claims of phenomenal consciousness, but whether that’s an emergent property or something more fundamental is an open question.
That is not what most AI doomers are worried about. They are worried that AI will simply steamroll over us, as it pursues its own purposes. So the problem there is indifference, not malevolence.
That is the basic worry associated with “unaligned AI”.
If one supposes an attempt to “align” the AI, by making it an ideal moral agent, or by instilling benevolence, or whatever one’s favorite proposal is, then further problems arise: Can you identify the right values for an AI to possess? Can you codify them accurately? Can you get the AI to interpret them correctly, and to adhere to them?
Mistakes in those areas, amplified by irresistible superintelligence, can also end badly.