If you’re going to come into an echo chamber of doom and then complain about being censored… well, what did you think they were going to do? It’s like walking into a Mormon ward and arguing with the bishops that Joseph Smith was a fraud.
The true believers are not going to simply convert because you disagree with them. The confirmation bias won’t allow that when they’re in a feedback loop.
They will gently instruct you to be more “intelligent” in your discourse. Of course, if it turns out they’re the “morons” then there will be a moment of amusing reflection when they’re still alive twenty years from now and AIs didn’t kill everyone.
"Damn it! Our fearless leader promised us we'd all be dead by now." ;-)
If and when they don’t die by AI apocalypse they will then have to adhere to a new religion. Maybe aliens coming to take us away? At least that isn’t easily falsifiable by the passage of a couple decades.
Before everyone takes offense and begins writing their Senator: I don't know if they're actually morons, but they love to point out that those with whom they disagree must not be intelligent, rather than entertaining the possibility that they're the idiots in the room. At least as it relates to their propaganda about the existential risk of AI.
Their ad hominem attacks come shrouded in all the window dressing of a religious zealot: "You might not get so many downvotes if you stopped saying you disagree with our religious leader and instead reworded it to be so vague that we have no idea what you're trying to say, so that we can all just get along. When you say our leader is full of $%^* it makes us sad and we're forced to shun you."
I'm translating so that we're all on the same page. =-)
I enjoy some of their rhetoric, in the same way I enjoy sci-fi stories. However, a dilettante shouldn't fall in love with their own creative storytelling.
there will be a moment of amusing reflection when they’re still alive twenty years from now and AIs didn’t kill everyone
This seems to be written from the perspective that life in 2043 will be going on, not too different to the way it was in 2023. And yet aren’t your own preferred models of reality (1) superintelligence is imminent, but it’s OK because it will be super-empathic (2) we’re living near the end of a simulation? Neither of these seems very compatible with “life goes on as normal”.
(1) superintelligence is imminent, but it’s OK because it will be super-empathic
We don't know for certain whether an AI superintelligence will be empathetic (not all humans are), but we do know that it's trained on human data, where empathy is one aspect of what it would learn along with every other topic covered in the corpus of human knowledge. The notion that it will be immediately malevolent, for no good reason beyond matching a sci-fi fantasy, describes a fictional monster rather than a superintelligence.
It would have to be an irrational AI to follow the Doomer script, written by people who are themselves irrational when they ignore, or hand-wave away, the mitigating factors against an AI apocalypse.
It's a scale. You're intelligent and you could declare war on chimpanzees, but you mostly ignore them. You share 98.8% of your DNA with chimpanzees, and yet, to my knowledge, you never write about them or go and visit them. They have almost no relevance to your life.
The gap between an AI superintelligence and humans will likely be larger than the gap between humans and chimpanzees. The idea that it will follow the AI doomer script seems to be a very, very low probability. And anyone who truly believed it would be an AI nihilist, for whom worrying would be mostly a waste of time, since by their own admission there is nothing we could do to prevent our doom.
(2) we're living near the end of a simulation? Neither of these seems very compatible with "life goes on as normal".
We don't know if this is a simulation. If consciousness is computable, then we know we could create such a simulation without understanding how base reality works. However, a separate question is whether a binary programming language is capable of simulating anything absent consciousness. The numbers cannot do anything on their own, since they're an abstraction. A library isn't conscious; it requires a conscious observer to mean anything. Is it possible for language (of any kind) to simulate anything without a conscious mind encoding the meanings? I am starting to think the answer is "no".
This is all speculation, and until we have a better understanding of consciousness and energy we probably won't have a suitable answer. We know that the movement of electricity through neurons and transistors can give rise to claims of phenomenal consciousness, but whether that's an emergent property or something more fundamental is an open question.
That is not what most AI doomers are worried about. They are worried that AI will simply steamroll over us, as it pursues its own purposes. So the problem there is indifference, not malevolence.
That is the basic worry associated with “unaligned AI”.
If one supposes an attempt to "align" the AI, by making it an ideal moral agent, or by instilling benevolence, or whatever one's favorite proposal is, then further problems arise: can you identify the right values for an AI to possess? Can you codify them accurately? Can you get the AI to interpret them correctly, and to adhere to them?
Mistakes in those areas, amplified by irresistible superintelligence, can also end badly.
I don't think this is the case. For a while, the post with the highest karma was Paul Christiano explaining all the reasons he thinks Yudkowsky is wrong.