A couple of quotes on my mind these days....
https://www.lesswrong.com/posts/Z263n4TXJimKn6A8Z/three-worlds-decide-5-8
“My lord,” the Ship’s Confessor said, “suppose the laws of physics in our universe had been such that the ancient Greeks could invent the equivalent of nuclear weapons from materials just lying around. Imagine the laws of physics had permitted a way to destroy whole countries with no more difficulty than mixing gunpowder. History would have looked quite different, would it not?”
Akon nodded, puzzled. “Well, yes,” Akon said. “It would have been shorter.”
“Aren’t we lucky that physics _didn’t_ happen to turn out that way, my lord? That in our own time, the laws of physics _don’t_ permit cheap, irresistible superweapons?”
Akon furrowed his brow -
“But my lord,” said the Ship’s Confessor, “do we really know what we _think_ we know? What _different_ evidence would we see, if things were otherwise? After all—if _you_ happened to be a physicist, and _you_ happened to notice an easy way to wreak enormous destruction using off-the-shelf hardware—would _you_ run out and tell you?”
https://www.lesswrong.com/posts/sKRts4bY7Fo9fXnmQ/a-conversation-about-progress-and-safety
LUCA: … But if the wrong person gets their hands on it, or if it’s a super-decentralized technology where anybody can do anything and the offense/defense balance isn’t clear, then you can really screw things up. I think that’s why it becomes a harder issue. It becomes even harder when these technologies are super general purpose, which makes them really difficult to stop or not get distributed or embedded. If you think of all the potential upsides you could have from AI, but also all the potential downsides you could have if just one person uses it for a really bad thing—that seems really difficult. …
Damn, this suggests that all those people who said “the human mind is magical; a machine cannot think because it wouldn’t have a soul or quantum magic” were actually trying to protect us from the AI apocalypse. We were too stupid to understand, and too arrogant to defer to the wisdom of the crowd. And now we are doomed.
galaxy brain take XD