I think humanity shouldn’t work on uploading either, because it comes with very large risks that Sam Hughes summarized as “data can’t defend itself”. Biological cognitive enhancement is a much better option.
To save other people the trouble, I’ll note here that I managed to figure out that “data can’t defend itself” is a line in Sam’s novel Ra, and got Bing Chat (GPT-4) to explain its meaning:

> The phrase “data can’t defend itself” is said by Adam King, one of the characters in the web page, which is a part of a science fiction novel called Ra. King is arguing with his daughter Natalie, who wants to send the human race into a virtual reality inside Ra, a powerful artificial intelligence that is consuming the Earth. King believes that reality is the only thing that matters, and that data, or the information that represents the human minds, is vulnerable to manipulation, corruption, or destruction by Ra or other forces. He thinks that by giving up their physical existence, the humans are surrendering themselves to a fate worse than death. He is opposed to Natalie’s plan, which he sees as a betrayal of the real world that he fought to defend.
This does not clearly parallel the proposed plan of creating uploads before other forms of AI, so I guess cousin_it is referring to the general vulnerability of uploads to abuse?
https://qntm.org/lena
Though see also the author’s essay “Lena” isn’t about uploading.
Yeah. If you’re an upload, the server owner’s power over you is absolute. There’s no precedent for this kind of power in reality, and I don’t think we should bring it into existence.
Other fictional examples are the White Christmas episode of Black Mirror, where an upload gets tortured while being run at high speed, so that in a minute many years of bad stuff have happened and can’t be undone; and Iain Banks’ Surface Detail, where religious groups run simulated hells for people they don’t like, and this large-scale atrocity can be undetectable from outside.
(Severe plot spoilers for Ra.)
It’s even less apt than that, because in the narrative universe, the human race is fighting a rearguard action against uploaded humans who have decisively won the war against non-uploaded humanity.
In-universe King is an unreliable, actively manipulative narrator, but even in that context, his concern is that his uploaded faction will be defenseless against the stronger uploaded faction once everyone is uploaded. (Not that they were well-defended in the counterfactual, since, well, they had just finished losing the war.)

I am curious how cousin_it has a different interpretation of that line in its context.
Please remove the spoilers or use spoiler text?
Apparently the forum’s markdown implementation does not support spoilers (and I can’t find it in the WYSIWYG editor either).
I’m sympathetic to spoiler concerns in general. But given that the medium doesn’t allow hiding them, that the context has focused on analysis rather than appreciation, and that major related points have already been spoiled upthread, I think the benefits of leaving it here outweigh the downsides.
I’ve added a warning at the top, and put in spoiler markdown in case the forum upgrades its parsing.
It should support spoilers:

>! My spoiler
Here’s the editor guide section for spoilers. (Note that I tested the instructions for markdown, and that does indeed seem broken in a weird way; the WYSIWYG spoilers still work normally but only support “block” spoilers; you can’t do it for partial bits of lines.)
In this case I think a warning at the top of the comment is sufficient, given the context of the rest of the thread, so it’s up to you whether you want to try to reformat your comment around our technical limitations.
Given that uploads may be able to think faster than regular humans, make copies of themselves to save on cost of learning, more easily alter their brains, etc., I think it’s more likely that regular humans will be unable to effectively defend themselves if a conflict arises.
It is tricky, but there might be some ways for data to defend itself.
Sure, in theory you could use cryptography to protect uploads from tampering, at the cost of slowdown by a factor of N. But in practice the economic advantages of running uploads more cheaply, in centralized server farms that’ll claim to be secure, will outweigh that. And then (again, in practice, as opposed to theory) it’ll be about as secure as people’s personal data and credit card numbers today: there’ll be regular large-scale leaks and they’ll be swept under the rug.
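To make the “cryptography to protect uploads” point concrete, here is a minimal sketch of the kind of primitive involved: Paillier encryption, an additively homomorphic scheme where an untrusted server can combine ciphertexts without ever seeing the plaintexts. This is my own illustration rather than anything proposed in the thread, and it uses insecure toy parameters; protecting a whole running upload would require fully homomorphic encryption, which supports arbitrary computation but is far slower still.

```python
# Toy Paillier cryptosystem (Python 3.8+): the holder of the ciphertexts can
# add encrypted values together without being able to read them.
# Demo-sized primes only -- utterly insecure, for illustration.
from math import gcd
import random

# Key generation. A real deployment would use ~2048-bit random primes.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                      # standard choice; simplifies decryption
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                           # modular inverse; valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # L(x) = (x - 1) // n, then multiply by mu mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(c1: int, c2: int) -> int:
    # Homomorphic addition: performed on ciphertexts by an untrusted party.
    return (c1 * c2) % n2

c = add_encrypted(encrypt(20), encrypt(22))
assert decrypt(c) == 42
```

Even in this toy, each homomorphic addition costs big-integer modular exponentiations where the plaintext operation was a single machine add, and the scheme only supports addition. That gap is the “slowdown by a factor of N”: evaluating an arbitrary program under encryption, let alone a brain emulation, is vastly more expensive than running it in the clear.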
To be honest, these points seem so obvious that MIRI’s support of uploading makes me more skeptical of MIRI. The correct position is the one described by Frank Herbert: don’t put intelligence in computers, full stop.
I generally feel that biological intelligence augmentation, or a biosingularity, is by far the best option, and one can hope that such enhanced individuals would realize the need to forestall AI in all realistic futures.
With biology, there is life and love. Without biology, there is nothing.