4chan would have entire threads devoted to building worse hells. Yes. Seriously. They really would. And then they would instantiate those hells.
They really would at that. It seems you are concerned here specifically about actual malicious trolls. I suppose if the technology and knowledge were disseminated to that degree (before something actually foomed), then that would be the most important threat. My first thoughts had gone towards researchers with the capability and interest to pursue this kind of technology themselves, who are merely callous and indifferent to the suffering of their simulated conscious ‘guinea pigs’.
So if you ever have an insight that constitutes incremental progress toward being able to run lots of small, stupid, suffering conscious agents on a home computer
At what level of formalization does this kind of ‘incremental progress’ start to count? I ask because your philosophical essays on reductionism, consciousness, and zombies seem to be incremental progress towards that end (though I certainly wouldn’t consider them a mistake to publish or a net risk).
What is the suffering of a few in the face of Science? Pain is all relative, as is eternity. We’ve done far worse. I’m sure we have.
(I’m not a huge fan of SCP in general, but I like a few stories with the “infohazard” tag, and I’m amused by how LW-ish those can get.)
Eliezer could argue that the incremental progress towards stopping the risk outweighs the danger, same as with the general FAI/uFAI secrecy debate.
I think EY vastly overrates security through obscurity. Szilard keeping results about graphite and neutrons secret happened before the Internet; now there’s this thing called the Streisand effect.
Related.
I can’t find the quote on that page. Is it from somewhere else (or an earlier version) or am I missing something?
White text. (Apparently there are a few more hidden features in the entry, but I only found this one.)
I, um, still can’t find it. This white text is on the page you linked to, yes? About the videos that are probably soultraps?
EDIT: Nevermind, got it.
Ah, thanks.