A comment published today calls for all thought about technology to be outlawed.
This moratorium would be better than no moratorium. I have respect for everyone who stepped up and upvoted it. It’s an improvement on the margin.
I refrained from upvoting because I think the comment is understating the seriousness of the situation and asking for too little to solve it.
The key issue is not “humans thinking explicitly about technology might be hacked by hypothetical AGIs”. Humans optimizing for any goal whatsoever will, due to convergent instrumental goals, stumble upon thoughts that effectively interact with distant hypothetical AGIs, even if the thoughts aren’t explicitly about AGI.
Many researchers steeped in these issues, including myself, expect that the most likely result of allowing humans to optimize for convergent instrumental goals is that literally everyone on Earth will die. Any realistic regime that allows general optimization in humans, even if it outlaws so-called “thoughts about malign superintelligences”, will still result in humans being hacked and taken over by the actual malign superintelligences that reside in the conditional structure of reality, not in some domain where law-enforcement can easily distinguish malign from non-malign supervening causality.
Here’s what would actually need to be done:
The moratorium on new optimization for goals needs to be indefinite and worldwide. There can be no exceptions, including for going to the store to get milk or starting a company to make paperclips. Vibes, tomfoolery, doodling, lollygagging, and other non-goal-directed behavior can be allowed, but if anyone collects too much agency in one place, an international alliance has to be ready to tease them relentlessly until they stop it and go with the flow. Be less scared of a shouting match between people than of the moratorium being violated; be willing to bully a rogue human with yo-mama jokes.
Found the planecrash reader.
Huh? I haven’t read planecrash.
Huh! Interesting. It seemed like a very planecrash-toned comment.
Maybe I should read planecrash, haha.