Right, alignment advocates really underestimate the degree to which talking about sci-fi-sounding tech is a sticking point for people.
The counter-concern is that if humanity can’t talk about things that sound like sci-fi, then we just die. We’re inventing AGI, whose big core characteristic is ‘a technology that enables future technologies’. We need to somehow become able to start actually talking about AGI.
One strategy would be ‘open with the normal-sounding stuff, then introduce increasingly weird stuff only when people are super bought into the normal stuff’. Some problems with this:
A large chunk of current discussion and research happens in public; if it had to happen in private because it isn’t optimized for looking normal, a lot of it wouldn’t happen at all.
More generally: AGI discourse isn’t an obstacle course or a curriculum, such that we can control the order of ideas and strictly segregate the newbies from the old guard. Blog posts, research papers, social media exchanges, etc. freely circulate among people of all varieties.
It’s a dishonest/manipulative sort of strategy — which makes it ethically questionable, is liable to fuel other trust-degrading behavior in the community, and is liable to drive away people with higher discourse standards.
A lot of the core arguments and hazards have no ‘normal-sounding’ equivalent. To sound normal, you have to skip those considerations altogether, or swap them out for much weaker arguments.
In exchange for attracting more people who are allergic to anything that sounds ‘sci-fi’, you lose people who are happy to speak to the substance of ideas even when they sound weird; and you lose sharp people who can tell that your arguments are relatively weak and PR-spun, but would have joined the conversation if the arguments and reasoning on display had been crisper and more obviously candid.
Another strategy would be ‘keep the field normal now, then turn weird later’. But how do you make a growing research field pivot? What’s the trigger? Why should we expect this to work, as opposed to just permanently diluting the field with false beliefs, dishonest norms, and low-relevance work?
My perception is that a large amount of work to date has gone into trying to soften and spin ideas so that they sound less weird or “sci-fi”; whereas relatively little work has gone into candidly stating beliefs, acknowledging that this stuff is weird, and clearly stating why you think it’s true anyway.
I don’t expect the latter strategy to work in all cases, but I do think it would be an overall better strategy, both in terms of ‘recruiting more of the people likeliest to solve the alignment problem’, and in terms of having fewer toxic effects on norms and trust within the field. Just being able to believe what people say is a very valuable thing in a position like ours.
Fair point, and one worth making in the course of talking about sci-fi-sounding things! I’m not asking anyone to represent their beliefs dishonestly, but rather to introduce them gently. I’m personally not an expert, but I’m not convinced of the viability of nanotech; so if it’s not necessary to the argument (merely sufficient), it seems prudent to stick to more clearly plausible pathways to takeover as demonstrations of sufficiency, while still maintaining that weirder-sounding stuff is something one ought to expect when dealing with something much smarter than you.
If you’re trying to persuade smart programmers who are somewhat wary of sci-fi stuff, and you think nanotech is likely to play a major role in AGI strategy, but you think it isn’t strictly necessary for the current argument you’re making, then my default advice would be:
Be friendly and patient; get curious about the other person’s perspective, and ask questions to try to understand where they’re coming from; and put effort into showing your work and providing indicators that you’re a reasonable sort of person.
Wear your weird beliefs on your sleeve; be open about them, and if you want to acknowledge that they sound weird, feel free to do so. At least mention nanotech, even if you choose not to focus on it because it’s not strictly necessary for the argument at hand, it comes with a larger inferential gap, etc.