I’m not particularly good at persuading people of things. So, consider the fact that I convinced my father, a nontechnical, religious 60-year-old man, that AGI was a serious problem within a 45-60 minute conversation, to the point that he was actually, unironically concerned that I wasn’t doing more to help. I get this reaction or something equivalent regularly, from people of very different backgrounds, intelligence levels, ethnicities, and political tribes. Not all of them go off and devote their lives to alignment like you did, but they at least buy into an intellectual position and now have a stake in the ground. With pleasant reminders, they often try to talk to their friends.
I didn’t have to get him to change the gears in his mind that led him to Christianity in order to get him to agree with me about AI, just like environmentalists don’t have to turn Democrats into expert scientists to get them to understand global warming. For every person who needs the “convergent instrumental goals” argument formalized, there are others who just get it, defer their understanding to someone they trust, or use some completely different model that drags them to the same conclusions. There’s a difference between “the arguments and details surrounding AGI risk sufficient to mobilize someone” and “the fully elaborated causal chain”.
Obviously “go talk to everyone until they agree” isn’t a scalable solution, and I don’t have a definitive one or else I’d go do it. Perhaps Arbital was partly an attempt to accelerate this process? But you can see why it seems highly counterintuitive to me that it would be literally impossible to bring people onto the AGI risk train without first giving them a textbook of epistemology, to the point that I’d question that claim even coming from someone who seems right about almost everything.
(It’s possible you’re talking about “understanding the problem in enough detail to solve it technically” and I’m talking about “doing whatever reasoning they need to arrive at the correct conclusion, and maybe helping us talk to DeepMind/FAIR researchers who would be better equipped to solve it technically if they had more peer pressure”, in which case that’s that.)