Charitable answers:
That’s “laymen relative to understanding general AI considerations”; you could still be e.g. a philosopher or narrow-AI-focused researcher whose opinion was relevant, but who didn’t have the background knowledge to catch this particular bit.
At the time that paper was published, MIRI was still focused more on outreach than on research, and getting laymen interested in the field so that they could eventually become non-laymen was important. Also, the article the “dopamine drip” example comes from appeared in the New Yorker, so it was obviously aimed at a popular audience.
Less charitable answers:
At least some of us were also laymen at the time some of those articles were written, and didn’t have enough knowledge to realize that this argument was kinda silly from a more sophisticated perspective. I don’t want to imply this of anyone else, since I don’t know what was going on in their heads, but I did personally read drafts of e.g. IE&ME and had a chance to comment on them, yet didn’t catch this bit. And I’m pretty sure that my failure to catch it wasn’t the result of a conscious “well, this is a little off, but it’s okay as an argument for laymen” calculation.
Thanks!
Option 3: most human beings would (at best) drug inconvenient people into submission if they had the power, and the ones talking as if we had a known way to avoid this are the ones who look naive.