Where is the value in convincing people who won't be useful for "our cause" afterwards anyway (we're talking fAGI, I assume)?
It's called "getting the vote and then passing the bills". We get people to vote for us based on what they think they heard us say; then we pass bills based on what we actually said. Dropping the metaphor: we get them to actually listen to what we have to say by making a great first impression. Once we have them captivated, we start showing them the fine print, most notably the bits about how much they suck, how they need to change, and how, if they listen to us, everything in their lives will get better and become more spiritually and emotionally fulfilling and awesome. Parties do this. Religions do this. Universities do this. Parents do this. Lovers do this. Why should we be any different? Once we get to the point where we can persuade them that talking about rationality in clown suits is a perfectly reasonable idea, the rest is pretty much done.
Um. Because we want to be different from political parties and religions?
How about the other three? And we don't just want to be different, we want to change them. Rationalists should win, even if our winning conditions might not be what our adversaries expect.
I suspect we have very different conceptions of how the future is going to pan out, in terms of the role that public perception and acceptance of AGI will play.
I understand your point: lure 'em in with happy talk, then bash 'em with a rationality course. ("Excuse me, Miss, how would you like a free rationality test?") However, I simply don't think we can positively prepare a sizable proportion of the public (let alone the global public) for the arrival of AGI simply by teaching rationality skills. I believe our idea of the future will just continue to compete with countless other worldviews in the public memesphere, without ever becoming truly mainstream until it is "too late" and we face something akin to a hard takeoff.
I don't really think that we can (or need to) reach a public consensus for the successful takeoff of fAGI. Quite the contrary: I actually worry that carrying our view into the mainstream will have adverse effects, especially once people realize that we aren't some kind of technophile crackpot religion, but that the futuristic picture we paint is actually possible and not at all unlikely to happen. When push comes to shove, I prefer facing apathy over antagonism; and since AGI could spring into existence very rapidly and take everyone apart from "those in the know" by surprise, I would hate to lose that element of surprise to our potentially numerous "enemies".
Now, of course, I don't know which path will yield the best result: confronting the public hard and fast, or keeping a low profile. I suspect this may become one of the few hot-button topics where our community will have widely diverging opinions, because we simply lack a way to accurately model how people will behave upon encountering the potential threat of AGI (especially this far in advance). Just remember that the world doesn't consist entirely of the US, and that fAGI will impact everyone. I think it likely that we will face serious violence once our vision of the future becomes better known and gains additional credibility through exponential improvements in advanced technologies. There are players on this planet who will not be happy to see an AGI come out of America, or for that matter out of Eliezer's or anyone else's garage. (Which is why I'd strongly advocate a semi-covert international effort when it comes to the development of friendly AGI.)
It is incredibly hard to predict people's future behavior, but on a gut level I absolutely favor an international, semi-stealthy approach. It seems to be by far the safest course to take. Once the concepts of the singularity and fAGI gain traction in the spheres of science and perhaps even politics (maybe in a decade or two), I would hope that minds in AI and AGI from all over the world join an international initiative to develop this sucker together (think CERN). To be honest, I can't think of any other approach to developing the later stages of AGI that doesn't look doomed from the start; not doomed as in technically unfeasible, but doomed as in major powers thinking: "we're not letting this suspicious organization/country take over the world with their dubious AI". Remember that AGI is potentially far more destructive than any nuclear warhead, and powers not involved in its development may blow a gasket upon realizing the potential danger.
So from my point of view, the public perception and acceptance of AGI is a comparatively negligible factor in the bigger picture. "The people" don't get a say in weapons development, and I predict they won't get a say when it comes to AGI. (And if you ask me, we should be glad they don't.)
PS: When it comes to merely teaching people rationality, however, the way to go is to lobby it into the school curriculum as a full-blown subject. Every other plan for educating the public on "all things rational" pales in comparison in terms of effectiveness. Teaming up with the skeptics and the "new" atheists may be very helpful for this purpose, though of course we should never let ourselves be associated with such "ungodly" worldviews while advertising our rationalist concepts.
Very interesting post overall. Could you refer me to an article about this particular problem? I feel humans should be allowed to choose their collective destiny together, but I don't know whether it's such a bad idea to hide it from them if doing so leads to this outcome. Are we on the way to becoming the new Manhattan Project?
And yes, getting it into the curriculum is great, but first we need to train the teachers, and the teachers' teachers, etc., and develop a pedagogy that works with kids, who at certain ages are infamously unable to make the distinctions we make or assimilate the concepts we assimilate; it would have to be really fine-tuned to be optimal.
I’ve rewritten my comment and posted it as a standalone article. I’ve somewhat improved it, so you may want to read the other one.
I am not aware of any articles on the problem of how we should approach self-improving AGI; I was just hacking my personal thoughts on the matter into the keyboard. If you are talking about the potentially disastrous effects of public rage over AGI, then Hugo de Garis comes to mind, but personally I find his downright apocalyptic scenarios of societal upheaval and wars over AI a bit ridiculous and hyper-pessimistic, given that, as far as I know, he lacks any really substantial arguments to support such a scenario.
EDIT: I have revised my opinion of de Garis's views after watching all parts of his YouTube interview on the following channel: http://www.youtube.com/user/TheRationalFuture. He makes a lot of valid points, and I would advise everyone to take a look to broaden their perspective.