I think an important fact for understanding the landscape of opinions on AI is that AI is often taken as a frivolous topic, much like aliens or mind control.
Two questions:
1) Why is this?
2) How should we take it as evidence? For instance, if a certain topic doesn’t feel serious, how likely is it to really be low value? Under what circumstances should I ignore the feeling that something is silly?
Relatedly, Scott Alexander criticizes the forms of popular reporting on dangers from AI. Why does reporting take these forms?
AGI takeoff is an event we as a culture have never seen before, except in popular culture. So, with that in mind, reporters draw on the only good reference points the population has: sci-fi.
What would sane AI reporting look like? Is there a way to talk about AI to people who have only been exposed to the cultural background (if even that) in a way that doesn’t either bore them or look at least as bad as this?
A reasonable analog I can think of is concern about corporations. They are seen as constructed to seek profit alone, thereby destroying social value; they are smarter and more powerful than individual humans, and the humans interacting with them (or even within them) can’t very well control or predict them. We construct them in some sense, but their ultimate properties are often unintentional.
The industrial revolution is some precedent, at least with respect to automation of labor. But it was long ago, and indeed, the possibility of everyone losing their jobs seems to be reported on more seriously than the other possible consequences of artificial intelligence.
Why does reporting need a historical precedent to be done in a sane-looking way?
What we do have in history is hackable minds, which were misused to bring about the Holocaust. That precedent could be one way to improve writing about AI danger.
But to answer question 1): the topic is too broad! (Social hackability is only one possible path to an AI superpower takeoff.)
For example, some topics are still missing (and probably will remain missing) from the book:
a) How to prepare psychological training for human-AI communication (or for reading this book :P )
b) AI’s impact on religion
etc.
What topic are you comparing it with?
When you specify that, I think the relevant question is: does the topic have an equivalent of a Terminator franchise?
War is taken fairly seriously in reporting, though there are a wide variety of war-related movies in different styles.
OK, but war happens in real life. For most people, the only time they hear of AI is in Terminator-like movies.
I’d rather compare it to some other technological topic, but which doesn’t have a relevant franchise in popular culture.
To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn’t happened in real life?
Some other technological topics that hadn’t happened in real life when people became concerned about them:
Nuclear weapons: had The World Set Free, though I’m not sure how well known it was (they may have been seen as frivolous by most at first; I’m not sure, but by the time there were serious projects to build them, I think not).
Extreme effects from climate change, e.g. massive sea level rise, freezing of Northern Europe: no particular popular-culture franchise (not very frivolous).
Recombinant DNA technology: the public’s concern was somewhat motivated by The Andromeda Strain (not frivolous, I think).
Evidence seems mixed.
Yes, that was my (tentative) claim.
We would need to know whether these example technologies were seen as frivolous after the idea of them existed, but before the technology started being used.
To a great extent Less Wrong is what happens when somewhat intelligent, but very lazy people have ideas percolate through the coffee machine of their inferior minds and then try to present them as something new. All the crud and mistakes of the original ideas and thinkers gets included, but there’s a delightful new after-taste and hue, from badly maintained filters and old grounds.