1-line summary: if the good guys delay their projects to make them safer, the bad guys are more likely to win.
The video’s “abstract”:
It is commonly thought that caution in the initial development of machine intelligence is associated with better outcomes—and that things like extensive testing, sandboxes, and provable correctness will help to produce safe and beneficial synthetic intelligent agents.
In this video, I cast doubt on that idea, by exhibiting a model in which delays caused by caution can lead to much poorer outcomes.
Some more grandmas dying would be “acceptable” damage. However, that isn’t the problem. The problem is this: The risks of caution.
LW’s own rwallace wrote on the subject a while back.
Good video^^
On a side note: too bad EY doesn’t concentrate on making more videos. LW stuff would be so much more popular that way. People will watch a video before they’ll read a lot of text.
Am I the only one who is much more willing to read text than watch a video?
No, I also prefer text, and rarely watch youtube links when they’re given here.
Videos can be worth it when they add good visual explanations. But good visual explanations can also be added to text.
Given a choice among text, audio over slideshow, pure audio, and video of talking head with chalk or marker; the video is at the bottom of my list.
No. And I’ve read interesting arguments to the effect that the cognitive habits of text are critical for helping people think in a logically coherent fashion.
Low resolution video appears to be good for public relations work targeting masses of people prevented by poverty from cultivating their cognitive resources, but it does not appear to be good for spelling out solid and cogent reasoning.
The idea that video leads to less logically coherent thought is somewhat testable—are the comments on TED videos less coherent than the comments the same posters write in response to text?
TLDR: argument via XKCD :-)
Part of the author’s argument is simply that TV causes people to become mentally passive (alpha-wave brain states, etc.), but another aspect of the argument is what kind of content optimizes impact given the medium. He argues that TV works differently even from movies, in part because TV simply has such low resolution, and so it mostly shows close-ups of faces experiencing extreme emotions, slow-motion replays of human bodies colliding, and dancing cartoon squirrels, because those are what the medium does best.
A movie can give you a landscape or other complex scene and have it mean something. A book can cover nearly anything (including mental states), but only via low-bitrate descriptive text, generally delivering a linearized stream of implicitly tree-structured arguments or a narrative.
When choosing a publication venue, the form of the medium determines the competitive environment and the cognitive skills that can safely be assumed of the audience. There may be outliers like UCTV, but the central tendency reveals the medium’s strengths.
The place to look to test the author’s thesis (as opposed to the derivative claim about the value of video for this community) would be to compare the memetic complexity, themes, and “rationality” in top youtube videos, versus highest grossing movies, versus best sellers.
I could easily imagine it being helpful for aspiring rationalists to express themselves and argue in more than one medium simultaneously, so that their ideas have to survive in multiple contexts, which should not, in theory, change the “reality correspondence” of their thinking…
And good uses for low res video could probably be found by anyone trying to consciously game the medium in light of analysis of the medium...
...but “in general, for society, as a medium” I would guess that low res video isn’t particularly conducive to rationality.
I agree about the general low quality of youtube comments, but occasionally I’ll see a special-interest video with intelligent comments. The low quality may be a result of youtube being popular with the general public (blogs have specific audiences, youtube is for everyone), combined with a founder effect, so that people who want to make intelligent comments generally put them elsewhere.
It seems to me that another test case is audio books vs books in text.
I’d rather see tests of how well people take in argument offered in text vs sound, and some attention to whether there are different subgroups.
No.
No.
There are downsides to being popular. A significant one is creating fans that don’t actually understand what you’re saying very well, and then go around giving a bad impression of you.
Having a moderate number of smart fans would be way better than having lots of silly fans. I’m a bit fearful of what kind of crowd a large number of easy-to-digest videos would attract...
It may depend on what the videos are like. They don’t have to be simplified versions of the writing—some people either take in information more easily if they hear it, or it’s more convenient for them to listen whether they’re driving or doing chores or whatever instead of reading.
They do now have a YouTube channel.
I disagree.
I dislike watching videos, as they are synchronous (i.e., require a set amount of time to watch, which is generally more than it would take to read the same material) and not random access (i.e., I cannot easily skim them for a certain section).
Agreed thoroughly. They also demand all of my attention at once, and if I want to pause to do something else, it’s harder to find my place and catch up again (I can’t just glance up a couple of sentences). Plus they require fiddly mouse controls and are relatively resource-intensive, neither of which is any fun on a netbook.
I should add that Max More has recently written about this in more depth—in The Perils of Precaution.
I agree that that risk exists as well, but much of SIAI’s efforts revolve around increasing discussion of the risks of AGI, not just holding back their own efforts. Slowing down other efforts through awareness of the dangers is a factor that should be considered.
Also, discussions of caution may increase the number of “desirable organizations” working to develop AI. In terms of your model, such discussion could turn a black-hat organization into a smiley-faced one. No one is going to release an AI that they actually think is going to wipe out humanity. What’s more, not every well-intentioned organization would be one we want building AGI. While certain organizations are more likely to be scrupulous in their development, the risk of well-intentioned error is probably the largest one.
In addition, one should consider the extent to which Friendliness can be developed in parallel with AGI, not just something added on at the end of the process. If we assume that no one is currently close to AGI (a fair belief, I think), then now is a fantastic time to help support the development of that theory. If FAI can be developed before anyone can implement AGI, then humanity is in good shape. If it’s easy to add FAI to a project, or if knowing about workable FAI would not help a group with the problem of AGI, then the solution can be released widely for anyone to incorporate into their project. SIAI’s goal is not to be the ones to implement the first superintelligence, but just to make sure that the first one is Friendly.
Not terribly long ago, that wasn’t true:
“The Singularity Institute was founded on the theory that in order to get a Friendly artificial intelligence, someone has got to build one. So, we’re just going to have an organization whose mission is: build a Friendly AI. That’s us.”
http://www.acceleratingfuture.com/people-blog/?p=196
Has there been a memo?
That seems like the (dubious) “engineers are incompetent and a bug takes over the world” scenario.
I think a much more obvious concern is the “engineers successfully build the machine to do what it is told” scenario: the machine helps its builders and sponsors, but all the other humans in the world, not so much.