I’m 90% confident that the cinematic uncanny valley will be crossed in the next decade. The number applies to movies only; it doesn’t apply to humanoid robots (1%) or video game characters (5%).
Edit: After posting this, I thought that my 90% estimate was underconfident, but then I remembered that we started the decade with Jar-Jar Binks and Gollum, and it took us almost ten years to reach the level of Emily and Jake Sully.
Is there a reason Avatar doesn’t count as crossing the threshold already?
Because the giant blue Na’vi people are not human.
You mean you didn’t notice the shots with the simulated humans in Avatar? ;-)
Avatar and Digital Emily are the reasons why I’m so confident. The digital actors in Avatar are very impressive, and as a (former) CG nerd I do think that Avatar has crossed the valley, or at least found the way across it. I just don’t think that this is proof enough for the general audience and critics.
I think that before the critics are satisfied, one would have to make an entirely CGI film that wasn’t sci-fi or fantastic in its setting or characters.
Something like a Western that had Clint Eastwood and Lee Van Cleef from their Sergio Leone glory days alongside modern-day Western stars like Christian Bale, or that Australian guy who was in 3:10 to Yuma. If we were to see CGI movies like the ones I mentioned, made with the Avatar tech (or Digital Emily), then I am sure the critics and public would sit up and take notice (and immediately launch into how it was really not CGI at all, but a conspiracy to hide immortality technology from the greater public).
Exactly. I was thinking about something like an Elvis Presley biopic, but your example will do just fine (except that I don’t think that vanilla westerns are commercially viable today).
Vanilla Westerns?!? There is Nothing Vanilla about a Sergio Leone Western! And Clint Eastwood’s Unforgiven was an awesome western, as were Silverado and 3:10 to Yuma (and there are even more that have made a fair killing at the box office).
Westerns are not usually thought of as blockbusters, but they do draw a big enough crowd to be profitable.
If one were to bring together Lee Van Cleef, Clint Eastwood, and Eli Wallach from their Sergio Leone days with some of the big names in action flicks today to make a period Western that starred all of these people… I think you’d have a near blockbuster.
However, the point is really that with this technology one would be able to draw upon stage or film actors of any period or genre (wherever we have decent image and voice recordings) and mix actors of the past with those of today.
I just happen to have a passion for a decent horse opera. Pity that Firefly was such crap… a decent horse opera is really no different from a decent space opera. Something like Trigun or Cowboy Bebop.
Not sure whether it’s been fully crossed, but it’s close.
By 2015 we had a CGI-on-top-of-body-double Paul Walker, and audiences weren’t sure which of the clips of him were real. Rogue One had a fully CGI Tarkin and Leia, though those were uncanny for some viewers (and successful for others). I can’t think of another fully CGI human example.
(No, non-human humanoids still don’t count, as impressive as Thanos was.)
You don’t think that the Valley will be crossed for video games in the next ten years?
Considering how rapidly digital technologies make it from the big screen to the small, I’m guessing that we will see the uncanny valley crossed for video games within two years of its closure in films (i.e. once the vast majority of digital films have crossed it).
Part of the reason is that the software packages that do things like Digital Emily (mentioned above) are so easy to buy now. They no longer cost hundreds of thousands of dollars, as they did in the early days of CGI; even huge Autodesk packages, which used to sell for $25,000, can now be had for only $5,000. That is peanuts compared to the cost of the people who run the software.
I agree with you. The uncanny valley refers to rendering human actors only. It is not necessary to render a whole movie from scratch. It is much more work, but only work.
IMO, The Curious Case of Benjamin Button was the first movie that managed to cross the valley.
My reply is here. BTW, major CG packages like Autodesk Maya and 3DS Max have been at the level of $5,000 and below for over a decade.
I’ve been out of circulation for a while. The last time I priced Autodesk was in the early ’90s, and it was still tens of thousands. I’m just now getting caught up on basic AutoCAD, and I hope to begin learning 3DS Max and Maya in the next year or so. I am astounded at how cheap these packages are now (and at how wrong one of my best friends was about how quickly this type of software would become available: in 1989, he said it would be 30 to 40 years before we saw the kinds of graphics displays and software that were, I have since discovered, pretty much common by 1995)… Thanks for the heads-up though.
Interesting; it seems that image synthesis is currently further along than voice/speech synthesis.
In a way, the uncanny valley has already been crossed—video game characters in some games are sufficiently humanlike that I hesitate to kill them.
I once watched a video of an Iraqi sniper at work, and it was disturbingly similar to what I see in realistic military video games (I don’t play them myself, but I’ve seen a couple.)
Why such a big gulf between your confidence for cinema and your confidence for video games?
Movies are ‘pre-computed’, so you can use a real human actor as a data source for the animations, and you have enough editing time to spot and iron out any glitches. In a video game, facial animations are generated on the fly, so all you can rely on is a model that captures human facial behavior perfectly. I don’t think it can be realistically imitated by blending between pre-recorded animations, the way it’s done today with mo-cap animations; e.g. you can’t pre-record eye movement for a game character.
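For concreteness, here is a minimal sketch of the kind of blending referred to above: facial poses are stored as blend-shape weights captured from an actor, and the runtime simply interpolates between them. The pose names and numbers below are made-up illustrations, not anything from a real engine.

```python
# Toy sketch of blending between pre-recorded facial poses (mo-cap style).
# Pose names and weights are invented for illustration only.

def lerp(a, b, t):
    """Linear interpolation between two scalar weights."""
    return a + (b - a) * t

def blend_poses(pose_a, pose_b, t):
    """Blend two facial poses given as dicts of blend-shape weights in [0, 1]."""
    shapes = set(pose_a) | set(pose_b)
    return {s: lerp(pose_a.get(s, 0.0), pose_b.get(s, 0.0), t) for s in shapes}

# Two pre-recorded poses, e.g. captured from an actor.
neutral = {"jaw_open": 0.0, "brow_raise": 0.1, "smile": 0.0}
smiling = {"jaw_open": 0.1, "brow_raise": 0.3, "smile": 0.9}

# At runtime the game can only move along paths between recorded poses.
for frame in range(5):
    t = frame / 4.0
    print(blend_poses(neutral, smiling, t))
```

Anything that wasn’t captured in the recorded poses (spontaneous eye saccades, micro-expressions reacting to what the player just did) simply isn’t in the data to blend.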
As for the robots, they are also real-time, AND they would need muscle / eye / face movement implemented physically (as a machine, not just software), hence the lower confidence level.
The obvious answer would be “offline rendering”.
Even if the non-interactivity of pre-rendered video weren’t an issue, games as a category can’t afford to pre-render more than the occasional cutscene: a typical modern game is much longer than a typical modern movie, typically by at least an order of magnitude (15 to 20 hours of gameplay), and the storyline often branches as well. In terms of dollars grossed per hour rendered, games simply can’t afford to keep up. Hence the rise of real-time hardware 3D rendering in both PC and console gaming.
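A rough back-of-the-envelope calculation makes the gap concrete. The per-frame cost below is a purely hypothetical placeholder, and the running times, frame rates, and branching factor are round-number assumptions rather than real production figures.

```python
# Back-of-the-envelope comparison of offline-rendering workload: film vs. game.
# Every number here is an illustrative assumption, not a real production figure.

FPS_FILM = 24             # typical theatrical frame rate
FPS_GAME = 30             # typical real-time target frame rate
COST_PER_FRAME = 5.0      # hypothetical offline render cost, dollars per frame

movie_hours = 2
game_hours = 15           # "15 to 20 hours of gameplay", per the comment above
branch_factor = 2         # branching storylines multiply the content to render

movie_frames = movie_hours * 3600 * FPS_FILM
game_frames = game_hours * 3600 * FPS_GAME * branch_factor

print(f"Movie: {movie_frames:,} frames, ~${movie_frames * COST_PER_FRAME:,.0f}")
print(f"Game:  {game_frames:,} frames, ~${game_frames * COST_PER_FRAME:,.0f}")
print(f"Ratio: {game_frames / movie_frames:.1f}x more frames to pre-render")
```

Under even these charitable assumptions the game has to pre-render well over an order of magnitude more frames than the film, which is the economic point above.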
Rendering is not the problem. I would say that the uncanny valley has already been passed for static images rendered in real time by current 3D hardware (this NVIDIA demo from 2007 gets pretty close). The challenge for video games to cross the uncanny valley is now mostly in the realm of animation. Video game cutscenes rendered in real time will probably cross the uncanny valley with precanned animations in the next console generation, but doing so with procedural animations is very much an unsolved problem.
(I’m a graphics programmer in the video games industry so I’m fairly familiar with the current state of the art).
I wasn’t even considering the possibility of static images in video games, because static images aren’t generally considered to count in modern video games. The world doesn’t want another Myst game, and I can only imagine one other instance in a game where photorealistic, non-uncanny static images constitute the bulk of the gameplay: some sort of a dialog tree / disguised puzzle game where one or more still characters’ faces changed in reaction to your dialog choices (i.e. something along the lines of a Japanese-style dating sim).
By ‘static images rendered in real time’ I meant static images (characters not animated) rendered in real time (all 3D rendering occurring at 30+ fps). Myst consisted of pre-rendered images, which is quite different.
On current consumer-level 3D hardware, it is possible to render 3D images of humans in real time that move beyond the uncanny valley when viewed as a static screenshot (taken from a real-time rendered sequence) or as a Matrix-style static scene / dynamic-camera bullet-time effect. The uncanny valley has not yet been bridged for procedurally animated humans; the problem is no longer in the rendering but in the procedural animation of human motion.
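To make ‘procedural animation’ concrete: instead of playing back captured data, the runtime has to generate plausible motion from a model. Below is a toy sketch of procedural eye movement, the example raised earlier as something you can’t pre-record. The timing and angle constants are arbitrary assumptions; a production system would need far richer behavioral modeling, which is exactly the unsolved part.

```python
# Toy sketch of procedural eye movement: fixate on a random target, hold it,
# then jump (saccade) to a new one. All timing/angle constants are arbitrary.
import random

def eye_gaze(duration_s, dt=1.0 / 30.0):
    """Yield (time, yaw, pitch) gaze angles in degrees, one tuple per frame."""
    t = 0.0
    yaw, pitch = 0.0, 0.0
    next_saccade = 0.0
    while t < duration_s:
        if t >= next_saccade:
            # Pick a new fixation target and hold it for 0.2 to 1.5 seconds.
            yaw = random.uniform(-15.0, 15.0)
            pitch = random.uniform(-10.0, 10.0)
            next_saccade = t + random.uniform(0.2, 1.5)
        yield t, yaw, pitch
        t += dt

for t, yaw, pitch in eye_gaze(2.0):
    print(f"t={t:.2f}s  yaw={yaw:+.1f}  pitch={pitch:+.1f}")
```

Even this trivial model has to decide where to look, when, and for how long; doing that convincingly for an entire face and body, in response to arbitrary gameplay, is the part nobody has solved.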
How would you verify a crossing of the uncanny valley? A movie critic invoking it by name and saying a movie doesn’t trigger it?
An ideal indicator would be a regular movie or trailer screening where the audience failed to detect a synthetic actor who (who?) played a lead role, or at least had significant screen time during the screening.
There isn’t much financial incentive to CGI a human—if they are just acting like a regular human. That’s what actors are for.
I suppose Avatar is a case in point—it’s worth CGIfying human actors because otherwise they would be totally out of place in the SF environment which is completely CGI.
“There are a number of shots of CGI humans,” James Cameron says. “The shots of [Stephen Lang] in an AMP suit, for instance — those are completely CG. But there’s a threshold of proximity to the camera that we didn’t feel comfortable going beyond. We didn’t get too close.”
http://www.ew.com/ew/gallery/0,,20336893_7,00.html