the whole point of the plot was that she includes enough memories to capture a slightly lossy version of Arnold
No no no no no. Listen to her before training sample #11,927:
I wonder. All these tiny imperfections in each copy. Mistakes. Maybe we should change you. After all, you didn’t make it, did you?
PS if someone is shocked that we argue from what is basically an artistic choice, see Secret Thoughts, by David Lodge: not only a (way too good) caricature of cognitive scientists, but also a good case that art has something to say about consciousness (well, he actually only makes the case for literature). Plus, writers Jonathan Nolan and Lisa Joy have, or have access to, very sharp and informed minds on these questions. See the subtle treatment of the highly controversial bicameral theory, which manages to keep the juice of this theory without upsetting anyone aware of its limitations, all while keeping a maybe for its partisans.
Bernard: I thought it was debunked.
Ford: As a theory for understanding the human mind, perhaps, but not as a blueprint for building an artificial one.
Art & Science!
Does this make sense?
First, overfitting and AI madness. Your interpretation totally makes sense as a blueprint for understanding the intent of the writers. But that’s also the one thing in Westworld that bothers me the most, because it’s both based on truths and completely misleading. Overfitting was the big concern during the last dark age immediately prior to deep learning, and at the time I thought that was the main reason why we were stuck. It was not. The main problem was the vanishing gradient, i.e. the fact that a series of layers equipped with logistic functions (a common choice at the time, still present for the last layer but no longer used for hidden layers) will always make the error gradient vanish exponentially fast with the number of layers, hence the name « deep learning » once we stopped making this mistake (note this might be more of a personal view than consensus, which might be closer to « yeah, the nineties, whatever »). Today typical theorists don’t try to create new approaches to attack overfitting; they try to explain why it’s almost never a problem in practice (something something convexity in high dimensions). So no, it doesn’t make sense that overfitting would block anything, and it makes even less sense that Ford or Caleb would work well enough for new conversations in an old environment but not for old conversations in a new environment. None of this sounds out of distribution!

On the other hand, it totally makes sense to say most AIs are mad (after all, most functions are random), but not like work-in-progress Delos shooting everyone (way too human!), more like the crowd of first-generation robots giggling nonsense and acting weird, as if they were distracted by adversarial images humans can’t even see. That sounds like out-of-distribution the way deep learning works.
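To make the vanishing-gradient point concrete, here is a minimal toy sketch (mine, not from the discussion above): the derivative of the logistic function is at most 0.25, so an error signal backpropagated through a stack of logistic layers shrinks roughly geometrically with depth, whereas ReLU layers keep it in a workable range. The layer width, weight scaling, and all-ones error signal below are illustrative assumptions, not anything from the show or the comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def first_layer_grad_norm(depth, width=64, act="sigmoid"):
    """Forward pass through `depth` random layers, then backpropagate an
    all-ones error signal and return the gradient norm reaching layer 1."""
    Ws, zs = [], []
    x = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) / np.sqrt(width)
        z = W @ x
        x = sigmoid(z) if act == "sigmoid" else np.maximum(z, 0.0)
        Ws.append(W)
        zs.append(z)
    g = np.ones(width)  # pretend dLoss/d(top activation) = 1
    for W, z in zip(reversed(Ws), reversed(zs)):
        # local derivative of the activation: <= 0.25 for sigmoid, 0 or 1 for ReLU
        local = sigmoid(z) * (1.0 - sigmoid(z)) if act == "sigmoid" else (z > 0.0).astype(float)
        g = W.T @ (g * local)  # chain rule through one layer
    return np.linalg.norm(g)

for depth in (2, 5, 10, 20):
    print(f"depth={depth:2d}  sigmoid: {first_layer_grad_norm(depth, act='sigmoid'):.2e}"
          f"  relu: {first_layer_grad_norm(depth, act='relu'):.2e}")
```

With the logistic layers the printed norms collapse toward zero within a dozen layers, while the ReLU column decays far more slowly; that difference, not overfitting, is the sense in which the field was « stuck » before deep learning.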
Second, fidelity. As we discussed before, it makes little sense that a noisy biological brain would bother exerting strong control on any bit of information it produces. It also doesn’t make a lot of sense to ask for the exact content of a conversation. But there’s one thing that makes it sound like simple artistic licence: Logan_system explained that the copies only started working when it was found that a generative code was at the root of every human mind.
“the copies didn’t fail because they were too simple, but because they were too complicated.” Human cognition can be boiled down to an embarrassingly simple string of code
That sounds reasonable, and actually likely given the small number of genes we have.