That makes sense. My main question is: where is the clear evidence of human negligibility in chess? People seem to be misleadingly confident about this proposition (in general; I’m not targeting your post).
When a friend showed me the linked post, I thought, “oh wow, that really exposes some flaws in my thinking about humans in chess.” I believe some of these flaws came from hearing assertive statements on this topic from other people. As an example, here’s Sam Harris during his interview with Eliezer Yudkowsky (transcript, audio):
Obviously we’ll be getting better and better at building narrow AI. Go is now, along with Chess, ceded to the machines. Although I guess probably cyborgs—human-computer teams—may still be better for the next fifteen days or so against the best machines. But eventually, I would expect that humans of any ability will just be adding noise to the system, and it’ll be true to say that the machines are better at chess than any human-computer team.
(In retrospect, this is a very weird assertion. Fifteen days? I thought he was talking about Go, but the last sentence makes it sound like he’s talking about chess.)