Perhaps I am missing something, but I do not understand the value of this post. Obviously you can beat something much smarter than you if you have more affordances than it does.
FWIW, I have read some of the discourse on the AI Boxing game. In contrast, I think those posts are valuable. They illustrate that even with very few affordances a much more intelligent entity can win against you, which is not super intuitive, especially in the boxed context.
So the obvious question is, how do differences in affordances lead to differences in winning (i.e., when does brain beat brawn)? That’s a good question to ask, but I think that’s intuitive to everyone already? Like that’s what people allude to when they ask “why can’t you just unplug the AI.”
However, the experiment conducted here is itself flawed for the reasons other commenters have mentioned already (i.e., you would not beat LeelaQueenOdds, which is rated 2630 FIDE in blitz). Furthermore, I’m struggling to see how you could learn anything in a chess context that would generalize to AI alignment. If you want to understand how affordances interact with winning, you should research AI control.
Anyways I notice that I am confused and welcome attempts to change my views.