I think that when you say “My understanding is that it is not true that if you ran computers for a long time that they would beat the human also running for a long time” (which I don’t disagree with, btw), you are misunderstanding what Ege was claiming. He was not claiming that in 1997 chess engines on stock hardware would beat humans provided the time controls were long enough, but only that in 1997 chess engines on stock hardware would beat humans if you gave the engines a huge amount of time and somehow stopped the humans having anything like as much time.
In other words, he’s saying that in 1997 chess engines had “superhuman but slower-than-human performance”: that whatever a human could do, a chess engine could also do if given dramatically more time to do it than the human had.
And yes, this means that in some sense we had superhuman-but-slow chess as soon as someone wrote down a theoretically-valid tree search algorithm. Just as in some sense we have had superhuman-but-slow intelligence[1] ever since someone wrote down the AIXI algorithm.
[1] In some sense of “intelligence” which may or may not be close enough to how the term is usually used.
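For concreteness, here is roughly the shape of “a theoretically-valid tree search algorithm”: a minimal negamax sketch over an assumed game interface (moves, apply, score, is_terminal are hypothetical stand-ins, not anything from a real 1997 engine).

```python
# Minimal negamax sketch: plays any finite two-player zero-sum game
# perfectly if given unbounded time. The game interface is assumed:
#   moves(state)       -> iterable of legal moves
#   apply(state, m)    -> successor state after move m
#   score(state)       -> terminal value for the player to move
#   is_terminal(state) -> True when the game is over
def negamax(state, moves, apply, score, is_terminal):
    """Exact game value of `state` for the player to move."""
    if is_terminal(state):
        return score(state)
    # Examine every legal continuation; cost grows exponentially
    # with remaining game length, hence "superhuman but slow".
    return max(-negamax(apply(state, m), moves, apply, score, is_terminal)
               for m in moves(state))
```

Plug in chess’s move generator and this is perfect play, but the full chess game tree has something like 10^120 nodes, which is exactly the “slower-than-human” sense above.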
I feel like there’s an interesting question here but can’t figure out a version of it that doesn’t end up being basically trivial.
Is there any case where we’ve figured out how to make machines do something at human level or better if we don’t care about speed, where they haven’t subsequently become able to do it at human level and much faster than humans?
Kinda-trivially yes, because anything we can write down an impracticably-slow algorithm for and haven’t yet figured out how to do better than that will count.
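To make the kinda-trivial case concrete, here is the classic impracticably-slow construction, sketched under assumptions (is_valid_proof is a hypothetical decidable proof checker for some formal system): enumerate every candidate proof in length order.

```python
from itertools import product

# Exhaustive proof search: generate every string over the alphabet in
# length order and test each with a proof checker. Complete for any
# formal system whose proofs are machine-checkable, and absurdly slow;
# it also loops forever if `theorem` has no proof at all.
def find_proof(is_valid_proof, theorem, alphabet="01"):
    length = 1
    while True:
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof(candidate, theorem):
                return candidate  # first (shortest) proof found
        length += 1
```

In the don’t-care-about-speed sense this “does mathematics at human level or better”, and nothing we know how to build today does the general version at human level and human speed.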
Is there any case where we’ve figured out how to make machines do something at human level or better if we don’t mind them being a few orders of magnitude slower than humans, where they haven’t subsequently become able to do it at human level and much faster than humans?
Kinda-trivially yes, because there are things we’ve only very recently worked out how to make machines do well.
Is there any case where we’ve figured out how to make machines do something at human level or better if we don’t mind them being a few orders of magnitude slower than humans, and then despite a couple of decades of further work haven’t made them able to do it at human level and much faster than humans?
Kinda-trivially no, because until fairly recently Moore’s law was still delivering multiple-orders-of-magnitude speed improvements just by waiting, so anything we got to human level 20 or more years ago has since got hugely faster that way.
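As a back-of-the-envelope check on that claim (assuming the textbook two-year doubling time, which is a stylized figure rather than a measured one):

```python
# Moore's-law compounding: ~2-year doubling over two decades gives
# roughly three orders of magnitude "just by waiting".
doubling_period_years = 2
years = 20
speedup = 2 ** (years // doubling_period_years)
print(speedup)  # 1024, i.e. ~10^3
```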