OK, so it’s superhuman on some tasks[1]. That’s well known. But so what? Computers have always been radically superhuman on some tasks.
As far as I can tell, the point is supposed to be that predicting what will actually appear next is harder than generating just anything vaguely reasonable, and that a perfect predictor of anything that might appear next would be both amazingly powerful and very unlike a human (and, I assume, therefore dangerous). But that’s another “so what”. You’re not going to get an even approximately perfect predictor, no matter how much you try to train in that direction. You’re going to run into the limitations of the approach. So talking about how hard it would be to become approximately perfect, or about how powerful something approximately perfect would be, isn’t really interesting.
By the way, it also generates a lot of wrong code. And I don’t find quines exclamation-point-worthy. Quines are exactly the sort of thing I’d expect it to get right, because some people are really fascinated by them and have written both tons of code for them and tons of text explaining how that code works.
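(For reference, a quine is a program whose output is exactly its own source code. The classic two-line Python version looks like this; comments are omitted from the code itself so that the output matches the source character for character.)

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is that `%r` substitutes the string's own `repr`, escapes and all, so printing `s % s` reproduces both lines verbatim. As noted above, variations of exactly this program appear all over the training data, with explanations attached.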
Presumably, the tasks that machines have been superhuman at so far (arithmetic, chess) confer radically less power than the tasks that LLMs could become superhuman at soon (writing code, crafting business strategies, a superhuman “Diplomacy”-style skill of outwitting people or other AIs in negotiations, etc.).
Why do you think an LLM could become superhuman at crafting business strategies or negotiating? Or even writing code? I don’t believe this is possible.
“Writing code” feels underspecified here. I think it is clear that LLMs will be (perhaps already are) superhuman at writing some types of code for some purposes in certain contexts. What line are you trying to assert will not be crossed when you say you don’t think it’s possible for them to be superhuman at writing code?