Presumably, the tasks that machines have been superhuman at so far (arithmetic, chess) confer radically less power than the tasks that LLMs could become superhuman at soon (writing code, crafting business strategies, superhuman “Diplomacy” skill of outwitting people or other AIs in negotiations, etc.)
Why do you think an LLM could become superhuman at crafting business strategies or negotiating? Or even writing code? I don’t believe this is possible.
“Writing code” feels underspecified here. I think it is clear that LLMs will be (perhaps already are) superhuman at writing some types of code, for some purposes, in certain contexts. What line are you asserting will not be crossed when you say you don’t think it’s possible for them to be superhuman at writing code?