my read was “we’ve already got models as strong as they’re going to get, and they’re not agi”. I disagree that they’re as strong as they’re going to get.
No, I didn’t say they are as strong as they are going to get. But they are strong enough to do some Python, which shows that neural networks can make a symbolic language look as though it wasn’t one. In other words, they have no value in revealing anything about the underlying nature of Python, or of language (my claim).