The only reasonable debate at this point seems to me to be exponential vs superexponential.
When somebody tells you to buy into the S&P 500, what's their reasoning? After a century or two of reliable exponential growth, the most conservative prediction is that the trend continues (barring existential catastrophe). We are in our second or third century of dramatic, recursive technological improvement. AI is clearly part of this virtuous cycle, so the safest money looks like it's on radical change.
I appreciate the perspectives of the Gary Marcuses of the world, but I've noticed they tend more toward storytelling ("Chinese room, doesn't know what it's saying"). The near-term-singularity crowd tends to point at thirty different graphs of exponential curves and shrug. This is of course an overgeneralization (there are plenty of statistically grounded arguments against short horizons), but it's hard to argue we're on an S curve while the rate of change is still accelerating. Harder still to argue that AI isn't, and won't get, spooky when it already writes better than most Americans and is about to pass the coffee test.
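To make the exponential vs superexponential vs S-curve distinction concrete, here's a toy sketch (my own illustration, with made-up rate constants, not anyone's actual forecast). The diagnostic is the relative growth rate f'(t)/f(t): constant for an exponential, rising for a superexponential, and falling for a logistic S curve well before the plateau is visible. If the measured relative rate is still climbing, the S-curve story requires the inflection point to be somewhere ahead of us.

```python
import numpy as np

# Toy comparison of three growth trajectories via their relative
# growth rate f'(t)/f(t). Rate constants are arbitrary illustrations.
t = np.linspace(0.0, 10.0, 1000)

exponential = np.exp(0.5 * t)             # f'/f = 0.5, constant
superexponential = np.exp(0.1 * t**2)     # f'/f = 0.2*t, rising
logistic = 1.0 / (1.0 + np.exp(-(t - 5)))  # f'/f = 1 - f, falling

for name, f in [("exponential", exponential),
                ("superexponential", superexponential),
                ("logistic (S curve)", logistic)]:
    rate = np.gradient(np.log(f), t)  # numerical d/dt log f = f'/f
    mid = len(t) // 2
    print(f"{name:20s} relative growth rate: "
          f"start {rate[0]:.2f}, middle {rate[mid]:.2f}, end {rate[-1]:.2f}")
```

Running it shows the logistic's relative rate has already collapsed by the midpoint of the curve, while the superexponential's keeps rising: an observer seeing accelerating relative growth has weak evidence for being mid-S-curve.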
Forgive my hypocrisy, but I'll do a bit of storytelling myself. As a music ed major, I was taught by nearly every one of my professors to make knowledge transfers and to teach my students to transfer. Research on exercise science, psychology, film criticism, the experience of stubbing a toe: all of it can be used to teach music better if you know how to extract the useful information. To me this act of transfer is the boson of wisdom. If PaLM-E can use its internal model of a bag of chips to grab it for you out of the cabinet… Google doesn't seem out of line claiming PaLM-E is transferring knowledge. The Chinese room argument seems to be well and truly dead.