Scott seems to disagree:

Despite popular misconceptions, the SAT is basically an IQ test, and doesn’t really reward obsessive freaking out and throwing money at the problem.
I am not inclined to argue about this particular point though. Scott tends to know what he writes about, and whenever his mistakes are pointed out he earnestly adds them to his list of mistakes. So I go with his take on it.
But there is an even more fundamental issue, I think, which is that GPT-2 more resembles a compressed GLUT (giant lookup table) or a giant Markov chain than it does a thinking program that computes intelligent solutions for itself.
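To make the comparison concrete, here is a minimal sketch of a word-level Markov chain text generator (the corpus and function names are just for illustration): it can only ever re-emit word transitions it saw during training.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Build a bigram transition table: word -> list of observed next words."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=20):
    """Emit text by repeatedly sampling an observed successor of the current word."""
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = model.get(word)
        if not successors:  # dead end: this word never appeared mid-corpus
            break
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Everything such a generator produces is a recombination of fragments of its training text; the question in dispute is how far beyond this GPT-2 actually gets.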
Maybe. I don’t know enough about either the brain architecture (which looks like a hodgepodge of whatever evolution managed to cobble together) or the ML architecture (which is probably not much closer to intelligent design), and I do not really care. As long as AI behaves like an IQ 120+ human, I would happily accept a mix of GLUTs and Markov chains as a reasonable facsimile of intelligence and empathy.
As long as AI behaves like an IQ 120+ human, I would happily accept a mix of GLUTs and Markov chains as a reasonable facsimile of intelligence and empathy.
It doesn’t though, that’s the point! It cannot form plans. It cannot work towards coherent, long-term goals, or really operate as an agent at all. It is unable to form new concepts and ideas. It is a very narrow AI, only really able to remix its training data in a way that superficially approximates human writing style. That’s all it can do.
I don’t care who disagrees. If he’s got statistics, then I defy the data. This is something you can go out and test in the real world: get a practice test book, test yourself on one timed test, learn some techniques, test yourself on the next test to see what difference it makes, and repeat. I’ve done this and the effect is very real. Training centers have demonstrated the effect with large sample sizes.
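The bookkeeping for that self-experiment is trivial; a toy version (all scores here are hypothetical, just to show the arithmetic):

```python
# Hypothetical scores from successive timed practice tests,
# with technique study between each sitting.
scores = [1180, 1240, 1310, 1350]

for i in range(1, len(scores)):
    delta = scores[i] - scores[i - 1]
    print(f"Test {i + 1}: {scores[i]} ({delta:+d} vs. previous)")

print(f"Total gain after prep: {scores[-1] - scores[0]:+d} points")
```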
I see your stance, and it looks like further discussion is no longer productive. We’ll see how things turn out.