You are missing OP’s point. OP is talking about arithmetic, and other things computers are really good at. There is a tendency, when talking about AI, to assume the AI will have all the abilities of modern computers. If computers can play chess really well, then so will AI. If computers can crunch numbers really well, then so will AI. That is what OP is arguing against.
If AIs are like human brains, then they likely won’t be really good at those things. They will have all the advantages of humans of course, like being able to throw a ball or manage jiggly appendages. But they won’t necessarily be any better than us at anything. If humans take ages to do arithmetic, so will AI.
There are some other comments saying that the AI can just interface with calculators and chess engines and gain those abilities. But so can humans. AI doesn’t have any natural advantage there. The only advantage might be that it’s easier to do brain-computer interfaces, which maybe gets you a bit more bandwidth in usable output. But I don’t see many domains where that would be very useful vs. humans with keyboards. Basically they would just be able to type faster or move a mouse faster.
And even your argument that humans are really good at analog math doesn’t hold up. There have been some experiments done to see if humans could learn to do arithmetic better if it’s presented as an analog problem. Like draw a line the same length as two shorter lines added together. Or draw a shape with the same area as two lines would make if formed into a rectangle.
Not only does it take a ton of training, but you are still only accurate within a few percent. Memorizing multiplication tables is easier and more accurate.
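Just to make the "a few percent" claim concrete, here's a toy simulation (my own illustrative sketch, not from the experiments above): model analog addition as estimating each line's length with magnitude-proportional noise (a Weber-fraction-style assumption, here ~3%), then summing the noisy estimates. The exact noise figure is an assumption for illustration.

```python
import random

WEBER_FRACTION = 0.03  # assumed ~3% perceptual noise per length estimate

def analog_add_error(a, b, trials=10000):
    """Mean absolute percent error when 'adding' two noisily estimated lengths."""
    total_err = 0.0
    for _ in range(trials):
        # each length is perceived with noise proportional to its magnitude
        est = random.gauss(a, WEBER_FRACTION * a) + random.gauss(b, WEBER_FRACTION * b)
        total_err += abs(est - (a + b)) / (a + b)
    return 100 * total_err / trials

print(f"analog estimate error: {analog_add_error(70, 30):.1f}%")  # a few percent
print(f"digital answer: {70 + 30}")  # exact, every time
```

However much training you throw at the analog strategy, the noise floor never reaches zero, while memorized symbolic arithmetic is exact.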
My implied point is that the line between hard math and easy math for humans is rather arbitrary, drawn mostly by evolution. AI is designed, not evolved, so the line between hard and easy for AI is based on algorithmic complexity and processing power, not on millions of years of trying to catch prey or reach a fruit.
I’m not sure I agree with that. Currently most progress in AI is with neural networks, which are very similar to human brains. Not exactly the same, but they have very similar strengths and weaknesses.
We may not be bad at things because we didn’t evolve to do them. They might just be limits of our type of intelligence. NNs are good at big messy analog pattern matching, and bad at other things like doing lots of addition or solving chess boards.
That could be true, we don’t know enough about the issue. But interfacing a regular computer with a NN should be a… how should I put it… no-brainer?
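To sketch what that "no-brainer" interface could look like: the fuzzy component only has to *recognize* that a task is arithmetic and route it; the exact computation runs on an ordinary calculator. Everything below is hypothetical, and `fuzzy_recognizer` is a trivial stand-in for whatever the neural net would actually do.

```python
import operator
import re

# exact operations the "regular computer" side handles trivially
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def fuzzy_recognizer(prompt):
    """Stand-in for an NN: spot 'number op number' and hand it off."""
    m = re.fullmatch(r"\s*(\d+)\s*([+\-*/])\s*(\d+)\s*", prompt)
    if m:
        a, op, b = m.groups()
        return ("calculator", int(a), op, int(b))
    return ("brain", prompt)

def answer(prompt):
    route = fuzzy_recognizer(prompt)
    if route[0] == "calculator":
        _, a, op, b = route
        return OPS[op](a, b)          # exact tool, zero effort
    return "let me think about that..."  # slow, fuzzy path

print(answer("1234 * 5678"))            # routed to the calculator: 7006652
print(answer("is a hotdog a sandwich"))  # stays on the fuzzy path
```

The routing step is the only part that needs "intelligence"; the rest is the same advantage a human gets by reaching for a calculator, just with less latency.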
For how long?
One of the points of AI is rapid change and evolution.