Humans do reasoning without mathematical logic. I don’t know why anyone would think that you need mathematical logic to do reasoning.
Right. Humans do reasoning, but they don’t really understand reasoning. Since ancient times, when people have tried to understand something, they have tried to formalize it; hence the study of logic.
If we want to build something that can reason, we have to understand reasoning, or we basically won’t know what we are getting. We can’t just say “humans reason based on some ad-hoc kludgy nonformal system” and then magically extract an AI design from that. We need to build something we can understand, or it won’t work, and right now, understanding reasoning in the abstract means logic and its extensions.
It’s a double need, though: not only do we need to understand reasoning, but self-improvement means the created thing needs to understand reasoning as well. Right now we don’t have a formal theory of reasoning that can handle understanding its own reasoning without losing power. So that’s what we need to solve. There is no viable alternate path.
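A minimal formal sketch of this obstacle, assuming the standard provability-logic setting (the comment doesn’t commit to a specific system), is Löb’s theorem: for any theory $T$ extending Peano Arithmetic, with provability predicate $\mathrm{Prov}_T$, and any sentence $\varphi$,

\[
T \vdash \bigl(\mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi\bigr) \quad\Longrightarrow\quad T \vdash \varphi .
\]

In particular, if $T$ proved the soundness schema $\mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi$ for every sentence $\varphi$, Löb’s theorem would force it to prove every $\varphi$, i.e., to be inconsistent. So a consistent system can only fully endorse the reasoning of something strictly weaker than itself, which is the sense in which self-trust “loses power.”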
If we want to build something that can reason, we have to understand reasoning, or we basically won’t know what we are getting. We can’t just say “humans reason based on some ad-hoc kludgy nonformal system” and then magically extract an AI design from that. We need to build something we can understand, or it won’t work, and right now, understanding reasoning in the abstract means logic and its extensions.
Note that this is different from what you were saying before, and that commenting along the lines of “AI’s do Reasoning. If you can’t see the relevance of logic to reasoning, I can’t help” without further explanation doesn’t adhere to the principle of charity.
I’m very familiar with the argument that you’re making, and have discussed it with dozens of people. The reason I didn’t respond to the argument before you made it is that I wanted to isolate our core point(s) of disagreement rather than make presumptions. The same holds for my points below.
If we want to build something that can reason, we have to understand reasoning, or we basically won’t know what we are getting.
This argument has the form “If we want to build something that does X, we have to understand X, or we won’t know what we’re getting.” But this isn’t true in full generality. For example, we can build a window shade without knowing how the window shade blocks light, and still know that we’ll be getting something that blocks light. Why do you think that AI will be different?
We can’t just say “humans reason based on some ad-hoc kludgy nonformal system” and then magically extract an AI design from that.
Why do you think that it’s at all viable to create an AI based on a formal system? (For the moment putting aside safety considerations.)
As to the rest of your comment, and returning to my “Chinese economy” remarks: the Chinese economy is a recursively self-improving system with the “goal” of maximizing GDP. It could be that there’s goal drift, and that the Chinese economy starts optimizing for something random. But I think that the Chinese economy does a pretty good job of keeping this “goal” intact, and that it’s been doing a better and better job over time. Why do you think that it’s harder to ensure that an AI keeps its goal intact than it is to ensure that the Chinese economy keeps its “goal” intact?