I’m not assuming stuff by accident. There is serious theory for this. AI people ought to learn these ideas and engage with them, IMO, since they contradict some of your ideas. If we’re right, then you need to make some changes to how you approach AI design.
So for example:
The rationality of an AI will depend on its mind design: whether it has biases built into its hardware or not is up to us.
If an AI is a universal knowledge creator, in what sense can it have a built in bias?
I’m not assuming stuff by accident. There is serious theory for this. AI people ought to learn these ideas and engage with them, IMO, since they contradict some of your ideas.
Astrology also conflicts with “our ideas”. That is not in itself a compelling reason to brush up on our astrology.
If an AI is a universal knowledge creator, in what sense can it have a built in bias?
I don’t understand this sentence. Let me make my view of things clearer: An AI’s mind can be described by a point in mind design space. Certain minds (most of them, I imagine) have cognitive biases built into their hardware. That is, they function in suboptimal ways because of the algorithms and heuristics they use. For example: human beings. That said, what is a “universal knowledge creator?” Or, to frame the question in the terms I just gave, what is its mind design?
Certain minds (most of them, I imagine) have cognitive biases built into their hardware.
That’s not what mind design space looks like. It looks something like this:
You have a bunch of stuff that isn’t a mind at all. It’s simple and it’s not there yet. Then you have a bunch of stuff that is a fully complete mind, capable of anything that any mind can do. There are also some special cases (you could have a very long program that hard codes how to deal with every possible input, situation, or idea). AIs we create won’t be special cases of that type, which is a bad kind of design.
This is similar to the computer design space, which has no half-computers.
what is a “universal knowledge creator?”
A knowledge creator can create knowledge in some repertoire/set. A universal knowledge creator can do any knowledge creation that any other knowledge creator can do: there is nothing in the repertoire of any other knowledge creator that is not also in its own.
Human beings are universal knowledge creators.
Are you familiar with the universality of computers? And how very simple computers can be universal? There are a lot of parallel issues.
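Since the analogy to computational universality is doing real work in this exchange, a concrete illustration may help. SUBLEQ is a well-known one-instruction machine (“subtract and branch if less than or equal to zero”) that is Turing-complete despite its extreme simplicity. The interpreter and addition program below are a sketch added for illustration, not anything from the thread:

```python
def subleq(mem, pc=0):
    """Run a SUBLEQ program. The single instruction 'subtract and
    branch if <= 0' is enough for universality: mem[b] -= mem[a],
    then jump to c if the result is <= 0, else fall through."""
    while pc >= 0:  # a negative jump target halts the machine
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Program: add the values at addresses 9 and 10, leaving the sum at 10.
# Layout: three instructions (cells 0-8), then data A=2, B=3, temp T=0.
mem = [9, 11, 3,    # T -= A  (T becomes -A)
       11, 10, 6,   # B -= T  (B becomes B + A)
       11, 11, -1,  # T -= T  (clear T), then halt via jump to -1
       2, 3, 0]     # A, B, T
subleq(mem)
print(mem[10])  # → 5
```

The point of the example: universality doesn’t require a complicated design. Once a machine crosses the universality threshold, further sophistication is a matter of convenience, not reach.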
You have a bunch of stuff that isn’t a mind at all. It’s simple and it’s not there yet. Then you have a bunch of stuff that is a fully complete mind, capable of anything that any mind can do. There are also some special cases (you could have a very long program that hard codes how to deal with every possible input, situation, or idea). AIs we create won’t be special cases of that type, which is a bad kind of design. This is similar to the computer design space, which has no half-computers.
I’m somewhat skeptical of this claim. I can design a mind that has the functions 0(n) (zero function), S(n) (successor function), and P(x0, x1, ..., xn) (projection function) but not primitive recursion; such a mind can compute some but not all functions. So I’m skeptical of this “all or little” description of mind space and computer space.
However, I suspect it ultimately doesn’t matter because your claims don’t directly contradict my original point. If your categorization is correct and human beings are indeed universal knowledge creators, that doesn’t preclude the possibility of us having cognitive biases (which it had better not do!). Nor does it contradict the larger point, which is that cognitive biases come from cognitive architecture, i.e. where one is located in mind design space.
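For readers who want to see the restricted repertoire being described, here is a minimal sketch (added for illustration, not from either poster) of the basic functions plus composition, without primitive recursion. Under these rules, every definable function returns either a constant or one argument plus a constant, so even two-argument addition is out of reach:

```python
# The basic functions of recursion theory, minus primitive recursion.
def zero(*args):
    """0(n): the constant zero function."""
    return 0

def succ(x):
    """S(n): the successor function."""
    return x + 1

def proj(i):
    """P_i: projection onto argument i."""
    return lambda *args: args[i]

def compose(f, *gs):
    """Composition: h(xs) = f(g1(xs), ..., gk(xs))."""
    return lambda *args: f(*(g(*args) for g in gs))

# With only these, we can build e.g. "second argument plus 2"...
add2_to_second = compose(succ, compose(succ, proj(1)))
print(add2_to_second(10, 40))  # → 42
# ...but nothing like add(x, y) or mult(x, y): iterating succ a
# variable number of times needs primitive recursion (or minimization).
```

This is one way to make the disputed point concrete: such a design computes more than nothing but far less than everything, so it sits strictly between the two extremes.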
Are you familiar with the universality of computers? And how very simple computers can be universal? There are a lot of parallel issues.
If you’re referring to Turing-completeness, then yes I am familiar with it.
I’m somewhat skeptical of this claim. I can design a mind that has the functions 0(n) (zero function), S(n) (successor function), and P(x0, x1, ..., xn) (projection function) but not primitive recursion; such a mind can compute some but not all functions. So I’m skeptical of this “all or little” description of mind space and computer space.
How is that a mind? Maybe we are defining it differently. A mind is something that can create knowledge, and a lot of it, not just a few special cases, like people who can think about all kinds of topics such as engineering or art. When you give a few simple functions and don’t even include recursion, that doesn’t meet my conception of a mind, and I’m not sure what good it is.
If your categorization is correct and human beings are indeed universal knowledge creators, that doesn’t preclude the possibility of us having cognitive biases (which it had better not do!).
In what sense can a bias be very important (in the long term) if we are universal? We can change it. We can learn better. So the implementation details aren’t such a big deal to the result; you get the same kind of thing regardless.
Temporary mistakes in starting points should be expected. Thinking needs to be mistake tolerant.