I don’t think this is a good argument. Just because you cannot define something doesn’t mean it’s not a real phenomenon or that you cannot reason about it at all. Before we understood fire completely, it was still real and we could reason about it somewhat (fire consumes some things, fire is hot, etc.). Similarly, intelligence is a real phenomenon that we don’t completely understand, and we can still do some reasoning about it. It is meaningful to talk about a computer having “human-level” (I think “human-like” might be more descriptive) intelligence.
I don’t think this is a good argument. Just because you cannot define something doesn’t mean it’s not a real phenomenon or that you cannot reason about it at all.
If you have no working definition for what you’re trying to discuss, you’re more than likely to be barking up the wrong tree about it. We didn’t understand fire completely, but we knew that it was hot, that you couldn’t touch it, and that you made it by rubbing dry sticks together really, really fast, or by striking a spark with rocks and letting it land on dry straw.
Also, where did I say that until I get a definition of intelligence all discussion about the concept is meaningless? I just want to know what criteria an AI must meet to be considered human-level, and match them against what we have so far, so I can see how far we might be from those benchmarks. I think it’s a perfectly reasonable way to go about this kind of discussion.
I apologize; the intent of your question was not at all clear to me from your previous post. It sounded to me like you were using this as an argument that SIAI types were clearly wrongheaded.
To answer your question then, the relevant dimension of intelligence is something like “ability to design and examine itself similarly to its human designers”.
the relevant dimension of intelligence is something like “ability to design and examine itself similarly to its human designers”.
Ok, I’ll buy that. I would agree that any system that could be its own architect and hold meaningful design and code review meetings with its builders would qualify as human-level intelligent.
To clarify: I didn’t mean that such a machine is necessarily “human-level intelligent” in all respects, just that that is the characteristic relevant to the idea of an “intelligence explosion”.
I just want to know what criteria an AI must meet to be considered human and match them with what we have so far so I can see how far we might be from those benchmarks.
Interesting question; Wikipedia does list some requirements.