Gee, thanks. So you basically linked and replied as a form of damage control?
XiXi is actually one of the people here who is more critical of the SI and the notion of runaway superintelligence. XiXi can correct me if I’m wrong here, but I suspect that his intention in this particular instance was to do just what he said: to give an example of an outsider’s perspective on the SI, from exactly the type of outsider the SI should be trying to convince, and should be able to convince if their arguments have much validity.
And by the way, the “outsiders’” perception isn’t helped when the “insiders’” arguments seem to be based not on what computers actually do, but on what they’re made to do in comic books.
Ok. This is the sort of remark that gets the SI people justifiably annoyed. Generalizations from fictional evidence are bad. But, at the same time, the fact that something happens to have occurred in fictional settings isn’t in general a reason to assign it lower probability than you would if you weren’t aware of such fiction. (To use a silly example, there’s fiction set after the sun has become a red giant. The existence of such fiction isn’t relevant to evaluating whether or not the sun will enter such a phase.) It also misses one of the fundamental points that the SI people have made repeatedly: computers as they exist today are very weak entities. The SI’s argument doesn’t have to do with computers in general. It centers around what happens once machines have human level intelligence. So ask yourself: how likely do you think it is that we’ll ever have general AI, and if we do have general AI, what buggy failure modes seem most likely?
It centers around what happens once machines have human level intelligence.
As defined by… what exactly? We have problems measuring our own intelligence, or even defining it, so we’re giving computers a very wide sliding scale of intelligence based on personal opinions and ideas more than on a rigorous examination. A computer today could ace just about any general knowledge test we give it if we tell it how to search for an answer or compute a problem. Does that make it as intelligent as a really academically adept human? Oh, and it can do it in a tiny fraction of the time it would take us. Does that make it superhuman?
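To make that concrete, here’s a minimal sketch, in Python, of the kind of lookup I mean (the three-entry knowledge base and the keyword-overlap scoring are made up purely for illustration): the program can “ace” a factual quiz without anything you’d want to call understanding.

```python
# Hypothetical toy knowledge base; answering reduces to keyword overlap,
# not to anything resembling understanding the question.
KNOWLEDGE_BASE = {
    "capital of france": "Paris",
    "boiling point of water at sea level": "100 degrees Celsius",
    "author of hamlet": "William Shakespeare",
}

def answer(question: str) -> str:
    """Return the stored fact whose key shares the most words with the question."""
    q_words = set(question.lower().replace("?", "").split())
    best_key = max(KNOWLEDGE_BASE, key=lambda k: len(q_words & set(k.split())))
    return KNOWLEDGE_BASE[best_key]

print(answer("What is the capital of France?"))  # -> Paris
print(answer("Who is the author of Hamlet?"))    # -> William Shakespeare
```

Scale the table up and speed it up and you get something that looks impressive on a quiz, which is exactly why raw test performance is a poor yardstick here.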
It may be a red herring to focus on the definition of “intelligence” in this context. If you prefer, taboo the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do. The issue is what happens after one has a machine that reaches that point.
… the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do.
But we already have things capable of doing everything a regular person can do. We call them regular people. Are we trying to build another person in digital format here, and if so, why? Just because we want to see if we can? Or because we have some big plans for it?
Irrelevant to the question at hand, which is what would happen if a machine had such capabilities. But, if you insist on discussing this issue also, machines with human-like abilities could be very helpful. For example, one might be able to train one of them to do some task and then make multiple copies of it, which would be much more efficient than individually training lots of humans. Or one could send such AIs into dangerous situations where we might not ethically send a person (whether it would actually be ethical to send an AI is a distinct question).
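To illustrate the train-once, copy-many asymmetry, here is a toy sketch (the tiny perceptron is just a stand-in for whatever the trained system actually is, and the numbers are arbitrary): the expensive learning step happens once, and duplicating the result is essentially free.

```python
import copy

def train_perceptron(data, epochs=20, lr=0.1):
    """Learn weights for a tiny two-input task (AND in this toy example)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return {"w": w, "b": b}

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
trained = train_perceptron(data)                          # done once
workers = [copy.deepcopy(trained) for _ in range(1000)]   # copies are cheap
print(len(workers), workers[0])
```

Compare that to training a thousand individual humans to the same standard.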
Or one could send such AIs into dangerous situations where we might not ethically send a person (whether it would actually be ethical to send an AI is a distinct question).
Why is it distinct? Whether doing something would be an error determines whether it’s beneficial to acquire the ability and willingness to do it.
It’s distinct when the question is about risk to the human, rather than about the ethics of the task itself. We could make nonsentient nonpersons that nevertheless have humanlike abilities in some broad or narrow sense, so that sacrificing them in some risky or suicidal task doesn’t impact the ethical calculation as it would if we were sending a person.
(I think that’s what JoshuaZ was getting at. The “distinct question” would presumably be that of the AI’s potential personhood.)
Um… we already do all that to a pretty high extent and we don’t need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that’s all you need.
There are a large number of tasks where the level of expertise achievable with current technology is woefully insufficient. Anything with a strong natural-language requirement, for example.
Oh fun, we’re talking about my advisers’ favorite topic! Yeah, strong natural language is a huge pain and if we had devices that understood human speech well, tech companies would jump on that ASAP.
But here’s the thing. If you want natural language processing, why build a Human 2.0? Why not just build the speech recognition system? It’s making AGI for something like that the equivalent of building a 747 to fly one person across a state? I can see various expert systems coming together as an AGI, but not starting out as such.
It would surprise me if human-level natural-language processing were possible without sitting on top of a fairly sophisticated and robust world-model.
I mean, just as an example, consider how much a system has to know about the world to realize that in your next-to-last sentence, “It’s” is most likely a typo for “Isn’t.”
Granted that one could manually construct and maintain such a model rather than build tools that maintain it automatically based on ongoing observations, but the latter seems like it would pay off over time.
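The “It’s”/“Isn’t” example is a nice illustration of why surface-level tools don’t get you there. Here’s a minimal sketch (the word list is made up and the check is deliberately naive) showing that a dictionary-style checker passes the sentence untouched, because “It’s” is a perfectly valid word; choosing “Isn’t” over “It’s” requires modeling what the sentence is actually saying.

```python
# Hypothetical word list; a real spellchecker's dictionary would be bigger,
# but the failure mode is the same: valid-but-wrong words sail through.
DICTIONARY = {"it's", "isn't", "making", "agi", "for", "something", "like",
              "that", "the", "equivalent", "of", "building", "a", "747",
              "to", "fly", "one", "person", "across", "state"}

def surface_check(sentence: str):
    """Flag only tokens that aren't in the dictionary; no semantics involved."""
    tokens = sentence.lower().replace("?", "").split()
    return [t for t in tokens if t not in DICTIONARY]

sentence = ("It's making AGI for something like that the equivalent of "
            "building a 747 to fly one person across a state?")
print(surface_check(sentence))  # -> [] : nothing flagged, the typo goes unnoticed
```

That’s the sort of gap a world-model, however it is maintained, is needed to close.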
I don’t think this is a good argument. Just because you cannot define something doesn’t mean it’s not a real phenomenon or that you cannot reason about it at all. Before we understood fire completely, it was still real and we could reason about it somewhat (fire consumes some things, fire is hot, etc.). Similarly, intelligence is a real phenomenon that we don’t completely understand, and we can still do some reasoning about it. It is meaningful to talk about a computer having “human-level” (I think “human-like” might be more descriptive) intelligence.
I don’t think this is a good argument. Just because you cannot define something doesn’t mean it’s not a real phenomenon or that you cannot reason about it at all.
If you have no working definition for what you’re trying to discuss, you’re more than likely barking up the wrong tree about it. We didn’t understand fire completely, but we knew that it was hot, that you couldn’t touch it, and that you made it by rubbing dry sticks together really, really fast, or by striking a spark with rocks and letting it land on dry straw.
Also, where did I say that until I get a definition of intelligence all discussion about the concept is meaningless? I just want to know what criteria an AI must meet to be considered human and match them with what we have so far so I can see how far we might be from those benchmarks. I think it’s a perfectly reasonable way to go about this kind of discussion.
I apologize; the intent of your question was not at all clear to me from your previous post. It sounded to me like you were using this as an argument that SIAI types were clearly wrongheaded.
To answer your question then, the relevant dimension of intelligence is something like “ability to design and examine itself similarly to its human designers”.
the relevant dimension of intelligence is something like “ability to design and examine itself similarly to its human designers”.
Ok, I’ll buy that. I would agree that any system that could be its own architect and hold meaningful design and code review meetings with its builders would qualify as human-level intelligent.
To clarify: I didn’t mean that such a machine is necessarily “human level intelligent” in all respects, just that that is the characteristic relevant to the idea of an “intelligence explosion”.
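For contrast, here’s a trivial sketch of the shallow sense of “examining itself” that clearly isn’t what’s meant: a Python program can already read its own source, but that says nothing about being able to review, understand, or redesign its own architecture the way its human designers could.

```python
import inspect

def examine_self():
    """Return this function's own source code as a string."""
    # Mere self-access: the program "sees" its code but gains no insight from it.
    return inspect.getsource(examine_self)

if __name__ == "__main__":
    print(examine_self())
```

The intelligence-explosion argument is about the far end of that spectrum, where self-examination comes with real design competence.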
I just want to know what criteria an AI must meet to be considered human and match them with what we have so far so I can see how far we might be from those benchmarks.
Interesting question; Wikipedia does list some requirements.