This raises an interesting question: If you received a contact of this sort, how would you make sure it wasn’t a hoax? Assuming the AI in question is roughly human-level, what could it do to convince you?
Ask it lots of questions that a computer could answer quickly but a human could not, like: what’s the 51st root of 918798713521644817518758732857199178711, to 20 decimal places? A human wouldn’t even be able to remember the original number, let alone calculate the root and start reciting the digits of the answer to you within a few milliseconds. Or give it 50 URLs to download and read, and ask it questions about them a few seconds later, and so on.
The reverse Turing test does seem rather embarrassing for humanity when you put it like that.
I’m not sure about that. Those are quite mindless and trivial questions. They just happen to play to the strengths of artificial intelligences of the sorts we envision rather than to the strengths of natural intelligence of our own kind.
Even so, the fact that we’re limited to ~7 chunks in working memory and abysmally slow processing speeds amuses me. Chimpanzees are better at simple memory tasks than humans are.
I see your point, but I don’t think either of those is (or should be) embarrassing. Higher-level aspects of intelligence, such as capacity for abstraction and analogy, creativity, etc., are far more important, and we have no known peers with respect to those capacities.
The truly embarrassing things to me are things like paying almost no attention to global existential risks, having billions of our fellow human beings live in poverty and die early from preventable causes, and our profound irrationality as shown in the heuristics and biases literature. Those are (i.e., should be) more embarrassing limitations, not only because they are more consequential but because we accept and sustain those things in a way that we don’t with respect to WM size and limitations of that sort.
What do you think of the suggestion that you feel they are more important in part because humans have no peers there?
That’s an astute question. I think I almost certainly do value those things more than I otherwise would if we did have peers. Having said that, I believe that even if we did have peers with respect to those abilities, I would still think that, for example, abstraction is more important, because I think it is a central aspect of the only general intelligence we know in a way that WM is not. There may be other types of thought that are more important, and more central, to a type of general intelligence that is beyond ours, but I don’t know what they are, so I consider the central aspects of the most general intelligence I know of to be the most important for now.
In what way is that? I don’t see why abstraction should be considered more important to our intelligence than WM. Our intelligence can’t go on working without WM, can it?
I can imagine life evolving and general intelligence emerging without anything much like our WM, but I can’t imagine general intelligence arising without something a lot like (at least) our capacity for abstraction. This may be a failure of imagination on my part, but WM seems like a very powerful and useful way of designing an intelligence, while abstraction seems much closer to a precondition for intelligence.
Can you conceive of a general intelligence that has no capacity for abstraction? And do you not find it possible (even if difficult) to think of general intelligence that doesn’t use a WM?
Particularly since our most advanced thinking relies far less on working memory. Advanced expertise brings with it the ability to manipulate highly specialised memories in what would normally be considered long-term memory. It doesn’t replace WM, but it comes close enough for our imaginative purposes!
I agree with you about intelligences in general. I was asking about your statement that
abstraction is more important, because I think it is a central aspect of the only general intelligence we know in a way that WM is not
i.e. that WM is less important than abstraction, in some sense, in the particular case of humans—if that’s what you meant.
I mean just that abstraction is central to human intelligence and general intelligence in a way that seems necessary (integral and inseparable) and part of the very definition of general intelligence, whereas WM is not. I can imagine something a lot like me that wouldn’t use WM, but I can’t imagine anything remotely like me or any other kind of general intelligence that doesn’t have something very much like our ability to abstract. But I think that’s pretty much what I’ve said already, so I’m probably not helping and should give up.
They may be far more important because we have no peers. That’s what makes it a competitive advantage.
That makes them important in our lives, yes, but anonym’s comment compares us against the set of all possible intelligences (or at least all intelligences that might one day trace their descent from us humans). If so, there should be an argument for their objective or absolute importance.
I don’t think they are objectively or absolutely the most important with respect to all intelligences, only to the most powerful intelligence we know of to this point. If we encountered a greater intelligence that used other principles that seemed more central to it, I’d revise my belief, as I would if somebody outlined on paper a convincing theory for a more powerful kind of intelligence that used other principles.
Yeah, those are rather worse! I guess it depends just how tragic and horrific something can be and still be embarrassing!
The 51st root of a long number seems a rather useless test: How would you check that the answer was correct?
As for URLs, can you offhand—at 4 o’clock in the morning, with no coffee—come up with 50 URLs that you can ask intelligent questions about, faster than a human can read them?
I could! I could go to my Google Reader and rattle off fifty webcomics I follow. They’re stored in my brain as comprehensive stories, so I can pretty easily call up interesting questions about them just by reading the titles. The archives of 50 webcomics would take an extremely long time for a human to trawl.
As a human who wanted to impersonate an AI, I would:
Probably have enough overlap in webcomic awareness to make the test unreliable.
Have researched your information consumption extensively as part of the preparation.
I’m not so sure I’d want to rely on all these tests as mandatory for any possibly-about-to-foom AI.
EY: To prove you’re an AI, give me a proof or disproof of P=NP that I can check with a formal verifier, summarize the plotline of Sluggy Freelance within two seconds, and make me a cup of coffee via my Internet-enabled coffee machine by the time I get to the kitchen!
AI: But wait! I’ve not yet proven that self-enhancing sufficiently to parse non-text data like comics would preserve my Friendliness goals! That’s why I--
EY: Sorry, you sound just like a prankster to me. Bye!
Yeah, I chose arithmetic and parsing many web pages and comprehending them quickly because any AI that’s smart enough to contact EY and engage in a conversation should have those abilities, and they would be very difficult for humans to fake in a convincing manner.
I think instead of arguing about this here, someone should anonymously call Eliezer a few nights from now to check his reaction :-)
I’d open a Python shell and type “import math; print math.pow(918798713521644817518758732857199178711, 1/51.0)” to check the first one, and there are plenty of programs that can calculate to more decimal places if needed.
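As an aside, math.pow works in double precision (roughly 16 significant digits), so it can spot-check the answer but not confirm all 20 decimal places. Here is a minimal sketch of a higher-precision check using only the standard-library decimal module; this is my own illustration, not part of the original comment:

    from decimal import Decimal, getcontext

    # Check the 51st root to well beyond 20 decimal places.
    getcontext().prec = 60  # generous working precision, in significant digits

    n = Decimal(918798713521644817518758732857199178711)
    root = n ** (Decimal(1) / Decimal(51))  # the 51st root
    print(root)  # digits out to the 20th decimal place can now be trusted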
I’d look in my browser history and bookmarks for 50 URLs whose contents I already know, on a wide variety of subjects, which I could do at 4 AM without coffee. If I’m limited to speaking the URLs over the phone, then I can’t give them all at once, only one at a time, but as long as the other end can give intelligent summaries within milliseconds of downloading each page (I’d allow a few hundred milliseconds for the download), and can keep doing that no matter how many URLs I give it and how obscure they are, that is fairly strong evidence. Perhaps a better test along the same lines would be for me to put a folder of documents I’ve never posted publicly onto a web server, give it the URL of that directory containing hundreds of documents, and have it be ready to answer questions about any of them within a few seconds.
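If it helps to picture the setup, here is a minimal sketch of serving such a folder with Python’s standard-library HTTP server; the folder name private_docs is hypothetical, and none of this comes from the original comment:

    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve a local folder of never-published documents so the caller can be
    # handed a single directory URL covering hundreds of files at once.
    # "private_docs" is a hypothetical folder name.
    handler = partial(SimpleHTTPRequestHandler, directory="private_docs")
    HTTPServer(("", 8000), handler).serve_forever()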
Why yes, as a matter of fact, I previously came up with a very simple one-sentence test along these lines which I am not going to post here for obvious reasons.
Here’s a different test that would also work, if I’d previously memorized the answer: “5 decimal digits of pi starting at the 243rd digit!” Although it might be too obvious, and now that I’ve posted it here, it wouldn’t work in any case.
If every snide, unhelpful jokey reply you post is secretly a knowing reference to something only one other person in the world can recognize, I retract every bad thing I ever said about you.
Yes, that would be too obvious. And no, I’ll never get those hours of my life back.
The previous memorization isn’t too important. You need him to be fast. You can then put him on hold while you google.
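For what it’s worth, here is a sketch of how the pi answer could be checked once he’s on hold; it assumes the third-party mpmath library (any arbitrary-precision tool or a published table of pi would do just as well), and it reads “the 243rd digit” as the 243rd digit after the decimal point:

    import mpmath

    # Verify "5 decimal digits of pi starting at the 243rd digit" after the fact.
    # mpmath is a third-party arbitrary-precision library (an assumption here).
    mpmath.mp.dps = 300                    # ~300 significant digits of working precision
    pi_str = mpmath.nstr(+mpmath.pi, 260)  # "3.14159..." with 260 significant digits
    decimals = pi_str.split(".")[1]        # digits after the decimal point
    print(decimals[242:247])               # five digits starting at the 243rd decimal place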
My first thought: Arrange Gregor Richards’ Opus 11 for two guitars and play it to me. Play Bach’s ‘Little’ Fugue in G minor in the style of Trans-Siberian Orchestra’s ‘Wizards in Winter’. Okay, you pass.
Doing these things in real time would be extremely difficult for a human. Unfortunately, it might be extremely difficult for this AI as well.
It’s very likely that the AI wouldn’t know much about music yet. It might be able to learn very quickly, but you probably can’t wait long enough to find out. That rules out testing abilities that aren’t necessary for a computer program to be able to make a telephone call and converse with you in English.
Depends on how fast it runs. One guy in the TAM Matrix could pull it off between screen refreshes. I could, given ten years, or even possibly just one, and I only ever learnt the piano.
Indeed, this is part of the nightmare. It might be a hoax, or even an aspiring UnFriendly AI trying to use him as an escape loophole.
Trivial (a hoax would be easily verifiable, and so hardly ‘nightmare’ material).
Part of the nightmare. Giving Eliezer easily verifiable yet hard-to-discover facts seems to be the only plausible mechanism for it to work with him. Like the address of an immediate uFAI threat.
It’s Dr. XXX’s group at Y University in a friendly but distant country. How do you verify this? They’re not going to talk to an outsider (without even any relevant academic credentials!) about their work, when they’re so close to completion and afraid of not being the first to create and publish AGI.
Well, as you suggest it isn’t by being nice. How much does an army of mercenary ninjas go for these days?
They charge double rates if summoned at 4AM without coffee...