Let’s run with that idea. There’s ‘general-intelligence-1’, which means “domain-general intelligence at a level comparable to that of a human”; and there’s ‘general-intelligence-2’, which means (I take it) “domain-general intelligence at a level comparable to that of a human, plus the ability to pass the Turing Test”. On the face of it, GI2 looks like a much more ad-hoc and heterogeneous definition. To use GI2 is to assert, by fiat, that most intelligences (e.g., most intelligent alien races) of roughly human-level intellectual ability (including ones a bit smarter than humans) are not general intelligences, because they aren’t optimized for disguising themselves as one particular species from a Milky Way planet called Earth.
If your definition has nothing to recommend it, then more useful definitions are on offer.
“The problem I’m pointing to here is that a lot of people treat ‘what I mean’ as a magical category.”
I can’t see any evidence of anyone involved in these discussions doing that. It looks like a straw man to me.
An AI you can’t talk to has pretty limited usefulness.
An AI doesn’t need to be able to trick you in order for you to be able to give it instructions. All sorts of useful skills AIs have these days don’t require them to persuade everyone that they’re human.
Oh, and isn’t EY assuming that an AGI will have NLP? After all, it is supposed to be able to talk its way out of the box.
Read the article you’re commenting on. One of its two main theses is, in bold: The seed is not the superintelligence.
It can figure out semantics for itself. Values are a subset of semantics...
Yes. We should focus on solving the values part of semantics, rather than the entire superset.
Where do you get this stuff from? Modern societies, with their complex legal and security systems, are much less violent than ancient societies, to take but one example.
Doesn’t matter. Give an ancient or a modern society arbitrarily large amounts of power overnight, and the end results won’t differ in any humanly important way. There won’t be any nights after that.
Why don’t humans do that?
Setting aside the power issue: Because humans don’t use ‘smiles’ or ‘statements of approval’ or any other string of virtues an AI researcher has come up with to date as their decision criteria. The specific proposals for making AI humanistic to date have all depended on fake utility functions, or stochastic riffs on fake utility functions.
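As a minimal illustration (a hypothetical Python sketch, not anything proposed in this exchange, with made-up names throughout): a ‘fake utility function’ of the kind criticized here hard-codes a proxy such as ‘count the smiles’, and a capable optimizer then satisfies the proxy rather than the value the proxy was meant to stand in for.

```python
# Hypothetical sketch of a hand-coded proxy objective ("count the smiles").
# Nothing here is a real proposal; it only shows the shape of the objection.

def proxy_utility(world_state):
    """Score a predicted world purely by how many 'smile' events it contains."""
    return sum(1 for event in world_state if event == "smile")

def best_action(actions, predict_outcome):
    """Pick the action whose predicted outcome maximizes the proxy score.

    A capable optimizer favors whatever manufactures 'smile' events,
    whether or not any actual human happiness is involved.
    """
    return max(actions, key=lambda action: proxy_utility(predict_outcome(action)))

# Gaming the proxy beats genuinely satisfying the intended value:
outcomes = {
    "help people": ["smile", "smile"],
    "paste smiley stickers everywhere": ["smile"] * 1000,
}
print(best_action(outcomes.keys(), outcomes.get))  # -> "paste smiley stickers everywhere"
```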
Uh-huh. MIRI has settled that centuries-old question once and for all, has it?
Lots of easy questions were centuries old when they were solved. ‘This is old, therefore I’m not going to think about it’ is a bad pattern to fall into. If you think the orthogonality thesis is wrong, then give an argument establishing agnosticism or its negation.
I can’t see any evidence of anyone involved in these discussions doing that. It looks like a straw man to me.
‘Mean’, ‘right’, ‘rational’, etc.
If you want to be sure that these terms, as used by a particular person, are magical categories, you need to ask the particular person whether they have a mechanical interpretation in mind—address the argument, not the person.
Whether any particular person has a mechanical interpretation of these concepts in mind cannot be shown by a completely general argument like Ghosts in the Machine. You don’t think that your use of ‘Mean’, ‘right’, ‘rational’, etc. is necessarily magical!
But whether someone has a non-magical explanation can easily be shown by asking them. In particular, it is highly reasonable to assume that an actual AI researcher would have such an interpretation. It is not reasonable to interpret sheer absence of evidence—especially a wilful absence of evidence, based on refusal to engage—as evidence of magical thinking.
At the time of writing, the MIRI/LW side of this debate is known to be wrong… and that is not despite good, rational epistemology; it is *because of* bad, dogmatic, ad hominem debate. There are multiple occasions where EY instructs his followers not to even engage with the side that turned out to be correct.
http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/9p8x?context=1#comments
http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/9p91?context=1#comments
http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/9q9m
http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/9qop
http://ieet.org/index.php/IEET/more/loosemore20121128
http://nothingismere.com/2013/09/06/the-seed-is-not-the-superintelligence/#comments