We cannot just predict what the outcomes of our decisions will be; we really, really have to go through the whole process of making them. We cannot even pretend that decisions are determined until after we have finished making them.
And what about an AI that can predict its own decisions (because it knows its source code)?
I believe that a compatibilist can accept both free will and determinism at the same time. I reject them both as not useful to understanding decisions. I think there is a difference between believing both A and B and believing neither A nor B.
It seems to me unlikely that an AI could predict its own decisions by examining its source code but not running the code. But I am not sure it is completely impossible just because I cannot see how it would be done. If it were possible I would be extremely surprised if it was faster or easier than just running the code.
As I’ve stated before, no AI can predict its own decisions in that sense (i.e. in detail, before it has made them.) Knowing its source code doesn’t help; it has to run the code in order to know what result it gets.
As I’ve stated before, no AI can predict its own decisions in that sense (i.e. in detail, before it has made them.)
I suggest that it can but it is totally pointless for it to do so.
Knowing its source code doesn’t help; it has to run the code in order to know what result it gets.
Things can be proved from source code without running it. This applies to any source code, including that of oneself. Again, it doesn’t seem a particularly useful thing to do in most cases.
For example if the top-level decision function of an AI is:
def DecideWhatToDo(self, environment):
    if environment.IsUnderWater():
        return actions.SELF_DESTRUCT
    else:
        return self.EmergentComplexStochasticDecisionFunction(environment)
… and the AI doesn’t self-modify, then it can predict that it will decide to self-destruct if it falls in the water, only by analysing the code, without running it (also assuming, of course, that it is good enough at code analysis).
Of course, you can imagine AIs that can’t predict any of their decisions, and as wedrifid says, in most non-trivial cases, most probably wouldn’t be able to.
(This may be important, because having provable decisions in certain situations could be key to cooperation in prisoner’s-dilemma-type situations.)
Of course that is predictable, but that code wouldn’t exist in any intelligent program, or at least it isn’t an intelligent action; predicting it is like predicting that I’ll die if my brain is crushed.
Unknowns, we’ve been over this issue before. You don’t need to engage in perfect prediction in order to be able to usefully predict. Moreover, even if you can’t predict everything you can still examine and improve specific modules. For example, if an AI has a module for factoring integers using a naive, brute-force factoring algorithm, it could examine that and decide to replace it with a quicker, more efficient module for factoring (that maybe used the number field sieve for example). It can do that even though it can’t predict the precise behavior of the module without running it.
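For concreteness, a naive trial-division module of the sort described might look something like the sketch below (the function name and interface here are made up for illustration):

#include <stdint.h>

/* Naive trial-division factoring: returns the smallest nontrivial factor of n,
   or n itself if n is prime (n >= 2 assumed). An AI could prove properties of
   this module by static analysis (it terminates, its result divides n) and
   decide to replace it with something faster, without running it on any input. */
uint64_t smallest_factor(uint64_t n)
{
    uint64_t d;
    for (d = 2; d <= n / d; d++) {
        if (n % d == 0)
            return d;
    }
    return n;  /* no divisor found up to sqrt(n), so n is prime */
}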
That’s also because this is a simplified example, merely intended to provide a counter-example to your original assertion.
As I’ve stated before, no AI can predict its own decisions in that sense (i.e. in detail, before it has made them.) Knowing its source code doesn’t help; it has to run the code in order to know what result it gets.
Agreed, it isn’t an intelligent action, but if you start saying intelligent agents can only take intelligent decisions, then you’re playing No True Scotsman.
I can imagine plenty of situations where someone might want to design an agent that takes certain unintelligent decisions in certain circumstances, or an agent that self-modifies in that way. If an agent can not only make promises, but also formally prove by showing its own source code that those promises are binding and that it can’t change them, then it may be at an advantage for negotiations and cooperation over an agent that can’t do that.
So “stupid” decisions that can be predicted by reading one’s own source code aren’t a feature that I consider unlikely in the design-space of AIs.
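(To make that concrete, here is a toy sketch, with invented names, of the kind of trivially provable decision rule being described: an agent that cooperates exactly when the other program’s source text is identical to its own. Anyone who can read both sources can prove what it will do, without running either program.)

#include <string.h>

enum action { COOPERATE, DEFECT };

/* Toy "provable cooperator" (illustrative only): cooperate iff the opponent's
   source text is byte-for-byte identical to our own. The decision is fully
   determined by inspecting the two sources, so it can be proved in advance. */
enum action decide(const char *opponent_source, const char *own_source)
{
    return strcmp(opponent_source, own_source) == 0 ? COOPERATE : DEFECT;
}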
I would agree with that. But I would just say that the AI would experience doing those things (for example keeping such promises) as we experience reflex actions, not as decisions.
It’s like that precisely because it is easily predictable; as I said in another reply, an AI will experience its decisions as indeterminate, so anything it knows in advance in such a determinate way will not be understood as a decision, just as I don’t decide to die if my brain is crushed, but I know that will happen. In the same way the AI will merely know that it will self-destruct if it is placed under water.
From this, it seems like your argument for why this will not appear in its decision algorithm, is simply that you have a specific definition for “decision” that requires the AI to “understand it as a decision”. I don’t know why the AI has to experience its decisions as indeterminate (indeed, that seems like a flawed design if its decisions are actually determined!).
Rather, any code that leads from inputs to a decision should be called part of the AI’s ‘decision algorithm’ regardless of how it ‘feels’. I don’t have a problem with an AI ‘merely knowing’ that it will make a certain decision. (and be careful - ‘merely’ is an imprecise weasel word)
It isn’t a flawed design because when you start running the program, it has to analyze the results of different possible actions. Yes, it is determined objectively, but it has to consider several options as possible actions nonetheless.
Now, I am not certain about this, but we have to examine that code before we know its outcome.
While this isn’t “Running” the code in the traditional sense of computation as we are familiar with it today, it does seem that the code is sort of run by our brains as a simulation as we scan it.
As a sort of meta-process, if you will...
I could be so wrong about that though… eh...
Also, that code is useless really, except maybe as a wait function… It doesn’t really do anything (Not sure why Unknowns gets voted up in the first post above, and down below)...
Also, leaping from some code to the Entirety of an AI’s source code seems to be a rather large leap.
Also, leaping from some code to the Entirety of an AI’s source code seems to be a rather large leap.
“some code” is part of “the entirety of an AI’s source code”—if it doesn’t need to execute some part of the code, then it doesn’t need to execute the entirety of the code.
This is false for some algorithms, and so I imagine it would be false for the entirety of the AI’s source code.
It is, incidentally, trivial to alter the code into an algorithm for making decisions and also simple to make it an algorithm that can predict it’s decision before making it.
/* fragment of the AI's decision routine; assumes <limits.h> for ULONG_MAX */
do_self_analysis();
unsigned long i;
unsigned long j;
for (i = 0; i < ULONG_MAX - 1; i++)
    for (j = 0; j < ULONG_MAX - 1; j++)
        ;  /* empty body: the nested loops just burn time before the decision */
if (i > 2) return ACTION_DEFECT;
return ACTION_COOPERATE;
The do_self_analysis method (do they call them methods or functions? Too long since I’ve used C) can browse the entire source code of the AI, determine that the above piece of code is the algorithm for making the relevant decision, prove that do_self_analysis doesn’t change anything or perform any output and does return in finite time and then go on to predict that the AI will behave like a really inefficient defection rock. Quite a while later it will actually make the decision to defect.
When the AI runs the code for predicting it’s action, it will have the subjective experience of making the decision. Later “it will actually make the decision to defect” only in the sense that the external result will come at that time. If you ask it when it made it’s decision, it will point to the time when it analyzed the code.
You are mistaken. I consider the explanations given thus far by myself and others sufficient. (No disrespect intended beyond that implicit in the fact of disagreement itself and I did not vote on the parent.)
If you ask it when it made it’s decision, it will point to the time when it analyzed the code.
If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong.
I avoided commenting on the ‘subjective experience’ side of things because I thought it was embodying a whole different kind of confusion. It assumes that the AI executes some kind of ‘subjective experience’ reasoning that is similar to that of humans (or some subset thereof). This quirk relies on lacking any strong boundaries between thought processes. People usually can’t predict their decisions without making them. For both the general case and the specific case of the code I gave a correctly implemented module that could be given the label ‘subjective experience’ would see the difference between prediction and analysis.
I upvoted the parent for the use of it’s. I usually force myself to write its in that context but cringe while doing so. The syntax of the English language is annoying.
I upvoted the parent for the use of it’s. I usually force myself to write its in that context but cringe while doing so. The syntax of the English language is annoying.
Really? Do you also cringe when using theirs, yours, ours, mine, and thine?
Mine and thine? They don’t belong in the category. The flaw isn’t that all words about possession should have an apostrophe. The awkwardness is that the pattern of adding the “s” to the end to indicate ownership is the same from “Fred’s” to “its” but arbitrarily not punctuated in the same way. The (somewhat obsolete) “ine” is a distinct mechanism of creating a possessive pronoun which while adding complexity at least doesn’t add inconsistency.
As for “theirs, yours and ours”, they prompt cringes in decreasing order of strength (in fact, it may not be a coincidence that you asked in that order). Prepend “hers” to the list and append “his”. “Hers” and “theirs” feel more cringe-worthy, as best as I can judge, because they are closer in usage to “Fred’s” while “ours” is at least a step or two away. “His” is a special case in as much as it is a whole different word. It isn’t a different mechanism like “thine” or “thy” but it isn’t “hes” either. I have never accidentally typed “hi’s”.
No, I’m not reading the wrong pattern. I’m criticising the pattern in terms of the objective and emotional-subjective criteria that I use for evaluating elements of languages and communication patterns in general. I am aware of the rules in question and more than capable of implementing them and the hundreds of other rules that go into making our language.
The undesirable aspect of this part of the language is this: It is not even remotely coincidental that we add the “ss” sound to the end of a noun to make it possessive and that most modern possessive pronouns are just the pronoun with a “ss” sound at the end. Nevertheless, the rule is “use the appropriate possessive pronoun”… that’s a bleeding lookup table! A lookup table for something that is nearly always an algorithmic modification is not something I like in a language design. More importantly, when it comes to the spoken word the rule for making *nouns possessive is “almost always add ‘ss’”. ‘Always’ is better than ‘almost always’ (but too much to ask). Given ‘almost always’, the same kind of rule for converting them all to written form would be far superior.
According to subjectively-objective criteria, this feature of English sucks. If nothing else it would be fair to say that my ‘subjective’ is at least not entirely arbitrary, whether or not you share the same values with respect to language.
Yes, this is definitely a difference in how we perceive the language. I don’t see any inherent problem with a lookup table in the language, given that most of the language is already lookup tables in the same sense (what distinguishes ‘couch’ from ‘chair’, for instance). And it would not occur to me to have a rule for “*nouns” rather than the actual separate rules for nouns and pronouns. Note also that pronouns have possessive adjective and possessive pronoun forms, while nouns do not. They’re an entirely different sort of animal.
So I would not think to write “It’s brand is whichever brand is it’s” instead of “its brand is whichever brand is its” any more than I would think to write “me’s brand is whichever brand is me’s” (or whatever) instead of “my brand is whichever brand is mine”.
Yes, this is definitely a difference in how we perceive the language.
I suspect the difference extends down to the nature of our thought processes. Let me see… using Myers-Briggs terminology and from just this conversation I’m going to guess ?STJ.
I tend to test as INTP/INTJ depending, I think, on whether I’ve been doing ethics lately. But then, I’m pretty sure it’s been shown that inasmuch as that model has any predictive power, it needs to be evaluated in context… so who knows about today.
That’s not exactly true, and I didn’t think it had terribly much bearing to my point on account of we’re talking about pronouns, but I’ll amend the parent.
Indeed, and while we’re on the subject of idiolects: my preference is for the spelling to follow the pronunciation. Hence either “Charles’s tie” or “Charles’ tie” is correct, depending on how you want it to be pronounced (in this case I usually prefer the latter option, but the meter of the sentence may sometimes make the other a better choice).
“If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong.”
I use “decision” precisely to refer to the experience that we have when we make a decision, and this experience has no mathematical definition. So you may believe yourself right about this, but you don’t have (and can’t have) any mathematical proof of it.
(I corrected this comment so that it says “mathematical proof” instead of proof in general.)
Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.
It also does his initial position a disservice. Rereading the original claim with the professed intended meaning changes it from “not quite technically true” to, basically, nonsense (at least in as much as it claims to pertain to AIs).
I don’t think my definition is … inconsistent with the sense used in decision theory.
You defined decision as a mathematically undefinable experience and suggested that it cannot be subject to proofs. That isn’t even remotely compatible with the sense used in decision theory.
It is compatible with it as an addition to it; the mathematics of decision theory does not have decisions happening at particular moments in time, but it is consistent with decision theory to recognize that in real life, decisions do happen at particular moments.
No, but surely some chunks of similarly-transparent code would appear in an algorithm for making decisions. And since I can read that code and know what it outputs without executing it, surely a superintelligence could read more complex code and know what it outputs without executing it. So it is patently false that in principle the AI will not be able to know the output of the algorithm without executing it.
Any chunk of transparent code won’t be the code for making an intelligent decision. And the decision algorithm as a whole won’t be transparent to the same intelligence, but perhaps only to something still more intelligent.
Any chunk of transparent code won’t be the code for making an intelligent decision.
Do you have a proof of this statement? If so, I will accept that it is not in principle possible for an AI to predict what its decision algorithm will return without executing it.
Of course, logical proof isn’t entirely necessary when you’re dealing with Bayesians, so I’d also like to see any evidence that you have that favors this statement, even if it doesn’t add up to a proof.
It’s not possible to prove the statement because we have no mathematical definition of intelligence.
Eliezer claims that it is possible to create a superintelligent AI which is not conscious. I disagree with this because it is basically saying that zombies are possible. True, he would say that he only believes that human zombies are impossible, not that zombie intelligences in general are impossible. But in that case he has no idea whatsoever what consciousness corresponds to in the physical world, and in fact has no reason not to accept dualism.
My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don’t know what you’re going to do before you do it; when you experience knowing what you’re going to do, you experience deciding to do it.
Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.
You may still disagree, but please note that this is entirely consistent with everything you and wedrifid have argued, so his claim that I have been refuted is invalid.
As I recall, Eliezer’s definition of consciousness is borrowed from GEB: it’s when the mind examines itself, essentially. That has very real physical consequences, so the idea of non-conscious AGI doesn’t support the idea of zombies, which require consciousness to have no physical effects.
Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.
Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.
I’m not sure I am parsing correctly what you’ve written. It may rest with your use of the word “intelligence”- how are you defining that term?
You could replace it with “AI.” Any AI can examine itself, so any AI will be conscious, if consciousness is or results from examining itself. I agree with this, but Eliezer does not.
My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don’t know what you’re going to do before you do it; when you experience knowing what you’re going to do, you experience deciding to do it.
Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.
I don’t have any problem granting that “any intelligent being will be conscious”, nor that “It will have the subjective experience of making decisions”, though that might just be because I don’t have a formal specification of either of those—we might still be talking past each other there.
But it is essential to this experience that you don’t know what you’re going to do before you do it
I don’t grant this. Can you elaborate?
when you experience knowing what you’re going to do, you experience deciding to do it.
I’m not sure that’s true, or in what sense it’s true. I know that if someone offered me a million dollars for my shoes, I would happily sell them my shoes. Coming to that realization didn’t feel to me like the subjective feeling of deciding to sell something to someone at the time, as compared to my recollection of past transactions.
Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions.
Okay, that follows from the previous claim.
And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.
If I were moved to accept your previous claim, I would now be skeptical of the claim that “a block of code will not cause it to feel the sensation of deciding”. Especially since we’ve already shown that some blocks of code would be capable of predicting some decision algorithms.
that block of code must be incapable of predicting its decision algorithm.
This follows, but I draw the inference in the opposite direction, as noted above.
I would distinguish between “choosing” and “deciding”. When we say “I have some decisions to make,” we also mean to say that we don’t know yet what we’re going to do.
On the other hand, it is sometimes possible for you to have several options open to you, and you already know which one you will “choose”. Your example of the shoes and the million dollars is one such case; you could choose not to take the million dollars, but you would not, and you know this in advance.
Given this distinction, if you have a decision to make, as soon as you know what you will or would do, you will experience making a decision. For example, presumably there is some amount of money ($5? $20? $50? $100? $300?) that could be offered for your shoes such that you are unclear whether you should take the offer. As soon as you know what you would do, you will feel yourself “deciding” that “if I was offered this amount, I would take it.” It isn’t a decision to do something concretely, but it is still a decision.
And what about an AI that can predict its own decisions (because it knows its source code)?
Also, are you a compatibilist?
I believe that a compatibilist can accept both free will and determinism at the same time. I reject them both as not useful to understanding decisions. I think there is a difference between believing both A and B and believing neither A nor B. It seems to me unlikely that an AI could predict its own decisions by examining its source code but not running the code. But I am not sure it is completely impossible just because I cannot see how it would be done. If it were possible I would be extremely surprised if it was faster or easier than just running the code.
As I’ve stated before, no AI can predict its own decisions in that sense (i.e. in detail, before it has made them.) Knowing its source code doesn’t help; it has to run the code in order to know what result it gets.
I suggest that it can but it is totally pointless for it to do so.
Things can be proved from source code without running it. This applies to any source code, including that of oneself. Again, it doesn’t seem a particularly useful thing to do in most cases.
I’m wondering why this got downvoted—it’s true!
For example if the top-level decision function of an AI is:
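def DecideWhatToDo(self, environment):
    if environment.IsUnderWater():
        return actions.SELF_DESTRUCT
    else:
        return self.EmergentComplexStochasticDecisionFunction(environment)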
… and the AI doesn’t self-modify, then it can predict that it will decide to self-destruct if it falls in the water, only by analysing the code, without running it (also assuming, of course, that it is good enough at code analysis).
Of course, you can imagine AIs that can’t predict any of their decisions, and as wedrifid says, in most non-trivial cases, most probably wouldn’t be able to.
(This may be important, because having provable decisions in certain situations could be key to cooperation in prisoner’s-dilemma-type situations.)
Of course that is predictable, but that code wouldn’t exist in any intelligent program, or at least it isn’t an intelligent action; predicting it is like predicting that I’ll die if my brain is crushed.
Unknowns, we’ve been over this issue before. You don’t need to engage in perfect prediction in order to be able to usefully predict. Moreover, even if you can’t predict everything you can still examine and improve specific modules. For example, if an AI has a module for factoring integers using a naive, brute-force factoring algorithm, it could examine that and decide to replace it with a quicker, more efficient module for factoring (that maybe used the number field sieve for example). It can do that even though it can’t predict the precise behavior of the module without running it.
I certainly agree that an AI can predict some aspects of its behavior.
That’s also because this is a simplified example, merely intended to provide a counter-example to your original assertion.
Agreed, it isn’t an intelligent action, but if you start saying intelligent agents can only take intelligent decisions, then you’re playing No True Scotsman.
I can imagine plenty of situations where someone might want to design an agent that takes certain unintelligent decisions in certain circumstances, or an agent that self-modifies in that way. If an agent can not only make promises, but also formally prove by showing its own source code that those promises are binding and that it can’t change them, then it may be at an advantage for negotiations and cooperation over an agent that can’t do that.
So “stupid” decisions that can be predicted by reading one’s own source code aren’t a feature that I consider unlikely in the design-space of AIs.
I would agree with that. But I would just say that the AI would experience doing those things (for example keeping such promises) as we experience reflex actions, not as decisions.
Why not?
In what way is it like that, and how is that relevant to the question?
It’s like that precisely because it is easily predictable; as I said in another reply, an AI will experience its decisions as indeterminate, so anything it knows in advance in such a determinate way will not be understood as a decision, just as I don’t decide to die if my brain is crushed, but I know that will happen. In the same way the AI will merely know that it will self-destruct if it is placed under water.
From this, it seems like your argument for why this will not appear in its decision algorithm, is simply that you have a specific definition for “decision” that requires the AI to “understand it as a decision”. I don’t know why the AI has to experience its decisions as indeterminate (indeed, that seems like a flawed design if its decisions are actually determined!).
Rather, any code that leads from inputs to a decision should be called part of the AI’s ‘decision algorithm’ regardless of how it ‘feels’. I don’t have a problem with an AI ‘merely knowing’ that it will make a certain decision. (and be careful - ‘merely’ is an imprecise weasel word)
It isn’t a flawed design because when you start running the program, it has to analyze the results of different possible actions. Yes, it is determined objectively, but it has to consider several options as possible actions nonetheless.
This is false for some algorithms, and so I imagine it would be false for the entirety of the AI’s source code. For example (ANSI C):
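/* (illustrative stand-in; any snippet whose result is statically obvious would do) */
int i = 0;
int j;
for (j = 0; j < 5; j++)
    i = i + 1;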
I know that i is equal to 5 after this code is executed, and I know that without executing the code in any sense.
Now, I am not certain about this, but we have to examine that code before we know its outcome.
While this isn’t “Running” the code in the traditional sense of computation as we are familiar with it today, it does seem that the code is sort of run by our brains as a simulation as we scan it.
As a sort of meta-process, if you will...
I could be so wrong about that though… eh...
Also, that code is useless really, except maybe as a wait function… It doesn’t really do anything (Not sure why Unknowns gets voted up in the first post above, and down below)...
Also, leaping from some code to the Entirety of an AI’s source code seems to be a rather large leap.
“some code” is part of “the entirety of an AI’s source code”—if it doesn’t need to execute some part of the code, then it doesn’t need to execute the entirety of the code.
That isn’t an algorithm for making decisions.
No, but note the text:
It is, incidentally, trivial to alter the code into an algorithm for making decisions and also simple to make it an algorithm that can predict it’s decision before making it.
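/* fragment of the AI's decision routine; assumes <limits.h> for ULONG_MAX */
do_self_analysis();
unsigned long i;
unsigned long j;
for (i = 0; i < ULONG_MAX - 1; i++)
    for (j = 0; j < ULONG_MAX - 1; j++)
        ;  /* empty body: the nested loops just burn time before the decision */
if (i > 2) return ACTION_DEFECT;
return ACTION_COOPERATE;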
The do_self_analysis method (do they call them methods or functions? Too long since I’ve used C) can browse the entire source code of the AI, determine that the above piece of code is the algorithm for making the relevant decision, prove that do_self_analysis doesn’t change anything or perform any output and does return in finite time and then go on to predict that the AI will behave like a really inefficient defection rock. Quite a while later it will actually make the decision to defect.
All rather pointless but the concept is proved.
When the AI runs the code for predicting it’s action, it will have the subjective experience of making the decision. Later “it will actually make the decision to defect” only in the sense that the external result will come at that time. If you ask it when it made it’s decision, it will point to the time when it analyzed the code.
You are mistaken. I consider the explanations given thus far by myself and others sufficient. (No disrespect intended beyond that implicit in the fact of disagreement itself and I did not vote on the parent.)
The explanations given say nothing about the AI’s subjective experience, so they can’t be sufficient to refute my claim about that.
Consider my reply to be to the claim:
If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong.
I avoided commenting on the ‘subjective experience’ side of things because I thought it was embodying a whole different kind of confusion. It assumes that the AI executes some kind of ‘subjective experience’ reasoning that is similar to that of humans (or some subset thereof). This quirk relies on lacking any strong boundaries between thought processes. People usually can’t predict their decisions without making them. For both the general case and the specific case of the code I gave a correctly implemented module that could be given the label ‘subjective experience’ would see the difference between prediction and analysis.
I upvoted the parent for the use of it’s. I usually force myself to write its in that context but cringe while doing so. The syntax of the English language is annoying.
Really? Do you also cringe when using theirs, yours, ours, mine, and thine?
Mine and thine? They don’t belong in the category. The flaw isn’t that all words about possession should have an apostrophe. The awkwardness is that the pattern of adding the “s” to the end to indicate ownership is the same from “Fred’s” to “its” but arbitrarily not punctuated in the same way. The (somewhat obsolete) “ine” is a distinct mechanism of creating a possessive pronoun which while adding complexity at least doesn’t add inconsistency.
As for “theirs, yours and ours”, they prompt cringes in decreasing order of strength (in fact, it may not be a coincidence that you asked in that order). Prepend “hers” to the list and append “his”. “Hers” and “theirs” feel more cringe-worthy, as best as I can judge, because they are closer in usage to “Fred’s” while “ours” is at least a step or two away. “His” is a special case in as much as it is a whole different word. It isn’t a different mechanism like “thine” or “thy” but it isn’t “hes” either. I have never accidentally typed “hi’s”.
You’re just reading the wrong pattern. There are simple, consistent rules:
When making a noun possessive, add ’s (EDIT: use the appropriate possessive form with an apostrophe).
When making a pronoun possessive, use the appropriate possessive pronoun (none of which have an apostrophe).
(EDIT: Leaving out “Jesus’” for the moment...)
No, I’m not reading the wrong pattern. I’m criticising the pattern in terms of the objective and emotional-subjective criteria that I use for evaluating elements of languages and communication patterns in general. I am aware of the rules in question and more than capable of implementing them and the hundreds of other rules that go into making our language.
The undesirable aspect of this part of the language is this: It is not even remotely coincidental that we add the “ss” sound to the end of a noun to make it possessive and that most modern possessive pronouns are just the pronoun with a “ss” sound at the end. Nevertheless, the rule is “use the appropriate possessive pronoun”… that’s a bleeding lookup table! A lookup table for something that is nearly always an algorithmic modification is not something I like in a language design. More importantly, when it comes to the spoken word the rule for making *nouns possessive is “almost always add ‘ss’”. ‘Always’ is better than ‘almost always’ (but too much to ask). Given ‘almost always’, the same kind of rule for converting them all to written form would be far superior.
According to subjectively-objective criteria, this feature of English sucks. If nothing else it would be fair to say that my ‘subjective’ is at least not entirely arbitrary, whether or not you share the same values with respect to language.
Yes, this is definitely a difference in how we perceive the language. I don’t see any inherent problem with a lookup table in the language, given that most of the language is already lookup tables in the same sense (what distinguishes ‘couch’ from ‘chair’, for instance). And it would not occur to me to have a rule for “*nouns” rather than the actual separate rules for nouns and pronouns. Note also that pronouns have possessive adjective and possessive pronoun forms, while nouns do not. They’re an entirely different sort of animal.
So I would not think to write “It’s brand is whichever brand is it’s” instead of “its brand is whichever brand is its” any more than I would think to write “me’s brand is whichever brand is me’s” (or whatever) instead of “my brand is whichever brand is mine”.
I suspect the difference extends down to the nature of our thought processes. Let me see… using Myers-Briggs terminology and from just this conversation I’m going to guess ?STJ.
I tend to test as INTP/INTJ depending, I think, on whether I’ve been doing ethics lately. But then, I’m pretty sure it’s been shown that inasmuch as that model has any predictive power, it needs to be evaluated in context… so who knows about today.
There’s one more rule—if the noun you’re making possessive ends with an s (this applies to both singular and plural nouns), just add an apostrophe.
That’s not exactly true, and I didn’t think it had terribly much bearing to my point on account of we’re talking about pronouns, but I’ll amend the parent.
Indeed, and while we’re on the subject of idiolects: my preference is for the spelling to follow the pronunciation. Hence either “Charles’s tie” or “Charles’ tie” is correct, depending on how you want it to be pronounced (in this case I usually prefer the latter option, but the meter of the sentence may sometimes make the other a better choice).
“If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong.”
I use “decision” precisely to refer to the experience that we have when we make a decision, and this experience has no mathematical definition. So you may believe yourself right about this, but you don’t have (and can’t have) any mathematical proof of it.
(I corrected this comment so that it says “mathematical proof” instead of proof in general.)
I think most people on LessWrong are using “decision” in the sense used in Decision Theory.
Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.
It also does his initial position a disservice. Rereading the original claim with the professed intended meaning changes it from “not quite technically true” to, basically, nonsense (at least in as much as it claims to pertain to AIs).
I don’t think my definition is either exotic or inconsistent with the sense used in decision theory.
You defined decision as a mathematically undefinable experience and suggested that it cannot be subject to proofs. That isn’t even remotely compatible with the sense used in decision theory.
It is compatible with it as an addition to it; the mathematics of decision theory does not have decisions happening at particular moments in time, but it is consistent with decision theory to recognize that in real life, decisions do happen at particular moments.
If you believe that we can’t have any proof of it, then you’re wasting our time with arguments.
You might have a proof of it, but not a mathematical proof.
Also note that your comment that I would be “wasting our time” implies that you think that you couldn’t be wrong.
How many legs does an animal have if I call a tail a leg and believe all animals are quadrupeds?
How many legs does a dog have if I call a tail a leg?
No, but surely some chunks of similarly-transparent code would appear in an algorithm for making decisions. And since I can read that code and know what it outputs without executing it, surely a superintelligence could read more complex code and know what it outputs without executing it. So it is patently false that in principle the AI will not be able to know the output of the algorithm without executing it.
Any chunk of transparent code won’t be the code for making an intelligent decision. And the decision algorithm as a whole won’t be transparent to the same intelligence, but perhaps only to something still more intelligent.
Do you have a proof of this statement? If so, I will accept that it is not in principle possible for an AI to predict what its decision algorithm will return without executing it.
Of course, logical proof isn’t entirely necessary when you’re dealing with Bayesians, so I’d also like to see any evidence that you have that favors this statement, even if it doesn’t add up to a proof.
It’s not possible to prove the statement because we have no mathematical definition of intelligence.
Eliezer claims that it is possible to create a superintelligent AI which is not conscious. I disagree with this because it is basically saying that zombies are possible. True, he would say that he only believes that human zombies are impossible, not that zombie intelligences in general are impossible. But in that case he has no idea whatsoever what consciousness corresponds to in the physical world, and in fact has no reason not to accept dualism.
My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don’t know what you’re going to do before you do it; when you experience knowing what you’re going to do, you experience deciding to do it.
Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.
You may still disagree, but please note that this is entirely consistent with everything you and wedrifid have argued, so his claim that I have been refuted is invalid.
As I recall, Eliezer’s definition of consciousness is borrowed from GEB: it’s when the mind examines itself, essentially. That has very real physical consequences, so the idea of non-conscious AGI doesn’t support the idea of zombies, which require consciousness to have no physical effects.
Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.
I’m not sure I am parsing correctly what you’ve written. It may rest with your use of the word “intelligence”- how are you defining that term?
You could replace it with “AI.” Any AI can examine itself, so any AI will be conscious, if consciousness is or results from examining itself. I agree with this, but Eliezer does not.
Yes we do: the ability to apply optimization pressure in a wide variety of environments, the platonic ideal of which is AIXI.
Can you please provide a link?
http://lesswrong.com/lw/x5/nonsentient_optimizers/
Thank you. I agree with Eliezer for reasons touched on in my comments to simplicio’s Consciousness of simulations & uploads thread.
I don’t have any problem granting that “any intelligent being will be conscious”, nor that “It will have the subjective experience of making decisions”, though that might just be because I don’t have a formal specification of either of those—we might still be talking past each other there.
I don’t grant this. Can you elaborate?
I’m not sure that’s true, or in what sense it’s true. I know that if someone offered me a million dollars for my shoes, I would happily sell them my shoes. Coming to that realization didn’t feel to me like the subjective feeling of deciding to sell something to someone at the time, as compared to my recollection of past transactions.
Okay, that follows from the previous claim.
If I were moved to accept your previous claim, I would now be skeptical of the claim that “a block of code will not cause it to feel the sensation of deciding”. Especially since we’ve already shown that some blocks of code would be capable of predicting some decision algorithms.
This follows, but I draw the inference in the opposite direction, as noted above.
I would distinguish between “choosing” and “deciding”. When we say “I have some decisions to make,” we also mean to say that we don’t know yet what we’re going to do.
On the other hand, it is sometimes possible for you to have several options open to you, and you already know which one you will “choose”. Your example of the shoes and the million dollars is one such case; you could choose not to take the million dollars, but you would not, and you know this in advance.
Given this distinction, if you have a decision to make, as soon as you know what you will or would do, you will experience making a decision. For example, presumably there is some amount of money ($5? $20? $50? $100? $300?) that could be offered for your shoes such that you are unclear whether you should take the offer. As soon as you know what you would do, you will feel yourself “deciding” that “if I was offered this amount, I would take it.” It isn’t a decision to do something concretely, but it is still a decision.