Is anyone here working on the problem of parenting/educating AIs?
It seems someone has downvoted you for not being familiar with Eliezer’s work on AI. Basically, this is overly anthropomorphic. It is one of our goals to ensure that an AI can progress from a ‘seed AI’ to a superintelligent AI without anything going wrong, but, in practice, we’ve observed that using metaphors like ‘parenting’ confuses people too much to make progress, so we avoid it.
Don’t worry about downvotes, they do not matter.
I wasn’t using parenting as a metaphor. I meant it quite literally (only the educational part, not the diaper changing).
One of the fundamental attributes of an AI is that it’s a program which can learn new things.
Humans are also entities that learn new things.
But humans, left alone, don’t fare so well. Helping people learn is important, especially children. This avoids having everyone reinvent the wheel.
The parenting issue therefore must be addressed for AI. I am familiar with the main ideas of the kind of AI work you guys do, but I have not found the answer to this.
One possible way to address it is to say the AI will reinvent the wheel. It will have no help but just figure everything out from scratch.
Another approach would be to program some ideas into the AI (changeable, or not, or some of each), and then leave it alone with that starting point.
Another approach would be to talk with the AI, answer its questions, lecture it, etc… This is the approach humans use with their children.
Each of these approaches has various problems with it which are non-trivial to solve.
Make sense so far?
When humans hear parenting, they think of the human parenting process. Describe the AI as ‘learning’ and the humans as ‘helping it learn’. This gets us closer to the idea of humans learning about the universe around them, rather than being raised as generic members of society.
Well, the point of downvotes is to discourage certain behaviour, and I agree that you should use terminology that we have found less likely to cause confusion.
AIs don’t necessarily have so much of a problem with this. They learn very differently than humans: http://lesswrong.com/lw/jo/einsteins_arrogance/ , http://lesswrong.com/lw/qj/einsteins_speed/ , http://lesswrong.com/lw/qk/that_alien_message/
This is definitely an important problem, but we’re not really at the stage where it is necessary yet. I don’t see how we could make much progress on how to get an AI to learn without knowing the algorithms that it will use to learn.
Not all humans. Not me. Is that not a bias?
I’m not discouraged by downvotes given without any argument, just on the basis of someone’s judgement, without knowing the reason. I don’t think I should be; that would be irrational. I’m surprised that this community wants to encourage people to conform to the collective opinion of others as expressed by votes.
OK, I think I see where you are coming from. However, there is only one known algorithm that learns (creates knowledge). It is, in short, evolution. We should expect an AI to use it; we shouldn’t expect a brand new solution to this hard problem (historically there have been very few candidate solutions proposed, most not at all promising).
The implementation details are not very important because the result will be universal, just like people are. This is similar to how the implementation details of universal computers are not important for many purposes.
Are you guys familiar with these concepts? There is important knowledge relevant to creating AIs which your statement seems to me to overlook.
Yes, that would be a bias. Note that this kind of bias is not always explicitly noticed.
As a general rule, if I downvote, I either reply to the post, or it is something that should be obvious to someone who has read the main sequences.
No, there is another: the brain. It is also much faster than evolution, an advantage I would want a FAI to have.
You are unfamiliar with the basic concepts of evolutionary epistemology. The brain internally does evolution of ideas.
Why is it that you guys want to make AI but don’t study relevant topics like this?
You’re conflating two things. Biological evolution is a very specific algorithm, with well-studied mathematical properties. ‘Evolution’ in general just means any change over time. You seem to be using it in an intermediate sense, as any change that proceeds through reproduction, variation, and selection, which is also a common meaning. This, however, is still very broad, so there’s very little that you can learn about an AI just from knowing “it will come up with many ideas, mostly based on previous ones, and reject most of them”. This seems less informative than “it will look at evidence and then rationally adjust its understanding”.
There’s an article related to this: http://lesswrong.com/lw/l6/no_evolutions_for_corporations_or_nanodevices/
Eliezer has studied cognitive science. Those of us not working directly with him have very little to do with AI design. Even Eliezer’s current work is slightly more background theory than AI itself.
I’m not conflating them. I did not mean “change over time”.
There are many things we can learn from evolutionary epistemology. Its seeming broad to you does not prevent that. You would do better to ask what good it is instead of guessing that it is no good.
For one thing it connects with meme theory.
A different example is that it explains misunderstandings when people communicate. Misunderstandings are extremely common because communication involves:
1) guessing what the other person is trying to say
2) selecting between those guesses with criticism
3) making more guesses which are variants of previous guesses
4) more selection
5) and so on
This explanation helps us see how easily communication can go wrong. It raises interesting questions, such as why so much communication doesn’t go wrong. It refutes various myths, such as the idea that people absorb their teachers’ lectures a bit like sponges.
It matters. And other explanations of miscommunication are worse.
But that isn’t the topic I was speaking of. I meant evolutionary epistemology, which, by the way, I know Eliezer has not studied much, because he isn’t familiar with one of its major figures (Popper).
I don’t know enough about evolutionary epistemology to evaluate the usefulness and applicability of its ideas.
How was evolutionary epistemology tested? Are there experiments or just introspection?
Evolution, in this sense, is a largely philosophical theory (distinct from the scientific theory about the history of life on earth). It is a theory of epistemology. Some parts of epistemology technically depend on the laws of physics, but it is generally researched separately from physics. There has not been any scientific experiment to test it which I consider important, but I could conceive of some, because if you specified different and perverse laws of physics you could break evolution. In a different sense, evolution is tested constantly, in that the laws of physics and the evidence we see around us every day are not that perverse-but-conceivable physics which would break evolution.
The reason I accept evolution (again I refer to the epistemological theory about how knowledge is created) is that it is a good explanation, and it solves an important philosophical problem, and I don’t know anything wrong with it, and I also don’t know any rivals which solve the problem.
The problem has a long history. Where does “apparent design” come from? Paley gave an example of finding a watch in nature, which he said you know can’t have gotten there by chance. That’s correct: the watch has knowledge (aka apparent design, or purposeful complexity, or many other terms). The watch is adapted “to a purpose”, as some people put it (I’m not really a fan of the purpose terminology. But it’s adapted! And I think it gets the point across OK.)
Paley then guessed as follows: there is no possible solution to the origins of knowledge other than “A designer (God) created it”. This is a very bad solution even pre-Darwin because it does not actually solve the problem. The designer itself has knowledge, adaptation to a purpose, whatever. So where did it come from? The origin is not answered.
Since then, the problem has been solved by the theory of evolution and nothing else. And it applies to more than just watches found in nature, or plants and animals: it also applies to human knowledge. The answer “intelligence did it” is no better than “God did it”. How does intelligence do it? The only known answer is: by evolution.
The best thing to read on this topic is The Beginning of Infinity by David Deutsch which discusses Popperian epistemology, evolution, Paley’s problem and its solution, and also has two chapters about meme theory which give important applications.
You can also find some, e.g. here: http://fallibleideas.com/evolution-and-knowledge
Also here: http://fallibleideas.com/tradition (Deutsch discusses static and dynamic memes and societies. I discuss “traditions” rather than “memes”. It’s quite similar stuff.)
What? Evolutionary epistemology seems to be about how the mind works, independent of what philosophical status is accorded to the thoughts. Surely it could be tested just by checking whether the mind actually develops ideas in the way it is predicted to.
If you want to check how minds work, you could do that. But that’s very hard. We’re not there yet. We don’t know how.
How minds work is a separate issue from evolutionary epistemology. Epistemology is about how knowledge is created (in the abstract, not in human minds specifically). If it turns out there is another way, it wouldn’t upset the claim that evolution would create knowledge if done in minds.
There’s no reason to think there is another way. No argument that there is. No explanation of why to expect there to be. No promising research on the verge of working one out. Shrug.
I see. I thought that evolutionary epistemology was a theory of human minds, though I know that that technically isn’t epistemology. Does evolutionary epistemology describe knowledge about the world, mathematical knowledge, or both (I suspect you will say both)?
It describes the creation of any type of knowledge. It doesn’t tell you the specifics of any field itself, but doing it helps you learn them.
So, you’re saying that in order to create knowledge, there has to be copying, variation, and selection. I would agree with the first two, but not necessarily the third. Consider a formal axiomatic system. It produces an ever-growing list of theorems, but none are ever selected any more than others. Would you still consider this system to be learning?
With deduction, all the consequences are already contained in the premises and axioms. Abstractly, that’s not learning.
When human mathematicians do deduction, they do learn stuff, because they also think about stuff while doing it, they don’t just mechanically and thoughtlessly follow the rules of math.
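To make the point about deduction concrete, here is a minimal sketch in Python (the axioms and inference rules are invented solely for illustration) of a mechanical derivation procedure. Everything it outputs is already fixed by the axioms and rules it was handed; running it adds nothing that wasn’t supplied up front.

```python
# Toy forward-chaining deduction: derive everything that follows from
# the given axioms via the given rules. Axioms and rules are made up
# purely for illustration.

def deductive_closure(axioms, rules):
    """Return every statement derivable from the axioms via the rules.

    axioms: set of statements (strings).
    rules:  list of (premises, conclusion) pairs, meaning
            "if all premises are derived, the conclusion is derived".
    """
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

axioms = {"A", "B"}
rules = [({"A", "B"}, "C"),
         ({"C"}, "D")]

# Prints {'A', 'B', 'C', 'D'} (in some order): nothing beyond what was put in.
print(deductive_closure(axioms, rules))
```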
So induction (or probabilistic updating, since you said that Popper proved it not to be the same as whatever philosophers call ‘induction’) isn’t learning either, because the conclusions are contained in the priors and observations?
If the axiomatic system was physically implemented in a(n ever-growing) computer, would you consider this learning?
The idea of induction is that the conclusions are NOT logically contained in the observations (that’s why it is not deduction).
If you make up a prior from which everything deductively follows, and everything else is mere deduction from there, then all of your problems and mistakes are in the prior.
No. Learning is creating new knowledge. That would simply be human programmers putting their own knowledge into a prior, and then the machine not creating any new knowledge that wasn’t in the prior.
The correct method of updating one’s probability distributions is contained in the observations. P(H|E) = P(H)P(E|H)/P(E) .
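To illustrate the formula above with concrete numbers, here is a minimal sketch; the probabilities are invented purely for illustration.

```python
# Bayes' rule: P(H|E) = P(H) * P(E|H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
# All numbers below are made up for illustration.

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_h * p_e_given_h / p_e

# Prior belief in H of 1%, evidence that is 90% likely under H
# and 5% likely otherwise:
print(posterior(0.01, 0.9, 0.05))  # ~0.154
```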
So how could evolutionary epistemology be relevant to AI design?
AIs are programs that create knowledge. That means they need to do evolution. That means they need, roughly, a conjecture generator, a criticism generator, and a criticism evaluator. The conjecture generator might double as the criticism generator since a criticism is a type of conjecture, but it might not.
The conjectures need to be based on the previous conjectures (not necessarily all of them, but some). That makes it replication with variation. The criticism is the selection.
Any AI design that completely ignores this is, imo, hopeless. I think that’s why the AI field hasn’t really gotten anywhere. They don’t understand what they are trying to make, because they have the wrong philosophy (in particular, the wrong explanations; I don’t mean math or logic).
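For concreteness, here is a minimal toy sketch, in Python, of the loop described above: conjectures are varied copies of earlier conjectures, criticism looks for clashes with the available observations, and selection discards the most heavily criticized candidates. Everything in it (the hidden rule, the scoring, the parameters) is invented for illustration; it is not an AI design, only the shape of the loop.

```python
import random

# Conjectures are (a, b) pairs proposing the rule y = a*x + b.
# Criticism measures how badly a conjecture clashes with the observations.
# Selection keeps the least-criticized conjectures; new conjectures are
# variants of the survivors (replication with variation).

observations = [(x, 2 * x + 3) for x in range(10)]  # hidden rule: y = 2x + 3

def criticism(conjecture):
    """Total disagreement between the conjecture and the observations."""
    a, b = conjecture
    return sum(abs(a * x + b - y) for x, y in observations)

def vary(conjecture):
    """A new conjecture based on an old one, with a small random change."""
    a, b = conjecture
    return (a + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1]))

# Initial conjectures are blind guesses.
conjectures = [(random.randint(-10, 10), random.randint(-10, 10)) for _ in range(20)]

for generation in range(200):
    conjectures.sort(key=criticism)        # selection: least-criticized first
    if criticism(conjectures[0]) == 0:     # an unrefuted conjecture survives
        break
    survivors = conjectures[:5]
    conjectures = survivors + [vary(random.choice(survivors)) for _ in range(15)]

print("best surviving conjecture:", conjectures[0], "after", generation, "generations")
```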
Could you explain where AIXI does any of that?
Or could you explain where a Bayesian spam filter does that?
Note that there are AI approaches which do do something close to what you think an AI “needs”. For example, some of Simon Colton’s work can be thought of in a way roughly like what you want. But it is a mistake to think that such an entity needs to do that. (Some of the hardcore Bayesians make the same mistake in assuming that an AI must use a Bayesian framework. That something works well as a philosophical approach is not the same claim as that it should work well in a specific setting where we want an artificial entity to produce certain classes of systematic reliable results.)
Those aren’t AIs. They do not create new knowledge. They do not “learn” in my sense—of doing more than they were programmed to. All the knowledge is provided by the human programmer—they are designed by an intelligent person and to the extent they “act intelligent” it’s all due to the person providing the thinking for it.
I’m not sure this is at all well-defined. I’m curious, what would make you change your mind? If for example, Colton’s systems constructed new definitions, proofs, conjectures, and counter-examples in math would that be enough to decide they were learning?
How about it starts by passing the Turing test?
Or: show me the code, and explain to me how it works, and how the code doesn’t contain all the knowledge the AI creates.
Could you explain how this is connected to the issue of making new knowledge?
This seems a bit like proving a negative. I would suggest you look, for a start, at Simon Colton’s paper in the Journal of Integer Sequences, which uses a program that operates in a way very close to the way you think an AI would need to operate, in terms of making conjectures and trying to refute them. I don’t know if the source code is easily available. It used to be on Colton’s website but I don’t see it there anymore; if his work seems at all interesting to you, you can presumably email him requesting a copy. I don’t know how to show that the AI “doesn’t contain all the knowledge the AI creates” aside from the fact that the system constructed concepts and conjectures in number theory which had not previously been constructed. Moreover, Colton’s own background in number theory is not very heavy, so it is difficult to claim that he’s importing his own knowledge into the code. If you define more precisely what you mean by the code containing the knowledge, I might be able to answer that further. Without a more precise notion it isn’t clear to me how to respond.
Holding a conversation requires creating knowledge of what the other guy is saying.
In deduction, you agree that the conclusions are logically contained in the premises and axioms, right? They aren’t something new.
In a spam filter, a programmer figures out how he wants spam filtered (he has the idea), then he tells the computer to do it. The computer doesn’t figure out the idea or any new idea.
With biological evolution, for example, we see something different. You get stuff out, like cats, which weren’t specified in advance. And they aren’t a trivial extension; they contain important knowledge, such as the knowledge of optics that makes their eyes work. This is why “Where can cats come from?” has been considered an important question (people want an explanation of the knowledge, which is sometimes called “apparent design”), while “Where can rocks come from?” is not in the same category of question (it does have some interest for other reasons).
With people, we see something similar: they create ideas that aren’t in their genes and weren’t told to them by their parents or anyone else. That includes abstract ideas that aren’t the summation of observations. They sometimes create ideas no one ever thought of before. They create new ideas.
An AI (an AGI, you call it?) should be like a person: it should create new ideas which are not in its “genes” (its programming). If someone actually writes an AI, they will understand how it works and they can explain it, and we can use their explanation to judge whether they “cheated” or not (whether they, e.g., hard-coded some ideas into the program and then said the AI invented them).
Ok. So to make sure I understand this claim. You are asserting that mathematicians are not constructing anything “new” when they discover proofs or theorems in set axiomatic systems?
Are genetic algorithm systems then creating something new by your definition?
Different concepts. An artificial intelligence is not (necessarily) a well-defined notion. An AGI is an artificial general intelligence, essentially something that passes the Turing test. Not the same concept.
I see no reason to assume that a person will necessarily understand how an AGI they constructed works. To use the most obvious hypothetical, someone might make a neural net modeled very closely after the human brain that functions as an AGI without any understanding of how it works.
When you “discover” that 2+1 = 3, given premises and axioms, you aren’t discovering something new.
But working mathematicians do more than that. They create new knowledge. It includes:
1) They learn new ways to think about the premises and axioms.
2) They do not publish deductively implied facts unselectively or randomly; they choose the ones that they consider important. By making these choices they are adding content not found in the premises and axioms.
3) They make choices between different possible proofs of the same thing. Again, where they make choices they are adding something, based on their own non-deductive understanding.
4) When mathematicians work on proofs, they also think about other things as they go, just as experimental scientists doing fairly mundane tasks in a lab will, at the same time, think and make the work interesting with their thoughts.
They could be. I don’t think any exist yet that do. For example I read a Dawkins paper about one. In the paper he basically explained how he tweaked the code in order to get the results he wanted. He didn’t, apparently, realize that it was him, not the program, creating the output.
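It isn’t stated which Dawkins program this refers to; the best-known example of the genre is the ‘weasel’ demonstration from The Blind Watchmaker. A minimal sketch of that style of program, written here from scratch for illustration, makes the point at issue visible: the target string, i.e. the criterion of success, is supplied by the programmer.

```python
import random
import string

# Weasel-style selection demo (illustrative sketch only).
# Note that TARGET, the definition of success, comes from the programmer.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(candidate):
    """Selection criterion: how many characters already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Replication with variation: copy the string, occasionally changing a letter."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while best != TARGET and generation < 10_000:
    generation += 1
    offspring = [mutate(best) for _ in range(100)]
    best = max(offspring, key=score)   # keep the closest match to the target

print(f"best string after {generation} generations: {best}")
```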
By “AI” I mean AGI. An intelligence (like a person) which is artificial. Please read all my prior statements in light of that.
Well, OK, but they’d understand how it was created, and could explain that. They could explain what they know about why it works (it copies what humans do). And they could also make the code public and discuss what it doesn’t include (e.g. hard-coded special cases, except for the three included on purpose, with an explanation of why those are there). That’d be pretty convincing!