I don’t agree with either meaning of epistemology. The traditional meaning of epistemology, which I accept, is the study of knowledge, and in particular questions like What is knowledge? and How do we sort out good ideas from bad ideas? and How is knowledge created?
Both of your definitions of the field have Bayesian ways of thinking already built into them. They are biased.
If you don’t want Bayesianism to be an epistemology, that would be OK with me. But, for example, Yudkowsky claimed that Bayesianism was dethroning Popperism. To do that, it has to be an epistemology and deal with the same questions Popper addresses.
Popperian epistemology does not offer any rulebook. It says rulebooks are an authoritarian and foundationalist mistake, which comes out of the attempt to find a source of justification. (Well, the psychological claims are not important and not epistemology. But Popper did occasionally say things like that, and I think it’s true)
I will take a look at your links, thanks. I respect that author a lot for this post on why heritability studies are wrong:
http://cscs.umich.edu/~crshalizi/weblog/520.html
(1). The problem is, it’s pretty hard to determine whether a given answer to (1) is right, wrong or meaningless, when it’s composed of mere words (cognitive black boxes) and doesn’t automatically translate to an answer for (2). So most LWers think that (2) is really the right question to ask, and any non-confused answer to (2) ought to dissolve any leftover confusion about (1).
Note that Popperians think there is no algorithm that automatically arrives at rational beliefs. There’s no privileged road to truth. AIs will not be more rational than people. OK they usually won’t have a few uniquely human flaws (like, umm, caring if they are fat). But there is no particular reason to expect this stuff will be replaced with correct ideas. Whatever AIs think of instead will have its own mistakes. It’s the same kind of issue as if some children were left on a deserted island to form their own culture. They’ll avoid various mistakes from our culture, but they will also make new ones. The rationality of AIs, just like the rationality of the next generation, depends primarily on the rationality of the educational techniques used (education is closely connected to epistemology in my view, because it’s about learning, i.e. creating knowledge. Popperian epistemology has close connections to educational theory which led to the philosophy “Taking Children Seriously” by David Deutsch).
I’m willing to reformulate like this:
1) How can a human sort out good ideas from bad ideas?
2) How can a computer program sort out good ideas from bad ideas?
and the subsequent paragraph can stay unchanged. Whatever recipe you’re proposing to improve human understanding, it ought to be “reductionist” and apply to programs too, otherwise it doesn’t meet the LW standard. Whether AIs can be more rational than people is beside the point.
I don’t think you understood the word “reductionist”. Reductionism doesn’t mean that things can be reduced to lower levels but that they should be; it actually objects to high-level statements and considers them worse. There’s no need for reductionism of that kind for ideas to be applicable to low-level issues like being programmable.
Yes, Popperian epistemology can be used for an AI with those reformulations (at least, I don’t know of any argument that it couldn’t).
Why aren’t we there yet? There aren’t a lot of Popperians, Popperian philosophy does not seek to be formal (which makes it harder to translate into code), and most effort has been directed at human problems (including criticizing large mistakes that plague the field of philosophy, affect regular people, and permeate our culture). The epistemology problems important to humans are not all the same as the ones important to writing an AI. For an AI you need to worry about what information to start it with. Humans are born with information, and we don’t yet have the science to control that, so there is only limited reason to worry about it. Similarly, there is the issue of how to educate a very young child. No one knows the answer to that in words; people can do it by following cultural traditions, but they can’t explain it. But for AIs, how to deal with the very young stages is important.
Broadly, an AI will need a conjecture generator, a criticism generator, and a criticism evaluator. Humans have these built in. So again the problems for AI are somewhat different from what’s important for, e.g., explaining epistemology to human adults.
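To make that division of labor concrete, here’s a rough sketch in Python. Every name and type in it is made up for illustration; it’s not a worked-out design, just one way of carving up the three components.

```python
# A rough sketch of the three components named above, purely for illustration.
# None of these names refer to an existing library or a settled design.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Conjecture:
    """A guessed solution to some problem, held as an explicit object."""
    content: str


@dataclass
class Criticism:
    """An argument that a particular conjecture contains an error."""
    target: Conjecture
    argument: str


# The three components, expressed as plain callables:
ConjectureGenerator = Callable[[str], List[Conjecture]]        # problem -> guesses
CriticismGenerator = Callable[[Conjecture], List[Criticism]]   # guess -> objections
CriticismEvaluator = Callable[[Criticism], bool]               # objection -> does it stand up?
```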
You may think the details of how these work in humans are crucially important. The reason they aren’t is that humans are universal knowledge creators, so the implementation details don’t affect much about our lives.
It’s still interesting to think about, and I do sometimes. I’ll try to present a few issues. In abstract terms we would be content with a random conjecture generator, and with sorting through infinitely many conjectures. But we can’t program it like that; it’s too slow. You need shortcuts. A big one is to generate new conjectures by taking old conjectures and making random but limited changes to them. How limited should the changes be? I don’t know how to quantify that.

Moving on, there is an issue of: do you wait until conjectures are created and then criticize them afterwards? Or do you program it in such a way that conjectures which would be refuted by a criticism are sometimes not generated in the first place, as a kind of optimization? I lean towards the second view, but I don’t know how to code it. I’m partial to the notion of using criticisms as filters on the set of possible conjectures. There’s no danger of getting stuck or losing universality if the filters can be disabled and modified as desired, and if they don’t prevent conjectures that would modify them.

That raises another issue: can people think themselves into a bad state they can’t get out of? I don’t know if that’s impossible or not. I don’t think it happens in practice (yes, people can be really dumb, but I don’t think their states are even close to impossible to get out of). If it were technically possible for an AI to get stuck, would that be a big deal? You can see here, perhaps, some of the ways I don’t care for rulebooks.
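Here’s a toy rendering of the “criticisms as filters” idea. Everything in it (the variation operator, the filter signature, the loop, the example filter) is invented purely for illustration; a real design would need far better variation and far richer criticism than this.

```python
import random
from typing import Callable, List

def vary(conjecture: str, max_edits: int = 2) -> str:
    """Make a new conjecture by a small random change to an old one.
    How small the changes should be is exactly the open question above;
    max_edits is an arbitrary placeholder, not an answer to that question."""
    words = conjecture.split()
    if not words:
        return conjecture
    for _ in range(random.randint(1, max_edits)):
        i = random.randrange(len(words))
        words[i] = random.choice([words[i][::-1], words[i].upper(), words[i] + "?"])
    return " ".join(words)

def generate(pool: List[str],
             filters: List[Callable[[str], bool]],
             rounds: int = 100) -> List[str]:
    """Generate-and-filter loop. Each filter encodes a standing criticism and
    returns True if a candidate conjecture survives it. The filters live
    outside the loop, so they can be disabled or modified at any time: they
    block candidates, but they never become a permanent cage."""
    pool = list(pool)
    for _ in range(rounds):
        candidate = vary(random.choice(pool))
        if all(f(candidate) for f in filters):
            pool.append(candidate)  # survived every current criticism
    return pool

# Example: one standing criticism expressed as a filter, which can be dropped later.
not_too_long = lambda c: len(c) < 200
ideas = generate(["food can be grown from seeds"], [not_too_long], rounds=20)
```

The point of keeping the filters external and removable is the one made above: they act as an optimization, not as a permanent constraint, so they don’t threaten universality or create a state the system can’t get out of.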
BTW one of the things our theory tells us is you can never build half an AI. It will jump straight from very minimal functionality to universal functionality, just as computer programming languages do. (The “jump to universality” is discussed by David Deutsch in The Beginning of Infinity). One thing this means is there is no way to know how far along we are—the jump could come at any time with one new insight.
Whether AIs can be more rational than people is beside the point.
Is it? What good are they, then? I have some answers to that, but nothing really huge. If they aren’t assumed to be super rational geniuses then they can’t be expected to quickly bring about the singularity or that kind of thing.
BTW one of the things our theory tells us is you can never build half an AI. It will jump straight from very minimal functionality to universal functionality, just as computer programming languages do. (The “jump to universality” is discussed by David Deutsch in The Beginning of Infinity). One thing this means is there is no way to know how far along we are—the jump could come at any time with one new insight.
That sounds pretty bizarre. So much for the idea of progress via better and better compression and modeling. However, it seems pretty unlikely to me that you actually know what you are talking about here.
Insulting my expertise is not an argument. (And given that you know nothing about my expertise, it’s silly too. Concluding that people aren’t experts because you disagree with them is biased and closed-minded.)
Are you familiar with the topic? Do you want me to give you a lecture on it? Will you read about it?
Reductionism doesn’t mean that things can be reduced to lower levels but that they should be; it actually objects to high-level statements and considers them worse.
Conventionally, and confusingly, the word reductionism has two meanings:
Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things, or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents.
I didn’t say it was false, just irrelevant to the current discussion of what we want from a theory of knowledge.
You could use math instead of code. To take a Bayesian example, the Solomonoff prior is uncomputable, but well-defined mathematically and you can write computable approximations to it, so it counts as progress in my book. To take a non-Bayesian example, fuzzy logic is formalized enough to be useful in applications.
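For reference, one standard way to write the Solomonoff prior (definitions vary slightly by author) is:

$$ M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} $$

where $U$ is a universal prefix Turing machine, the sum ranges over programs $p$ whose output begins with the string $x$, and $\ell(p)$ is the length of $p$ in bits. It’s uncomputable because the sum requires knowing which programs halt, but it can be approximated from below by running programs for longer and longer.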
Anyway, I think I understand where you’re coming from, and maybe it’s unfair to demand new LW-style insights from you. But hopefully you also understand why we like Bayesianism, and that we don’t even think of it at the level you’re discussing.
I understand some. But I think you’re mistaken and I don’t see a lot to like when judged by the standards of good philosophy. Philosophy is important. Your projects, like inventing an AI, will run into obstacles you did not foresee if your philosophy is mistaken.
Of course I have the same criticism about people in all sorts of other fields. Architects or physicists or economists who don’t know philosophy run into problems too. But claiming to have an epistemology, and claiming to replace Popper, those are things most fields don’t do. So I try to ask about it. Shrug.
I think I figured out the main idea of Bayesian epistemology. It is: Bayes’ theorem is the source of justification (this is intended as the solution to the problem of justification, which is a bad problem).
But when you start doing math, that claim gets ignored, and you get stuff right (at least given the premises, which are often not realistic, following the proud tradition of game theory and economics). So I should clarify: that’s the main philosophical claim. It’s not very interesting. Oh well.
I think I figured out the main idea of Bayesian epistemology. It is: Bayes’ theorem is the source of justification (this is intended as the solution to the problem of justification, which is a bad problem).
No. See here, where Eliezer specifically says that this is not the case. (“But first, let it be clearly admitted that the rules of Bayesian updating, do not of themselves solve the problem of induction.”)

I had already seen that.

Note that I said justification, not induction.

I don’t want to argue about this. If you like the idea, enjoy it. If you don’t, just forget about it and reply to something else I said.
Note that Popperians think there is no algorithm that automatically arrives at rational beliefs. There’s no privileged road to truth. AIs will not be more rational than people. OK they usually won’t have a few uniquely human flaws (like, umm, caring if they are fat). But there is no particular reason to expect this stuff will be replaced with correct ideas. Whatever AIs think of instead will have its own mistakes. It’s the same kind of issue as if some children were left on a deserted island to form their own culture. They’ll avoid various mistakes from our culture, but they will also make new ones. The rationality of AIs, just like the rationality of the next generation, depends primarily on the rationality of the educational techniques used (education is closely connected to epistemology in my view, because it’s about learning, i.e. creating knowledge.
This is mostly irrelevant to your main point, but I’m going to talk about it because it bothered me. I don’t think anyone on LessWrong would agree with this paragraph, since it assumes a whole bunch of things about AI that we have good reasons not to assume. The rationality of an AI will depend on its mind design; whether it has biases built into its hardware or not is up to us. In other words, you can’t assert that AIs will make their own mistakes, because this assumes things about the mind design of the AI, things that we can’t assume because we haven’t built it yet. Also, even if an AI does have its own cognitive biases, it still might be orders of magnitude more rational than a human being.
I’m not assuming stuff by accident. There is serious theory for this. AI people ought to learn these ideas and engage with them, IMO, since they contradict some of your ideas. If we’re right, then you need to make some changes to how you approach AI design.
So for example:
The rationality of an AI will depend on its mind design; whether it has biases built into its hardware or not is up to us.
If an AI is a universal knowledge creator, in what sense can it have a built in bias?
I’m not assuming stuff by accident. There is serious theory for this. AI people ought to learn these ideas and engage with them, IMO, since they contradict some of your ideas.
Astrology also conflicts with “our ideas”. That is not in itself a compelling reason to brush up on our astrology.
If an AI is a universal knowledge creator, in what sense can it have a built in bias?
I don’t understand this sentence. Let me make my view of things clearer: An AI’s mind can be described by a point in mind design space. Certain minds (most of them, I imagine) have cognitive biases built into their hardware. That is, they function in suboptimal ways because of the algorithms and heuristics they use. For example: human beings. That said, what is a “universal knowledge creator?” Or, to frame the question in the terms I just gave, what is its mind design?
Certain minds (most of them, I imagine) have cognitive biases built into their hardware.
That’s not what mind design space looks like. It looks something like this:
You have a bunch of stuff that isn’t a mind at all; it’s simple and it’s not there yet. Then you have a bunch of stuff that is a fully complete mind, capable of anything that any mind can do. There are also some special cases (you could have a very long program that hard-codes how to deal with every possible input, situation or idea). The AIs we create won’t be special cases of that type, which is a bad kind of design.
This is similar to the computer design space, which has no half-computers.
what is a “universal knowledge creator?”
A knowledge creator can create knowledge in some repertoire/set. A universal knowledge creator can do any knowledge creation that any other knowledge creator can do: there is nothing in the repertoire of some other knowledge creator that isn’t also in its own.
Human beings are universal knowledge creators.
Are you familiar with the universality of computers? And how very simple computers can be universal? There are a lot of parallel issues.
You have a bunch of stuff that isn’t a mind at all; it’s simple and it’s not there yet. Then you have a bunch of stuff that is a fully complete mind, capable of anything that any mind can do. There are also some special cases (you could have a very long program that hard-codes how to deal with every possible input, situation or idea). The AIs we create won’t be special cases of that type, which is a bad kind of design. This is similar to the computer design space, which has no half-computers.
I’m somewhat skeptical of this claim. If I design a mind that has the functions 0(n) (zero function), S(n) (successor function), and P(x0, x1, ..., xn) (projection function) but not primitive recursion, it can compute most but not all functions. So I’m skeptical of this “all or little” description of mind space and computer space.
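(For concreteness, the basic functions being referred to are the standard ones from the theory of primitive recursive functions; a Python rendering, purely as illustration:)

```python
# The standard basic functions from the theory of primitive recursive functions,
# rendered in Python purely to illustrate what is being referred to above.

def zero(*args: int) -> int:
    """Zero function: 0(n1, ..., nk) = 0 for any arguments."""
    return 0

def successor(n: int) -> int:
    """Successor function: S(n) = n + 1."""
    return n + 1

def projection(i: int):
    """Projection: P_i(x1, ..., xn) = xi (1-indexed)."""
    def p(*args: int) -> int:
        return args[i - 1]
    return p

# In the usual presentation, composition and primitive recursion (and
# minimization, for full generality) are the operators used to build
# further functions out of these basic ones.
```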
However, I suspect it ultimately doesn’t matter because your claims don’t directly contradict my original point. If your categorization is correct and human beings are indeed universal knowledge creators, that doesn’t preclude the possibility of us having cognitive biases (which it had better not do!). Nor does it contradict the larger point, which is that cognitive biases come from cognitive architecture, i.e. where one is located in mind design space.
Are you familiar with the universality of computers? And how very simple computers can be universal? There are a lot of parallel issues.
If you’re referring to Turing-completeness, then yes I am familiar with it.
I’m somewhat skeptical of this claim. If I design a mind that has the functions 0(n) (zero function), S(n) (successor function), and P(x0, x1, ..., xn) (projection function) but not primitive recursion, it can compute most but not all functions. So I’m skeptical of this “all or little” description of mind space and computer space.
How is that a mind? Maybe we are defining it differently. A mind is something that can create knowledge. And a lot, not just a few special cases. Like people who can think about all kinds of topics such as engineering or art. When you give a few simple functions and don’t even have recursion, I don’t think it meets my conception of a mind, and I’m not sure what good it is.
If your categorization is correct and human beings are indeed universal knowledge creators, that doesn’t preclude the possibility of us having cognitive biases (which it had better not do!).
In what sense can a bias be very important (in the long term), if we are universal? We can change it. We can learn better. So the implementation details aren’t such a big deal to the result; you get the same kind of thing regardless.
Temporary mistakes in starting points should be expected. Thinking needs to be mistake tolerant.
Also, even if an AI does have its own cognitive biases, it still might be orders of magnitude more rational than a human being.
Or orders of magnitude less rational. This isn’t terribly germane to your original point but it seemed worth pointing out. We really have no good idea what the minimum amount of rationality actually is for an intelligent entity.
Oh, I definitely agree with that. It’s certainly possible to conceive of a really, really, really suboptimal mind that is still “intelligent” in the sense that it can attempt to solve problems.