I’m willing to reformulate like this:
1) How can a human sort out good ideas from bad ideas?
2) How can a computer program sort out good ideas from bad ideas?
and the subsequent paragraph can stay unchanged. Whatever recipe you’re proposing to improve human understanding, it ought to be “reductionist” and apply to programs too, otherwise it doesn’t meet the LW standard. Whether AIs can be more rational than people is beside the point.
I don’t think you understood the word “reductionist”. Reductionism doesn’t mean that things can be reduced to lower levels but that they should be: it actually objects to high-level statements and considers them worse. There’s no need for reductionism of that kind for ideas to be applicable to low-level issues like being programmable.
Yes, Popperian epistemology can be used for an AI with those reformulations (at least, I don’t know of any argument that it couldn’t).
Why aren’t we there yet? There aren’t a lot of Popperians, Popperian philosophy does not seek to be formal (which makes it harder to translate into code), and most effort has been directed at human problems, including criticizing large mistakes that plague the field of philosophy, affect regular people, and permeate our culture. The epistemology problems important to humans are not all the same as the ones important to writing an AI. For an AI you need to worry about what information to start it with. Humans are born with information, and we don’t yet have the science to control that, so there is only limited reason to worry about it. Similarly, there is the issue of how to educate a very young child. No one knows the answer to that in words: people can do it by following cultural traditions, but they can’t explain it. But for AIs, how to deal with the very young stages is important.
Broadly, an AI will need a conjecture generator, a criticism generator, and a criticism evaluator. Humans have these built in. So again the problems for AI are somewhat different from what’s important for, e.g., explaining epistemology to human adults.
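To make that concrete, here is a toy sketch of the loop those three components form. Everything in it is a placeholder I made up for illustration (string conjectures, one trivial criticism); it shows the shape of the thing, not a design anyone has built:

    import random

    # Hypothetical sketch of the three components named above. The names and
    # the string-mutation "conjectures" are placeholders, not a real design.

    def conjecture_generator(old_conjectures):
        """Produce a new conjecture by making a limited random change to an old one."""
        parent = random.choice(old_conjectures)
        chars = list(parent)
        i = random.randrange(len(chars))
        chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz ")
        return "".join(chars)

    def criticism_generator(conjecture):
        """Return criticisms of a conjecture. Here: one trivial stand-in check."""
        criticisms = []
        if len(conjecture.split()) < 2:
            criticisms.append("too short to explain anything")
        return criticisms

    def criticism_evaluator(conjecture, criticisms):
        """Decide whether the conjecture survives its criticisms."""
        return len(criticisms) == 0  # survives only if no outstanding criticism

    def evolve(seed_conjectures, rounds=100):
        pool = list(seed_conjectures)
        for _ in range(rounds):
            candidate = conjecture_generator(pool)
            criticisms = criticism_generator(candidate)
            if criticism_evaluator(candidate, criticisms):
                pool.append(candidate)
        return pool

    if __name__ == "__main__":
        print(evolve(["the seasons are explained by the tilt of the axis"])[-3:])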
You may think the details of these things in humans are crucially important. The reason they aren’t is that they are universal, so implementation details don’t affect anything much about our lives.
It’s still interesting to think about. I do sometimes. I’ll try to present a few issues. In abstract terms we would be content with a random conjecture generator and with sorting through infinitely many conjectures. But we can’t program it like that: too slow. You need shortcuts. A big one is generating new conjectures by taking old conjectures and making random but limited changes to them. How limited should the changes be? I don’t know how to quantify that.

Moving on, there is an issue of timing: do you wait until conjectures are created and then criticize them afterwards? Or do you program it in such a way that conjectures which would be refuted by a criticism can sometimes not be generated in the first place, as a kind of optimization? I lean towards the second view, but I don’t know how to code it. I’m partial to the notion of using criticisms as filters on the set of possible conjectures. There’s no danger of getting stuck, or of losing universality, if the filters can be disabled as desired, modified as desired, and don’t prevent conjectures that would want to modify them.

That raises another issue: can people think themselves into a bad state they can’t get out of? I don’t know if that’s impossible or not. I don’t think it happens in practice (yes, people can be really dumb, but I don’t think they are even close to impossible to get out of). If it were technically possible for an AI to get stuck, would that be a big deal? You can see here perhaps some of the ways I don’t care for rulebooks.
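Here is a rough sketch of what I mean by criticisms acting as filters at generation time, with each filter disableable and modifiable so there is no permanent way to get stuck. Again, every name and check is a placeholder I just made up, not code from anywhere:

    import random

    # Hypothetical illustration of "criticisms as filters": the same limited-mutation
    # idea as before, but known criticisms run during generation, so refuted
    # conjectures are never emitted in the first place. Each filter can be disabled.

    class CriticismFilter:
        def __init__(self, name, rejects):
            self.name = name
            self.rejects = rejects   # function: conjecture -> True if refuted
            self.enabled = True      # filters can be switched off as desired

    def mutate(parent):
        """Random but limited change: alter a single character."""
        chars = list(parent)
        i = random.randrange(len(chars))
        chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz ")
        return "".join(chars)

    def generate(parents, filters, attempts=50):
        """Only yield conjectures that pass every enabled filter."""
        for _ in range(attempts):
            candidate = mutate(random.choice(parents))
            if all(not f.rejects(candidate) for f in filters if f.enabled):
                yield candidate

    if __name__ == "__main__":
        too_short = CriticismFilter("too short", lambda c: len(c.split()) < 2)
        survivors = list(generate(["the tilt of the axis explains the seasons"], [too_short]))
        too_short.enabled = False  # disabling a filter restores unfiltered generation later
        print(len(survivors))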
BTW one of the things our theory tells us is you can never build half an AI. It will jump straight from very minimal functionality to universal functionality, just as computer programming languages do. (The “jump to universality” is discussed by David Deutsch in The Beginning of Infinity). One thing this means is there is no way to know how far along we are—the jump could come at any time with one new insight.
Whether AIs can be more rational than people is beside the point.
Is it? What good are they, then? I have some answers to that, but nothing really huge. If they aren’t assumed to be super rational geniuses then they can’t be expected to quickly bring about the singularity or that kind of thing.
BTW one of the things our theory tells us is you can never build half an AI. It will jump straight from very minimal functionality to universal functionality, just as computer programming languages do. (The “jump to universality” is discussed by David Deutsch in The Beginning of Infinity). One thing this means is there is no way to know how far along we are—the jump could come at any time with one new insight.
That sounds pretty bizarre. So much for the idea of progress via better and better compression and modeling. However, it seems pretty unlikely to me that you actually know what you are talking about here.
Insulting my expertise is not an argument. (And given that you know nothing about my expertise, it’s silly, too. Concluding that people aren’t experts because you disagree with them is biased and closed-minded.)
Are you familiar with the topic? Do you want me to give you a lecture on it? Will you read about it?
Reductionism doesn’t mean that things can be reduced to lower levels but that they should be: it actually objects to high-level statements and considers them worse.
Conventionally, and confusingly, the word reductionism has two meanings:
Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents.
I didn’t say it was false, just irrelevant to the current discussion of what we want from a theory of knowledge.
You could use math instead of code. To take a Bayesian example, the Solomonoff prior is uncomputable, but well-defined mathematically and you can write computable approximations to it, so it counts as progress in my book. To take a non-Bayesian example, fuzzy logic is formalized enough to be useful in applications.
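For reference, the standard definition of that prior, as I understand it, where U is a universal prefix machine, \ell(p) is the length in bits of program p, and the sum ranges over programs whose output begins with the string x:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

Computable approximations restrict the sum to programs below some length and runtime bound.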
Anyway, I think I understand where you’re coming from, and maybe it’s unfair to demand new LW-style insights from you. But hopefully you also understand why we like Bayesianism, and that we don’t even think of it at the level you’re discussing.
I understand some. But I think you’re mistaken and I don’t see a lot to like when judged by the standards of good philosophy. Philosophy is important. Your projects, like inventing an AI, will run into obstacles you did not foresee if your philosophy is mistaken.
Of course I have the same criticism about people in all sorts of other fields. Architects or physicists or economists who don’t know philosophy run into problems too. But claiming to have an epistemology, and claiming to replace Popper, those are things most fields don’t do. So I try to ask about it. Shrug.
I think I figured out the main idea of Bayesian epistemology. It is: Bayes’ theorem is the source of justification (this is intended as the solution to the problem of justification, which is a bad problem).
But when you start doing the math, that claim gets ignored, and you get stuff right (at least given the premises, which are often not realistic, following the proud tradition of game theory and economics). So I should clarify: that’s the main philosophical claim. It’s not very interesting. Oh well.
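To spell out the math in question: it’s just Bayes’ theorem for a hypothesis H and evidence E,

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

and on the reading above, the “justification” of H is its posterior P(H | E), which is only as good as the prior P(H) and the likelihood model you feed in.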
I think I figured out the main idea of Bayesian epistemology. It is: Bayes’ theorem is the source of justification (this is intended as the solution to the problem of justification, which is a bad problem).
No. See here, where Eliezer specifically says that this is not the case. (“But first, let it be clearly admitted that the rules of Bayesian updating, do not of themselves solve the problem of induction.”)
I had already seen that.
Note that I said justification, not induction.
I don’t want to argue about this. If you like the idea, enjoy it. If you don’t, just forget about it and reply to something else I said.