Hi, Deutsch was my mentor. I run the discussion forums where we’ve been continuously open to debate and questions since before LW existed. I’m also familiar with Solomonoff induction, Bayes, RAZ and HPMOR. Despite several attempts, I’ve been broadly unable to get (useful, clear) answers from the LW crowd about our questions and criticisms related to induction. But I remain interested in trying to resolve these disagreements and to sort out epistemological issues.
Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is? And if you’re interested, have you read FoR and BoI?
I’ll begin with one comment now:
I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness where they let themselves wholly believe probably wrong theories
~All open, public groups have lots of low quality self-proclaimed members. You may be right about some critrats you’ve talked with or read.
But that is not a CR position. CR says we only ever believe theories tentatively. We always know they may be wrong and that we may need to reconsider. We can’t 100% count on ideas. Wholly believing things is not a part of CR.
If by “wholly” you mean with a 100% probability, that is also not a CR position, since CR doesn’t assign probabilities of truth to beliefs. If you insist on a probability, a CRist might say “0% or infinitesimal” (Popper made some comments similar to that) for all his beliefs, never 100%, while reiterating that probability applies to physical events, so the question is misconceived.
Sometimes we act, judge, decide or (tentatively) conclude. When we do this, we have to choose something and not other things. E.g. it may have been a close call between getting sushi or pizza, but then I chose only pizza and no sushi, not 51% pizza and 49% sushi. (Sometimes meta/mixed/compromise views are appropriate, which combine elements of rival views. E.g. I could go to a food court and get 2 slices of pizza and 2 maki rolls. But then I’m acting 100% on that plan and not following either original plan. So I’m still picking a single plan to wholly act on.)
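A side note of my own (a toy sketch with made-up numbers, not part of the CR point above and not anything from this thread): even a decision rule that starts from probabilistic beliefs ends up committing wholly to exactly one plan, possibly a compromise plan, once it has to act.

```python
# Toy illustration (hypothetical payoffs): probabilistic beliefs in,
# exactly one plan out. The compromise "food court" plan can win,
# but whichever plan wins is the one acted on 100%.
beliefs = {"will_want_variety": 0.49, "will_want_only_pizza": 0.51}

utility = {
    "pizza_only":     {"will_want_variety": 3, "will_want_only_pizza": 10},
    "sushi_only":     {"will_want_variety": 6, "will_want_only_pizza": 2},
    "pizza_and_maki": {"will_want_variety": 8, "will_want_only_pizza": 7},
}

def expected_utility(plan):
    return sum(p * utility[plan][state] for state, p in beliefs.items())

best_plan = max(utility, key=expected_utility)
print(best_plan, expected_utility(best_plan))  # a single plan, e.g. pizza_and_maki
```

The 51/49 beliefs survive as beliefs, but only one plan gets executed.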
I’m glad to hear from you.
I had an interesting discussion about induction with my (critrat) friend Ella Hoeppner recently; I think we arrived at some things...
I think it was...
I stumbled on some quotes of DD (from this, which I should read in full at some point) criticizing, competently, the principle of induction (which is, roughly, “what was, will continue”). My stance is that it is indeed underspecified, but that Solomonoff induction pretty much provides the rest of the specification. Ella’s response to Solomonoff induction was “but it too is underspecified, because the programming language that it uses is arbitrary”. I replied with “every language has a constant-sized interpreter specification, so in the large they all end up giving values of similar sizes”, but I don’t really know how to back up there being some sort of reasonable upper bound on interpreter sizes. Then we ran into the fact that there is no ultimate metaphysical foundation for semantics: why are we grounding semantics on a thing like Turing machines? I just don’t know. The most meta metalanguage always ends up being English, or worse, demonstration: show the learner some examples and they will figure out the rules without using any language at all, and people always seem reliant on receiving demonstrations at some point in their education.
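To be explicit about the claim I was leaning on (my gloss, citing the standard invariance theorem for Kolmogorov complexity rather than anything DD or Ella said): switching universal languages shifts description lengths by at most a constant that depends on the pair of languages, not on the data.

```latex
% Invariance theorem (standard result, stated here for reference): for any
% two universal prefix machines U and V there is a constant c_{U,V},
% independent of x and roughly the length of an interpreter for one
% machine written for the other, such that
\[
  K_U(x) \;\le\; K_V(x) + c_{U,V}
  \qquad\text{and}\qquad
  K_V(x) \;\le\; K_U(x) + c_{U,V}.
\]
```

What this doesn’t give me is any absolute bound saying c_{U,V} is small for the languages we actually care about, which was Ella’s real objection.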
I think I left it at… it’s easy for us to point at the category of languages that are ‘computerlike’ and easy to implement with simple things like transistors; that is, for some reason, what we use as a bedrock. We just will. Maybe there is nothing below there. I can’t see why we should expect there to be. We will just use what works.
Alongside that, somewhat confusing the issue, there is another definition of induction: induction is whatever cognitive process takes a stream of observations of a phenomenon and produces theories that are good for anticipating future observations.
I suppose we could call that “theorizing”, if the need were strong.
I’ve heard from some critrats, “there is no such thing as inductive cognition, it’s just evolution” (lent a small bit of support by DD quotes like “why is it still conventional wisdom that we get our theories by induction?”, to which the answer may be: because “induction” is sometimes defined to be whatever kind of thing theories come out of). If they mean it the way I understood it: if evolution performs the role of an inductive cognition, then evolution is an inductive cognition (collectively); there is such a thing as evolution, so there is such a thing as inductive cognition.
(I then name induction-techne: the process of coming up with theories that are useful not just for predicting the phenomena, but for *manipulating the phenomena*. It is elicited by puzzle games like The Witness (recommended) and the games Ella and I are working on, after which we might name their genre “induction games”? (The “techne” is somewhat implied by “game”’s suggestion of interactivity.))
Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is?
I am evidently interested in discussing it, but I am probably not the best person for it. My background in math is not strong enough for me to really contribute to analytic epistemology, so my knowledge of Bayesian epistemology is all a bit impressionistic. I had far too much difficulty citing examples of concrete applications of the Bayesian approach. I can probably find more, but it takes conscious effort; there must be people for whom it doesn’t.
I have not read those books. I’m definitely considering it; they sound pretty good.
I think it might be very fruitful if David got together with Judea Pearl, though, who seems to me to be the foremost developer of Bayesian causal reasoning. It looks like they might not have met before, and they seem to have similarly playful approaches to language and epistemology, which makes me wonder if they might get along.
Sometimes we act, judge, decide or (tentatively) conclude
Aye, the tragedy of agency. If only we could delay acting until after we’ve figured everything out, it would solve so many problems.
A place to start is considering what problems we’re trying to solve.
Epistemology has problems like:
What is knowledge? How can new knowledge be created? What is an error? How can errors be corrected? How can disagreements between ideas be resolved? How do we learn? How can we use knowledge when making decisions? What should we do about incomplete information? Can we achieve infallible certainty (how?)? What is intelligence? How can observation be connected to thinking? Are all (good) ideas connected to observation or just some?
Are those the sorts of problems you’re trying to solve when you talk about Solomonoff induction? If so, what’s the best literature you know of that outlines (gives high-level explanations rather than a bunch of details) how Solomonoff induction plus some other stuff (it should specify what stuff) solves those problems? (And says which remain currently unsolved problems?)
(My questions are open to anyone else, too.)
It’s worse than that: SI doesn’t even try to build a meaningful ontological model.
I’ve heard from some critrats, “there is no such thing as inductive cognition, it’s just evolution”,
Why can’t it be both?
Alongside that, somewhat confusing the issue, there is another definition of induction: induction is whatever cognitive process takes a stream of observations of a phenomenon and produces theories that are good for anticipating future observations.
So the first definition is what? A mysterious process where the purely passive reception of sense data leads to hypothesis formation.
The critrat world has eloquent arguments against that version of induction, although no one has believed in it for a long time.
the answer may be: because “induction” is sometimes defined to be whatever kind of thing theories come out of
Well, only sometimes.
CR doesn’t have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones, the kind that simple organisms and algorithms can do. And it doesn’t have much motivation to distinguish them. Being sweepingly anti-inductive is their thing. They believe that they believe they hold all beliefs tentatively... but that doesn’t include the anti-inductive belief.
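To make “the kind that simple organisms and algorithms can do” concrete, here is a minimal sketch (my illustration of that narrow sense of induction, not anyone in this thread’s considered account): Laplace’s rule of succession, which predicts the next observation purely from tallies of past ones.

```python
# Minimal "predict the future from the past" rule: Laplace's rule of
# succession. No explanatory theory anywhere, just counting.
def prob_next_is_1(observations):
    """P(next observation = 1) given a list of 0/1 observations."""
    n = len(observations)
    k = sum(observations)      # how many 1s seen so far
    return (k + 1) / (n + 2)   # never returns exactly 0 or 1

# The sun has risen every day for 30 days:
print(prob_next_is_1([1] * 30))  # 31/32 ≈ 0.97, confident but not certain
```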
CR doesn’t have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones, the kind that simple organisms and algorithms can do.
This is the old kind of induction; Bertrand Russell had arguments against that kind of induction...
The refutations of that kind of induction are way beyond the bounds of CR.
Looks like the simple organisms and algorithms didn’t listen to him!
I don’t think you’re taking this seriously.
It’s worse than that: SI doesn’t even try to build a meaningful ontological model.
Hm, does it need one?
Why can’t it be both?
I think that’s what I said.
So the first definition is what?
Again, “what was, will continue”. DD says something about real years never having started with 20, therefore the year 2000 won’t happen, which seems to refute it as a complete specification. But on reflection I just feel like he understood it in an overly crude way, because he wasn’t thinking in a probabilistic way about managing the coexistence of competing theories that agree with past data but make different predictions about the future, and he still probably doesn’t have that.
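To spell out the probabilistic picture I mean (a toy sketch with made-up numbers, not something DD says): keep every theory that fits the past data, weight them, and let the weights carry through to the prediction.

```python
# Toy version of "competing theories that agree with past data but make
# different predictions about the future" (hypothetical numbers).
# H1: years keep incrementing as usual   -> the year 2000 happens
# H2: no year will ever start with 20    -> the year 2000 never happens
# Both fit every observation up to 1999 equally well, so the data alone
# never separates them; only the prior weights do.
prior = {"H1": 0.999, "H2": 0.001}
likelihood_of_past = {"H1": 1.0, "H2": 1.0}

unnorm = {h: prior[h] * likelihood_of_past[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: w / total for h, w in unnorm.items()}

# Predicted probability that the year 2000 occurs:
p_2000 = posterior["H1"] * 1.0 + posterior["H2"] * 0.0
print(posterior, p_2000)  # high, but deliberately never exactly 1.0
```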
The reality is, you actually aren’t supposed to have certainty that the year 2000 will happen; 0 and 1 are not real probabilities, etc.
Sigh... that takes me back about 11 years. Yes, induction is always straw-manned, the Popper-Miller paper is gold-plated truth, etc.
It’s worse than that: SI doesn’t even try to build a meaningful ontological model.
Hm, does it need one?
Yes, if you are going to claim that it solves the problem of attaching objective probabilities to ontological theories... or theories for short. If what it actually delivers is complexity measures on computer programs, it would be honest to say so.
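For reference, the object in dispute (in the standard formulation of Solomonoff induction, as I understand it) is the universal prior, and it is indeed built directly from the lengths of programs for a fixed universal machine U:

```latex
% Solomonoff's universal prior (standard form): the weight of a finite
% observation string x is a sum over programs p whose output begins
% with x, and prediction is a ratio of such weights.
\[
  M(x) \;=\; \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-\ell(p)},
  \qquad
  P(1 \mid x) \;=\; \frac{M(x1)}{M(x)}.
\]
```

Whether a length-weighted measure over programs counts as “objective probabilities of theories” is exactly what’s at issue here.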
Which discussion forums are you referring to?
http://fallibleideas.com/discussion