I’d say Bayesian epistemology’s stance is that there is one completely perfect way of understanding reality, but that it’s perhaps provably unattainable to finite things.
You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
(For some mysteries, like anthropic measure binding [the hard problem of consciousness] it seems provably impossible to ever test our models. We each have exactly one observation about what kinds of matter attract subjectivity, we can’t share data (solipsistic doubt), we can never get more data (to “enter” a new locus of consciousness we would have to leave our old one, losing access to the observed experience that we had), but the question still matters a lot for measuring the quantity of experience over different brain architectures, so we need to have theories, even though objective truth can’t be attained.)
It’s good to believe that there’s an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it, and dialethic epistemologies like Popper’s can’t give you the framework you need to operate gracefully without ever getting objective truth.
We sometimes talk about Aumann’s agreement theorem, the claim that any two Bayesians who, roughly speaking, talk for long enough, will eventually come to agree about everything. This might serve the same social purpose as advocating objectivity. Though I do not know whether it’s really true of humans that they will converge if they talk long enough, I still hold out faith that it will be one day, if we keep trying, if we try to get better at it.
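Aumann’s theorem proper requires common priors and common knowledge of posteriors, but the flavor of it can be sketched in a toy model. All numbers below are invented for illustration: two agents share a prior over a coin’s bias, each sees private flips, and in this simple setup announcing a posterior reveals the announcer’s likelihood ratio exactly, so a single exchange pools the evidence and produces agreement.

```python
# Toy sketch of Aumann-style convergence (not the theorem itself):
# hypotheses are bias 0.7 vs 0.3, with common prior odds of 1:1.

def likelihood_ratio(heads, flips, h1=0.7, h0=0.3):
    # P(data | bias=h1) / P(data | bias=h0) for independent flips
    return (h1**heads * (1 - h1)**(flips - heads)) / \
           (h0**heads * (1 - h0)**(flips - heads))

def posterior(odds):
    # convert odds in favour of h1 into a probability (prior odds 1:1)
    return odds / (1 + odds)

lr_a = likelihood_ratio(7, 10)  # A privately saw 7 heads in 10 flips
lr_b = likelihood_ratio(2, 10)  # B privately saw 2 heads in 10 flips

p_a, p_b = posterior(lr_a), posterior(lr_b)  # they disagree at first

# After exchanging posteriors, each agent recovers the other's
# likelihood ratio and multiplies it in; both now hold the same
# pooled posterior, i.e. they agree.
p_shared = posterior(lr_a * lr_b)
```

Real humans do not reveal their evidence this cleanly, which is part of why convergence in practice is an open question rather than a guarantee.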
> taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense
Which means “wrong” is no longer a meaningful word. Do you think you can operate without having a word like “wrong”? Do you think you can operate without that concept? I think DD sometimes plays inflammatory word games, defining things poorly on purpose.
> there are rational ways to choose *exactly one* explanation (or zero if none hold up)
If you point a gun at a Bayesian’s head and force them to make a bet against a single outcome, they’re always supposed to be able to. It’s just, there are often reasons not to.
> we should believe only those things for which there are no unanswered criticisms
Why believe anything! There’s a sense in which a Bayesian doesn’t have any beliefs, especially beliefs with unanswered criticisms. The great thing is that you don’t need to have beliefs to methodically do your best to optimize expected utility. You can operate well amid uncertainty. For instance, I can recommend that you take vitamin D supplements just in case the Joe Rogan interview the YouTube algorithm served me yesterday (about how vitamin D is crucial for the respiratory immune system, and how COVID severity rates differ enormously depending on it) was true. I don’t need to confirm that it’s true by trying to assess primary evidence. I don’t need to, in every sense, “believe”, because vitamin D is cheap and you should probably be taking it anyway for other reasons, and I have other stuff that I need to be reading right now.
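The “act without settling belief” move can be made concrete. The probabilities and utilities below are made-up placeholders, not claims about vitamin D; they just show that a decision can be reached while the underlying claim stays unresolved.

```python
# Hypothetical numbers only: choosing an action under uncertainty
# about a claim, without ever deciding whether the claim is true.
p_claim_true = 0.3  # middling credence in the vitamin D claim

utility = {
    ("take", True): 10.0,   # cheap supplement, real benefit
    ("take", False): -1.0,  # cheap supplement, no benefit
    ("skip", True): -8.0,   # missed a real benefit
    ("skip", False): 0.0,   # nothing ventured, nothing lost
}

def expected_utility(action):
    return (p_claim_true * utility[(action, True)]
            + (1 - p_claim_true) * utility[(action, False)])

# "Take" wins at this credence even though the claim is judged more
# likely false than true; no belief ever had to be settled.
best = max(["take", "skip"], key=expected_utility)
```

The asymmetry in the utilities (cheap to take, costly to miss) is what does the work here, which is exactly the “it’s cheap anyway” reasoning in the paragraph above.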
I’ve heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly, that values must, from an engineering feasibility perspective, be learned, that it would be in some way hard to make AGI that wouldn’t need to be “raised” into a value system by a big open society, that a thing with ‘wrong values’ couldn’t participate in an open society [and open societies will be stronger] and so won’t pose a major threat), but it’s not obvious to me how those claims bear on Bayesian epistemology.
Maybe something to do with the ease with which people who like decision theory can conceive of and describe very fast-growing non-human-aligned agents? While DD would claim that decision theory’s superintelligences are unrealistic to the point of inapplicability, and a real process of making one AGI would tend to take a long time and involve a lot of human intervention?
In conclusion I don’t see many substantial epistemological differences.
A Bayesian does have beliefs about the probability of various outcomes even if there are unanswered criticisms involved. Generally, the idea is that people examine criticisms more because they believe that the opportunity cost to answer the criticism is worth it and not just because they are unanswered.
> I’d say Bayesian epistemology’s stance is that there is one completely perfect way of understanding reality, but that it’s perhaps provably unattainable to finite things.
> You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
> [...]
> It’s good to believe that there’s an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it
Yes, knowledge creation is an unending, iterative process. It could only end if we come to the big objective truth, but that can’t happen (the argument for why is in BoI—the beginning of infinity).
> We sometimes talk about Aumann’s agreement theorem, the claim that any two Bayesians who, roughly speaking, talk for long enough, will eventually come to agree about everything.
I think this is true of any two *rational* people with sufficient knowledge, and it’s rationality, not Bayesianism, that’s important. If two partially *irrational* Bayesians talk, then there’s no reason to think they’d reach agreement on ~everything.
There is a subtle case with regards to creative thought, though: take two people who agree on ~everything. One of them has an idea, they now don’t agree on ~everything (but can get back to that state by talking more).
WRT “sufficient knowledge”: the two ppl need methods of discussing which are rational, and rational ways to resolve disagreements and impasse chains. They also need attitudes about solving problems, namely that any problem they run into in the discussion can be solved, and that one or both of them can come up with ways to deal with *any* problem when it arises.
> taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense
> Which means “wrong” is no longer a meaningful word. Do you think you can operate without having a word like “wrong”? Do you think you can operate without that concept?
If it were meaningless I wouldn’t have had to add “in an absolute sense”. Just because an explanation is wrong in an *absolute* sense (i.e. it doesn’t perfectly match reality) does not mean it’s not *useful*. Fallibilism generally says it’s okay to believe things that are false (which, in that absolute sense, all explanations are); however, there are conditions, like there being no known unanswered criticisms and no non-refuted alternatives.
Since BoI there has been more work on this problem and the reasoning around when to call something “true” (practically speaking) has improved—I think. Particularly:
- Knowledge exists relative to *problems*
- Whether knowledge applies or is correct can be evaluated rationally because we have *goals* (sometimes these goals are not specific enough, and there are generic ways of making your goals arbitrarily specific)
- Roughly: true things are explanations/ideas which solve your problem, have no known unanswered criticism (i.e. are not refuted), and have no alternatives which are themselves unrefuted
- something is wrong if the conjecture that it solves the problem is refuted (and that refutation is unanswered)
- note: a criticism of an idea is itself an idea, so it can be criticised (i.e. the first criticism is refuted by a second criticism). This can be recursive and potentially go on forever (though we know ways to make sure it doesn’t).
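The recursive refuted/non-refuted status in that last note can be written down directly. The idea names and the criticism graph below are invented for illustration, and the sketch assumes the graph is finite and acyclic (the note says there are ways to make sure chains terminate):

```python
# An idea is refuted iff at least one criticism of it is itself
# unrefuted. Criticisms are ideas too, so the definition recurses.

def refuted(idea, criticisms_of):
    return any(not refuted(c, criticisms_of)
               for c in criticisms_of.get(idea, ()))

graph = {
    "A": ["c1"],   # c1 criticises idea A
    "c1": ["c2"],  # c2 refutes c1; c2 itself stands unchallenged
}
# c2 has no criticisms, so it stands; c1 is therefore refuted;
# A survives with no unanswered criticism.
```

An idea with no criticisms at all is trivially unrefuted under this definition, which matches the “no known unanswered criticisms” condition above.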
> I think DD sometimes plays inflammatory word games, defining things poorly on purpose.
I think he’s in a tough spot to try and explain complex, subtle relationships in epistemology using a language where the words and grammar have been developed, in part, to be compatible with previous, incorrect epistemologies.
I don’t think he defines things poorly (at least not typically), and I think he would acknowledge an incomplete/fuzzy definition if he provided one. (Note: one counterexample is enough to refute this claim I’m making.)
> there are rational ways to choose *exactly one* explanation (or zero if none hold up)
> If you point a gun at a Bayesian’s head and force them to make a bet against a single outcome, they’re always supposed to be able to. It’s just, there are often reasons not to.
I think you misunderstand me.
let’s say you wanted a pet, we need to make a conjecture about what to buy you that will make you happy (hopefully without developing regret later). the possible set of pets to start with are all the things that anyone has ever called a pet.
with something like this there will be lots of other goals, background goals, which we need to satisfy but don’t normally list. An example is that the pet doesn’t kill you, so we remove snakes, elephants, and other things that might hurt you. there are other background goals like life of the pet or ongoing cost; adopting you a cat with operable cancer isn’t a good solution.
there are maybe other practical goals too, like it should be an animal (no pet rocks), should be fluffy (so no fish, etc), shouldn’t cost more than $100, and yearly cost is under $1000 (excluding medical but you get health insurance for that).
maybe we do this sort of refinement a bit more and get a list like: cat, dog, rabbit, mouse
you might be *happy* with any of them, but can you be *more happy* with one than any other; is there a *best* pet? **note: this is not an optimisation problem** b/c we’re not turning every solution into a single unit (e.g. your ‘happiness index’); we’re providing *decisive reasons* for why an option should or shouldn’t be included. We’ve also been using this term “happy” but it’s more than just that, it’s got other important things in there—the important thing, though, is that it’s your *preference* and it matches that (i.e. each of the goals we introduce are in fact goals of yours; put another way: the conditions we introduce correspond directly and accurately to a goal)
this is the sort of case where there’s no gun to anyone’s head, but we can continue to refine down to a list of exactly **one** option (or zero). let’s say you wanted an animal you could easily play with → then rabbit and mouse are excluded, so we have options: cat, dog. If you’d prefer an animal that wasn’t a predator—both cat and dog are excluded and we get to zero (so we need to come up with new options or remove a goal). If instead you wanted a pet that you could easily train to use a litter tray, well we can exclude a dog so you’re down to one. Let’s say the litter tray is the condition you imposed.
What happens if I remember ferrets can be pets and I suggest that? well now we need a *new* goal to find which of the cat or ferret you’d prefer.
Note: for most things we don’t go to this level of detail b/c we don’t need to; like if you have multiple apps to choose from that satisfy all your goals you can just choose one. If you find out a reason it’s not good, then you’ve added a new goal (if you weren’t originally mistaken, that is) and can go back to the list of other options.
Note 2: The method and framework I’ve just used wrt the pet problem is something called yes/no philosophy and has been developed by Elliot Temple over the past ~10+ years. Here are some links:
Note 3: During the link-finding exercise I found this: “All ideas are either true or false and should be judged as refuted or non-refuted and not given any other status – see yes no philosophy.” (credit: Alan Forrester) I think this is a good way to look at it; *technically and epistemically speaking:* true/false is not a judgement we can make, but refuted/non-refuted *is*. we use refuted/non-refuted as a proxy for false/true when making decisions, because (as fallible beings) we cannot do any better than that.
I’m curious about how a Bayesian would tackle that problem. Do you just stop somewhere and say “the cat has a higher probability so we’ll go with that”? Do you introduce goals like I did to eliminate options? Is the elimination of those options equivalent to something like: reducing the probability of those options being true to near-zero? (or absolute zero?) Can a Bayesian use this method to eliminate options without doing probability stuff? If a Bayesian *can*, what if I conjecture that it’s possible to *always* do it for *all* problems? If that’s the case there would be a way to decisively reach a single answer—so no need for probability. (There’s always the edge case that there was a mistake somewhere, but I don’t think there’s a meaningful answer to problems like “P(a mistake in a particular chain of reasoning)” or “P(the impact of a mistake is that the solution we came to changes)”—note: those P(__) statements are within a well defined context like an exact and particular chain of reasoning/explanation.)
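One way to frame the question being asked: the goal-by-goal refinement above can be rendered either as decisive filtering or as Bayesian conditioning that sends excluded options to probability zero. The option list and goal predicates below are illustrative assumptions, not part of yes/no philosophy or of any Bayesian doctrine:

```python
options = ["cat", "dog", "rabbit", "mouse"]

goals = [
    lambda pet: pet in ("cat", "dog"),  # easy to play with
    lambda pet: pet != "dog",           # trains to a litter tray
]

# Decisive elimination: each goal removes options outright.
remaining = list(options)
for goal in goals:
    remaining = [o for o in remaining if goal(o)]
# remaining == ["cat"]

# The same moves as Bayesian conditioning: each goal is treated as
# observed evidence, zeroing excluded options and renormalising.
probs = {o: 1.0 / len(options) for o in options}
for goal in goals:
    probs = {o: (p if goal(o) else 0.0) for o, p in probs.items()}
    total = sum(probs.values())
    probs = {o: p / total for o, p in probs.items()}
# probs["cat"] == 1.0
```

On this rendering the two approaches coincide whenever every goal is treated as decisive; they come apart when a consideration only shifts probabilities rather than zeroing them.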
> Why believe anything!
So we can make decisions.
> The great thing is that you don’t need to have beliefs to methodically do your best to optimize expected utility
Yes you do—you need a theory of expected utility; how to measure it, predict it, manipulate it, etc. You also need a theory of how to use things (b/c my expected utility of amazing tech I don’t know how to use is 0). You need to believe these theories are true, otherwise you have no way to calculate a meaningful value for expected utility!
> You can operate well amid uncertainty
Yes, I additionally claim we can operate **decisively**.
> In conclusion I don’t see many substantial epistemological differences.
It matters more for big things, like SENS and MIRI. Both are working on things other than key problems; there is no good reason to think they’ll make significant progress b/c there are other more foundational problems.
I agree practically a lot of decisions come out the same.
> I’ve heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly, that values must, from an engineering feasibility perspective, be learned, that it would be in some way hard to make AGI that wouldn’t need to be “raised” into a value system by a big open society, that a thing with ‘wrong values’ couldn’t participate in an open society [and open societies will be stronger] and so won’t pose a major threat), but it’s not obvious to me how those claims bear on Bayesian epistemology.
I don’t know why they would be risible—nobody has a good reason why his ideas are wrong to my knowledge. They refute a lot of the fear-mongering that happens about AGI. They provide reasons for why a paperclip machine isn’t going to turn all matter into paperclips. They’re important because they refute big parts of theories from thinkers like Bostrom. That’s important because time, money, and effort are being spent in the course of taking Bostrom’s theories seriously, even though we have good reasons they’re not true. That could be time, money, and effort spent on more important problems like figuring out how creativity works. That’s a problem which would actually lead to the creation of an AGI.
Calling unanswered criticisms *risible* seems irrational to me. Sure unexpected answers could be funny the first time you hear them (though this just sounds like ppl being mean, not like it was the punchline to some untold joke) but if someone makes a serious point and you dismiss it because you think it’s silly, then you’re either irrational or you have a good, robust reason it’s not true.
> [...] and a real process of making one AGI would tend to take a long time and involve a lot of human intervention?
He doesn’t claim this at all. From memory the full argument is in Ch7 of BoI (though has dependencies on some/all of the content in the first 6 chapters, and some subtleties are elaborated on later in the book). He expressly deals with the case where an AGI can run like 20,000x faster than a human (i.e. arbitrarily fast). He also doesn’t presume it needs to be raised like a human child or take the same resources/attention/etc.
I think you’d probably consider the way we approach problems of this scale to be popperian in character. We open ourselves to lots of claims and talk about them and criticize them. We try to work them into a complete picture of all of the options and then pick the best one, according to a metric that is not quite utility, because the desire for a pet is unlikely to be an intrinsic one.
The gun to our head in this situation is our mortality or aging that will eventually close the window of opportunity to enjoying having a pet.
I’m not sure how to relate to this as an analytical epistemological method, though. Most of the work, for me, would involve sitting with my desire for a pet and interrogating it, knowing that it isn’t at all immediately clear why I want one. I would try to see if it was a malformed expression of hunting or farming instincts. If I found that it was, the desire would dissipate, the impulse would recognize that it wouldn’t get me closer to the thing that it wants.
Barring that, I would be inclined to focus on dogs, because I know that no other companion animal has evolved to live alongside humans and enjoy it in the same way that dogs have. I’m not sure where that resolution came from.
What I’m getting at is that most of the interesting parts of this problem are inarticulable. Looking for opportunities to apply analytic methods or articulable epistemic processes doesn’t seem interesting to me at all.
A reasonable person’s approach to solving most problems right now is to ask a huge machine learning system that nobody understands to recommend some articles from an incomprehensibly huge set of people.
Real world examples of decisionmaking generally aren’t solvable, or reducible to optimal methods.
> belief
Do I believe in mathematics? I can question the applicability of a mathematical model to a situation.
It’s probably worth mentioning that even mathematical claims aren’t beyond doubt, as mathematical claims can be arrived at in error (cosmic rays flipping bits) and it’s important that we’re able to notice and revert our position when that happens.
> risible

My impression from observed usage was that “risible” meant “spurious, inspiring anger”. Finding that the dictionary definition of a word disagrees with natural impressions of it is a very common experience for me. I could just stop using words I’ve never explicitly looked up the meaning of, but that doesn’t seem ideal. I’m starting to wonder if dictionaries are the problem. Maybe there aren’t supposed to be dictionaries. Maybe there’s something very unnatural about them and they’re preventing linguistic evolution that would have been useful. OTOH, there is also something very unnatural about a single language being spoken by like a billion humans or whatever it is, and English’s unnaturally large vocabulary should probably be celebrated.
To clarify, it makes me angry to see someone assuming moral realism with such confidence that they might declare the most important industrial safety project in the history of humanity to be a foolish waste of time. The claim that there could be a single objectively correct human morality is not compatible with anthropology, human history, or the present political reality. It could still be true, but there is not sufficient reason to act as if it is definitely true. There should be more humility here than there is.
My first impression of a person who then goes on to claim that there is an objective, not just human, but universal morality that could bring unaligned machines into harmony with humanity is that they are lying to sell books. This is probably not the case (lying about that would be very stupid), but it’s a hypothesis that I have to take seriously. When a person says something like that, it has the air of a preacher telling a nice lie that they think will endear them to people and bring together a pleasant congregation around that unifying myth of the objectively correct universal morality. And maybe it will, but they need to find a different myth, because this one will endanger everything they value.
It’s not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.
If a theory doesn’t offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.
We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong. It doesn’t make either theory A or B more likely or something when this happens; it just means there are two criticisms not one.
It’s a bad thing if ideas can’t be criticised at all, but it’s also a bad thing if the relationship of mutual criticism is cyclic, if it doesn’t have an obvious foundation or crux.
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?
> And it’s always possible both are wrong, anyway
Kind of, but “everything is wrong” is vulgar scepticism.
> It’s a bad thing if ideas can’t be criticised at all, but it’s also a bad thing if the relationship of mutual criticism is cyclic, if it doesn’t have an obvious foundation or crux.
Do you have an example? I can’t think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they’re contrived)
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
> Do you have a concrete example?
I think some of the criticisms of inductivism Popper offered are like this. Even if Popper was wrong about big chunks of critical rationalism, it wouldn’t necessarily invalidate the criticisms. Example: A proof of the impossibility of inductive probability.
(Note: I don’t think Popper was wrong but I’m also not sure it’s necessary to discuss that now if we disagree; just wanted to mention)
> And it’s always possible both are wrong, anyway
> Kind of, but “everything is wrong” is vulgar scepticism.
I’m not suggesting anything I said was a reason to think both theories wrong; I listed it because it was a possibility I didn’t mention in the other paragraphs, and it’s a bit of a trivial case for this stuff (i.e. if we come up with a reason both are wrong then we don’t have to worry about them anymore, if we can’t answer that criticism).
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
The other options need to be acceptable to both parties!
Sure, or the parties need a rational method of resolving a disagreement on acceptability. I’m not sure why that’s particularly relevant, though.
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
I don’t see how that is an example, principally because it seems wrong to me.
You didn’t quote an example—I’m unsure if you meant to quote a different part?
In any case, what you’ve quoted isn’t an example, and you don’t explain why it seems wrong or what about it is an issue. Do you mean that cases exist where there is an infinite regress and it’s not soluble with other methods?
I’m also not sure why this is particularly relevant.
Are we still talking about the below?
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
> Do you have a concrete example?
I did give you an example (one of Popper’s arguments against inductivism).
A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction. A criticism of that person’s criticism does not necessarily relate to that person’s epistemology, and vice versa.
> Sure, or the parties need a rational method of resolving a disagreement on acceptability. I’m not sure why that’s particularly relevant, though.
The relevance is that CR can’t guarantee that any given dispute is resolvable.
> Do you have a concrete example?
> I did give you an example (one of Popper’s arguments against inductivism)
But I don’t count it as an example, since I don’t regard it as correct, let alone as being a valid argument with the further property of floating free of questionable background assumptions.
In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So “induction must be based on bivalent logic” is an assumption.
> A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction.
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
> The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it’s an explanatory conclusion.
I’m not convinced we can get anywhere productive continuing this discussion. If you don’t think contradictions are bad, it feels like there’s going to be a lot of work finding common ground.
> But I don’t count it as an example, since I don’t regard it as correct [...]
This is irrational. Examples of relationships do not depend on whether the example is real or not. All that’s required is that the relationship is clear, whether each of us judges the idea itself as true or not doesn’t matter in this case. We don’t need to argue this point anyway, since you provided an example:
> In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So “induction must be based on bivalent logic” is an assumption.
Cool, so do you see how the argument you made is separate from whether inductivism is right or not?
Your argument proposes a criticism of Popper’s argument. The criticism is your conjecture that Popper made a mistake. Your criticism doesn’t rely on whether inductivism is right or not, just whether it’s consistent or not (and consistent according to some principles you hint at). Similarly, if Popper did make a mistake with that argument, it doesn’t mean that CR is wrong, or that Inductivism is wrong; it just means Popper’s criticism was wrong.
Curiously, you say:
> But I don’t count it as an example, since I don’t regard it as correct,
Do you count yourself a Bayesian or Inductivist? What probability did you assign to it being correct? And what probability do you generally assign to a false-positive result when you evaluate the correctness of examples like this?
Firstly, epistemology goes first. You don’t know anything about reality without having the means to acquire knowledge. Secondly, I didn’t say the PNC was actually false.
> This is irrational. Examples of relationships do not depend on whether the example is real or not
So there is a relationship between the Miller and Popper paper’s conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn’t depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
No, it proposes a criticism of your argument … the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
> So there is a relationship between the Miller and Popper paper’s conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn’t depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
> No, it proposes a criticism of your argument … the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
I didn’t claim that paper made no assumptions. I claimed that refuting that argument^[1] would not refute CR, and vice versa. Please review the thread; I think there have been some significant miscommunications. If something’s unclear to you, you can quote it to point it out.
> Firstly, epistemology goes first. You don’t know anything about reality without having the means to acquire knowledge.
Inductivism is not compatible with this—it has no way to bootstrap except by some other, more foundational epistemic factors.
Also, you didn’t really respond to my point or the chain of discussion-logic before that. I said an internal contradiction would be a way to refute an idea (as a second example when you asked for examples). you said contradictions being bad is an assumption. i said no, it’s a conclusion, and offered an explanation (which you’ve ignored). In fact, through this discussion you haven’t—as far as I can see—actually been interested in figuring out a) what anything else things or b) where and what you might be wrong about.
> Secondly, I didn’t say the PNC was actually false.
I don’t think there’s any point talking about this, then. We haven’t had any meaningful discussion about it and I don’t see why we would.
> I didn’t claim that paper made no assumptions. I claimed that refuting that argument[1] would not refute CR, and vice versa
Theoretically, if CR consists of a set of claims, then refuting one claim wouldn’t refute the rest. In practice, critrats are dogmatically wedded to the non-existence of any form of induction.
> Inductivism is not compatible with this—it has no way to bootstrap except by some other, more foundational epistemic factors.
I don’t particularly identify as an inductivist, and I don’t think that the critrat version of inductivism is what self-identified inductivists believe in.
> i said no, it’s a conclusion, and offered an explanation (which you’ve ignored)
Conclusion from what? The conclusion will be based on some deeper assumption.
> you haven’t—as far as I can see—actually been interested in figuring out a) what anything else things
What anyone else thinks?
I am very familiar with popular CR since I used to hang out in the same forums as Curi. I’ve also read some of the great man’s works.
Anecdote time: after a long discussion about the existence of any form of induction, on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn’t suitable for science.
But of course the true believing critrats weren’t convinced by Word of God.
Secondly, I didn’t say it was the PNC was actually false.
I don’t think there’s any point talking about this, then.
The point is that every claim in general depends on assumptions. So, in particular, the critrats don’t have a disproof of induction that floats free of assumptions.
I am uncomfortable with this practice. I think I am banned from participating in curi’s forum now anyway due to my comments here so it doesn’t affect me personally but it is a little strange to have this list with people’s personal information up.
Anecdote time: after a long discussion about the existence of any form of induction , on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago , and his reply was that it existed , but wasn’t suitable for science.
What anyone else thinks? I am very familiar with popular CR since I used to hang out in the same forums as Curi. I’ve also read some if the great man’s works.
If you point a gun at a bayesian’s head and force them to make a bet against a single outcome, they’re always supposed to be able to. It’s just, there are often reasons not to.
Why believe anything! There’s a sense in which a bayesian doesn’t have any beliefs, especially beliefs with unanswered criticisms. The great thing is that you don’t need to have beliefs to methodically do your best to optimize expected utility. You can operate well amid uncertainty. For instance, I can recommend that you take vitamin D supplements just in case the Joe Rogan interview that the YouTube algorithm served me yesterday was true: the one about how vitamin D is crucial for the respiratory immune system and how covid severity rates differ enormously depending on it. I don’t need to confirm that it’s true by trying to assess primary evidence; I don’t need to, in every sense, “believe”, because vitamin D is cheap, you should probably be taking it anyway for other reasons, and I have other stuff that I need to be reading right now.
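A minimal sketch of that kind of reasoning. The credence, utilities, and cost below are entirely made-up illustrative assumptions, not claims about vitamin D; the point is only that a recommendation can be positive in expectation even while the underlying claim is probably false.

```python
# Hypothetical numbers, purely illustrative: the decision is driven by
# expected utility, not by a settled belief in the claim.
p_claim_true = 0.2          # credence that the vitamin D claim is true
benefit_if_true = 50.0      # utility gained if it's true and you supplement
side_benefit = 5.0          # other known benefits of supplementing anyway
cost = 2.0                  # money/effort of supplementing

ev_take = p_claim_true * benefit_if_true + side_benefit - cost
ev_skip = 0.0

# Recommend taking it despite assigning the claim only 20% credence.
decision = "take" if ev_take > ev_skip else "skip"
```

Note that the recommendation here survives even if `p_claim_true` drops quite low, because the cost is small and the side benefits alone nearly cover it.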
In conclusion I don’t see many substantial epistemological differences. I’ve heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly, that values must, from an engineering feasibility perspective, be learned; that it would be in some way hard to make an AGI that wouldn’t need to be “raised” into a value system by a big open society; that a thing with ‘wrong values’ couldn’t participate in an open society [and open societies will be stronger] and so won’t pose a major threat), but it’s not obvious to me how those claims bear on bayesian epistemology.
Maybe something to do with the ease with which people who like decision theory can conceive of and describe very fast-growing non-human-aligned agents? While DD would claim that decision theory’s superintelligences are unrealistic to the point of inapplicability, and that a real process of making an AGI would tend to take a long time and involve a lot of human intervention?
A Bayesian does have beliefs about the probability of various outcomes even if there are unanswered criticisms involved. Generally, the idea is that people examine criticisms more because they believe that the opportunity cost to answer the criticism is worth it and not just because they are unanswered.
Yes, knowledge creation is an unending, iterative process. It could only end if we come to the big objective truth, but that can’t happen (the argument for why is in BoI—the beginning of infinity).
I think this is true of any two *rational* people with sufficient knowledge, and it’s rationality, not bayesianism, that’s important. If two partially *irrational* bayesians talk, then there’s no reason to think they’d reach agreement on ~everything.
There is a subtle case with regards to creative thought, though: take two people who agree on ~everything. One of them has an idea, they now don’t agree on ~everything (but can get back to that state by talking more).
WRT “sufficient knowledge”: the two ppl need methods of discussing which are rational, and rational ways to resolve disagreements and impasse chains. They also need the right attitudes about solving problems: namely, that any problem they run into in the discussion is able to be solved, and that one or both of them can come up with ways to deal with *any* problem when it arises.
If it were meaningless I wouldn’t have had to add “in an absolute sense”. Just because an explanation is wrong in an *absolute* sense (i.e. it doesn’t perfectly match reality) does not mean it’s not *useful*. Fallibilism generally says it’s okay to believe things that are false (which, in some sense, all explanations are); however, there are conditions on when that’s okay, like: there are no known unanswered criticisms and no alternatives.
Since BoI there has been more work on this problem and the reasoning around when to call something “true” (practically speaking) has improved—I think. Particularly:
Knowledge exists relative to *problems*
Whether knowledge applies or is correct or not can be evaluated rationally because we have *goals* (sometimes these goals are not specific enough, and there are generic ways of making your goals arbitrarily specific)
Roughly: true things are explanations/ideas which solve your problem, have no known unanswered criticism (i.e. are not refuted), and no alternatives which have no known unanswered criticisms
something is wrong if the conjecture that it solves the problem is refuted (and that refutation is unanswered)
note: a criticism of an idea is itself an idea, so can be criticised (i.e. the first criticism is refuted by a second criticism) - this can be recursive and potentially go on forever (though we know ways to make sure it doesn’t).
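The refuted/non-refuted recursion above can be sketched as a tiny data structure. This is my own illustrative rendering, not anything taken from yes/no philosophy itself: an idea counts as refuted iff at least one criticism of it is itself non-refuted.

```python
class Idea:
    """An idea; criticisms of it are themselves Ideas, so they can be criticised."""

    def __init__(self, name, criticisms=None):
        self.name = name
        self.criticisms = criticisms or []

    def refuted(self):
        # Refuted iff some criticism of this idea stands, i.e. is itself
        # non-refuted. Note this naive recursion would not terminate on a
        # cyclic chain of criticisms; the text alludes to ways to prevent that.
        return any(not c.refuted() for c in self.criticisms)


c2 = Idea("criticism of the criticism")
c1 = Idea("criticism of the theory", [c2])
theory = Idea("theory", [c1])
# c1 is refuted by c2, so the theory has no *unanswered* criticism
# and comes out non-refuted.
```

The status of the theory flips as the chain grows: with only `c1` it would be refuted; answering `c1` with `c2` restores it to non-refuted.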
I think he’s in a tough spot to try and explain complex, subtle relationships in epistemology using a language where the words and grammar have been developed, in part, to be compatible with previous, incorrect epistemologies.
I don’t think he defines things poorly (at least typically); and would acknowledge an incomplete/fuzzy definition if he provided one. (Note: one counterexample is enough to refute this claim I’m making)
I think you misunderstand me.
let’s say you wanted a pet: we need to make a conjecture about what to buy you that will make you happy (hopefully without developing regret later). the possible set of pets to start with is all the things that anyone has ever called a pet.
with something like this there will be lots of other goals, background goals, which we need to satisfy but don’t normally list. An example is that the pet doesn’t kill you, so we remove snakes, elephants, and other things that might hurt you. there are other background goals like the lifespan of the pet or ongoing cost; adopting you a cat with operable cancer isn’t a good solution.
there are maybe other practical goals too, like it should be an animal (no pet rocks), should be fluffy (so no fish, etc), shouldn’t cost more than $100, and yearly cost is under $1000 (excluding medical but you get health insurance for that).
maybe we do this sort of refinement a bit more and get a list like: cat, dog, rabbit, mouse
you might be *happy* with any of them, but could you be *happier* with one than any other; is there a *best* pet? **note: this is not an optimisation problem** b/c we’re not turning every solution into a single unit (e.g. your ‘happiness index’); we’re providing *decisive reasons* for why an option should or shouldn’t be included. We’ve also been using this term “happy”, but it’s more than just that; it’s got other important things in there. the important thing, though, is that it’s your *preference* and the solution matches it (i.e. each of the goals we introduce is in fact a goal of yours; put another way: the conditions we introduce correspond directly and accurately to a goal)
this is the sort of case where there’s no gun to anyone’s head, but we can continue to refine down to a list of exactly **one** option (or zero). let’s say you wanted an animal you could easily play with → then rabbit,mouse are excluded, so we have options: cat,dog. If you’d prefer an animal that wasn’t a predator—both cat,dog excluded and we get to zero (so we need to come up with new options or remove a goal). If instead you wanted a pet that you could easily train to use a litter tray, well we can exclude a dog so you’re down to one. Let’s say the litter tray is the condition you imposed.
What happens if I remember ferrets can be pets and I suggest that? well now we need a *new* goal to find which of the cat or ferret you’d prefer.
Note: for most things we don’t go to this level of detail b/c we don’t need to; like if you have multiple apps to choose from that satisfy all your goals you can just choose one. If you find out a reason it’s not good, then you’ve added a new goal (if you weren’t originally mistaken, that is) and can go back to the list of other options.
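The elimination process walked through above can be sketched as a simple filter. The attributes and goal names here are illustrative assumptions standing in for the goals discussed (safety, fluffiness, playfulness, litter training); the method is just: apply each goal as a decisive criterion and keep only the options that survive.

```python
# Candidate pets with assumed attributes (illustrative, not authoritative).
pets = {
    "cat":    {"safe": True,  "fluffy": True,  "playful": True,  "litter_trainable": True},
    "dog":    {"safe": True,  "fluffy": True,  "playful": True,  "litter_trainable": False},
    "rabbit": {"safe": True,  "fluffy": True,  "playful": False, "litter_trainable": True},
    "mouse":  {"safe": True,  "fluffy": True,  "playful": False, "litter_trainable": True},
    "snake":  {"safe": False, "fluffy": False, "playful": False, "litter_trainable": False},
}

# Each goal is a decisive filter, not a weight in an optimisation.
goals = ["safe", "fluffy", "playful", "litter_trainable"]

options = set(pets)
for goal in goals:
    options = {p for p in options if pets[p][goal]}
# options == {"cat"}: the goals decisively select exactly one option.
```

If a new candidate (the ferret) is remembered, it gets added to `pets` and a *new* goal is needed to decide between the survivors, which matches the note about suggesting ferrets above.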
Note 2: The method and framework I’ve just used wrt the pet problem is something called yes/no philosophy and has been developed by Elliot Temple over the past ~10+ years. Here are some links:
Argument · Yes or No Philosophy, Curiosity – Rejecting Gradations of Certainty, Curiosity – Critical Rationalism Epistemology Explanations, Curiosity – Critical Preferences and Strong Arguments, Curiosity – Rationally Resolving Conflicts of Ideas, Curiosity – Explaining Popper on Fallible Scientific Knowledge, Curiosity – Yes or No Philosophy Discussion with Andrew Crawshaw
Note 3: During the link-finding exercise I found this: “All ideas are either true or false and should be judged as refuted or non-refuted and not given any other status – see yes no philosophy.” (credit: Alan Forrester) I think this is a good way to look at it; *technically and epistemically speaking:* true/false is not a judgement we can make, but refuted/non-refuted *is*. we use refuted/non-refuted as a proxy for false/true when making decisions, because (as fallible beings) we cannot do any better than that.
I’m curious about how a bayesian would tackle that problem. Do you just stop somewhere and say “the cat has a higher probability so we’ll go with that?” Do you introduce goals like I did to eliminate options? Is the elimination of those options equivalent to something like: reducing the probability of those options being true to near-zero? (or absolute zero?) Can a bayesian use this method to eliminate options without doing probability stuff? If a bayesian *can*, what if I conjecture that it’s possible to *always* do it for *all* problems? If that’s the case there would be a way to decisively reach a single answer—so no need for probability. (There’s always the edge case that there was a mistake somewhere, but I don’t think there’s a meaningful answer to problems like “P(a mistake in a particular chain of reasoning)” or “P(the impact of a mistake is that the solution we came to changes)”. note: those P(__) statements are within a well defined context, like an exact and particular chain of reasoning/explanation.)
So we can make decisions.
Yes you do—you need a theory of expected utility; how to measure it, predict it, manipulate it, etc. You also need a theory of how to use things (b/c my expected utility of amazing tech I don’t know how to use is 0). You need to believe these theories are true, otherwise you have no way to calculate a meaningful value for expected utility!
Yes, I additionally claim we can operate **decisively**.
It matters more for big things, like SENS and MIRI. Both are working on things other than key problems; there is no good reason to think they’ll make significant progress b/c there are other more foundational problems.
I agree practically a lot of decisions come out the same.
I don’t know why they would be risible—nobody has a good reason why his ideas are wrong to my knowledge. They refute a lot of the fear-mongering that happens about AGI. They provide reasons for why a paperclip machine isn’t going to turn all matter into paperclips. They’re important because they refute big parts of theories from thinkers like Bostrom. That’s important because time, money, and effort are being spent in the course of taking Bostrom’s theories seriously, even though we have good reasons they’re not true. That could be time, money, and effort spent on more important problems like figuring out how creativity works. That’s a problem which would actually lead to the creation of an AGI.
Calling unanswered criticisms *risible* seems irrational to me. Sure, unexpected answers could be funny the first time you hear them (though this just sounds like ppl being mean, not like it was the punchline to some untold joke), but if someone makes a serious point and you dismiss it because you think it’s silly, then you’re either being irrational or you have a good, robust reason it’s not true.
He doesn’t claim this at all. From memory the full argument is in Ch7 of BoI (though has dependencies on some/all of the content in the first 6 chapters, and some subtleties are elaborated on later in the book). He expressly deals with the case where an AGI can run like 20,000x faster than a human (i.e. arbitrarily fast). He also doesn’t presume it needs to be raised like a human child or take the same resources/attention/etc.
Have you read much of BoI?
I think you’d probably consider the way we approach problems of this scale to be popperian in character. We open ourselves to lots of claims and talk about them and criticize them. We try to work them into a complete picture of all of the options and then pick the best one, according to a metric that is not quite utility, because the desire for a pet is unlikely to be an intrinsic one.
The gun to our head in this situation is our mortality or aging that will eventually close the window of opportunity to enjoying having a pet.
I’m not sure how to relate to this as an analytical epistemological method, though. Most of the work, for me, would involve sitting with my desire for a pet and interrogating it, knowing that it isn’t at all immediately clear why I want one. I would try to see if it was a malformed expression of hunting or farming instincts. If I found that it was, the desire would dissipate, the impulse would recognize that it wouldn’t get me closer to the thing that it wants.
Barring that, I would be inclined to focus on dogs, because I know that no other companion animal has evolved to live alongside humans and enjoy it in the same way that dogs have. I’m not sure where that resolution came from.
What I’m getting at is that most of the interesting parts of this problem are inarticulable. Looking for opportunities to apply analytic methods or articulable epistemic processes doesn’t seem interesting to me at all.
A reasonable person’s approach to solving most problems right now is to ask a huge machine learning system that nobody understands to recommend some articles from an incomprehensibly huge set of people.
Real world examples of decisionmaking generally aren’t solvable, or reducible to optimal methods.
Do I believe in mathematics? I can question the applicability of a mathematical model to a situation.
It’s probably worth mentioning that even mathematical claims aren’t beyond doubt, as mathematical claims can be arrived at in error (cosmic rays flipping bits) and it’s important that we’re able to notice and revert our position when that happens.
My impression from observed usage was that “risible” meant “spurious, inspiring anger”. Finding that the dictionary definition of a word disagrees with natural impressions of it is a very common experience for me. I could just stop using words I’ve never explicitly looked up the meaning of, but that doesn’t seem ideal. I’m starting to wonder if dictionaries are the problem. Maybe there aren’t supposed to be dictionaries. Maybe there’s something very unnatural about them and they’re preventing linguistic evolution that would have been useful. OTOH, there is also something very unnatural about a single language being spoken by like a billion humans or whatever it is, and English’s unnaturally large vocabulary should probably be celebrated.
To clarify, it makes me angry to see someone assuming moral realism with such confidence that they might declare the most important industrial safety project in the history of humanity to be a foolish waste of time. The claim that there could be a single objectively correct human morality is not compatible with anthropology, human history, or the present political reality. It could still be true, but that has not been established; there is not sufficient reason to act as if it is definitely true. There should be more humility here than there is.
My first impression of a person who then goes on to claim that there is an objective, not just human but universal, morality that could bring unaligned machines into harmony with humanity is that they are lying to sell books. This is probably not the case, since lying about that would be very stupid, but it’s a hypothesis that I have to take seriously. When a person says something like that, it has the air of a preacher telling a nice lie that they think will endear them to people and bring together a pleasant congregation around the unifying myth of an objectively correct universal morality. Maybe it will, but they need to find a different myth, because this one will endanger everything they value.
I haven’t read BoI. I’ve been thinking about it.
Most fraught ideas are mutually refuting: A can be refuted assuming B, and B can be refuted assuming A.
It’s not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.
If a theory doesn’t offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.
We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong. It doesn’t make either theory A or B more likely or something when this happens; it just means there are two criticisms not one.
And it’s always possible both are wrong, anyway.
It’s a bad thing if ideas can’t be criticised at all, but it’s also a bad thing if the relationship of mutual criticism is cyclic, if it doesn’t have an obvious foundation or crux.
Do you have a concrete example?
Kind of, but “everything is wrong” is vulgar scepticism.
Do you have an example? I can’t think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they’re contrived)
I think some of the criticisms of inductivism Popper offered are like this. Even if Popper was wrong about big chunks of critical rationalism, it wouldn’t necessarily invalidate the criticisms. Example: A proof of the impossibility of inductive probability.
(Note: I don’t think Popper was wrong but I’m also not sure it’s necessary to discuss that now if we disagree; just wanted to mention)
I’m not suggesting anything I said was a reason to think both theories wrong, I listed it because it was a possibility I didn’t mention in the other paragraphs, and it’s a bit of a trivial case for this stuff (i.e. if we come up with a reason both are wrong then we don’t have to worry about them anymore if we can’t answer that criticism)
The other options need to be acceptable to both parties!
I don’t see how that is an example, principally because it seems wrong to me.
Sure, or the parties need a rational method of resolving a disagreement on acceptability. I’m not sure why that’s particularly relevant, though.
You didn’t quote an example—I’m unsure if you meant to quote a different part?
In any case, what you’ve quoted isn’t an example, and you don’t explain why it seems wrong or what about it is an issue. Do you mean that cases exist where there is an infinite regress and it’s not soluble with other methods?
I’m also not sure why this is particularly relevant.
Are we still talking about the below?
I did give you an example (one of Popper’s arguments against inductivism).
A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction. A criticism of that person’s criticism does not necessarily relate to that person’s epistemology, and vice versa.
The relevance is that CR can’t guarantee that any given dispute is resolvable.
But I don’t count it as an example, since I don’t regard it as correct, let alone as being a valid argument with the further property of floating free of questionable background assumptions.
In particular, it is based on bivalent logic, where 1 and 0 are the only possible values; but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So “induction must be based on bivalent logic” is an assumption.
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it’s an explanatory conclusion.
I’m not convinced we can get anywhere productive continuing this discussion. If you don’t think contradictions are bad, it feels like there’s going to be a lot of work finding common ground.
This is irrational. Examples of relationships do not depend on whether the example is real or not. All that’s required is that the relationship is clear, whether each of us judges the idea itself as true or not doesn’t matter in this case. We don’t need to argue this point anyway, since you provided an example:
Cool, so do you see how the argument you made is separate from whether inductivism is right or not?
Your argument proposes a criticism of Popper’s argument. The criticism is your conjecture that Popper made a mistake. Your criticism doesn’t rely on whether inductivism is right or not, just whether it’s consistent or not (and consistent according to some principles you hint at). Similarly, if Popper did make a mistake with that argument, it doesn’t mean that CR is wrong, or that Inductivism is wrong; it just means Popper’s criticism was wrong.
Curiously, you say:
Do you count yourself a Bayesian or Inductivist? What probability did you assign to it being correct? And what probability do you generally assign to a false-positive result when you evaluate the correctness of examples like this?
Neither. You don’t have to treat epistemology as a religion.
Firstly, epistemology goes first. You don’t know anything about reality without having the means to acquire knowledge. Secondly, I didn’t say the PNC was actually false.
So there is a relationship between the Miller and Popper paper’s conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn’t depend on assumptions.
No, it proposes a criticism of your argument … the criticism that there is a contradiction between your claim that the paper makes no assumptions and the fact that it evidently does.
I didn’t claim that paper made no assumptions. I claimed that refuting that argument^[1] would not refute CR, and vice versa. Please review the thread, I think there’s been some significant miscommunications. If something’s unclear to you, you can quote it to point it out.
[1]: for clarity, the argument in Q: A proof of the impossibility of inductive probability.
Inductivism is not compatible with this—it has no way to bootstrap except by some other, more foundational epistemic factors.
Also, you didn’t really respond to my point or the chain of discussion-logic before that. I said an internal contradiction would be a way to refute an idea (as a second example, when you asked for examples). You said contradictions being bad is an assumption. I said no, it’s a conclusion, and offered an explanation (which you’ve ignored). In fact, through this discussion you haven’t—as far as I can see—actually been interested in figuring out a) what anyone else thinks or b) where and what you might be wrong about.
I don’t think there’s any point talking about this, then. We haven’t had any meaningful discussion about it and I don’t see why we would.
Theoretically, if CR consists of a set of claims, then refuting one claim wouldn’t refute the rest. In practice, critrats are dogmatically wedded to the non-existence of any form of induction.
I don’t particularly identify as an inductivist, and I don’t think that the critrat version of inductivism is what self-identified inductivists believe in.
Conclusion from what? The conclusion will be based on some deeper assumption.
What anyone else thinks? I am very familiar with popular CR since I used to hang out in the same forums as Curi. I’ve also read some of the great man’s works.
Anecdote time: after a long discussion about the existence of any form of induction on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn’t suitable for science.
But of course the true believing critrats weren’t convinced by Word of God.
The point is that every claim in general depends on assumptions. So, in particular, the critrats don’t have a disproof of induction that floats free of assumptions.
I just discovered he keeps a wall of shame for people who left his forum:
http://curi.us/2215-list-of-fallible-ideas-evaders
Are you on this wall?
I am uncomfortable with this practice. I think I am banned from participating in curi’s forum now anyway due to my comments here so it doesn’t affect me personally but it is a little strange to have this list with people’s personal information up.
Source?
Which forums? Under what name?