Some acceptable ideas: choose the one that is more quickly disproved or the one that does less damage if wrong
Maximizing expected utility does these things in a very simple way to the exact extent that it should.
Hmm...
My first impulse was to say “Bayes is not a method. It is a low-level language for epistemology. Methods emerge higher in the abstraction stack. Its fandom uses whatever methods work.”
But maybe there could be something reasonably describable as a bayesian method. But I don’t work enough with non-bayesian philosophers to immediately know how we are different, well enough to narrow in on it.
Is the bayesian method… trying always to understand things on the math/decision theory level? Confidently, Deutsch is not doing that. His understanding of AGI is utterly anthropomorphic and is not informed by decision theory; it is not informed by the study of reliable, deeply comprehensible essences of things, and so it will not come into play very much in the adjacent discipline of engineering.
I guess… to get closer to understanding what the bayesian methodology might be uniquely good at… I’ll have to reexamine some original reasoning that I have done with it. Understanding things in terms of decisions lets me identify concepts that are basic and necessary for consistent decisionmaking (paraphrasings of consistent decisionmaking: being free and agentic, not self-defeating or easily tricked). That let me narrow in on just the aspects of the hard problem of consciousness that must be, in some sense, real. That led me to conclusions like “fish aren’t important moral subjects, because even though they’re clearly capable of suffering, experiences have magnitude, and theirs must be negligible; for it to be other than negligible, something astronomically unlikely would have needed to happen, so it basically must be negligible.” Which means I get to be a pescatarian rather than a vegan, which is a very immediately useful realization to have arrived at.
If that argument doesn’t make sense to you, well that might mean that we’ve just identified something that bayesian/decision theoretic reasoning can do, that can’t be done without it.
I would be interested to know how Mirror Chamber strikes you though, I haven’t tried to get non-bayesians to read it.
> But maybe there could be something reasonably describable as a bayesian method. But I don’t work enough with non-bayesian philosophers to immediately know how we are different, well enough to narrow in on it.
I don’t know how you’d describe Bayesianism atm but I’ll list some things I think are important context or major differences. I might put some things in quotes as a way to be casual but LMK if any part is not specific enough or ambiguous or whatever.
- both CR and Bayesianism answer Qs about knowledge and judging knowledge; they’re incompatible b/c they make incompatible claims about the world, but they overlap.
- CR says that truth is objective
- explanations are the foundation of knowledge, and it’s from explanations that we gain predictive power
- no knowledge is derived from the past; that’s an illusion b/c we’re already using pre-existing explanations as foundations
  - new knowledge can be created to explain things about the past we didn’t understand, but that’s new knowledge in the same way the original explanation was once new knowledge
  - e.g. the axial-tilt theory of seasons; no amount of past experience helped understand what’s *really* happening, someone had to make a conjecture in terms of geometry (and maybe Newtonian physics too)
- when we have two explanations for a single phenomenon they’re either the same, both wrong, or one is “right”
  - “right” is different from “true”—this is where fallibilism comes in (note: I don’t think you can talk about CR without talking about fallibilism; broadly they’re synonyms)
  - taken to its logical conclusions this means roughly that all our theories are wrong in an absolute sense and we’ll discover better and better explanations of the universe
  - this includes ~everything: anything we want to understand requires an explanation: quantum physics, knowledge creation, computer science, AGI, how minds work (which is actually the same general problem as AGI), including human minds, economics, why people choose particular ice-cream flavors
  - DD suggests in *The Beginning of Infinity* that we should rename scientific theories scientific “misconceptions” because that’s more accurate
- anyone can be mistaken about anything
- there are rational ways to choose *exactly one* explanation (or zero if none hold up)
- if we have a reason that some explanation is false, then there is no amount of “support” which makes it less likely to be false (this is what is meant by ‘criticism’). no objectively true thing has an objectively true reason that it’s false.
  - so we should believe only those things for which there are no unanswered criticisms
  - this is why some CR ppl are insistent on finishing and concluding discussions—if two people disagree then one must have knowledge of why the other is wrong, or they’re both wrong (or both don’t know enough, etc)
  - to refuse to finish a discussion is either to deny the counterparty the opportunity to correct an error (which was evidently important enough to start the discussion about), which is anti-knowledge and irrational, *or* to deny that you have an error (or that the error can be corrected), which is also anti-knowledge and irrational.
  - there are maybe things to discuss about practicality, but even if there are sometimes good reasons to drop conversations for practical purposes, that doesn’t explain why it happens so much.
that was less focused on differences/incompatibilities than I had in mind originally but hopefully it gives you some ideas.
> Is the bayesian method… trying always to understand things on the math/decision theory level? Confidently, Deutsch is not doing that.
Unless it’s maths/decision theory related, that’s right. CR/Fallibilism is more about reasoning; for example, an internal contradiction means an idea is wrong; there’s 0 probability it’s correct. Maybe someone alters the idea so it doesn’t have a contradiction, which means it needs to be judged again.
> His understanding of AGI is utterly anthropomorphic
I don’t think that’s the case. I think his understanding/theories of AGI don’t have anything to do with humans (besides that we’d create one—excluding aliens showing up or whatever). There’s a separate explanation for why AGI isn’t going to arise randomly e.g. out of a genetic ML algorithm.
> If that argument doesn’t make sense to you, well that might mean that we’ve just identified something that bayesian/decision theoretic reasoning can do, that can’t be done without it.
Well, we don’t agree about fish, but whether it makes sense or not depends on your meaning. If you mean that I understand your reasoning, I think I do. If you mean that I think the reasoning is okay, maybe from your principles but I don’t think it’s *right*. Like I think there are issues with it such that the explanation and conclusion shouldn’t be used.
ps: I realize that’s a lot of text to dump all at once, sorry about that. Maybe it’s a good idea to focus on one thing?
> Well, we don’t agree about fish [...] I don’t think it’s *right*
Understanding what you mean by “right”, I think I might agree; it’s not complete, it’s not especially close to certainty.
It’s difficult to apply the mirror chamber’s reduction of anthropic measure across different species (it was only necessitated for comparing over a pair of very similar experiences), and I’m not sure the biomass difference between fishbrain and humanbrain is such that anthropics can be used either. Meaning… well, we can conclude, from the amount of rock in the universe, the tiny number of humans in the universe, and our being humans instead of rock, that it is astronomically unlikely that anthropic measure binds in significant quantities to rock. If it did, we would almost certainly have woken up in a different sort of place. But for fish, perhaps the numbers are not large enough for us to draw a similar conclusion. (Again, I’m realizing the validity of that sort of argument doesn’t clearly follow from the mirror chamber, though I think it is suggested by it.)
I think my real reasons for going with pescatarianism are being fed from other sources here. It’s not just the anthropic measure thing. I’m also receiving a strong push from my friends in neuroscience, who claim that the neurology of fish is just way too simple to be given a lot of experiential weight, in the same way that a thermostat is too simple for us to think anything is suffering when … [reexamines the assumptions]...
Hmm. I no longer believe their reasoning there (I should talk to them again I guess). I have seen too many bastards say “but that’s merely a machine so it couldn’t have conscious experience” of systems that probably would have conscious experience, and here they are saying that a biological reinforcement learning system that observably learns from painful experience could not truly suffer. It’s not clear that there’s a difference between that and suffering. I think fish suffer. The quantity must be small, but this is not enough to conclude that it’s negligible.
(… qualia == the class of observations upon which indexical claims can be conditioned?? (I think I’m going to have to write this up properly and do a post))
On the note of *qualia* (providing in case it helps)
DD says this in BoI when he first uses the word:
> Intelligence in the general-purpose sense that Turing meant is one of a constellation of attributes of the human mind that have been puzzling philosophers for millennia; others include consciousness, free will, and meaning. A typical such puzzle is that of qualia (singular quale, which rhymes with ‘baalay’) – meaning the subjective aspect of sensations. So for instance the sensation of seeing the colour blue is a quale. Consider the following thought experiment. You are a biochemist with the misfortune to have been born with a genetic defect that disables the blue receptors in your retinas. Consequently you have a form of colour blindness in which you are able to see only red and green, and mixtures of the two such as yellow, but anything purely blue also looks to you like one of those mixtures. Then you discover a cure that will cause your blue receptors to start working. Before administering the cure to yourself, you can confidently make certain predictions about what will happen if it works. One of them is that, when you hold up a blue card as a test, you will see a colour that you have never seen before. You can predict that you will call it ‘blue’, because you already know what the colour of the card is called (and can already check which colour it is with a spectrophotometer). You can also predict that when you first see a clear daytime sky after being cured you will experience a similar quale to that of seeing the blue card. But there is one thing that neither you nor anyone else could predict about the outcome of this experiment, and that is: what blue will look like. Qualia are currently neither describable nor predictable – a unique property that should make them deeply problematic to anyone with a scientific world view (though, in the event, it seems to be mainly philosophers who worry about it).
and under “terminology” at the end of the chapter:
> Quale (plural qualia) The subjective aspect of a sensation.
I’d say bayesian epistemology’s stance is that there is one completely perfect way of understanding reality, but that it’s perhaps provably unattainable to finite things.
You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
(For some mysteries, like anthropic measure binding [the hard problem of consciousness], it seems provably impossible to ever test our models. We each have exactly one observation about what kinds of matter attract subjectivity, we can’t share data (solipsistic doubt), and we can never get more data (to “enter” a new locus of consciousness we would have to leave our old one, losing access to the observed experience that we had), but the question still matters a lot for measuring the quantity of experience over different brain architectures, so we need to have theories, even though objective truth can’t be attained.)
It’s good to believe that there’s an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it, and dialectical epistemologies like Popper’s can’t give you the framework you need to operate gracefully without ever getting objective truth.
We sometimes talk about Aumann’s agreement theorem, the claim that any two bayesians who, roughly speaking, talk for long enough will eventually come to agree about everything. This might serve the same social purpose as an advocacy of objectivity. Though I do not know whether it’s really true of humans that they will converge if they talk long enough, I still hold out faith that it will be one day, if we keep trying, if we try to get better at it.
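Not a proof of Aumann’s theorem, but a toy sketch of the mechanism it relies on: two agents with a common prior who pool all their evidence necessarily end up with the same posterior, even if their private posteriors disagreed. (The biases and flip sequences below are made up for illustration; the real theorem is subtler, since the agents only exchange posteriors, not raw evidence.)

```python
# Toy model: a coin's heads-probability is drawn from a shared (common) prior.
# Exact rational arithmetic so posterior equality can be checked exactly.
from fractions import Fraction as F

def posterior(prior, flips):
    """Bayesian update of a discrete prior over coin biases, given flips 'H'/'T'."""
    post = dict(prior)
    for f in flips:
        for bias in post:
            post[bias] *= bias if f == "H" else (1 - bias)
    total = sum(post.values())
    return {b: p / total for b, p in post.items()}

# common prior: heads-probability is 0.3, 0.5, or 0.7, equally likely
common_prior = {F(3, 10): F(1, 3), F(1, 2): F(1, 3), F(7, 10): F(1, 3)}

alice = posterior(common_prior, "HH")  # Alice privately saw two heads
bob = posterior(common_prior, "TT")    # Bob privately saw two tails
assert alice != bob                    # private posteriors disagree

# once all the evidence is common knowledge, the common prior forces agreement
assert posterior(common_prior, "HHTT") == posterior(common_prior, "TTHH")
```

The order the evidence arrives in doesn’t matter; only the common prior and the total evidence do, which is why the pooled posteriors coincide exactly.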
> taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense
Which means “wrong” is no longer a meaningful word. Do you think you can operate without having a word like “wrong”? Do you think you can operate without that concept? I think DD sometimes plays inflammatory word games, defining things poorly on purpose.
> there are rational ways to choose *exactly one* explanation (or zero if none hold up)
If you point a gun at a bayesian’s head and force them to make a bet against a single outcome, they’re always supposed to be able to. It’s just, there are often reasons not to.
> we should believe only those things for which there are no unanswered criticisms
Why believe anything! There’s a sense in which a bayesian doesn’t have any beliefs, especially beliefs with unanswered criticisms. The great thing is that you don’t need to have beliefs to methodically do your best to optimize expected utility. You can operate well amid uncertainty. For instance, I can recommend that you take vitamin D supplements just in case the Joe Rogan interview the YouTube algorithm served me yesterday (about how vitamin D is crucial for the respiratory immune system, and how covid severity rates differ enormously depending on it) was true. I don’t need to confirm that it’s true by trying to assess primary evidence. I don’t need to, in every sense, “believe”, because vitamin D is cheap, you should probably be taking it anyway for other reasons, and I have other stuff that I need to be reading right now.
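The vitamin D reasoning above can be written as a tiny expected-utility calculation. Every number here is an invented placeholder; the point is only that one action can dominate even at a low credence in the claim, so no “belief” is required before acting.

```python
# Sketch of acting under uncertainty without settling whether the claim is true.
# All utilities and the credence are invented placeholders for illustration.

def expected_utility(p_claim_true, action):
    """Expected utility of taking or skipping a cheap supplement."""
    if action == "take":
        cost = -1          # supplements are cheap
        side_benefit = 2   # worth taking anyway for other reasons
        covid_benefit = 50 # large payoff *if* the interview's claim is true
        return cost + side_benefit + p_claim_true * covid_benefit
    return 0  # "skip": no cost, no benefit

p = 0.2  # low credence in the claim; no primary evidence assessed
assert expected_utility(p, "take") > expected_utility(p, "skip")
# taking wins even at 20% credence, so the question needn't be resolved
```

With these placeholder numbers the recommendation holds for any credence above 0, which is the “operate well amid uncertainty” point in miniature.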
In conclusion I don’t see many substantial epistemological differences. I’ve heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly: that values must, from an engineering-feasibility perspective, be learned; that it would be in some way hard to make an AGI that wouldn’t need to be “raised” into a value system by a big open society; that a thing with ‘wrong values’ couldn’t participate in an open society [and open societies will be stronger] and so won’t pose a major threat), but it’s not obvious to me how those claims bear on bayesian epistemology.
Maybe something to do with the ease with which people who like decision theory can conceive of and describe very fast-growing non-human-aligned agents? While DD would claim that decision theory’s superintelligences are unrealistic to the point of inapplicability, and that a real process of making an AGI would tend to take a long time and involve a lot of human intervention?
> In conclusion I don’t see many substantial epistemological differences.
A Bayesian does have beliefs about the probability of various outcomes even if there are unanswered criticisms involved. Generally, the idea is that people examine criticisms because they believe the opportunity cost of answering the criticism is worth it, and not just because they are unanswered.
> I’d say bayesian epistemology’s stance is that there is one completely perfect way of understanding reality, but that it’s perhaps provably unattainable to finite things.
> You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
> [...]
> It’s good to believe that there’s an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it
Yes, knowledge creation is an unending, iterative process. It could only end if we come to the big objective truth, but that can’t happen (the argument for why is in BoI—the beginning of infinity).
> We sometimes talk about Aumann’s agreement theorem, the claim that any two bayesians who, roughly speaking, talk for long enough will eventually come to agree about everything.
I think this is true of any two *rational* people with sufficient knowledge, and it’s rationality, not Bayesianism, that’s important. If two partially *irrational* bayesians talk, then there’s no reason to think they’d reach agreement on ~everything.
There is a subtle case with regards to creative thought, though: take two people who agree on ~everything. One of them has an idea, they now don’t agree on ~everything (but can get back to that state by talking more).
WRT “sufficient knowledge”: the two ppl need methods of discussing which are rational, and rational ways to resolve disagreements and impasse chains. they also need attitudes about solving problems. namely that any problem they run into in the discussion is able to be solved and that one or both of them can come up with ways to deal with *any* problem when it arises.
> taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense
> Which means “wrong” is no longer a meaningful word. Do you think you can operate without having a word like “wrong”? Do you think you can operate without that concept?
If it were meaningless I wouldn’t have had to add “in an absolute sense”. Just because an explanation is wrong in an *absolute* sense (i.e. it doesn’t perfectly match reality) does not mean it’s not *useful*. Fallibilism generally says it’s okay to believe things that are false (which all explanations are, in some sense); however, there are conditions on those times, like: there are no known unanswered criticisms and no alternatives.
Since BoI there has been more work on this problem and the reasoning around when to call something “true” (practically speaking) has improved—I think. Particularly:
- knowledge exists relative to *problems*
- whether knowledge applies or is correct can be evaluated rationally because we have *goals* (sometimes these goals are not specific enough, and there are generic ways of making your goals arbitrarily specific)
- roughly: true things are explanations/ideas which solve your problem, have no known unanswered criticism (i.e. are not refuted), and have no alternatives which have no known unanswered criticisms
- something is wrong if the conjecture that it solves the problem is refuted (and that refutation is unanswered)
- note: a criticism of an idea is itself an idea, so it can be criticised (i.e. the first criticism is refuted by a second criticism). this can be recursive and potentially go on forever (tho we know ways to make sure they don’t).
> I think DD sometimes plays inflammatory word games, defining things poorly on purpose.
I think he’s in a tough spot to try and explain complex, subtle relationships in epistemology using a language where the words and grammar have been developed, in part, to be compatible with previous, incorrect epistemologies.
I don’t think he defines things poorly (at least typically); and would acknowledge an incomplete/fuzzy definition if he provided one. (Note: one counterexample is enough to refute this claim I’m making)
> there are rational ways to choose *exactly one* explanation (or zero if none hold up)
> If you point a gun at a bayesian’s head and force them to make a bet against a single outcome, they’re always supposed to be able to. It’s just, there are often reasons not to.
I think you misunderstand me.
let’s say you wanted a pet, we need to make a conjecture about what to buy you that will make you happy (hopefully without developing regret later). the possible set of pets to start with are all the things that anyone has ever called a pet.
with something like this there will be lots of other goals, background goals, which we need to satisfy but don’t normally list. An example is that the pet doesn’t kill you, so we remove snakes, elephants, and other things that might hurt you. there are other background goals like the lifespan of the pet or ongoing cost; adopting you a cat with operable cancer isn’t a good solution.
there are maybe other practical goals too, like it should be an animal (no pet rocks), should be fluffy (so no fish, etc), shouldn’t cost more than $100, and yearly cost is under $1000 (excluding medical but you get health insurance for that).
maybe we do this sort of refinement a bit more and get a list like: cat, dog, rabbit, mouse
you might be *happy* with any of them, but can you be *more happy* with one than any other; is there a *best* pet? **note: this is not an optimisation problem** b/c we’re not turning every solution into a single unit (e.g. your ‘happiness index’); we’re providing *decisive reasons* for why an option should or shouldn’t be included. We’ve also been using this term “happy” but it’s more than just that, it’s got other important things in there—the important thing, though, is that it’s your *preference* and it matches that (i.e. each of the goals we introduce are in fact goals of yours; put another way: the conditions we introduce correspond directly and accurately to a goal)
this is the sort of case where there’s no gun to anyone’s head, but we can continue to refine down to a list of exactly **one** option (or zero). let’s say you wanted an animal you could easily play with → then rabbit, mouse are excluded, so we have options: cat, dog. If you’d prefer an animal that wasn’t a predator—both cat and dog are excluded and we get to zero (so we need to come up with new options or remove a goal). If instead you wanted a pet that you could easily train to use a litter tray, well, we can exclude the dog, so you’re down to one. Let’s say the litter tray is the condition you imposed.
What happens if I remember ferrets can be pets and I suggest that? well now we need a *new* goal to find which of the cat or ferret you’d prefer.
Note: for most things we don’t go to this level of detail b/c we don’t need to; like if you have multiple apps to choose from that satisfy all your goals you can just choose one. If you find out a reason it’s not good, then you’ve added a new goal (if you weren’t originally mistaken, that is) and can go back to the list of other options.
Note 2: The method and framework I’ve just used wrt the pet problem is something called yes/no philosophy and has been developed by Elliot Temple over the past ~10+ years. Here are some links:
Note 3: During the link-finding exercise I found this: “All ideas are either true or false and should be judged as refuted or non-refuted and not given any other status – see yes no philosophy.” (credit: Alan Forrester) I think this is a good way to look at it; *technically and epistemically speaking:* true/false is not a judgement we can make, but refuted/non-refuted *is*. we use refuted/non-refuted as a proxy for false/true when making decisions, because (as fallible beings) we cannot do any better than that.
I’m curious about how a bayesian would tackle that problem. Do you just stop somewhere and say “the cat has a higher probability so we’ll go with that”? Do you introduce goals like I did to eliminate options? Is the elimination of those options equivalent to something like reducing the probability of those options being true to near-zero? (or absolute zero?) Can a bayesian use this method to eliminate options without doing probability stuff? If a bayesian *can*, what if I conjecture that it’s possible to *always* do it for *all* problems? If that’s the case there would be a way to decisively reach a single answer—so no need for probability. (There’s always the edge case where there was a mistake somewhere, but I don’t think there’s a meaningful answer to problems like “P(a mistake in a particular chain of reasoning)” or “P(the impact of a mistake is that the solution we came to changes)”—note: those P(__) statements are within a well-defined context, like an exact and particular chain of reasoning/explanation.)
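To make the comparison concrete, the elimination process above can be sketched as code, alongside the probability-flavoured variant the question asks about. The candidates and goals are the hypothetical ones from the pet example; this is only an illustration of the two bookkeeping styles, not a claim about what either methodology requires.

```python
# Yes/no-style elimination: each goal is a decisive filter, not a score.
candidates = {"cat", "dog", "rabbit", "mouse"}

# hypothetical goals from the pet example, each a decisive yes/no test
goals = {
    "easy to play with": lambda pet: pet in {"cat", "dog"},
    "easy to litter-train": lambda pet: pet in {"cat", "rabbit"},
}

for name, satisfies in goals.items():
    candidates = {pet for pet in candidates if satisfies(pet)}

assert candidates == {"cat"}  # goals refined until exactly one option survives

# The probability-flavoured version reaches the same place by sending refuted
# options to zero weight instead of deleting them:
weights = {"cat": 1.0, "dog": 1.0, "rabbit": 1.0, "mouse": 1.0}
for name, satisfies in goals.items():
    for pet in weights:
        if not satisfies(pet):
            weights[pet] = 0.0

surviving = {pet for pet, w in weights.items() if w > 0}
assert surviving == {"cat"}
```

When every goal is decisive, the two styles coincide; they only come apart when a goal downweights an option without refuting it outright.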
> Why believe anything!
So we can make decisions.
> The great thing is that you don’t need to have beliefs to methodically do your best to optimize expected utility
Yes you do—you need a theory of expected utility; how to measure it, predict it, manipulate it, etc. You also need a theory of how to use things (b/c my expected utility of amazing tech I don’t know how to use is 0). You need to believe these theories are true, otherwise you have no way to calculate a meaningful value for expected utility!
> You can operate well amid uncertainty
Yes, I additionally claim we can operate **decisively**.
> In conclusion I don’t see many substantial epistemological differences.
It matters more for big things, like SENS and MIRI. Both are working on things other than key problems; there is no good reason to think they’ll make significant progress b/c there are other more foundational problems.
I agree practically a lot of decisions come out the same.
> I’ve heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly, that values must, from an engineering feasibility perspective, be learned, that it would be in some way hard to make AGI that wouldn’t need to be “raised” into a value system by a big open society, that a thing with ‘wrong values’ couldn’t participate in an open society [and open societies will be stronger] and so won’t pose a major threat), but it’s not obvious to me how those claims bear on bayesian epistemology.
I don’t know why they would be risible—nobody has a good reason why his ideas are wrong to my knowledge. They refute a lot of the fear-mongering that happens about AGI. They provide reasons for why a paperclip machine isn’t going to turn all matter into paperclips. They’re important because they refute big parts of theories from thinkers like Bostrom. That’s important because time, money, and effort are being spent in the course of taking Bostrom’s theories seriously, even though we have good reasons they’re not true. That could be time, money, and effort spent on more important problems like figuring out how creativity works. That’s a problem which would actually lead to the creation of an AGI.
Calling unanswered criticisms *risible* seems irrational to me. Sure, unexpected answers could be funny the first time you hear them (though this just sounds like ppl being mean, not like it was the punchline to some untold joke), but if someone makes a serious point and you dismiss it because you think it’s silly, then you’re either irrational or you have a good, robust reason it’s not true.
> [...] and a real process of making one AGI would tend to take a long time and involve a lot of human intervention?
He doesn’t claim this at all. From memory the full argument is in Ch7 of BoI (though has dependencies on some/all of the content in the first 6 chapters, and some subtleties are elaborated on later in the book). He expressly deals with the case where an AGI can run like 20,000x faster than a human (i.e. arbitrarily fast). He also doesn’t presume it needs to be raised like a human child or take the same resources/attention/etc.
I think you’d probably consider the way we approach problems of this scale to be popperian in character. We open ourselves to lots of claims and talk about them and criticize them. We try to work them into a complete picture of all of the options and then pick the best one, according to a metric that is not quite utility, because the desire for a pet is unlikely to be an intrinsic one.
The gun to our head in this situation is our mortality or aging that will eventually close the window of opportunity to enjoying having a pet.
I’m not sure how to relate to this as an analytical epistemological method, though. Most of the work, for me, would involve sitting with my desire for a pet and interrogating it, knowing that it isn’t at all immediately clear why I want one. I would try to see if it was a malformed expression of hunting or farming instincts. If I found that it was, the desire would dissipate, the impulse would recognize that it wouldn’t get me closer to the thing that it wants.
Barring that, I would be inclined to focus on dogs, because I know that no other companion animal has evolved to live alongside humans and enjoy it in the same way that dogs have. I’m not sure where that resolution came from.
What I’m getting at is that most of the interesting parts of this problem are inarticulable. Looking for opportunities to apply analytic methods or articulable epistemic processes doesn’t seem interesting to me at all.
A reasonable person’s approach to solving most problems right now is to ask a huge machine learning system that nobody understands to recommend some articles from an incomprehensibly huge set of people.
Real world examples of decisionmaking generally aren’t solvable, or reducible to optimal methods.
> belief
Do I believe in mathematics? I can question the applicability of a mathematical model to a situation.
It’s probably worth mentioning that even mathematical claims aren’t beyond doubt, as mathematical claims can be arrived at in error (cosmic rays flipping bits) and it’s important that we’re able to notice and revert our position when that happens.
> risible
My impression from observed usage was that “risible” meant “spurious, inspiring anger”. Finding that the dictionary definition of a word disagrees with natural impressions of it is a very common experience for me. I could just stop using words I’ve never explicitly looked up the meaning of, but that doesn’t seem ideal. I’m starting to wonder if dictionaries are the problem. Maybe there aren’t supposed to be dictionaries. Maybe there’s something very unnatural about them and they’re preventing linguistic evolution that would have been useful. OTOH, there is also something very unnatural about a single language being spoken by like a billion humans or whatever it is, and English’s unnaturally large vocabulary should probably be celebrated.
To clarify, it makes me angry to see someone assuming moral realism with such confidence that they might declare the most important industrial safety project in the history of humanity to be a foolish waste of time. The claim that there could be a single objectively correct human morality is not compatible with anthropology, human history, or the present political reality. It could still be true, but there is not sufficient reason to act as if it is definitely true. There should be more humility here than there is.
My first impression of a person who then goes on to claim that there is an objective, not just human, but universal morality that could bring unaligned machines into harmony with humanity is that they are lying to sell books. This is probably not the case, lying about that would be very stupid, but it’s a hypothesis that I have to take seriously. When a person says something like that, it has the air of a preacher who is telling a nice lie that they think will endear them to people and bring together a pleasant congregation around that unifying myth of the objectively correct universal morality, and maybe it will, but they need to find a different myth, because this one will endanger everything they value.
It’s not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.
If a theory doesn’t offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.
We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong. It doesn’t make either theory A or B more likely or something when this happens; it just means there are two criticisms not one.
It’s a bad thing if ideas can’t be criticised at all, but it’s also a bad thing if the relationship of mutual criticism is cyclic, if it doesn’t have an obvious foundation or crux.
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?
And it’s always possible both are wrong, anyway
Kind of, but “everything is wrong” is vulgar scepticism.
> It’s a bad thing if ideas can’t be criticised at all, but it’s also a bad thing if the relationship of mutual criticism is cyclic, if it doesn’t have an obvious foundation or crux.
Do you have an example? I can’t think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they’re contrived)
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
> Do you have a concrete example?
I think some of the criticisms of inductivism Popper offered are like this. Even if Popper was wrong about big chunks of critical rationalism, it wouldn’t necessarily invalidate the criticisms. Example: A proof of the impossibility of inductive probability.
(Note: I don’t think Popper was wrong but I’m also not sure it’s necessary to discuss that now if we disagree; just wanted to mention)
> And it’s always possible both are wrong, anyway
> Kind of, but “everything is wrong” is vulgar scepticism.
I’m not suggesting anything I said was a reason to think both theories wrong, I listed it because it was a possibility I didn’t mention in the other paragraphs, and it’s a bit of a trivial case for this stuff (i.e. if we come up with a reason both are wrong then we don’t have to worry about them anymore if we can’t answer that criticism)
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
The other options need to be acceptable to both parties!
Sure, or the parties need a rational method of resolving a disagreement on acceptability. I’m not sure why that’s particularly relevant, though.
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
I don’t see how that is an example, principally because it seems wrong to me.
You didn’t quote an example—I’m unsure if you meant to quote a different part?
In any case, what you’ve quoted isn’t an example, and you don’t explain why it seems wrong or what about it is an issue. Do you mean that cases exist where there is an infinite regress and it’s not soluble with other methods?
I’m also not sure why this is particularly relevant.
Are we still talking about the below?
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?
I did give you an example (one of Popper’s arguments against inductivism).
A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction. A criticism of that person’s criticism does not necessarily relate to that person’s epistemology, and vice versa.
> Sure, or the parties need a rational method of resolving a disagreement on acceptability. I’m not sure why that’s particularly relevant, though.
The relevance is that CR can’t guarantee that any given dispute is resolvable.
> Do you have a concrete example?
> I did give you an example (one of Popper’s arguments against inductivism)
But I don’t count it as an example, since I don’t regard it as correct, let alone as a valid argument with the further property of floating free of questionable background assumptions.
In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So “induction must be based on bivalent logic” is an assumption.
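To make the probabilistic-logic point concrete, here is a minimal sketch (the likelihood numbers are made up purely for illustration): under Bayes’ rule, a proposition that starts with a non-extreme prior stays strictly between 0 and 1 no matter how many rounds of confirming evidence it receives.

```python
# Minimal sketch with made-up likelihoods: a proposition with a non-extreme
# prior never reaches exactly 0 or 1 from ordinary (non-certain) evidence.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

p = 0.5
for _ in range(10):  # ten rounds of confirming evidence
    p = bayes_update(p, 0.9, 0.3)

print(p)  # close to 1, but still strictly less than 1
```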
> A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction.
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
> The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it’s an explanatory conclusion.
I’m not convinced we can get anywhere productive continuing this discussion. If you don’t think contradictions are bad, it feels like there’s going to be a lot of work finding common ground.
> But I don’t count it as an example, since I don’t regard it as correct [...]
This is irrational. Examples of relationships do not depend on whether the example is real or not. All that’s required is that the relationship is clear, whether each of us judges the idea itself as true or not doesn’t matter in this case. We don’t need to argue this point anyway, since you provided an example:
> In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So “induction must be based on bivalent logic” is an assumption.
Cool, so do you see how the argument you made is separate from whether inductivism is right or not?
Your argument proposes a criticism of Popper’s argument. The criticism is your conjecture that Popper made a mistake. Your criticism doesn’t rely on whether inductivism is right or not, just whether it’s consistent or not (and consistent according to some principles you hint at). Similarly, if Popper did make a mistake with that argument, it doesn’t mean that CR is wrong, or that Inductivism is wrong; it just means Popper’s criticism was wrong.
Curiously, you say:
> But I don’t count it as an example, since I don’t regard it as correct,
Do you count yourself a Bayesian or Inductivist? What probability did you assign to it being correct? And what probability do you generally assign to a false-positive result when you evaluate the correctness of examples like this?
Firstly, epistemology goes first. You don’t know anything about reality without having the means to acquire knowledge. Secondly, I didn’t say the PNC was actually false.
> This is irrational. Examples of relationships do not depend on whether the example is real or not
So there is a relationship between the Miller and Popper paper’s conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn’t depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
No, it proposes a criticism of your argument … the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
> So there is a relationship between the Miller and Popper paper’s conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn’t depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
> No, it proposes a criticism of your argument … the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
I didn’t claim that the paper made no assumptions. I claimed that refuting that argument^[1] would not refute CR, and vice versa. Please review the thread; I think there have been some significant miscommunications. If something’s unclear to you, you can quote it to point it out.
> Firstly, epistemology goes first. You don’t know anything about reality without having the means to acquire knowledge.
Inductivism is not compatible with this—it has no way to bootstrap except by some other, more foundational epistemic factors.
Also, you didn’t really respond to my point or the chain of discussion-logic before that. I said an internal contradiction would be a way to refute an idea (as a second example when you asked for examples). you said contradictions being bad is an assumption. i said no, it’s a conclusion, and offered an explanation (which you’ve ignored). In fact, through this discussion you haven’t—as far as I can see—actually been interested in figuring out a) what anything else things or b) where and what you might be wrong about.
> Secondly, I didn’t say the PNC was actually false.
I don’t think there’s any point talking about this, then. We haven’t had any meaningful discussion about it and I don’t see why we would.
> I didn’t claim that the paper made no assumptions. I claimed that refuting that argument[1] would not refute CR, and vice versa
Theoretically, if CR consists of a set of claims , then refuting one claim wouldn’t refute the rest. In practice , critrats are dogmatically wedded to the non existence of any form of induction.
> Inductivism is not compatible with this—it has no way to bootstrap except by some other, more foundational epistemic factors.
I don’t particularly identify as an inductivist , and I don’t think that the critrat version of inductivism, is what self identified inductivists believe in.
> i said no, it’s a conclusion, and offered an explanation (which you’ve ignored)
Conclusion from what? The conclusion will be based on some deeper assumption.
> you haven’t—as far as I can see—actually been interested in figuring out a) what anything else things
What anyone else thinks?
I am very familiar with popular CR since I used to hang out in the same forums as Curi. I’ve also read some of the great man’s works.
Anecdote time: after a long discussion about the existence of any form of induction on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn’t suitable for science.
But of course the true believing critrats weren’t convinced by Word of God.
> Secondly, I didn’t say the PNC was actually false.
> I don’t think there’s any point talking about this, then.
The point is that every claim in general depends on assumptions. So, in particular, the critrats don’t have a disproof of induction that floats free of assumptions.
I am uncomfortable with this practice. I think I am banned from participating in curi’s forum now anyway due to my comments here so it doesn’t affect me personally but it is a little strange to have this list with people’s personal information up.
> Anecdote time: after a long discussion about the existence of any form of induction on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn’t suitable for science.
> What anyone else thinks? I am very familiar with popular CR since I used to hang out in the same forums as Curi. I’ve also read some of the great man’s works.
ps: I realize that’s a lot of text to dump all at once, sorry about that. Maybe it’s a good idea to focus on one thing?
It might be a good idea to divide comments up so that they can be voted on separately and so that the replies can be branch off under them too, but it’s not important!
I’m happy to do this. On the one hand I don’t like that lots of replies creates more pressure to reply to everything, but I think we’ll probably be fine focusing on the stuff we find more important if we don’t mind dropping some loose ends. If they become relevant we can come back to them.
Aye, it’s kind of a definition of it, a way of seeing what it would have to mean. I don’t know if I could advocate any other definitions than the one outlined here.
Maximizing expected utility does these things in a very simple way to the exact extent that it should.
Hmm...
My first impulse was to say “bayes is not a method. It is a low-level language for epistemology. Methods emerge higher in the abstraction stack. Its fandom uses just whatever methods work.”
But maybe there could be something reasonably describable as a bayesian method. But I don’t work enough with non-bayesian philosophers to be able to immediately know how we are different, well enough to narrow in on it.
Is the bayesian method… trying always to understand things on the math/decision theory level? Confidently: Deutsch is not doing that. His understanding of AGI is utterly anthropomorphic, and is not informed by decision theory; it is not informed by the study of reliable, deeply comprehensible essences of things, and so it will not come into play very much in the adjacent discipline of engineering.
I guess… to get closer to understanding what the bayesian methodology might be uniquely good at… I’ll have to reexamine some original reasoning that I have done with it… so… understanding things in terms of decision lets me identify concepts that are basic and necessary for consistent decisionmaking (paraphrasings of consistent decisionmaking: for being free and agentic and not self-defeating or easily tricked). Which let me narrow in on just the aspects of the hard problem of consciousness that must be, in some sense, real. Which led me to conclusions like “fish aren’t important moral subjects, because even though they’re clearly capable of suffering, experiences have magnitude, and theirs must be negligible; for it to be other than negligible, something astronomically unlikely would have needed to have happened, so it basically must be negligible.” Which means I get to be more of a pescatarian than a vegan, which is a very immediately useful realization to have arrived at.
If that argument doesn’t make sense to you, well that might mean that we’ve just identified something that bayesian/decision theoretic reasoning can do, that can’t be done without it.
I would be interested to know how Mirror Chamber strikes you though, I haven’t tried to get non-bayesians to read it.
I don’t know how you’d describe Bayesianism atm but I’ll list some things I think are important context or major differences. I might put some things in quotes as a way to be casual but LMK if any part is not specific enough or ambiguous or whatever.
both CR and Bayesianism answer Qs about knowledge and judging knowledge; they’re incompatible b/c they make incompatible claims about the world but overlap.
CR says that truth is objective
explanations are the foundation of knowledge, and it’s from explanations that we gain predictive power
no knowledge is derived from the past; that’s an illusion b/c we’re already using pre-existing explanations as foundations
new knowledge can be created to explain things about the past we didn’t understand, but that’s new knowledge in the same way the original explanation was once new knowledge
e.g. axial tilt theory of seasons; no amount of past experience helped understand what’s *really* happening, someone had to make a conjecture in terms of geometry (and maybe Newtonian physics too)
when we have two explanations for a single phenomenon they’re either the same, both wrong, or one is “right”
“right” is different from “true”—this is where fallibilism comes in (note: I don’t think you can talk about CR without talking about fallibilism; broadly they’re synonyms)
taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense and we’ll discover more and more better explanations about the universe to explain it
this includes ~everything: anything we want to understand requires an explanation: quantum physics, knowledge creation, computer science, AGI, how minds work (which is actually the same general problem as AGI) - including human minds, economics, why people choose particular ice-cream flavors
DD suggests in *the beginning of infinity* that we should rename scientific theories scientific “misconceptions” because that’s more accurate
anyone can be mistaken on anything
there are rational ways to choose *exactly one* explanation (or zero if none hold up)
if we have a reason that some explanation is false, then there is no amount of “support” which makes it less likely to be false. (this is what is meant by ‘criticism’). no objectively true thing has an objectively true reason that it’s false.
so we should believe only those things for which there are no unanswered criticisms
this is why some CR ppl are insistent on finishing and concluding discussions—if two people disagree then one must have knowledge of why the other is wrong, or they’re both wrong (or both don’t know enough, etc)
to refuse to finish a discussion is either to deny the counterparty the opportunity to correct an error (which was evidently important enough to start the discussion about) - this is anti-knowledge and irrational - *or* to deny that you have an error (or that the error can be corrected), which is also anti-knowledge and irrational.
there are maybe things to discuss about practicality but even if there are good reasons to drop conversations for practical purposes sometimes, it doesn’t explain why it happens so much.
that was less focused on differences/incompatibilities than I had in mind originally but hopefully it gives you some ideas.
Unless it’s maths/decision theory related, that’s right. CR/Fallibilism is more about reasoning; like an internal contradiction means an idea is wrong; there’s 0 probability it’s correct. Maybe someone alters the idea so it doesn’t have a contradiction, which means it needs to be judged again.
I don’t think that’s the case. I think his understanding/theories of AGI don’t have anything to do with humans (besides that we’d create one—excluding aliens showing up or whatever). There’s a separate explanation for why AGI isn’t going to arise randomly e.g. out of a genetic ML algorithm.
Well, we don’t agree about fish, but whether it makes sense or not depends on your meaning. If you mean that I understand your reasoning, I think I do. If you mean that I think the reasoning is okay, maybe from your principles but I don’t think it’s *right*. Like I think there are issues with it such that the explanation and conclusion shouldn’t be used.
> ps: I realize that’s a lot of text to dump all at once, sorry about that. Maybe it’s a good idea to focus on one thing?
Understanding what you mean by “right”, I think I might agree; it’s not complete, it’s not especially close to certainty.
It’s difficult to apply the mirror chamber’s reduction of anthropic measure across different species (it was only necessitated for comparing over a pair of very similar experiences), and I’m not sure the biomass difference between fishbrain and humanbrain is such that anthropics can be used either, meaning… well, we can conclude, from the amount of rock in the universe, and the tiny amount of humans in the universe, and our being humans instead of rock, that it is astronomically unlikely that anthropic measure binds in significant quantities to rock. If it did, we would almost certainly have woken up in a different sort of place. But for fish, perhaps the numbers are not large enough for us to draw a similar conclusion. (Again, I’m realizing the validity of that sort of argument doesn’t clearly follow from the mirror chamber, though I think it is suggested by it)
I think my real reasons for going with pescatarianism are being fed into from other sources, here. It’s not just the anthropic measure thing. Also receiving a strong push from my friends in neuroscience who claim that the neurology of fish is just way too simple to be given a lot of experiential weight, in the same way that a thermostat is too simple for us to think anything is suffering when … [reexamines the assumptions]...
Hmm. I no longer believe their reasoning there (I should talk to them again I guess). I have seen too many bastards say “but that’s merely a machine so it couldn’t have conscious experience” of systems that probably would have conscious experience, and here they are saying that a biological reinforcement learning system that observably learns from painful experience could not truly suffer. It’s not clear that there’s a difference between that and suffering. I think fish suffer. The quantity must be small, but this is not enough to conclude that it’s negligible.
(… qualia == the class of observations upon which indexical claims can be conditioned?? (I think I’m going to have to write this up properly and do a post))
On the note of *qualia* (providing in case it helps)
DD says this in BoI when he first uses the word:
and under “terminology” at the end of the chapter:
This is in Ch7 which is about AGI.
I’d say bayesian epistemology’s stance is that there is one completely perfect way of understanding reality, but that it’s perhaps provably unattainable to finite things.
You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
(For some mysteries, like anthropic measure binding [the hard problem of consciousness] it seems provably impossible to ever test our models. We each have exactly one observation about what kinds of matter attract subjectivity, we can’t share data (solipsistic doubt), we can never get more data (to “enter” a new locus of consciousness we would have to leave our old one, losing access to the observed experience that we had), but the question still matters a lot for measuring the quantity of experience over different brain architectures, so we need to have theories, even though objective truth can’t be attained.)
It’s good to believe that there’s an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it, and dialethic epistemologies like Popper’s can’t give you the framework you need to operate gracefully without ever getting objective truth.
We sometimes talk about Aumann’s agreement theorem, the claim that any two bayesians who, roughly speaking, talk for long enough, will eventually come to agree about everything. This might serve the same social purpose as an advocacy of objectivity. Though I do not know whether it’s really true of humans that they will converge if they talk long enough, I still hold out faith that it will be, one day, if we keep trying, if we try to get better at it.
Which means “wrong” is no longer a meaningful word. Do you think you can operate without having a word like “wrong”? Do you think you can operate without that concept? I think DD sometimes plays inflammatory word games, defining things poorly on purpose.
If you point a gun at a bayesian’s head and force them to make a bet against a single outcome, they’re always supposed to be able to. It’s just, there are often reasons not to.
Why believe anything! There’s a sense in which a bayesian doesn’t have any beliefs, especially beliefs with unanswered criticisms. The great thing is that you don’t need to have beliefs to methodically do your best to optimize expected utility. You can operate well amid uncertainty. For instance, I can recommend that you take vitamin D supplements just in case the Joe Rogan interview the youtube algorithm served me yesterday (claiming that vitamin D is crucial for the respiratory immune system and that covid severity rates differ enormously depending on it) was true. I don’t need to confirm that it’s true by trying to assess primary evidence; I don’t need to, in every sense, “believe”, because vitamin D is cheap and you should probably be taking it anyway for other reasons, and I have other stuff that I need to be reading right now.
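That vitamin D reasoning can be sketched as an expected-utility calculation. All the numbers below are hypothetical, chosen only to show the structure of deciding without settling the factual question:

```python
# Sketch: choosing an action without "believing" the underlying claim.
# The credence and utility values are hypothetical illustration numbers.

def expected_utility(action, p_claim_true, utilities):
    """Probability-weighted utility of an action over claim-true / claim-false."""
    u_if_true, u_if_false = utilities[action]
    return p_claim_true * u_if_true + (1 - p_claim_true) * u_if_false

utilities = {
    # (utility if the vitamin D claim is true, utility if it is false)
    "take": (10.0, -0.1),  # large benefit if true, tiny cost otherwise
    "skip": (-10.0, 0.0),  # large loss if true, nothing otherwise
}

p = 0.2  # low credence in an unverified podcast claim
best = max(utilities, key=lambda a: expected_utility(a, p, utilities))
print(best)  # "take" wins even at low credence, because its downside is small
```

The asymmetry in the utilities, not any belief about the claim itself, is what drives the decision.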
In conclusion I don’t see many substantial epistemological differences. I’ve heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly: that values must, from an engineering-feasibility perspective, be learned; that it would be in some way hard to make an AGI that wouldn’t need to be “raised” into a value system by a big open society; that a thing with ‘wrong values’ couldn’t participate in an open society [and open societies will be stronger] and so won’t pose a major threat), but it’s not obvious to me how those claims bear on bayesian epistemology.
Maybe something to do with the ease with which people who like decision theory can conceive of and describe very fast-growing non-human-aligned agents? While DD would claim that decision theory’s superintelligences are unrealistic to the point of inapplicability, and that a real process of making one AGI would tend to take a long time and involve a lot of human intervention?
A Bayesian does have beliefs about the probability of various outcomes even if there are unanswered criticisms involved. Generally, the idea is that people examine criticisms more because they believe that the opportunity cost to answer the criticism is worth it and not just because they are unanswered.
Yes, knowledge creation is an unending, iterative process. It could only end if we come to the big objective truth, but that can’t happen (the argument for why is in BoI—the beginning of infinity).
I think this is true of any two *rational* people with sufficient knowledge, and it’s rationality not bayesians that’s important. If two partially *irrational* bayesians talk, then there’s no reason to think they’d reach agreement on ~everything.
There is a subtle case with regards to creative thought, though: take two people who agree on ~everything. One of them has an idea, they now don’t agree on ~everything (but can get back to that state by talking more).
WRT “sufficient knowledge”: the two ppl need methods of discussing which are rational, and rational ways to resolve disagreements and impasse chains. they also need attitudes about solving problems. namely that any problem they run into in the discussion is able to be solved and that one or both of them can come up with ways to deal with *any* problem when it arises.
If it were meaningless I wouldn’t have had to add “in an absolute sense”. Just because an explanation is wrong in an *absolute* sense (i.e. it doesn’t perfectly match reality) does not mean it’s not *useful*. Fallibilism generally says it’s okay to believe things that are false (which all explanations are, in some sense); however, there are conditions on those times, like there being no known unanswered criticisms and no alternatives.
Since BoI there has been more work on this problem and the reasoning around when to call something “true” (practically speaking) has improved—I think. Particularly:
Knowledge exists relative to *problems*
Whether knowledge applies or is correct or not can be evaluated rationally because we have *goals* (sometimes these goals are not specific enough, and there are generic ways of making your goals arbitrarily specific)
Roughly: true things are explanations/ideas which solve your problem, have no known unanswered criticism (i.e. are not refuted), and no alternatives which have no known unanswered criticisms
something is wrong if the conjecture that it solves the problem is refuted (and that refutation is unanswered)
note: a criticism of an idea is itself an idea, so can be criticised (i.e. the first criticism is refuted by a second criticism) - this can be recursive and potentially go on forever (tho we know ways to make sure they don’t).
I think he’s in a tough spot to try and explain complex, subtle relationships in epistemology using a language where the words and grammar have been developed, in part, to be compatible with previous, incorrect epistemologies.
I don’t think he defines things poorly (at least typically); and would acknowledge an incomplete/fuzzy definition if he provided one. (Note: one counterexample is enough to refute this claim I’m making)
I think you misunderstand me.
let’s say you wanted a pet, we need to make a conjecture about what to buy you that will make you happy (hopefully without developing regret later). the possible set of pets to start with are all the things that anyone has ever called a pet.
with something like this there will be lots of other goals, background goals, which we need to satisfy but don’t normally list. An example is that the pet doesn’t kill you, so we remove snakes, elephants, and other things that might hurt you. there are other background goals like life of the pet or ongoing cost; adopting you a cat with operable cancer isn’t a good solution.
there are maybe other practical goals too, like it should be an animal (no pet rocks), should be fluffy (so no fish, etc), shouldn’t cost more than $100, and yearly cost is under $1000 (excluding medical but you get health insurance for that).
maybe we do this sort of refinement a bit more and get a list like: cat, dog, rabbit, mouse
you might be *happy* with any of them, but can you be *more happy* with one than any other; is there a *best* pet? **note: this is not an optimisation problem** b/c we’re not turning every solution into a single unit (e.g. your ‘happiness index’); we’re providing *decisive reasons* for why an option should or shouldn’t be included. We’ve also been using this term “happy” but it’s more than just that, it’s got other important things in there—the important thing, though, is that it’s your *preference* and it matches that (i.e. each of the goals we introduce are in fact goals of yours; put another way: the conditions we introduce correspond directly and accurately to a goal)
this is the sort of case where there’s no gun to anyone’s head, but we can continue to refine down to a list of exactly **one** option (or zero). let’s say you wanted an animal you could easily play with → then rabbit and mouse are excluded, so we have options: cat, dog. If you’d prefer an animal that wasn’t a predator—both cat and dog are excluded and we get to zero (so we need to come up with new options or remove a goal). If instead you wanted a pet that you could easily train to use a litter tray, well, we can exclude a dog so you’re down to one. Let’s say the litter tray is the condition you imposed.
What happens if I remember ferrets can be pets and I suggest that? well now we need a *new* goal to find which of the cat or ferret you’d prefer.
Note: for most things we don’t go to this level of detail b/c we don’t need to; like if you have multiple apps to choose from that satisfy all your goals you can just choose one. If you find out a reason it’s not good, then you’ve added a new goal (if you weren’t originally mistaken, that is) and can go back to the list of other options.
Note 2: The method and framework I’ve just used wrt the pet problem is something called yes/no philosophy and has been developed by Elliot Temple over the past ~10+ years. Here are some links:
Argument · Yes or No Philosophy, Curiosity – Rejecting Gradations of Certainty, Curiosity – Critical Rationalism Epistemology Explanations, Curiosity – Critical Preferences and Strong Arguments, Curiosity – Rationally Resolving Conflicts of Ideas, Curiosity – Explaining Popper on Fallible Scientific Knowledge, Curiosity – Yes or No Philosophy Discussion with Andrew Crawshaw
Note 3: During the link-finding exercise I found this: “All ideas are either true or false and should be judged as refuted or non-refuted and not given any other status – see yes no philosophy.” (credit: Alan Forrester) I think this is a good way to look at it; *technically and epistemically speaking:* true/false is not a judgement we can make, but refuted/non-refuted *is*. we use refuted/non-refuted as a proxy for false/true when making decisions, because (as fallible beings) we cannot do any better than that.
I’m curious about how a bayesian would tackle that problem. Do you just stop somewhere and say “the cat has a higher probability so we’ll go with that”? Do you introduce goals like I did to eliminate options? Is the elimination of those options equivalent to something like reducing the probability of those options being true to near-zero? (or absolute zero?) Can a bayesian use this method to eliminate options without doing probability stuff? If a bayesian *can*, what if I conjecture that it’s possible to *always* do it for *all* problems? If that’s the case there would be a way to decisively reach a single answer—so no need for probability. (There’s always the edge case that there was a mistake somewhere, but I don’t think there’s a meaningful answer to problems like “P(a mistake in a particular chain of reasoning)” or “P(the impact of a mistake is that the solution we came to changes)”—note: those P(__) statements are within a well defined context, like an exact and particular chain of reasoning/explanation.)
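For what the “reducing the probability to absolute zero” reading would look like mechanically: a decisive goal can be modeled as a hard 0/1 likelihood, after which Bayes’ rule renormalizes the survivors. This is only a sketch of that one interpretation, with invented uniform priors; it isn’t a claim about what any particular bayesian would actually do.

```python
# Decisive elimination modeled as a Bayesian update with a hard
# (0/1) likelihood: excluded options get likelihood 0, the rest
# get 1, and Bayes' rule renormalizes over the survivors.

priors = {"cat": 0.25, "dog": 0.25, "rabbit": 0.25, "mouse": 0.25}

def update(priors, likelihood):
    """Bayes' rule: posterior proportional to prior * likelihood."""
    unnorm = {h: p * likelihood[h] for h, p in priors.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# "Easy to play with" as a decisive criterion.
posterior = update(priors, {"cat": 1, "dog": 1, "rabbit": 0, "mouse": 0})
print(posterior)  # {'cat': 0.5, 'dog': 0.5, 'rabbit': 0.0, 'mouse': 0.0}
```

With only hard 0/1 likelihoods, the arithmetic degenerates into exactly the set-filtering of the pet example — which is one way of putting the question of whether the probability machinery is doing any extra work here.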
So we can make decisions.
Yes you do—you need a theory of expected utility; how to measure it, predict it, manipulate it, etc. You also need a theory of how to use things (b/c my expected utility of amazing tech I don’t know how to use is 0). You need to believe these theories are true, otherwise you have no way to calculate a meaningful value for expected utility!
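To spell out the “expected utility of tech I don’t know how to use is 0” point as arithmetic (numbers invented; this is just the standard probability-weighted sum):

```python
# Expected utility as the probability-weighted sum over outcomes.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Amazing tech you know how to use: high chance of realizing the benefit.
print(expected_utility([(0.9, 100), (0.1, 0)]))  # 90.0

# Same tech, but no theory of how to use it: the benefit never
# materializes, so the expectation collapses to 0.
print(expected_utility([(0.0, 100), (1.0, 0)]))  # 0.0
```

The point of the comment stands out in the second call: the utility term is unchanged, but without a theory of use the probability attached to it is 0, so the calculation is only as meaningful as the theories feeding it.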
Yes, I additionally claim we can operate **decisively**.
It matters more for big things, like SENS and MIRI. Both are working on things other than the key problems; there is no good reason to think they’ll make significant progress b/c other, more foundational problems remain unsolved.
I agree practically a lot of decisions come out the same.
I don’t know why they would be risible—nobody has a good reason why his ideas are wrong, to my knowledge. They refute a lot of the fear-mongering that happens about AGI. They provide reasons why a paperclip machine isn’t going to turn all matter into paperclips. They’re important because they refute big parts of theories from thinkers like Bostrom. That matters because time, money, and effort are being spent taking Bostrom’s theories seriously, even though we have good reasons to think they’re not true. That time, money, and effort could be spent on more important problems, like figuring out how creativity works: a problem whose solution would actually lead to the creation of an AGI.
Calling unanswered criticisms *risible* seems irrational to me. Sure, unexpected answers could be funny the first time you hear them (though this just sounds like ppl being mean, not like it was the punchline to some untold joke), but if someone makes a serious point and you dismiss it because you think it’s silly, then either you’re being irrational or you have a good, robust reason it’s not true.
He doesn’t claim this at all. From memory the full argument is in Ch7 of BoI (though has dependencies on some/all of the content in the first 6 chapters, and some subtleties are elaborated on later in the book). He expressly deals with the case where an AGI can run like 20,000x faster than a human (i.e. arbitrarily fast). He also doesn’t presume it needs to be raised like a human child or take the same resources/attention/etc.
Have you read much of BoI?
I think you’d probably consider the way we approach problems of this scale to be popperian in character. We open ourselves to lots of claims and talk about them and criticize them. We try to work them into a complete picture of all of the options and then pick the best one, according to a metric that is not quite utility, because the desire for a pet is unlikely to be an intrinsic one.
The gun to our head in this situation is our mortality or aging that will eventually close the window of opportunity to enjoying having a pet.
I’m not sure how to relate to this as an analytical epistemological method, though. Most of the work, for me, would involve sitting with my desire for a pet and interrogating it, knowing that it isn’t at all immediately clear why I want one. I would try to see if it was a malformed expression of hunting or farming instincts. If I found that it was, the desire would dissipate, the impulse would recognize that it wouldn’t get me closer to the thing that it wants.
Barring that, I would be inclined to focus on dogs, because I know that no other companion animal has evolved to live alongside humans and enjoy it in the same way that dogs have. I’m not sure where that resolution came from.
What I’m getting at is that most of the interesting parts of this problem are inarticulable. Looking for opportunities to apply analytic methods or articulable epistemic processes doesn’t seem interesting to me at all.
A reasonable person’s approach to solving most problems right now is to ask a huge machine learning system that nobody understands to recommend some articles from an incomprehensibly huge set of people.
Real world examples of decisionmaking generally aren’t solvable, or reducible to optimal methods.
Do I believe in mathematics? I can question the applicability of a mathematical model to a situation.
It’s probably worth mentioning that even mathematical claims aren’t beyond doubt, as mathematical claims can be arrived at in error (cosmic rays flipping bits) and it’s important that we’re able to notice and revert our position when that happens.
My impression from observed usage was that “risible” meant “spurious, inspiring anger”. Finding that the dictionary definition of a word disagrees with natural impressions of it is a very common experience for me. I could just stop using words I’ve never explicitly looked up the meaning of, but that doesn’t seem ideal. I’m starting to wonder if dictionaries are the problem. Maybe there aren’t supposed to be dictionaries. Maybe there’s something very unnatural about them and they’re preventing linguistic evolution that would have been useful. OTOH, there is also something very unnatural about a single language being spoken by like a billion humans or whatever it is, and English’s unnaturally large vocabulary should probably be celebrated.
To clarify, it makes me angry to see someone assuming moral realism with such confidence that they might declare the most important industrial safety project in the history of humanity to be a foolish waste of time. The claim that there could be a single objectively correct human morality is not compatible with anthropology, human history, or the present political reality. It could still be true, but there is not sufficient reason to act as if it is definitely true. There should be more humility here than there is.
My first impression of a person who then goes on to claim that there is an objective, not just human, but universal morality that could bring unaligned machines into harmony with humanity is that they are lying to sell books. This is probably not the case, lying about that would be very stupid, but it’s a hypothesis that I have to take seriously. When a person says something like that, it has the air of a preacher who is telling a nice lie that they think will endear them to people and bring together a pleasant congregation around that unifying myth of the objectively correct universal morality, and maybe it will, but they need to find a different myth, because this one will endanger everything they value.
I haven’t read BoI. I’ve been thinking about it.
Most fraught ideas are mutually refuted... A can be refuted assuming B, and B can be refuted assuming A.
It’s not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.
If a theory doesn’t offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.
We can also come up with criticisms that are somewhat independent of where they came from: a new criticism may be linked to idea A, but idea A can be wrong in some way without implying the criticism was also wrong. When this happens it doesn’t make either theory A or B more likely; it just means there are two criticisms, not one.
And it’s always possible both are wrong, anyway.
It’s a bad thing if ideas can’t be criticised at all, but it’s also a bad thing if the relationship of mutual criticism is cyclic, if it doesn’t have an obvious foundation or crux.
Do you have a concrete example?
Kind of, but “everything is wrong” is vulgar scepticism.
Do you have an example? I can’t think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they’re contrived)
I think some of the criticisms of inductivism Popper offered are like this. Even if Popper was wrong about big chunks of critical rationalism, it wouldn’t necessarily invalidate the criticisms. Example: A proof of the impossibility of inductive probability.
(Note: I don’t think Popper was wrong but I’m also not sure it’s necessary to discuss that now if we disagree; just wanted to mention)
I’m not suggesting anything I said was a reason to think both theories wrong; I listed it because it was a possibility I didn’t mention in the other paragraphs, and it’s a bit of a trivial case for this stuff (i.e. if we come up with a reason both are wrong, and we can’t answer that criticism, then we don’t have to worry about them anymore).
The other options need to be acceptable to both parties!
I don’t see how that is an example, principally because it seems wrong to me.
Sure, or the parties need a rational method of resolving a disagreement on acceptability. I’m not sure why that’s particularly relevant, though.
You didn’t quote an example—I’m unsure if you meant to quote a different part?
In any case, what you’ve quoted isn’t an example, and you don’t explain why it seems wrong or what about it is an issue. Do you mean that cases exist where there is an infinite regress and it’s not soluble with other methods?
I’m also not sure why this is particularly relevant.
Are we still talking about the below?
I did give you an example (one of Popper’s arguments against inductivism).
A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction. A criticism of that person’s criticism does not necessarily relate to that person’s epistemology, and vice versa.
The relevance is that CR can’t guarantee that any given dispute is resolvable.
But I don’t count it as an example, since I don’t regard it as correct, let alone as a valid argument with the further property of floating free of questionable background assumptions.
In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So “induction must be based on bivalent logic” is an assumption.
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it’s an explanatory conclusion.
I’m not convinced we can get anywhere productive continuing this discussion. If you don’t think contradictions are bad, it feels like there’s going to be a lot of work finding common ground.
This is irrational. Examples of relationships do not depend on whether the example is real or not. All that’s required is that the relationship is clear, whether each of us judges the idea itself as true or not doesn’t matter in this case. We don’t need to argue this point anyway, since you provided an example:
Cool, so do you see how the argument you made is separate from whether inductivism is right or not?
Your argument proposes a criticism of Popper’s argument. The criticism is your conjecture that Popper made a mistake. Your criticism doesn’t rely on whether inductivism is right or not, just whether it’s consistent or not (and consistent according to some principles you hint at). Similarly, if Popper did make a mistake with that argument, it doesn’t mean that CR is wrong, or that Inductivism is wrong; it just means Popper’s criticism was wrong.
Curiously, you say:
Do you count yourself a Bayesian or Inductivist? What probability did you assign to it being correct? And what probability do you generally assign to a false-positive result when you evaluate the correctness of examples like this?
Neither. You don’t have to treat epistemology as a religion.
Firstly, epistemology goes first: you don’t know anything about reality without having the means to acquire knowledge. Secondly, I didn’t say the PNC was actually false.
So there is a relationship between the Miller and Popper paper’s conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn’t depend on assumptions.
No, it proposes a criticism of your argument … the criticism that there is a contradiction between your claim that the paper makes no assumptions and the fact that it evidently does.
I didn’t claim that paper made no assumptions. I claimed that refuting that argument^[1] would not refute CR, and vice versa. Please review the thread, I think there’s been some significant miscommunications. If something’s unclear to you, you can quote it to point it out.
[1]: for clarity, the argument in Q: A proof of the impossibility of inductive probability.
Inductivism is not compatible with this—it has no way to bootstrap except by some other, more foundational epistemic factors.
Also, you didn’t really respond to my point or the chain of discussion-logic before that. I said an internal contradiction would be a way to refute an idea (as a second example, when you asked for examples). You said contradictions being bad is an assumption. I said no, it’s a conclusion, and offered an explanation (which you’ve ignored). In fact, through this discussion you haven’t—as far as I can see—actually been interested in figuring out a) what anyone else thinks or b) where and what you might be wrong about.
I don’t think there’s any point talking about this, then. We haven’t had any meaningful discussion about it and I don’t see why we would.
Theoretically, if CR consists of a set of claims, then refuting one claim wouldn’t refute the rest. In practice, critrats are dogmatically wedded to the non-existence of any form of induction.
I don’t particularly identify as an inductivist, and I don’t think that the critrat version of inductivism is what self-identified inductivists believe in.
Conclusion from what? The conclusion will be based on some deeper assumption.
What anyone else thinks? I am very familiar with popular CR, since I used to hang out in the same forums as Curi. I’ve also read some of the great man’s works.
Anecdote time: after a long discussion about the existence of any form of induction, on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn’t suitable for science.
But of course the true believing critrats weren’t convinced by Word of God.
The point is that every claim in general depends on assumptions. So, in particular, the critrats don’t have a disproof of induction that floats free of assumptions.
I just discovered he keeps a wall of shame for people who left his forum:
http://curi.us/2215-list-of-fallible-ideas-evaders
Are you in this wall?
I am uncomfortable with this practice. I think I am banned from participating in curi’s forum now anyway due to my comments here, so it doesn’t affect me personally, but it is a little strange to have this list up with people’s personal information.
Source?
Which forums? Under what name?
It might be a good idea to divide comments up so that they can be voted on separately and so that the replies can be branch off under them too, but it’s not important!
I’ll reply to the rest tomorrow I think
I’m happy to do this. On the one hand I don’t like that lots of replies creates more pressure to reply to everything, but I think we’ll probably be fine focusing on the stuff we find more important if we don’t mind dropping some loose ends. If they become relevant we can come back to them.
Will the Mirror Chamber explain what “anthropic measure” (or the anthropic measure function) is?
I ended up clicking through to this and I guess that the mirror chamber post is important but not sure if I should read something else first.
I started reading, and it’s curious enough (and short enough) I’m willing to read the rest, but wanted to ask the above first.
Aye, it’s kind of a definition of it, a way of seeing what it would have to mean. I don’t know if I could advocate any other definitions than the one outlined here.