Open thread, Jul. 25 - Jul. 31, 2016
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options “Notify me of new top level comments on this article” and “
In case you missed them, I recently published:
http://lesswrong.com/lw/np5/adversity_to_success/
http://lesswrong.com/lw/nsf/should_you_change_where_you_live_also_a_worked/
http://lesswrong.com/lw/nsn/the_problem_tm_analyse_a_conversation/
These posts collect bot-voted down-spam from our resident troll Eugine (who has never really said why he bothers), which pushes them off the discussion list. Spending time solving this problem is less important to me than posting more, so it might be around for a while. I am sorry if anyone missed these posts. But trolls gonna troll.
Feel free to check them out! LW needs content. I'm trying to solve that problem right now while ignoring the downvoting, because I know you love me and would never actually downvote my writing. (Or you would actually tell me why, as everyone else already does when they have a problem.)
Wow, that is a lot of downvotes on neutral-to-good comments. Your posts aren’t great, but they don’t seem like −10 territory, either.
I thought we had something for this now?
Nope. No current solution. I have no doubt about the cause, because real LessWrongers write comments along with their downvotes.
This has gotten bad enough that it needs to be dealt with. I have changed my mind; removing downvotes entirely seems like the best way to handle this in the short term.
I’ve written an essay criticizing the claim that computational complexity means a Singularity is impossible because of bad asymptotics: http://www.gwern.net/Complexity%20vs%20AI
One screwup that you didn’t touch on was the 70%. 70% is the square root of 1⁄2, not 2. If it’s 2x as smart as its designers and the complexity class of smartness is square, then this new AI will be able to make one 40% smarter than it is, not 30% less smart. Imagine if the AI had been 9 times smarter than its designers… would its next generation have been 1⁄3 as smart as it started? It’s completely upside-down.
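Spelling out that arithmetic (assuming, as the comment reads it, that "square complexity" means each generation's gain scales as the square root of the intelligence ratio):

```latex
% Sketch of the arithmetic under the "square complexity" reading above.
\sqrt{2} \approx 1.41
  \quad\text{(a 2x-smarter AI builds one about 40\% smarter than itself)}
\sqrt{\tfrac{1}{2}} \approx 0.71
  \quad\text{(70\% is the square root of 1/2, i.e.\ a roughly 30\% \emph{decrease})}
\sqrt{\tfrac{1}{9}} = \tfrac{1}{3}, \qquad \sqrt{9} = 3
  \quad\text{(the 9x case: 1/3 as smart on the inverted reading, 3x smarter on the correct one)}
```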
Two ‘Crawlviati’ attributions are inside the quotes.
You didn’t really call out certain objections as stronger than others. I would be surprised if giving up determinism was half as useful as giving up optimality. And changing the problem is huge. I think that, though this would not impact the actual strength of the argument, calling certain items out after the list before the next section would give it a rhetorical kick.
I sort of did touch on Naam’s screwup there; it doesn’t make sense to even talk about the next generation of AI being dumber, whether or not the ‘complexity’ is square, square root, log, or exponential or whether he calculated 70% right.
whups
I'm not sure which objections are stronger than others. Nondeterminism is probably less helpful to an AI than approximations, but are approximations more helpful than redefining problems? Computronium brute force? The possibility that P=NP and an AI can find a non-galactic algorithm for that? Weighing the stronger objections would require a much more precise total model of the constant factors, asymptotics, computational resources, and possibilities for expansion.
I think redefining problems and approximation are both huge. I didn’t mean a complete ranking, just fleshing out and giving more life to certain elements after the list is done. Pointing out how big a deal they are. These are important failures in the argument. In a way it comes across as a kind of reverse Gish Gallop—you have a bunch of really really strong arguments, and by putting them in a list the impression is weakened.
Still reading. Minor nitpick: for point 2 you don't want to say NP (since P is in NP). It is the NP-hard problems that people would say can't be solved except for small instances (which, as you point out, is not a reasonable assumption).
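For reference, the standard containments behind this nitpick (textbook complexity-theory definitions, nothing specific to the essay):

```latex
% P is contained in NP, so "NP problems are intractable" would sweep in
% every easy problem too; the intended class is NP-hard.
\mathrm{P} \subseteq \mathrm{NP}, \qquad
\mathrm{NP\text{-}complete} = \mathrm{NP} \cap \mathrm{NP\text{-}hard}
```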
Thanks for helping me out in some tough times, LessWrong crew. Please keep supporting one another and being positive rather than negative.
Stay safe, friend.
X-risk prevention groups are disproportionately concentrated in San Francisco and around London. They are more concentrated than the possible sources of risk. So in the event of a devastating earthquake in SF, our ability to prevent x-risks may be greatly reduced.
I slightly edited the opening formula to reflect the fact that one can no longer post to Main. Also added the instruction to unflag the submission options.
Why the internet of things has a lot of possibilities for going wrong
The Internet of Things is something new for domestic utilities, but industrial automation has linked physical plants to digital networks for decades. DoE experimental software attacks on the electric grid, after all, date back to the end of the '90s. Stuxnet, a few years later, showed us that those kinds of attacks could be weaponized.
IoT security is news because it has reached public awareness, but let's not be fooled: we have been in the cyber warfare of things for years.
Just a note that someone just went through and apparently downvoted as many of my comments as they could, without regard for content.
LW spoilt me. I now watch The Mickey Mouse Club as a story of an all-powerful, all-knowing AI and knowledge sent back from the future...
Yeah, the involuntary (?) creepiness factor of some series shot up for me too after coming in contact with LW. On the other side, I appreciated Person of Interest so much more...
Try 'The tale of Guzman's family' in Ch. Maturin's 'Melmoth the Wanderer'. A bit like some stories by O. Henry, and with a happy ending. Oddly modern language, considering. I found it incredibly moving and clear (although I cannot see how selling one's soul to the devil is any worse for it than murdering one's entire family).
You know, it would be funny to imagine a world with knowledge being passed through some oracle by a so-called antiport, that is, you get a true, nonmalignant, relevant answer to your question about what happens at time X—but everybody forgets one and the same random thing until that moment, of unknown meta-ness :)
I’m not sure I understand the model you are proposing, can you elaborate with a concrete example? It might be interesting enough to come up with a short story about it.
I can’t really imagine information disappearing… Maybe something like, “I will answer if you taboo a certain notion until a certain time in the future, and I will not say more unless you agree. If you agree and defect, the answer will become false as soon as I can make this happen, and there will be no further transactions”?
I think I can make this work :)
A videotaped virtual meeting on effective ways of marketing EA to a broad audience.
Who are the moderators here, again? I don’t see where to find that information. It’s not on the sidebar or About page, and search doesn’t yield anything for ‘moderator’.
There are two lists of moderators, one for Discussion and one general LW list. Only difference is that Alexei doesn’t appear as a “Discussion” moderator. It’s hard to know who on the list is actually active in moderating—site policy seems to be very hands off except in truly exceptional circumstances, and most of the people listed are no longer active here.
Moderators of LessWrong
Moderators of Discussion
Edited to add: of those listed, only EY and Elo display the "Editor" tag when their user profile is displayed (under the total/monthly karma listing).
Right, but they’re not the only ones. (Check out my profile.)
You might think that the editors list would contain Elo or myself, but it doesn’t.
Fixed. Put my name on the list.
How did you fix it? If you did anything more complicated than navigating to that page as an editor, you should write it down somewhere, perhaps the wiki.
Nothing complicated. Sorry.
There’s an out of date page that’s not linked anywhere. It’s unclear to me why it isn’t automatically generated.
The wiki explains the difference between moderators and editors and has links to the lists that we know about, even if they are not correct.
In a comment on my map of biases in x-risk research I got a suggestion to add collective biases, so I tried to make a preliminary list and would welcome any suggestions:
"Some biases result from the collective behaviour of groups of people, so that each person seems to be rational but the result is irrational or suboptimal.
Well-known examples are the tragedy of the commons, the prisoner's dilemma, and other suboptimal Nash equilibria.
Different forms of natural selection may result in such group behaviour; for example, psychopaths may more easily reach higher status.
It all affects the discovery and management of x-risks. Here I will try to list some such biases in the two most important fields: the science and the management of x-risks.
Collective biases in the field of global risks research
Publication bias
Different schools of thought
It is impossible to read everything – important points are not read
Fraud
Fight for priority
No objective measure of truth in the field of x-risks
Betting on small probabilities is impossible
Different languages
Not everything is published
Paywalls
Commercialization
Arrogance of billionaires
Generations problem: young ones like novelty but lack knowledge, old ones are too conservative
Fights over funding, grants and academic positions
Memes
Authoritative scientists and their opinions
Personal animosity and rivalry
Cooperation as low-status signaling
Collective biases and obstacles in the management of risks
Earlier problems have higher priority
Politics and fight for power
Political beliefs (left and right)
Beliefs as group membership signals
Religions
Many actors problem (many countries)
Election cycles
Lies and false promises
Corruption
Wars
Communication problems in decision chains
Failure of the ability to predict the future
What are rationalist presumptions?
I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/)
I suppose we do presume things, like that we are not dreaming/under a global and permanent illusion by a demon/a brain in a vat/in a Truman show/in a matrix. And that, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalists presume and take for granted without further proof. Is there anything that is self-evident?
Sometimes a value can derive from another value (e.g. I do not value monarchy because I hold the value that all men are created equal). But either we have circular values or we take some value to be evident (we hold these truths to be self-evident, that all men are created equal). I think circular values make no sense. So my question is: what are the values that most rationalists agree to be intrinsically valuable, or self-evident, or that could be presumed to be valuable in and of themselves?
Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).
Rationalists often fail to compartmentalize, even when it would be highly useful.
Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot, also just knowing about a bias is not enough to unbias you)
Rationalists don’t even lift bro.
Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)
Rationalists often presume that others are being stupidly irrational when really the other people just have significantly different values and/or operate largely in domains where there aren’t strong reinforcement mechanisms for systematic thought or are stuck in a local maximum in an area where crossing a chasm is very costly.
Rationalists still have too much trust in scientific studies, especially psychological studies.
Many do.
http://thefutureprimaeval.net/why-we-even-lift/
If you’re referring to the calibration questions on the 2014 LW survey, rationalists were pretty well calibrated on them (though a bit overconfident). I described some analyses of the data here and here, and here’s a picture:
(where the amount of overconfidence is shown by how far the blue dots are below the black line)
I don’t know of any data on whether rationalists believe they are well calibrated on these sorts of questions—I suspect that a fair number of people would guess that they are overconfident.
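For anyone curious how a calibration curve like the one described above gets computed, here is a minimal sketch: bucket answers by stated confidence, then compare each bucket's stated confidence to its actual accuracy. (Illustrative only, not the survey's actual analysis code; the data values below are hypothetical.)

```python
# Minimal calibration-curve sketch: group answers by stated confidence and
# compare each group's claimed confidence to its observed accuracy.
from collections import defaultdict

def calibration_curve(responses, bucket_width=10):
    """responses: iterable of (stated_confidence_percent, was_correct) pairs."""
    buckets = defaultdict(list)
    for confidence, correct in responses:
        bucket = int(confidence // bucket_width) * bucket_width
        buckets[bucket].append(bool(correct))
    curve = []
    for bucket in sorted(buckets):
        answers = buckets[bucket]
        accuracy = 100.0 * sum(answers) / len(answers)
        curve.append((bucket, accuracy, len(answers)))
    return curve  # perfect calibration: accuracy tracks the stated confidence

# Hypothetical data: (stated confidence %, answered correctly?)
data = [(50, True), (50, False), (70, True), (70, False), (70, True),
        (90, True), (90, True), (90, False), (99, True), (99, False)]
for stated, actual, n in calibration_curve(data):
    print(f"stated ~{stated}%: actually {actual:.0f}% correct (n={n})")
```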
I’ll also note here that I’m planning to do some analyses of the calibration questions on the 2016 LW Diaspora Survey during the next month. I think that there are issues with some of the questions that were on the survey, so before I do any analyses I’ll note that my preferred analyses will only include 4 of the questions:
Which is heavier, a virus or a prion?
What year was the fast food chain “Dairy Queen” founded? (Within five years)
Without counting, how many keys on a standard IBM keyboard released after 1986, within ten?
What’s the diameter of a standard soccerball, in cm within 2?
For thoroughness I will also do some secondary analyses which include 7 questions, those 4 plus the following 3 (even though I think that these 3 questions have some issues which make them less good as tests of calibration):
I’m thinking of a number between one and ten, what is it?
Alexander Hamilton appears on how many distinct denominations of US Currency?
How many calories in a reese’s peanut butter cup within 20?
Why not?
they do. http://thefutureprimaeval.net/why-we-even-lift/
Requires (non-mental) effort.
I’ve lurked around a bit and akrasia seems to be a consistent problem—I’d imagine that requires mental effort.
But on topic: I doubt that lifting weights doesn't require mental effort. You still need to choose a menu, choose your lifting program, and consistently make sure you're doing things right. In fact, common failure modes of dieting are usually caused by not putting enough mental energy into proper planning.
And I’d give a special mention to the discipline required to follow on your meal plan.
Those things definitely take mental effort.
TLDR: What’s the ‘mental effort’ you’re talking about? Running calculations on $bitrate=(brainsize)* all day long?
formula not researched!
“Requires non-mental effort” does NOT imply that no mental effort is required.
The quip points out that nerds (and most local rationalists are nerds) are perfectly fine with spending a lot of mental energy on things of interest, but are generally loath to engage in intense exercise and/or tolerate physical discomfort and pain.
Others have given very practical answers, but it sounds to me like you are trying to ground your philosophy in something more concrete than practical advice, and so you might want a more ivory-tower sort of answer.
In theory, it’s best not to assign anything 100% certainty, because it’s impossible to update such a belief if it turns out not to be true. As a consequence, we don’t really have a set of absolutely stable axioms from which to derive everything else. Even “I think therefore I am” makes certain assumptions.
Worse, it's mathematically provable (via Löb's Theorem) that no system of logic can prove its own validity. It's not just that we haven't found the right axioms yet; it's that it is physically impossible for any axioms to be able to prove that they are valid. We can't just use induction to prove that induction is valid.
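For reference, the usual statement of Löb's Theorem, with □P read as "P is provable in the system" (this is the standard formulation, nothing specific to this comment):

```latex
% Löb's Theorem: if a sufficiently strong system proves that
% provability of P implies P, then it already proves P.
\vdash \Box(\Box P \rightarrow P) \rightarrow \Box P
```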
I’m not aware of this being discussed on LW before, but how can anyone function without induction? We couldn’t conclude that anything would happen again, just because it had worked a million times before. Why should I listen to my impulse to breathe, just because it seems like it’s been a good idea the past thousand times? If induction isn’t valid, then I have no reason to believe that the next breath won’t kill me instead. Why should I favor certain patterns of twitching my muscles over others, without inductive reasoning? How would I even conclude that persistent patterns in the universe like “muscles” or concepts like “twitching” existed? Without induction, we’d literally have zero knowledge of anything.
So, if you are looking for a fundamental rationalist presumption from which to build everything else, it’s induction. Once we decide to live with that, induction lets us accept fundamental mathematical truths like 1+1=2, and build up a full metaphysics and epistemology from there. This takes a lot of bootstrapping, by improving on imperfect mathematical tools, but appears possible.
(How, you ask? By listing a bunch of theorems without explaining them, like this: We can observe that simpler theories tend to be true more often, and use induction to conclude Occam's Razor. We can then mathematically formalize this into Kolmogorov complexity. If we compute the Kolmogorov complexity of all possible hypotheses, we get Solomonoff induction, which should be the theoretically optimal set of Bayesian priors. Cruder forms of induction also give us evidence that statistics is useful, and in particular that Bayes' theorem is the optimal way of updating existing beliefs. With sufficient computing power, we could theoretically perform Bayesian updates on these universal priors, for all existing evidence, and arrive at a perfectly rational set of beliefs. Developing a practical way of approximating this is left as an exercise for the reader.)
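Here is a toy sketch of that chain in code, with made-up bit counts standing in for true Kolmogorov complexity (real Solomonoff induction is uncomputable, so this only illustrates the 2^-complexity prior plus one Bayesian update; the hypotheses and numbers are invented for illustration):

```python
# Toy Occam/Solomonoff sketch: weight each hypothesis by 2^-complexity,
# then do a single Bayesian update on the observed evidence.
def solomonoff_style_posterior(hypotheses, evidence):
    """hypotheses: list of (name, complexity_bits, likelihood_fn) triples."""
    weighted = []
    for name, bits, likelihood in hypotheses:
        prior = 2.0 ** -bits
        weighted.append((name, prior * likelihood(evidence)))
    total = sum(w for _, w in weighted)
    return {name: w / total for name, w in weighted}

# Hypothetical example: two hypotheses explaining the same coin-flip record.
hypotheses = [
    ("fair coin", 5, lambda flips: 0.5 ** len(flips)),
    ("coin rigged to produce exactly this sequence", 40, lambda flips: 1.0),
]
print(solomonoff_style_posterior(hypotheses, "HTHHTH"))
# The simpler hypothesis gets nearly all the posterior weight even though
# the complicated one fits the data perfectly.
```

The simpler hypothesis winning despite a worse "fit" is the Occam/Solomonoff point in miniature.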
No one is really very happy about having to take induction as a leap of faith, but it appears to be the smallest possible assumption that allows for the development of a coherent and broadly practical philosophy. We're making a baseless assumption, but it's the smallest possible assumption, and if it turns out there was a mistake in all the proofs of Löb's theorem and there is a system of logic that can prove its own validity, I'm sure everyone would jump on that. But induction is the best we have.
This, and your links on Löb's theorem, is one of the most fear-inducing pieces of writing that I have ever read. Now I want to know if I have understood this properly. I find that the best way to do that is to first explain what I understand to myself, and then to other people. My explanation is below:
I supposed that rationalists would have some simple, intuitive and obvious presumptions as a foundation (e.g. most of the time, my sensory organs reflect the world accurately). But apparently, rationality puts its foundation on a very specific set of statements, the most powerful, wild and dangerous of them all: self-referential statements:
Rationalists presume Occam's razor because it proves itself.
Rationalists presume induction because it proves itself.
Etc.
And a collection of these self-referential statements (if you collect the right elements) would reinforce one another. Upon this collection, the whole field of rationality is built.
To the best of my understanding, this train of thought is nearly identical to the Presuppositionalism school of Reformed Christian Apologetics.
The reformed / Presbyterian understanding of the Judeo-Christian God (from here on simply referred to as God) is that God is a self-referential entity, owing to their interpretation of the famous Tetragrammaton. They believe that God is true for many reasons, but chief among them is that he attests himself to be the truth.
Now I am not making any statement about rationality or presuppositionalism, but it seems to me that there is a logical veil that we cannot get to the bottom of and it is called self-reference.
The best that we can do is to get a non-contradicting collection of self-referential statements that covers epistemology and axiology, and by that point, everyone is rational.
Very close, but not quite. (Or, at least not quite my understanding. I haven’t dug too deep.)
A reply to Presuppositionalism
I wouldn’t say that we should presume anything because it proves itself. Emotionally, we may have a general impulse to accept things because of evidence, and so it is natural to accept induction using inductive reasoning. So, that’s likely why the vast majority of people actually accept some form of induction. However, this is not self-consistent, according to Lob’s theorem. We must either accept induction without being able to make a principled argument for doing so, or we must reject it, also without a principled reason.
So, Presuppositionalism appears to be logically false, according to Lob’s theorem.
I could leave it at that, but it’s bad form to fight a straw man, and not the strongest possible form of an argument. The steel man of Presuppositionalism might instead take certain propositions as a matter of faith, and make no attempt to prove them. One might then build much more complex philosophies on top of those assumptions.
Brief detour
Before I reply to that, let me back up for a moment. I Agree Denotationally But Object Connotationally with most of the rest of what you said above. (It seems to me to be technically true, but phrased in such a way that it would be natural to draw false inferences from it.)
If I had merely posited that induction was valid, I suspect it wouldn’t have been disconcerting, even if I didn’t offer any explanation as to why we should start there and not at “I am not dreaming” or any of the examples you listed. You were happy to accept some starting place, so long as it felt reasonable. All I did was add a little rigor to the concept of a starting point.
However, by additionally pointing out the problems with asserting anything from scratch, I've weakened my own case, albeit for the larger goal of epistemic rationality. But since all useful philosophies must be based in something, they also can't prove their own validity. The falling tide lowers all ships, but doesn't change their hull draft or mast height.
So, we still can’t then say “the moon is made of blue cheese, because the moon is made of blue cheese”. If we just assume random things to be true, eventually some of them might start to contradict one another. Even if they didn’t, we’d still have made multiple random assertions when it was possible to make fewer. It’s not practically possible not to use induction, so every practical philosophy does so. However, adding additional assertions is unnecessary.
So, I agree connotationally when you say "The best that we can do is to get a non-contradicting collection of self-referential statement that covers the epistemology and axiology". This implies that all possible sets of starting points are equally valid, which I don't agree with. I'll concede that induction is equally as valid as total epistemic nihilism (the position that nothing is knowable, not to be confused with moral nihilism, which has separate problems). I can't justify accepting induction over rejecting it. However, once I accept at least 1 thing, I can use that as a basis for judging other tools and axioms.
A reply to the Presuppositionalism steel man
Let's go back to the Presuppositionalism steel man. Rather than making a self-referential statement as a proof, it merely accepted certain claims without proof. Any given Presuppositionalist must accept induction to function in the real world. If they also use that induction and accept things that induction proves, then we can claim to have a simpler philosophy. (Simpler being closer to the truth, according to Occam's razor.)
They might accept induction, but reject Occam’s razor, though. I haven’t thought through the philosophical implications of trying to reject Occam’s Razor, but at first glance it seems like it would make life impractically complicated. It doesn’t necessarily lead to being unable to conclude that one should continue breathing, since it’s always worked in the past. So, it’s not instant death, like truly rejecting induction, but I suspect that truly rejecting Occam’s razor, and completely following through with all the logical implications, would cause problems nearly as bad.
For example, overfitting might prevent drawing meaningful conclusions about how anything works, since trillions of arbitrarily complex functions can all be fit to any given data set. (For example, sums of different sine waves.) It may be possible to substitute some other principle for Occam's razor to minimize this problem, but I suspect that it would then be possible to compare that method against Occam's Razor (well, Solomonoff induction) and demonstrate that one produced more accurate results. There may already be a proof that Solomonoff induction is the best possible set of Bayesian priors, but I honestly haven't looked into it. It may merely be the best set of priors known so far. (Either way, it's only the best assuming infinite computing power is available, so the question is more academic than practical.)
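To make the sine-wave point concrete, here is a small sketch (illustrative only; the noisy-line data and the eight-term sine basis are made up) showing that a flexible enough family passes exactly through any finite data set, which is why some simplicity preference is needed to single out the underlying pattern:

```python
# Overfitting illustration: with as many sine terms as data points, the fit
# is exact no matter what the data are, so fit alone cannot tell us which
# pattern generated them.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1, 9) / 9.0                      # 8 sample points in (0, 1)
y = 2 * x + rng.normal(0, 0.1, size=x.size)    # underlying pattern: a noisy line

frequencies = np.arange(1, 9)                  # 8 free sine weights
design = np.sin(np.pi * np.outer(x, frequencies))
weights, *_ = np.linalg.lstsq(design, y, rcond=None)

print("max fit error:", np.max(np.abs(design @ weights - y)))  # ~0: exact fit
x_new = 0.5 / 9.0                              # a point between the samples
print("sine-sum prediction:", float(np.sin(np.pi * x_new * frequencies) @ weights))
print("line prediction:", 2 * x_new)           # the exact-fitting sine sum
                                               # need not track the line here
```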
General conclusions
So, it looks like this is the least bad possible philosophy, or at least quite close. It’s a shame we can’t reject epistemic nihilism, but pretty much everything else seems objectively suboptimal, even if some things may hold more aesthetic appeal or be more intuitive or easy to apply. (This is really math heavy, and almost nothing in mathematics is intuitive. So, in practice we need lots of heuristics and rules of thumb to make day to day decisions. None of this is relevant except when these more practical methods fail us, like on really fundamental questions. The claim is just that all such practical heuristics seem to work by approximating Solomonoff induction. This allows aspiring rationalists to judge potential heuristics by this measure, and predict what circumstances the heuristic will work or fail in.)
It is NOT a guarantee that we’re right about everything. It is NOT an excuse to make lots of arbitrary presuppositions in order to get the conclusions we want. Anything with any assumptions is NOT perfect, but this is just the best we have, and if we ever find something better we should switch to that and never look back.
That assumes that a rational person is one who holds beliefs because of a chain of logic. Empirically, superforecasters don't simply try to follow a chain of logic to get their beliefs. A rational person in the LW sense thus is not one who holds beliefs because of a chain of logic.
In his book, Tetlock gives a good account of how to form beliefs about the likelihood that beliefs are true.
Neither do bad forecasters, or cranks, or schizophrenics. The suppressed premise here is that superforecasters are right or reliable. But that implies that their claims are tested or testable, and that implies some basic presumptions of logic or empiricism.
Tetlock lays out a bunch of principles for coming to correct conclusions. One of the principles is being a fox that uses multiple chains instead of trying to use one correct chain that rests on a foundation from which other beliefs can be logically deduced.
The paradigm that Arielgenesis proposes is to follow a hedgehog system where a single chain of logic can be relied on because certain basic presuppositions are accepted as true.
Holding a belief because of a chain of logic has little to do with the principle of empiricism.
There are many ways to do bad forecasts. As far as the examples of cranks and schizophrenics go, those are usually hedgehogs. A lot of cranks usually follow a chain of logic. If you take people who think there are illegal tricks to avoid paying income tax, they usually have elaborate chains of logic to back up their case.
How do you know that I hold my belief based on a "suppressed premise"? If something is suppressed and you can't see it, maybe the structure of my reasoning process isn't the structure you guess.
Missing the point. The point is how their conclusions are verified.
Logic is implicit in empiricism because the idea that contradictions are false is implicit in the idea of disproof by contradictory evidence.
Missing the point. I didn’t say that logic is sufficient for correctness. I am saying that if you have some sort of black-box, but effective reasoning, then some kind of presupposition is going to be needed to verify it.
If you have other reasoning show it. Otherwise that was an irrelevant nitpick.
I think Science and Sanity lays out a framework for dealing with beliefs that doesn't categorize them into true/false, and that is better than the basic true/false dichotomy.
I care more about what Science and Sanity called semantic reactions than I care about presuppositions.
Basically you feed the relevant data into your mind and then you let it process the data. As a result of processing it there's a semantic reaction. Internally the brain does that with a neural net that doesn't use logical chains to do its work.
When I write here I point out the most important piece of the data, but not all of what my reasoning is based on because it’s based on lots of experiences and lots of empiric data.
Using a ramified logic with more than two truth values is not the same as not using logic at all!
That is such a vague description of reasoning that it covers everything from superforecasting to schizobabble. You have relieved yourself of the burden of explaining how reasoning works without presuppositions by not treating reasoning as something that necessarily works at all.
Could you define what you mean with “logic” if not thinking in terms of whether a statement is true?
Thinking about how probable it is, or how much subjective credence it should have. There are formal ways of demonstrating how fuzzy logic and probability theory extend bivalent logic.
Science and Sanity is not about probability theory or similar concepts of having numbers between 0 and 1.
"The map is not the territory" doesn't mean "The map is the territory with credence X that's between 0 and 1". It's rather a rejection of the concept of the 'is of identity', thinking instead in terms like semantic reactions.
I was pointing out that the claim that logic is implicit in empiricism survives an attack on bivalence. I couldn’t see any other specific point being made.
Let’s say I want to learn juggling. Simply reading a book that gives me a theory of juggling won’t give me the skill to juggle. What gives me the skill is practicing it and exposing myself with the practice to empiric feedback.
I don’t think it’s useful to model that part of empiric learning to juggle with logic.
Juggling with logic is a loose metaphor...literally, juggling is a physical skill, so it cannot be learnt from pure theory. But reasoning is not a physical skill.
If you were able to make implicit reasoning explicit, you would be able to do useful things like seeing how it works, and improving it. I'm not seeing the downside to explicitness. Implicit reasoning is usually more complex than explicit reasoning, and its advantage lies in its complexity, not its implicitness.
Why do you think the dualistic distinction of physical and mental is useful for skill learning? But if you want a more mental skill how about dual n-Back?
The problem is that the amount of information that you can use for implicit reasoning vastly outweighs the amount of information for explicit reasoning. It’s quite often useful to make certain information explicit but you usually can’t make all available information that a brain uses for a reasoning process explicit.
Besides, neither General Semantics nor the Superforecasting principles are against using explicit reasoning. In both cases there are quite explicit heuristics about how to reason.
I started by saying that your idea that all reasoning processes are either explicit or implicit is limiting. In General Semantics you rather say "X is more explicit than Y" instead of "X is explicit". Using the binary classifier means that your model doesn't show certain information about reality that someone using the General Semantics model would see.
“Explicitness is important” isn’t a defense at all because it misses the point. I’m not against using explicit information just as I’m not against using implicit information.
If you agree that it covers superforecasting, then my argument is right. Using presuppositions is a very particular way of reasoning, and there are many other possible heuristics that can be used.
A LW comment also isn't long enough to lay out a complete system of reasoning as complex as the one proposed in Science and Sanity or the one proposed in Superforecasting. That's why I make general arguments and refer to the books for a more detailed explanation of particular heuristics.
There's basically two kinds of reasoning—the kind that can be made manifest (explicit, etc.) and the kind that can't. The gold standard of solving the problem of presuppositions (foundations, intuitions) is to show that nothing presupposition-like is needed in explicit reasoning. Failed attempts tend to switch to implicit reasoning, or to take it that sufficiently obvious presuppositions don't count as presuppositions (we can show this with induction... we can show this with empiricism).
I don't think that's the case. Trying to put complex concepts into two binary boxes is done very frequently in the Western tradition, but there's no inherent argument that it's the best way to do things. Science and Sanity argues in detail why binary thinking is limiting.
As far as this particular case of the implicit/explicit distinction goes, most kinds of reasoning tend to be a mix. Reasoning that's completely explicit is the kind of reasoning that can be done by a computer with very limited bandwidth. Many problems we know computers can't solve as easily as calculating 23472349 * 5435408, which can be done completely explicitly. If you limit yourself to what can be made completely explicit, you limit yourself to a level of intelligence that can't outperform computers with very limited memory/CPU power.
Explicit reasoning has its disadvantages, but it is still hard to do without. In talking about superforecasters, you are taking it that someone has managed to determine who they are, as opposed to ordinary forecasters, raving lunatics, etc. Determining that kind of thing is where explicit reasoning comes in... what's the alternative? Groups of people intuiting that each other are reliable intuiters?
That's why you mix it with implicit reasoning if you care about the outcome of the reasoning process. Doing everything implicitly is as bad as doing everything explicitly.
I would have thought the problem with doing everything explicitly is that it is not possible.
Our usual way of combining explicit and implicit reasoning is to reason explicitly from premises which we find intuitively appealing, i.e. which we arrive at by implicit reasoning. That isn't a solution to the problem, that is the problem: everything is founded on presuppositions, and if they are implicit we can't check how they are arrived at, and we also can't check how reliable they are without needing to use further presuppositions.
Korzybski seems to be saying we should be using more implicit reasoning. I don't see how that helps.
I don’t think that’s what he’s saying. In the case of “consciousness of abstraction” he even encourages people to be explicit about things that they usually aren’t.
Korzybski takes a long book to explain how he thinks reasoning should be done and coins a bunch of basic concepts on which it should be built that are internally consistent. I don’t think I can give you a full understanding of how the framework works in the space of a few comments.
Does it address the problem at hand?
Most statements we make in General Semantics are about maps; there's no presumption that the map is real and is the territory. Indeed, being explicit about the fact that it isn't is an important part.
How does that address the presumption problem? You could say that no statement made by anybody has any bearing on reality, so the presumptions they are based on don't matter... but if that kind of sweeping anti-realism were a good solution, it would have been adopted long ago.
I don't think General Semantics is anti-realism any more than Einstein's relativity theory is anti-realism because it states that a lot is relative. I think General Semantics hasn't been adopted because it's actually hard to learn to switch to thinking in terms of General Semantics.
Academic science in the 20th century worked to compartmentalize knowledge by subject in a quite specific way, and a discipline like General Semantics didn't fit into that compartmentalization. It's similar to how cybernetics as a field didn't make it big because it doesn't fit into the common categorization.
I am not saying that GS is necessarily anti realistic, just trying to find some relevance to your comment. I don’t suppose I will ever find out how GS solves the presupposition problem, since you seem to be more interested in saying how great it is in the most general possible terms.
Answering the question is like answering how some mathematical proof works that goes for 200 pages. GS is a complex system that builds on itself.
Do you feel confident you personally have the answer in your own mind, or are you just running on the assumption that GS must contain it somewhere, because of its general wonderfulness?
The outside view: http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3v49
I think the problem doesn't make sense in the GS paradigm. Kuhn wrote that problems set in one paradigm aren't necessarily expressible in the paradigm of another framework, and I think this is a case like that.
According to Kuhn science needs to have a crisis to stop using the existing paradigm and move to a different one. In the field of medicine you could say that the paradigm of Evidence-Based Medicine solved certain issues that the prevailing scientific paradigm had at the time Korzybski wrote. Thinking in terms of probabilities and controlled trials solves certain practical problems really well. It especially solved the practical problem of proving that patented drugs provide clinical effects for patients really well and much better than the previous paradigm.
GS doesn't solve that problem as well. There are socioeconomic reasons why a paradigm that solves that problem well won. On the physics side, "shut up and calculate" also worked well socioeconomically. "Shut up and calculate" works well for problems such as flying airplanes, going to the moon, or building computer chips. To solve those problems the conceptualization of the underlying ontology isn't necessary. Economically, people did well in those areas by ignoring ontology and simply focusing on epistemology.
GS doesn't provide better answers to those questions. On the other hand, the prevailing paradigm gives really crappy answers to questions such as "What is autism?" What's a human? Is a human something different than a homo sapiens? GS is useful for thinking about the answers to those questions. Those questions are starting to become economically relevant, with big data and AI, in a way they didn't use to be.
On the QS Facebook group I had a conversation yesterday about practical problems with the ontology of what the term "mood" means, with a person saying that they had trouble creating data about moods because they couldn't find a definition on which 30% of psychologists agree.
I think “general wonderfulness” is the wrong framing. It’s that GS is doing well at different problems.
Do you realise that over the course of the discussion, you have
1) offered a solution to the problem of unfounded foundations.
2) offered a claim that a solution exists, but is too long to write down.
3) offered a claim that the problem doesn’t exist in the first place.
The solution offered at the beginning is basically: “Don’t try to let your reasoning be based on underlying foundations in the first place.”
That leaves the open question about how to reason. GS is an answer to that question.
"On the one hand, on the other hand, on the third hand"-reasoning as advocated in Superforecasting, where there doesn't have to be a shared foundation for all three hands, is another. That's what Tetlock calls "foxy" thinking, and he argues that it makes better predictions than hedgehog thinking, where everything is based on one model with one foundation. But Superforecasting provides a bunch of heuristics and not a deep ontological foundation.
I also have other frameworks that point in the same direction but that are even harder to describe and likely not accessible by simply reading a book.
No. The problem exists if you take certain assumptions for granted. I have claimed that you don't have the problem if you don't make those assumptions and instead follow certain heuristics.
This leaves open the question of how to reason differently. GS is an answer of how to reason differently and it’s complex and demonstrating that it’s an internally consistent approach takes time and is done in Science and Sanity over many pages.
No, I do see that the problem exists if you follow certain heuristics.
What that seems to amount to is "conduct all your reasoning inside a black box". That makes for some problems, such as the problem of being able to verify your reasoning.
No, it's not a black box. It's just not the usually used box, and Science and Sanity describes how the box works. And that's sufficiently complex that it's not easy to break down on one page.
Eliezer ruminates on foundations and wrestles with the difficulties quite a bit in the Metaethics sequence, for example:
Where Recursive Justification Hits Bottom
Fundamental Doubts
Thank you. This reply actually answers the first part of my question.
The ‘working’ presuppositions include:
Induction
Occam’s razor
I will quote the most important part from Fundamental Doubts:
And this has a lot of similarities with my previous conclusion (with significant differences about circular logic and meta loops).
Okay, I don’t know why everyone is making this so complicated.
In theory, nothing is presupposed. We aren’t certain of anything and never will be.
In practice, if induction works for you (it will) then use it! Once it’s just a question of practicality, try anything you like, and use what works.
It won’t let you be certain, but it’ll let you move with power within the world.
As for values, morals, your question suggests you might be interested in A Thousand Shards of Desire in the sequences. We value what we do, with lots of similarities to each other, because evolution designed our psychology that way.
Evolution is messy and uncoordinated. We ended up with a lump of half random values not at all coherent.
So, we don’t look for, or recommend looking for, any One Great Guiding Principle of morality; there probably isn’t one.
We just care about life and fairness and happiness and fun and freedom and stuff like anyone else. Lots of lw people get a lot of mileage out of consequentialism, utilitarianism, and particularly preference utilitarianism.
But these are not presumed. Morality is, more or less, just a pile of things that humans value. You don’t HAVE to prove it to get people to try to be happy or to like freedom (all else equal).
If I’ve erred here, I would much like to know. I puzzled over these questions myself and thought I understood them.
Having an official doctrine that nothing is certain is not at all the same as having no presuppositions. To have a presupposition is to treat something as true (including using it methodologically) without being able to prove it. In the absence of any p=1 data, it makes sense to use your highest-probability uncertain beliefs presuppositionally. It's absence of foundations (combined with a willingness to employ it nonetheless) that makes something presuppositional, not presence of certainty.
Treating something as true non-methodologically means making inferences from it, or using it to disprove something else.
Treating something as true methodologically means using it as a rule of inference.
If you have a method of showing that induction works, that method will ground out in presuppositions. If you don't, then induction itself is a (methodological) presupposition for you.
Finally: treating moral values as arbitrary, but nonetheless something you should pursue[*], is at the farthest possible remove from showing that they are not presuppositions!
[*]Why?
None of this requires that you pretend to know more than you do.
I don’t have to pretend to know whether I’m in a simulation or not. I can admit my ignorance, and then act, knowing that I do not know for certain if my actions will serve.
I think of this in levels
I can’t prove that logic is true. So I don’t claim to know it is with probability 1. I don’t pretend to.
But, IF it is true, then my reasonings are better than nothing for understanding things.
So, my statements end up looking something like: “(IF logic works) the fact that this seems logical means it’s probably true.”
But, I don't really know if my senses are accurate messengers of knowledge (Matrix, etc.). That's on another level. But I don't have to pretend that I know they are, and I don't. So my statements end up looking like: "((IF logic works) and my senses are reporting accurately) the fact that this seems logical means it's probably true."
We just have to learn to act amid uncertainty. We don’t have to pretend that we know anything to do so.
Morals are not arbitrary. I’m just talking about the Sequences’ take on morality. If you care about a different set of things, then morality doesn’t follow that change, it just means that you now care about something other than morality.
If you love circles, and then start loving ovals, that doesn’t make ovals into circles, it just means you’ve stopped caring about circles and started caring about something else.
Morality is a fixed equation.
To say you “should” be moral is tautological. It’s just saying you “should” do what you “should” do.
Yes, if you can’t solve the presupposition problem, the main alternative is to carry on as before, at the object level, but with less confidence at the meta level. But who is failing to take that advice? As far as I can see, it is Yudkowsky. He makes no claim to have solved the problem of unfounded foundations, but continues putting very high probabilities on ideas he likes, and vehemently denying ones he doesn’t.
Ok. You should be moral. But there is no strong reason why you should follow arbitrary values. Therefore, arbitrary values are not morality.
So what are the correct moral values?
Well, in the normal course of life, on the object level, some things are more probable than others.
If you push me about if I REALLY know they’re true, then I admit that my reasoning and data could be confounded by a Matrix or whatever.
Maybe it’s clearer like so:
Colloquially, I know how to judge relative probabilities.
Philosophically (strictly), I don’t know the probability that any of my conclusions are true (because they rest on concepts I don’t pretend to know are true).
About the moral values thing, it sounds kinda like you haven’t read the sequence on metaethics. If not, then I’m glad to be the one to introduce you to the idea, and I can give you the broad strokes in a few sentences in a comment, but you might want to ponder the sequence if you want more.
Morality is a set of things humans care about. Each person has their own set, but as humans with a common psychology, those sets greatly overlap, creating a general morality.
But, humans don’t have access to our source code. We can’t see all that we care about. Figuring out the specific values, and how much to weight them against each other is just the old game of thought experiments and considering trade-offs, etcetera.
Nothing that can be reduced to some one-word or one-sentence idea that sums it all up. So we don’t know what all the values are or how they’re weighted. You might read about “Coherent Extrapolated Volition,” if you like.
Morality is not arbitrary any more than circularity is arbitrary. Both refer to a specific thing with specific qualities. If you change the qualities of the thing, that doesn’t change morality or change circularity, it just means that the thing you have no longer has morality, no longer has circularity.
A great example is Alexander Wales' short story "The Last Christmas" (particularly chapters 2 and 3). See below.
The elves care about Christmas Spirit, not right and wrong, or morality, or fairness.
When it’s pointed out that what they’re doing isn’t fair, they don’t protest, they just say “We don’t care. Fairness isn’t part of the Christmas Spirit.”
And we might say, “Santa being fat? We don’t care, that’s not part of morality. We don’t deny that it’s part of the Christmas Spirit; we just don’t care that it is.”
If aliens care about different things, it’s not about our morality versus “their” morality. It would be about THE morality versus THE Glumpshizzle. The paper-clipper is used also as example. It doesn’t care about morality. It cares about clippiness.
The moral thing and the clippy thing to do are both fixed calculations. Once you know the answer, it’s a feature of your mind if you happen to respond to morality, or clippiness, or Glumpshizzle, or Christmas Spirit.
If anybody thinks I’ve misunderstood part of this, please, do let me know. I’ve tried to understand, and would like to correct any mistakes if I have them.
“You wouldn’t even make any arguments for why you should live?” asked Charles.
“My life is meaningless in the face of the Christmas spirit,” said Matilda.
“But if it didn’t matter to the Christmas spirit,” said Charles, “If I just wanted to see you die for fun?”
“Allowing you to satisfy your desires is part of maintaining the Christmas spirit, Santa,”
“It’s unfair,” said Charles.
“Life is unfair,” said Matilda.
“Does it have to be?” asked Charles. “Is that the Christmas spirit?”
“I don’t know,” said Matilda. “Fairness doesn’t enter into it, I don’t think. Why should Christmas be fair if life isn’t fair?”
http://alexanderwales.com/the-last-christmas-chapter-1-2/
More a case of read but not believed.
That isn’t saying anything cogent. If moral values are some specific subset of human values, you haven’t said what the criterion of inclusion in that subset is. On the other hand, if you are saying all human values are moral values, that is incredible:-
Human values can conflict.
Morality is a decision theory, it tells you what you should do.
A ragbag of conflicting values cannot be used to make a definitive decision.
Therefore morality is not a ragbag of conflicting values.
Perhaps you think CEV solves the problem of value conflict. But if human morality is broadly defined, then the CEV process will be doing almost all the lifting, and CEV is almost entirely unspecified. On the other hand, if you narrow down the specification of human values, you increase the amount of arbitrariness.
Your theory of morality is arbitrary because you are not explaining why only human (twenty-first century? Western?) values count as morality. Rather, you are using "morality" as something like a place name or personal name. No reason need be given why Istanbul is Istanbul; that's just a label someone put on an area of Earth's surface.
But morality cannot be a matter of arbitrary labeling, because it is about having a principled reason why you should do one thing and not another... however, no such reason could be founded on an arbitrary naming ceremony! No more than everyone should obey me just because I dub myself the King of the World! To show that human values are morality, you have to show that they should be followed, which you don't do just by calling them morality. That doesn't remove the arbitrariness in the right way.
Because the map is not the territory, normative force does not come from labels or naming ceremonies. You can’t change what is by relabelling it, and you can’t change what ought to be that way either.
Note how we have different rules about proper names and meaningful terms. You can name things as you wish, because nothing follows from it, because names are labels, not contentful terms. You can make inferences from contentful terms, but you should apply them carefully, since argument from tendentiously applied terms is a common form of bad argument. Follow the rules and you have no causal series going from map to territory. Choose one from column A and one from column B, and you do.
What you are describing isn’t fixed in the expected sense of being derivable from first principles.
How does that pan out in practice? If (1) humans have the one true morality, then we should apply it, and even enforce it on others. If (2) morality is just a set of arbitrary values, there is little reason humans should follow it, and even less justification for imposing it.
These are contradictory ideas, yet you are asserting both of them!
BTW, denial of your claims that morality is a unique but arbitrary thing doesn’t entail believing that clipping is morality. You can have N things that are morality, according to some criteria, without Clipping being amongst them.
Moreover, alternative theories don't have to disclaim any connection between morality and human values.
[Disclaimer: My ethics and metaethics are not necessarily the same as those of Bound_up; in fact I think they are not. More below.]
I think this argument, in order to work, needs some further premise to the effect that a decision only counts as “definitive” if it is universal, if in some suitable sense everyone would/should arrive at the same decision; and then the second step (“Morality tells you what you should do”) needs to say explicitly that morality does this universally.
In that case, the argument works—but, I think, it works in a rather uninteresting way because the real work is being done by defining “morality” to be universal. It comes down to this: If we define “morality” to be universal, then no account of morality that doesn’t make it universal will do. Which is true enough, but doesn’t really tell us anything we didn’t already know.
I think I largely agree with what I take to be one of your main objections to Eliezer’s “metaethics sequence”. I think Eliezer’s is a nonrealist theory masquerading as a realist theory. He sketches, or at least suggests the existence of, some set of moral values broadly shared by humanity—so far, so good, though as you say there are a lot of details to be filled in and it may or may not actually be possible to do that. He then says “let us call this Morality, and let us define terms like should and good in terms of these values”—which is OK in so far as anyone can define any words however they like, I guess. And then he says “and this solves a key problem of metaethics, namely how we can see human values as non-arbitrary even though they look arbitrary: human values are non-arbitrary because they are what words like should and right and bad are about”—which is mere sophistry, because if you were worried before about human values being arbitrary then you should be equally worried after his definitional move about the definitions of terms like should being arbitrary.
But I don’t think (as, IIUC, Eliezer and Bound_up also don’t think) we need to be terribly worried about that. Supposing—and it’s a big supposition—that we are able to identify some reasonably coherent set of values as “human moral values” via CEV or anything else, I don’t think the arbitrariness of this set of values is any reason why we shouldn’t care about it, strive to live accordingly, program our superpowerful superintelligent godlike AIs to use it, etc. Yes, it’s “just a label”, but it’s a label distinguished by being (in some sense that depends on just where we get this set of values from) what we and the rest of the human race care about.
Ok, but it would have been helpful to have argued the point.
AFAICT, it is only necessary to have the same decision across a certain reference class, not universally.
Who is defining morality to be universal? I don't think it is me. I think my argument works in a fairly general sense. If morality is a ragbag of values, then in the general case it is going to contain contradictions, and that will stop you making any kind of decision based on it.
I disagree with this objection to Eliezer's ethics because I think the distinction between "realist" and "nonrealist" theories is a confusion that needs to be done away with. The question is not whether morality (or anything else) is "something real," but whether or not moral claims are actually true or false. Because that is all the reality that actually matters: tables and chairs are real, as far as I am concerned, because "there is a table in this room" is actually true. (This is also relevant to our previous discussion about consciousness.)
And in Eliezer’s theory, some moral claims are actually true, and some are actually false. So I agree with him that his theory is realist.
I do disagree with his theory, however, insofar as it implies that “what we care about” is essentially arbitrary, even if it is what it is.
That (whether moral claims are actually true or false) is exactly how I distinguish moral realism from moral nonrealism, and I think this is a standard way to understand the terms.
But any nonrealist theory can be made into one in which moral claims have truth values by redefining the key words. My suggestion is that Eliezer’s theory is of this kind: it is nearer to a straightforwardly nonrealist theory, which it becomes if e.g. you replace his use of terms like “good” with terms that are explicit about what value system they reference (“good according to human values”), than to typical more ambitious realist theories that claim that moral judgements are true or false according to some sort of moral authority that goes beyond any particular person’s or group’s or system’s values.
I agree that the typical realist theory implies more objectivity than is present in Eliezer’s theory. But in the same way, the typical non-realist theory implies less objectivity than is present there. E.g. someone who says that “this action is good” just means “I want to do this action” has less objectivity, because it will vary from person to person, which is not the case in Eliezer’s theory.
I think we are largely agreed as to facts and disagree only on whether it’s better to call Eliezer’s theory, which is intermediate between many realist theories and many non-realist theories, “realist” or “non-realist”.
I’m not sure, though, that someone who says that “this is good” = “I want to do this” is really a typical non-realist. My notion of a typical non-realist—typical, I mean, among people who’ve actually thought seriously about this stuff—is somewhat nearer to Eliezer’s position than that.
Anyway, the reason why I class Eliezer’s position as non-realist is that the distinction between Eliezer’s position and that of many (other?) non-realists is purely terminological—he agrees that there are all these various value systems, and that if ours seems special to us that’s because it’s ours rather than because of some agent-independent feature of the universe that picks ours out in preference to others, but he wants to use words like “good” to refer to one particular value system—whereas the distinction between his position and that of most (other?) realists goes beyond terminology: they say that the value system they regard as real is actually built into the fabric of reality in some way that goes beyond the mere fact that it’s our (or their) value system.
You may weight these differences differently.
I think he wants a system which works like realism, in that there are definite answers to ethical questions (“fixed”, “frozen”), but without spookiness.
Yudkowsky’s theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral, slavery, torture, whatever, then there’s no fixed standard of morality. The label “moral” has been placed on a moving target. (Standard relativism usually has this problem synchronically, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)
You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic... God’s commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don’t think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labelling values as moral, just like the original theory. If the Moral Equation is something ideal and abstract, why can’t aliens partake?
I agree.
Again, my point is that to do justice to philosophical doubt, you need to avoid high probabilities in practical reasoning, à la Taleb. But not everyone gets that. A lot of people think that using probability alone is sufficient.
Hi Arielgenesis, and welcome!
From a rationalist perspective, taking things for granted is both dangerous and extremely useful. We want to preserve our ability to change our minds about things in the right direction (closer to truth) whenever the opportunity arises. That being said, we cannot afford to doubt everything, as updating our beliefs takes time and resources.
So there are things we take for granted. Most mathematics, physics, the basic laws and phenomena of Science in general. Those are ideally backed by the scientific method, whose axioms are grounded in building a useful model of the world (see Making Beliefs Pay Rent (in Anticipated Experiences)).
From my rationalist perspective, then, there are no self-evident things, but there are obvious things, considered evident by the overwhelming weight of available… evidence.
Regarding values… it’s a tough problem. I personally find that all preconceptions I had about universally shared values are shattered one by one the more I study them. For more information on this, I shall redirect you to complexity of value.
No, if you look at our yearly census you find that it lists a question for the probability that we are living in a simulation. If my memory is right, most people don’t presume that this probability is zero; they enter nonzero numbers.
People will also discard the low-probability possibilities at every stage of a process of thought, because otherwise the combinatorial explosion is impossible to cope with.
Effectiveness is desirable; effectiveness is measured by results; consistency and verifiability are how we measure what is real.
As a corollary, things that have no evidence do not merit belief. We needn’t presume that we are not in a simulation; we can evaluate the evidence for it.
The central perspective shift is recognizing that beliefs are not assertions about reality, but assertions about our knowledge of reality. This is what is meant by the map and the territory.
Does evidence have to be direct evidence? Or can something like inference to the best explanation be included?
That is exactly the sort of situation where direct evidence is useless.
How do we not fall into the rabbit hole of finding evidence that we are not in a simulation?
There is a LessWrong wiki entry for just this problem: https://wiki.lesswrong.com/wiki/Simulation_argument
The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.
Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as we are or are not in a simulation; instead our belief can take the form of a system of conclusions, with confidence distributed among them. The second is the notion of paying rent, which is super handy for setting priorities. In summary, if it does not yield a new expectation, it probably does not merit consideration.
If this does not seem sufficiently coherent, consider that you are allowed to be inconsistent, and also that you are engaging with rationality early in its development.
If inference to the best explanation is included, we can’t do that. We can know when we have exhausted all the prima facie evidence, but we can’t know when we have exhausted every possible explanation for it. What you haven’t thought of yet, you haven’t thought of. Compare with the problem of knowingly arriving at the final and perfect theory of physics.
This is a useful bit of clarification, and timely.
Would that change if there was a mechanism for describing the criteria for the best explanation?
For example, could we show from a body of evidence which explanation has minimum entropy, and therefore that even if there are other explanations they are at best equivalent?
Equivalent in what sense? The fact that you can have equivalently predictive theories with different ontological implications is a large part of the problem.
Another part is that you don’t have exhaustive knowledge of all possible theories. Being able to algorithmically check how good a theory is would be a tall order, but even if you had such a check it would not be able to tell you that you had hit the best possible theory, only the best out of the N fed into it.
Let me try to restate, to be sure I have understood correctly:
We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don’t have a way to exclude other ontological implications we have not considered.
Question: why don’t the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?
Maybe they can[*], but it is not exactly a good thing...if you stick to one method of analysis, you will be in an echo chamber.
[*] An example might be the way reality looks mathematical to physics, which some people are willing to take fairly literally.
Echo chamber implies getting the same information back.
It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.
Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?
Without having a way of ranging across ontologyspace, how can we distinguish the merits of different ontologies? But we don’t have such a way. In its absence, we can pursue an ontology to the point of breakdown, whereupon we have no clear path onwards. It can also be a slow process … it took centuries for scholastic philosophers to reach that point with the Aristotelian framework.
Alternatively, if an ontology works, that is no proof that it is the best possible ontology, or the final answer... again because of the impossibility of crawling across ontologyspace.
This sounds strongly like we have no grounds for considering ontology at all when determining what the best possible explanation is.
We cannot qualitatively distinguish between ontologies, except through the other qualities we were already examining.
We don’t have a way of searching for new ontologies.
So it looks like all we have done is go from best possible explanation to best available explanation where some superior explanation occupies a space of almost-zero in our probability distribution.
If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others … then I don’t think the situation is quite that bad: it’s partly true, but there are also criteria that span ontologies, like parsimony.
The point is that we don’t have a mechanical, algorithmic way of searching for new ontologies. (It’s a very LessWrongian piece of thinking to suppose that means there is no way at all.) Clearly, we come up with new ontologies from time to time. In the absence of an algorithm for constructing ontologies, doing so is more of a creative process, and in the absence of algorithmic criteria for evaluating them, doing so is more like an aesthetic process.
My overall points are that
1) Philosophy is genuinely difficult... its failure to churn out results rapidly isn’t due to a boneheaded refusal to adopt some one-size-fits-all algorithm such as Bayes...
2) … because there is currently no algorithm that covers everything you would want to do.
It’s a one-word difference, but it’s a very significant difference in terms of implications. For instance, we can’t quantify how far the best available explanation is from the best possible explanation. That can mean that the use of probabilistic reasoning doesn’t go far enough.
I mean to say we are not ontologically motivated. The examples OP gave aren’t ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.
In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on things happening that disagreed with Aristotle, which weren’t motivated by testing Aristotle. Architecture and siege engines did for falling objects, for example.
I agree with your points. I am now experiencing some disquiet about how slippery the notion of ‘best’ is. I wonder how one would distinguish whether it was undefinable or not.
Who’s “we”? Lesswrongians seem pretty motivated to assert the correctness of physicalism and wrongness of dualism, supernaturalism, etc.
I’m not following that. Can you give concrete examples?
What I had in mind was Aristotelean metaphysics, not Aristotelean physics. The metaphysics, the accident/essence distinction and so on, failed separately.
An article based on rationality-informed strategies of probabilistic thinking and de-anchoring to deal with police racial profiling. Note that the data on racial profiling is corrected for the higher rate of crimes committed by black people. This is a very by-the-numbers piece.
Good article overall but if the people at the Center for Policing Equity found no racial bias in police shootings do you think they would have published the results? And if they did publish such a study would Salon have let you publish an article uncritically citing its conclusions? In short: shouldn’t file drawer effects cause us to be wary of this part of your article?
Yeah, totally hear you about the file drawer effect, which is why I found two separate citations besides the Center for Policing Equity, which I cited in the piece—this one, and this one. One is a poll, and the other is a government statistical analysis on traffic stops that includes race information. Neither of these is something to which the file drawer effect (publication bias) would apply.
OK, good point.
The mainstream LW idea seems to be that the right to life is based on sentience.
At the same time, killing babies is the go-to example of something awful.
Does everyone think babies are sentient, or do they think that it’s awful to kill babies even if they’re not sentient for some reason, or what?
Does anyone have any reasoning on abortion besides “not a sentient being, so killing it is okay, QED” (wouldn’t that apply to newborns, too?)?
I think killing babies is uniquely horrible rather than uniquely harmful.
Professor Moriarty’s evil plan to destroy the world and kill everyone is nearing completion. All he has to do is to press the big red button on the world-destroying machine. You are standing nearby with a hand grenade and could kill Moriarty. But, anticipating this sort of problem, he has taken care to have the whole area filled with cute babies.
You should probably throw the hand grenade even though it will kill lots of babies. But if your response to this situation is anything like “Yessss! Finally I get to kill some babies!” then, although I suppose in some sense I’m glad it’s you rather than someone more scrupulous in this bizarre situation, you are a terrible person and the world needs fewer people like you and in the less-artificial contexts that make up maybe 99.9999% of real situations your enthusiasm for baby-killing will not be any sort of advantage to anyone.
(There is a big difference, in principle, between the questions “Will doing X make the world a better place on balance than not doing X?” and “Is X the sort of thing a good rather than a bad person would do?”. Unfortunately, everyday moral discourse doesn’t make that distinction very clearly. Fortunately, in most actually-arising situations (I think) the two questions have similar answers.)
(separate reply, so you can downvote either or both points)
I don’t think anyone’s tried to poll abortion feelings on LW, and expect the topic to be fairly mind-killing. For myself, I tend not to see moment-of-birth as much of a moral turning point—it’s about the same badness to me whether the euthanasia takes place an hour before, or during, or an hour after delivery. Somewhere long before that, the badness of never existing changes to the badness of probably-but-then-not existing, and then to the badness of almost-but-then-not-existing, and then to existing-then-not, and then later to existing-and-understanding-then-not.
It’s a continuum of unpleasant to reprehensible, not a switch between acceptable and not.
Potential sentience has got to count, or it would be ok to kill sleeping people.
I don’t think this is quite the LW norm. We might distinguish several different meanings of right to life:
1: The moral value I place on other peoples’ lives. In this sense “right to life” is just the phrase I use to describe the fact that I don’t want people to kill or die, and the details can easily vary from person to person. If LW users value sentience, this is a fact about demographics, not an argument that should be convincing. This is what we usually mean when we say something is “okay.”
2: The norms that society is willing to enforce regarding the value of a life. Usually fairly well agreed upon, though with some contention (e.g. fertilized ova). This is the most common use of the word “right” by people who understand that rights aren’t ontologically basic. Again, this is a descriptive definition, not a prescriptive one, but you can see how people might decide what to collectively protect based on compromises between their own individual values.
3: Something we should protect for game-theoretic reasons. This is the only somewhat prescriptive one, since you can argue that it is a mistake in reasoning to, say, pollute the environment if you’re part of a civilization of agents very similar to you. Although this still depends on individual values, it’s the similarity of peoples’ decisions that does the generalizing, rather than compromise between different people. Values derived in this way can be added to or subtracted from values derived in the other ways. It’s unclear how much this applies to the case of abortion—this seems like an interesting argument.
I don’t know if this is mainstream, but IMO it’s massively oversimplified to the point of incorrectness. There’s plenty of controversy over what “right” even means, and how to value sentience is totally unsolved. I tend to use predicted-quality-adjusted-experience-minutes as a rough guideline, but adjust it pretty radically based on emotional distance and other factors.
I think of it more as a placeholder than an example. It’s not an assertion that this is universally awful in all circumstances (though many probably do think that), it’s intended to be “or something else you think is really bad”.
The mainstream moral concept of LW is utilitarianism, not deontology, to the extent that there’s a LW mainstream moral concept.
I don’t think most people here like dualistic thinking; they consider being sentient to be a sliding scale.
How would you write a better “Probability Theory: The Logic of Science”?
Brainstorming a bit:
accounting for the corrections and rederivations of Cox’ theorem
more elementary and intermediate exercises
regrouping and expanding the sections on methods “from problem formulation to prior”: uniform, Laplace, group invariance, maxent and its evolutions (MLM and minxent), Solomonoff (see the maxent sketch after this list)
regroup and reduce all the “orthodox statistics is shit” sections
a chapter about anthropics
a chapter about Bayesian networks and causality, that flows into...
an introduction to machine learning
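For the maxent item in the list above, here is a minimal numerical sketch of the “from problem formulation to prior” idea, using Jaynes’s Brandeis dice setup; the specific constraint (a mean of 4.5) and the use of scipy.optimize are my own illustrative assumptions, not anything from the book:

```python
# Sketch: derive a maximum-entropy prior over die faces 1..6 given only a known mean.
# The constraint value (4.5) and the optimizer choice are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)
target_mean = 4.5

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)            # guard against log(0)
    return np.sum(p * np.log(p))          # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                  # normalization
    {"type": "eq", "fun": lambda p: np.dot(p, faces) - target_mean},   # mean constraint
]

p0 = np.full(6, 1 / 6)                    # start from the uniform distribution
result = minimize(neg_entropy, p0, bounds=[(0, 1)] * 6, constraints=constraints)
print(np.round(result.x, 4))              # mass shifted toward the high faces
```

The uniform and group-invariance priors from the same list item could then be presented as other instances of the same recipe: state the information you actually have, and let it pin down the distribution.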
My perspective on anthropics is somewhat different than many, but I think that in a probability theory textbook, anthropics should only be treated as a special case of assigning probabilities to events generated by causal systems. Which requires some familiarity with causal graphs. It might be worth thinking about organizing material like that into a second book, which can have causality in an early chapter.
I would include Savage’s theorem, which is really pretty interesting. A bit more theorem-proving in general, really.
Solomonoff induction is a bit complicated, I’m not sure it’s worthwhile to cover at a more than cursory level, but it’s definitely an important part of a discussion about what properties we want priors to have.
On that note, a subject of some modern interest is how to make good inferences when we have limited computational resources. This means both explicitly using probability distributions that are easy to calculate with (e.g. Gaussian, Cauchy, uniform), and also implicitly using easy distributions by neglecting certain details or applying certain approximations.
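As a minimal sketch of the “implicitly using easy distributions” point, one could show a Laplace approximation: replace an exact Beta posterior with a Gaussian centred at its mode. The particular counts below are invented for illustration:

```python
# Sketch: approximate a Beta posterior with a cheap Gaussian (Laplace approximation).
# The counts (17 successes, 3 failures) are invented for illustration.
import numpy as np
from scipy import stats

a, b = 1 + 17, 1 + 3                      # Beta posterior from a uniform prior
mode = (a - 1) / (a + b - 2)              # posterior mode
curvature = (a - 1) / mode**2 + (b - 1) / (1 - mode)**2   # -(log posterior)'' at the mode
approx = stats.norm(loc=mode, scale=1 / np.sqrt(curvature))

exact = stats.beta(a, b)
for theta in (0.7, 0.8, 0.9):
    print(theta, round(exact.pdf(theta), 3), round(approx.pdf(theta), 3))
```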
Any book on statistics needs to have a section about regularization/complexity control and the relationship to generalization. This is an enormous lacuna in standalone Bayesian philosophy.
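One way such a section could connect the two traditions is to present L2 regularization as a prior: ridge regression is the MAP estimate under a Gaussian prior on the weights. A minimal sketch, with the data and prior scale invented for illustration:

```python
# Sketch: ridge regression as the MAP estimate under a Gaussian prior on the weights.
# Data, noise level, and prior scale are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.5, size=20)

sigma2, tau2 = 0.5**2, 1.0**2             # noise variance, prior variance on each weight
lam = sigma2 / tau2                       # the implied ridge penalty

w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)   # regularized / MAP
w_mle = np.linalg.solve(X.T @ X, X.T @ y)                     # unregularized, for comparison
print(np.round(w_map, 3), np.round(w_mle, 3))
```

How well the result generalizes then depends on how concentrated the prior is relative to the data, which is at least a start on filling that lacuna.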
I now see that most of Jaynes’ effort in the book is an attempt to repair the essential problem with Bayesian statistics, which is the subjectivity of the prior. In particular, Jaynes believed that the MaxEnt idea provided a way to derive a prior directly from the problem formulation.
I believe he failed in his effort. The prior is intrinsically subjective and there is no way to get around this in the traditional small-N data regime of statistics. Two observers, looking at the same small data set, can justifiably come to very different conclusions. Objectivity is only re-achieved when the size of the data set becomes large, so that Solomonoff-style reasoning can be used.
I think too that Jaynes failed in his attempt, but just because he died too, too soon.
Otherwise, had he lived a long life, I believe we would have had much more advancement in the field. To the present moment, nobody seems interested in bringing forward his vision, not even his closest student (Bretthorst, who edited the printed version of Jaynes’ book).
Having said this, I believe you are wrong, because while it’s true that two different agents can come up with two different priors for the same problem, they can do so only if they have different information (or, what is the same thing, different beliefs about it). Otherwise one is violating either common sense or coherence or basic logic.
There lies, I believe, all the meaning of objective Bayesian probability: a collection of methods, possibly with a unifying framework such as minxent, that allow one to uncover the information hidden in a problem formulation, or to reveal why two different subjective priors actually differ.
Sure, but agents almost always have different background information, in some cases radically different background information.
Let’s say a pharma company comes out with a new drug. The company claims: “Using our specially-developed prior, which is based on our extensive background knowledge of human biochemistry, in combination with the results of our recent clinical trial, we can see that our new drug has a miraculous ability to save lives!” An outsider looks at the same data, but without the background biochemical knowledge, and concludes that the drug is actually killing people.
You can partially alleviate this problem by requiring the pharma company to submit its special prior before the trial begins. But that’s not what Jaynes wanted; he wanted to claim that there exists some ideal prior that can be derived directly from the problem formulation.
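A minimal numerical sketch of the pharma situation above, with every number (trial size, baseline rate, prior parameters) invented for illustration: the same small trial, two different Beta priors over the drug’s survival rate, and two opposite conclusions.

```python
# Sketch: same small trial, two different priors, two different conclusions.
# Trial size, baseline survival rate, and prior parameters are invented for illustration.
from scipy import stats

survived, died = 8, 2                     # a small clinical trial
baseline = 0.7                            # assumed survival rate without the drug

priors = {
    "company (optimistic)": (20, 5),      # Beta prior concentrated above the baseline
    "outsider (skeptical)": (5, 20),      # Beta prior concentrated below the baseline
}

for who, (a, b) in priors.items():
    post = stats.beta(a + survived, b + died)
    print(who, "P(drug beats baseline) =", round(1 - post.cdf(baseline), 3))
```

With a hundred times the data at the same observed rate, the two posteriors would nearly coincide, which is the large-N objectivity point made a few comments up.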
In that case, it’s the correct situation that they come up with different priors.
Yes, but he also made a point to always include all background information in the problem formulation. He explicitly wrote so, and his formulas had a trailing term P(...|...,X) to account for this.
It might be interesting to explore what happens to models if you change part of the background information, but I think it’s undeniable that with the same information you are bound to come up with the same prior.
This is why I think objective Bayesian probability is a better framework than subjective Bayesian: objectivity accounts for and explains subjectivity.
If you are not a Boltzmann Brain, then sentience produced by evolution or simulation is likely more common than sentience produced by random quantum fluctuations.
If sentience produced by evolution or simulation is more common than sentience produced by random quantum fluctuations, and given an enormous universe available as simulation or alien resources, then the number of sentient aliens or simulations is high.
Therefore, P(Sentient Aliens or Simulation) and P(You are a Boltzmann Brain) move in opposite directions when updated with new evidence. As SETI continues to come up empty, the possibility that you are a floating brain in space is increasingly likely, all else equal.
Are these statements logical? Criticism and suggestions welcome.
If those are the only two possibilities then of course they do. A complete set of probabilities must sum to 1 and so P(Aliens) is simply (1 − P(Boltzmann)).
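As a minimal numerical sketch of that complementarity, with the prior and the likelihoods invented purely for illustration: a null SETI-style observation that is less expected under the aliens/simulation hypothesis than under the Boltzmann-brain hypothesis moves probability mass from the former to the latter, and the two posteriors still sum to 1.

```python
# Sketch: with two mutually exclusive, exhaustive hypotheses, any update that lowers
# one posterior must raise the other. Prior and likelihoods are invented for illustration.
prior_aliens, prior_boltzmann = 0.99, 0.01

p_null_given_aliens = 0.3       # assumed chance of a null SETI result if aliens/simulations abound
p_null_given_boltzmann = 0.9    # assumed chance of the same observation for a Boltzmann brain

evidence = prior_aliens * p_null_given_aliens + prior_boltzmann * p_null_given_boltzmann
post_aliens = prior_aliens * p_null_given_aliens / evidence
post_boltzmann = prior_boltzmann * p_null_given_boltzmann / evidence

print(round(post_aliens, 3), round(post_boltzmann, 3), round(post_aliens + post_boltzmann, 3))
```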