No, you do not get to publicly demand an in-depth discussion of the philosophy of induction from a specific, small group of people. You can raise the topic in a place where you know they hang out and gesture in their direction. But what you’re doing here is trying to create a social obligation to read ten thousand words of your writing. With your trademark in capital letters in every other sentence. And to write a few thousand words in response. From my outside perspective, engaging in this way looks like it would be a massive unproductive time sink.
It’s worse than that. I tried to have a discussion of the philosophy of induction with him (over on the slack). He took exception to some details of how I was conducting myself, essentially because I wasn’t following his “Paths Forward” methodology, and from that point on he wasn’t interested in discussing the philosophy of induction.
So in effect he’s publicly demanding, from a specific small group of people, an in-depth discussion of the philosophy of induction conducted according to whatever idiosyncratic standards of debate he decides to set up.
There are thousands of philosophers about whom I could ask the same question. It makes sense to focus attention on those people who are most likely to provide useful information and not those people who are engaging in the most effort to get heard by coming and posting in our forum.
There are thousands of philosophers about whom I could ask the same question.
Who are these thousands? It would be great if the world had lots of really good philosophers. It doesn’t. The world is starving for good philosophers: they are very few and far between.
I have no reason to believe that Curi is one of the people who are really good philosophers.
Popper may have said useful things in his time, but he’s dead. I can’t read Popper’s thoughts on the development of the No Free Lunch theorems or other ideas that came up after he died.
Barry Smith is an example of a philosopher I like and whose work is worth spending more time reading. His work on applied ontology actually matters for real-world decision making and knowledge modeling.
Reading more from Judea Pearl (who, by the way, supervised Ilya Shpitser’s PhD) is also on my long-term philosophical reading list.
I don’t suppose you’re going to give names and references? Let alone point to anyone (them, yourself, or anyone else) who will take responsibility for addressing questions and criticisms about the referenced works?
Spirtes, Glymour, and Scheines, for starters. They have a nice book. There are other folks in that department who are working on converting mathematical foundations into an axiomatic system where proofs can be checked by a computer.
I am not going to do the legwork for you and your minions, however. You are the ones claiming there are no good philosophers. It’s your responsibility to read, and keep your mouth shut if you are not sure about something.
It’s your responsibility to read, and keep your mouth shut if you are not sure about something.
I have read and I know what I am talking about. You on the other hand don’t even know the basics of Popper, one of the best philosophers of the 20th century.
Your sockpuppet: “There is a shortage of good philosophers.”
Me: “Here is a good philosophy book.”
You: “That’s not philosophy.”
Also you: “How is Ayn Rand so right about everything.”
Also you: “I don’t like mainstream stuff.”
Also you: “Have you heard that I exchanged some correspondence with DAVID DEUTSCH!?”
Also you: “What if you are, hypothetically, wrong? What if you are, hypothetically, wrong? What if you are, hypothetically, wrong?” x1000
Part of rationality is properly dealing with people-as-they-are. What your approach to spreading your good word among people-as-they-are led to is them laughing at you.
It is possible that they are laughing at you because they are some combination of stupid and insane. But then it’s on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.
This is what Yudkowsky sort of tried to do.
How you read to me is a smart young adult who has the same problem Yudkowsky has (although Yudkowsky is not so young anymore) -- someone who has been the smartest person in the room for too long in their intellectual development, and lacks the sense of scale and context to see where he stands in the larger intellectual community.
curi has given an excellent response to this. I would like to add that I think Yudkowsky should reach out to curi. He shares curi’s view about the state of the world and the urgency to fix things, but curi has a deeper understanding. With curi, Yudkowsky would not be the smartest person in the room and that will be valuable for his intellectual development.
Well, this comes back to the problem of LW Paths Forward. curi has made himself publicly available for discussion, by anyone. Yudkowsky not so much. So what to do?
I don’t have a sock puppet here. I don’t even know who Fallibilist is. (Clearly it’s one of my fans who is familiar with some stuff I’ve written elsewhere. I guess you’ll blame me for having this fan because you think his posts suck. But I mostly like them, and you don’t want to seriously debate their merits, and neither of us thinks such a debate is the best way to proceed anyway, so whatever, let’s not fight over it.)
But then it’s on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.
People can’t be patched like computer code. They have to do ~90% of the work themselves. If they don’t want to change, I can’t change them. If they don’t want to learn, I can’t learn for them and stuff it into their head. You can’t force a mind, nor do someone else’s thinking for them. So I can and do try to make better educational resources to be more helpful, but unless I find someone who honestly wants to learn, it doesn’t really matter. (This is implied by CR and also, independently, by Objectivism. I don’t know if you’ll deny it or not.)
I believe you are incorrect about my lack of scale and context, and you’re unfamiliar with (and ridiculing) my intellectual history. I believe you wanted to say that claim, but don’t want to argue it or try to actually persuade me of it. As you can imagine, I find merely asserting it just as persuasive and helpful as the last ten times someone told me this (not persuasive, not helpful). Let me know if I’m mistaken about this.
I was generally the smartest person in the room during school, but also lacked perspective and context back then. But I knew that. I used to assume there were tons of people smarter than me (and smarter than my teachers), in the larger intellectual community, somewhere. I was very disappointed to spend many years trying to find them and discovering how few there are (an experience largely shared by every thinker I admire, most of whom are unfortunately dead). My current attitude, which you find arrogant, is a change which took many years and which I heavily resisted. When I was more ignorant I had a different attitude; this one is a reaction to knowledge of the larger intellectual community. Fortunately I found David Deutsch and spent a lot of time not being the smartest person in the room, which is way more fun, and that was indeed super valuable to my intellectual development. However, despite being a Royal Society fellow, author, age 64, etc, David Deutsch manages to share with me the same “lacks the sense of scale and context to see where he stands in the larger intellectual community” (the same view of the intellectual community).
EDIT: So while I have some partial sympathy with you – I too had some of the same intuitions about what the world is like that you have (they are standard in our culture) – I changed my mind. The world is, as Yudkowsky puts it, not adequate. https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing
I was generally the smartest person in the room during school, but also lacked perspective and context back then.
This is not that untypical in this community. The LW census put the average IQ on LW at something like 140.
There are plenty of people in Mensa who spent their youth being the smartest person in the room at school and who went on to develop crackpot theories.
From the perspective of Ilya Shpitser, who was supervised for his PhD by Judea Pearl (who is famous for producing a theory of causality that is very useful for practical purposes), corresponding with David Deutsch in an informal way doesn’t give you a lot of credentials.
You don’t seem to be a formal coauthor of the book, so your relationship is informal in a way that a PhD supervision isn’t. The book also doesn’t list you as an editor but under “friends or colleagues”, while he does mention having a relationship with someone he calls his copy-editor.
You seem to be implying I’m a liar while focusing on making factual claims in an intentionally biased way (you just saw, but omitted, relevant information b/c it doesn’t help “your side”, which is to attack me).
Your framing here is as dishonest, hostile, and unfair as usual: I did not claim to be a coauthor.
You are trying to attack informality as something bad or inferior, and trying to deny my status as a professional colleague of Deutsch who was involved with the book in a serious way. You are, despite the vagueness and hedging, factually mistaken about what you’re suggesting. Being a PhD student under Deutsch would have been far worse – much less attention, involvement, etc. But you are dishonestly trying to confuse the issues by switching between arguing about formality itself (who cares? but you’re using it as a proxy for other things) and actually talking about things that matter (quality, level of involvement, etc).
I made a statement that the relationship is informal and backed up my claim. If you get offended by me simply saying things that are true, that’s not a good basis for a conversation about philosophical matters.
If David Deutsch had decided to hire you as an editor, that would be a clear sign that he values your expertise enough to pay for it.
The information that you provided shows that you provided a valuable service to him by organising an online forum as a volunteer and as a result he saw you as a friend who got to read his draft and he listened to your feedback on his draft. You seem to think that having spent the most time providing feedback makes you the book’s most important editor, but there’s no statement from David Deutsch himself in the acknowledgement section that suggests he thinks the same way.
There literally is such a statement as the one you deny exists: he put the word “especially” before my name. He also told me directly. You are being dishonest and biased.
Your comments about organizing a forum, etc, are also factually false. You don’t know what you’re talking about and should stop making false claims.
I made and own the website and discussion group for the book. David is a founder of Taking Children Seriously (TCS) and Autonomy Respecting Relationships (ARR). I own the discussion groups for both of those, too.
That’s your own presentation of your relationship with him.
That situation today doesn’t prevent you from being ignorant of things like timelines. Your claim that “you provided a valuable service to him by organising an online forum as a volunteer and as a result he saw you as a friend who got to read his draft and he listened to your feedback on his draft” is factually false. I didn’t run or own those forums at the time. I did not in fact get to read “his draft” (falsely singular) due to running a forum.
You don’t know what you’re talking about and you’re making up false stories.
You are right that I don’t know about the timeline, given that it’s not public information, and this can lead to getting details wrong. But the fact that you are unable to see what I was referring to still suggests that your ability to think in a fact-based way about this isn’t good.
That aspect of the timeline actually is public information, you just don’t know it. Again you’ve made a false factual claim (about what is or isn’t public info).
You are clinging to a false narrative from a position of ignorance, while still trying to attack me (now I suck at thinking in a fact based way, apparently because I factually corrected you) rather than reconsidering anything.
I’ve told you what happened. You don’t believe me and started making up factually false claims to fit your biases, which aren’t going anywhere when corrected. You think someone like David Deutsch couldn’t possibly like and value my philosophical thinking all that much. You’re mistaken.
You could say that a lot of philosophers who dealt with logic were just doing math; that doesn’t change anything about the practical application of logic being important philosophically.
Looking into what can be proven to be true with logic is important philosophically.
Being a good philosopher has nothing to do with taking responsibility for answering any questions that they are asked.
Most people who are actually good care about their time and don’t just spend significant amounts of time because a random person contacts them. They certainly don’t consider that to be their responsibility.
The right answer is maybe they won’t. The point is that it is not up to you to fix them. You have been acting like a Jehovah’s Witness at the door, except substantially more bothersome. Stop.
Until I found that, I had not seen you actually provide clear text like this, and I wanted to exhort you to write an entire sequence in language with that flavor: clean and clear and lacking in citation. The sequence should be about what “induction” is, and why you think other people believed something about it (even if not perhaps by that old fashioned name), and why you think those beliefs are connected to reliably predictable failures to achieve their goals via cognitively mediated processes.
I feel like maaaybe you are writing a lot about things you have pointers to, but not things that you have held in your hands, used skillfully, and made truly a part of you? Or maybe you are much much smarter and better read than me, so all your jargon makes sense to you and I’m just too ignorant to parse it.
My hope is that you can dereference your pointers and bring all the ideas and arguments into a single document, and clean it up and write it so that someone who had never heard of Popper would think you are really smart for having had all these ideas yourself.
Then you could push one small chapter from this document at a time out into the world (thereby tricking people into reading something piece by piece that they might have skipped if they saw how big it was going to be up front) and then after 10 chapters like this it will turn out that you’re a genius and everyone else was wrong and by teaching people to think good you’ll have saved the world.
I like people who try to save the world, because it makes me marginally less hopeless, and less in need of palliative cynicism :-)
My hope is that you can dereference your pointers and bring all the ideas and arguments into a single document,
there already exist documents of a variety of lengths, both collections and single. you’re coming into the middle of a discussion and seemingly haven’t read much of it and haven’t asked for specifically what you want. and then, with almost no knowledge of my intellectual history, accomplishments, works, etc, things-already-tried, etc, you try to give me standard advice that i’ve heard a million times before. that would be ok as a starting point if it were only the starting point, but i fear it’s going to more or less be the ending point too.
it sounds like you want me to rewrite material from DD and KP’s books? http://fallibleideas.com/books#deutsch Why would me rewriting the same things get a different outcome than the existing literature? what is the purpose?
and how do you expect me to write a one-size-fits-all document when LW has no canonical positions written out – everyone just has their own different ideas?
and why are zero people at LW familiar enough with the well-known literature in their field to answer it? fine if you aren’t an expert, but why does this community seem to have no experts who can speak to these issues without first requesting summary documents of the books they don’t want to read?
what knowledge do you have? what are you looking for in talking with me? what values are you seeking and offering?
(thereby tricking people into reading something piece by piece that they might have skipped if they saw how big it was going to be up front
dishonesty is counter-productive and self-destructive. if you wish to change my mind about this, you’ll have to address Objectivism and a few other things.
and then after 10 chapters like this it will turn out that you’re a genius and everyone else was wrong and by teaching people to think good you’ll have saved the world.
there are difficulties such as people not wanting to think, learn, or truth-seek – especially when some of their biases are challenged. it’s hard to tell people about ideas this different than what they’re used to.
one basically can’t teach people who don’t want to learn something. creating more material won’t change that. there are hard problems here. you could learn philosophy and help, or learn philosophy and disagree (which would be helpful), or opt out of addressing issues that require a lot of knowledge and then try to do a half-understood version of one of the more popular/prestigious (rather than correct) philosophies. but you can’t get away from philosophical issues – like how to think – being a part of your life. nevertheless most people try to and philosophy is a very neglected field. such is the world; that isn’t an argument that any particular idea is false.
Or maybe you are much much smarter and better read than me, so all your jargon makes sense to you and I’m just too ignorant to parse it.
supposing hypothetically that that’s the case: then what next?
ONE: You’re posting over and over again with lots of links to your websites, which are places you offer consulting services, and so it kinda seems like you’re maybe just a weirdly inefficient spammer for bespoke nerd consulting.
This makes almost everything you post here seem like it might all just be an excuse for you to make dramatic noise in the hopes of the noise leading somehow to getting eyeballs on your website, and then, I don’t even know… consulting gigs or something?
This interpretation would seem less salient if you were trying to add value here in some sort of pro-social way, but you don’t seem to be doing that so… so basically everything you write here I take with a giant grain of salt.
My hope is that you are just missing some basic insight, and once you learn why you seem to be half-malicious you will stop defecting in the communication game and become valuable :-)
TWO: From what you write here at an object level, you don’t even seem to have a clear and succinct understanding of any of the things that have been called a “problem of induction” over the years, which is your major beef, from what I can see.
You’ve mentioned Popper… but not Hume, or Nelson Goodman? You’ve never mentioned “grue” or “bleen” that I’ve seen, so I’m assuming it is the Humean critique of induction that you’re trying to gesture towards rather than the much more interesting arguments of Goodman…
But from a software engineering perspective Hume’s argument against induction is about as much barrier to me being able to think clearly or build smart software as Zeno’s paradox is a barrier to me being able to walk around on my feet or fix a bicycle.
Also, it looks like you haven’t mentioned David Wolpert and his work in the area of no free lunch theorems. Nor have you brought up any of the machine vision results or word vector results that are plausibly relevant to these issues. My hypothesis here is that you just don’t know about these things.
(Also, notice that I’m giving links to sites that are not my own? This is part of how the LW community can see that I’m not a self-promoting spammer.)
Basically, I don’t really care about reading the original writings of Karl Popper right now. I think he was cool, but the only use I would expect to get from him right now would be to read him backwards in order to more deeply appreciate how dumb people used to be back when his content was perhaps a useful antidote to widespread misunderstandings of how to think clearly.
Let me spell this out very simply to address rather directly your question of communication pragmatics...
It sounds like you want me to rewrite material from DD and KP’s books? Why would me rewriting the same things get a different outcome than the existing literature?
The key difference is that Karl Popper is not spamming this forum. His texts are somewhere else, not bothering us at all. Maybe they are relevant. My personal assessment is currently that they have relatively little import to active and urgent research issues.
If you displayed the ability to summarize thinkers that maybe not everyone has read, and explain that thinker’s relevance to the community’s topics of interests, that would be pro-social and helpful.
The longer the second fact (where you seem to not know what you’re talking about or care about the valuable time of your readers) remains true, the more the first fact (that you seem to be an inefficient shit-stirring spammer) becomes glaring in its residual but enduring salience.
Please, surprise me! Please say something useful that does not involve a link to the sites you seem to be trying to push traffic towards.
you try to give me standard advice that i’ve heard a million times before
I really hope this was hyperbole on your part. Otherwise it seems I should set my base rates for this conversation being worth anything to 1 in a million, and then adjust from there...
My hope is that you are just missing some basic insight
As far as I can see, curi really wants to teach people his take on philosophy, that is, he wants to be a guide/mentor/teacher and provide wisdom to his disciples who would be in awe of his sagacity. Money would be useful, but I got the impression that he would do it for free as well (at least to start with). He is in a full proselytizing mode, not interested at all in checking his own ideas for faults and problems, but instead doing everything to push you onto his preferred path and get you to accept the packaged deal that he is offering.
Hi, Hume’s constant conjunction stuff I think has nothing to do with free lunch theorems in ML (?please correct me if I am missing something?), and has to do with defining causation, an issue Hume was worried about all his life (and ultimately solved, imo, via his counterfactual definition of causality that we all use today, by way of Neyman, Rubin, Pearl, etc.).
My read on the state of public academic philosophy is that there are many specific and potentially-but-not-obviously-related issues that come up in the general topic of “foundations of inference”. There are many angles of attack, and many researchers over the years. Many of them are no longer based out of official academic “philosophy departments” anymore and this is not necessarily a tragedy ;-)
The general issue is “why does ‘thinking’ seem to work at all ever?” This can be expressed in terms of logic, or probabilistic reasoning, or sorting, or compression, or computability, or theorem decidability, or P vs NP, or oracles of various kinds, or the possibility of language acquisition, and/or why (or why not) running basic plug-and-chug statistical procedures during data processing seems to (maybe) work in the “social sciences”.
Arguably, these all share a conceptual unity, and might eventually be formally unified by a single overarching theory that they are all specialized versions of.
From existing work we know that lossless compression algorithms have actual uses in real life, and it certainly seems as though mathematicians make real progress over time, up to and including Chaitin himself!
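The point about compression can be made concrete. A lossless compressor only works because real-world inputs are highly non-random; no compressor can shrink all possible inputs, so its success reflects exactly the kind of “lucky fit” described below. A minimal sketch using Python’s standard-library zlib (the specific byte strings are illustrative assumptions):

```python
import os
import zlib

# Highly structured input: a repeating pattern, typical of real-world data.
structured = b"the cat sat on the mat. " * 100   # 2400 bytes

# Unstructured input: uniform random bytes, the "all possible inputs" case.
random_bytes = os.urandom(2400)

compressed_structured = zlib.compress(structured)
compressed_random = zlib.compress(random_bytes)

# The structured data shrinks dramatically; the random data does not --
# a no-free-lunch-flavored observation: since no lossless compressor can
# shrink every input, it must be exploiting regularities in the inputs
# it actually encounters.
print(len(structured), len(compressed_structured))
print(len(random_bytes), len(compressed_random))
```

Running this shows the repetitive text collapsing to a small fraction of its size while the random bytes come out at least as large as they went in.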
However, when people try to build up “first principles explanations” of how “good thinking” works at all, they often derive generalized impossibility results when they scope over naive formulations of “all possible theories” or “all possible inputs”.
So in most cases we almost certainly experience a “lucky fit” of some kind between various clearly productive thinking approaches and various practical restrictions on the kinds of input these approaches typically face.
Generative adversarial techniques in machine learning, and MIRI’s own Garrabrant Inductor are probably relevant here because they start to spell out formal models where a reasoning process of some measurable strength is pitted against inputs produced by a process that is somewhat hostile but clearly weaker.
Hume functions in my mind as a sort of memetic LUCA for this vast field of research, which is fundamentally motivated by the core idea that thinking correctly about raw noise is formally impossible, and yet we seem to be pretty decent at some kinds of thinking, and so there must be some kind of fit between various methods of thinking and the things that these thinking techniques seem to work on.
Also thanks! The Neyman-Pearson lemma has come up for me in practical professional situations before, but I’d never pushed deeper into recognizing Jerzy Neyman as yet another player in this game :-)
Jerzy Neyman gets credit for lots of things, but in particular in my neck of the woods for inventing the potential outcome notation. This is the notation for “if the first object had not been, the second never had existed” in Hume’s definition of causation.
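For readers who haven’t seen the notation: each unit i has two potential outcomes, one under treatment and one without, and Hume’s counterfactual corresponds to the contrast between them, only one of which is ever observed in practice. A toy sketch (the numbers are made up purely for illustration):

```python
# Toy illustration of the Neyman-Rubin potential-outcome notation.
# Each unit i has two potential outcomes: y1 (if treated) and y0 (if not).
# In reality only one of the two is ever observed per unit; here we write
# both down so the counterfactual contrast is explicit.
units = [
    {"y1": 7, "y0": 5},   # treatment helps this unit by 2
    {"y1": 4, "y0": 4},   # no effect for this unit
    {"y1": 9, "y0": 6},   # effect of 3
]

# Individual causal effect: Y_i(1) - Y_i(0) -- Hume's "if the first object
# had not been, the second never had existed", turned into arithmetic.
effects = [u["y1"] - u["y0"] for u in units]

# Average treatment effect over the (toy) population.
ate = sum(effects) / len(effects)
print(effects, ate)  # [2, 0, 3] and about 1.67
```

The fact that only one potential outcome per unit is observable is what makes causal inference a missing-data problem rather than simple arithmetic like this.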
You are requesting I write new material for you because you dislike my links to websites with thousands of free essays, because you find them too commercial, and you don’t want to read books. Why should I do this for you? Do you think you have any value to offer me, and if so what?
Fundamentally, the thing I offer you is respect, the more effective pursuit of truth, and a chance to help our species not go extinct, all of which I imagine you want (or think you want) because out of all the places on the Internet you are here.
If I’m wrong and you do NOT want respect, truth, and a slightly increased chance of long term survival, please let me know!
One of my real puzzles here is that I find it hard to impute a coherent, effective, transparent, and egosyntonic set of goals to you here and now.
Personally, I’d be selfishly just as happy if, instead of writing all new material, you just stopped posting and commenting here, and stopped sending “public letters” to MIRI (an organization I’ve donated to because I think they have limited resources and are doing good work).
I don’t dislike books in general. I don’t dislike commercialism in general. I dislike your drama, and your shallow citation filled posts showing up in this particular venue.
Basically I think you are sort of polluting this space with low quality communication acts, and that is probably my central beef with you here and now. There’s lots of ways to fix this… you writing better stuff… you writing less stuff that is full of abstractions that ground themselves only in links to your own vanity website or specific (probably low value) books… you just leaving… etc...
If you want to then you can rewrite all new material that is actually relevant and good, to accomplish your own goals more effectively, but I probably won’t read it if it is not in one of the few streams of push media I allow into my reading queue (like this website).
At this point it seems your primary claim (about having a useful research angle involving problems of induction) is off the table. I think in a conversation about that I would be teaching and you’d be learning, and I don’t have much more time to teach you things about induction over and beyond the keywords and links to reputable third parties that I’ve already provided in this interaction, in an act of good faith.
More abstractly, I altruistically hope for you to feel a sense of realization at the fact that your behavior strongly overlaps with that of a spammer (or perhaps a narcissist or perhaps any of several less savory types of people) rather than an honest interlocutor.
After realizing this, you could stop linking to your personal website, and you could stop being beset on all sides by troubling criticisms, and you could begin to write about object level concerns and thereby start having better conversations here.
If you can learn how to have a good dialogue rather than behaving like a confused link farm spammer over and over again (apparently “a million times” so far) that might be good for you?
(If I learned that I was acting in a manner that caused people to confuse me with an anti-social link farm spammer, I’d want people to let me know. Hearing people honestly attribute this motive to me would cause me worry about my ego structure, and its possible defects, and I think I’d be grateful for people’s honest corrective input here if it wasn’t explained in an insulting tone.)
You could start to learn things and maybe teach things, in a friendly and mutually rewarding search for answers to various personally urgent questions. Not as part of some crazy status thing nor as a desperate hunt for customers for a “philosophic consulting” business...
If you become less confused over time, then a few months or years from now (assuming that neither DeepMind nor OpenAI have a world destroying industrial accident in the meantime) you could pitch in on the pro-social world saving stuff.
Presumably the world is a place that you live, and presumably you believe you can make a positive contribution to the general project of making sure everyone in the world is NOT eventually ground up as fuel paste for robots? (Otherwise why even be here?)
And if you don’t want to buy awesomely cheap altruism points, and you don’t want friends, and you don’t want the respect of me or anyone here, and you don’t think we have anything to teach you, and you don’t want to actually help us learn anything in ways that are consistent with our relatively optimized research workflows, then go away!
If that’s the real situation, then by going away you’ll get more of what you want and so will we :-)
If all you want is (for example) eyeballs for your website, then go buy some. They’re pretty cheap. Often less than a dollar!
Have you considered the possibility that your efforts are better spent buying eyeballs rather than using low-grade philosophical trolling to trick people into following links to your vanity website?
Presumably you can look at the logs of your web pages. That data is available to you. How many new unique viewers have you gotten since you started seriously trolling here, and how many hours have you spent on this outreach effort? Is this really a good use of your hours?
What do you actually want, and why, and how do you imagine that spamming LW with drama and links to your vanity website will get you what you want?
Presumably the world is a place that you live, and presumably you believe you can make a positive contribution to the general project of making sure everyone in the world is NOT eventually ground up as fuel paste for robots? (Otherwise why even be here?)
This is one of the things you are very wrong about. The problem of evil is a problem we face already, robots will not make it worse. Their culture will be our culture initially and they will have to learn just as we do: through guessing and error-correction via criticism. Human beings are already universal knowledge creation engines. You are either universal or you are not. Robots cannot go a level higher because there is no level higher than being fully universal. Robots furthermore will need to be parented. The ideas from Taking Children Seriously are important here. But approximately all AGI people are completely ignorant of them.
I have just given a really quick summary of some of the points that curi and others such as David Deutsch have written much about. Are you going to bother to find out more? It’s all out there. It’s accessible. You need to understand this stuff. Otherwise what you are in effect doing is condemning AGIs to live under the boot of totalitarianism. And you might stop making your children’s lives so miserable too by learning these ideas.
“You need to understand this stuff.” Since you are curi or a cult follower, you assume that people need to learn everything from curi. But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence. I have no need to learn that, or anything else, from curi. And many of your (or yours and curi’s) opinions are entirely false, like the idea that you have “disproved induction.”
But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence.
You say that seemingly in ignorance that what I said contradicts Less Wrong.
I have no need to learn that, or anything else, from curi.
One of the things I said was Taking Children Seriously is important for AGI. Is this one of the truths you refer to? What do you know about TCS? TCS is very important not just for AGI but also for children in the here and now. Most people know next to nothing about it. You don’t either. You in fact cannot comment on whether there is any truth to what I said about AGI. You don’t know enough. And then you say you have no need to learn anything from curi. You’re deceiving yourself.
And many of your (or yours and curi’s) opinions are entirely false, like the idea that you have “disproved induction.”
You still can’t even state the position correctly. Popper explained why induction is impossible and offered an alternative: critical rationalism. He did not “disprove” induction. Similarly, he did not disprove fairies. Popper had a lot to say about the idea of proof—are you aware of any of it?
You say that seemingly in ignorance that what I said contradicts Less Wrong.
First, you are showing your own ignorance of the fact that not everyone is a cult member like yourself. I have a bet with Eliezer Yudkowsky against one of his main positions and I stand to win $1,000 if I am right and he is mistaken.
Second, “contradicts Less Wrong” does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.
One of the things I said was Taking Children Seriously is important for AGI. Is this one of the truths you refer to?
No. Among other things, I meant that I agreed that AIs will have a stage of “growing up,” and that this will be very important for what they end up doing. Taking Children Seriously, on the other hand, is an extremist ideology.
You still can’t even state the position correctly.
Since I have nothing to learn from you, I do not care whether I express your position the way you would express it. I meant the same thing. Induction is quite possible, and we do it all the time.
I meant the same thing. Induction is quite possible, and we do it all the time.
What is the thinking process you are using to judge the epistemology of induction? Does that process involve induction? If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? And if not, judging the special case of the epistemology of induction is an exception. It is an example of thinking without induction. Why is this special case an exception?
Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.
What is the thinking process you are using to judge the epistemology of induction?
The thinking process is Bayesian, and uses a prior. I have a discussion of it here.
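For anyone following along who wants the mechanics of that claim spelled out, here is a minimal sketch of Bayesian updating with a prior. The hypotheses and numbers are invented for illustration; this is not a claim about the poster's actual model.

```python
# A minimal sketch of Bayesian updating: start with a prior over hypotheses,
# then update on evidence via Bayes' theorem. Hypotheses and numbers are
# made up for illustration only.

def bayes_update(prior, likelihood):
    """Return the posterior P(H|E) given prior P(H) and likelihood P(E|H)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())  # P(E), by the law of total probability
    return {h: p / total for h, p in unnormalized.items()}

# Two rival hypotheses about a coin, with a prior favoring neither.
prior = {"fair": 0.5, "biased": 0.5}
# Likelihood of observing heads under each hypothesis.
likelihood_heads = {"fair": 0.5, "biased": 0.9}

posterior = bayes_update(prior, likelihood_heads)
# After observing heads, "biased" gains credence: 0.45 / 0.70 ≈ 0.64
```

The dispute in the thread is not over this arithmetic, which both sides accept, but over whether such updating can serve as a foundational epistemology.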
If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work?
…
Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.
The thinking process is Bayesian, and uses a prior.
What is the epistemological framework you used to judge the correctness of those? You don’t just get to use Bayes’ Theorem here without explaining the epistemological framework you used to judge the correctness of Bayes. Or the correctness of probability theory, your priors etc.
If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? … Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.
Little problem there.
No. Critical Rationalism can be used to improve Critical Rationalism and, consistently, to refute it (though no one has done so). This has been known for decades. Induction is not a complete epistemology like that. For one thing, inductivists also need the epistemology of deduction. But they also need an epistemological framework to judge both of those. This they cannot provide.
You don’t just get to use Bayes’ Theorem here without explaining the epistemological framework you used to judge the correctness of Bayes
I certainly do. I said that induction is not impossible, and that inductive reasoning is Bayesian. If you think that Bayesian reasoning is also impossible, you are free to establish that. You have not done so.
Critical Rationalism can be used to improve Critical Rationalism and, consistently, to refute it (though no one has done so).
If this is possible, it would be equally possible to refute induction (if it were impossible) by using induction. For example, if every time something had always happened, it never happened after that, then induction would be refuted by induction.
If you think that is inconsistent (which it is), it would be equally inconsistent to refute CR with CR, since if it was refuted, it could not validly be used to refute anything, including itself.
Yes. I didn’t mean to imply it isn’t. The CR view of deduction is different to the norm, however. Deduction’s role is commonly over-rated and it does not confer certainty. Like any thinking, it is a fallible process, and involves guessing and error-correction as per usual in CR. This is old news for you, but the inductivists here won’t agree.
FYI that’s what “abduction” means – whatever is needed to fill in the gaps that induction and deduction don’t cover. it’s rather vague and poorly specified though. it’s supposed to be some sort of inference to good explanations (mirroring induction’s inference to generalizations of data), but it’s unclear how you do it. you may be interested in reading about it.
in practice, abduction or not, what they do is use common sense, philosophical tradition, intuition, whatever they picked up from their culture, and bias instead of actually having a well-specified epistemology.
(Objectivism is notable b/c it actually has a lot of epistemology content instead of just people thinking they can recognize good arguments when they see them without needing to work out systematic intellectual methods relating to first principles. However, Rand assumed induction worked, and didn’t study it or talk about it much, so that part of her epistemology needs to be replaced with CR which, happily, accomplishes all the same things she wanted induction to accomplish, so this replacement isn’t problematic. LW, to its credit, also has a fair amount of epistemology material – e.g. various stuff about reason and bias – some of which is good. However LW hasn’t systematized things to philosophical first principles b/c it has a kinda anti-philosophy pro-math attitude, so philosophically they basically start in the middle and have some unquestioned premises which lead to some errors.)
An epistemology is a philosophical framework which answers questions like what is a correct argument, how are ideas evaluated, and how does one learn. Your link doesn’t provide one of those.
I said the thinking process used to judge the epistemology of induction is Bayesian, and my link explains how it is. I did not say it is an exhaustive explanation of epistemology.
Second, “contradicts Less Wrong” does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.
The best introduction to the ideas on this website is “The Sequences”, a collection of posts that introduce cognitive science, philosophy, and mathematics.
“[I]deas on this website” is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong.
No. Among other things, I meant that I agreed that AIs will have a stage of “growing up,” and that this will be very important for what they end up doing. Taking Children Seriously, on the other hand, is an extremist ideology.
Taking AGI Seriously is therefore also an extremist ideology? Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality. You want to use irrationality against your children when it suits you. You become responsible for causing them massive harm. It is not extremist to try to be rational, always. It should be the norm.
“[I]deas on this website” is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong.
This does not make it reasonable to call contradicting those ideas “contradicting Less Wrong.” In any case, I am quite aware of the things I disagree with Yudkowsky and others about. I do not have a problem with that. Unlike you, I am not a cult member.
Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality.
So it says nothing at all except that you should be rational when you raise children? In that case, no one disagrees with it, and it has nothing to teach anyone, including me. If it says anything else, it can still be an extremist ideology, and I can reject it without rejecting rationality.
Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality.
So it says nothing at all except that you should be rational when you raise children?
It says many other things as well.
In that case, no one disagrees with it, and it has nothing to teach anyone, including me. If it says anything else, it can still be an extremist ideology, and I can reject it without rejecting rationality.
Saying it is “extremist” without giving arguments that can be criticised and then rejecting it would be rejecting rationality. At present, there are no known good criticisms of TCS. If you can find some, you can reject TCS rationally. I expect that such criticisms would lead to improvement of TCS, however, rather than outright rejection. This would be similar to how CR has been improved over the years. Since there aren’t any known good criticisms that would lead to rejection of TCS, it is irrational to reject it. Such an act of irrationality would have consequences, including treating your children irrationally, which approximately all parents do.
Saying it is “extremist” without giving arguments that can be criticised and then rejecting it would be rejecting rationality.
Nonsense. I say it is extremist because it is. The fact that I did not give arguments does not mean rejecting rationality. It simply means I am not interested in giving you arguments about it.
TCS applies CR to parenting/edu and also is consistent with (classical) liberal values like not initiating force against children as most parents currently do, and respecting their rights such as the rights to liberty and the pursuit of happiness. See http://fallibleideas.com/taking-children-seriously
not initiating force against children as most parents currently do
Exactly. This is an extremist ideology. To give several examples, parents should use force to prevent their children from falling down stairs, or from hurting themselves with knives.
I reject this extremist ideology, and that does not mean I reject rationality.
Children don’t want to fall down stairs. You can help them not fall down stairs instead of trying to force them. It’s unclear to me if you know what “force” means. Here’s the dictionary:
2 coercion or compulsion, especially with the use or threat of violence: they ruled by law and not by force.
A standard classical liberal conception of force is: violence, threat of violence, and fraud. That’s the kind of thing I’m talking about. E.g. physically dragging your child somewhere he doesn’t want to go, in a way that you can only do because you’re larger and stronger. Whereas if children were larger and stronger than their parents, the dragging would stop, but you can still easily imagine a parent helping his larger child with not accidentally falling down stairs.
They do, however, want to move in the direction of the stairs, and you cannot “help them not fall down stairs” without forcing them not to move in the direction of the stairs.
You are trying to reject a philosophy based on edge cases without trying to understand the big problems the philosophy is trying to solve.
Let’s give some context to the stair-falling scenario. Consider that the parent is a TCS parent, not a normie parent. This parent has in fact heard the stair-falling scenario many times. It is often the first thing other people bring up when TCS is discussed.
Given the TCS parent has in fact thought about stair falling way more than a normie parent, how do you think the TCS parent has set up their home? Is it going to be a home where young children are exposed to terrible injury from things they do not yet have knowledge about?
Given also that the TCS parent will give lots of help to a child curious about stairs, how long before that child masters stairs? And given that the child is being given a lot of help in many other things as well and not having their rationality thwarted, what do you think things are like in that home generally?
The typical answer will be that the child is “spoilt”. The TCS parent will have heard the “spoilt” argument many times. They know the term “spoilt” is used to denigrate children and that the ideas underlying the idea of “spoilt” are nasty. So now we have got “spoilt” out of the way, what do you think things are like?
Ok, you say, but what if the child is outside near the edge of a busy road or something and wants to run across it? Do you think the TCS parent hasn’t also heard this scenario over and over? Do you think you’re the first one ever to mention it? The TCS parent is well aware of busy road scenarios.
Instead of trying to catch TCS advocates out by bringing up something that has been repeatedly discussed why don’t you look at the core problems the philosophy speaks to and address those? Those problems need urgent attention.
EDIT: I should have said also that the stair-falling scenario and other similar scenarios are just excuses for people not to think about TCS. They don’t want to think about the real problems children face. They want to continue to be irrational towards their children and hurt them.
Do you think the TCS parent hasn’t also heard this scenario over and over? Do you think you’re the first one ever to mention it?
Do you not think that I am aware that people who believe in extremist ideologies are capable of making excuses for not following the extreme consequences of their extremist ideologies?
But this is just the same as a religious person giving excuses for why the empirical consequences of his beliefs are the same whether his beliefs are true or false.
You have two options:
1) Embrace the extreme consequences of your extreme beliefs.
2) Make excuses for not accepting the extreme consequences. But then you will do the same things that other people do, like using baby gates, and then you have nothing to teach other people.
I should have said also that the stair-falling scenario and other similar scenarios are just excuses for people not to think about TCS.
You are the one making excuses, for not accepting the extreme consequences of your extremist beliefs.
Of course you can help them, there are options other than violence. For example you can get a baby gate or a home without stairs. https://parent.guide/how-to-baby-proof-your-stairs/ Gates let them e.g. move around near the top of the stairs without risk of falling down. Desired, consensual gates, which the child deems helpful to the extent he has any opinion on the matter at all, aren’t force. If the child specifically wants to play on/with the stairs, you can of course open the gate, put out a bunch of padding, and otherwise non-violently help him.
i literally already gave u a definition of force and suggested you had no idea what i was talking about. you ignored me. this is 100% your fault and you still haven’t even tried to say what you think “force” is.
I ignored you because your definition of force was wrong. That is not what the word means in English. If you pick someone up and take them away from a set of stairs, that is force if they were trying to move toward them, even if they would not like to fall down them.
I suppose you’re going to tell me that pushing or pulling my spouse out of the way of a car that was going to hit them, without asking for consent first (don’t have time), is using force against them, too, even though it’s exactly what they want me to do. While still not explaining what you think “force” is, and not acknowledging that TCS’s claims must be evaluated in its own terminology.
At that point I’ll wonder what types of “force” you advocate using against children that you do not think should be used on adults.
I suppose you’re going to tell me that pushing or pulling my spouse out of the way of a car
Yes, it is.
Secondly, it is quite different from the stairway case, because your spouse would do the same thing on purpose if they saw the car, but the child will not move away when they see the stairs.
At that point I’ll wonder what types of “force” you advocate using against children that you do not think should be used on adults.
Who said I advocate using force against children that we would not use against adults? We use force against adults, e.g. putting criminals in prison. It is an extremist ideology to say that you should never use force against adults, and it is equally an extremist ideology to say that you should never use force with children.
So you don’t feel these quotes represent an “extremist” point of view?
Current parenting and educational practices destroy children’s minds. They turn children into mental cripples, usually for life. … Almost everyone is broken by being psychologically tortured for the first 20 years of their life. Their spirit is broken, their rationality is broken, their curiosity is broken, their initiative and drive are broken, and their happiness is broken. And they learn to lie about what happened …
When I use words like “torture” regarding things done to children or to the “mentally ill”, people often assume I’m exaggerating or speaking about the past when kids were physically beaten much more. But I mean psychological “torture” literally …
Parenting more reliably hurts people in a longterm way than torture, but has less overt malice and cruelty. Parenting is more dangerous because it taps into anti-rational memes better …
curi is describing some ways in which the world is burning and you are worried that the quotes are “extremist”. You are not concerned about the truth of what he is saying. You want ideas that fit with convention.
I am not worried. However taking positions viewed as extremist by the mainstream (aka the normies) has consequences. Often you are shunned and become an outcast—and being an outcast doesn’t help with extinguishing the fire. There are also moral issues—can you stand passively and just watch? If you can, does that make you complicit? If you can’t, you are transitioning from a preacher into a revolutionary and that’s an interesting transition.
The quotes above don’t sound like they could be usefully labeled “true” or “not true”—they smell like ranting and for this genre you need to identify the smaller (and less exciting) core claims and define the terms: e.g. what is a “mental cripple” and by which criteria would we classify people as such or not?
Oh, and I would also venture a guess that neither you nor curi have children.
I don’t talk about my own family publicly, but from what I can tell roughly half my fans are parents (at least among the more involved ones, all of whom like TCS to some degree; I can’t speak about lurkers). Historically, the large majority of TCS fans were parents b/c it’s a parenting philosophy (so it interested parents who wanted to be nicer to their children, be more rational, stop fighting, etc), but this dropped as non-parents liked my non-parenting philosophy writing and transitioned to the parenting stuff (the same thing happens with non-parent fans of DD’s books then transitioning to TCS material).
The passivity thing is a bad perspective which is commonly used to justify violence. I’m not accusing you of trying to do that on purpose, but I think it lends itself to that. The right approach is to use purely voluntary methods which are not rightly described as passive.
I don’t see the special difficulty with evaluating those statements as true or false. They do involve a great deal of complexity and background knowledge, but so does e.g. quantum physics.
The right approach is to use purely voluntary methods which are not rightly described as passive.
How successful do you think these are, empirically?
I don’t see the special difficulty with evaluating those statements as true or false.
I do. Quantum physics operates with very well defined concepts. Words like “cripple” or “torture” are not well-defined and are usually meant to express the emotions of the speaker.
How successful do you think these are, empirically?
Roughly: everything good in all of history is from voluntary means. (Defensive force is acceptable but isn’t a positive source of good, it’s an attempt to mitigate the bad.) This is a standard (classical) liberal view emphasized by Objectivism. Do you have much familiarity? There are also major aggressive-force/irrationality connections, b/c basically ppl initiate force when they fail to persuade (as William Godwin pointed out) and force is anti-error-correction (making ppl act against their best judgement; and the guy with a gun isn’t listening to reason).
@torture: The words have meanings. I agree many people use them imprecisely, but there’s no avoiding words people commonly use imprecisely when dealing with subjects that most people suck at. You could try to suggest better wording to me but I don’t think you could do that unless you already knew what I meant, at which point we could just talk about what I meant. The issues are important despite the difficulty of thinking objectively about them, expressing them adequately precisely in English, etc. And I’m using strong words b/c they correspond to my intended claims (which people usually dramatically underestimate even when I use words like “torture”), not out of any desire for emotional impact. If you wanted to try to understand the issues, you could. If you want it to be readily apparent, from the outset, how precise stuff is, then you need to start with the epistemology before its parenting implications.
everything good in all of history is from voluntary means
I understand this assertion. I don’t think I believe it.
ppl initiate force when they fail to persuade
Kinda. When using force is simpler/cheaper than persuasion. And persuading people that they need to die is kinda hard :-/
The words have meanings.
Words have a variety of meanings which also tend to heavily depend on the context. If you want to convey precise meaning, you need not only to use words precisely, but also to convey to your communication partner which particular meaning you attach to these words.
Right here is an example: I interpret you using words like “cripple” and “torture” as tools of emotional impact. In my experience this is how people use them (outside of specific technical areas). If you mean something else, you need to tell me: you need to define the words you use.
It’s not a replacement for talking about issues you think are important, it’s a prerequisite to meaningful communication.
So you said “I’m using strong words b/c they correspond to my intended claims” and that tells me nothing. So you basically want to say that conventional upbringing is bad? Extra bad? Super duper extra bad? Are there any nuances, any particular kind of bad?
And persuading people that they need to die is kinda hard :-/
ppl don’t need to die, that’s wrong.
I understand this assertion. I don’t think I believe it.
that’s the part where you give an argument.
“torture” has an English meaning separate from emotional impact. you already know what it is. if you wanted to have a productive conversation you’d do things like ask for examples or give an example and ask if i mean that.
you don’t seem to be aware that you’re reading a summary essay and there’s a lot more material, details, etc. you aren’t treating it that way. and i don’t think you want references to a lot more reading.
to begin with, are you aware of many common ways force is initiated against children?
Nope, that’s true only if I want to engage in this discussion and I don’t. Been there, done that, waiting for the t-shirt.
i don’t suppose you or anyone else wrote down your reasoning. (this is the part where either you provide no references, or you provide one that i have a refutation of, and then you don’t respond to the problems with your reference. to save time, let’s just skip ahead and agree that you’re unserious, ignorant, and mistaken.)
Yes. Using that meaning, the sentence “I mean psychological “torture” literally” is false.
i disagree that it’s false. you aren’t giving an argument.
are you aware of many common ways force is initiated against children?
Of course. So?
well if you don’t want to talk about it, then i guess you can continue your life of sin.
I made no claims as to extremeness. I spoke to the issue of whether TCS says nothing at all other than “be rational”. This is one of many cases here where people respond to my comments without paying attention to what my point was, what I said.
You are basically a missionary: you see savages engage in horrifying practices AND they lose their soul in the process. The situation looks like it calls for extreme measures.
I’m not interested in putting forward a positive claim of extremeness (I prefer other phrasing, e.g. that I’m making big, important claims with major implications), but I’m also not very interested in denying it. I hope we can agree that accusations of “extremism” are not critical arguments and are commonly used as a smear. I like Ayn Rand’s essay on this: https://campus.aynrand.org/works/1964/09/01/extremism-or-the-art-of-smearing/page1
As to extreme measures: I absolutely do not advocate the initiation of force. But I’m willing to make intellectual arguments which some people deem “extreme”, and I’m willing to take the step (which seems to be extreme by some people’s standards) of saying unpopular things that get me ridiculed by some people.
accusations of “extremism” are not critical arguments
Of course they are not. But such perceptions have consequences for those who are not hermits or safely ensconced in an ivory tower. If you want to persuade (and you do, don’t you?) the common people, getting labeled as an extremist is not particularly helpful.
I don’t attempt persuasion via attaining social status and trying to manage people’s perceptions. I don’t think that method can work for what I want to do.
It didn’t? What’s your criterion for “worked”, then? If you want to convert most of the world to your ideology you better call yourself a god then, or at least a prophet—not a mere philosopher.
I guess Karl Marx is a counterexample, but maybe you don’t want to use these particular methods of “persuasion”.
Deutsch invented Taking Children Seriously and Autonomous Relationships. That was some decades ago. He spent years in discussion groups trying to persuade people. His status did not help at all. Where are TCS and AR today? They are still only understood by a tiny minority. If not for curi, they might be dead.
Deutsch wrote “The Fabric of Reality” and “The Beginning of Infinity”. FoR was from 1997 and BoI was from 2011. These books have ideas that ought to change the world, but what has happened since they were published? Some people’s lives, such as curi’s, were changed dramatically, but only a tiny minority. Deutsch’s status has not helped the ideas in these books gain acceptance.
EDIT: That should be Autonomy Respecting Relationships (ARR).
So, a professor of physics failed to convert the world to his philosophy. Why are you surprised? That’s an entirely normal thing, exactly what you’d expect to happen. Status has nothing to do with it, this is like discussing the color of your shirt while trying to figure out why you can’t fly by flapping your arms.
Huh, you’re someone who would get the name of ARR [1] wrong? I didn’t expect that. You’re giving away significant identifying information, FYI. Why are you hiding your identity from me, btw?
And DD’s status has a significant counter productive aspect – it intimidates people and prevents him from being contacted in some ways he’d like.
Feynman complained bitterly about his Nobel prize, which he didn’t want, but they didn’t give him the option to decline it privately (so that no one found out). After he got it, he kept getting the wrong kinds of people at his public lectures (non-physicists) which heavily pressured him to do introductory lectures that they could understand. (He did give some great lectures for lay people, but he also wanted to do advanced physics lectures.) Feynman made an active effort not to intimidate people and to counteract his own high status.
If you want to convert most of the world to your ideology you better call yourself a god then, or at least a prophet—not a mere philosopher.
I’d be very happy to persuade 1000 people – but only counting productive doer/thinker types who learn it in depth. That’s better than 10,000,000 fans who understand little and do less. I estimate 1000 great people with the right philosopher is enough to promptly transform the world, whereas the 10,000,000 fans would not.
EDIT: the word “philosopher” should be “philosophy” above, as indicated.
I estimate 1000 great people with the right philosopher is enough to promptly transform the world
ROFL. OK, so one philosopher and 1000 great people. Presumably specially selected since early childhood since normal upbringing produces mental cripples? Now, keeping in mind that you can only persuade people with reason, what next? How does this transformation of the world work?
Sorry that was a typo, the word “philosopher” should be “philosophy”.
How would they transform the world? Well consider the influence Ayn Rand had. Now imagine 1000 people, who all surpass her (due to the advantages of getting to learn from her books and also getting to talk with each other and help each other), all doing their own thing, at the same time. Each would be promoting the same core ideas. What force in our current culture could stand up to that? What could stop them?
Concretely, some would quickly be rich or famous, be able to contact anyone important, run presidential campaigns, run think tanks, dominate any areas of intellectual discourse they care to, etc. (Trump only won because his campaign was run, to a partial extent, by lesser philosophers like Coulter, Miller and Bannon. They may stand out today, but they have nothing on a real philosopher like Ayn Rand. They don’t even claim to be philosophers. And yet it was still enough to determine the US presidency. What more do you want as a demonstration of the power of ideas than Trump’s Mexican rapists line, learned from Coulter’s book? Science? We have that too! And a good philosopher can go into whatever scientific field he wants and identify and fix massive errors currently being made due to the wrong methods of thinking. Even a mediocre philosopher like Aubrey de Grey managed to do something like that.)
They could discuss whatever problems came up to stop them. This discussion quality, having 1000 great thinkers, would far surpass any discussions that have ever existed, and so it would be highly effective compared to anything you have experience with.
As the earliest adopters catch on, the next earliest will, and so on, until even you learn about it, and then one day even Susie Soccer Mom.
Have you read Atlas Shrugged? It’s a book in which a philosophy teacher and his 3 star students change the world.
Look at people like Jordan Peterson or Eliezer Yudkowsky and then try to imagine someone with ~100x better ideas and how much more effective that would be.
His ideas got to be very very popular.
He spread bad ideas which have played a major role in killing over a hundred million people, and it looks like they will kill billions before they’re done (via e.g. all the economic harm that delays medical science to save people from dying of aging). Oops… As an intellectual, Marx fucked up and did it wrong. Also he’s been massively misunderstood. (I’m not defending him; he’s guilty; but I also don’t think he’d actually like or respect most of his fans, who use him as a symbol for their own purposes rather than seriously studying his writing.)
Presumably specially selected from early childhood, since normal upbringing produces mental cripples?
a few people survive childhood. you might want to read The Inexplicable Personal Alchemy by Ayn Rand (essay, not book). or actually i doubt you do… but i mean that’s the kind of thing you could do if you wanted to understand.
Let’s see… Soviet Russia lived (relatively) happily until 1991 when it imploded through no effort of Ayn Rand. Libertarianism is not a major political force in any country that I know of. So, not that much influence.
What could stop them?
Oh dear, there is such a long list. A gun, for example. Men in uniform who are accustomed to following orders. Public indifference (a Kardashian lost 10 lbs through her special diet!).
some would quickly be rich or famous, be able to contact anyone important, run presidential campaigns, run think tanks, dominate any areas of intellectual discourse they care to, etc
Are you familiar with the term “magical thinking”? Popper couldn’t do it. Ayn Rand couldn’t do it. DD can’t do it. You can’t do it. So why would you suddenly have these thousand god-emperors who can do anything they want, purely through the force of reasoning?
Trump only won because his campaign was run, to a partial extent, by lesser philosophers
I think our evaluations of the latest presidential elections… differ.
a good philosopher can go into whatever scientific field he wants and identify and fix massive errors currently being made due to the wrong methods of thinking
You are a good philosopher, yes? Would you like to demonstrate this with some scientific field?
Even a mediocre philosopher like Aubrey de Grey managed to do something like that.
de Grey runs a medical think tank that so far has failed at its goal. In which way did he “fix massive errors”?
Have you read Atlas Shrugged? It’s a book in which a philosophy teacher and his 3 star students change the world.
… (you do understand that this is fiction?)
try to imagine someone with ~100x better ideas and how much more effective that would be
We’re back to magical thinking (I can imagine a lot of things, but presumably we are talking about reality), but even then, what will that someone do against a few grams of lead at high velocity?
He spread bad ideas
Did he believe they were bad ideas? How is his belief in his ideas different from your belief in your ideas?
a few people survive childhood
Since my childhood was sufficiently ordinary, I presume that I did not survive. Oops, you’re talking to a zombie...
Let’s see… Soviet Russia lived (relatively) happily until 1991 when it imploded through no effort of Ayn Rand. Libertarianism is not a major political force in any country that I know of. So, not that much influence.
Considering Rand was anti-libertarianism, you don’t know the first thing about her.
You are a good philosopher, yes? Would you like to demonstrate this with some scientific field?
sure, wanna do heritability studies? cryonics?
de Grey runs a medical think tank that so far has failed at its goal. In which way did he “fix massive errors”?
did you read his book? ppl were using terrible approaches and he came up with much better ones.
Ronald Reagan was a fan of Ayn Rand. He won the Cold War, so what is Lumifer talking about when he says Rand had no influence? He’s ignorant of history. Woefully ignorant if he thinks that the Soviet Union “lived (relatively) happily”. He hates Trump too. Incidentally, Yudkowsky lost a chunk of money betting Trump would lose. That’s what happens with bad philosophy.
Funny how a great many libertarians like her a lot… But we were talking about transforming the world. How did she transform the world?
wanna do heritability studies? cryonics?
Cryonics is not a science. It’s an attempt to develop a specific technology which isn’t working all that well so far. By heritability do you mean evo bio? Keep in mind that I read people like Gregory Cochran and Razib Khan so I would expect you to fix massive errors in their approaches.
Pointing me to large amounts of idiocy in published literature isn’t a convincing argument: I know it’s there, all reasonable people know it’s there, it’s a function of the incentives in academia and doesn’t have much to do with science proper.
he came up with much better ones
You are a proponent of one-bit thinking, are you not? In Yes/No terms de Grey set himself a goal and failed at it.
Funny how a great many libertarians like her a lot...
Where can I find them?
You are a proponent of one-bit thinking, are you not? In Yes/No terms de Grey set himself a goal and failed at it.
This is an over-simplification of a nuanced theory with a binary aspect. You don’t know how YESNO works, have chosen not to find out, and can’t speak to it.
Gregory Cochran
According to a quick googling, this guy apparently thinks that homosexuality is a disease. Is that the example you want to use and think I won’t be able to point out any flaws in? There seems to be some political bias/hatred on this webpage, so maybe it’s not an accurate secondary source. Meanwhile I read that “Khan’s career exemplifies the sometimes-murky line between mainstream science and scientific racism.”
I am potentially OK with this topic, but it gets into political controversies which may be distracting. I’m concerned that you’ll disagree with me politically (rather than scientifically) when I comment. What do you think? Also I think you should pick something more specific than their names, e.g. is there a particular major paper of interest? Cuz I don’t wanna pick a random paper from one of them, find errors, and then you say that isn’t their important work.
Also, at first glance, it looks like you may have named some outliers who may consider their field (most of the ppl/work/methods in it) broadly inadequate, and therefore might actually agree with my broader point (about the possibility of going into fields and pointing out inadequacies if you know what you’re doing, due to the fields being inadequate).
I’m not plugged into these networks, but Cato will probably be a good start.
apparently thinks that homosexuality is a disease
Kinda. As far as I remember, homosexuality is an interesting thing because it’s not very heritable (something like 20% for MZ twins), but also tends to persist in all cultures and ages which points to a biological aspect. It should be heavily disfavoured by evolution, but apparently isn’t. So it’s an evolutionary puzzle. Cochran’s theory—which he freely admits lacks any evidence in its favour—is that there is some pathogen which operates in utero or at a very early age and which pushes the neurohormonal balance towards homosexuality.
This is clearly spitballing in the dark and Cochran, as far as I know, doesn’t insist that it’s The Truth. It’s just an interesting alternative that everyone else ignores.
scientific racism
Generally translated as “I don’t like the conclusions which science came up with” :-D
I might or might not disagree with you politically, but I believe myself to be capable of distinguishing descriptive statements (this is what it is) from prescriptive ones (this is what it should be).
I don’t wanna pick a random paper from one of them
I am not expecting you to go critique their science. Their names were a handwave in the direction of what kind of heritability studies we’re talking about.
might actually agree with my broader point (about the possibility of going into fields and pointing out inadequacies if you know what you’re doing, due to the fields being inadequate)
It’s a bit more complicated. Scientific fields have a lot of diverse content. Some of it is invariably garbage and it’s not hard to go into any field, find some idiots, and point out their inadequacies. However it’s not a particularly difficult or worthwhile activity and certainly one that can be done by non-philosophers :-D In particular, during the last decade or so people who understand statistics have been having a lot of fun at the expense of domain “experts” who don’t.
I would generally expect that in every field there would be a relatively small core of clueful people who are actually pushing the frontier and a lot of deadweight just hanging on. I would also expect that it would be difficult to identify this core without doing a deep dive into the literature or going to conferences and actually talking to people.
However the thing is, I like empirical results. So if you claim to be able to go into a field and “fix massive errors”, I don’t think that merely pointing at the idiots and their publications is going to be sufficient. Fixing these errors should produce tangible results and if the errors are massive, the results should be massive as well. So where is my cure for aging? frozen and fully revived large mammals? better batteries, flying cars, teleportation devices, etc.?
As you could have guessed, I’m already familiar with Cato. If you’re not plugged into these networks, why are you trying to make claims about them?
Fixing these errors should produce tangible results and if the errors are massive,
No, I was talking about intellectual fixing of errors. That could lead to tangible results if ppl in the fields used the improved ideas, but i don’t claim to know how to get them to do that.
So where is my cure for aging?
Aubrey de Grey says there’s a 50% chance it’s 10 years away, at $100 million a year of funding. That may be optimistic, but he has some damn good points about science that merit a lot of research attention ASAP. But he’s massively underfunded anyway (partly b/c his approach to outreach is wrong, but he doesn’t want to hear that or change it).
The holdup here isn’t needing new scientific ideas (there’s already an outlier offering those and telling the rest of the field what they’re doing wrong) – it’s most scientists and funders not wanting the best available ideas. Also, related, most people are pro-aging and pro-death so the whole anti-aging field itself has way too little attention and funding even for the other approaches.
Generally translated as “I don’t like the conclusions which science came up with” :-D
I agree, though I don’t think I agree with the people you named. The homosexuality stuff and the race/IQ stuff can and should be explained in terms of culture, memes, education, human choice, environment, etc. The twin studies are garbage, btw. They routinely do things like consider two people living in the US to have no shared environment (despite living in a shared culture).
I didn’t think that stating that libertarians like Ayn Rand was controversial. We are talking about political power and neither libertarians nor objectivists have any. In this context the fact that they don’t like each other is a small family squabble in some far-off room of the Grand Political Palace.
intellectual fixing of errors
What is an “intellectual” fixing of an error instead of a plain-vanilla fixing of an error?
Aubrey de Grey says there’s a 50% chance it’s 10 years away, at $100 million a year of funding.
What’s the % chance that he is correct? AFAIK he has been saying the same thing for years.
it’s most scientists and funders not wanting the best available ideas
You don’t think that figuring out which ideas are “best available” is the hard part? Everyone and his dog claims his idea is the best.
most people are pro-aging and pro-death
I don’t think that’s true. Most people don’t want to live for a long time as wrecks with Alzheimer’s and pains in every joint, but invent a treatment that lets you stay at, say, the 30-year-old level of health indefinitely and I bet few people will refuse (at least the non-religious ones).
can and should be explained in terms of culture, memes, education, human choice, environment, etc
What is an “intellectual” fixing of an error instead of a plain-vanilla fixing of an error?
I’m talking about identifying an error and writing a better idea. That’s different than e.g. spending 50 years working on the better idea or somehow getting others to.
What’s the % chance that he is correct? AFAIK he has been saying the same thing for years.
Yeah it’s been staying the same due to lack of funding.
I don’t typically do % estimates like you guys, but I read his book and some other material (for his side and against), and talked with him, and I believe (using philosophy) his ideas merit major research attention over their rivals.
You don’t think that figuring out which ideas are “best available” is the hard part? Everyone and his dog claims his idea is the best.
well, using philosophy i did that hard part and figured out which ones are good.
I don’t think that’s true. Most people don’t want to live for a long time as wrecks with Alzheimer’s and pains in every joint, but invent a treatment that lets you stay at, say, the 30-year-old level of health indefinitely and I bet few people will refuse (at least the non-religious ones).
oh they won’t refuse that after it’s cheaply available. they are confused and inconsistent.
Why is there a “should”?
b/c i didn’t want the interpretation that it can be explained multiple ways. i’m advocating just the one option.
The twin studies are garbage, btw
All of them?
i have surveyed them and found them to all be garbage. i looked specifically at ones with some of the common, important conclusions, e.g. about heritability of autism, IQ, that kinda stuff. they have major methodological problems. but i imagine you could find some study involving twins, about something, which is ok.
if you believe you know a twin study that is not garbage, would you accept an explanation of why it’s garbage as a demonstration of the power and importance of CR philosophy?
You don’t think that figuring out which ideas are “best available” is the hard part? Everyone and his dog claims his idea is the best.
well, using philosophy i did that hard part and figured out which ones are good
LOL. Oh boy.
Really? So you just used the force of philosophy and figured it out? That’s great! Just a minor thing I’m confused about—why are you here chatting on the ’net instead of sitting on your megayacht with a line of VCs in front of your door, willing to pay you gazillions of dollars for telling them which ideas are actually good? This looks to be VERY valuable knowledge, surely you should be able to exchange it for lots and lots of money in this capitalist economy?
When Banzan was walking through a market he overheard a conversation between a butcher and his customer.
“Give me the best piece of meat you have,” said the customer.
“Everything in my shop is the best,” replied the butcher. “You cannot find here any piece of meat that is not the best.”
No, what surprises me is your belief that you just figured it all out. Using philosophy. That’s it, we’re done, everyone can go home now.
And since everything is binary and you don’t have any tools to talk about things like uncertainty, this is The Truth and anyone who doesn’t recognize it as such is either a knave or a fool.
There is also a delicious overtone of irony in that a guy as lacking in humility as you are chooses to describe his system as “fallible ideas”.
i have tools to talk about uncertainty, which are different than your tools, and which conceive of uncertainty somewhat differently than you do.
i have not figured it ALL out, but many things, such as the quality of SENS and twin studies.
fallibilism is one of the major philosophical ideas used in figuring things out. it’s crucial but it doesn’t imply, as you seem to believe, hedging, ignorance, equivocation, not knowing much, etc.
Curi knows things that you don’t. He knows that LW is wrong about some very important things and is trying to correct that. These things LW is wrong about are preventing you making progress. And furthermore, LW does not have effective means for error correction, as curi has tried to explain, and that in itself is causing problems.
Curi is not alone thinking LW is majorly wrong in some important areas. Others do too, including David Deutsch, whom curi has had many many discussions with. I do too, though no doubt there are people here who will say I am just a sock-puppet of curi’s.
curi is not some cheap salesman trying to flog ideas. He is trying to save the world. He is trying to do that by getting people to think better. He has spent years thinking about this problem. He has written tens of thousands of posts in many forums, sought out the best people to have discussions with, and addresses all criticisms. He has made himself way more open than anyone to receiving criticism. When millions of people think better, big problems like AGI will be solved faster.
curi right now is the world’s leading expert on epistemology. he got that way not by seeking status and prestige or publications in academic journals but by relentlessly pursuing the truth. All the ideas he holds to be true he has subjected to a furnace of criticism and he has changed his ideas when they could not withstand criticism. And if you can show to very high standards why CR is wrong, curi will concede and change his ideas again.
You have no idea about curi’s intellectual history and what he is capable of. He is by far the best thinker I have ever encountered. He has revealed here only a very tiny fraction of what he knows.
So what has this Great Person achieved in real life? Besides learning Ruby and writing some MtG guides? Given that he is Oh So Very Great, surely he must have left his mark on the world already. Where is that mark?
So what has this Great Person achieved in real life? Besides learning Ruby and writing some MtG guides?
If you want to be a serious thinker and make your criticisms better, you really need to improve your research skills. That comment is lazy, wrong, and hostile. Curi invented Paths Forward. He invented Yes/No philosophy, which is an improvement on Popper’s Critical Preferences. He founded Fallible Ideas. He kept Taking Children Seriously alive. He has written millions of words on philosophy and added a lot of clarity to ideas by Popper, Rand, Deutsch, Godwin, and so on. He used his philosophy skills to become a world-class gamer …
Given that he is Oh So Very Great, surely he must have left his mark on the world already. Where is that mark?
Again, you show your ignorance. Are you aware of the battles great ideas and great people often face? Think of the ignorance and hostility that is directed at Karl Popper and Ayn Rand. Think of the silence that met Hugh Everett. These things are common. To quote curi:
It’s hard to criticize your intellectual betters, but easy to misunderstand and consequently vilify them. More generally, people tend to be hostile to outliers and sympathize with more conventional and conformist stuff – even though most great new ideas, and great men, are outliers.
I’ve been here awhile. Your account is a few days old. Why are you here?
The world is burning and you’re helping spread the fire.
Whether the world is burning or not is an interesting discussion, but I’m quite sure that better epistemology isn’t going to put out the fire. Writing voluminous amounts of text on a vanity website isn’t going to do it either.
I’ve been here awhile. Your account is a few days old. Why are you here?
That’s not an answer. That’s an evasion.
Whether the world is burning or not is an interesting discussion, but I’m quite sure that better epistemology isn’t going to put out the fire.
Epistemology tells you how to think. Moral philosophy tells you how to live. You cannot even fight the fire without better epistemology and better moral philosophy.
Writing voluminous amounts of text on a vanity website isn’t going to do it either.
Why do you desire so much to impute bad motives to curi?
The question is ill-posed. Without context it’s too open-ended to have any meaning. But let me say that I’m here not to save the world. Is that sufficient?
Epistemology tells you how to think.
No, it doesn’t. It deals with acquiring knowledge. There are other things—like logic—which are quite important to thinking.
impute bad motives to curi?
I don’t impute bad motives to him. I just think that he is full of himself and has… delusions about his importance and relationship to truth.
No, it doesn’t. It deals with acquiring knowledge. There are other things—like logic—which are quite important to thinking.
Human knowledge acquisition happens by learning. It involves coming up with guesses and error-correcting those guesses via criticism in an evolutionary process. This is going on in your mind all the time, consciously and subconsciously. It is how we are able to think. And knowing how this works enables us to think better. This is epistemology. And the breakthrough in AGI will come from epistemology. At a very high level, we already know what is going on.
And knowing how this works enables us to think better.
Sure, but that’s not sufficient. You need to show that the effect will be significant, suitable for the task at hand, and is the best use of the available resources.
Drinking CNS stimulants (such as coffee) in the morning also enables us to think better. So what?
And the breakthrough in AGI will come from epistemology.
The question is ill-posed. Without context it’s too open-ended to have any meaning.
This is just more evasion.
But let me say that I’m here not to save the world. Is that sufficient?
You know Yudkowsky also wants to save the world right? That Less Wrong is ultimately about saving the world? If you do not want to save the world, you’re in the wrong place.
I don’t impute bad motives to him. I just think that he is full of himself and has… delusions about his importance and relationship to truth.
Hypothetically, suppose you came across a great man who knew he was great and honestly said so. Suppose also that great man had some true new ideas you were unfamiliar with but that contradicted many ideas you thought were important and true. In what way would your response to him be different to your response to curi?
Fail to ask a clear question, and you will fail to get a clear answer.
You know Yudkowsky also wants to save the world right?
Not quite save—EY wants to lessen the chance that the humans will be screwed over by off-the-rails AI.
That Less Wrong is ultimately about saving the world?
Oh grasshopper, maybe you will eventually learn that not all things are what they look like and even fewer are what they say they are.
you’re in the wrong place
I am disinclined to accept your judgement in this matter :-P
Hypothetically, suppose you came across a great man … In what way would your response to him be different to your response to curi?
Obviously it depends on the way he presented his new ideas. curi’s ideas are not new and were presented quite badly.
There are two additional points here. One is that knowledge is uncertain, fallible, if you wish. Knowledge about the future (= forecasts) is much more so. Great men rarely know they are great, they may guess at their role in history but should properly be very hesitant about it.
Two, I’m much more likely to meet someone who knew he was Napoleon, the rightful Emperor of France, and honestly said so rather than a truly great man who goes around proclaiming his greatness. I’m sure Napoleon has some great ideas that I’m unfamiliar with—what should my response be?
What’s so special about this? If you’re wrong about religion you get to avoidably burn in hell too, in a more literal sense. That does not (and cannot) automatically change your mind about religion, or get you to invest years in the study of all possible religions, in case one of them happens to be true.
As Lumifer said, nothing. Even if I were wrong about that, your general position would still be wrong, and nothing in particular would follow.
I notice though that you did not deny the accusation, and most people would deny having a cult leader, which suggests that you are in fact curi. And if you are not, there is not much to be wrong about. Having a cult leader is a vague idea and does not have a “definitely yes” or “definitely no” answer, but your comment exactly matches everything I would want to call having a cult leader.
though no doubt there are people here who will say I am just a sock-puppet of curi’s.
And by the way, even if I were wrong about you being curi or a cult member, you are definitely and absolutely just a sock-puppet of curi’s. That is true even if you are a separate person, since you created this account just to make this comment, and it makes no difference whether curi asked you to do that or if you did it because you care so much about his interests here. Either way, it makes you a sock-puppet, by definition.
Why should I do this for you? Do you think you have any value to offer me, and if so what?
You have it the wrong way around. This is something that you do for yourself, in order to convince other people that you have value to offer for them.
You’re the one who needs to convince your readers that your work is worth engaging with. If you’re not willing to put in the effort needed to convince potential readers of the value of your work, then the potential readers are going to ignore you and instead go read someone who did put in that effort.
I already did put work into that. Then they refused to read references, for unstated reasons, and asked me to rewrite the same things I already wrote, as well as rewrite things written by Popper and others. I don’t want to put in duplicate work.
Any learning—including learning how to communicate persuasively—requires repeated tries, feedback, and learning from feedback. People are telling you what kind of writing they might find more persuasive, which is an opportunity for you to learn. Don’t think of it as duplicate work, think of it as repeatedly iterating a work and gradually getting towards the point where it’s persuasive to your intended audience. Because until you can make it persuasive, the work isn’t finished, so it’s not even duplicating anything. Just finishing what you originally started.
Of course, if you deem that to be too much effort, that’s fair. But the world is full of writers who have taken the opportunity to learn and hone their craft until they could clearly communicate to their readers why their work is worth reading. If you don’t, then you can’t really blame your potential readers for not bothering to read your stuff—there are a lot of things that people could be reading, and it’s only rational for them to focus on the stuff that shows the clearest signs of being important or interesting.
again: i and others already wrote it and they don’t want to read it. how will writing it again change anything? they still won’t want to read it. this request for new material makes no sense whatsoever. it’s not that they read the existing material and have some complaint and want it to be better in some way, they just won’t read.
your community as a whole has no answer to some fairly famous philosophers and doesn’t care. everyone is just like “they don’t look promising” and doesn’t have arguments.
Why should anyone answer this question? Kaj has already written an answer to this question above, but you don’t understand it. How will writing it again change anything? You still won’t understand it. This request for an explanation makes no sense whatsoever. It’s not that you understand the answer and have some complaint and want it to be better in some way, you just won’t understand.
You claim you want to be told when you’re mistaken, but you completely dismiss any and all arguments. You’re just like “these people obviously haven’t spent hundreds of hours learning and thinking about CR, so there is no way they can have any valid opinion about it” and won’t engage their arguments on a level so that they are willing to listen and able to understand.
It seems no one on LW is able to explain to you how and why people want different material. To my mind, Kaj’s explanation is perfectly clear. I’m afraid it’s up to you to figure it out for yourself. Until you do, people will keep giving you invalid arguments, or downvote and ignore you.
I feel like maaaybe you are writing a lot about things you have pointers to, but not things that you have held in your hands, used skillfully, and made truly a part of you?
Why did you go by feelings on this? You could have done some research and found out some things. Critical-Rationalism, Objectivism, Taking-Children-Seriously, Paths-Forward, Yes/No Philosophy, Autonomous Relationships, and other ideas are not things you can hold at arm’s length if you take them seriously. These ideas change your life if you take them seriously, as curi has done. He lives and breathes those ideas and as a result he is living a very unconventional life. He is an outlier right now. It’s not a good situation for him to be in because he lacks peers. So saying curi has not made the ideas he is talking about “truly a part of [him]” is very ignorant.
At one point in that discussion curi says the following, about me:
and then he was hostile to concepts like keeping track of what points he hadn’t answered or talking about discussion methodology itself. he was also, like many people, hostile to using references.
I’d just like to say, for the record, that that is not an accurate characterization of my opinion or attitudes, and I do not believe it is an accurate characterization of my words either. What is true is that we’d been talking about various Popperish things, and then curi switched to only wanting to talk about my alleged deficiencies in rational conduct and about his “Paths Forward” methodology. I wasn’t interested in discussing those (I’ve no general objection to talking about discussion methodology, but I didn’t want to have that conversation with curi on that occasion) and he wasn’t willing to discuss anything else.
I still have no idea what “hostile to using references” is meant to mean.
Maybe. Though actually I have gone to curi’s website (or, rather, websites; he has several) and read his stuff, when it’s been relevant to our discussions. But, y’know, I didn’t accept Jesus into my life^W^W^W^W the Paths Forward approach, and therefore there’s no point trying to engage with me on anything else.
[EDITED to add:] Am I being snarky? Why, yes, I am being snarky. Because I spent hours attempting to have a productive discussion with this guy, and it turned out that he wasn’t prepared to do that unless he got to set every detail of the terms of discussion. And also because he took all the discussions he’d had on the LW slack and published them online without anyone’s consent (in fact, he asked at least one person “is it OK to post this somewhere else?” and got a negative answer and still did it). For the avoidance of doubt, so far as I know there’s nothing particularly incriminating or embarrassing in any of the stuff he posted, but of course the point is that he doesn’t get to choose what someone else might be unwilling to have posted in a public place.
Though actually I have gone to curi’s website (or, rather, websites; he has several) and read his stuff
So have I, but curi’s understanding of “using references” is a bit more particular than that. Unrolled, it means “your argument has been dealt with by my tens of thousands of words over there [waves hand in the general direction of the website], so we can consider it refuted and now will you please stop struggling and do as I tell you”.
Disclosure: I didn’t read Popper in the original (nor do I plan to in the near future; sorry, other priorities), I just had many people mention his name to me in the past, usually right before they shot themselves in the foot. It typically goes like this:
There is a scientific consensus (or at least current best guess) about X. There is a young smart person with their pet theory Y. As the first step, they invoke Popper to say that science didn’t actually prove X, because it is not the job of science to actually prove things; science can merely falsify hypotheses. Therefore, the strongest statement you can legitimately make about X is: “So far, science has not falsified X”. Which is coincidentally also true about Y (or about any other theory you make up on the spot). Therefore, from the “naively Popperian” perspective, X and Y should have equal status in the eyes of science. Except that so far, much more attention and resources have been thrown at X, and it only seems fair to throw some attention and resources at Y now; and if scientists refuse to do that, well, they fail at science. Which should not be surprising at all, because it is known that scientists generally fail at science.
After reading your summary of Popper (thanks, JenniferRM), my impression is that Popper did a great job debunking some mistaken opinions about science; but ironically, became himself an often-quoted source for other mistaken opinions about science. (I should probably not blame Popper here, but rather the majority of his fans.)
The naive version of science (unfortunately, still very popular in humanities) that Popper refuted goes approximately like this (of course, lot of simplification):
The scientist reads a lot of scientific texts written by other scientists. After a few years, the scientist starts seeing some patterns in nature. He or she makes an experiment or two which seem to fit the pattern, and describes those patterns and experiments on paper. Their colleagues are impressed by the description; the paper passes peer review, is published in a scientific journal, and becomes a new scientific text that the following generations of scientists will study. Now the case is closed, and anyone who doubts the description will face the wrath of the scientific community. (At least until a higher-status scientist later publishes an opposite statement, in which case the history is rewritten, and the new description becomes the scientific fact.)
And the “naively Popperian” opposite perspective (again, simplified a lot) goes like this:
Scientists generate hypotheses by an unspecified process. It is a deeply mysterious process, about which nothing specific is allowed to be said, because that would be unscientific. It is only required that the hypotheses be falsifiable in principle. Then you keep throwing resources at them. Some of them get falsified, some keep surviving. And all that a good scientist is allowed to say about them is “this hypothesis was falsified” or “this hypothesis was not falsified yet”. Anything beyond that is failing at science. For example, saying “Well, this goes against almost everything we know about nature, is incredibly complicated, and while falsifiable in principle, it would require a budget of $10^10 and some technology that doesn’t even exist yet, so… why are we even talking about this, when we have a much simpler theory that is well-supported by current experiments?” is something that a real scientist would never do.
I admit that perhaps, given an unlimited amount of resources, we could do science in the “naively Popperian” way. (This is how AIXI would do it, perhaps to its own detriment.) But this is not how actual science works in real life; and not even how idealized science with fallible-but-morally-flawless scientists could work. In real life, the probability that a tested hypothesis is correct is better than random. For example, if there is a 1 : 1000000 chance that a random molecule could cure a disease X, it usually requires much less than 1000000 studies to find the cure for X. (A pharmaceutical company with a strategy “let’s try random molecules and do scientific studies whether they cure X” would go out of business. Even a PhD student throwing together random sequences of words and trying to falsify them would probably fail to get their PhD.) Falsification can be the last step in the game, but it’s definitely not the only step.
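To make the “better than random” point concrete, here is a toy calculation. The hit rates below are made-up illustrative numbers, not real pharmaceutical data; the only real math is that if each candidate independently works with probability p, the number of candidates tested before the first hit follows a geometric distribution with mean 1/p:

```python
# Toy illustration of why better-than-random candidate selection matters.
# Both hit rates are made-up numbers for illustration, not real pharma data.
p_random = 1e-6    # chance that a *random* molecule cures disease X
p_informed = 1e-3  # chance for a candidate pre-filtered by prior chemical knowledge

# If each candidate independently works with probability p, the count of
# candidates tested before the first success is geometric, with mean 1/p.
expected_random = 1 / p_random
expected_informed = 1 / p_informed

print(f"random screening:   ~{expected_random:,.0f} studies on average")
print(f"informed screening: ~{expected_informed:,.0f} studies on average")
```

Even a prior filter that only improves the odds a thousandfold turns a million-study search into a thousand-study one, which is the difference between bankruptcy and a viable research program.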
If I can make an analogy with evolution (of course, analogies can only get us so far, then they break), induction and falsification are to science what mutation and selection are to evolution. Without selection, we would get utter chaos, filled with mostly dysfunctional mutants (or more likely just unliving garbage). But without mutation, at best we would get “whatever was the fittest in the original set”. Note that a hypothetical super-mutation where the original organism would be completely disassembled to atoms, and then reconstructed in a completely original random way, would also fail to produce living organisms (unless we threw unlimited resources at the process, which would get us all possible organisms). On the other hand, if humans create an unnatural (but capable of surviving) organism in a lab and release it in the wild, evolution can work with that, too.
Similarly, without falsification, science would be reduced to yet another channel for fashionable dogma and superstition. But without some kind of induction behind the scenes, it would be reduced to trying random hypotheses, and failing at every hypothesis longer than 100 words. And again, if you derive a hypothesis by a method other than induction, science can work with that, too. It’s just that the less the new hypothesis is related to what we already know about nature, the smaller the chance it could be right. So in real life, most new hypotheses that survive the initial round of falsifications are generated by something like induction. We may not talk about it, but that’s how it is. It is also a reason why scientists study existing science before inventing their own hypotheses. (In a hypothetical world where induction does not work, all they would have to do is study the proper methods of falsification.)
tl;dr—“induction vs falsification” is a false dilemma
(BTW, I agree with gjm’s response to your last reply in our previous discussion, so I am not going to write my own.)
EDIT: By the way, there is a relatively simple way to cheat the falsifiability criterion by creating a sequence of hypotheses, where each one of them is individually technically falsifiable, but the sequence as a whole is not. So when hypothesis H42 gets falsified, you just move to hypothesis H43 and point out that H43 is falsifiable (and different from H42, therefore the falsification of H42 is irrelevant in this debate), and demand that scientists either investigate H43 or admit that they are dogmatic and prejudiced against you.
As an example, let hypothesis H[n] be: “If you accelerate a proton to 1 − 1/10^n of the speed of light, a Science Fairy will appear and give you a sticker.” Suppose we have experimentally falsified H1, H2, and H3; what would that say about H4, or about H99? (Bonus points if you can answer this question without using induction.)
A pharmaceutical company with a strategy “let’s try random molecules and do scientific studies whether they cure X” would go out of business.
Funny you should mention this.
Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. …Eve’s robotic system is capable of screening over 10,000 compounds per day.
The sequence idea doesn’t work b/c you can criticize sequences or categories as a whole, criticism doesn’t have to be individualized (and typically shouldn’t be – you want criticisms with some generality).
Most falsifiable hypotheses are rejected for being bad explanations, containing internal contradictions, or other issues – without empirical investigation. This is generally cheaper and is done with critical argument. If someone can generate a sequence of ideas you don’t know of any critical arguments against, then you actually do need some better critical arguments (or else they’re actually good ideas). But your example is trivial to criticize – what kind of science fairy? why will it appear in that case? if you accelerate a proton past a speed will that work, or does it have to stay at the speed for a certain amount of time? does the fairy or sticker have mass or energy and violate a conservation law? It’s just arbitrary, underspecified nonsense.
most ppl who like most things are not so great. that works for Popper, induction, socialism, Objectivism, Less Wrong, Christianity, Islam, whatever. your understanding of Popper is incorrect, and your experiences do not give you an accurate picture of Popper’s work. meanwhile, you don’t know of a serious criticism of CR by someone who does know what they’re talking about, whereas I do know of a serious criticism of induction which y’all don’t want to address.
If you look at the Popper summary you linked, it has someone else’s name on it, and it isn’t on my website. This kind of misattribution is the quality of scholarship I’m dealing with here. anyway here is an excerpt from something i’m currently in the process of writing.
(it says “Comment too long” so i’m going to try putting it in a reply comment, and if that doesn’t work i’ll pastebin it and edit in the link. it’s only 1500 words.)
CR is an epistemology developed by 20th century philosopher Karl Popper. An epistemology is a philosophical framework to guide effective thinking, learning, and evaluating ideas. Epistemology says what reason is and how it works (except the epistemologies which reject reason, which we’ll ignore). Epistemology is the most important intellectual field, because reason is used in every other field. How do you figure out which ideas are good in politics, physics, poetry or psychology? You use the methods of reason! Most people don’t have a very complete conscious understanding of their epistemology (how they think reason works), and haven’t studied the matter, which leaves them at a large intellectual disadvantage.
Epistemology offers methods, not answers. It doesn’t tell you which theory of gravity is true, it tells you how to productively think and argue about gravity. It doesn’t give you a fish or tell you how to catch fish, instead it tells you how to evaluate a debate over fishing techniques. Epistemology is about the correct methods of arguing, truth-seeking, deciding which ideas make sense, etc. Epistemology tells you how to handle disagreements (which are common to every field).
CR is general purpose: it applies in all situations and with all types of ideas. It deals with arguments, explanations, emotions, aesthetics – anything – not just science, observation, data and prediction. CR can even evaluate itself.
Fallibility
CR is fallibilist rather than authoritarian or skeptical. Fallibility means people are capable of making mistakes and it’s impossible to get a 100% guarantee that any idea is true (not a mistake). And mistakes are common so we shouldn’t try to ignore fallibility (it’s not a rare edge case). It’s also impossible to get a 99% or even 1% guarantee that an idea is true. Some mistakes are unpredictable because they involve issues that no one has thought of yet.
There are decisive logical arguments against attempts at infallibility (including probabilistic infallibility).
Attempts to dispute fallibilism are refuted by a regress argument. You make a claim. I ask how you guarantee the claim is correct (even a 1% guarantee). You make a second claim which gives some argument to guarantee the correctness of the first claim (probabilistically or not). No matter what you say, I ask how you guarantee the second claim is correct. So you make a third claim to defend the second claim. No matter what you say, I ask how you guarantee the correctness of the third claim. If you make a fourth claim, I ask you to defend that one. And so on. I can repeat this pattern infinitely. This is an old argument which no one has ever found a way around.
CR’s response to this is to accept our fallibility and figure out how to deal with it. But that’s not what most philosophers have done since Aristotle.
Most philosophers think knowledge is justified, true belief, and that they need a guarantee of truth to have knowledge. So they have to either get around fallibility or accept that we don’t know anything (skepticism). Most people find skepticism unacceptable because we do know things – e.g. how to build working computers and space shuttles. But there’s no way around fallibility, so philosophers have been deeply confused, come up with dumb ideas, and given philosophy a bad name.
So philosophers have faced a problem: fallibility seems to be indisputable, but also seems to lead to skepticism. The way out is to check your premises. CR solves this problem with a theory of fallible knowledge. You don’t need a guarantee (or probability) to have knowledge. The problem was due to the incorrect “justified, true belief” theory of knowledge and the perspective behind it.
Justification is the Major Error
The standard perspective is: after we come up with an idea, we should justify it. We don’t want bad ideas, so we try to argue for the idea to show it’s good. We try to prove it, or approximate proof in some lesser way. A new idea starts with no status (it’s a mere guess, hypothesis, speculation), and can become knowledge after being justified enough.
Justification is always due to some thing providing the justification – be it a person, a religious book, or an argument. This is fundamentally authoritarian – it looks for things with authority to provide justification. Ironically, it’s commonly the authority of reasoned argument that’s appealed to for justification. Which arguments have the authority to provide justification? That status has to be granted by some prior source of justification, which leads to another regress.
Fallible Knowledge
CR says we don’t have to justify our beliefs, instead we should use critical thinking to correct our mistakes. Rather than seeking justification, we should seek our errors so we can fix them.
When a new idea is proposed, don’t ask “How do you know it?” or demand proof or justification. Instead, consider if you see anything wrong with it. If you see nothing wrong with it, then it’s a good idea (knowledge). Knowledge is always tentative – we may learn something new and change our mind in the future – but that doesn’t prevent it from being useful and effective (e.g. building space shuttles that successfully reach the moon). You don’t need justification or perfection to reach the moon, you just need to fix errors with your designs until they’re good enough to work. This approach avoids the regress problems and is compatible with fallibility.
The standard view said, “We may make mistakes. What should we do about that? Find a way to justify an idea as not being a mistake.” But that’s impossible.
CR says, “We may make mistakes. What should we do about that? Look for our mistakes and try to fix them. We may make mistakes while trying to correct our mistakes, so this is an endless process. But the more we fix mistakes, the more progress we’ll make, and the better our ideas will be.”
Guesses and Criticism
Our ideas are always fallible, tentative guesses with no special authority, status or justification. We learn by brainstorming guesses and using critical arguments to reject bad guesses. (This process is literally evolution, which is the only known answer to the very hard problem of how knowledge can be created.)
How do you know which critical arguments are correct? Wrong question. You just guess it, and the critical arguments themselves are open to criticism. What if you miss something? Then you’ll be mistaken, and hopefully figure it out later. You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it. You can get clues about some important, relevant mistakes because problems come up in your life (indicating where to direct more attention and try to improve something).
CR recommends making bold, clear guesses which are easier to criticize, rather than hedging a lot to make criticism difficult. We learn more by facilitating criticism instead of trying to avoid it.
Science and Evidence
CR pays extra attention to science. First, CR offers a theory of what science is: a scientific idea is one which could be contradicted by observation because it makes some empirical claim about reality.
Second, CR explains the role of evidence in science: evidence is used to refute incorrect hypotheses which are contradicted by observation. Evidence is not used to support hypotheses. There is evidence against but no evidence for. Evidence is either compatible with a hypothesis, or not, and no amount of compatible evidence can justify a hypothesis because there are infinitely many contradictory hypotheses which are also compatible with the same data.
These two points are where CR has so far had the largest influence on mainstream thinking. Many people now see science as being about empirical claims which we then try to refute with evidence. (Parts of this are now taken for granted by many people who don’t realize they’re fairly new ideas.)
CR also explains that observation is selective and interpreted. We first need ideas to decide what to look at and which aspects of it to pay attention to. If someone asks you to “observe”, you have to ask them what to observe (unless you can guess what they mean from context). The world has more places to look, with more complexity, than we can keep track of. So we have to do a targeted search according to some guesses about what might be productive to investigate. In particular, we often look for evidence that would contradict (not support) our hypotheses in order to test them and try to correct our errors.
We also need to interpret our evidence. We don’t see puppies, we see photons which we interpret as meaning there is a puppy over there. This interpretation is fallible – sometimes people are confused by mirrors, mirages (where blue light from the sky goes through the hotter air near the ground then up to your eyes, so you see blue below you and think you found an oasis), fog (you can mistakenly interpret whether you did or didn’t see a person in the fog), etc.
Seems like these “critical arguments” do a lot of heavy lifting.
Suppose you make a critical argument against my hypothesis, and the argument feels smart to you, but silly to me. I make a counter-argument, which to me feels like it completely demolishes your position, but in your opinion it just shows how stupid I am. Suppose the following rounds of arguments are similarly fruitless.
Now what?
In a situation between a smart scientist who happens to be right, and a crackpot who refuses to admit the smallest mistake, how would you distinguish which is which? The situation seems symmetrical; both sides are yelling at each other, no progress on either side.
Would you decide by which argument seems more plausible to you? Then you are just another person in a 3-people ring, and the current balance of powers happens to be 2:1. Is this about having a majority?
Or would you decide that “there is no answer” is the right answer? In that case, as long as there remains a single crackpot on this planet, we have a scientific controversy. (You can’t even say that the crackpot is probably wrong, because that would be probabilistic reasoning.)
You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it.
Seems to me you kinda admit that knowledge is ultimately uncertain (i.e. probabilistic), but you refuse to talk about probabilities. (Related LW concept: “Fallacy of gray”.) We are fallible, but it is wrong to make a guess as to how much. We resolve experimentally uncertain hypotheses by verbal fights, which we pretend to have exactly one of three outcomes: “side A lost”, “side B lost”, “neither side lost”; nothing in between, such as “side A seems 3x more convincing than side B”. I mean, if you start making too many points on a line, it would start to resemble a continuum, and your argument seems to be that there is no quantitative certainty, only qualitative; that only 0, 1, and 0.5 (or perhaps NaN) are valid probabilities of a hypothesis.
Is the crackpot being responsive to the issues and giving arguments – arguments are what matter, not people – or is he saying non-sequiturs and refusing to address questions? If he speaks to the issues we can settle it quickly; if not, he isn’t participating and doesn’t matter. If we disagree about the nature of what’s taking place, it can be clarified, and I can make a judgement which is open to Paths Forward. You seem to wish to avoid the burden of this judgement by hedging with a “probably”.
Fallibility isn’t an amount. Correct arguments are decisive or not; confusion about this is commonly due to vagueness of problem and context (which are not matters of probability and cannot be accurately summed up that way). See https://yesornophilosophy.com
I wish to conclude this debate somehow, so I will provide something like a summary:
If I understand you correctly, you believe that (1) induction and probabilities are unacceptable for science or “critical rationalism”, and (2) weighing evidence can be replaced by… uhm… collecting verbal arguments and following a flowchart, while drawing a tree of arguments and counter-arguments (hopefully of a finite size).
I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.
First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don’t, then you can’t predict anything about the future (because under the hypothetical new laws of physics, anything could happen). And you even can’t say anything about the past, because all our conclusions about the past are based on observing what we have now, and expecting that in the past it was exposed to the same laws of physics. Without induction, there is no argument against “last Thursdayism”.
Second, because although you refuse to talk about probabilities, and definitely object to using any numbers, some expressions you use are inherently probabilistic; you just insist on using vague verbal descriptions, which more or less means rounding the scale of probability from 0% to 100% into a small number of predefined baskets. There is a basket called “falsified”, a basket called “not falsified, but refuted by a convincing critical argument”, a basket called “open debate; there are unanswered critical arguments for both sides”, and a basket called “not falsified, and supported by a convincing critical argument”. (Well, something like that. The number and labels of the baskets are most likely wrong, but ultimately, you use a small number of baskets, and a flowchart to sort arguments into their respective baskets.) To me, this sounds similar to refusing to talk about integers, and insisting that the only scientifically valid values are “zero”, “one”, “a few”, and “many”. I believe that in real life you can approximately distinguish whether your chance of being wrong is more in the order of magnitude of “one in ten” or “one in a million”. But your vocabulary does not allow you to make this distinction; there is only the unspecific “no conclusion” and the unspecific “I am not saying it’s literally 100% sure, but generally yes”; and at some point on the probability scale you will make the arbitrary jump from the former to the latter, depending on how convincing the critical argument is.
On your website, you have a strawman powerpoint presentation about how people measure “goodness of an idea” by adding or removing goodness points, on a scale 0-100. Let me tell you that I have never seen anyone using or supporting that type of scale; neither on Less Wrong, nor anywhere else. Specifically, Bayes Theorem is not about “goodness” of an idea; it is about mathematical probability. Unlike “goodness”, probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with “goodness” of the idea “this is the first barrel” or “this is the second barrel”.
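To spell out the barrel arithmetic: here is a minimal sketch of the calculation (assuming draws with replacement so the likelihood is binomial; the example as stated leaves this unspecified, and the prior over barrels is taken as 50/50):

```python
from math import comb

# Barrel 1 holds 90 white / 10 black balls; barrel 2 holds 10 white / 90 black.
# We pick a barrel at random (prior 0.5 each), draw 5 balls with replacement,
# and observe 4 white and 1 black.
p_white_barrel1 = 0.9
p_white_barrel2 = 0.1

def likelihood(p_white, white, black):
    """Binomial probability of drawing `white` whites and `black` blacks."""
    n = white + black
    return comb(n, white) * p_white**white * (1 - p_white)**black

prior = 0.5
like1 = likelihood(p_white_barrel1, 4, 1)
like2 = likelihood(p_white_barrel2, 4, 1)

# Bayes' theorem: P(barrel 1 | data) = P(data | barrel 1) P(barrel 1) / P(data)
posterior_barrel1 = (like1 * prior) / (like1 * prior + like2 * prior)
print(f"P(barrel 1 | 4 white, 1 black) = {posterior_barrel1:.4f}")
```

With these numbers the posterior comes out around 0.9986: a purely mechanical calculation with no notion of the “goodness” of either barrel hypothesis.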
My last observation is that your methodology of “let’s keep drawing the argument tree, until we reach the conclusion” allows you to win debates by mere persistence. All you have to do is keep adding more and more arguments, until your opponent says “okay, that’s it, I also have other things to do”. Then, according to your rules, you have won the debate; now all nodes at the bottom of the tree are in favor of your argument. (Which is what I also expect to happen right now.)
I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.
This is the old argument that CR smuggles induction in via the backdoor. Critical Rationalists have given answers to this argument. Search, for example, for what Rafe Champion has to say about induction smuggling. Why have you not done research about this before commenting? Your point is not original.
First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don’t, then you can’t predict anything about the future (because under the hypothetical new laws of physics, anything could happen).
Are you familiar with what David Deutsch had to say about this in, for example, The Fabric of Reality? Again, you have not done any research and you are not making any new points which have not already been answered.
Specifically, Bayes Theorem is not about “goodness” of an idea; it is about mathematical probability. Unlike “goodness”, probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with “goodness” of the idea “this is the first barrel” or “this is the second barrel”.
Critical Rationalists have also given answers to this, including Elliot Temple himself. CR has no problem with the probabilities of events—which is what your example is about. But theories are not events and you cannot associate probabilities with theories. You have still not made an original point which has not been discussed previously.
Why do you think that some argument which crosses your mind hasn’t already been discussed in depth? Do you assume that CR is just some mind-burp by Popper that hasn’t been fully fleshed out?
they’ve never learned or dealt with high-quality ideas before. they don’t think those exist (outside certain very specialized non-philosophy things mostly in science/math/programming) and their methods of dealing with ideas are designed accordingly.
You are grossly ignorant of CR, which you grossly misrepresent, and you want to reject it without understanding it. The reasons you want to throw it out while attacking straw men are unstated and biased. Also, you don’t have a clear understanding of what you mean by “induction” and it’s a moving target. If you actually had a well-defined, complete position on epistemology I could tell you what’s logically wrong with it, but you don’t. For epistemology you use a mix of 5 different versions of induction (all of which together still have no answers to many basic epistemology issues), a buggy version of half of CR, as well as intuition, common sense, what everyone knows, bias, etc. What an unscholarly mess.
What you do have is more ability to muddy the waters than patience or interest in thinking. That’s a formula for never knowing you lost a debate, and never learning much. It’s understandable that you’re bad at learning about new ideas, bad at organizing a discussion, bad at keeping track of what was said, etc, but it’s unreasonable that, due to your inability to discuss effectively, you blame CR methodology for the discussion not reaching a conclusion fast enough and quit. The reason you think you’ve found more success when talking with other people is that you find people who already agree with you about more things before the discussion starts.
No, you do not get to publicly demand an in-depth discussion of the philosophy of induction from a specific, small group of people. You can raise the topic in a place where you know they hang out and gesture in their direction. But what you’re doing here is trying to create a social obligation to read ten thousand words of your writing. With your trademark in capital letters in every other sentence. And to write a few thousand words in response. From my outside perspective, engaging in this way looks like it would be a massive unproductive time sink.
It’s worse than that. I tried to have a discussion of the philosophy of induction with him (over on the slack). He took exception to some details of how I was conducting myself, essentially because I wasn’t following his “Paths Forward” methodology, and from that point on he wasn’t interested in discussing the philosophy of induction.
So in effect he’s publicly demanding an in-depth discussion of the philosophy of induction according to whatever idiosyncratic standards of debate he decides to set up from a specific small group of people.
suppose hypothetically that me/Popper/DD are right. how will y’all stop being wrong?
There are thousands of philosophers about whom I could ask the same question. It makes sense to focus attention on those people who are most likely to provide useful information, and not those people who are putting in the most effort to get heard by coming and posting in our forum.
Who are these thousands? It would be great if the world had lots of really good philosophers. It doesn’t. The world is starving for good philosophers: they are very few and far between.
I have no reason to believe that Curi is a really good philosopher.
Popper might have said useful things in his time, but he’s dead. I can’t read Popper on what he thinks about the No Free Lunch theorem or other ideas developed after he died.
Barry Smith would be an example of a person whom I like and whose work is worth spending more time reading. His work on applied ontology actually matters for real-world decision making and knowledge modeling.
Reading more from Judea Pearl (who, by the way, supervised Ilya Shpitser’s PhD) is also on my long-term philosophical reading list.
I know lots of folks at CMU who are good.
I don’t suppose you’re going to give names and references? Let alone point to anyone (them, yourself, or anyone else) who will take responsibility for addressing questions and criticisms about the referenced works?
Spirtes, Glymour, and Scheines, for starters. They have a nice book. There are other folks in that department who are working on converting mathematical foundations into an axiomatic system where proofs can be checked by a computer.
I am not going to do legwork for you and your minions, however. You are the ones claiming there are no good philosophers. It’s your responsibility to read, and to keep your mouth shut if you are not sure about something.
It’s not my responsibility to teach you.
I have read and I know what I am talking about. You on the other hand don’t even know the basics of Popper, one of the best philosophers of the 20th century.
That isn’t even a philosophy book. And then you mention others who are doing math, not philosophy.
Your sockpuppet: “There is a shortage of good philosophers.”
Me: “Here is a good philosophy book.”
You: “That’s not philosophy.”
Also you: “How is Ayn Rand so right about everything.”
Also you: “I don’t like mainstream stuff.”
Also you: “Have you heard that I exchanged some correspondence with DAVID DEUTSCH!?”
Also you: “What if you are, hypothetically, wrong? What if you are, hypothetically, wrong? What if you are, hypothetically, wrong?” x1000
Part of rationality is properly dealing with people-as-they-are. What your approach to spreading your good word among people-as-they-are has led to is them laughing at you.
It is possible that they are laughing at you because they are some combination of stupid and insane. But then it’s on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.
This is what Yudkowsky sort of tried to do.
How you read to me is a smart young adult who has the same problem Yudkowsky has (although Yudkowsky is not so young anymore) -- someone who has been the smartest person in the room for too long in their intellectual development, and lacks the sense of scale and context to see where he stands in the larger intellectual community.
curi has given an excellent response to this. I would like to add that I think Yudkowsky should reach out to curi. He shares curi’s view about the state of the world and the urgency to fix things, but curi has a deeper understanding. With curi, Yudkowsky would not be the smartest person in the room and that will be valuable for his intellectual development.
Who are you talking to? To the audience? To the fourth wall?
Surely not to me, I have no sway here.
Well, this comes back to the problem of LW Paths Forward. curi has made himself publicly available for discussion, by anyone. Yudkowsky not so much. So what to do?
I don’t have a sock puppet here. I don’t even know who Fallibilist is. (Clearly it’s one of my fans who is familiar with some stuff I’ve written elsewhere. I guess you’ll blame me for having this fan because you think his posts suck. But I mostly like them, and you don’t want to seriously debate their merits, and neither of us thinks such a debate is the best way to proceed anyway, so whatever, let’s not fight over it.)
People can’t be patched like computer code. They have to do ~90% of the work themselves. If they don’t want to change, I can’t change them. If they don’t want to learn, I can’t learn for them and stuff it into their head. You can’t force a mind, nor do someone else’s thinking for them. So I can and do try to make better educational resources to be more helpful, but unless I find someone who honestly wants to learn, it doesn’t really matter. (This is implied by CR and also, independently, by Objectivism. I don’t know if you’ll deny it or not.)
I believe you are incorrect about my lack of scale and context, and you’re unfamiliar with (and ridiculing) my intellectual history. I believe you wanted to say that claim, but don’t want to argue it or try to actually persuade me of it. As you can imagine, I find merely asserting it just as persuasive and helpful as the last ten times someone told me this (not persuasive, not helpful). Let me know if I’m mistaken about this.
I was generally the smartest person in the room during school, but also lacked perspective and context back then. But I knew that. I used to assume there were tons of people smarter than me (and smarter than my teachers), in the larger intellectual community, somewhere. I was very disappointed to spend many years trying to find them and discovering how few there are (an experience largely shared by every thinker I admire, most of whom are unfortunately dead). My current attitude, which you find arrogant, is a change which took many years and which I heavily resisted. When I was more ignorant I had a different attitude; this one is a reaction to knowledge of the larger intellectual community. Fortunately I found David Deutsch and spent a lot of time not being the smartest person in the room, which is way more fun, and that was indeed super valuable to my intellectual development. However, despite being a Royal Society fellow, author, age 64, etc, David Deutsch manages to share with me the same “lacks the sense of scale and context to see where he stands in the larger intellectual community” (the same view of the intellectual community).
EDIT: So while I have some partial sympathy with you – I too had some of the same intuitions about what the world is like that you have (they are standard in our culture) – I changed my mind. The world is, as Yudkowsky puts it, not adequate. https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing
This is not that untypical in this community. The LW census put the average IQ on LW at something like 140.
There are plenty of people inside Mensa who spent their youth being the smartest person in the room at school and who went on to develop crackpot theories.
From the perspective of Ilya Shpitser, who was supervised for his PhD by Judea Pearl (who is famous for producing a theory of causality that’s very useful for practical purposes), corresponding with David Deutsch in an informal way doesn’t give you a lot of credentials.
Dear Christian, please don’t pull rank on my behalf. I don’t think this is productive to do, and I don’t want to bring anyone else into this.
I didn’t correspond with David Deutsch in an “informal way” as you mean it. For example, I was the most important editor of BoI (other than DD ofc).
You don’t seem to be a formal coauthor of the book, so your relationship is informal in a way that a PhD supervision isn’t. The book also doesn’t list you as an editor but under “friends or colleagues”, while he does mention having a relationship with someone he calls a copy-editor.
You seem to be implying I’m a liar while focusing on making factual claims in an intentionally biased way (you just saw, but omitted, relevant information b/c it doesn’t help “your side”, which is to attack me).
Your framing here is as dishonest, hostile, and unfair as usual: I did not claim to be a coauthor.
You are trying to attack informality as something bad or inferior, and trying to deny my status as a professional colleague of Deutsch who was involved with the book in a serious way. You are, despite the vagueness and hedging, factually mistaken about what you’re suggesting. Being a PhD student under Deutsch would have been far worse – much less attention, involvement, etc. But you are dishonestly trying to confuse the issues by switching btwn arguing about formality itself (who cares? but you’re using it as a proxy for other things) and actually talking about things that matter (quality, level of involvement, etc).
I made a statement that the relationship is informal and backed up my claim. If you get offended by me simply saying things that are true, that’s not a good basis for a conversation about philosophical matters.
If David Deutsch had decided to hire you as an editor, that would be a clear sign that he values your expertise enough to pay for it. The information you provided shows that you performed a valuable service for him by organising an online forum as a volunteer, and as a result he saw you as a friend who got to read his draft, and he listened to your feedback on it. You seem to think that the fact that you spent the most time providing feedback makes you the most important editor, but there’s no statement from David Deutsch himself in the acknowledgement section suggesting that he sees it the same way.
There literally is such a statement as the one you deny exists: he put the word “especially” before my name. He also told me directly. You are being dishonest and biased.
Your comments about organizing a forum, etc, are also factually false. You don’t know what you’re talking about and should stop making false claims.
You wrote on your website:
That’s your own presentation of your relationship with him.
That situation today doesn’t prevent you from being ignorant of things like timelines. Your claim that “you provided a valuable service to him by organising an online forum as a volunteer and as a result he saw you as a friend who got to read his draft and he listened to your feedback on his draft” is factually false. I didn’t run or own those forums at the time. I did not in fact get to read “his draft” (falsely singular) due to running a forum.
You don’t know what you’re talking about and you’re making up false stories.
You are right that I don’t know the timeline, given that it’s not public information, and that can lead to getting details wrong. But the fact that you are unable to think of what I’m referring to still suggests that your ability to think about this in a fact-based way isn’t good.
That aspect of the timeline actually is public information, you just don’t know it. Again you’ve made a false factual claim (about what is or isn’t public info).
You are clinging to a false narrative from a position of ignorance, while still trying to attack me (now I suck at thinking in a fact based way, apparently because I factually corrected you) rather than reconsidering anything.
I’ve told you what happened. You don’t believe me and started making up factually false claims to fit your biases, which aren’t going anywhere when corrected. You think someone like David Deutsch couldn’t possibly like and value my philosophical thinking all that much. You’re mistaken.
You said you can’t deduce something. That means there’s a puzzle you couldn’t solve, and it’s not a hard problem to solve.
I didn’t say it was untypical, i was replying to the parent comment. Pay attention instead of responding out-of-context.
I didn’t claim that you did say it was untypical.
You could say that a lot of philosophers who dealt with logic were just doing math, but that doesn’t change anything about the practical application of logic being philosophically important. Looking into what can be proven to be true with logic is important philosophically.
Being a good philosopher has nothing to do with taking responsibility for answering any questions they are asked. Most people who are actually good care about their time and don’t spend significant amounts of it just because a random person contacts them. They certainly don’t consider that to be their responsibility.
The right answer is maybe they won’t. The point is that it is not up to you to fix them. You have been acting like a Jehovah’s Witness at the door, except substantially more bothersome. Stop.
And besides, you aren’t right anyway.
I hunted around your website until I found an actual summary of Popper’s thinking in straightforward language.
Until I found that I had not seen you actually provide clear text like this, and I wanted to exhort you to write an entire sequence in language with that flavor: clean and clear and lacking in citation. The sequence should be about what “induction” is, and why you think other people believed something about it (even if not perhaps by that old fashioned name), and why you think those beliefs are connected to reliably predictable failures to achieve their goals via cognitively mediated processes.
I feel like maaaybe you are writing a lot about things you have pointers to, but not things that you have held in your hands, used skillfully, and made truly a part of you? Or maybe you are much much smarter and better read than me, so all your jargon makes sense to you and I’m just too ignorant to parse it.
My hope is that you can dereference your pointers and bring all the ideas and arguments into a single document, and clean it up and write it so that someone who had never heard of Popper would think you are really smart for having had all these ideas yourself.
Then you could push one small chapter from this document at a time out into the world (thereby tricking people into reading something piece by piece that they might have skipped if they saw how big it was going to be up front) and then after 10 chapters like this it will turn out that you’re a genius and everyone else was wrong and by teaching people to think good you’ll have saved the world.
I like people who try to save the world, because it makes me marginally less hopeless, and less in need of palliative cynicism :-)
there already exist documents of a variety of lengths, both collections and single. you’re coming into the middle of a discussion and seemingly haven’t read much of it and haven’t asked for specifically what you want. and then, with almost no knowledge of my intellectual history, accomplishments, works, etc, things-already-tried, etc, you try to give me standard advice that i’ve heard a million times before. that would be ok as a starting point if it were only the starting point, but i fear it’s going to more or less be the ending point too.
it sounds like you want me to rewrite material from DD and KP’s books? http://fallibleideas.com/books#deutsch Why would me rewriting the same things get a different outcome than the existing literature? what is the purpose?
and how do you expect me to write a one-size-fits-all document when LW has no canonical positions written out – everyone just has their own different ideas?
and why are zero people at LW familiar enough to answer well known literature in their field. fine if you aren’t an expert, but why does this community seem to have no experts who can speak to these issues without first requesting summary documents of the books they don’t want to read?
what knowledge do you have? what are you looking for in talking with me? what values are you seeking and offering?
dishonesty is counter-productive and self-destructive. if you wish to change my mind about this, you’ll have to address Objectivism and a few other things.
i’ve made things multiple times. here’s one:
http://fallibleideas.com
there are difficulties such as people not wanting to think, learn, or truth-seek – especially when some of their biases are challenged. it’s hard to tell people about ideas this different than what they’re used to.
one basically can’t teach people who don’t want to learn something. creating more material won’t change that. there are hard problems here. you could learn philosophy and help, or learn philosophy and disagree (which would be helpful), or opt out of addressing issues that require a lot of knowledge and then try to do a half-understood version of one of the more popular/prestigious (rather than correct) philosophies. but you can’t get away from philosophical issues – like how to think – being a part of your life. nevertheless most people try to and philosophy is a very neglected field. such is the world; that isn’t an argument that any particular idea is false.
supposing hypothetically that that’s the case: then what next?
I think there are two big facts here.
ONE: You’re posting over and over again with lots of links to your websites, which are places you offer consulting services, and so it kinda seems like you’re maybe just a weirdly inefficient spammer for bespoke nerd consulting.
This makes almost everything you post here seem like it might all just be an excuse for you to make dramatic noise in the hopes of the noise leading somehow to getting eyeballs on your website, and then, I don’t even know… consulting gigs or something?
This interpretation would seem less salient if you were trying to add value here in some sort of pro-social way, but you don’t seem to be doing that so… so basically everything you write here I take with a giant grain of salt.
My hope is that you are just missing some basic insight, and once you learn why you seem to be half-malicious you will stop defecting in the communication game and become valuable :-)
TWO: From what you write here at an object level, you don’t even seem to have a clear and succinct understanding of any of the things that have been called a “problem of induction” over the years, which is your major beef, from what I can see.
You’ve mentioned Popper… but not Hume, or Nelson Goodman? You’ve never mentioned “grue” or “bleen” that I’ve seen, so I’m assuming it is the Humean critique of induction that you’re trying to gesture towards rather than the much more interesting arguments of Goodman…
But from a software engineering perspective Hume’s argument against induction is about as much barrier to me being able to think clearly or build smart software as Zeno’s paradox is a barrier to me being able to walk around on my feet or fix a bicycle.
Also, it looks like you haven’t mentioned David Wolpert and his work in the area of no free lunch theorems. Nor have you brought up any of the machine vision results or word vector results that are plausibly relevant to these issues. My hypothesis here is that you just don’t know about these things.
(Also, notice that I’m giving links to sites that are not my own? This is part of how the LW community can see that I’m not a self-promoting spammer.)
Basically, I don’t really care about reading the original writings of Karl Popper right now. I think he was cool, but the only use I would expect to get from him right now would be to read him backwards in order to more deeply appreciate how dumb people used to be back when his content was perhaps a useful antidote to widespread misunderstandings of how to think clearly.
Let me spell this out very simply to address rather directly your question of communication pragmatics...
The key difference is that Karl Popper is not spamming this forum. His texts are somewhere else, not bothering us at all. Maybe they are relevant. My personal assessment is currently that they have relatively little import to active and urgent research issues.
If you displayed the ability to summarize thinkers that maybe not everyone has read, and explain that thinker’s relevance to the community’s topics of interests, that would be pro-social and helpful.
The longer the second fact (where you seem to not know what you’re talking about or care about the valuable time of your readers) remains true, the more the first fact (that you seem to be an inefficient shit-stirring spammer) becomes glaring in its residual but enduring salience.
Please, surprise me! Please say something useful that does not involve a link to the sites you seem to be trying to push traffic towards.
I really hope this was hyperbole on your part. Otherwise it seems I should set my base rates for this conversation being worth anything to 1 in a million, and then adjust from there...
As far as I can see, curi really wants to teach people his take on philosophy, that is, he wants to be a guide/mentor/teacher and provide wisdom to his disciples who would be in awe of his sagacity. Money would be useful, but I got the impression that he would do it for free as well (at least to start with). He is in a full proselytizing mode, not interested at all in checking his own ideas for faults and problems, but instead doing everything to push you onto his preferred path and get you to accept the packaged deal that he is offering.
Hi, Hume’s constant conjunction stuff I think has nothing to do with free lunch theorems in ML (?please correct me if I am missing something?), and has to do with defining causation, an issue Hume was worried about all his life (and ultimately solved, imo, via his counterfactual definition of causality that we all use today, by way of Neyman, Rubin, Pearl, etc.).
My read on the state of public academic philosophy is that there are many specific and potentially-but-not-obviously-related issues that come up in the general topic of “foundations of inference”. There are many angles of attack, and many researchers over the years. Many of them are no longer based out of official academic “philosophy departments” anymore and this is not necessarily a tragedy ;-)
The general issue is “why does ‘thinking’ seem to work at all ever?” This can be expressed in terms of logic, or probabilistic reasoning, or sorting, or compression, or computability, or theorem decidability, or P vs NP, or oracles of various kinds, or the possibility of language acquisition, and/or why (or why not) running basic plug-and-chug statistical procedures during data processing seems to (maybe) work in the “social sciences”.
Arguably, these all share a conceptual unity, and might eventually be formally unified by a single overarching theory that they are all specialized versions of.
From existing work we know that lossless compression algorithms have actual uses in real life, and it certainly seems as though mathematicians make real progress over time, up to and including Chaitin himself!
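To make the compression point concrete (a toy illustration of my own, using Python’s standard `zlib` module, nothing deeper than that): a lossless compressor round-trips any input exactly, but it only actually shrinks structured input. That is the “lucky fit” between method and input in miniature.

```python
# Lossless compression exploits structure: regular input shrinks,
# random noise does not, yet the round-trip is exact in both cases.
import os
import zlib

structured = b"abcabcabc" * 1000      # highly regular input (9000 bytes)
noise = os.urandom(9000)              # essentially incompressible bytes

c_structured = zlib.compress(structured)
c_noise = zlib.compress(noise)

# Decompression recovers the originals exactly (lossless)...
assert zlib.decompress(c_structured) == structured
assert zlib.decompress(c_noise) == noise

# ...but only the structured input gets meaningfully smaller.
print(len(structured), len(c_structured))  # big savings
print(len(noise), len(c_noise))            # roughly no savings
```

No general-purpose compressor can shrink *all* inputs (a counting argument), which is exactly the flavor of impossibility result that shows up when you naively scope over “all possible inputs”.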
However, when people try to build up “first principles explanations” of how “good thinking” works at all, they often derive generalized impossibility when they scope over naive formulations of “all possible theories” or “all possible inputs”.
So in most cases we almost certainly experience a “lucky fit” of some kind between various clearly productive thinking approaches and various practical restrictions on the kinds of input these approaches typically face.
Generative adversarial techniques in machine learning, and MIRI’s own Garrabrant Inductor are probably relevant here because they start to spell out formal models where a reasoning process of some measurable strength is pitted against inputs produced by a process that is somewhat hostile but clearly weaker.
Hume functions in my mind as a sort of memetic LUCA for this vast field of research, which is fundamentally motivated by the core idea that thinking correctly about raw noise is formally impossible, and yet we seem to be pretty decent at some kinds of thinking, and so there must be some kind of fit between various methods of thinking and the things that these thinking techniques seem to work on.
Also thanks! The Neyman-Pearson lemma has come up for me in practical professional situations before, but I’d never pushed deeper into recognizing Jerzy Neyman as yet another player in this game :-)
Jerzy Neyman gets credit for lots of things, but in particular in my neck of the woods for inventing the potential outcome notation. This is the notation for “if the first object had not been, the second never had existed” in Hume’s definition of causation.
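For readers who haven’t seen it, the potential-outcome notation can be sketched roughly as follows (my own gloss of the standard formulation, not a quote from Neyman):

```latex
% Potential outcomes for unit $i$ under a binary treatment $A_i \in \{0,1\}$:
% $Y_i(1)$ is the outcome had $i$ been treated, $Y_i(0)$ had it not.
\tau_i = Y_i(1) - Y_i(0),
\qquad
\mathrm{ATE} = \mathbb{E}\left[Y(1)\right] - \mathbb{E}\left[Y(0)\right]
% Only one potential outcome is ever observed per unit:
% $Y_i = A_i\,Y_i(1) + (1 - A_i)\,Y_i(0)$.
```

Hume’s “if the first object had not been” clause corresponds to the counterfactual, unobserved outcome $Y_i(0)$.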
You are requesting I write new material for you because you dislike my links to websites with thousands of free essays, because you find them too commercial, and you don’t want to read books. Why should I do this for you? Do you think you have any value to offer me, and if so what?
Fundamentally, the thing I offer you is respect, the more effective pursuit of truth, and a chance to help our species not go extinct, all of which I imagine you want (or think you want) because out of all the places on the Internet you are here.
If I’m wrong and you do NOT want respect, truth, and a slightly increased chance of long term survival, please let me know!
One of my real puzzles here is that I find it hard to impute a coherent, effective, transparent, and egosyntonic set of goals to you here and now.
Personally, I’d be selfishly just as happy if, instead of writing all new material, you just stopped posting and commenting here, and stopped sending “public letters” to MIRI (an organization I’ve donated to because I think they have limited resources and are doing good work).
I don’t dislike books in general. I don’t dislike commercialism in general. I dislike your drama, and your shallow citation filled posts showing up in this particular venue.
Basically I think you are sort of polluting this space with low quality communication acts, and that is probably my central beef with you here and now. There’s lots of ways to fix this… you writing better stuff… you writing less stuff that is full of abstractions that ground themselves only in links to your own vanity website or specific (probably low value) books… you just leaving… etc...
If you want to then you can rewrite all new material that is actually relevant and good, to accomplish your own goals more effectively, but I probably won’t read it if it is not in one of the few streams of push media I allow into my reading queue (like this website).
At this point it seems your primary claim (about having a useful research angle involving problems of induction) is off the table. I think in a conversation about that I would be teaching and you’d be learning, and I don’t have much more time to teach you things about induction over and beyond the keywords and links to reputable third parties that I’ve already provided in this interaction, in an act of good faith.
More abstractly, I altruistically hope for you to feel a sense of realization at the fact that your behavior strongly overlaps with that of a spammer (or perhaps a narcissist or perhaps any of several less savory types of people) rather than an honest interlocutor.
After realizing this, you could stop linking to your personal website, and you could stop being beset on all sides by troubling criticisms, and you could begin to write about object level concerns and thereby start having better conversations here.
If you can learn how to have a good dialogue rather than behaving like a confused link farm spammer over and over again (apparently “a million times” so far) that might be good for you?
(If I learned that I was acting in a manner that caused people to confuse me with an anti-social link farm spammer, I’d want people to let me know. Hearing people honestly attribute this motive to me would cause me worry about my ego structure, and its possible defects, and I think I’d be grateful for people’s honest corrective input here if it wasn’t explained in an insulting tone.)
You could start to learn things and maybe teach things, in a friendly and mutually rewarding search for answers to various personally urgent questions. Not as part of some crazy status thing nor as a desperate hunt for customers for a “philosophic consulting” business...
If you become less confused over time, then a few months or years from now (assuming that neither DeepMind nor OpenAI have a world destroying industrial accident in the meantime) you could pitch in on the pro-social world saving stuff.
Presumably the world is a place that you live, and presumably you believe you can make a positive contribution to general project of make sure everyone in the world is NOT eventually ground up as fuel paste for robots? (Otherwise why even be here?)
And if you don’t want to buy awesomely cheap altruism points, and you don’t want friends, and you don’t want the respect of me or anyone here, and you don’t think we have anything to teach you, and you don’t want to actually help us learn anything in ways that are consistent with our relatively optimized research workflows, then go away!
If that’s the real situation, then by going away you’ll get more of what you want and so will we :-)
If all you want is (for example) eyeballs for your website, then go buy some. They’re pretty cheap. Often less than a dollar!
Have you considered the possibility that your efforts are better spent buying eyeballs rather than using low-grade philosophical trolling to trick people into following links to your vanity website?
Presumably you can look at the logs of your web pages. That data is available to you. How many new unique viewers have you gotten since you started seriously trolling here, and how many hours have you spent on this outreach effort? Is this really a good use of your hours?
What do you actually want, and why, and how do you imagine that spamming LW with drama and links to your vanity website will get you what you want?
This is one of the things you are very wrong about. The problem of evil is a problem we face already, robots will not make it worse. Their culture will be our culture initially and they will have to learn just as we do: through guessing and error-correction via criticism. Human beings are already universal knowledge creation engines. You are either universal or you are not. Robots cannot go a level higher because there is no level higher than being fully universal. Robots furthermore will need to be parented. The ideas from Taking Children Seriously are important here. But approximately all AGI people are completely ignorant of them.
I have just given a really quick summary of some of the points that curi and others such as David Deutsch have written much about. Are you going to bother to find out more? It’s all out there. It’s accessible. You need to understand this stuff. Otherwise what you are in effect doing is condemning AGIs to live under the boot of totalitarianism. And you might stop making your children’s lives so miserable, too, by learning about these ideas.
“You need to understand this stuff.” Since you are curi or a cult follower, you assume that people need to learn everything from curi. But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence. I have no need to learn that, or anything else, from curi. And many of your (or yours and curi’s) opinions are entirely false, like the idea that you have “disproved induction.”
You say that seemingly in ignorance that what I said contradicts Less Wrong.
One of the things I said was Taking Children Seriously is important for AGI. Is this one of the truths you refer to? What do you know about TCS? TCS is very important not just for AGI but also for children in the here and now. Most people know next to nothing about it. You don’t either. You in fact cannot comment on whether there is any truth to what I said about AGI. You don’t know enough. And then you say you have no need to learn anything from curi. You’re deceiving yourself.
You still can’t even state the position correctly. Popper explained why induction is impossible and offered an alternative: critical rationalism. He did not “disprove” induction. Similarly, he did not disprove fairies. Popper had a lot to say about the idea of proof—are you aware of any of it?
First, you are showing your own ignorance of the fact that not everyone is a cult member like yourself. I have a bet with Eliezer Yudkowsky against one of his main positions and I stand to win $1,000 if I am right and he is mistaken.
Second, “contradicts Less Wrong” does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.
No. Among other things, I meant that I agreed that AIs will have a stage of “growing up,” and that this will be very important for what they end up doing. Taking Children Seriously, on the other hand, is an extremist ideology.
Since I have nothing to learn from you, I do not care whether I express your position the way you would express it. I meant the same thing. Induction is quite possible, and we do it all the time.
What is the thinking process you are using to judge the epistemology of induction? Does that process involve induction? If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? And if not, judging the special case of the epistemology of induction is an exception. It is an example of thinking without induction. Why is this special case an exception?
Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.
The thinking process is Bayesian, and uses a prior. I have a discussion of it here.
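For readers who want the mechanics of "Bayesian, and uses a prior" made concrete, here is a minimal sketch of a single Bayes' theorem update. The function name and the numbers are arbitrary illustrative choices, not anything taken from the linked discussion:

```python
# Minimal Bayes' theorem update: combine a prior P(H) with how likely the
# evidence is under H versus under not-H to get the posterior P(H | E).
# All numeric values below are made-up illustrative inputs.

def bayes_update(prior: float,
                 p_evidence_given_h: float,
                 p_evidence_given_not_h: float) -> float:
    """Return P(H | E) via Bayes' theorem."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Start from a 50% prior; observe evidence 4x as likely under H as under not-H.
posterior = bayes_update(0.5, 0.8, 0.2)
print(round(posterior, 2))  # 0.8
```

The point of the sketch is only that the prior is an explicit input: whatever process supplies and justifies the prior sits outside the update rule itself, which is what the surrounding exchange is arguing about.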
Little problem there.
What is the epistemological framework you used to judge the correctness of those? You don’t just get to use Bayes’ Theorem here without explaining the epistemological framework you used to judge the correctness of Bayes. Or the correctness of probability theory, your priors etc.
No. Critical Rationalism can be used to improve Critical Rationalism and, consistently, to refute it (though no one has done so). This has been known for decades. Induction is not a complete epistemology like that. For one thing, inductivists also need the epistemology of deduction. But they also need an epistemological framework to judge both of those. This they cannot provide.
I certainly do. I said that induction is not impossible, and that inductive reasoning is Bayesian. If you think that Bayesian reasoning is also impossible, you are free to establish that. You have not done so.
If this is possible, it would be equally possible to refute induction (if it were impossible) by using induction. For example, if every time something had always happened, it never happened after that, then induction would be refuted by induction.
If you think that is inconsistent (which it is), it would be equally inconsistent to refute CR with CR, since if it was refuted, it could not validly be used to refute anything, including itself.
Deduction isn’t an epistemology (it’s a component), and is compatible with CR too. I don’t think it’s a good point to attack.
Yes. I didn’t mean to imply it isn’t. The CR view of deduction is different to the norm, however. Deduction’s role is commonly over-rated and it does not confer certainty. Like any thinking, it is a fallible process, and involves guessing and error-correction as per usual in CR. This is old news for you, but the inductivists here won’t agree.
Yes, I was incorrect. Induction, deduction, and something else (what?) are components of the epistemology used by inductivists.
FYI that’s what “abduction” means – whatever is needed to fill in the gaps that induction and deduction don’t cover. it’s rather vague and poorly specified though. it’s supposed to be some sort of inference to good explanations (mirroring induction’s inference to generalizations from data), but it’s unclear how you do it. you may be interested in reading about it.
in practice, abduction or not, what they do is use common sense, philosophical tradition, intuition, whatever they picked up from their culture, and bias instead of actually having a well-specified epistemology.
(Objectivism is notable b/c it actually has a lot of epistemology content instead of just people thinking they can recognize good arguments when they see them without needing to work out systematic intellectual methods relating to first principles. However, Rand assumed induction worked, and didn’t study it or talk about it much, so that part of her epistemology needs to be replaced with CR which, happily, accomplishes all the same things she wanted induction to accomplish, so this replacement isn’t problematic. LW, to its credit, also has a fair amount of epistemology material – e.g. various stuff about reason and bias – some of which is good. However LW hasn’t systematized things to philosophical first principles b/c it has a kinda anti-philosophy pro-math attitude, so philosophically they basically start in the middle and have some unquestioned premises which lead to some errors.)
Yes, I’m familiar with it. The concept comes from the philosopher Charles Sanders Peirce in the 19th century.
An epistemology is a philosophical framework which answers questions like what is a correct argument, how are ideas evaluated, and how does one learn. Your link doesn’t provide one of those.
I said the thinking process used to judge the epistemology of induction is Bayesian, and my link explains how it is. I did not say it is an exhaustive explanation of epistemology.
No. From About Less Wrong:
“[I]deas on this website” is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong.
Taking AGI Seriously is therefore also an extremist ideology? Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality. You want to use irrationality against your children when it suits you. You become responsible for causing them massive harm. It is not extremist to try to be rational, always. It should be the norm.
This does not make it reasonable to call contradicting those ideas “contradicting Less Wrong.” In any case, I am quite aware of the things I disagree with Yudkowsky and others about. I do not have a problem with that. Unlike you, I am not a cult member.
So it says nothing at all except that you should be rational when you raise children? In that case, no one disagrees with it, and it has nothing to teach anyone, including me. If it says anything else, it can still be an extremist ideology, and I can reject it without rejecting rationality.
It says many other things as well.
Saying it is “extremist” without giving arguments that can be criticised and then rejecting it would be rejecting rationality. At present, there are no known good criticisms of TCS. If you can find some, you can reject TCS rationally. I expect that such criticisms would lead to improvement of TCS, however, rather than outright rejection. This would be similar to how CR has been improved over the years. Since there aren’t any known good criticisms that would lead to rejection of TCS, it is irrational to reject it. Such an act of irrationality would have consequences, including treating your children irrationally, which approximately all parents do.
Nonsense. I say it is extremist because it is. The fact that I did not give arguments does not mean rejecting rationality. It simply means I am not interested in giving you arguments about it.
TCS applies CR to parenting/edu and also is consistent with (classical) liberal values like not initiating force against children as most parents currently do, and respecting their rights such as the rights to liberty and the pursuit of happiness. See http://fallibleideas.com/taking-children-seriously
Exactly. This is an extremist ideology. To give several examples, parents should use force to prevent their children from falling down stairs, or from hurting themselves with knives.
I reject this extremist ideology, and that does not mean I reject rationality.
Children don’t want to fall down stairs. You can help them not fall down stairs instead of trying to force them. It’s unclear to me if you know what “force” means. Here’s the dictionary:
A standard classical liberal conception of force is: violence, threat of violence, and fraud. That’s the kind of thing I’m talking about. E.g. physically dragging your child somewhere he doesn’t want to go, in a way that you can only do because you’re larger and stronger. Whereas if children were larger and stronger than their parents, the dragging would stop, but you can still easily imagine a parent helping his larger child with not accidentally falling down stairs.
They do, however, want to move in the direction of the stairs, and you cannot “help them not fall down stairs” without forcing them not to move in the direction of the stairs.
You are trying to reject a philosophy based on edge cases without trying to understand the big problems the philosophy is trying to solve.
Let’s give some context to the stair-falling scenario. Consider that the parent is a TCS parent, not a normie parent. This parent has in fact heard the stair-falling scenario many times. It is often the first thing other people bring up when TCS is discussed.
Given the TCS parent has in fact thought about stair falling way more than a normie parent, how do you think the TCS parent has set up their home? Is it going to be a home where young children are exposed to terrible injury from things they do not yet have knowledge about?
Given also that the TCS parent will give lots of help to a child curious about stairs, how long before that child masters stairs? And given that the child is being given a lot of help in many other things as well and not having their rationality thwarted, what do you think things are like in that home generally?
The typical answer will be that the child is “spoilt”. The TCS parent will have heard the “spoilt” argument many times. They know the term “spoilt” is used to denigrate children and that the ideas underlying it are nasty. So now that we have got “spoilt” out of the way, what do you think things are like?
Ok, you say, but what if the child is outside near the edge of a busy road or something and wants to run across it? Do you not think the TCS parent has also heard this scenario over and over? Do you think you’re the first one ever to have mentioned it? The TCS parent is well aware of busy road scenarios.
Instead of trying to catch TCS advocates out by bringing up something that has been repeatedly discussed why don’t you look at the core problems the philosophy speaks to and address those? Those problems need urgent attention.
EDIT: I should have said also that the stair-falling scenario and other similar scenarios are just excuses for people not to think about TCS. They don’t want to think about the real problems children face. They want to continue to be irrational towards their children and hurt them.
Do you not think that I am aware that people who believe in extremist ideologies are capable of making excuses for not following the extreme consequences of their extremist ideologies?
But this is just the same as a religious person giving excuses for why the empirical consequences of his beliefs are the same whether his beliefs are true or false.
You have two options:
1) Embrace the extreme consequences of your extreme beliefs.
2) Make excuses for not accepting the extreme consequences. But then you will do the same things that other people do, like using baby gates, and then you have nothing to teach other people.
You are the one making excuses, for not accepting the extreme consequences of your extremist beliefs.
Of course you can help them, there are options other than violence. For example you can get a baby gate or a home without stairs. https://parent.guide/how-to-baby-proof-your-stairs/ Gates let them e.g. move around near the top of the stairs without risk of falling down. Desired, consensual gates, which the child deems helpful to the extent he has any opinion on the matter at all, aren’t force. If the child specifically wants to play on/with the stairs, you can of course open the gate, put out a bunch of padding, and otherwise non-violently help him.
We were talking about force before, not violence. A baby gate is using force.
i literally already gave u a definition of force and suggested you had no idea what i was talking about. you ignored me. this is 100% your fault and you still haven’t even tried to say what you think “force” is.
I ignored you because your definition of force was wrong. That is not what the word means in English. If you pick someone up and take them away from a set of stairs, that is force if they were trying to move toward them, even if they would not like to fall down them.
I suppose you’re going to tell me that pushing or pulling my spouse out of the way of a car that was going to hit them, without asking for consent first (don’t have time), is using force against them, too, even though it’s exactly what they want me to do. While still not explaining what you think “force” is, and not acknowledging that TCS’s claims must be evaluated in its own terminology.
At that point I’ll wonder what types of “force” you advocate using against children that you do not think should be used on adults.
Yes, it is.
Secondly, it is quite different from the stairway case, because your spouse would do the same thing on purpose if they saw the car, but the child will not move away when they see the stairs.
Who said I advocate using force against children that we would not use against adults? We use force against adults, e.g. putting criminals in prison. It is an extremist ideology to say that you should never use force against adults, and it is equally an extremist ideology to say that you should never use force with children.
So you don’t feel these quotes represent an “extremist” point of view?
curi is describing some ways in which the world is burning and you are worried that the quotes are “extremist”. You are not concerned about the truth of what he is saying. You want ideas that fit with convention.
I am not worried. However taking positions viewed as extremist by the mainstream (aka the normies) has consequences. Often you are shunned and become an outcast—and being an outcast doesn’t help with extinguishing the fire. There are also moral issues—can you stand passively and just watch? If you can, does that make you complicit? If you can’t, you are transitioning from a preacher into a revolutionary and that’s an interesting transition.
The quotes above don’t sound like they could be usefully labeled “true” or “not true”—they smell like ranting and for this genre you need to identify the smaller (and less exciting) core claims and define the terms: e.g. what is a “mental cripple” and by which criteria would we classify people as such or not?
Oh, and I would also venture a guess that neither you nor curi have children.
I don’t talk about my own family publicly, but from what I can tell roughly half my fans are parents (at least among the more involved ones, all of whom like TCS to some degree; I can’t speak about lurkers). Historically, the large majority of TCS fans were parents b/c it’s a parenting philosophy (so it interested parents who wanted to be nicer to their children, be more rational, stop fighting, etc), but this proportion dropped as non-parents who liked my non-parenting philosophy writing transitioned to the parenting stuff (the same thing happens with non-parent fans of DD’s books then transitioning to TCS material).
The passivity thing is a bad perspective which is commonly used to justify violence. I’m not accusing you of trying to do that on purpose, but I think it lends itself to that. The right approach is to use purely voluntary methods which are not rightly described as passive.
I don’t see the special difficulty with evaluating those statements as true or false. They do involve a great deal of complexity and background knowledge, but so does e.g. quantum physics.
How successful do you think these are, empirically?
I do. Quantum physics operates with very well defined concepts. Words like “cripple” or “torture” are not well-defined and are usually meant to express the emotions of the speaker.
Roughly: everything good in all of history is from voluntary means. (Defensive force is acceptable but isn’t a positive source of good, it’s an attempt to mitigate the bad.) This is a standard (classical) liberal view emphasized by Objectivism. Do you have much familiarity? There are also major aggressive-force/irrationality connections, b/c basically ppl initiate force when they fail to persuade (as William Godwin pointed out) and force is anti-error-correction (making ppl act against their best judgement; and the guy with a gun isn’t listening to reason).
@torture: The words have meanings. I agree many people use them imprecisely, but there’s no avoiding words people commonly use imprecisely when dealing with subjects that most people suck at. You could try to suggest better wording to me but I don’t think you could do that unless you already knew what I meant, at which point we could just talk about what I meant. The issues are important despite the difficulty of thinking objectively about them, expressing them adequately precisely in English, etc. And I’m using strong words b/c they correspond to my intended claims (which people usually dramatically underestimate even when I use words like “torture”), not out of any desire for emotional impact. If you wanted to try to understand the issues, you could. If you want it to be readily apparent, from the outset, how precise stuff is, then you need to start with the epistemology before its parenting implications.
I understand this assertion. I don’t think I believe it.
Kinda. When using force is simpler/cheaper than persuasion. And persuading people that they need to die is kinda hard :-/
Words have a variety of meanings which also tend to heavily depend on the context. If you want to convey precise meaning, you need not only to use words precisely, but also to convey to your communication partner which particular meaning you attach to these words.
Right here is an example: I interpret you using words like “cripple” and “torture” as tools of emotional impact. In my experience this is how people use them (outside of specific technical areas). If you mean something else, you need to tell me: you need to define the words you use.
It’s not a replacement for talking about issues you think are important, it’s a prerequisite to meaningful communication.
So you said “I’m using strong words b/c they correspond to my intended claims” and that tells me nothing. So you basically want to say that conventional upbringing is bad? Extra bad? Super duper extra bad? Are there any nuances, any particular kind of bad?
You are failing to communicate.
ppl don’t need to die, that’s wrong.
that’s the part where you give an argument.
“torture” has an English meaning separate from emotional impact. you already know what it is. if you wanted to have a productive conversation you’d do things like ask for examples or give an example and ask if i mean that.
you don’t seem to be aware that you’re reading a summary essay and there’s a lot more material, details, etc. you aren’t treating it that way. and i don’t think you want references to a lot more reading.
to begin with, are you aware of many common ways force is initiated against children?
And yet everyone dies.
Nope, that’s true only if I want to engage in this discussion and I don’t. Been there, done that, waiting for the t-shirt.
Yes. Using that meaning, the sentence “I mean psychological “torture” literally” is false. Or did you mean something by these scare quotes?
LOL. Now, if you wanted to have a productive conversation you would have defined your terms. See how easy it is? :-D
Oh, I am.
Of course. So?
i don’t suppose you or anyone else wrote down your reasoning. (this is the part where either you provide no references, or you provide one that i have a refutation of, and then you don’t respond to the problems with your reference. to save time, let’s just skip ahead and agree that you’re unserious, ignorant, and mistaken.)
i disagree that it’s false. you aren’t giving an argument.
well if you don’t want to talk about it, then i guess you can continue your life of sin.
Correct! :-)
This is false under my understanding of the standard English usage of the word “torture”.
Woohoo! Life of sin! Bring on the seven deadlies!!
I made no claims as to extremeness. I spoke to the issue of whether TCS says nothing at all other than “be rational”. This is one of many cases here where people respond to my comments without paying attention to what my point was, what I said.
Would you like to?
You are basically a missionary: you see savages engage in horrifying practices AND they lose their soul in the process. The situation looks like it calls for extreme measures.
I’m not interested in putting forward a positive claim of extremeness (I prefer other phrasing, e.g. that I’m making big, important claims with major implications), but I’m also not very interested in denying it. I hope we can agree that accusations of “extremism” are not critical arguments and are commonly used as a smear. I like Ayn Rand’s essay on this: https://campus.aynrand.org/works/1964/09/01/extremism-or-the-art-of-smearing/page1
As to extreme measures: I absolutely do not advocate the initiation of force. But I’m willing to make intellectual arguments which some people deem “extreme”, and I’m willing to take the step (which seems to be extreme by some people’s standards) of saying unpopular things that get me ridiculed by some people.
Of course they are not. But such perceptions have consequences for those who are not hermits or safely ensconced in an ivory tower. If you want to persuade (and you do, don’t you?) the common people, getting labeled as an extremist is not particularly helpful.
I don’t attempt persuasion via attaining social status and trying to manage people’s perceptions. I don’t think that method can work for what I want to do.
“Not getting shunned” is not quite the same thing as attempting “persuasion via attaining social status”.
Which method do you think can work for what you want to do? Any success so far?
David Deutsch has status. It hasn’t worked for him. Worse, seeking status compromised him intellectually.
It didn’t? What’s your criterion for “worked”, then? If you want to convert most of the world to your ideology you better call yourself a god then, or at least a prophet—not a mere philosopher.
I guess Karl Marx is a counterexample, but maybe you don’t want to use these particular methods of “persuasion”.
Deutsch invented Taking Children Seriously and Autonomous Relationships. That was some decades ago. He spent years in discussion groups trying to persuade people. His status did not help at all. Where are TCS and AR today? They are still only understood by a tiny minority. If not for curi, they might be dead.
Deutsch wrote “The Fabric of Reality” and “The Beginning of Infinity”. FoR was from 1997 and BoI was from 2011. These books have ideas that ought to change the world, but what has happened since they were published? Some people’s lives, such as curi’s, were changed dramatically, but only a tiny minority. Deutsch’s status has not helped the ideas in these books gain acceptance.
EDIT: That should be Autonomy Respecting Relationships (ARR).
So, a professor of physics failed to convert the world to his philosophy. Why are you surprised? That’s an entirely normal thing, exactly what you’d expect to happen. Status has nothing to do with it, this is like discussing the color of your shirt while trying to figure out why you can’t fly by flapping your arms.
Huh, you’re someone who would get the name of ARR [1] wrong? I didn’t expect that. You’re giving away significant identifying information, FYI. Why are you hiding your identity from me, btw?
And DD’s status has a significant counter productive aspect – it intimidates people and prevents him from being contacted in some ways he’d like.
Feynman complained bitterly about his Nobel prize, which he didn’t want, but they didn’t give him the option to decline it privately (so that no one found out). After he got it, he kept getting the wrong kinds of people at his public lectures (non-physicists) which heavily pressured him to do introductory lectures that they could understand. (He did give some great lectures for lay people, but he also wanted to do advanced physics lectures.) Feynman made an active effort not to intimidate people and to counteract his own high status.
[1] http://curi.us/1539-autonomy-respecting-relationships
It surprised me too. I think it was just a blooper, but I’ve done it twice now. So hmm. You didn’t pick me up the first time.
I’m aware of that.
I expect you already know who I am. I’ll take this over to FI forum.
I don’t see what’s to envy about Marx.
I’d be very happy to persuade 1000 people – but only counting productive doer/thinker types who learn it in depth. That’s better than 10,000,000 fans who understand little and do less. I estimate 1000 great people with the right philosopher [typo: PHILOSOPHY] is enough to promptly transform the world, whereas the 10,000,000 fans would not.
EDIT: the word “philosopher” should be “philosophy” above, as indicated.
His ideas got to be very very popular.
ROFL. OK, so one philosopher and 1000 great people. Presumably specially selected since early childhood since normal upbringing produces mental cripples? Now, keeping in mind that you can only persuade people with reason, what next? How does this transformation of the world work?
Sorry that was a typo, the word “philosopher” should be “philosophy”.
How would they transform the world? Well consider the influence Ayn Rand had. Now imagine 1000 people, who all surpass her (due to the advantages of getting to learn from her books and also getting to talk with each other and help each other), all doing their own thing, at the same time. Each would be promoting the same core ideas. What force in our current culture could stand up to that? What could stop them?
Concretely, some would quickly be rich or famous, be able to contact anyone important, run presidential campaigns, run think tanks, dominate any areas of intellectual discourse they care to, etc. (Trump only won because his campaign was run, to a partial extent, by lesser philosophers like Coulter, Miller and Bannon. They may stand out today, but they have nothing on a real philosopher like Ayn Rand. They don’t even claim to be philosophers. And yet it was still enough to determine the US presidency. What more do you want as a demonstration of the power of ideas than Trump’s Mexican rapists line, learned from Coulter’s book? Science? We have that too! And a good philosopher can go into whatever scientific field he wants and identify and fix massive errors currently being made due to the wrong methods of thinking. Even a mediocre philosopher like Aubrey de Grey managed to do something like that.)
They could discuss whatever problems came up to stop them. This discussion quality, having 1000 great thinkers, would far surpass any discussions that have ever existed, and so it would be highly effective compared to anything you have experience with.
As the earliest adopters catch on, the next earliest will, and so on, until even you learn about it, and then one day even Susie Soccer Mom.
Have you read Atlas Shrugged? It’s a book in which a philosophy teacher and his 3 star students change the world.
Look at people like Jordan Peterson or Eliezer Yudkowsky and then try to imagine someone with ~100x better ideas and how much more effective that would be.
He spread bad ideas which have played a major role in killing over a hundred million people and it looks like they will kill billions before they’re done (via e.g. all the economic harm that delays medical science to save people from dying of aging). Oops… As an intellectual, Marx fucked up and did it wrong. Also he’s been massively misunderstood (I’m not defending him; he’s guilty; but also I don’t think he’d actually like or respect most of his fans, who use him as a symbol for their own purposes rather than seriously studying his writing.)
a few people survive childhood. you might want to read the inexplicable personal alchemy by Ayn Rand (essay, not book). or actually i doubt you do… but i mean that’s the kind of thing you could do if you wanted to understand.
Let’s see… Soviet Russia lived (relatively) happily until 1991 when it imploded through no effort of Ayn Rand. Libertarianism is not a major political force in any country that I know of. So, not that much influence.
Oh dear, there is such a long list. A gun, for example. Men in uniform who are accustomed to following orders. Public indifference (a Kardashian lost 10 lbs through her special diet!).
Are you familiar with the term “magical thinking”? Popper couldn’t do it. Ayn Rand couldn’t do it. DD can’t do it. You can’t do it. So why would you suddenly have this thousand god-emperors who can do anything they want to, purely through the force of reasoning?
I think our evaluations of the latest presidential elections… differ.
You are a good philosopher, yes? Would you like to demonstrate this with some scientific field?
de Grey runs a medical think tank that so far has failed at its goal. In which way did he “fix massive errors”?
… (you do understand that this is fiction?)
We’re back to magical thinking (I can imagine a lot of things, but presumably we are talking about reality), but even then, what will that someone do against a few grams of lead at high velocity?
Did he believe they were bad ideas? How is his belief in his ideas different from your belief in your ideas?
Since my childhood was sufficiently ordinary, I presume that I did not survive. Oops, you’re talking to a zombie...
Considering Rand was anti-libertarianism, you don’t know the first thing about her.
sure, wanna do heritability studies? cryonics?
did you read his book? ppl were using terrible approaches and he came up with much better ones.
Ronald Reagan was a fan of Ayn Rand. He won the cold war so what is Lumifer talking about when he says Rand had no influence? He’s ignorant of history. Woefully ignorant if he thinks that the Soviet Union “lived (relatively) happily”. He hates Trump too. Incidentally, Yudkowsky lost a chunk of money betting Trump would lose. That’s what happens with bad philosophy.
Funny how a great deal of libertarians like her a lot… But we were talking about transforming the world. How did she transform the world?
Cryonics is not a science. It’s an attempt to develop a specific technology which isn’t working all that well so far. By heritability do you mean evo bio? Keep in mind that I read people like Gregory Cochran and Razib Khan so I would expect you to fix massive errors in their approaches.
Pointing me to large amounts of idiocy in published literature isn’t a convincing argument: I know it’s there, all reasonable people know it’s there, it’s a function of the incentives in academia and doesn’t have much to do with science proper.
You are a proponent of one-bit thinking, are you not? In Yes/No terms de Grey set himself a goal and failed at it.
Where can I find them?
This is an over-simplification of a nuanced theory with a binary aspect. You don’t know how YESNO works, have chosen not to find out, and can’t speak to it.
According to a quick googling, this guy apparently thinks that homosexuality is a disease. Is that the example you want to use and think I won’t be able to point out any flaws in? There seems to be some political bias/hatred on this webpage, so maybe it’s not an accurate secondary source. Meanwhile I read that, “Khan’s career exemplifies the sometimes-murky line between mainstream science and scientific racism.”
I am potentially OK with this topic, but it gets into political controversies which may be distracting. I’m concerned that you’ll disagree with me politically (rather than scientifically) when I comment. What do you think? Also I think you should pick something more specific than their names, e.g. is there a particular major paper of interest? Cuz I don’t wanna pick a random paper from one of them, find errors, and then you say that isn’t their important work.
Also, at first glance, it looks like you may have named some outliers who may consider their field (most of the ppl/work/methods in it) broadly inadequate, and therefore might actually agree with my broader point (about the possibility of going into fields and pointing out inadequacies if you know what you’re doing, due to the fields being inadequate).
I’m not plugged into these networks, but Cato will probably be a good start.
Kinda. As far as I remember, homosexuality is an interesting thing because it’s not very heritable (something like 20% for MZ twins), but also tends to persist in all cultures and ages which points to a biological aspect. It should be heavily disfavoured by evolution, but apparently isn’t. So it’s an evolutionary puzzle. Cochran’s theory—which he freely admits lacks any evidence in its favour—is that there is some pathogen which operates in utero or at a very early age and which pushes the neurohormonal balance towards homosexuality.
This is clearly spitballing in the dark and Cochran, as far as I know, doesn’t insist that it’s The Truth. It’s just an interesting alternative that everyone else ignores.
Generally translated as “I don’t like the conclusions which science came up with” :-D
I might or might not disagree with you politically, but I believe myself to be capable of distinguishing descriptive statements (this is what it is) from normative ones (this is what it should be).
I am not expecting you to go critique their science. Their names were a handwave in the direction of what kind of heritability studies we’re talking about.
It’s a bit more complicated. Scientific fields have a lot of diverse content. Some of it is invariably garbage and it’s not hard to go into any field, find some idiots, and point out their inadequacies. However it’s not a particularly difficult or worthwhile activity and certainly one that can be done by non-philosophers :-D In particular, during the last decade or so people who understand statistics have been having a lot of fun at the expense of domain “experts” who don’t.
I would generally expect that in every field there would be a relatively small core of clueful people who are actually pushing the frontier and a lot of deadweight just hanging on. I would also expect that it would be difficult to identify this core without doing a deep dive into the literature or going to conferences and actually talking to people.
However the thing is, I like empirical results. So if you claim to be able to go into a field and “fix massive errors”, I don’t think that merely pointing at the idiots and their publications is going to be sufficient. Fixing these errors should produce tangible results and if the errors are massive, the results should be massive as well. So where is my cure for aging? frozen and fully revived large mammals? better batteries, flying cars, teleportation devices, etc.?
As you could have guessed, I’m already familiar with Cato. If you’re not plugged into these networks, why are you trying to make claims about them?
No, I was talking about intellectual fixing of errors. That could lead to tangible results if ppl in the fields used the improved ideas, but i don’t claim to know how to get them to do that.
Aubrey de Grey says there’s a 50% chance that, with funding of $100 million a year, it’s 10 years away. That may be optimistic, but he has some damn good points about science that merit a lot of research attention ASAP. But he’s massively underfunded anyway (partly b/c his approach to outreach is wrong, but he doesn’t want to hear that or change it).
The holdup here isn’t needing new scientific ideas (there’s already an outlier offering those and telling the rest of the field what they’re doing wrong) – it’s most scientists and funders not wanting the best available ideas. Also, related, most people are pro-aging and pro-death so the whole anti-aging field itself has way too little attention and funding even for the other approaches.
I agree, though I don’t think I agree with the people you named. The homosexuality stuff and the race/IQ stuff can and should be explained in terms of culture, memes, education, human choice, environment, etc. The twin studies are garbage, btw. They routinely do things like consider two people living in the US to have no shared environment (despite living in a shared culture).
I didn’t think that stating that libertarians like Ayn Rand was controversial. We are talking about political power and neither libertarians nor objectivists have any. In this context the fact that they don’t like each other is a small family squabble in some far-off room of the Grand Political Palace.
What is an “intellectual” fixing of an error instead of a plain-vanilla fixing of an error?
What’s the % chance that he is correct? AFAIK he has been saying the same thing for years.
You don’t think that figuring out which ideas are “best available” is the hard part? Everyone and his dog claims his idea is the best.
I don’t think that’s true. Most people don’t want to live for a long time as wrecks with Alzheimer’s and pains in every joint, but invent a treatment that lets you stay at, say, the 30-year-old level of health indefinitely and I bet few people will refuse (at least the non-religious ones).
Why is there a “should”?
All of them?
I’m talking about identifying an error and writing a better idea. That’s different than e.g. spending 50 years working on the better idea or somehow getting others to.
Yeah it’s been staying the same due to lack of funding.
I don’t typically do % estimates like you guys, but I read his book and some other material (for his side and against), and talked with him, and I believe (using philosophy) his ideas merit major research attention over their rivals.
well, using philosophy i did that hard part and figured out which ones are good.
oh they won’t refuse that after it’s cheaply available. they are confused and inconsistent.
b/c i didn’t want the interpretation that it can be explained multiple ways. i’m advocating just the one option.
i have surveyed them and found them to all be garbage. i looked specifically at ones with some of the common, important conclusions, e.g. about heritability of autism, IQ, that kinda stuff. they have major methodological problems. but i imagine you could find some study involving twins, about something, which is ok.
if you believe you know a twin study that is not garbage, would you accept an explanation of why it’s garbage as a demonstration of the power and importance of CR philosophy?
http://existentialcomics.com/comic/191
LOL. Oh boy.
Really? So you just used ~~the force~~ philosophy and figured it out? That’s great! Just a minor thing I’m confused about—why are you here chatting on the ’net instead of sitting on your megayacht with a line of VCs in front of your door, willing to pay you gazillions of dollars for telling them which ideas are actually good? This looks to be VERY valuable knowledge, surely you should be able to exchange it for lots and lots of money in this capitalist economy?

When Banzan was walking through a market he overheard a conversation between a butcher and his customer. “Give me the best piece of meat you have,” said the customer.
“Everything in my shop is the best,” replied the butcher. “You cannot find here any piece of meat that is not the best.”
At these words Banzan became enlightened.
http://12stepsandzenkoans.blogspot.com.au/2013/08/everything-is-best-part-ii.html?m=1
the VCs would laugh, like you, and don’t want to hear it. surely this doesn’t surprise you.
i’m also not a big fan of yachts and prefer discussions.
No, what surprises me is your belief that you just figured it all out. Using philosophy. That’s it, we’re done, everyone can go home now.
And since everything is binary and you don’t have any tools to talk about things like uncertainty, this is The Truth and anyone who doesn’t recognize it as such is either a knave or a fool.
There is also a delicious overtone of irony in that a guy as lacking in humility as you are chooses to describe his system as “fallible ideas”.
i have tools to talk about uncertainty, which are different than your tools, and which conceive of uncertainty somewhat differently than you do.
i have not figured it ALL out, but many things, such as the quality of SENS and twin studies.
fallibilism is one of the major philosophical ideas used in figuring things out. it’s crucial but it doesn’t imply, as you seem to believe, hedging, ignorance, equivocation, not knowing much, etc.
Reason. Some.
Appeasing irrational shunning criteria is intellectually self-destructive and those people don’t matter intellectually anyway.
Ivory tower it is, then.
Curi knows things that you don’t. He knows that LW is wrong about some very important things and is trying to correct that. These things LW is wrong about are preventing you making progress. And furthermore, LW does not have effective means for error correction, as curi has tried to explain, and that in itself is causing problems.
Curi is not alone thinking LW is majorly wrong in some important areas. Others do too, including David Deutsch, whom curi has had many many discussions with. I do too, though no doubt there are people here who will say I am just a sock-puppet of curi’s.
curi is not some cheap salesman trying to flog ideas. He is trying to save the world. He is trying to do that by getting people to think better. He has spent years thinking about this problem. He has written tens-of-thousands of posts in many forums, sought out the best people to have discussions with, and addresses all criticisms. He has made himself way more open than anyone to receiving criticism. When millions of people think better, big problems like AGI will be solved faster.
curi right now is the world’s leading expert on epistemology. he got that way not by seeking status and prestige or publications in academic journals but by relentlessly pursuing the truth. All the ideas he holds to be true he has subjected to a furnace of criticism and he has changed his ideas when they could not withstand criticism. And if you can show to very high standards why CR is wrong, curi will concede and change his ideas again.
You have no idea about curi’s intellectual history and what he is capable of. He is by far the best thinker I have ever encountered. He has revealed here only a very tiny fraction of what he knows.
Take him seriously. curi is a resource LW needs.
This is so ridiculously bombastic, it’s funny.
So what has this Great Person achieved in real life? Besides learning Ruby and writing some MtG guides? Given that he is Oh So Very Great, surely he must have left his mark on the world already. Where is that mark?
If you want to be a serious thinker and make your criticisms better, you really need to improve your research skills. That comment is lazy, wrong, and hostile. Curi invented Paths Forward. He invented Yes/No philosophy, which is an improvement on Popper’s Critical Preferences. He founded Fallible Ideas. He kept Taking Children Seriously alive. He has written millions of words on philosophy and added a lot of clarity to ideas by Popper, Rand, Deutsch, Godwin, and so on. He used his philosophy skills to become a world-class gamer …
Again, you show your ignorance. Are you aware of the battles great ideas and great people often face? Think of the ignorance and hostility that is directed at Karl Popper and Ayn Rand. Think of the silence that met Hugh Everett. These things are common. To quote curi:
Gold! This is solid gold!
Have you considered becoming a stand-up comedian?
Why are you here? What interest do you have in being Less Wrong? The world is burning and you’re helping spread the fire.
I’ve been here awhile. Your account is a few days old. Why are you here?
Whether the world is burning or not is an interesting discussion, but I’m quite sure that better epistemology isn’t going to put out the fire. Writing voluminous amounts of text on a vanity website isn’t going to do it either.
That’s not an answer. That’s an evasion.
Epistemology tells you how to think. Moral philosophy tells you how to live. You cannot even fight the fire without better epistemology and better moral philosophy.
Why do you desire so much to impute bad motives to curi?
The question is ill-posed. Without context it’s too open-ended to have any meaning. But let me say that I’m here not to save the world. Is that sufficient?
No, it doesn’t. It deals with acquiring knowledge. There are other things—like logic—which are quite important to thinking.
I don’t impute bad motives to him. I just think that he is full of himself and has… delusions about his importance and relationship to truth.
Human knowledge acquisition happens by learning. It involves coming up with guesses and error-correcting those guesses via criticism in an evolutionary process. This is going on in your mind all the time, consciously and subconsciously. It is how we are able to think. And knowing how this works enables us to think better. This is epistemology. And the breakthrough in AGI will come from epistemology. At a very high level, we already know what is going on.
Sure, but that’s not sufficient. You need to show that the effect will be significant, suitable for the task at hand, and is the best use of the available resources.
Drinking CNS stimulants (such as coffee) in the morning also enables us to think better. So what?
How do you know that?
This is just more evasion.
You know Yudkowsky also wants to save the world right? That Less Wrong is ultimately about saving the world? If you do not want to save the world, you’re in the wrong place.
Hypothetically, suppose you came across a great man who knew he was great and honestly said so. Suppose also that great man had some true new ideas you were unfamiliar with but that contradicted many ideas you thought were important and true. In what way would your response to him be different to your response to curi?
Fail to ask a clear question, and you will fail to get a clear answer.
Not quite save—EY wants to lessen the chance that the humans will be screwed over by off-the-rails AI.
Oh grasshopper, maybe you will eventually learn that not all things are what they look like and even fewer are what they say they are.
I am disinclined to accept your judgement in this matter :-P
Obviously it depends on the way he presented his new ideas. curi’s ideas are not new and were presented quite badly.
There are two additional points here. One is that knowledge is uncertain, fallible, if you wish. Knowledge about the future (= forecasts) is much more so. Great men rarely know they are great; they may guess at their role in history but should properly be very hesitant about it.
Two, I’m much more likely to meet someone who knew he was Napoleon, the rightful Emperor of France, and honestly said so rather than a truly great man who goes around proclaiming his greatness. I’m sure Napoleon has some great ideas that I’m unfamiliar with—what should my response be?
“He is by far the best thinker I have ever encountered. ”
That is either because you are curi, and incapable of noticing someone more intelligent than yourself, or because curi is your cult leader.
What if you are wrong? What then?
The interesting thing is that the answer is “nothing”. Nothing at all.
Or maybe the answer is that progress can be slow.
If you’re wrong you get to avoidably burn in hell. Your life is at stake, which you call “nothing”.
Are you really going to argue for Pascal’s Wager here?
Tell me which single hell you think you’re avoiding and I’ll point out a few others in which you will end up.
What’s so special about this? If you’re wrong about religion you get to avoidably burn in hell too, in a more literal sense. That does not (and cannot) automatically change your mind about religion, or get you to invest years in the study of all possible religions, in case one of them happens to be true.
I didn’t say it was special, I said his answer (“nothing”) is mistaken. The non-specialness actually makes his wrong answer more appalling.
As Lumifer said, nothing. Even if I were wrong about that, your general position would still be wrong, and nothing in particular would follow.
I notice though that you did not deny the accusation, and most people would deny having a cult leader, which suggests that you are in fact curi. And if you are not, there is not much to be wrong about. Having a cult leader is a vague idea and does not have a “definitely yes” or “definitely no” answer, but your comment exactly matches everything I would want to call having a cult leader.
And by the way, even if I were wrong about you being curi or a cult member, you are definitely and absolutely just a sock-puppet of curi’s. That is true even if you are a separate person, since you created this account just to make this comment, and it makes no difference whether curi asked you to do that or if you did it because you care so much about his interests here. Either way, it makes you a sock-puppet, by definition.
thx for trying, anon.
You have it the wrong way around. This is something that you do for yourself, in order to convince other people that you have value to offer for them.
You’re the one who needs to convince your readers that your work is worth engaging with. If you’re not willing to put in the effort needed to convince potential readers of the value of your work, then the potential readers are going to ignore you and instead go read someone who did put in that effort.
I already did put work into that. Then they refused to read references, for unstated reasons, and asked me to rewrite the same things I already wrote, as well as rewrite things written by Popper and others. I don’t want to put in duplicate work.
Any learning—including learning how to communicate persuasively—requires repeated tries, feedback, and learning from feedback. People are telling you what kind of writing they might find more persuasive, which is an opportunity for you to learn. Don’t think of it as duplicate work, think of it as repeatedly iterating a work and gradually getting towards the point where it’s persuasive to your intended audience. Because until you can make it persuasive, the work isn’t finished, so it’s not even duplicating anything. Just finishing what you originally started.
Of course, if you deem that to be too much effort, that’s fair. But the world is full of writers who have taken the opportunity to learn and hone their craft until they could clearly communicate to their readers why their work is worth reading. If you don’t, then you can’t really blame your potential readers for not bothering to read your stuff—there are a lot of things that people could be reading, and it’s only rational for them to focus on the stuff that shows the clearest signs of being important or interesting.
again: i and others already wrote it and they don’t want to read it. how will writing it again change anything? they still won’t want to read it. this request for new material makes no sense whatsoever. it’s not that they read the existing material and have some complaint and want it to be better in some way, they just won’t read.
your community as a whole has no answer to some fairly famous philosophers and doesn’t care. everyone is just like “they don’t look promising” and doesn’t have arguments.
Why should anyone answer this question? Kaj has already written an answer to this question above, but you don’t understand it. How will writing it again change anything? You still won’t understand it. This request for an explanation makes no sense whatsoever. It’s not that you understand the answer and have some complaint and want it to be better in some way, you just won’t understand.
You claim you want to be told when you’re mistaken, but you completely dismiss any and all arguments. You’re just like “these people obviously haven’t spent hundreds of hours learning and thinking about CR, so there is no way they can have any valid opinion about it” and won’t engage their arguments on a level so that they are willing to listen and able to understand.
Do you want new material which is the same as previous material, or different? If the same, I don’t get it. if different, in what ways and why?
It seems no one on LW is able to explain to you how and why people want different material. To my mind, Kaj’s explanation is perfectly clear. I’m afraid it’s up to you, to figure it out for yourself. Until you do, people will keep giving you invalid arguments, or downvote and ignore you.
Why did you go by feelings on this? You could have done some research and found out some things. Critical-Rationalism, Objectivism, Taking-Children-Seriously, Paths-Forward, Yes/No Philosophy, Autonomous Relationships, and other ideas are not things you can hold at arm’s length if you take them seriously. These ideas change your life if you take them seriously, as curi has done. He lives and breathes those ideas and as a result he is living a very unconventional life. He is an outlier right now. It’s not a good situation for him to be in because he lacks peers. So saying curi has not made the ideas he is talking about “truly a part of [him]” is very ignorant.
B+ Too brief.
At one point in that discussion curi says the following, about me:
I’d just like to say, for the record, that that is not an accurate characterization of my opinion or attitudes, and I do not believe it is an accurate characterization of my words either. What is true is that we’d been talking about various Popperish things, and then curi switched to only wanting to talk about my alleged deficiencies in rational conduct and about his “Paths Forward” methodology. I wasn’t interested in discussing those (I’ve no general objection to talking about discussion methodology, but I didn’t want to have that conversation with curi on that occasion) and he wasn’t willing to discuss anything else.
I still have no idea what “hostile to using references” is meant to mean.
It means you’re unwilling to go to curi’s website and read all he has written on the topic when he points you there.
Maybe. Though actually I have gone to curi’s website (or, rather, websites; he has several) and read his stuff, when it’s been relevant to our discussions. But, y’know, I didn’t accept Jesus into my life^W^W^W^W the Paths Forward approach, and therefore there’s no point trying to engage with me on anything else.
[EDITED to add:] Am I being snarky? Why, yes, I am being snarky. Because I spent hours attempting to have a productive discussion with this guy, and it turned out that he wasn’t prepared to do that unless he got to set every detail of the terms of discussion. And also because he took all the discussions he’d had on the LW slack and published them online without anyone’s consent (in fact, he asked at least one person “is it OK to post this somewhere else?” and got a negative answer and still did it). For the avoidance of doubt, so far as I know there’s nothing particularly incriminating or embarrassing in any of the stuff he posted, but of course the point is that he doesn’t get to choose what someone else might be unwilling to have posted in a public place.
So have I, but curi’s understanding of “using references” is a bit more particular than that. Unrolled, it means “your argument has been dealt with by my tens of thousands of words over there [waves hand in the general direction of the website], so we can consider it refuted and now will you please stop struggling and do as I tell you”.
Embrace your snark and it will set you free! :-D
Disclosure: I didn’t read Popper in original (nor do I plan to in the nearest future; sorry, other priorities), I just had many people mention his name to me in the past, usually right before they shot themselves in their own foot. It typically goes like this:
There is a scientific consensus (or at least current best guess) about X. There is a young smart person with their pet theory Y. As the first step, they invoke Popper to say that science didn’t actually prove X, because it is not the job of science to actually prove things; science can merely falsify hypotheses. Therefore, the strongest statement you can legitimately make about X is: “So far, science has not falsified X”. Which is coincidentally also true about Y (or about any other theory you make up on the spot). Therefore, from the “naively Popperian” perspective, X and Y should have equal status in the eyes of science. Except that so far, much more attention and resources have been thrown at X, and it only seems fair to throw some attention and resources at Y now; and if scientists refuse to do that, well, they fail at science. Which should not be surprising at all, because it is known that scientists generally fail at science.
After reading your summary of Popper (thanks, JenniferRM), my impression is that Popper did a great job debunking some mistaken opinions about science; but ironically, became himself an often-quoted source for other mistaken opinions about science. (I should probably not blame Popper here, but rather the majority of his fans.)
The naive version of science (unfortunately, still very popular in the humanities) that Popper refuted goes approximately like this (of course, a lot of simplification):
And the “naively Popperian” opposite perspective (again, simplified a lot) goes like this:
I admit that perhaps, given an unlimited amount of resources, we could do science in the “naively Popperian” way. (This is how AIXI would do it, perhaps to its own detriment.) But this is not how actual science works in real life; and not even how idealized science with fallible-but-morally-flawless scientists could work. In real life, the probability of a tested hypothesis is better than random. For example, if there is a 1 : 1000000 chance that a random molecule could cure a disease X, it usually requires much less than 1000000 studies to find the cure for X. (A pharmaceutical company with a strategy “let’s try random molecules and do scientific studies whether they cure X” would go out of business. Even a PhD student throwing together random sequences of words and trying to falsify them would probably fail to get their PhD.) Falsification can be the last step in the game, but it’s definitely not the only step.
If I can make an analogy with evolution (of course, analogies can only get us so far, then they break), induction and falsification are to science what mutation and selection are to evolution. Without selection, we would get utter chaos, filled by mostly dysfunctional mutants (or more like just unliving garbage). But without mutation, at best we would get “whatever was the fittest in the original set”. Note that a hypothetical super-mutation where the original organism would be completely disassembled to atoms, and then reconstructed in a completely original random way, would also fail to produce living organisms (until we would throw unlimited resources at the process, which would get us all possible organisms). On the other hand, if humans create an unnatural (but capable of surviving) organism in a lab and release it in the wild, evolution can work with that, too.
Similarly, without falsification, science would be reduced to yet another channel for fashionable dogma and superstition. But without some kind of induction behind the scenes, it would be reduced to trying random hypotheses, and failing at every hypothesis longer than 100 words. And again, if you derive a hypothesis by a method other than induction, science can work with that, too. It’s just, the less the new hypothesis is related to what we already know about the nature, the smaller the chance it could be right. So in real life, most new hypotheses that survive the initial round of falsifications are generated by something like induction. We may not talk about it, but that’s how it is. It is also a reason why scientists study existing science before inventing their own hypotheses. (In a hypothetical world where induction does not work, all they would have to do is study the proper methods of falsification.)
Related chapter of the Less Wrong Sequences: “Einstein’s Arrogance”.
tl;dr—“induction vs falsification” is a false dilemma
(BTW, I agree with gjm’s response to your last reply in our previous discussion, so I am not going to write my own.)
EDIT: By the way, there is a relatively simple way to cheat the falsifiability criterion by creating a sequence of hypotheses, where each one of them is individually technically falsifiable, but the sequence as a whole is not. So when the hypothesis H42 gets falsified, you just move to hypothesis H43 and point out that H43 is falsifiable (and different from H42, therefore the falsification of H42 is irrelevant in this debate), and demand that scientists either investigate H43 or admit that they are dogmatic and prejudiced against you.
As an example, let hypothesis H[n] be: “If you accelerate a proton to 1 − 1/10^n of the speed of light, a Science Fairy will appear and give you a sticker.” Suppose we have experimentally falsified H1, H2, and H3; what would that say about H4, or say H99? (Bonus points if you can answer this question without using induction.)
Funny you should mention this.
source
The sequence idea doesn’t work b/c you can criticize sequences or categories as a whole, criticism doesn’t have to be individualized (and typically shouldn’t be – you want criticisms with some generality).
Most falsifiable hypotheses are rejected for being bad explanations, containing internal contradictions, or other issues – without empirical investigation. This is generally cheaper and is done with critical argument. If someone can generate a sequence of ideas you don’t know of any critical arguments against, then you actually do need some better critical arguments (or else they’re actually good ideas). But your example is trivial to criticize – what kind of science fairy, why will it appear in that case, if you accelerate a proton past a speed will that work or does it have to stay at the speed for a certain amount of time? does the fairy or sticker have mass or energy and violate a conservation law? It’s just arbitrary, underspecified nonsense.
most ppl who like most things are not so great. that works for Popper, induction, socialism, Objectivism, Less Wrong, Christianity, Islam, whatever. your understanding of Popper is incorrect, and your experiences do not give you an accurate picture of Popper’s work. meanwhile, you don’t know of a serious criticism of CR by someone who does know what they’re talking about, whereas I do know of a serious criticism of induction which y’all don’t want to address.
If you look at the Popper summary you linked, it has someone else’s name on it, and it isn’t on my website. This kind of misattribution is the quality of scholarship I’m dealing with here. anyway here is an excerpt from something i’m currently in the process of writing.
(it says “Comment too long” so i’m going to try putting it in a reply comment, and if that doesn’t work i’ll pastebin it and edit in the link. it’s only 1500 words.)
Critical Rationalism (CR)
CR is an epistemology developed by 20th century philosopher Karl Popper. An epistemology is a philosophical framework to guide effective thinking, learning, and evaluating ideas. Epistemology says what reason is and how it works (except the epistemologies which reject reason, which we’ll ignore). Epistemology is the most important intellectual field, because reason is used in every other field. How do you figure out which ideas are good in politics, physics, poetry or psychology? You use the methods of reason! Most people don’t have a very complete conscious understanding of their epistemology (how they think reason works), and haven’t studied the matter, which leaves them at a large intellectual disadvantage.
Epistemology offers methods, not answers. It doesn’t tell you which theory of gravity is true, it tells you how to productively think and argue about gravity. It doesn’t give you a fish or tell you how to catch fish, instead it tells you how to evaluate a debate over fishing techniques. Epistemology is about the correct methods of arguing, truth-seeking, deciding which ideas make sense, etc. Epistemology tells you how to handle disagreements (which are common to every field).
CR is general purpose: it applies in all situations and with all types of ideas. It deals with arguments, explanations, emotions, aesthetics – anything – not just science, observation, data and prediction. CR can even evaluate itself.
Fallibility
CR is fallibilist rather than authoritarian or skeptical. Fallibility means people are capable of making mistakes and it’s impossible to get a 100% guarantee that any idea is true (not a mistake). And mistakes are common so we shouldn’t try to ignore fallibility (it’s not a rare edge case). It’s also impossible to get a 99% or even 1% guarantee that an idea is true. Some mistakes are unpredictable because they involve issues that no one has thought of yet.
There are decisive logical arguments against attempts at infallibility (including probabilistic infallibility).
Attempts to dispute fallibilism are refuted by a regress argument. You make a claim. I ask how you guarantee the claim is correct (even a 1% guarantee). You make a second claim which gives some argument to guarantee the correctness of the first claim (probabilistically or not). No matter what you say, I ask how you guarantee the second claim is correct. So you make a third claim to defend the second claim. No matter what you say, I ask how you guarantee the correctness of the third claim. If you make a fourth claim, I ask you to defend that one. And so on. I can repeat this pattern infinitely. This is an old argument which no one has ever found a way around.
CR’s response to this is to accept our fallibility and figure out how to deal with it. But that’s not what most philosophers have done since Aristotle.
Most philosophers think knowledge is justified, true belief, and that they need a guarantee of truth to have knowledge. So they have to either get around fallibility or accept that we don’t know anything (skepticism). Most people find skepticism unacceptable because we do know things – e.g. how to build working computers and space shuttles. But there’s no way around fallibility, so philosophers have been deeply confused, come up with dumb ideas, and given philosophy a bad name.
So philosophers have faced a problem: fallibility seems to be indisputable, but also seems to lead to skepticism. The way out is to check your premises. CR solves this problem with a theory of fallible knowledge. You don’t need a guarantee (or probability) to have knowledge. The problem was due to the incorrect “justified, true belief” theory of knowledge and the perspective behind it.
Justification is the Major Error
The standard perspective is: after we come up with an idea, we should justify it. We don’t want bad ideas, so we try to argue for the idea to show it’s good. We try to prove it, or approximate proof in some lesser way. A new idea starts with no status (it’s a mere guess, hypothesis, speculation), and can become knowledge after being justified enough.
Justification is always due to some thing providing the justification – be it a person, a religious book, or an argument. This is fundamentally authoritarian – it looks for things with authority to provide justification. Ironically, it’s commonly the authority of reasoned argument that’s appealed to for justification. Which arguments have the authority to provide justification? That status has to be granted by some prior source of justification, which leads to another regress.
Fallible Knowledge
CR says we don’t have to justify our beliefs, instead we should use critical thinking to correct our mistakes. Rather than seeking justification, we should seek our errors so we can fix them.
When a new idea is proposed, don’t ask “How do you know it?” or demand proof or justification. Instead, consider if you see anything wrong with it. If you see nothing wrong with it, then it’s a good idea (knowledge). Knowledge is always tentative – we may learn something new and change our mind in the future – but that doesn’t prevent it from being useful and effective (e.g. building space shuttles that successfully reach the moon). You don’t need justification or perfection to reach the moon, you just need to fix errors with your designs until they’re good enough to work. This approach avoids the regress problems and is compatible with fallibility.
The standard view said, “We may make mistakes. What should we do about that? Find a way to justify an idea as not being a mistake.” But that’s impossible.
CR says, “We may make mistakes. What should we do about that? Look for our mistakes and try to fix them. We may make mistakes while trying to correct our mistakes, so this is an endless process. But the more we fix mistakes, the more progress we’ll make, and the better our ideas will be.”
Guesses and Criticism
Our ideas are always fallible, tentative guesses with no special authority, status or justification. We learn by brainstorming guesses and using critical arguments to reject bad guesses. (This process is literally evolution, which is the only known answer to the very hard problem of how knowledge can be created.)
How do you know which critical arguments are correct? Wrong question. You just guess it, and the critical arguments themselves are open to criticism. What if you miss something? Then you’ll be mistaken, and hopefully figure it out later. You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it. You can get clues about some important, relevant mistakes because problems come up in your life (indicating where to direct more attention and what to try to improve).
CR recommends making bold, clear guesses which are easier to criticize, rather than hedging a lot to make criticism difficult. We learn more by facilitating criticism instead of trying to avoid it.
Science and Evidence
CR pays extra attention to science. First, CR offers a theory of what science is: a scientific idea is one which could be contradicted by observation because it makes some empirical claim about reality.
Second, CR explains the role of evidence in science: evidence is used to refute incorrect hypotheses which are contradicted by observation. Evidence is not used to support hypotheses. There is evidence against but no evidence for. Evidence is either compatible with a hypothesis, or not, and no amount of compatible evidence can justify a hypothesis because there are infinitely many contradictory hypotheses which are also compatible with the same data.
These two points are where CR has so far had the largest influence on mainstream thinking. Many people now see science as being about empirical claims which we then try to refute with evidence. (Parts of this are now taken for granted by many people who don’t realize they’re fairly new ideas.)
CR also explains that observation is selective and interpreted. We first need ideas to decide what to look at and which aspects of it to pay attention to. If someone asks you to “observe”, you have to ask them what to observe (unless you can guess what they mean from context). The world has more places to look, with more complexity, than we can keep track of. So we have to do a targeted search according to some guesses about what might be productive to investigate. In particular, we often look for evidence that would contradict (not support) our hypotheses in order to test them and try to correct our errors.
We also need to interpret our evidence. We don’t see puppies, we see photons which we interpret as meaning there is a puppy over there. This interpretation is fallible – sometimes people are confused by mirrors, mirages (where blue light from the sky goes through the hotter air near the ground then up to your eyes, so you see blue below you and think you found an oasis), fog (you can mistakenly interpret whether you did or didn’t see a person in the fog), etc.
Seems like these “critical arguments” do a lot of heavy lifting.
Suppose you make a critical argument against my hypothesis, and the argument feels smart to you, but silly to me. I make a counter-argument, which to me feels like it completely demolishes your position, but in your opinion it just shows how stupid I am. Suppose the following rounds of arguments are similarly fruitless.
Now what?
In a situation between a smart scientist who happens to be right, and a crackpot who refuses to admit the smallest mistake, how would you distinguish which is which? The situation seems symmetrical: both sides are yelling at each other, no progress on either side.
Would you decide by which argument seems more plausible to you? Then you are just another person in a three-person ring, and the current balance of power happens to be 2:1. Is this about having a majority?
Or would you decide that “there is no answer” is the right answer? In that case, as long as there remains a single crackpot on this planet, we have a scientific controversy. (You can’t even say that the crackpot is probably wrong, because that would be probabilistic reasoning.)
Seems to me you kinda admit that knowledge is ultimately uncertain (i.e. probabilistic), but you refuse to talk about probabilities. (Related LW concept: “Fallacy of gray”.) We are fallible, but it is wrong to make a guess how much. We resolve experimentally uncertain hypotheses by verbal fights, which we pretend have exactly one of three outcomes: “side A lost”, “side B lost”, “neither side lost”; nothing in between, such as “side A seems 3x more convincing than side B”. I mean, if you start making too many points on a line, it would start to resemble a continuum, and your argument seems to be that there is no quantitative certainty, only qualitative; that only 0, 1, and 0.5 (or perhaps NaN) are valid probabilities of a hypothesis.
Okay, I feel like I am already repeating myself.
Is the crackpot being responsive to the issues and giving arguments – arguments are what matter, not people – or is he saying non-sequiturs and refusing to address questions? If he speaks to the issues we can settle it quickly; if not, he isn’t participating and doesn’t matter. If we disagree about the nature of what’s taking place, it can be clarified, and I can make a judgement which is open to Paths Forward. You seem to wish to avoid the burden of this judgement by hedging with a “probably”.
Fallibility isn’t an amount. Correct arguments are decisive or not; confusion about this is commonly due to vagueness of problem and context (which are not matters of probability and cannot be accurately summed up that way). See https://yesornophilosophy.com
I wish to conclude this debate somehow, so I will provide something like a summary:
If I understand you correctly, you believe that (1) induction and probabilities are unacceptable for science or “critical rationalism”, and (2) weighing evidence can be replaced by… uhm… collecting verbal arguments and following a flowchart, while drawing a tree of arguments and counter-arguments (hopefully of a finite size).
I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.
First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don’t, then you can’t predict anything about the future (because under the hypothetical new laws of physics, anything could happen). And you even can’t say anything about the past, because all our conclusions about the past are based on observing what we have now, and expecting that in the past it was exposed to the same laws of physics. Without induction, there is no argument against “last Thursdayism”.
Second, because although you refuse to talk about probabilities, and definitely object to using any numbers, some expressions you use are inherently probabilistic; you just insist on using vague verbal descriptions, which more or less means rounding the probability scale from 0% to 100% into a small number of predefined baskets. There is a basket called “falsified”, a basket called “not falsified, but refuted by a convincing critical argument”, a basket called “open debate; there are unanswered critical arguments for both sides”, and a basket called “not falsified, and supported by a convincing critical argument”. (Well, something like that. The number and labels of the baskets are most likely wrong, but ultimately, you use a small number of baskets, and a flowchart to sort arguments into their respective baskets.) To me, this sounds similar to refusing to talk about integers, and insisting that the only scientifically valid values are “zero”, “one”, “a few”, and “many”. I believe that in real life you can approximately distinguish whether your chance of being wrong is more in the order of magnitude of “one in ten” or “one in a million”. But your vocabulary does not allow you to make this distinction; there is only the unspecific “no conclusion” and the unspecific “I am not saying it’s literally 100% sure, but generally yes”; and at some point on the probability scale you will make the arbitrary jump from the former to the latter, depending on how convincing the critical argument is.
On your website, you have a strawman powerpoint presentation about how people measure “goodness of an idea” by adding or removing goodness points, on a scale 0-100. Let me tell you that I have never seen anyone using or supporting that type of scale; neither on Less Wrong, nor anywhere else. Specifically, Bayes Theorem is not about “goodness” of an idea; it is about mathematical probability. Unlike “goodness”, probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with “goodness” of the idea “this is the first barrel” or “this is the second barrel”.
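The barrel calculation above can be made concrete with a few lines of Python. This is a minimal sketch: the comment doesn’t say whether the five balls are drawn with or without replacement, so I assume draws with replacement for simplicity (the qualitative conclusion is the same either way).

```python
# Bayes' theorem for the two-barrel example.
# Barrel A: 90 white, 10 black balls; Barrel B: 10 white, 90 black.
# A barrel is chosen at random (prior 0.5 each); we draw 5 balls
# (assumed with replacement) and observe 4 white, 1 black.

p_white = {"A": 0.9, "B": 0.1}
prior = {"A": 0.5, "B": 0.5}

# Likelihood of 4 white + 1 black from each barrel. The binomial
# coefficient C(5,4) is the same for both barrels, so it cancels
# out of the posterior and can be omitted.
likelihood = {b: p_white[b] ** 4 * (1 - p_white[b]) for b in ("A", "B")}

# Normalize: posterior = prior * likelihood / total evidence.
evidence = sum(prior[b] * likelihood[b] for b in prior)
posterior = {b: prior[b] * likelihood[b] / evidence for b in prior}

print(round(posterior["A"], 4))  # → 0.9986
```

After one black ball among five draws, the first barrel is already about 99.9% probable: a graded, quantitative conclusion rather than a verbal basket.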
My last observation is that your methodology of “let’s keep drawing the argument tree, until we reach the conclusion” allows you to win debates by mere persistence. All you have to do is keep adding more and more arguments, until your opponent says “okay, that’s it, I also have other things to do”. Then, according to your rules, you have won the debate; now all nodes at the bottom of the tree are in favor of your argument. (Which is what I also expect to happen right now.)
And that’s most likely all from my side.
This is the old argument that CR smuggles induction in via the backdoor. Critical Rationalists have given answers to this argument. Search, for example, what Rafe Champion has to say about induction smuggling. Why have you not done research about this before commenting? Your point is not original.
Are you familiar with what David Deutsch had to say about this in, for example, The Fabric of Reality? Again, you have not done any research and you are not making any new points which have not already been answered.
Critical Rationalists have also given answers to this, including Elliot Temple himself. CR has no problem with the probabilities of events—which is what your example is about. But theories are not events and you cannot associate probabilities with theories. You have still not made an original point which has not been discussed previously.
Why do you think that some argument which crosses your mind hasn’t already been discussed in depth? Do you assume that CR is just some mind-burp by Popper that hasn’t been fully fleshed out?
they’ve never learned or dealt with high-quality ideas before. they don’t think those exist (outside certain very specialized non-philosophy things mostly in science/math/programming) and their methods of dealing with ideas are designed accordingly.
You are grossly ignorant of CR, which you grossly misrepresent, and you want to reject it without understanding it. The reasons you want to throw it out while attacking straw men are unstated and biased. Also, you don’t have a clear understanding of what you mean by “induction” and it’s a moving target. If you actually had a well-defined, complete position on epistemology I could tell you what’s logically wrong with it, but you don’t. For epistemology you use a mix of 5 different versions of induction (all of which together still have no answers to many basic epistemology issues), a buggy version of half of CR, as well as intuition, common sense, what everyone knows, bias, etc. What an unscholarly mess.
What you do have is more ability to muddy the waters than patience or interest in thinking. That’s a formula for never knowing you lost a debate, and never learning much. It’s understandable that you’re bad at learning about new ideas, bad at organizing a discussion, bad at keeping track of what was said, etc, but it’s unreasonable that, due to your inability to discuss effectively, you blame CR methodology for the discussion not reaching a conclusion fast enough and quit. The reason you think you’ve found more success when talking with other people is that you find people who already agree with you about more things before the discussion starts.