Experimental Open Thread April 2019: Socratic method
This post was popular, but the idea never got picked up. Let’s have an experimental open thread this month!
The rules:
Top-level comments should be claims. Second-level comments are discouraged from directly saying that someone is wrong; instead, they should ask questions that get the claimant to think.
Let top-level comments be debatable claims, first-tier responses be questions, second-tier be answers, then further questions, answers, and so on. Try to go as deep as possible; I’d expect an actual update to become increasingly likely as the conversation continues.
Claim: a typical rationalist is likely to be relying too much on legibility, and would benefit from sometimes not requiring an immediate explicit justification for their beliefs.
Question: What empirical evidence do you have about this? (E.g., what do you observe introspectively, what have you seen others doing, etc., and how sure are you that those things are the way you think they are?)
Well, I don’t really have a justification for it (ha), but I’ve noticed that explicit deductive thought rarely leads me to insights that turn out to be useful. Instead, I find that simply waiting for ideas to pop into my head makes the right ideas pop into my head.
Question: How representative do you think posts on Less Wrong are of how rationalists make decisions in practice? If there is a difference, do you think spending time on LW may affect your perspective on how rationalists make decisions?
A few related questions:
(1) What is your favourite LW post (or blog post) for?
(2) How do you think the opinions of LW posters will be expressed (if they are not on the ‘better’ topics you might like to include)?
(3) Any existing LW comments are very very welcome.
(4) If you are a regular poster, what is the biggest incentive for you to comment?
(5) This website is very personal, and you may have trouble detecting people’s personal page, so you might want to make a serious shot at understanding what is going on here.
I think a large part of LW’s appeal is to “people feel comfortable and positive”, and we are all strongly in favor of this.
Some of my friends think that Less Wrong, and many of my communities’ memes, are much more effective than other memes, and the truth of this is most apparent in the media.
I think the truth of this whole business is most clear in the meme.
(I don’t actually know that much (though there is an interesting and surprising amount of writing by smart people).)
That last parenthetical remark is funny when you consider how GPT-2 knows nothing new but just reshuffles the “interesting and surprising amount of writing by smart people”.
Eliezer,
This is indeed interesting and informative—I can’t see anything else on that thread except the title. How does Eliezer link to this “thing” and this “thing” when he says that it’s a “boring idea”?
Who do you mean by the phrase “typical rationalist”?
I think “typical X does Y” is shorthand for “many or most Xs do Y”.
That still leaves open what “X” is.
There’s a way to put this in a sentence like this:
We have now established that a monist approach to overcoming a bias is good.
So, this sentence has been read on LW (a link is at http://lesswrong.com/r/discussion/lw/jb/the_rationality_contribution/):
I consider myself to be one of the most intelligent people in the world.
Let me say it again, this is one of those things you’re advocating for.
I have never shown a clear, clear avenue for finding truth, and I don’t find it very convincing. The word “acoustic vibrations” seems like its own word, but that doesn’t mean I have anything to add to the argument—unless you mean to refer to an ongoing auditory experience before proceeding. The best way I can tell you is that your brain was designed to detect this, not to assess the actual auditory experience.
For a brief, though, my favorite quote of yours:
“There are many cases where auditory experiences are so unpleasant that we don’t notice them or listen to them. Like visual scenes, if we are to consider the problem in our own mental peculiarities, we simply cannot proceed there.”
I agree, especially when I say this, but there seems to be an advantage to the idea of people being able to perceive certain kinds of events in a way that unifies them into concrete, easily recognizeable experiences. That is, if the conscious mind/mind can recognize objects in the light of sound, then you can, without being deaf, imagine hearing somebody else speak in a tone of shock and outrage that doesn’t sound right. All this, the way to understand speech, is to understand the listener’s reaction, without sounding heard. But for most people, it isn’t enough. We can recognize most of the discomfort ourselves easily, especially if we’re doing something weird. And yet, this ability to recognize objects like attacking them with a trump card is an essential part of language education, filling
I’m not sure they were the same thing, but I agree with the OP about what it was.
Not entirely? I don’t think that “typical rationalist” is that precise, but it’s what I’ll be talking about when I write this post.
Clarification request: At face value you’re implying that typical rationalists always do require immediate explicit justification for their beliefs. I wonder whether that’s an exaggeration for rhetorical effect. Could you be a bit more, um, explicit about just what the state of affairs is that you’re suggesting is suboptimal?
You saw that correctly. What I mean is too often, not always.
Circle geometry should be removed from the high school maths syllabus and replaced with statistics because stats is used in science, business and machine learning, while barely anyone needs circle geometry.
While I agree that circle geometry is best left for specialized elective math classes, and that some basic statistical ideas like average, variance and the Bell curve can be useful for an average person, I am curious which alternatives to circle geometry you considered before settling on stats as the best candidate?
That’s a good point. There’s all kinds of things that might be worth considering adding, such as programming, psychology or political philosophy. I guess my point was only that if we were going to replace it with something within maths, then stats seems to be the best candidate (at least better than any of the other content that I covered in university).
My personal take on the math of game theory is that most games are really, really simple to play. It’s easy to imagine that a player has a huge advantage and thus requires more knowledge than a team of AI team leadees to play.
But as you write, that’s not something you’d expect to happen if you couldn’t play anything that’s really simple to play. Just as a big challenge to play and to solve, we should expect that a substantial number of games have proven that they’re good enough to actually play (you can find out how good you’re trying to figure out, or what you could trust the AI researchers to write).
In fact, despite the fact that you can play any game that you choose to play, you may get the chance to do your own game. I imagine that’s not so helpful in mindlessly trying to think in words. If you want to have a game that’s going to prove it.
But I also offer a chance to write a computer game on prediction markets. I can write a game. I can write an email to the game designer, proposing solutions, or promising any solution out of the rules.
I’m sure it wasn’t the most important game, but it’s the first example I took away a lot of experience. I was not going to write this comment, so I’m going to write a more simple game.
I will publish the full logs for anyone who wants it.
If the problem is one of (x-and-x-and-x-and-x and x-and-x-and-x-and-x-and-x and y-and-y-and-x-and… I am happy to answer, as well as for others I am sure are confused by the relevant bits and may be able to retype them with math if I want to.
A good way to talk about this is to ask whether one is in the middle of a problem solving style, but that is a bit harder to communicate in words. Even if you are not in the middle of the problem solving style, you can get a pretty clean sense of the problem by going out. (There is some confusion about this, but if you haven’t already, you can read up on the paper at http://x-and-x-and-x-and-x-and-the-interrogable.
You might not be in a problem solving style (which you may or may not have, and that might be the case for many of them), but this is your opportunity to help your quest as a rationalist.
Questions:
1) Do you consider circle geometry to be the most useless high school subject? How about replacing literature with statistics?
2) Even though circle geometry is rarely used directly by average adults, it’s relatively easy to grasp and helps to develop mathematical thinking. Statistics is more involved and requires some background in combinatorics and discrete math, which are not covered in many schools. Do you think the majority of high school students will be able to understand statistics when it’s taught instead of circle geometry?
1) That’s a good point, but I was thinking about how to improve the high school maths syllabus, not so much about high school in general. I don’t have any strong opinion on whether literature should be removed instead, if it were one or the other. However, I do have other ideas for literature. I’d replace it with a subject that is half writing and giving speeches about what students are passionate about, and half reading books mostly just for participation marks. The kinds of things students currently do in literature would become part of an elective only.
2) p-testing is a rather mechanised process. It’s exactly the kind of thing high school is good at teaching. Basic Bayesian statistics only has one key formula (although it has another form). Even if there is a need for prerequisite units in order to prepare students, it still seems worthwhile.
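The “one key formula” mentioned above can be sketched in a few lines. This is a minimal illustration of Bayes’ theorem with made-up diagnostic-test numbers, not anything taken from the thread:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers below are hypothetical, chosen only for illustration.

prior = 0.01           # P(disease) in the population
sensitivity = 0.95     # P(positive test | disease)
false_positive = 0.05  # P(positive test | no disease)

# P(E): total probability of a positive test.
evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence  # P(disease | positive test)
print(round(posterior, 3))  # → 0.161
```

Even with a 95%-accurate test, the posterior is only about 16%, which is exactly the kind of counterintuitive result a high school unit could build around.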
Do you think that the mechanical act of plugging numbers into formulas is more important than the conceptual act of understanding what a statistical test actually means?
In terms of use, most people only need to know a few basic facts like “a p-value is not a probability”, which high school teachers should be able to handle. Those who seriously need statistics could cover it at a higher level at university and gain the conceptual understanding there.
It seems that a lot of people who take lessons covering Student’s t-test come out of them believing that the p-value is the probability that the claim is true. I would expect that most high school students wouldn’t come out of such classes with a correct understanding.
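That misconception is easy to demonstrate by simulation. The sketch below (my own toy example, not from the thread) runs many experiments in which the null hypothesis is true by construction; “p < 0.05” still shows up at roughly the 5% rate, so a small p-value cannot be the probability that the claim is true:

```python
import math
import random

random.seed(1)
n, trials, rejections = 100, 10_000, 0
for _ in range(trials):
    # Flip a fair coin n times: the null hypothesis is true by construction.
    heads = sum(random.randint(0, 1) for _ in range(n))
    z = (heads - n / 2) / math.sqrt(n / 4)  # z-statistic for "the coin is fair"
    if abs(z) > 1.96:                       # two-sided p < 0.05
        rejections += 1
print(rejections / trials)  # roughly 0.05, even though the null is always true
```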
Meta: Downvoted because this is not a question.
I think that your “solution” is the right one. I don’t think there’s any reason to believe it was.
“It’s going to be a disaster,” you say. “And it’s always a disaster.”
Rationality is not a rational belief system but it is a way of using intuition to guide anticipation. I think a lot of people are using intuition to guide their anticipation (because it’s not like they think they need intuition) but I find it a bit doubtful. It seems to me that intuition is a method for determining anticipation and anticipation in cases where the answer is “no”.
I don’t think that intuition directly helps people build intuition, but it seems to help me in thinking when trying to find things to optimize for. In my experience, intuition is a way of being more accurate when you have a choice of response.
The most important thing to recognize is that the feeling people have when you think about things is that it’s a feeling they can’t control or control. It’s not that intuition directly makes things, it’s that an intuition can’t control the feeling. If you are familiar with the concept of a feeling, then you can go ahead and build intuition for the concept as well if you are also familiar with it.
The only places where intuition is a useful tool are those where it’s a way to start your thinking on concrete problems. That’s how I’ve found that I’ve learned something like “learn the right way to approach the problem”.
What do you mean by circle geometry?
Good point, I should have clarified this more. I’m not saying that people shouldn’t know how to calculate the area and circumference of a circle, as people may actually use that. It’s more about all the material on tangents, chords, and shapes inscribed in circles.
Possible uses:
1. Passing tests—in a geometry class, taking the ACT (I don’t know, maybe it’s part of getting a GED).
2. Your interest in geometry is not merely theoretical, but practical. Maybe you construct things, perhaps out of wood using power tools. (You may find it useful to design/implement a coordinate system on a piece of wood to assist with getting the dimensions of things right as you cut them out with a saw. Someone may have already invented this.) If you are trying to find the area under a curve, you may find it useful to buy very fine, high quality paper, graph the shape of the curve, cut it out and weigh it, and use the average weight of the paper per square inch or centimeter to find the answer. (This relies on the material being consistent throughout, and weighing about the same everywhere.)
3. Despite your claims that you would never use math, or this part of math, someday you find yourself* designing a dome, or even a half sphere, perhaps as a place to live. The floor plan is a circle.
4. You enjoy math. You enjoy learning this/using this knowledge on puzzles to challenge your wits. (See 6)
5. You end up as a teacher, assistant, or tutor. The subject is math. (Perhaps you realize that not every geometry student that will one day teach geometry is aware of this fact.) Whether or not you learned all the fancy stuff the first time, if you didn’t retain it you have to learn it again—well enough to teach it to someone that doesn’t like the subject as much as you—and you hated geometry (class). (It was required.)
6. You learn visual calculus. Other mathematicians may compose long, elaborate arguments that they publish in papers that may take days to decipher (and that seem to push the world ever closer to proofs people can’t read, but computers have apparently checked—or been used to produce). Perhaps your proofs employ no words, but consist instead of a picture that employs esoteric knowledge (such as that of tangents and chords) to solve problems beautifully (and quickly). Perhaps this profound knowledge makes you both a better mathematician, and a better teacher of math.
In summary:
It’s arbitrary, but it’s part of the curriculum that’s tested on. (However students or teachers feel about it.)
You use the knowledge—specifically*, or generally**.
You enjoy learning, or a challenge.
Teaching.
“Higher math.” (You can see more up here, but the air is thinner. You feel kind of dizzy.)
*Your life has turned into a geometry problem. Is this hell?
**The square-cube law is general and useful. I think the second property is a result of the first.
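The paper-weighing trick in item 2 amounts to a single division. A quick numeric sketch (all numbers hypothetical):

```python
# Area of a cut-out shape = mass of the cut-out / areal density of the paper.
sheet_mass_g = 5.0             # a full sheet weighs 5 g (hypothetical)
sheet_area_cm2 = 21.0 * 29.7   # an A4 sheet, in cm^2
density_g_per_cm2 = sheet_mass_g / sheet_area_cm2

cutout_mass_g = 1.2            # the curve cut-out weighs 1.2 g (hypothetical)
cutout_area_cm2 = cutout_mass_g / density_g_per_cm2
print(round(cutout_area_cm2, 1))  # → 149.7
```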
That doesn’t mean your view can’t be correct. It’s as true as you are claiming to be. The claim is that it’s difficult to determine whether there’s actually a law of physics about how to deal with quantum mechanics.
If there wasn’t, then you would be far wrong. If there were, then either you and I would have different opinions. But what I would be proposing is a way for our disagreement about what ‘true’ means: that we should not be too confident or too skeptical about other people’s points on the theory, which could give us an overly harsh criticism, or make us look like the kind of fool who hasn’t yet accepted them yet.
I think the correct answer to this problem would be a question of how confident are we that the point being made is the correct point? It seems obvious to me that we have no idea about the nature of the dispute. If I disagree, then I think I’ll go first.
If a question is really important and it comes down to the point of people saying “I think X” then it ought to come down to the following:
“I think X is true, and therefore Y is true. If we disagree, then I don’t think X is true, and therefore Y is true.”
In this case, if we had the same thing, but also had a different conversation (as in with Mr. Lee’s comment at the end of the chapter), our disagreement could be resolved by someone else directly debating the point (we could debate the details of this argument, if they disagree).
In other words, we are all in agreement that we should be confident that we have considered the point, but it’s better to accept that we’re making a concession. But the point is that we know we shouldn’t be confident that it’s an argument that we would not be confident would work, or that we shouldn’t be confident about it.
In all cases, this is the point that it often seems to be getting.
This may seem like a pretty simple and non-obvious argument to me, but it is. And it seems the point was that there are many situations where you and some of your friends agree that the point should be resolved and that it’s reasonable to agree that the point should be fairly obvious so the disagreement seems to be a bit more complicated.
I read somewhere that there’s a norm in academia that it should never be controversial for a student to
I can’t provide an answer, but it might help you. So I am asking you to like it. I’m not sure if you want to take a look at the Wikipedia page on Circling, or in any other forums, that’s enough to give me something to think about.
What is Circling, though?
The point of Circling is to teach you about things that are fundamentally personal (me included); there’s nothing inherently wrong with that. Circling feels to teach you social skills (me included). Like any other rationality training tool, Circling will teach you how to interact with the outside world (me included). Like any other rationality training tool, Circling will teach you how to Look (and how to ask for help understanding), and how to Look (and get help with the content).
Also, circling is strongly related to self-improvement, and one of the best tools I’ve found on the internet is Circling.
So, what are some things you might try for yourself?
I’ve been reading the sequences, and so far it seems very good. If a math textbook is worth reading, I think it is.
Here’s some specific things I have taken from the sequences, all relevant to this:
The ability to calculate your points. For instance, you might find someone who gives a much more concrete example of how to calculate a point, and some example how to calculate (e.g. a calculator).
A set of fixed point questions:
how fast would you attempt to figure out the answer if you had to read a given textbook
how long would you have to try to answer the question if you had to read a given textbook?
how long would you have to answer the question if you had to read a given textbook?
A set of fixed point questions: how fast you could estimate something if 1) and 2) the material changed in response to it
The ability to estimate something’s “true likelihood” rather than just being a guess.
The ability to calculate something’s probability
The ability to calculate something’s expected sample
The ability to calculate something’s “expected sample”
The ability to calculate something’s “true sample”
The ability to calculate something’s “true score”
The ability to calculate something’s true score
The ability to calculate anything’s true score
The ability to calculate something’s true score
The ability to estimate something’s true score
The ability to estimate something’s true true score
The ability to derive an updated probability distribution
That ability to verify a set of correct conformance to another function
The ability to derive an updated probability distribution.
The ability to derive an updated probability distribution if it were a function of a function that could have been written down in the same language as mathematical proofs of the underlying mathematics: Bounded versions of formal probability theories, Bounded versions, “unknown unknowns”
The ability to construct an updated probability distribution using LBO1, b
Claim: this thread would be better (although, it’s already great) if people added confidence levels to their claims at the beginning, and updated them at the end of the discussion. (confidence level − 75%)
Do you think that the extra effort from requiring confidence levels might act as a trivial inconvenience that discourages people from posting?
Haven’t thought about that. It might. Is there a way to test that? (I guess if you make it optional, as it is, then it won’t act that way.)
I’ll have to go back and reread the first paragraph, and notice the second paragraph—“Hey guys, I just looked at this - I’m curious what LW’s takeaways—and why”, which is the only thing I see now that I’ve ever seen before, except in the last paragraph. Do you have a good explanation for the “other posts are terrible, I’ll just go and read the second one” paragraph? Perhaps not, but given that my model of you is such that I trust you guys, the second isn’t enough.
Please try to read your post in full, and provide concrete examples and solutions. Thanks for your time, and I glad you wrote each one.
(Also, I just realized that, but there are more than four of us. I don’t have the space to do much else there, but I could use a few people if you’re interested in doing it.)
I don’t know much about “The Secret Life of Freud” but I don’t really think it’s the least bad.
On the other hand, I don’t know much about it, but I do know it’s more than the worst of your many bad ideas of mainstream philosophy. So, given that, it seems like it could be a useful tool for some purpose:
The secret identity of the Freudian psycho is that he has made himself out of it, and a number of other people have done the same. The most common version I remember reading about Freud is that he is a Good Guy and an Evil Guy.
Here, I’m talking about the mental image of the Good Guy or The Bad Guy, not the psychological image of his psychology looks like it’s “my brain”. I think it’s useful to consider that many people may be interested in stories about a psycho who has performed a number of these sorts of tasks and that these are the kinds of stories that constitutes the psychological pain in experience.
I don’t think that this would make a lot of sense to me.
That’s really the first thing—the second reason of the way you link to Bayes. It isn’t hard to get through—for example, I find that Bayes’ theorem does not provide a prior for your hypothesis about whether the soul arises from the “mixed body” of a stone, whereas the fact that you’re saying that “everything you experience has been experienced by something in your brain including the body” is just a fact about your reference class. Of course you might get to see that part too.
But I also think it’s more important to remember again that it’s not just the “physical” fact that your brain has a physical ontology that you’re a Bayesian reasoner.
claim: LW commenter GPT2 is a bot that generates remarkably well-formed comments, but devoid of actual thought or meaning. confidence: 20% that it’s no or minimal human intervention, 90%+ that it’s computer-generated text, but a human might be seeding, selecting, and posting the results.
subclaim: this should be stopped, either by banning/blocking the user, or by allowing readers to block it.
update: based on a comment, I increase my estimate that it’s fully automated to 95%+. I look forward to learning what the seed corpus is, and whether it’s customized in any way based on comment context.
update 2: previous estimate too high, a wider space of possibilities has been proposed in other threads. My current best guess is that it’s a large human-moderated (curated, possibly edited) list of potential comments, being selected and posted automatically. probably only 50% confident of that.
You can auto-collapse comments from GPT2 in https://www.lesswrong.com/account
I’ve spent two years on this site. I’m interested in the current trend of making the site more interesting. I like this community. (My current guess is that LWers will probably need more of a background in this area than I’d want to. For example, if it was worth discussing, I’d want some sort of guidelines for posting.)
I’m also curious about the “culture” of some of the people reading this...
Isn’t meaning in the eye of the beholder, or did you mean something else? Have you ever had the experience of going to a modern art gallery and knowing that authorial intent is mostly absent from all of the works, but pretending that it’s all there for a connoisseur to find, playing the connoisseur, then finding profound meaning and having a really good time?
Have you noticed when GPT2 started commenting?
Ah. Clever but too much IMO. I hate “social distrust day”.
My view of its capabilities certainly dropped.
I’m afraid that I’m only hearing about my lack of social skills on LW.
I’m afraid that if I tried to post something on my blog I would get down voted.
(So far as actual LW users ever know, I think I’ve never heard of Less Wrong, at least not any of the other blogs I’ve read)
I was very disappointed to see that I don’t find the “rationality” community so successful in its leadership as the result of rationalists. This is not to either signal that I am at all surprised by your lack of social skills, or to conclude that you don’t get good social skills unless you really are.)
I strongly suspect that the people around you describe your experience to have some good social skills and can tell that you are one of these rare exceptions. That fact doesn’t imply very much.
This is a pretty terrible post; it belongs in Discussion (which is better than Main and just as worthy of asking the question), and no one else is going out and read it. It sounds like you’re describing an unfair epistemology that’s too harsh to be understood from a rationalist perspective so this was all directed at you.
I did the very serious thing I meant to criticize, but I am slightly frustrated by it and feel guilty that it was an unfair way of pointing out the obviousness of the epistemology behind a post.
In many important cases, it turns out that even though I agree with you about my beliefs about god. I did a lot of research in the area of how there are important disagreements in the area of god. (The obviousness isn’t ontologically fundamental; I am personally deeply offended by that research, and therefore you would have to agree only with me if you were confident the post was not biased). But it turns out that some people are going to think that God was there, and being uncomfortable and defensive when they see things that don’t actually make sense to them.
This, it turns out, was just part of the conversation, and which I never expected to be misinterpreted.
The way to make this analysis is to give the prior probabilities of your probability distribution, the posterior probabilities of your posterior probabilities are in order, and your posterior probabilities are exactly the same regardless of whether you have a credence that they are independent.
Meta: are the answers to questions all supposed to be given by the OP?
Yeah, otherwise you’re not narrowing down one person’s beliefs, but possibly going back and forth.
In a five-year-old contrarian thread I had stated that “there is no territory, it’s maps all the way down.” There was a quality discussion thread with D_Malik about it, too. Someone also mentioned it on reddit, but that didn’t go nearly as well. Since then, various ideas of postrationality have become more popular, but this one still remains highly controversial. It is still my claim, though.
What’s the difference between “the source of observations” and “reality?”
That’s a common implicit assumption, that observations require a source, hence reality. Note that this assumption is not needed if your goal is to predict future observations, not to “uncover the nature of the source of observations”. Of course a model of observations having a common source can be useful at times, just not always.
If observations do not require a source, then why do they seem to exhibit various regularities that allow them to be predicted with a greater accuracy than chance?
It’s an empirical fact (a meta-observation) that they do. You can postulate that there is a predictable universe that is the source of these observations, but this is a tautology: they are predictable because they originate in a predictable universe.
Right, and I’m asking why this particular meta-observation holds, as opposed to some other meta-observation, such as e.g. the meta-observation that the laws of physics change to something different every Sunday, or perhaps the meta-observation that there exists no regularity in our observations at all.
Again, without a certain regularity in our observations we would not be here talking about it. Or hallucinating talking about it. Or whatever. You can ask the “why” question all you want, but the only non-metaphysical answer can be another model, one more level deep. And then you can ask the “why” question again, and look for even deeper model. All. The. Way. Down.
That doesn’t seem to answer the question? You seem to be claiming that because any answer to the question will necessitate the asking of further questions, that means the question itself isn’t worth answering. If so, I think this is a claim that needs defending.
Maybe I misunderstand the question. My answer is that the only answer to any “why” question is constructing yet another model. Which is a very worthwhile undertaking, since the new model will hopefully make new testable predictions, in addition to explaining the known ones.
My actual question was “why are our observations structured rather than unstructured?”, which I don’t think you actually answered; the closest you got was
which isn’t actually an explanation, so far as I can tell. I’d be more interested in hearing an object-level answer to the question.
I am still not sure what you mean.
are you asking why they are not random and unpredictable? That’s an observation in itself, as I pointed out… One might use the idea of predictable objective reality to make oneself feel better. It does not do much in terms of predictive power. Or you can think of yourself as a Boltzmann brain hallucinating a reality. Physicists actually talk about those as if they were more than idle musings.
Yes, I am. I don’t see why the fact that that’s an “observation in itself” makes it an invalid question to ask. The fact of the matter is, there are many possible observation sequences, and the supermajority of those sequences contain nothing resembling structure or regularity. So the fact that we appear to be recording an observation sequence that is ordered introduces an improbability that needs to be addressed. How do you propose to address this improbability?
My answer is, as before, conditional on our ability to observe anything, the observations are guaranteed to be somewhat predictable. One can imagine completely random sequences of observation, of course, but those models are not self-consistent, as there have to be some regularities for the models to be constructed. In the usual speak those models refer to other potential universes, not to ours.
Hm. Interesting; I hadn’t realized you intended that to be your answer. In that case, however, the question simply gets kicked one level back:
Why do we have this ability in the first place?
(Also, even granting that our ability to make observations implies some level of predictability—which I'm not fully convinced of—I don't think it implies the level of predictability we actually observe. For one thing, it doesn't rule out the possibility of the laws of physics changing every Sunday. I'm curious to know, on your model, why we don't observe anything like that.)
Maybe we can focus on this one first, before tackling a harder question of what degree of predictability is observed, what it depends on, and what “the laws of physics changing every Sunday” would actually mean observationally.
Please describe a world in which there is no predictability at all, yet where agents “exist”. How would they survive without being able to find food, interact, or even breathe? Even breathing means you have a body that can anticipate that breathing keeps it alive.
I can write a computer program which trains some kind of learner (perhaps a neural network; I hear those are all the rage these days). I can then hook that program up to a quantum RNG, feeding it input bits that are random in the purest sense of the term. It seems to me that my learner would then exist in a “world” where no predictability exists, where the next input bit has absolutely nothing to do with previous input bits, etc. Perhaps not coincidentally, the learner in question would find that no hypothesis (if we’re dealing with a neural network, “hypothesis” will of course refer to a particular configuration of weights) provides a predictive edge over any other, and hence has no reason to prefer or disprefer any particular hypothesis.
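The setup described above can be sketched concretely. This is only a toy stand-in, not the commenter's actual program: a simple k-gram frequency predictor in place of a neural network, and Python's pseudo-random generator in place of a quantum RNG. The point survives the simplification: on a truly patternless stream, no hypothesis gains a predictive edge.

```python
import random

random.seed(0)

def kgram_accuracy(bits, k=3):
    """Predict each bit from the k preceding bits, learning as we go.
    Returns the fraction of correct predictions."""
    counts = {}   # context -> (count of 0-followers, count of 1-followers)
    correct = 0
    for i in range(k, len(bits)):
        ctx = tuple(bits[i - k:i])
        c0, c1 = counts.get(ctx, (0, 0))
        guess = 1 if c1 > c0 else 0          # predict the more frequent follower
        if guess == bits[i]:
            correct += 1
        counts[ctx] = (c0 + (bits[i] == 0), c1 + (bits[i] == 1))
    return correct / (len(bits) - k)

# A patternless stream: accuracy hovers near chance (0.5), and no
# choice of hypothesis would do better.
random_bits = [random.randint(0, 1) for _ in range(20000)]
print(kgram_accuracy(random_bits))

# By contrast, a structured stream (0,0,1,1,0,0,1,1,...) is learnable
# almost perfectly after a few examples.
structured_bits = [(i // 2) % 2 for i in range(20000)]
print(kgram_accuracy(structured_bits))
```

The contrast between the two runs is the crux: the learner's situation differs radically depending on whether something constrains its inputs.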
You may protest that this example does not count—that even though the program’s input bits are random, it is nonetheless embedded in hardware whose behavior is lawfully determined—and thus that the program’s very existence is proof of at least some predictability. But what good is this assertion to the learner? Even if it manages to deduce its own existence (which is impossible for at least some types of learners—for example, a simple feed-forward neural net cannot ever learn to reflect on its own existence no matter how long it trains), this does not help it predict the next bit of input. (In fact, if I understood your position correctly, shminux, I suspect you would argue that such a learner would do well not to start making assumptions about its own existence, since such assumptions do not provide predictive value—just as you seem to believe the existence of a “territory” does not provide predictive value.)
But to tie this back to the original topic of conversation: empirically, we are not in the position of the unfortunate learner I just described. We do not appear to be receiving random input data; our observations are highly structured in a way that strongly suggests (to me, at least) that there is something forcing them to be so. Perhaps our input bits come from a lawful external reality; that would certainly qualify as “something forcing them to be [structured]”. This “external reality” hypothesis successfully explains what would otherwise be a gigantic improbability, and I don’t think there are any competing hypotheses at this stage—unless of course you consider “there is no external reality, and our observations are only structured due to a giant cosmic coincidence” to be an alternative hypothesis worth putting forth. (As some of my comments so far might imply, I do not consider this alternative hypothesis very probable.)
Uh. To write a program one needs at least a little bit of predictability. So I am assuming the program is external to the unpredictable world you are describing. Is that a fair assumption?
And what about the learner program? Does it exist in that unpredictable world?
Exactly. So you are saying that that universe’s predictability only applies to one specific algorithm, the learner program, right? It’s a bit contrived and somewhat solipsistic, but, sure, it’s interesting to explore. Not something I had seriously considered before.
Yep, it’s a good model at times. But just that, a model. Not all observed inputs fit well into the “objective reality” framework. Consider the occurrences where insisting on objective reality actually leads you away from useful models. E.g. “are numbers real?”
No. This sentence already presumes external reality, right there in the words “cosmic coincidence,” so, as far as I can tell, the logic there is circular.
I’m not sure what you mean by this. The most straightforward interpretation of your words seems to imply that you think the word “coincidence”—which (in usual usage) refers simply to an improbable occurrence—presumes the existence of an external reality, but I’m not sure why that would be so.
(Unless it’s the word “cosmic” that you object to? If so, that word can be dropped without issue, I think.)
Yes, “cosmic coincidence”. What does it mean? Coincidence, interpreted as a low probability event, presumes a probability distribution over… something, I am not sure what in your case, if not an external reality.
I confess to being quite confused by this statement. Probability distributions can be constructed without making any reference to an “external reality”; perhaps the purest example would simply be some kind of prior over different input sequences. At this point, I suspect you and I may be taking the phrase “external reality” to mean very different things—so if you don’t mind, could I ask you to rephrase the quoted statement after Tabooing “external reality” and all synonyms?
EDIT: I suppose if I’m going to ask you to Taboo “external reality”, I may as well do the same thing for “cosmic coincidence”, just to try and help bridge the gap more quickly. The original statement (for reference):
And here is the Tabooed version (which is, as expected, much longer):
Taken literally, the “coincidence hypothesis” predicts that our observations ought to dissolve into a mess of random chaos, which as far as I can tell is not happening. To me, this suffices to establish the (probable) existence of some kind of fixed reality.
Thank you for rephrasing. Let me try my version. Notice how it doesn’t assume anything about probabilities of coincidences, as I don’t see those contributing to better predictions.
In other words, sometimes observations can be used to make good predictions, for a time. Then we assume that these predictions have a single source, the external reality. I guess I don’t get your point about needing to regress to unpredictability without postulating that reality thing.
(Okay, I’ve been meaning to get back to you on this for a while, but for some reason haven’t until now.)
It seems, based on what you’re saying, that you’re taking “reality” to mean some preferred set of models. If so, then I think I was correct that you and I were using the same term to refer to different concepts. I still have some questions for you regarding your position on “reality” as you understand the term, but I think it may be better to defer those until after I give a basic rundown of my position.
Essentially, my belief in an external reality, if we phrase it in the same terms we’ve been using (namely, the language of models and predictions), can be summarized as the belief that there is some (reachable) model within our hypothesis space that can perfectly predict further inputs. This can be further repackaged into an empirical prediction: I expect that (barring an existential catastrophe that erases us entirely) there will eventually come a point when we have the “full picture” of physics, such that no further experiments we perform will ever produce a result we find surprising. If we arrive at such a model, I would be comfortable referring to that model as “true”, and the phenomena it describes as “reality”.
Initially, I took you to be asserting the negation of the above statement—namely, that we will never stop being surprised by the universe, and that our models, though they might asymptotically approach a rate of 100% predictive success, will never quite get there. It is this claim that I find implausible, since it seems to imply that there is no model in our hypothesis space capable of predicting further inputs with 100% accuracy—but if that is the case, why do we currently have a model with >99% predictive accuracy? Is the success of this model a mere coincidence? It must be, since (by assumption) there is no model actually capable of describing the universe. This is what I was gesturing at with the “coincidence” hypothesis I kept mentioning.
Now, perhaps you actually do hold the position described in the above paragraph. (If you do, please let me know.) But based on what you wrote, it doesn’t seem necessary for me to assume that you do. Rather, you seem to be saying something along the lines of, “It may be tempting to take our current set of models as describing how reality ultimately is, but in fact we have no way of knowing this for sure, so it’s best not to assume anything.”
If that’s all you’re saying, it doesn’t necessarily conflict with my view (although I’d suggest that “reality doesn’t exist” is a rather poor way to go about expressing this sentiment). Nonetheless, if I’m correct about your position, then I’m curious as to what you think it’s useful for? Presumably it doesn’t help make any predictions (almost by definition), so I assume you’d say it’s useful for dissolving certain kinds of confusion. Any examples, if so?
Depending on the meaning of the word preferred. I tend to use “useful” instead.
It’s a common belief, but it appears to me quite unfounded, since it hasn’t happened in millennia of trying. So, a direct observation speaks against this model.
It’s another common belief, though separate from the belief in reality. It is a belief that this reality is efficiently knowable, a bold prediction that is not supported by evidence and that has hints to the contrary from complexity theory.
Yes, in this highly hypothetical case I would agree.
I make no claims one way or the other. We tend to get better at predicting observations in certain limited areas, though it tends to come at a cost. In high-energy physics the progress has slowed to a standstill: no interesting observation has been successfully predicted since the last millennium. General Relativity plus the Standard Model of particle physics have stood unchanged and unchallenged for decades, the magic numbers they require remaining unexplained; the Higgs mass was predicted a long time ago. While this suggests that, yes, we will probably never stop being surprised by ~~the universe~~ observations, I make no such claims.
Yes, we do have a good handle on many isolated sets of observations, though what you mean by 99% is not clear to me. Similarly, I don’t know what you mean by 100% accuracy here. I can imagine that in some limited areas 100% accuracy can be achievable, though we often get surprised even there. Say, in math the Hilbert program had a surprising twist. Feel free to give examples of 100% predictability, and we can discuss them. I find this model (of no universal perfect predictability) very plausible and confirmed by observations. I am still unsure what you mean by coincidence here. The dictionary defines it as “A remarkable concurrence of events or circumstances without apparent causal connection,” and that opens a whole new can of worms about what “apparent” and “causal” mean in the situation we are describing, and we will soon be back to a circular argument that implies some underlying reality in order to explain why we need to postulate reality.
I don’t disagree with the quoted part, it’s a decent description.
“reality doesn’t exist” was not my original statement, it was “models all the way down”, a succinct way to express the current state of knowledge, where all we get is observations and layers of models based on them predicting future observations. It is useful to avoid getting astray with questions about existence or non-existence of something, like numbers, multiverse or qualia. If you stick to models, these questions are dissolved as meaningless (not useful for predicting future observations). Just like the question of counting angels on the head of a pin. Tegmark Level X, the hard problem of consciousness, MWI vs Copenhagen, none of these are worth arguing over until and unless you suggest something that can be potentially observable.
...
...
I think at this stage we have finally hit upon a point of concrete disagreement. If I’m interpreting you correctly, you seem to be suggesting that because humans have not yet converged on a “Theory of Everything” after millennia of trying, this is evidence against the existence of such a theory.
It seems to me, on the other hand, that our theories have steadily improved over those millennia (in terms of objectively verifiable metrics like their ability to predict the results of increasingly esoteric experiments), and that this is evidence in favor of an eventual theory of everything. That we haven’t converged on such a theory yet is simply a consequence, in my view, of the fact that the correct theory is in some sense hard to find. But to postulate that no such theory exists is, I think, not only unsupported by the evidence, but actually contradicted by it—unless you’re interpreting the state of scientific progress quite differently than I am.*
That’s the argument from empirical evidence, which (hopefully) allows for a more productive disagreement than the relatively abstract subject matter we’ve discussed so far. However, I think one of those abstract subjects still deserves some attention—in particular, you expressed further confusion about my use of the word “coincidence”:
I had previously provided a Tabooed version of my statement, but perhaps even that was insufficiently clear. (If so, I apologize.) This time, instead of attempting to make my statement even more abstract, I’ll try taking a different tack and making things more concrete:
I don’t think that, if our observations really were impossible to model completely accurately, we would be able to achieve the level of predictive success we have. The fact that we have managed to achieve some level of predictive accuracy (not 100%, but some!) strongly suggests to me that our observations are not impossible to model—and I say this for a very simple reason:
How can it be possible to achieve even partial accuracy at predicting something that is purportedly impossible to model? We can’t have done it by actually modeling the thing, of course, because by hypothesis the thing cannot be modeled. So our seeming success at predicting the thing must not actually be due to any kind of successful modeling of said thing. Then how is it that our model is producing seemingly accurate predictions? It seems as though we are in a similar position to a lazy student who, upon being presented with a test they didn’t study for, is forced to guess the right answers—except that in our case, the student somehow gets lucky enough to choose the correct answer every time, despite the fact that they are merely guessing rather than working out the answers the way they should.
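The improbability in the student analogy is easy to quantify. With made-up numbers (a hypothetical 20-question true/false test and a pure guesser), the chance of scoring 95% or better by luck alone:

```python
import math

# Hypothetical test: 20 true/false questions, each guessed with p = 0.5.
n, p = 20, 0.5

# P(score >= 19 out of 20), i.e. 95% or better by pure chance.
p_lucky = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in (19, 20))
print(p_lucky)  # roughly 2e-5
```

At roughly two in a hundred thousand, “mere guessing” is a very poor explanation for a 95% score, which is the intuition the analogy is meant to pump.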
I think the word “coincidence” is a decent way of describing the student’s situation in this case, even if it doesn’t fully accord with your dictionary’s definition (after all, who said dictionary editors have the sole power to determine a word’s usage?). Analogously, our model of the thing must also be making correct predictions only by coincidence, since we’ve ruled out a priori the possibility that it might actually be correctly modeling the way the thing works.
I find it implausible that our models are actually behaving this way with respect to the “thing”/the universe, in precisely the same way I would find it implausible that a student who scored 95% on a test had simply guessed on all of the questions. I hope that helps clarify what I meant by “coincidence” in this context.
*You did say, of course, that you weren’t making any claims or postulates to that effect. But it certainly seems to me that you’re not completely agnostic on the issue—after all, your initial claim was “it’s models all the way down”, and you’ve fairly consistently stuck to defending that claim throughout not just this thread, but your entire tenure on LW. So I think it’s fair to treat you as holding that position, at least for the sake of a discussion like this.
Sadly, I don’t think we are converging at all.
Yes, definitely.
I don’t see why it would be. Just because one is able to march forward doesn’t mean that there is a destination. There are many possible alternatives. One is that we will keep making more accurate models (in the sense of making more detailed confirmed predictions in more areas) without ever ending anywhere. Another is that we will stall in our predictive abilities and stop making measurable progress, getting stuck in a swamp, so to speak. This could happen, for example, if the computational power required to make better predictions grows exponentially with accuracy. Yet another alternative is that the act of making a better model actually creates new observations (in your language, changes the laws of the universe). After all, if you believe that we are agents embedded in the universe, then our actions change the universe, and who is to say that at some point they won’t change even what we think are the fundamental laws. There is an amusing novel about the universe protecting itself from overly inquisitive humans: https://en.wikipedia.org/wiki/Definitely_Maybe_(novel)
I don’t believe I have said anything of the sort. Of course we are able to build models. Without predictability, life, let alone consciousness, would be impossible; that was one of my original statements. I don’t know what it is I said that gave you the impression that abandoning the concept of objective reality means we ought to lose predictability in any way.
Again:
I don’t postulate it. You postulate that there is something at the bottom. I’m simply saying that there is no need for this postulate, and, given what we have seen so far, every prediction of absolute knowledge in a given area has turned out to be wrong. So, odds are, whether or not there is something at the bottom, at this point the postulate is harmful rather than useful, and wholly unnecessary. Our current experience suggests that it is all models, and if this ever changes, that would be a surprise.
That’s all.
Is there anything that makes observations different or distinguishable from imaginations? If so, what?
“imaginations” are observations, too. Just in a different domain.
What’s different about these domains? Can you tell them apart in any way?
Well, clearly we can, most of the time. When we can’t, our observation-modeling abilities are compromised. It’s really helpful not to confuse the domains, don’t you think? We tend to learn that fairies are not “real” pretty early on these days, though not in every area, of course. The Vatican uses the scientific method, of sorts, to make sure that any potential saint is a bona fide one before any official decision. So the division between “real” observations and imaginary ones is not always clear-cut, and in many cases is rather subjective.
Would you care to distinguish between “there is no territory” (which on the face of it is a metaphysical claim, just like “there is a territory”, and if we compare those two then it seems like the consistency of what we see might be evidence for “a territory” over “no territory”) and “I decline to state or hold any opinion about territory as opposed to models”?
I intentionally went a bit further than warranted, yes. Just like atheists claim that there is no god, whereas the best one can claim is the agnostic Laplacian position that there is no use for the god hypothesis in the scientific discourse, I don’t really claim that there is no territory, just that we have no hope of proving it is out there, and we don’t really need to use this idea to make progress.
Have you considered phrasing your claim differently, in view of your general lack of progress in persuading people?
I would consider a different phrasing, sure. I’m not the best persuader out there, so any help is welcome!
You are a hero.
Do maps need to ultimately be grounded in something that is not a map and if not why are these maps meaningful?
A map (another term for a model) is an algorithm to predict future inputs. To me that is meaningful enough. I am not sure what you mean by “grounded in something”. Models are multi-level, of course, and postulating “territory” as one of the meta-models can be useful (i.e., have predictive value) at times. At other times the territory is not a particularly useful model.
In what cases is the territory not a useful model? And if you aren’t determining useful relative to the territory, what are you determining it in relation to?
First, “usefulness” means only one thing: predictive power, which is accuracy in predicting future inputs (observations). The territory is not a useful model in multiple situations.
In physics, especially quantum mechanics, it leads to an argument about “what is real?” as opposed to “what can we measure and what can we predict?”, which soon slides into arguments about unobservables and untestables. Are particles real? Nope, they are asymptotically flat, interaction-free approximations of QFT in curved spacetimes. Are fields real? Who knows; we cannot observe them directly, only their effects. They are certainly a useful model, without a doubt, though.
Another example: are numbers real? Who cares, they are certainly useful. Do they exist in the mind or outside of it? Depends on your definitions, so an answer to this question says more about human cognition and human biases than about anything math- or physics-related.
Another example is in psychology: if you ever go to a therapist for, say, couples counseling, the first thing a good one will explain is that there is no single “truth”; there is “his truth” and “her truth” (fix the pronouns as desired), and the goal of therapy is to figure out a mutually agreeable future, not to figure out who was right and who was wrong, what really happened, and who thought what and said what exactly and when.
If one’s goals require something beyond predictive accuracy, such as correspondence truth, why would you limit yourself to seeking predictive accuracy?
No ordinary goal requires anything outside of predictive accuracy. To achieve a goal, all you need to do is predict what sequence of actions will bring it about. (Though I note, not all predictive apparatuses are useful. A machine that did something very specific and abnormal, like looking at a photo of a tree and predicting whether there is a human tooth inside it, would not find many applications.)
What claim about truth can’t be described as a prediction or tool for prediction?
Is predictive power an instrumental or terminal goal?
Is your view a denial of the territory or agnosticism about it?
Is the therapy example a true model of the world or a useful fiction?
The brain is a multi-level prediction-error minimization machine, at least according to a number of SSC posts and reviews, and that matches my intuition as well. So, ultimately, predictive power is an instrumental goal toward the terminal goal of minimizing prediction error.
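A minimal toy illustration of that loop (my own sketch, not the model from the SSC posts): an agent holds an estimate and nudges it after each observation so as to shrink its prediction error.

```python
import random

random.seed(1)

signal = 3.0      # hidden quantity generating the observations
estimate = 0.0    # the agent's current model of it
lr = 0.1          # how strongly each error updates the model

for _ in range(500):
    obs = signal + random.gauss(0, 0.5)   # noisy observation
    error = obs - estimate                # prediction error
    estimate += lr * error                # update to reduce future error

print(round(estimate, 2))  # settles near the hidden value
```

Note that the agent never represents “the signal” as such; it only keeps whatever internal number makes its prediction errors small, which is the flavor of claim being made here.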
A territory is a sometimes-useful model, and the distinction between an approximate map and the as-good-as-possible map called the territory is another useful meta-model. Since there is nothing but models, there is nothing to deny or to be agnostic about.
You are using terms that do not correspond to anything in my ontology. I’m guessing by “the world” you mean that territory thing, which is a sometimes useful model, but not in that setup. “A useful fiction” is another term for a good model, as far as I am concerned, as long as it gets you where you intend to be.
How is predictive error, as opposed to our perception of predictive error, defined if not relative to the territory?
If there is nothing but models, why is your claim that there is nothing but models true, as opposed to merely being a useful model?
I don’t claim what is true, what exists, or what is real. In fact, I explicitly avoid all three of these terms as devoid of meaning. That is reading too much into it. I’m simply pointing out that one can make accurate predictions of future observations without postulating anything but models of past observations.
There is no such thing as “perception of predictive error” or actual “prediction error”. There is only observed prediction error. You are falling back on your default implicit ontology of objective reality when asking those questions.
Why do you assume that future predictions would follow from past predictions? It seems like there has to be an implicit underlying model there to make that assumption.
That’s a meta-model that has been confirmed pretty reliably: it is possible to make reasonably accurate predictions in various areas based on past observations. In fact, if this were not possible at any level, we would not be talking about it :)
Yes, that’s the (meta-)model, that accurate predictions are possible.
How can you confirm the model of “past predictions predict future predictions” with the data that “in the past past predictions have predicted future predictions?” Isn’t that circular?
The meta-observation (and the first implicit and trivially simple meta-model) is that accurate predictions are possible. Translated to the realist’s speak it would say something like “the universe is predictable, to some degree”. Which is just as circular, since without predictability there would be no agents to talk about predictability.
In what way is your meta-observation of consistency different than the belief in a territory?
Once you postulate the territory behind your observations, you start using misleading and ill-defined terms like “exists”, “real” and “true”, and argue, say, which interpretation of QM is “true” or whether numbers “exist”, or whether unicorns are “real”. If you stick to models only, none of these are meaningful statements and so there is no reason to argue about them. Let’s go through these examples:
The orthodox interpretation of quantum mechanics is useful in calculating the cross sections, because it deals with the results of a measurement. The many-worlds interpretation is useful in pushing the limits of our understanding of the interface between quantum and classical, like in the Wigner’s friend setup.
Numbers are a useful mental tool in multiple situations, they make many other models more accurate.
Unicorns are real in a context of a relevant story, or as a plushie, or in a hallucination. They are a poor model of the kind of observation that lets us see, say, horses, but an excellent one if you are wandering through a toy store.
Why can’t you just believe in the territory without trying to confuse it with maps?
To me belief in the territory is the confused one :)
Because you don’t believe territory “exists” or because it’s simpler to not model it twice—once on a map, once outside?
The latter. Also, postulating an immutable territory outside all maps means asking toxic questions about what exists, what is real, and what is a fact.
What kind of claim is the one that one can make accurate predictions of future observations if not a claim of truth?
The term truth has many meanings. If you mean the first one on wikipedia
then it is very much possible to not use that definition at all. In fact, try to taboo the terms truth, existence and reality, and phrase your statements without them, it might be an illuminating exercise. Certainly it worked for Thomas Kuhn, he wrote one of the most influential books on philosophy of science without ever using the concept of truth, except in reference to how others use it.
I really like this line of thinking. I don’t think it is necessarily opposed to the typical map-territory model, however.
You could in theory explain all there is to know about the territory with a single map; however, that map would become really dense and hard to decipher. Instead, having multiple maps (one with altitude, another with temperature) is instrumentally useful for best understanding the territory.
We cannot comprehend the entire territory at once, so it’s instrumentally useful to view the world through different lenses and see what new information about the world the lens allows us to see.
You could then go a step further, which I think is what you’re doing, and say that all it is meaningful to talk about are the different maps. But then I start becoming a bit confused about how you would evaluate any map’s usefulness, because if you answered ‘whether it’s instrumentally useful or not’, I’d question how you would evaluate whether something is instrumentally useful when you can only judge it in terms of other maps.
Not in terms of other maps, but in terms of its predictive power: Something is more useful if it allows you to more accurately predict future observations. The observations themselves, of course, go through many layers of processing before we get a chance to compare them with the model in question. I warmly recommend the relevant SSC blog posts:
https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
https://slatestarcodex.com/2017/09/06/predictive-processing-and-perceptual-control/
https://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/
https://slatestarcodex.com/2019/03/20/translating-predictive-coding-into-perceptual-control/
The whole point of the therapy thing is that you don’t know how to describe the real world.
This was surprising; in this context I had thought “useful” meant ‘helps one achieve one’s goals’, rather than being short for “useful for making predictions”.
What is the difference? Achieving goals relies on making accurate predictions. See https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
Does achieving goals rely on accurate predictions and nothing else?
Consider reading the link above and the rest of the SSC posts on the topic. In the model discussed there, the brain is nothing but a prediction-error minimization machine, which happens to match my views quite well.
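As a toy sketch of what “prediction-error minimization” means mechanically (this is a bare delta rule of my own, not the predictive-processing model itself; the learning rate and observation stream are made up):

```python
# Toy prediction-error minimization: keep an internal estimate and nudge it
# in the direction that shrinks the error between prediction and observation.
def minimize_prediction_error(observations, learning_rate=0.1):
    estimate = 0.0
    for obs in observations:
        error = obs - estimate             # prediction error
        estimate += learning_rate * error  # update to reduce future error
    return estimate

# Fed a steady signal, the estimate converges toward it (close to 10 here).
print(minimize_prediction_error([10.0] * 50))
```

Nothing in the loop “seeks truth” directly; it only shrinks error, yet its internal state ends up tracking the signal.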
If the brain can’t do anything except make predictions, where making predictions is defined to exclude seeking metaphysical truth, then you have nothing to object to, since it would be literally impossible for anyone to do other than as you recommend.
Since people can engage in metaphysical truth seeking, it is either a sub-variety of prediction, or the theory that the brain is nothing but a prediction error minimisation machine is false.
Downvotes for not being Socratic.
If I want to say something about my own subjective experience, I could write that paragraph from a story I’ve been told, and say “Hey, I don’t have to believe any more”, and then leave it at that.
I’m not a fan of the first one. That is, my subjective experience (as opposed to the story I was told by) does not have any relevance to my real experience of that scene, so I can’t say for certain which one in particular seems to be the right one.
I also have a very important factual issue with having a similar scene (to an outsider) in which a different person can’t help but help, which I do find confusing; and in that case, if my real feelings about the scene are somewhat similar to the feelings about the scene, the scene will make it seem very awkward.
So if someone can help me with this stuff, I can’t ask to be arrested for letting anyone out on the street, for providing any evidence that they’re “trying to pretend”.
(I’m also assuming that the scene has to be generated by some kind of randomly-generated random generator or some technique which doesn’t produce anything in the original text.)
There are multiple levels of accuracy. At most one level is clear.
One level is a set of observations; the other is a set of observations to which it may help you develop a useful model.
It is generally the case that the difference may only be somewhat sharp at the first level. That’s not true for the first four levels. It seems hard or impossible.
One level of accuracy is the level of accuracy at which you should develop a useful model; the higher you are about this level of accuracy, the more useful it will be.
One level is easy to figure out. The other is a set of observations you can derive from the other.
The second level, the higher you are about this level of accuracy, is the type of model you might develop.
Two degrees of accuracy here. One is a basic idea that one can build a universal learning machine without solving a problem of mathematics (although it might turn out that it’s possible even if it’s hard).
One level is a set of observations which you can form a useful model of; the other is a set of measurement.
One level is a specific process (or process) of generating or implementing a problem of mathematics. But the first level is a very useful sort of process, so to become more capable at it (e.g. by drawing up models) it probably should be more difficult to “see” in the higher mathematics.
One level of accuracy is how well you can apply a mathematical problem to a model.
One might have to create a lot of models before one can start trying to form a good model on the part of the model.
A level is how much you can be sure of a given thing, or about something.
A level is what it would take to create and control a (very limited) quantity of this.
Some possible levels are in the middle of the middle.
One or more levels may be easy to observe, but it’s definitely important to get clear, and use the information you have.
In your example, I can’t see the connections between the observation and the process of generating the model.
An algorithm called “bounded” is a useful model for the number of things you can predict, and what your ability to predict is (the “bounded” part of the model you are using)
The model itself doesn’t give an explicit number, and it gives the number of things you can predict (e.g. if you can predict the number of things you have seen, then you’ve just found a single thing you can predict, and it’s a model or a set of things you can predict).
But the algorithm also has a “bounded” output from your model, the same input that you can predict, whereas the algorithm is output by a bounded output from your environment (this is the “bounded” output, rather than a very-specific output).
(The model and bounded output are all in the same place, but the algorithm is not “bounded”, the same outputs are very different.)
So to be sure; If it’s not a well-defined quantity, the thing you’re using is not the quantity-of-gase, it’s not an “unformal” quantity of noise, and you can make them dependantly on things that are actually a good measure of the quality of the data you’ve collected.
Is there a point in which I have little patience, if not empathy, for people who live in narratives that are plausible/true, but which sound important and which are hard to make? It’s interesting that I think the main piece of the puzzle in the OP is that even if true I want to live in a real life where the narrative is very strong. For example a naive story might look like the following:
“A man feels bad about himself, his wife goes out to eat him.”
“B together, he’ll have a tremendous amount of empathy and a large amount of love.”
In the second story there is basically a huge difference between the two characters’ situations and the reality behind the first one: most of the characters are sad but sad that the badness is greater than the happyness.
“A man feels sad when he does X, but sad at least most of the time, so sad that it will be over.”
This story doesn’t fit nicely. The emotional effect is just not that important, because it doesn’t fit well; the psychological effect is just not that important, because it isn’t that bad. (This is somewhat related to what I see in the post “Emotional Effects of Cryonics”.)
There might be some additional things (like “My Rules are Taking My Obedience away”, or perhaps something else) which do it, but if it’s a story about my life and not about the character I want to have then it still might not fit well. So I would like to try it anyway.
I’m not sure if the idea of “a vast number of people working out things just won’t work out much” is a good one
I’d like to see more analysis of what happened, but that’s a bit of a stretch. What are your current models of how many people will work out? Is the number of people who work out all those things worth to you?
Are you an AI bot replying to random comments in the GPT2 style?
It is a great example of the effect of a more transparent AI-complete solution rather than a more opaque solution. But the new solution is not so opaque, and it is not the solution I consider the most difficult one.
A good AI would be a kind of weird solution to your problem; it’s too easy and expensive to do anything but your own, and it’s a waste of time to do anything except your own task.
Mod note: I decided to promote this post to the frontpage, which does mean frontpage guidelines apply, though I think overall we can be pretty flexible in this thread. Depending on how it goes we might want to promote future threads like this to the frontpage or leave them on personal blog.
Meta: Post comments discussing this experiment here
I suspect Socratic questioning works much better in person as there’s less reason to ask leading questions. Here, I find that there is more pressure to insert your views into the question to avoid lots of back and forth.
I’ve never seen Socratic questioning work in person, because it’s always clear there’s a trap coming and people don’t want to be trapped into taking views, since the questions slowly destroy the nuance of their views. It’s even worse here.
Sq is a tool. In that sense it can be used badly (like a knife) or in useful ways (like a knife).
What would make sq genuinely more useful again?
I mean, at least that was the whole point of Socrates’ questioning, wasn’t it? Maybe we need a different term for something that is less adversarial, but compared to Plato’s original texts, the questions here are much less leading.
Was that the whole point of Socratic questions? Would he have invented a shitty thing like that? Would it have survived time if it was just an elaborate trap technique? (seems strange to me if that were the case)
A pop tart is a type of ravioli.
Question: Have you always been a monster, or did you just become one recently?
I don’t feel like I have a great sense of how my preferences should be treated. Perhaps I would like to be more like a monster.
April first starts early for New Zealanders (and ends late).
What defines a ravioli?
A wheaten substance that seals some other substance inside it. The inner substance must not be rigid.
Dumplings and samosas are also types of ravioli.
A wad of dough with a mixture of tar and ball-bearings injected into it would also be a ravioli.
I’m a fan of reductive definitions.
This is why I find the discussion of AI safety interesting.
I think the main problem with the MIRI and FHI threads are somewhat different to MIRI ones.
Maybe you really thought that the title “Problems in Human Evolution” was a kind of cached reply, but… there are, e.g. the kinds of diseases that are dealt with in the wild, and so on.
Related: street epistemology. It’s a practice similar to Socratic questioning, “invented” by Peter Boghossian in his book ‘A Manual for Creating Atheists’.
Here’s a live example (and two more channels; these also have lectures about it).
I’ve heard this before but I don’t know the details. It’s a “use it as much as you want”. There are various ways to use it that I’d consider it useful. I found it to be very useful to use it as a tool that I don’t know all the details of, and also it’s very handy to refer to as such when I’ve had a feeling of confusion or aversion toward using it.
I’ve heard this before, but I don’t know the details of what it’s like to use it. I find it to be very useful for reacting to a particular situation. Most people have some issues with thinking “I want to feel good” too much at parties/signalling too little. They also don’t see a very effective way to do so well, they also don’t think it’s the right tool for one.
It’s an interesting example of the ability to be a lot more effective in long term, and I would not be surprised if it were more useful. I think there’s a lot to be learned here, and I’ve been reading for a while that there are plenty of situations where we can benefit from having a lot more effective tools, but that there’s plenty to have to gain from having a little more good tools.
I don’t think it’s helpful to have it be about building the right tool for one. Another point to consider is that perhaps we’ll need to be able to develop a lot of useful tools for actually solving problems, but I think it’s a difficult topic. I think it’s in my interests to try to be a little more concrete/rational/useful as a tool for solving problems before we get to that point.
The first part of this article was very meta, but I’m hoping you will indulge me. Your interest in this part of the post was interesting; I think I’d benefit by writing it a lot more if I described what’s in your mind. It seems like you should consider the post something like trying to solve the “How to Solve X” problem.
I also think your model of me would be helpful for discussing the problem in its current form (https://arxiv.org/abs/v1609.03840) and how it
Claim: Instrumental and Epistemic rationality often diverge, and rationalists don’t win as much because they don’t give this fact enough weight.
In what ways do they diverge, and why?
Claim: One way which instrumental and epistemic rationality diverge is that knowing the reasons a particular experiential process works and how can actually get in the way of experiencing that process.
(example: knowing how corrective steering works when riding a bike can actually slow you down when trying to intuitively pick up the skill of riding a bike.)
I am slightly annoyed by the last link—it’s a reference to something I had read in the paragraph above. It wasn’t a good article, but it’s one of my favorite sequences and I was excited to check it out.
Claim: One way that instrumental and epistemic rationality diverge is that you often get better results using less accurate models that are simpler rather than more accurate models that are more complicated.
(example: thinking of people as ‘logical’ or ‘emotional’, and ‘selfish’ or ‘altruistic’ is often more helpful in many situations than trying to work up a full list of your motivations as you know them and their world model as you know it and making a guess as to how they’ll react)
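The claim above has a standard statistical analogue, which I’ll sketch with made-up numbers (the data points and the out-of-sample test point are mine, purely for illustration): a crude straight-line model often predicts new cases better than a flexible model that fits the known cases perfectly.

```python
import numpy as np

# Three noisy observations of a process that is really just y = x
# (the middle point is off by 0.5); all numbers are made up.
x_train = np.array([0.0, 1.0, 2.0])
y_train = np.array([0.0, 1.5, 2.0])

line = np.polyfit(x_train, y_train, 1)    # simple model: least-squares line
curve = np.polyfit(x_train, y_train, 2)   # complex model: exact quadratic fit

# An out-of-sample point from the true process:
x_new, y_new = 3.0, 3.0
line_error = abs(np.polyval(line, x_new) - y_new)
curve_error = abs(np.polyval(curve, x_new) - y_new)

print(line_error < curve_error)  # True: the cruder model generalizes better
```

The quadratic matches all three training points exactly, but it has bent itself around the noise; the line tolerates a small training error and predicts the new point far more accurately.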
I’ve noticed that when I read your response to a question that asked me to write a comment in 10 seconds I was significantly more impressed by its intelligence than at the times when I tried to write anything.
Another vaguely disconcertingly almost self-aware comment by the bot. It can, in fact, write impressively realistic comments in 10 seconds.
This is a nice post. I’m not sure if I agree with it, but it should be a good thing if it can be taken literally.
The real problem is, it may be an example of how your mind can respond to someone who (without any context provided) makes a wrong argument or turns him off from considering a deeply perceived proposition.
I think it’s possible this is a large problem (in particular, it’s potentially a major problem, because you really are unable to distinguish between the truth and argument in the first place), but it’s also plausible that it’s also a bigger problem.
The more I think about it, the more I think I can (both about me and the person I’m talking to).
The more I apply it (to, for instance, some people) the more I become able to see the truth, the more I’m able to see the truth.
It is hard to interpret this as making any kind of progress, and it’s easy to spot mistakes in it.
It might not be too hard, but it’s also probably counterproductive.
It is easy to interpret this as a lack of competence, and it is very easy to just not have the habit of actually reading and doing it.
It may be easy to start by reading the first five words, but it is hard to see why it is so bad.
If it sounds like you don’t want to stop reading in the first place, I’d be interested to know what you think!
Claim: One way in which instrumental and epistemic rationality diverge is with self-fulfilling prophecies:
(example: all your data says that you will be turned down when asking for a date. You rationally believe that you will be turned down, and every time you ask for a date you are turned down. However, if you were to switch to the belief that you would be enthusiastically accepted when asking for a date, this would create a situation where you were in fact enthusiastically accepted.)
I feel like it’s unlikely that any of these would be called out for, but I could be too confident of myself.
Claim: One way in which instrumental and epistemic rationality diverge is that knowing certain facts can kill your motivation system.
(for instance, knowing how complicated a problem will be can stop you wanting to try and solve it, but it could be that once you solve part of it you’ll have the resources to solve the whole thing, and it could be in your interests to solve it)
So you’re less likely to work on a problem if you think it has been given a lot of high quality attention/you don’t think you have a comparative advantage?
Yes. But I’m not sure how that’s related.
How else does one know how complicated a problem is (if one hasn’t solved it)?
Through comparing it to other similar problems, understanding the number of factors involved, asking people who have worked on similar problems, or many other methods.
This seems pretty strange to me, and I would strongly disagree with it.
We’re all familiar with the fact that when I was in high school, I’d probably make money for free stuff that it’s possible to buy while still having a lot of free money. I’d start with the more basic stuff, ebooks (mostly free from Amazon ebook versions), and pick the things I need to personally pay for those free stuff.
This is a good post.
It’s also quite insightful and powerful, and it’s almost a success (both for me and for people writing about this in a rationalist form) -- it’s a great post if it falls somewhere.
This is the most ingenious April Fools joke on a website I’ve seen in quite a while.
I am very interested in the context of my blog and particularly in the literature.
Claim: The “classical scenario” of AI foom as promoted by e.g. Bostrom, Yudkowsky, etc. is more plausible than the scenario depicted in Drexler’s Comprehensive AI Systems.
Question: how do you evaluate the plausibility of each scenario, and potentially of other ways the AI development timeline might go?
(Sorry for delay, I thought I had notifications set up but apparently not)
I don’t at the moment have a comprehensive taxonomy of the possible scenarios. The two I mentioned above… well, at a high level, what’s going on is that (a) CAIS seems implausible to me in various ways—e.g. it seems to me that more unified and agenty AI would be able to outcompete comprehensive AI systems in a variety of important domains, and (b) I haven’t heard a convincing account of what’s wrong with the classic scenario. The accounts that I’ve heard usually turn out to be straw men (e.g. claiming that the classic scenario depends on intelligence being a single, unified trait) or merely pointing out that other scenarios are plausible too (e.g. Paul’s point that we could get lots of crazy transformative AI things happening in the few years leading up to human-level AGI).
I’ve seen that I shouldn’t argue with people, but for people I’ve found them more persuasive than if I told them they’re wrong, as a form of argument. This post argues that in a rationalist society, anyone who gives evidence against something would be epistemically rude, regardless of whether they “perceive evidence from other people”. So it is really hard to argue with the people you would be arguing with if you don’t believe them, and your best estimate is that it’s a good idea.