A Candid Optimist
pangloss
Does it matter that you’ve misstated the problem of induction?
I wish this were separated into two comments, since I wanted to downvote the first paragraph and upvote the second.
Glad someone mentioned that there is good reason Scott Adams is not considered a paradigm rationalist.
For anyone interested in wearing Frodo’s ring around their neck: http://www.myprecious.us/
I guess this raises a different question: I’ve been attempting to use my up and down votes as a straight expression of how I regard the post or comment. While I can’t guarantee that I am never drawn to inadvertently engage in corrective voting (where I attempt to bring a post or comment’s karma in line with where I think it should be in an absolute sense or relative to another post), it seems as though this is your conscious approach.
What are the advantages/disadvantages of the two approaches?
I voted this down, and the immediate parent up, because recognizing one’s errors and acknowledging them is worthy of Karma, even if the error was pointed out to you by another.
That puts people with a great deal of Karma in a much better position with respect to Karma gambling. You could take us normal folk all-in pretty easily.
I mean, I don’t know if “woody” or “dry” are the right words, in terms of whether they invoke the “correct” metaphors. But, the point is that if you have vocabulary that works, it can allow you to verbalize without undermining your underlying ability to recognize the wine.
I think the training with the vocabulary actually augments verbally mediated recall, rather than turning off the verbal center, but I’m not sure of the mechanism by which it works.
For the most part I think that starts to address it. At the same time, on your last point, there is an important difference between “this is how fully idealized rational agents of a certain sort behave” and “this is how you, a non-fully idealized, partially rational agent should behave, to improve your rationality”.
Someone in perfect physical condition (not just for humans, but for idealized physical beings) has a different optimal workout plan from me, and we should plan differently for various physical activities, even if this person is the ideal towards which I am aiming.
So if we idealize our Bayesian models too much, we open up the question: “How does this idealized agent’s behavior relate to how I should behave?” It might be that, were we to design rational agents, it would make sense to use these idealized reasoners as models, but if the goal is personal improvement, we need some way to explain what one might call the Kantian inference from “I am an imperfectly rational being” to “I ought to behave the way such-and-such a perfectly rational being would”.
I am thinking more like this: I am a scaredy-cat about roller coasters. So I prefer the Tea Cups to Big Thunder Mountain Railroad. And I maintain that preference after choosing the Tea Cups (I don’t regret my decision). However, had I ridden Big Thunder Mountain Railroad, I would have been able to appreciate that it is awesome, and would have preferred it to the Tea Cups.
Since this case seems pretty possible, if the sorts of lessons you are going to draw only apply to hyper-idealized agents who know all their preferences perfectly and whose preferences are stable over time, that is a good thing to note, since the lessons may not apply to those of us with dynamic preference sets.
From what I’ve read, one needs to train oneself on paradigm cases. So, for example, with wine tasting, you develop your verbal acuity by learning how to describe fairly ordinary wines.
I don’t know how to port this strategy over to verbal acuity for rationality.
I agree; however, the definition of preferring A to B that he gave was choosing A over B (and if we don’t specify that A and B must be total world-states, then it would turn out that I prefer Mexican to Italian because I chose Mexican over Italian). Psy-Kosh’s comment above explains why that isn’t what he meant.
That takes care of the first concern, but not necessarily the second one.
I guess we find out how to acquire verbal expertise in a given domain, and do so for rationality, reasoning, and inference.
That’s what it means to prefer something: if you prefer A over B, you’d give up situation B to gain situation A. You want situation A more than you want situation B.
I don’t want this to devolve into an argument about precisely how to talk about preferences, but I think this is a more substantive assumption than you are treating it as. If I prefer going to the Italian restaurant to going to the Mexican restaurant, I might still choose the Mexican restaurant over the Italian restaurant, because of the preferences of others.
It seems like you are also glossing over the importance of the possible difference between what I prefer when choosing and what I would have preferred had I chosen differently.
It depends. Sometimes it will be sight or our other senses, sometimes it will be memory, sometimes it will be testimony.
Think about it this way: we take in information all the time, and draw conclusions from it. “Sight” isn’t playing a key role in face recognition except by providing the data; you have a mental program for matching visual face data to previous visual face data, and that program gets screwed up if you start thinking through a description of the face after you see it.
Similarly, you see a room full of objects and events. You’ve got one or more “draw conclusions” programs that run on the data you see, and those programs can get screwed up by putting into words things that you don’t normally verbalize.
The data on insight puzzles shows that if you do manage to draw the right conclusions, and you try to put into words how you did it, you may get screwed up in the following way: you are confident in explanation A for how you drew the conclusion, when, in actuality, the truth is the radically different explanation B.
My claim isn’t about rationality recognition per se, it is simply this: psychology has shown that verbalizing can screw us up when dealing with a process that isn’t normally done verbally. And a lot (if not most) of our inferential processes are not done in this explicitly verbalized manner (verbalized doesn’t necessarily mean spoken aloud, but just ‘thinking through in words’).
My claim is that there are known ways to get good at verbalizing non-verbal processes, and they involve training on paradigmatic cases. It is only after such training that one can start thinking about edge cases and the borderlands without worrying that the process of discussing the cases is corrupting one’s thinking about the cases.
Before we can advance rationality by discussion, we must first learn to discuss rationality.
I think the question about which cases to focus on when forming theories is different from the question of which cases to use to train oneself to verbalize one’s thoughts without interfering with one’s thinking. The latter requires us to train on paradigms, the former may be something we can pursue in either direction.
This is crucial: The thought isn’t to presuppose which direction our theorizing should go, but rather to make sure that when we theorize, we aren’t tripping ourselves up.
The Verbal Overshadowing effect, and how to train yourself to be a good explicit reasoner.
Someone could start a thread, I guess.
In terms of whether to take your complaints about philosophy seriously, I mean.