Never mind.
There’s a related problem: once humans have terms for something, they tend to take for granted that anything that looks at a glance like it makes rough syntactic sense must actually have semantics behind it.
Isn’t this the same issue we see with surface analogies and cached thoughts?
Many thanks, I’ve been looking for some carefully worked out ideas along this line of thought!
On that view, another issue is to what extent philosophy and scientific training are more attractive to people who already tend to avoid compartmentalization, and to what extent studying philosophy and science assists in or amplifies the ability to propagate beliefs, if at all.
It seems like the sort of study that goes into these areas would provide the learner with heuristics for belief propagation, though each area might come equipped with its own unique heuristics.
I found it first through Scott Aaronson’s blog; I looked over a couple of things but didn’t come back. Then a friend encouraged me to check out the Sequences, so I started reading them and was hooked.
This is interesting to me in a sort of tangential way. It seems like studying philosophy exercises this tendency to propagate your beliefs in order to make them coherent. In fact, logical belief propagation seems to make up a large part of traditional philosophy, so I would expect that on average someone who studies philosophy would have this tendency to a greater degree than someone who doesn’t.
It would be interesting to me if anyone has seen any data related to this, because it feels intuitively true that studying philosophy changed my way of thinking, but it’s of course difficult to pinpoint exactly how. This seems like a big part of it.
I see. I’ll certainly be looking forward to that write-up!
The book looks pretty interesting and that’s a nice story, but I’m not sure that this conclusion is much of a revelation. I’d be a bit more interested in why talking through an issue works when it does.
For instance, when I see
Part of the reason for the change was a historic conference held in Bermuda in 1996, and attended by many of the world’s leading biologists, including several of the leaders of the government-sponsored Human Genome Project.
and
The biologists in the room had enough clout that they convinced several major scientific grant agencies to make immediate data sharing a mandatory requirement of working on the human genome. Scientists who refused to share data would get no grant money to do research. This changed the game, and immediate sharing of human genetic data became the norm.
I think: “OK, so talking through something is important when most of the parties involved would be amenable to the issue, since they already have clout and don’t really need to fear rivals so much. When you happen to be part of a relatively powerful group that can make things happen via consensus, and there seems to be an important issue you could build consensus on, it would be good to gather the group and have a chat.” This seems kind of trivial, though.
For 1 and 2:
I think you need to qualify ‘quality of life’ a bit. Are you asking if the sequences will make you happier? Resolve some cognitive dissonance? Make you ‘win’ more (make better decisions)? Even with that sort of clarification, however, it seems difficult to say.
For me, I could say that I feel like I’ve cleared out some epistemological and ethical cobwebs (lingering bad or inconsistent ideas) by having read them. In any event, there are too many confounding variables, and this requires too much interpretation, for me to feel comfortable assigning an estimate at this time.
For 3: I think I would need to know what it means to “train someone in rationality”. Do you mean have them complete a course, or are we instituting a grand design in which every human being on Earth is trained like Brennan?
I’ve been trying to apply John Perry’s “Structured Procrastination” more purposefully: I’ve been taking on numerous projects that feel important but probably have negligible consequences if they are left undone.
As far as projects I’ve taken on (amidst having to send out grad school applications): I’ve been authoring a weekly op-ed column in my school paper with a friend. Our school is small enough that we can get away with just about anything with some intellectual merit; copyright issues, nootropics, the Ig Nobel Prizes, anything of interest that comes up. I use this as motivation to get some writing done every week. I also have a “book” on the backburner (a vehicle for toying with writing up meta-analyses of various speculative technologies, the ethical debates surrounding them, and the possible social implications of their implementation and dissemination), something I can use to hash out thoughts and practice scholarly research. Maybe something will come of it, maybe not; as I said, I’m trying to keep a critical mass of projects going a la Perry, and I’m doing this because I fit the pattern Perry describes: I used to commit to a couple of highly important tasks and put them off by messing about doing nothing. Now I put them off by adding five pages to my “book”, writing a column, or collaborating on some other project. I’ve also taken on tutoring a graduate comp sci student in theory of computation; picking up new information and coming up with new explanations that might help the student is another thing I can do while avoiding doing something else. I may also help with the layout for our undergraduate history journal.
So far I’ve actually found all of this to be helpful. I’ve been unusually productive over the past couple of months, and I’ve broken out of a bit of a funk that had been building up due to a brief self-imposed isolation (and devotion to studying). I’ve also found that the time I do spend studying now leads to more retention and progress (note that this may be partly due to my purposeful adoption of spaced repetition; I previously had an inexplicable aversion to reading, or even skimming, anything more than once or twice). All of this is to be taken with a grain of salt, however, as the novelty of structured procrastination hasn’t yet worn off and may be doing some of the akrasia-busting on its own.
As an additional note, I’ve lost 40 lbs and added considerable muscle mass in 5 months by getting regular exercise and cutting junk food/fast food out of my diet, and I plan to continue along with this by slowly altering my lifestyle. It isn’t exactly going paleo and sticking to a strict workout regimen, but I’m certainly making headway.
As far as educating myself, I’ve been reading a lot of material on neuroscience and cognitive science and on Bayesian networks and machine learning.
That’s it for now, though I may have left something out. Now to go do those few chores around the house I’ve been avoiding...
Great to hear from you, that’s good to know!
I think the real issue is that comparing the GRE quantitative section to the Putnam isn’t reasonable.
I wouldn’t be the least bit surprised if there are tons of people who are not capable of scoring anything at all on the Putnam yet have perfect GRE quant scores. In any event, seeing researchers use the naive measure (if this is indeed what they did) to compare what are blatantly apples and oranges makes me feel a bit uneasy.
I wonder how these criteria were decided upon. I don’t see how a 1500 GRE score is at all comparable to an honorable mention on the Putnam. It seems like even getting a 20 on the Putnam (most years) is significantly more impressive than any GRE score.
I’ve looked over that list, but the problem is that it essentially consists of a list of items to catch you up to the state of the discussion as it was a year ago, along with a list of general mathematics texts.
I’m pretty well acquainted with mathematical logic; the main item on the list that I’m particularly weak in would be category theory, and I’m not sure why category theory is on the list. I’ve a couple of ideas about the potential use of category theory in, maybe, knowledge representation or something along those lines, but I have no clue how it could be brought to bear on the friendliness content problem.
I would really, really like to know: What areas of pure mathematics stand out to you now?
If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case “Has been asked 2+2”, which overrides the usual algorithm and just outputs 4… would the human then claim they’d “made it implement arithmetic”? I don’t think so.
Well, this seems a bit unclear. We are operating under the assumption that the setup looks very similar to a correct setup, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his prior for “this is a working calculator”, it doesn’t follow that he wouldn’t make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely to resume working correctly.
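For concreteness, here is a purely illustrative sketch of the “trap” case from the parent comment; the particular broken rule and the function names are my own invention. The point is that patching the one case we happen to check is not the same as making the machine implement arithmetic:

```python
def broken_add(a, b):
    """Agrees with ordinary addition everywhere except the single case 2+2."""
    if (a, b) == (2, 2):
        return 5
    return a + b

def patched_add(a, b):
    """broken_add plus a trap: detect the case 'has been asked 2+2' and just output 4."""
    if (a, b) == (2, 2):
        return 4              # override the usual algorithm for this one case
    return broken_add(a, b)   # defer to the underlying rule everywhere else

print(patched_add(2, 2))  # 4
print(patched_add(3, 3))  # 6; the outputs now agree with arithmetic everywhere,
                          # but only because of the hard-coded trap
```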
That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven’t heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
Yes, it would be true, but what exactly is it that ‘is true’? The human brain is a tangle of probabilistic algorithms playing various functional roles. It is “intuitively obvious” that, given sufficient background knowledge of all of the components involved, there should be a Solomonoff-irreducible (up to some constant) program that can be implemented: Boolean circuits realized on some substrate in such and such a way that they “compute” “arithmetic operations on integers” (really the circuit is doing some fancy electrical acrobatics, to be interpreted later into a form we can perceive as output, such as a pattern of pixels on a screen arranged to resemble the numerical answer we want). And it is a physical fact about the universe that things arranged in such a way lead to such an outcome.
It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism onto reality, any more than the logical positivist philosophers of a hundred years ago were justified in doing so with mathematics and predicate logic.
It is clear to me that we can perceive ‘computational’ patterns in top level phenomena such as the output of calculators or mental computations and that we can and have devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory/computational complexity/computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.
I’m inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes’ book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl’s book “Causality”; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.
ETA: To clarify, I’m not attacking the computable universe hypothesis; I think it is likely right (though I think the term ‘computable’, in the broad sense in which it is often used, needs some unpacking).
But note that there are also patterns of light which we would interpret as “the wrong answer”.
I did note that; maybe not explicitly, but it isn’t really something that anyone would expect another person to overlook.
isn’t it a bit odd that whenever we build a calculator that outputs “5” for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)?
It doesn’t seem odd at all: we have an expectation of the calculator, and if it fails to fulfill that expectation, we start to doubt that it is, in fact, what we thought it was (a working calculator). This refocuses the issue on us and the mechanics of how we compress information; we expected information ‘X’ at time t but instead received ‘Y’, so we decide that something is wrong with our model (and then aim to fix it by figuring out whether it is indeed a wiring problem, a bit-flip, a bug in the programming of the calculator, or some electromagnetic interference).
Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4?
No. But why is this? Because if (a) [a reasonable human would agree it implements arithmetic] and (b) [it disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree that 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.
The implementation is decidedly correct only if it demonstrates itself to be correct, only if it fulfills our expectations of it. With a calculator, we are looking for something that allows us to extend our ability to infer things about the world. If I know that a car has a mass of 1000 kilograms and a speed of 200 kilometers per hour, then I can determine whether it will be able to topple a wall, given that I have some number that encodes the amount of force the wall can withstand: I compute the output and compare it to the data for the wall.
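To make the “compute and compare” step concrete, here is a rough sketch of the kind of calculation I mean; I’m using kinetic energy as the stand-in quantity, and the wall’s threshold is a made-up figure purely for illustration:

```python
def kinetic_energy_joules(mass_kg, speed_kmh):
    """Kinetic energy = 1/2 * m * v^2, with speed converted from km/h to m/s."""
    speed_ms = speed_kmh * 1000.0 / 3600.0
    return 0.5 * mass_kg * speed_ms ** 2

car_energy = kinetic_energy_joules(mass_kg=1000, speed_kmh=200)
wall_threshold_joules = 1.0e6  # hypothetical number encoding what the wall can withstand

print(f"Car delivers roughly {car_energy:.0f} J")
print("Wall topples" if car_energy > wall_threshold_joules else "Wall holds")
```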
Because, if arithmetic is implementation-dependent, you should be able to do so.
I tend to think it depends on a human-like brain that has been trained to interpret ‘2’, ‘+’ and ‘4’ in a certain way, so I don’t readily agree with your claim here.
Yes! (So long as we define computation as “abstract manipulation-rules on syntactic tokens”, and don’t make any condition about the computation’s having been implemented on any substrate.)
I’ll look over it, but given what you say here I’m not confident that it won’t be an attempt at a resurrection of Platonism.
If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.
People have very similar brains, and I’d bet that all of the people whose ideas are cognitively available to you shared a similar cultural experience (at least in terms of what intellectual capital was/is available to them).
Viewing mathematics as something that is at least partially a reflection of the way humans tend to compress information, it seems like you could argue that there is an awful lot to unpack in the claim “2+2 = 4 is true outside of implementation”, as well as in the term “cognitive agent”.
What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been ‘set up’ by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as “the correct answer”, perhaps some phonemes register in our cochlea and we store them in our working memory and compare them with the ‘expected’ phonemes). There appears to be an underlying regularity, but it isn’t clear to me what the true reduction looks like! Is the computation the ‘bottom level’? Do we aim to rephrase mathematics in terms of some algorithms that are capable of producing it? Are we then to take computation as “more fundamental” than physics?
Does this make sense?
I’m a bit weird with these sorts of arithmetic questions; my thought process went something like this: “OK, 10 cents seems close, but that puts the bat at only 90 cents more than the ball... oh, 5 cents and 1.05 works.” The answer just sort of pops into my head, without my even thinking about the division step. Of course, I could do the simple maneuvering to get the answer, but it isn’t what I naturally do. I think this has to do with how I did math in grade school: I would never learn the formulas (and on top of this I would often forget my calculator), so I would instead come up with some roundabout method for approximating various calculations (like getting the root of a number by guessing numbers that “seemed close” to the root, usually starting at half the number and adjusting). Probably not the best way to do things, since there are much cleaner solutions, but this habit has stuck with me through my mathematics degree (though I have of course picked up the relevant formulas by now!); instead of using the straightforward formula, I do this mental jiggling of values and the answer pops into my head.
I don’t know, maybe that isn’t weird at all, but in any event no one has mentioned doing it yet.
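For what it’s worth, here is a minimal sketch of the guess-and-adjust root-finding I described above; the bisection-style framing and the names are just my own way of writing the habit down:

```python
def rough_sqrt(n, tolerance=1e-6):
    """Approximate sqrt(n) by repeatedly nudging a guess toward the target."""
    low, high = 0.0, max(n, 1.0)
    guess = n / 2.0                    # start at half the number
    while abs(guess * guess - n) > tolerance:
        if guess * guess > n:          # guess too big: adjust downward
            high = guess
        else:                          # guess too small: adjust upward
            low = guess
        guess = (low + high) / 2.0
    return guess

print(rough_sqrt(2))   # ~1.41421
print(rough_sqrt(49))  # ~7.0
```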
Am I reading you right? You seem to be arguing using the form:
But since when does an event have to occur in order for us to get a reasonable probability estimate?
What if we look at one salient example? How about assessments of the probability of a global nuclear war? Any decent assessment would provide a reasonable lower bound for a man-made human extinction event. In addition to the more recent article I linked to, don’t you suppose that RAND or another group ever devised a believable estimate of the likelihood of extinction via global nuclear war some time between the 1950s and 1989?
It seems hard to believe that nuclear war alone wouldn’t have provided a perpetual lower bound of greater than 10% on a man-made extinction scenario during most of the Cold War. Even now, this lower bound, though smaller, doesn’t seem to be totally negligible (hence the ongoing risk assessments and advocacy for disarmament).
Even if it were the case that natural risks greatly outweighed the risk of man-made extinction events,
Doesn’t follow, given my counterexample (I’m assuming this part of the POV made sense to you as well). Of course you might have a good reason to reject my counterexample, and if so I’d be interested in seeing it.