As a soon-to-be maths teacher, hearing about high school students going above and beyond the terribly-designed curricula that teachers are forced to inflict on their students warms my heart enormously. May your passion for learning continue to grow, and guide you to ever greater intellectual heights. Have an upvote. :)
Maelin
I think this is a good idea. I wish LW had existed when I was a teenager; maybe I could have got started on the path to enlightenment earlier, instead of spending more than half a decade as one of those smirking, sarcastic, self-congratulatory Atheists that now make me cringe. But it does seem likely that LW could be intimidating to teenagers, and this seems to me to be a demographic we should be trying to reach.
Perhaps, as part of this, we could make an effort to produce some more accessible, entry-level posts that provide a gentler introduction to the material of the sequences and LW community memes, without assumed pre-reading...
Ooh, I read his novel Evolution and found it extremely enjoyable. It traces the evolution of humans from a little rat-like mammal living through the K–T event, all the way to modern humans and then on into speculative extrapolation beyond. Each chapter is a narrative about some individual creature going through a significant event in its life, with realistic depiction of the gradually increasing cognitive abilities (i.e. no sapient monkeys). I found it gave me an amazing subjective feeling of perspective on the evolution of primates and humans. Heartily recommended.
Done. I did all of the extra credit except the Myers-Briggs. The IQ test was the most interesting, but three or four questions towards the end were frustratingly difficult and refused to yield their secrets to me; even now I can feel lingering annoyance at the fact that I eventually gave up on them instead of wrestling with them for longer. Oh well.
But they weren’t. Trivialists certainly do assert that […] is true, and so is […].
Oh no. Now I have a perfect, bulletproof excuse that I actually buy for my habit of procrastinating so badly on assignments that I typically end up doing them in a modafinil-powered all-nighter the night before they’re due.
John_Maxwell_IV, what have you done?
I find this very unsatisfying, not least because “optimisation power over a wide range of targets” is easily gamed just by dividing any given ‘target’ of a process into a whole lot of smaller targets and then saying “look at all these different targets that the process optimised for!”
Claiming that optimisation power is defined simply by a process’s ability to hit some target from a wide range of starting states, and/or to hit a wide range of targets, both seem easily gameable by clever sophistry in how you choose the targets by which you measure its optimisation power. There must be some part of it that separates processes we feel genuinely are good at optimising (like Clippy) from processes that only come out as good at optimising if we select clever targets to measure them by.
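For concreteness, Eliezer’s proposed measure (from “Measuring Optimization Power”) counts bits: the negative log2 of the fraction of possible outcomes that rank at least as well as the one achieved. A toy Python sketch (the function name and the paperclip example are my own illustrations) makes the gaming worry above precise:

```python
import math

def optimisation_power_bits(achieved, outcomes, utility):
    """Eliezer-style measure: -log2 of the fraction of possible
    outcomes that score at least as well as the achieved outcome
    under the given preference ordering."""
    at_least_as_good = sum(1 for o in outcomes
                           if utility(o) >= utility(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# Outcomes: number of paperclips in the final state, 0..1023.
outcomes = list(range(1024))

# Against "more paperclips is better", hitting 1023 is a
# 1-in-1024 outcome: 10 bits of optimisation power.
print(optimisation_power_bits(1023, outcomes, lambda o: o))          # 10.0

# But with a gerrymandered utility that "wants" whatever actually
# happened, ANY outcome also scores the full 10 bits.
print(optimisation_power_bits(7, outcomes, lambda o: -abs(o - 7)))   # 10.0
```

The second call is the sophistry in miniature: pick the utility after the fact and any process earns full marks, so the measure only bites once the target is fixed independently of the outcome.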
Clarification: the current state of the art in neural preservation doesn’t preserve zebrafish brain state in a form that the current state of the art in recovery can actually extract and use.
If we had the ability to recover the information in usable form today, there would be no need for cryonics to exist.
(apologies for delayed reply)
I really just want to know what Eliezer means by it. It seems to me like I have some notion of an optimisation process, that says “yep, that’s definitely an optimisation process” when I think about evolution and human minds and Clippy, and says “nope, that’s not really an optimisation process—at least, not one worth the name” about water rolling down a hill and thermodynamics. And I think this notion is sufficiently similar to the one that Eliezer is using. But my attempts to formalise this definition, to reduce it, have failed—I can’t find any combination of words that seems to capture the boundary that my intuition is drawing between Clippy and gravity.
But it seems like then every process can be an optimisation process, and when you measure the optimisation power that’s really telling you more about whether the ‘optimisation target’ you selected as your measure is a good fit for the process you’re looking at. It tells you more about your interpretation of the optimisation target than it does about the process itself.
Gravity isn’t very powerful for minimising distance between sources of mass, but it is very powerful for “making mass move in straight lines through curved spacetime”[1]. For any process at all, you just look at “whatever it actually ends up doing”, and then say that was its optimisation target all along, and hey presto, it turns out there are superpowerful optimisation processes everywhere you look, all being hugely successful at making things turn out how (you think) they wanted, provided you think they wanted things the way they actually turned out. If you get to choose your own interpretation of what the optimisation target is, ‘optimisation process’ doesn’t seem like a very useful notion at all.
Also, re: value independence: Evolution seems like a pretty definite candidate for what we want ‘optimisation process’ to mean, but its values seem to be pretty inextricably baked in to the algorithm. You can’t reprogram evolution to start optimising for paperclips, for example. It only optimises for whatever genes are selectively favoured by the environment.
[1] insert a more accurate description of what gravity does here if required.
I’m not sure that this does the job, but I might be misunderstanding:
Clippy the paperclip maximiser, being placed in a given system S, transitions the system from state S1 (not many paperclips) to state S2 (many paperclips), and does this reliably across many different systems. We can confidently predict that if we put Clippy in a new system, it will soon end up full of paperclips, even if we aren’t sure what the mechanism will be.
Water, being placed in a given system S, transitions the system from state S1 (water is anywhere) to state S2 (water occupies local minima and isn’t just floating around), and does this reliably across many different systems. We can confidently predict that if we put water in a new system, it will soon end up with a wet floor but probably not a wet ceiling. It just so happens that here we do know the mechanism of transition, but that shouldn’t matter, I think.
So I feel like this kind of behaviour is actually necessary but isn’t sufficient to identify an optimisation process. But I might be missing your point.
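The water half of the analogy can be made concrete with a toy simulation (the landscape and the greedy-descent rule are made up for illustration): from every starting cell, the “water” reliably ends up in one of a handful of local minima.

```python
# Toy landscape: heights of a row of cells (made up for illustration).
heights = [3, 1, 2, 5, 0, 4, 6, 2, 7]

def roll(i, heights):
    """Greedy descent: keep stepping to the lowest lower neighbour
    until no neighbour is lower (i.e. we sit in a local minimum)."""
    while True:
        lower = [j for j in (i - 1, i + 1)
                 if 0 <= j < len(heights) and heights[j] < heights[i]]
        if not lower:
            return i
        i = min(lower, key=lambda j: heights[j])

# Whatever cell the water starts in, it ends in a local minimum:
# a tiny, predictable subset of all possible states.
finals = {roll(i, heights) for i in range(len(heights))}
print(sorted(finals))  # [1, 4, 7] -- the landscape's local minima
```

By the criterion above this behaviour is necessary but not sufficient: the simulation reliably “hits its target” from any starting state, yet nobody wants to call it an optimiser.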
I don’t think I’m being clear. I don’t understand what it means for something to be vs not-be an optimisation process. What features or properties distinguish an optimisation process from a not-optimisation process?
But who says the water has to optimise for “lowest possible place”? Maybe it’s just optimising for “occupying local minima”. Out of all the possible arrangements of the water molecules in the entire universe that the water might move towards if you fill a bucket from the ocean and then tip it out again, it sure seems to gravitate towards a select few, pun intended.
How can we define optimisation in a way that doesn’t let us just say “it’s optimising to end up like that” about any process with an end state?
There’s some discussion in the original thread about what exactly counts as optimising, but it doesn’t seem to have reached any conclusion. I confess I’m struggling to find a definition of optimisation that says “definitely optimisation” about human minds and Deep Blue and evolution, but “definitely not optimisation” about a rock sitting on the ground or water running down a hill, and which feels like an actual reduction rather than something circular or synonymous.
Does anybody have a good working definition of optimisation that captures the things that feel optimisationy?
Agreed. The stick figures do not mesh well with the colourful cartoony backgrounds that make the images visually appealing. They feel out of place, and I found it harder to tell when I was supposed to consider one stick figure distinct from another one without actively looking for it (I also have this problem with xkcd).
Strong vote for a return to the original-style diagrams, with the gender imbalance fixed.
I’ve taught a few people about the complex numbers by stepping through the expansions: the naturals gain negatives to make the integers, then fractions to make the rationals, then irrationals to make the reals, and finally (the ‘novel’ stage for my audience) imaginary numbers to make the complex numbers.
I emphasise the point that the new system always seems weird and confusing at first to the people who aren’t used to it, and sometimes gets given a nasty name in contrast to the nice name of the old system (especially ‘imaginary’ vs ‘real’ and ‘irrational’ vs ‘rational’) but the new numbers are never more or less worthwhile than the old system—they’re just different, and useful in new ways.
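That progression can even be walked through in a few lines of Python (the particular equations are my own illustrations); the interpreter treats the “imaginary” results no differently from the “real” ones:

```python
from fractions import Fraction

# Each extension exists to solve equations the old system can't:
#   x + 5 = 3   has no solution in the naturals...
x = 3 - 5            # ...but the integers give x = -2
#   3 * x = 2   has no solution in the integers...
y = Fraction(2, 3)   # ...but the rationals give x = 2/3
#   x ** 2 = 2  has no solution in the rationals...
z = 2 ** 0.5         # ...the reals (approximated here by a float) do
#   x ** 2 = -1 has no solution in the reals...
w = (-1) ** 0.5      # ...but Python answers with a complex number
print(x, y, z, w)
```

Each “weird” new kind of number is just as usable as the ones before it, which is the point of the lesson.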
Sharing this sentiment. I’m particularly impressed with the cartoon diagrams. They’re visually very appealing, and they encapsulate an idea in a way that takes just enough thought to untangle that I feel like it makes me engage with the conceptual message.
I’d be very curious to know which predictions were made and where they come from, and how we know which ones were fulfilled or not fulfilled.
Typically it’s a group discussion of various rationality-related topics. In the past we’ve had problem-solving situations where somebody will be looking at a problem or upcoming decision in their life, and be unsure of how to optimise their outcomes, and the group will discuss it and try to formulate strategies and ideas to help out (for example, we did this when I was looking at buying a car, but unsure how to go about researching and deciding on one).
Other times we’ve had more structured activities, where members give presentations and/or run exercises to develop rationality skills. For example, we did a calibration exercise a while back, and more recently a few members have done presentations based on sessions from the rationality minicamps, or from the Skill of the Week posts.
Often we’ll just discuss whatever topic is of interest, with a rational approach strongly encouraged. We’ll often share techniques and tools for self-optimisation of all sorts (particularly electronic: websites, software, mobile apps etc) or personal progress toward self-improvement.
Yeah, the interface is usually the biggest complaint, and I agree it’s quite suboptimal. I guess the good bit is that once you get something working you don’t have to interact with it again until you want to change it.
I haven’t tried it myself, but I believe there is a way to write the contexts and tasks in XML files or something similar… you could look that up.