Open Thread: July 2009
Here’s our place to discuss Less Wrong topics that have not appeared in recent posts. Have fun building smaller brains inside of your brains (or not, as you please).
Eliezer_Yudkowsky said:
This comes from a post from almost a year ago, Excluding the Supernatural. I quote it because I was hoping to revive some discussion on it: to me, this argument seems dead wrong.
The counter-argument might go like this:
Reductionism is anything but a priori logically necessary—it’s something that must be verified with extensive empirical data and inductive, probabilistic reasoning. That is, we observe that the attributes of many entities can be explained with laws describing their internal relations. Occam’s razor tells us that we don’t need both the higher- and lower-order models to actually exist, so we unify our theory. The repeated experience of this success leads us to extrapolate that this can be done with all entities. Perhaps some entities present obstacles to this goal, but we then infer that their irreducibility is in the map (our model for understanding them), not in the territory (the entity itself). But again, we infer this by assuring ourselves that they just haven’t been explained YET—which implies it’s reasonable, based on inductive reasoning from the past, to assume that they will be reduced. Or we describe some element of the entity’s complexity that makes “irreducibility in practice” something to be expected. We thereby preserve its reducibility in principle.
But we do not (it seems to me) merely exclude its irreducibility based on a priori necessity. Why would we? It’s perfectly conceivable. Eliezer describes in an earlier post the “small, hard, opaque black ball” that is a non-reductionist explanation of an entity. He claims it’s just a placeholder, something that fools us into thinking there’s a causal chain where nothing has actually been clarified.
But it’s perfectly conceivable that such a “black ball” could exist. I suppose there’s no way to prove that it’s irreducible, and not just unreduced as of yet, in the same way that one can’t prove a negative. But this just presupposes that the default position ought to be reductionism. We should assume innocent until proven guilty. But which is innocent in this case: reducible or non-reducible?
So what if we come across something that appears to be a “black ball”? We attempt with all our mental and technological acuity to analyze it in terms of more fundamental laws, and every attempt fails. I would argue this is a good example of empirical evidence against materialist reductionism. We indeed have an entity that obeys laws which we can describe and predict—it just obeys laws that can’t be reconciled with the physical laws of everything else and, when interacting with anything else, violates them.
Occam’s razor is indeed strong here: we recognize that, given the faintest hope of reduction, we should throw out irreducibility in favor of having as few types of “stuff” as possible. This happens in the case of “elan vital.” But it seems perfectly conceivable to me that there might be an entity that’s truly a black ball.
Now this seems so massively incorrect that I fear I’m misunderstanding Eliezer. Does anyone have any feedback? I’d love to make a post about this, once I generate some karma.
I didn’t get the ‘and so’ above at first, but I think it makes sense for the following reason: you can only ever “construct models made of interacting simple things” (possibly elaborated upon and abstracted to such an extent that they no longer seem simple or physical) in that universe because any model you could possibly make in that universe would be causally determined by and entangled with the quarks in your brain. The verbalization and high-level understanding of the model is just another way of explaining what is going on with the quarks in your brain (it explains nothing additionally), and so whatever the ‘irreducibly mental’ things in your model are, the chain of causal unpacking and explicating ultimately bottoms out with descriptions of quarks, etc., by hypothesis. When you think “non-reductionist”, there is a purely reductionist explanation of what you are thinking. If there is just one level, then the explanation for everything is on that level or can be reduced to that level, so you can’t concretely envision, as Eliezer says, something that can’t be reduced.
I wish I had time to make this clearer, but I don’t have any more time today.
I’m pretty sure that just can’t be right. (His argument, that is. I think your interpretation of it is dead on.) We are not limited to imagining the sorts of things our brain is causally determined by. And the way you just put it seems completely backwards. Even if everything reduces to quarks, it’s only in principle—our brains are hard wired to create multiple levels of models, and could never conceive of an explanation of a 747 in terms of quarks.
Look at it this way. Can a painting have a subject? Can it be “about” something? Of course. Certainly there’s nothing supernatural about this, but there’s also nothing legitimate on the level of quarks that could be used to differentiate between a painting that has a subject and a painting that is just random blobs. I can imagine, after all, two paintings, almost identical in their coordinate-positioning of quarks, which have completely different subjects. I can also imagine two paintings, very different in terms of coordinates of quarks (perhaps painted with two different materials) which have the same subject. So while everything reduces down to quarks, it’s the easiest thing in the world to explain a painting’s about-ness on a separate level from quarks, and completely impossible to envision an explanation for this about-ness in terms of quarks.
I’m just not sure what about a “black ball” misses the mark of conceivability.
You want to be very careful every time you find yourself saying that.
And that too.
Eliezer, in Excluding the Supernatural, you wrote:
“Fundamentally complicated” does sound like an oxymoron to me, but I’m not sure I could say why. Could you?
I’m having the same difficulty. Aren’t quarks (or whatever is the most elemental bit of matter) fundamentally complicated? What’s meant by “complicated”?
(Sorry for being so chatty.)
Are you actually implying that quantum mechanics is remotely comparable in complexity to paintings and artistic “subjects”? Please direct me to the t-shirt that summarizes all of artistic critique.
This is probably wrong. The important point is that physics isn’t a mind, much less a human mind or your mind, so it doesn’t care about your high-level concepts, which makes their materialization in reality impossible. Even though the territory computes much more data than people do, that data isn’t structured the way human concepts are.
To loqi and Nesov:
Again, both of your responses seem to hinge on the fact that my challenge below is easily answerable, and has already been answered:
To loqi: Where do we draw the line? Where is an entity too complex to be considered fundamental, whereas another is somewhat less complex and can therefore be considered simple? What would be a priori illogical about every entity in the universe being explainable in terms of quarks, except for one type of entity, which simply followed different laws? (Maybe these laws wouldn’t even be deterministic, but that’s apparently not a knockdown criticism of them, right? From what I understand, QM isn’t deterministic, by some interpretations.)
To Nesov: Again, you’re presupposing that you know what’s part of the territory, and what’s part of the map, and then saying “obviously, the territory isn’t affected by the map.” Sure. But this presupposes the territory doesn’t have any irreducible entities. It doesn’t demonstrate it.
Don’t get me wrong: Occam’s razor will indeed (and rightly) push us to suspect that there are no irreducible entities. But it will do this based on some previous success with reduction—it is an inference, not an a priori necessity.
I don’t know. I wasn’t supporting the main thread of argument, I was responding specifically to your implicit comparison of the complexity of quarks and “about-ness”, and pointing out that the complexity of the latter (assuming it’s well-defined) is orders of magnitude higher than that of the former. “About-ness” may seem simpler to you if you think about it in terms that hide the complexity, but it’s there. A similar trick is possible with QM… everything is just waves. QM possesses some fundamental level of complexity, but I wouldn’t agree in this context that it’s “fundamentally complicated”.
I see what you mean. It’s certainly a good distinction to make, even if it’s difficult to articulate. Again, though, I think it’s Occam’s Razor and induction that make us prefer the simpler entities—they aren’t the sole inhabitants of the territory by default.
I would assert that, by definition, a meaningful concept is reducible to some other set of concepts. If this chain of meaning can be extended to unambiguous physics, then their “materialization in reality” is certainly possible, it’s just a complicated boundary in Thingspace.
Certainly—that was somewhat sloppy of me. In my defense, however, a priori and conceivability/imaginability are pretty inextricably tied. Additionally, you yourself used the word “envision.”
It would perhaps be helpful if you could clarify what you meant when you said:
Your usage doesn’t seem to fit into the Kantian sense of the term—the unity of my experience of the world is not conditioned by everything being reducible. What do you mean when you say irreducibility is a priori logically incoherent?
See blog post links in Priors. A priori incoherent means that you don’t need data about the world to come to a conclusion (i.e. in this case the statement is logically false).
This doesn’t really answer the question, though. I know that a priori means “prior to experience”, but what does this consist of? Originally, for something to be “a priori illogical”, it was supposed to mean that it couldn’t be thought without contradicting oneself, because of pre-experiential rules of thought. An example would be two straight lines on a flat surface forming a bounded figure—it’s not just wrong, but inconceivable. As far as I can tell, an irreducible entity doesn’t possess this inconceivability, so I’m trying to figure out what Eliezer meant.
(He mentions some stuff about being unable to make testable predictions to confirm irreducibility, but as I’ve already said, this seems to presuppose that reducibility is the default position, not prove it.)
Some comic relief, with a serious point:
The famous cartoon of two mathematicians going over a proof, the middle step of which is “then a miracle occurs”.
If reductionism is false in the way you’ve described, then it seems that we can start at the level of quarks and work our way back up to the highest level, but that at some point there must be a “magical stuff happens here” step where level N+1 cannot be reduced to level N.
Indeed, an irreducible entity (albeit with describable, predictable, behavior) is not much better than a miracle. This is why Occam’s Razor, insisting that our model of the world should not postulate needless entities, insists that everything should be reduced to one type of stuff if possible. But the “if possible” is key: we verify through inference and induction whether or not it’s reasonable to think we’ll be able to reduce everything, not through a priori logic.
This is a good example of how the “natural” concepts are actually quite elaborate, paying utmost attention to tiny details that are almost invisible in other representations. But these details are in fact there, in the territory. The fact that they are small in one representation doesn’t belittle their significance in another representation. And the fact that one object is placed in one high-level category and a “slightly” different object is placed in another category results from exactly these “tiny” differences. You can’t visualize these differences in terms of quarks directly, but in terms of other high-level categories it is exactly what you are doing: keeping track of the tiny distinctions that are important to you for some reason.
That sounds right, but it also sounds like I do (or at least could) visualize these levels as separate, since keeping track of the tiny differences that end up being important is impossible for my mind. This seems to necessitate that imagining irreducibility is not only possible, but natural (and perhaps unavoidable?).
This is not to say that irreducibility is logical, and our reason may insist to us that the painting is indeed reducible to quarks, whether or not we can imagine this reduction. But collapsing the levels is not the default, a priori logically necessary position.
I’m not entirely clear on what you are saying above. Your mind keeps many overlapping concepts that build on each other. It’s also incapable of introspecting on this process in detail, or of representing one concept explicitly in terms of an arbitrary other concept, even if the model in the mind supports a lawful dependence between them. You can only visualize some concepts in the context of some other closely related concepts. Notice that we are only talking about the algorithm of human mind and its limitations.
Perhaps it would help (since I think I’ve lost you as well) to relate this all back to the original question: is the reduction of all levels down to a common lowest level a priori logically necessary? My contention is that it’s possible to reduce the levels, but not logically necessary—and I support this contention with the fact that we don’t necessarily collapse the levels in our reasoning, and we can’t collapse the levels in our imagination. If you weren’t disagreeing with this, then I’ve just misunderstood you, and I apologize.
There are at least three ways for anti-reductionism to be not only clearly consistent but, with some plausibility, true—in the sense that there is empirical as well as conceptual evidence for each position (this is connected to a quote I posted yesterday):
Ontological monism: The whole universe is prior to its parts (see this paper)
No fundamental level: The descent of levels is infinite (see that paper)
“Causation” is an inconsistent concept (I’m one free afternoon and two karma points away from a top-level post on this ;)
I have not been able to imagine a pair of (painting+context with a subject)s which have two completely different subjects but are almost identical in their coordinate-positioning of quarks.
You can, though? Can you give an example?
Well, wouldn’t a painting of the Mona Lisa, and a computer screen depicting said painting, have very different quarks, and quark patterns? While two computer screens depicting some completely different subject would be much more similar to each other? This is what I was trying to get at.
The two computer screens depicting completely different subjects have almost everything in common, in that they are of the same material. However, where they differ—namely, the color of each pixel—is where all the information about the painting is contained. So the screens have enough different information (at the quark level) to distinguish what the paintings are about.
So I don’t think you are getting at why “about-ness” isn’t related to the quarks of the painting. I think a better example is a stick figure. A child’s stick figure can be anybody. What the painting is about is in her head, or your head, or in the head of anyone thinking about what the painting is about.
So it’s not in the quarks of the painting at all. “About-ness” is in the quarks of the thoughts of the person looking at the painting, right? (And according to reductionism, completely determined by the quarks in the painting, the quarks of the observer, and the quarks of their mutual environment.)
Above, you wrote:
Thus I agree with this statement as it is written, because I think the difference in the subjects of the paintings is found instead in the thoughts of the beholder. Would you agree that there is a legitimate difference at the level of quarks between the thought that a painting has a subject and the thought that a painting is just random blobs?
But the two screens with two different subjects are probably more similar than a screen and a painting with the same subject, in terms of coordinates of quarks. Additionally, it’s not clear to me that there’s a one-to-one correspondence between color and quarks. Even establishing a correspondence between color and chemical makeup is extremely difficult, due to the influence of natural selection on how we see color (I remember Dennett having a cool chapter on this in CE).
I don’t want to make our disagreement sound more stark than it actually is. I agree that the about-ness is in the mind of the beholder, and the stick figure is a good example as well… but I think this just emphasizes my point. Let me put it this way: given the data for the point-coordinates of the three entities, could a mind choose which one had which subject? No, even though the criterion is buried abstrusely somewhere in there. The point being that the models are inextricably separate in the imagination, and it’s therefore not clear to me why it’s a priori logically necessary that they all collapse into the same territory (though I agree that they do, ultimately).
Maybe I’ve misunderstood you and you’re not talking about what “about” means. Are you talking about how it seems impossible that we can decode the quarks into our perception of reality? And thus that, while you agree everything is quarks, there’s some intermediate scale helping us interpret that would be better identified as ‘fundamental’? (If I’m wrong just downvote once, and I’ll delete; I don’t want to make this thread more confusing.)
Haha if I just downvoted it, then I wouldn’t be able to explain what I do mean.
I’m simply attempting to disagree with the logical necessity of reductionism. I said this earlier, I thought it was pretty clear:
So, the fact that a painting has a subject is a good example of this: I can’t imagine the specific differences between a) the quark-configuration that would lead to me believing it’s “about a subject”, versus b) the quark-configuration that would lead to me believing it’s just a blob. I can believe that quarks are ultimately responsible, but I’m not obligated to do so by a priori logical necessity.
So I’m not contending anything about what the most fundamental level is. I’m just saying that non-reductionism isn’t inconceivable.
This is a slippery concept. With some tiny probability anything is possible, even that 2+2=3. When philosophers argue for what is logically possible and what isn’t, they implicitly apply an anthropomorphic threshold. Think of that picture with almost-the-same atoms but completely different message.
The extent to which something is a priori impossible is also probabilistic. You say “impossible”, but mean “overwhelmingly improbable”. Of course it’s technically possible that the territory will play a game of supernatural and support a fundamental object behaving according to a high-level concept in your mind. But this is improbable to an extent of being impossible, a priori, without need for further experiments to drive the certainty to absolute.
Not quite sure what you’re saying here. If you’re saying:
1)”Entities in the map will not magically jump into the territory,” Then I never disagreed with this. What I disagreed with is your labeling certain things as obviously in the map and others obviously in the territory. We can use whatever labels you like: I still don’t know why irreducible entities in the territory are “incredibly improbable prior to any empirical evidence.”
2)”The territory can’t support irreducible entities,” you still haven’t asserted why this is “incredibly improbable prior to any empirical evidence.”
I feel that someone should point out how difficult this discussion might be in light of the overwhelming empirical evidence for reductionism. Non-reductionist theories tend to get… reduced. In other words, reductionism’s logical status is a fairly fine distinction in practice.
That said, I wonder if the claim can’t be near-equivalently rephrased “it’s impossible to imagine a non-reductionist scenario without populating it with your own arbitrary fictions”. Your use of the term “conceivable” seems to mean (or include) something like “choose an arbitrary state space of possible worlds and an observation relation over that space”. Clearly anything goes.
You’re simply expanding your definition of “everything” to include arbitrary chunks of state space you bolted on, some of which are underdetermined by their interactions with every previous part of “everything”. I don’t have a fully fleshed-out logical theory of everything on hand, so I’ll give you the benefit of the doubt that what you’re saying isn’t logically invalid. Either way, it’s pointless. If there’s no link between levels, there’s no way to distinguish between states in the extended space except by some additional a priori process. Good luck acquiring or communicating evidence for such processes.
Ah, that’s very interesting. Now we’re getting somewhere.
I don’t think it has to be arbitrary. Couldn’t the following scenario be the case?
The universe is full of entities that experiments show reducible to fundamental elements with laws (say, quarks), or entities that induction + parsimony tells us ought to be reducible to fundamental elements (since these entities are made of quarks, we just haven’t quite figured out the reduction of their emergent properties yet)… BUT there is one exception in this universe, a certain type of stuff whose behavior is quantifiable, yet not reducible to quarks. In fact, we have no reason to believe this certain type of stuff is even made of the fundamental stuff everything else seems to be. Every experiment would defy reducing this entity down to quarks, to the point that it would actually be against Occam’s Razor to try and reduce this entity to quarks! It would be a type of dualism, I suppose. It’s not a priori logically excluded, and it’s not arbitrary.
I think we might separate the ideas that there’s only one type of particle and that the world is reductionist. It is an open question as to whether everything can be reduced to a single fundamental thing (like strings) and it wouldn’t be a logical impossibility to discover that there were two or three kinds of things interacting. (Or would it?)
Reductionism, as I understand it, is the idea that the higher levels are completely explained by (are completely determined by) the lower levels. Any fundamentally new type of particle found would just be added to what we consider “lower level”.
So what does it say about the world that it is reductionist? I propose the following two things are being asserted:
(1) There’s no rule that operates at an intermediate level that doesn’t also operate on the lower levels. This means that you can’t start adding new rules when a certain level of organization is reached. For example, if you have a law that objects with mass behave a certain way, you can’t apply it to everything that has mass but not to quarks. This is a consistency rule.
(2) Any rule that applies to an intermediate level is reducible to rules that can be expressed with and applied at the lower level. For example, we have the rule that two competing organisms cannot coexist in the same niche. Even though it would be very difficult to demonstrate, a reductionist worldview argues that in principle this rule can be derived from the rules we already apply to quarks.
When people argue about reductionism, they are usually arguing about (2). They have some idea that at a certain level of organization, new rules can come into play that simply aren’t expressible in the lower levels—they’re totally new rules.
Here’s a thought experiment about an apple that helped me sort through these ideas:
Suppose that I have two objects, one in my right hand and one in my left hand. The one in my left hand is an apple. The one in my right hand has exactly the same quarks in exactly the same states. But somehow, for some reason, they’re different. This implies that there is some degree of freedom between the lower level and the higher level. Now it follows that this free state is determined in some way: something determines an apple in my left hand and a non-apple in my right, whether by some kind of rule, or randomly, or both. In any case, we would observe this rule. Call it X. So the higher level, the object being an apple or non-apple, depends upon the lower levels and X.
(a) Was X there all along? If so, X is part of the lower level and we just discovered it; we need to add it in to our lower-level theory.
(b) What if X wasn’t “there” all along? What if for some reason X only applies at intermediate levels? …either because (i) X is a rule that is applied inconsistently, switching on only once a certain level of organization is reached, or (ii) the higher-level state that X determines cannot be described by the states of the lower level at all.
Case (a) doesn’t assert anything about the universe; it just illustrates a confusion that can result from not understanding what “lower level” means. I don’t think (b) in either part is logically impossible, because you can run a simulation with these rules (see the sketch at the end of this comment).
Until you require (and obviously you want to) that the universe is a closed system. Then I don’t think you can have b(i) or b(ii). A rule (Rule 1) that is inconsistently applied (bi) requires another rule (Rule 2) determining when to apply it. Rule 1 being inconsistent in a system means that Rule 2 is outside the system. If a phenomenon cannot be described by the states of the system (the lower level) (bii) then it depends on something else outside the system. So I think I’ve deduced that the logical impossibility of reductionism depends upon the universe being a closed system.
If the physical universe isn’t closed—if we allow the metaphysical—then non-reductionism is not logically impossible.
Where does randomness come in? Is the universe necessarily deterministic because of (bii) being impossible, so that the higher levels must depend deterministically on the lower levels? (I’m talking about whether a truly stochastic component is possible in Brownian motion or the creation of particles in a vacuum, etc.)
Another thing to think about is how these ideas affect our theories about gravity. We have no direct evidence that gravity satisfies consistency or that it is expressible in terms of lowest level physics. Does anyone know if any well-considered theories are ever proposed for gravity that don’t satisfy these rules?
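To make the “you can run a simulation with these rules” step concrete, here is a minimal toy sketch (entirely my own construction; the world, rules, and constants are made up for illustration). Particles follow one low-level update rule, plus an extra rule X that is stated in terms of clusters, an intermediate-level object:

```python
# Toy universe for case (b): a rule X that "switches on" only at an
# intermediate level of organization. Purely illustrative.
import random

N = 20                 # cells in a 1-D world
CLUSTER_THRESHOLD = 4  # rule X applies only to clusters at least this big

def step(world):
    # Low-level rule: each occupied cell drifts one step left, right, or stays.
    new = [0] * N
    for i, occupied in enumerate(world):
        if occupied:
            j = (i + random.choice([-1, 0, 1])) % N
            new[j] = 1
    # Intermediate-level rule X: any contiguous cluster of size >=
    # CLUSTER_THRESHOLD dissolves. X is phrased in terms of "clusters"
    # (a higher-level object), not in terms of single-cell updates.
    i = 0
    while i < N:
        if new[i]:
            run = 0
            while i + run < N and new[i + run]:
                run += 1
            if run >= CLUSTER_THRESHOLD:
                for k in range(i, i + run):
                    new[k] = 0
            i += run
        else:
            i += 1
    return new

world = [random.randint(0, 1) for _ in range(N)]
for _ in range(10):
    world = step(world)
print(world)
```

Note, in the spirit of the closed-system argument above, that once X is written down it simply becomes part of the program’s total rule set; a consistently specifiable X collapses back into the “lower level”.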
Oh! Certainly. But this doesn’t seem to exclude “mind”, or some element thereof, from being irreducible—which is what Eliezer was trying to argue, right? He’s trying to support reductionism, and this seems to include an attack on “fundamentally mental” entities. Based on what you’re saying, though, there could be a fundamental type of particle, called “feelions,” or “qualions”—the entities responsible for what we call “mind”—which would not reduce down to quarks, and therefore would deserve to be called their own fundamental type of “stuff.” It’s a bit weird to me to call this a reductionist theory, and it’s certainly not a reductionist materialist theory.
Everything else you said seems to me right on—the idea of emergent properties that are irreducible to their constituents in principle seems somewhat incoherent to me.
In what way would these “feelions” or “qualions” not be materials? Your answer to this question may reveal some interesting hidden assumptions.
Are you sure it’s weird because it’s not reductionist? Or because such a theory would never be seen outside of a metaphysical theory? So you automatically link the idea that minds are special because they have “qualions” with “metaphysical nonsense”.
But what if qualions really existed, in a material way, and there were physical laws describing how they were caught and accumulated by neural cells? There’s absolutely no evidence for such a theory, so it’s crazy, but it’s not logically impossible or inconsistent with reductionism, right?
Hmm… excellent point. Here I do think it begins to get fuzzy… what if these qualions fundamentally did stuff that we typically attribute to higher-level functions, such as making decisions? Could there be a “self” qualion? Could their behavior be indeterministic in the sense that we naively attribute to humans? What if there were one qualion per person, which determined everything about their consciousness and personality irreducibly? I feel that, if these sorts of things were the case, we would no longer be within the realm of a “material” theory. It seems that Eliezer would agree:
Based on his post eventually insisting on the a priori incoherence of such possibilities (we look inside the dryad and find out he’s not made of dull quarks), I inferred that he thought fundamentally mental things, too, are excluded a priori. You seem to now disagree, as I do. Is that right?
Where things seem to get fuzzy is where things seem to go wrong. Nevertheless, forging ahead…
If they are being called “fundamentally mental” because they interact by one set of rules with things that are mental and a different set of rules with things that are not mental, then it’s not consistent with a reductionist worldview (and it’s also confused because you’re not getting at how mental is different from non-mental). However, if they are being called fundamentally mental because they happen to be mechanistically involved in mental mechanisms, but still interact with all quarks in one consistent way everywhere, it’s logically possible.
Also you asked if these qualions could be indeterministic. It doesn’t matter if you apply this question to a hypothesized new particle. The question is, is indeterminism possible in a closed system? If so, we could postulate quarks as a source of indeterminism.
Is it therefore a priori logically incoherent? That’s what I’m trying to understand. Would you exclude a “Cartesian theatre” fundamental particle a priori?
What do you mean by mechanical? I think we’re both resting on some hidden assumption about dividing the mental from the physical/mechanical. I think you’re right that it’s hard to articulate, but this makes Eliezer’s original argument even more confusing. Could you clarify whether or not you’re agreeing with his argument?
I deduce that the above case would be inconsistent with reductionism. And I think that it is logically incoherent, because I think non-reductionism is logically incoherent, because I think that reductionism is equivalent with the idea of a closed universe, which I think is logically necessary. You may disagree with any step in the chain of this reasoning.
I think you guessed: I meant that there is no division between the mental and physical/mechanical. Believing that a division is a priori possible is definitely non-reductionist. If that is what Eliezer is saying, then I agree with him.
To summarize, my argument is:
[logic --> closed universe --> reductionism --> no division between the mental and the physical/mechanical]
Yes, and it does.
Could you explain? If I were presented with a data sheet full of numbers, and told “these are the point coordinates of the fundamental building blocks of three entities. Please tell me what these entities are, and if applicable, what they are about” I would be unable to do so. Would you?
Given a computer that can handle the representation and convert it into a form acceptable to the interface of your mind, this data can be converted into a high-level description. The data determines its high-level properties, even if you are unable to extract them, just as a given number determines which prime factors it has, even if you are unable to factor it.
I happen to agree. However, the claim of reductionism is that what you’ve described is the case for ALL entities. I’m trying to figure out why this claim is logically necessary, and why any disagreement is a confusion.
The claim is about the absence of high-level concepts in the territory. These appear only in the mind, as computational abstractions in processing low-level data. The logical incoherence comes from the disagreement between the definition of high-level concepts as classes of states of the territory, which their role in the mind’s algorithm entails, and the assumption that the very same concepts obey laws of physics. It’s virtually impossible for the convenience of computational abstraction to correspond exactly to the reality of physical laws, and even more impossible for this correspondence to persist. High-level concepts are ever changing in minds according to chance and choice, while fundamental laws are a given, not subservient to telepathic teleological necessity.
Edit: changed “classes of low-level concepts” to “classes of states of the territory”.
That was a bit confusing, and I have to go now, so I’ll try and give a more thorough response later. I’ll just say right now that I don’t think it’s as easy as you claim to differentiate between “higher-level” and “lower-level” entities/concepts/laws, or rather, to decide whether an entity is actually a fundamental thing with laws, or whether it’s just a concept. You appeal to changeability, but this seems like unsteady ground.
EDIT: Here’s a better way of formulating my objection: tell me the obvious, a priori logically necessary criteria for a person to distinguish between “entities within the territory” and “high-level concepts.” If you can’t give any, then this is a big problem: you don’t know that the higher level entities aren’t within the territory. They could be within the territory, or they could be “computational abstractions.” Either position is logically tenable, so it makes no sense to say that this is where the logical incoherence comes in.
But surely there’s something in the painting that is causing the observer to have different thoughts for different subjects. But that something in the painting is not anything discernible on the level of quarks. This is why I brought the example up, after all. It was in response to:
I believe (I could be wrong, since I started this thread asking for a clarification) that the implication of this statement (derived from the context) was that “brains made of quarks can’t think about things as if they’re irreducibly not made of quarks.”
First of all, saying “brains made of quarks can’t think [blank] because quarks themselves aren’t [blank],” seems to me equivalent to saying that paintings can’t be about something because quarks can’t be about something. It’s confusing the abilities and properties of one level for those of another. I know this is a stretch, but be generous, because I think the parallelism is important.
Second of all, we think about things as if they’re not quarks all the time. We can “predict” or “envision” the subject of the painting without thinking about the quark coordinates at all (and such coordinates would not help us envision or predict anything to do with the subject).
So I clearly need some help understanding what Eliezer actually meant. I find no reason to believe that brains made of quarks can’t think about things as if they’re not made of quarks. (Or rather, Eliezer only seems to allow this if it’s a “confusion.” I don’t understand what he means by this.)
What are some examples of recent progress in AI?
In several of Eliezer’s talks, such as this one, he’s mentioned that AI research has been progressing at around the expected rate for problems of similar difficulty. He also mentioned that we’ve reached around the intelligence level of a lizard so far.
Ideally I’d like to have some examples I can give to people when they say things like “AI is never going to work”—the only examples I’ve been able to come up with so far have been AI in games, but they don’t seem to think that counts because “it’s just a game”.
The Roomba is an example that seems to get a bit more respect (although it seems like a much simpler problem than many game AIs to me), but after that I pretty much run out of examples. Maybe I’m just not thinking hard enough because a lot of AI isn’t called AI when it becomes mainstream?
Examples that are more ‘geeky’ would also be good for me, even if they would be dismissed by non-geeky people I meet.
I see 7 upvotes but no answers. Should I conclude that even those who think AI is attainable find nothing to boast of in the record so far?
I usually cite the DARPA Grand Challenge, which I gather was won using such advanced modern methods as particle filtering (a Bayesian technique).
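For anyone wondering what particle filtering amounts to, here is a bare-bones sketch of a bootstrap particle filter on a toy one-dimensional localization problem. The constants and noise model are illustrative assumptions on my part; the actual Grand Challenge filters were vastly more elaborate.

```python
# Minimal bootstrap particle filter: 1-D robot localization toy example.
import math
import random

NUM_PARTICLES = 1000

def gauss_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def filter_step(particles, control, measurement):
    # 1. Predict: move each particle per the control input, plus motion noise.
    particles = [p + control + random.gauss(0, 1.0) for p in particles]
    # 2. Weight: score each particle by how well it explains the measurement.
    weights = [gauss_pdf(measurement, p, 2.0) for p in particles]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # 3. Resample: draw a new population, favoring high-weight particles.
    return random.choices(particles, weights=weights, k=NUM_PARTICLES)

# Particles are hypotheses about the robot's position, initially spread out.
particles = [random.uniform(0, 100) for _ in range(NUM_PARTICLES)]
true_pos = 50.0
for _ in range(20):
    true_pos += 1.0                      # robot moves +1 per step
    z = true_pos + random.gauss(0, 2.0)  # noisy sensor reading
    particles = filter_step(particles, 1.0, z)

estimate = sum(particles) / len(particles)
print(f"true={true_pos:.1f} estimate={estimate:.1f}")
```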
How about Google? Face-recognition technology? Big Dog? The theoretical work of Marcus Hutter? Current chess AIs that are now so good that they utterly cream the best humans, to such an extent that human/computer matches aren’t even attempted any more? How about Asimo?
Last time I read much about computer chess, the better programs were still relying primarily on brute-force search with some minor algorithmic optimizations to prune the search space, together with enormous databases for openings and endgames. Are there actually chess programs nowadays that deserve to be called intelligent?
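For reference, the brute-force core of those engines is minimax search with alpha-beta pruning. Here is the bare skeleton, demonstrated on a trivial stand-in (a Nim-like subtraction game) rather than chess, since only the search structure matters here; real engines add evaluation functions, opening books, endgame tables, and many heuristics on top:

```python
# Minimax with alpha-beta pruning on a Nim-like game: players alternately
# take 1-3 stones from a pile; whoever takes the last stone wins.
# Returns +1 if the maximizing player can force a win, -1 otherwise.

def alphabeta(pile, alpha, beta, maximizing):
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if maximizing:
        value = -2
        for take in (1, 2, 3):
            if take <= pile:
                value = max(value, alphabeta(pile - take, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # cutoff: the opponent will never allow this line
        return value
    else:
        value = 2
        for take in (1, 2, 3):
            if take <= pile:
                value = min(value, alphabeta(pile - take, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:
                    break  # cutoff
        return value

print(alphabeta(12, -2, 2, True))  # -1: piles divisible by 4 are lost
print(alphabeta(13, -2, 2, True))  # +1: otherwise the mover can force a win
```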
So what? If you get killed by an uFAI, you cannot appeal to reality and say “but the AI just used a brute-force search method with some minor algorithmic optimizations to prune the search space, together with enormous databases of weapons technology and science”, so can you please unkill me?
The problem domain of chess happens to be one where brute-force search with some clever tricks actually works. Other domains are less like this, such as allowing a robot to walk (Asimo, Big Dog), where researchers are using other, more appropriate techniques such as machine learning.
What is your criterion for “deserving to be called intelligent”, anyway?
Your first point—that you can be easily killed or checkmated by a sufficiently powerful program regardless of how it is implemented—is true but irrelevant: the question was not whether the program is powerful and effective (which I would not dispute) but whether it deserves to be called intelligent. You can say that whether it is intelligent or not is unimportant and that what matters is how effective it is, but it is wrong to conflate the two questions and pretend that an answer for one is an answer for the other, unless you are going to make an explicit argument that they are isomorphic or equivalent in some way.
I would argue that a problem domain where brute-force search with simple optimizations actually works extremely well is a problem domain that does not require intelligence. If brute-force search with a few optimizations is intelligent, then a program for factoring numbers is an artificial intelligence.
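To make that comparison concrete, here is a complete trial-division factoring program: pure exhaustive search, and nobody’s idea of a mind.

```python
# Trial-division factorization: dumb exhaustive search, no insight required.
def factorize(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(factorize(9991))  # [97, 103]
```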
I don’t have a criterion for intelligence in mind, but like porn, “I know it when I see it”. We might disagree about edge cases, but almost all of us will agree that a number factoring program isn’t “intelligent” in any interesting sense of the term. That’s not to say that it might not be fantastically effective, or that a similarly dumb program with weapons as actuators might not be a formidable foe, but it’s a different question to that of intelligence.
The more I talk to people about intelligence, the more I realize Eliezer et al.’s wisdom in abandoning the term in favour of “optimization process”.
Your intuitive criterion for labelling something as intelligent is not a good thing to be going with. For example, it seems that as soon as a computer can reliably outperform humans at some task, we drop that task from our intuitive definition of “task demonstrating true intelligence”.
150 years ago, factoring large numbers would have been considered to be the pinnacle of true intelligence.
50 years ago, chess was considered the ultimate test of true intelligence—which is why people made bets that AI would never beat the best human chess players. Perhaps in 50 years time, the ability to suffer from cognitive biases or to have one’s thought biased by emotional factors will be considered the true standard of intelligence, because computers have beaten us at everything else.
We have a moving goalpost problem.
But in any case, the ability of computers to optimize the world is what matters for the activities of SIAI, not some arbitrary, ill-defined, time-varying intuitive notion of “true intelligence”—which seems to behave like the end of the rainbow: the more you approach it, the more it recedes.
And the reason for that is simple—the real working definition of “intelligence” in our brains is something like, “that invisible quality our built-in detectors label as ‘mind’ or ‘agency’”. That is, intelligence is an assumed property of things that trip our “agent” detector, not a real physical quality.
Intuitively, we can only think of something as being intelligent to the extent that it seems “animate”. If we discover that the thing is not “animate”, then our built-in detectors stop considering it an agency… in much the same way that we stopped believing in wind spirits after figuring out weather. (Historically, the same detectors would have needed to discern an accidental branch movement from the activity of an intelligent predator-agent.)
So, even though a person without the appropriate understanding might perceive a thermostat as displaying intelligent behavior, as soon as they understand the thermostat’s workings as a mechanical device, the brain stops labeling it as animate, and therefore considers it to be not “intelligent” any more.
This is one reason why it’s really hard for truly reductionist psychologies to catch on: the brain resists grasping itself as mechanical, and insists on projecting “intelligence” onto its own mechanical processes. (Which is why we have oxymoronic terms like “unconscious mind”, and why the first response many people have to PCT ideas is that their controllers are hostile entities trying to “control” them in the way a human agent might, rather than as a thermostat does.)
So, AI will always be in retreat, because anything we can understand mechanically, our brain will refuse to grant that elusive label of “mind”. To our brains, something mechanically grasped cannot be an agent. (Which may lead to interesting consequences when we eventually fully grasp ourselves.)
This is an important insight. The psychological effects of full self-understanding could be extremely distressing for the human concerned, especially as we tend to reserve moral status to “agents” rather than “machines”. In fact, I suspect that a large component of the depression I have been going through since really grasping the concept of “cognitive bias” is because my mind has started to classify itself as “mechanical” rather than “animate”.
You are wrong. Factoring large numbers has never been considered the pinnacle of true intelligence. Find me a reference if you expect me to believe that circa 1859 something so simple was considered the pinnacle of anything.
I completely agree about the moving goalposts critique, and I think there is good AI and has been great progress, but when you find yourself defending the idea that a program that factors numbers is a good example of artificial intelligence, alarm bells should start ringing, regardless of whether you are talking about intelligence or optimization.
I think that this is perhaps a bad example, because even today if you ask someone on the street to find the factors of 9,991 there’s no way they’ll do it, and if you show them someone who can do it, they will say “wow, that’s really clever; she must be intelligent”.
So it is still the case that factoring 9991 would be considered by most people to require lots of intelligence. Hell, most people couldn’t factorize 100, never mind 9,991 or 453,443.
People are stupider than you think.
You said it was “considered to be the pinnacle of intelligence” 150 years ago, that is, nearly two centuries after calculus was invented, and now you’re interpreting that as meaning “a person on the street would think that intelligent.” And you said I was moving goalposts?
It is a bad example, but it’s a bad example because we could explain the algorithm to somebody in about 5 minutes.
I don’t think we disagree. I just think that if chess programs are no more sophisticated now than they were 5 or 10 years ago, then they’re poor examples of intelligence.
In the previous open thread, there was a request that we put together The Simple Math of Everything. There is now a wiki page, but it only has one section. Please contribute.
People who contribute to the wiki are my heroes.
Do you know who the real heroes are? The guys who wake up every morning, and go into their normal jobs, and get a distress call from the commissioner, and take off their glasses and change into capes and fly around fighting crime. Those are the real heroes.
I want to, and intended to write a top-level post about it, but my internship plus studying math has taken up the majority of my time. I will try to squeeze some LW time in, though.
I added links to How Everything Works: Making Physics out of the Ordinary and the Khan Academy’s YouTube video library, with 800+ short video tutorials ranging from basic arithmetic and math to college-level physics and finance.
Some questions about the site:
1) How come there’s no place for a user profile? Or am I just too stupid to find it? I know there was a thread a while back to post about yourself, and I joined LW on facebook, but it would be much easier for people to see a profile when they click on someone’s name.
2) What’s with the default settings for what comments “float to the top” of the comment list? Not to whine or anything, but I made a comment that got modded to 11 on the last Perceptual Control Theory thread, followed up on by a few other highly-modded comments and a rather fruitful discussion that involved input from someone who had tried some of the “conclusive” demos pjeby linked to. But the thread got buried under the rest.
Regarding 2, I think the default setting (Popular) is to display comments as a function of karma and time since posting. As comments get old, newer comments float to the top even if the older ones have some positive karma. If some comment has very high karma, I guess it outweighs the time constraint and stays at the top.
… and the ageing function is tuned for Reddit traffic volumes, so on this site, everything ages too fast and can’t stay in popular for very long at all. Open source contributions to fix this are welcome.
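For the curious, here is a rough sketch of how a Reddit-style “hot” score trades karma against age. The formula and constants below are my illustrative guesses, not the actual Less Wrong code, but they show why a decay constant tuned for Reddit’s traffic buries comments quickly on a slower site:

```python
# Sketch of a Reddit-style "hot" ranking: score grows logarithmically with
# karma but falls linearly with age, so the decay constant determines how
# long anything can stay on top. (Constants are illustrative only.)
import math

DECAY_SECONDS = 45000.0  # one log10 unit of karma buys ~12.5 hours

def hot(karma, age_seconds):
    order = math.log10(max(abs(karma), 1))
    sign = 1 if karma > 0 else (-1 if karma < 0 else 0)
    return sign * order - age_seconds / DECAY_SECONDS

# An 11-karma comment from a day ago vs. a 1-karma comment from an hour ago:
print(hot(11, 86400))  # about -0.88
print(hot(1, 3600))    # about -0.08: the newer comment ranks higher
```

With a larger DECAY_SECONDS, high-karma threads like the one described above would stay visible much longer.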
Userpages are in the works, supposedly.
They’re on the list, but no-one’s working on them at the moment. It should be pretty easy to link up the wiki user pages. Open source contributions are welcome.
Some people commented on the “inner circuits” discussion that they didn’t want this site to turn into a self-help or self-improvement forum, which made me wonder: are there any open and relatively high-quality forums or communities for discussing self-improvement, in general or in specific?
I for one don’t object to discussions of self-improvement per se, only insist that they meet the intellectual standards of LW.
That is precisely my problem with them: in my humble opinion, the discussions about self-improvement have not met the intellectual standards of the other discussions here. And since they have represented a significant fraction of all comments here, they have decreased the intellectual standards of the average comment enough to make me worry that the kind of participants I most want to interact with are leaving Less Wrong at a rate higher than the other participants are.
EDIT: It would ease my worries if they were easier to avoid: for example, if there were fewer of them in the comment sections of posts with no obvious connection to self-improvement.
I would suggest that most people here are rational enough in terms of epistemic rationality, but their instrumental rationality is lagging behind, if I may call it this way. Hence the need for self-improvement stuff.
Once you reach a certain level of epistemic rationality, you realize that what you want next is not more refined epistemic rationality (that would be sub-optimal); you’d rather have… more winning.
Personal Development for Smart People Forums
...doesn’t strike me as overwhelmingly high-quality.
It looks cheesy, but I’ve heard quite a few people like it, and I’ve read some interesting posts on his blog.
I read the blog, which is good in parts, but I’ve never found the forums worth the time.
After seeing:
I’m gonna recant my previous post. Not worth your time (and when did dreaming and lucid dreaming become “paranormal”?).
Edit:
Sheesh, half of these forums are the same thing.
Association fallacy. Just because the forums contain sections abhorrent to you, doesn’t mean other sections are just as bad. Also, 3 of 17 is hardly half.
Are you suggesting it does not indicate something about the general reliability of the community? I think this is silly.
Sure it is, modulo hyperbole :-)
Yes. I’m saying that those forums are quite large, and people who post in one section are unlikely to post in other sections. We can rely on, say, people in tech section to know tech.
True; the largeness is a factor.
I hope LW has room for self-help/improvement as well as other topics.
I’d prefer that it stay focused on refining the art of human rationality.
And I’d like to know about separate quality place to discuss self-help/improvement, as the original poster suggests.
ISTM that instrumental rationality overlaps a great deal with self-help/improvement. We could avoid the latter only by restricting ourselves to discussing epistemic rationality.
I don’t want the more practical or self-improvement posts to overwhelm the more academic ones, but I don’t think the balance is too far off yet.
It’s a subset of it. But there are a lot of other self-help topics that don’t belong here except (as for any topic that isn’t rationality) when there’s a specific rationality angle being discussed: diet, physical fitness, personal organisation (e.g. things like GTD and 43 Folders), and so on.
Inspired by Yvain’s post on Dr. Ramachandran’s model of two different reasoning models located in the two hemispheres, I am considering the hypothesis that in my normal everyday interactions, I am a walking, talking, right brain confabulating apologist. I do not update my model of how the world works unless I discover a logical inconsistency. Instead, I will find a way to fit all evidence into my preexisting model.
I’m a theist, and I’ve spent time on Less Wrong trying to be critical of this view without success. I’ve already ascertained that God’s existence doesn’t present a logical inconsistency. (An atheist thinks God’s existence is illogical, but based on assumptions that are not necessary.) All empirical evidence I’ll ever receive can be consistently incorporated into a God model. (Since, for example, I can question my perception or my sanity before questioning whether God exists.)
I’m an unusual theist, however, in that I have no emotional attachment to believing in God. The God that I believe in is already impersonal. Also, I’ve ascertained while on Less Wrong that atheism is also not logically inconsistent and, from what I can tell, is not a disadvantageous philosophical position. So how can I trigger a switch? Why is it not easy to flip from one position to another?
I hypothesize that there is something analogous to an activation energy required to update one’s model, so that there must be some motivation or impetus to update it. For example, perhaps a new model explains things in a simpler way than the current model, and thus would be chosen for aesthetic reasons, or perhaps the new model would afford some practical benefit. (A difference in predictions that affects anything tangible would be an example of a practical benefit.)
(A) Choosing atheism because it is more aesthetic than theism.
I already prefer atheism to the extent that it is a simpler theory. (Some form of Occam’s Razor.) However, it leaves a hole that God is shaped to fit, so, finally, I don’t consider it to be more aesthetic.
This hole is the reason/cause/explanation for the existence and causal dependence/inter-connectedness of everything. As far as I am aware, the atheist model has no comment on this. However, apparently you don’t experience any hole. Tell me, how does your model cover this hole? Perhaps if I could see that atheism is just as good as theism as a model, I could perform the switch, or at least hold them both as simultaneously equal hypotheses.
(B) Choosing atheism because it would provide some practical benefit.
In what way could becoming an atheist possibly improve my life for the better? Is there any actual, tangible benefit? Is there some cost that I’m not aware of that theism is exacting? As far as I know, there is no cost to being theist, because I recognize no extra guilt or obligation for my belief. Organized religion does provide some non-negligible burden to my everyday life, but that is independent of my belief. If I was an atheist, would anything in my life be easier or better?
It’s not that I, an atheist, don’t experience such a hole. Far from it! I inhabit a gaping and mysterious void of ignorance. The difference is that while you see the outline of the hole and find something hole-shaped to fit into it (God), I am more interested in changing the dimensions of the hole. I don’t want to explain why the hole exists, I want to destroy the hole altogether. The hole isn’t a problem to be solved by the atheist model but by the scientific model.
A thousand years ago the hole was a lot bigger, and yet God was still perfectly hole-shaped. Consider that one day the hole may no longer exist: that the great and ineffable cause is finally exploded by a brilliant theory that describes the very underpinnings of the universe, and we all go “Ohhhh… that makes sense.”
Whither then doth hole-shaped God go?
If you want to maximize practical benefit, become a Christian. Or a Muslim, if you live in the Middle East. Or a superstitious atheist, if you live in China. Being an atheist, for myself, at least, is not about practical benefit. It’s just that I don’t have any rational way of believing anything else.
So you agree there is a hole, and that this hole is fillable, by science. May I take this to mean that you do believe that there is an explanation? Suppose that science could provide an explanation: what would it look like? I understand that you don’t know (and I don’t either), but to speculate…
Exactly, sounds like God to me. I would be happy with God = a single universal theory of everything, or, more precisely, any set of laws which also included some sort of self-explanation.
Many mathematicians and physicists identify God with mathematics and/or the physical laws of the universe. Einstein believed in such a God. I think that belief in God, distilled to its most elemental component, is the belief that there is a consistent theory of everything, whether this theory is knowable or not.
Organized religions make up a lot of stuff about what the consistent theory consists of. (For example, were humans part of the plan? If the universe is deterministic, then they were.) Eliezer is correct that they focus on overly positive aspects. Perhaps they should just stay silent and appeal to the mystery, but they insist upon speculating, and then make it dogma. I find the speculation interesting, but find all kinds of dogma oppressive. You’re a sinner (religion) or an idiot (Less Wrong).
It seems to me that your idea of God has no volition and is not equipped to care about anything we do. Why is the idea important, then? Why is it a worthwhile idea to collect the regularities of the Universe in a bag labelled “God”?
First: I wholly agree that my idea of God has no volition and is not equipped to care about anything we do. This is the view of God I’m defending, not a personal God.
Good question—I’ve been anticipating it for some time now. There are three reasons why the idea is important.
(1) Many people (especially scientists) believe in this God. Many or most world religions actually assert a God that is much more like the God I describe than you might think. So I would like atheists to understand that when they assert that belief in God is irrational or absurd, they are really (usually) just making arguments that there is no personal God, which is annoying to theists who believe in the impersonal God. Perhaps mostly because, as a result of the identification God = personal God, they can’t express their beliefs in a meaningful way. For example, even after having sketched my view of God, it was still implied that I “suppose that the entire universe is the creation of some infinite mind-like thing with an unconditional respect for reason!”
(2) Many logical arguments against God don’t focus on properties specific to a personal God (the problem of evil is a noteworthy exception). If these arguments show that no God of any kind can exist, and yet my watered-down, do-nothing version of God can exist, then what went wrong with the reasoning? My favorite example of this is the argument that a supreme power would be too complex to exist. (Are the fundamental physical laws too complex to exist?) So I would really like to learn, after all, how anyone can tell the difference between a logical argument and just a line of reasoning that conforms with your point of view.
(3) As humanists, we need to identify what we have in common and not exaggerate differences. I think a lot of people, theists and atheists alike, have an innate belief that the world must make sense. As some people have pointed out in comments to me, it is possible for them to hold this as a theory rather than a belief; if so, I suspect that our personalities, or the way our minds are structured, are really quite different. And this difference is not a good reason to think of most of humanity as idiotic. I strongly assert that what theists really can’t let go of (even the ones who believe in a personal God) is the idea of a meaningful, consistent universe. So practically, you’d make a lot more progress pulling them “sideways” towards belief in an impersonal God than towards no God. I’ve made a similar argument here.
Finally, there is an aspect to your question that I cannot fully address. It is: what difference does believing in God make if there’s no reason to worship him and it would have no effect on my behavior? I have no response to this because I don’t think it does make a difference. I have no objection to people being atheists. But I think some people innately do have a belief in God, and for whatever reason, it is connected with their motivation to explore the universe. If I don’t believe in God—if I consider it unimportant whether or not the world actually makes sense—then I lose my interest in it. I might just take psychedelic drugs all the time. From observation of the true atheists here (who seem more or less reasonable) I suspect this is a difference in innate constitution.
I’m interested in learning how true atheists avoid this feeling of nihilism. I was actually quite comfortable with nihilism once, but ultimately rejected it in favor of belief in an objective external universe. Which is why I am so interested in how other empiricists organize their worldview. When I say that my belief in God is innate, I should qualify that it may only be innate when I am simultaneously being an empiricist.
This seems to suggest that you either are not truly convinced that (your) God exists, or that it does not bother you when people are wrong.
Good point. I’ve been struggling for a few days with what I can possibly mean by “God exists” while still holding that this is not an empirical fact that can ever be resolved, because the existence of God is not a scientific question. We agree, I think, that it must be based on faith or else it’s just a mundane empirical fact.
On the one hand, it seems to be a matter of interpretation. Even if we had proof that there was a set of universal laws explaining everything, I wouldn’t require that someone else find this meaningful. In this case, I personally would prefer if they said “God is meaningless” rather than “God doesn’t exist” but I can’t control what definition of God they use.
On the other hand, it may be a problem with the meaning of “existence”. I find that theists can mean God exists in a literal sense, in which case I find their views on God to be naive (and wrong), or they mean God exists in the way I mean, in which case they also seem unable to communicate what this means. I’ve been seriously toying with the idea that this concept of “existence” is an artifact of the way people-like-me think. When I think about something abstract, it seems to exist in a way, and this is the way I mean.
I’ve argued this POV in detail (but without much success) in this thread about the difference between being a frequentist or a Bayesian. I’ve pretty much given up on this explanation, but my interest in this topic was motivated by considering an analogy between belief in the “existence” of probability and belief in the “existence” of God. For me, while God and probability are quite different, they “exist” in a similar fashion.
Consider the set of laws of the universe, given that they exist. In what sense do they exist? You can measure their effects of course, but you infer their existence.
Perhaps I misunderstood him, but I nevertheless learned from Vladimir_Nesov that believing in the “existence” of physical laws is kind of like believing in some kind of phlogiston. I don’t consider it a demotion of God for him to only exist in the way that physical laws exist. I think this is a linguistic/communication problem only.
It seems like your problem might be isomorphic to the question of whether numbers exist.
Agreed.
I don’t think people have innate beliefs in God. I think people are creative, and afraid of the unknown, and when you put the two together, you get a lot of imagination. You might get a God who grants you immortality in paradise, a cycle of rebirth that goes on forever until you achieve enlightenment, or just a sense of being connected to everything in a mysterious way. All of these things are interchangeable. A boy is born in India and grows up to be a Hindu. The same boy, transplanted to Saudi Arabia, grows up to be a Muslim. Again, moved to England, he may turn out to be an Anglican, or a Shintoist in Japan, or a New Ager in California, etc. In each case, he may say that the “hole” is filled by his beliefs, that he has an explanation for everything, a universal theory.
The reality is that this kind of belief is no different from any other. It’s no different than preferring cats to dogs, or being afraid of spiders. We aren’t born with a reflexive cats > dogs preference in our brain, nor are we born with a God-shaped hole in our heads. The reason the hole looks God-shaped is because we made God to fit. When Laplace was queried by Napoleon about the lack of the “Creator” in his work on the solar system, he replied, “I have had no need of that hypothesis.” Is this because Laplace did not have the same motivation to explore the universe as his religious colleagues? No, it is because Laplace had an even stronger desire to explore the universe, and realized that involving God in his exploration was like finding the X on the map and laying bricks over it.
It’s extremely important to me that the world makes sense, and not believing in God is part of why I think it does. Even a God that lacks volition or interest, a totally impersonal God, injects enough uncertainty into the universe to make it absurd.
We don’t. It’s called “existential depression”, which usually first presents when you realize that Death = Not Being, although it’s probably worse if, like me, you grew up believing you’d go to heaven when you died. I don’t see how believing in an impersonal, uninterested God of physics could possibly help mitigate the feeling, though. I prefer to just be pragmatic about it; even if I could “live forever”, I probably wouldn’t escape the heat death of the universe. My instinct is to stay alive and try to enjoy myself, though, so that’s what I do. And I take psychedelic drugs.
But not like yours in the key aspects I noted—those aspects imply a lack of any need for religious practices.
My impression is that the arguments advocates of atheism are making to the public (as opposed to academia) are largely against the idea of a personal God. These atheists would just shrug their shoulders at your stance.
I assert that what theists really can’t let go of is a social setting and their place within it.
I’ve been writing too quickly. I meant that many/most world religions assert this God theologically—it is not espoused in popular culture. The reason for this is that religions like to be accessible, whereas this God is rather abstract for most people. (I know a Catholic priest who winces every time someone says Christ died for their sins.) Also, organized religion does want to wield control. So the churches themselves may be huge dogmatic monsters in direct opposition to the theological basis of their faith. (For example, Jesus is described as having been against organized religion and the building of any churches, yet organized Christian religions completely ignore this.)
Please do. It’s not the response I’ve been receiving.
We should look into why theists are so resistant to conversion. We disagree, but I think we may make some headway when we compare the evidence we have for our priors.
Honestly, I don’t know. I think we have some pretty good tools that could help us find an explanation, and I hope that we’ll have the universe completely sussed out one day and that everything makes sense… but then again, it could be God. I just don’t think filling the hole with God is a useful step toward figuring out what the hole is about.
If all you want is a reason why the universe does what it does, why not just settle for the anthropic principle? I guess I don’t really understand why you would apply the label “God” to a universal theory of everything that explained itself. Wherein do you see the God-nature of the physical rules of this universe? And if you had this universal theory, where is the usefulness of calling it God? Aren’t you just adding unnecessary semantic complexity?
I would hardly compare believing in a consistent theory of everything with believing in God, for any meaningful definition of “God”.
A deterministic universe doesn’t mean there must be a plan. Water doesn’t plan to conform to the shape of the container it is poured into.
You don’t seem to be understanding that for me, “everything makes sense” would be God.
I agree: we definitely don’t want to dismiss the mysteries of science with the word “God”, as if that answered anything. Instead, we feel that studying the mysteries of science is studying God. What we already know about the universe is also God, and the consistency of what we know bolsters our belief in God. Einstein is quoted as having said that the more he studies science, the more he believes in God (as cited in Holt 1997).
This is semantic quibbling. I’ve observed that atheists, generally speaking, seem uncomfortable with using words in certain ways. A “plan” need only have anthropic characteristics if you’re talking about a human plan.
Is there some reason you choose to use the word “God”? It seems like you could get away with calling the same concept “the Dao” or “the totality of things” or “the Force” or by a made-up word. That might trigger slightly fewer alarm bells around here.
You’re right; my purpose is larger than just trying to gain group acceptance regarding my belief in universal physical laws. I really want to gain some purchase in acceptance of belief in God, which I know is ambitious, but I keep trying. I’ll explain why.
I think religion poses a big problem. Perhaps I am somewhat hysterical, but I fear what religious conflict may yield over the next 20-200 years. And I think it is critically important to handle this problem with truth. While New Atheism seems to present a solution, it doesn’t present the truth. To me, it’s just another religious dogma, one that happens to be anti-religion. It gains support by asserting the supremacy of science because that is exactly where the conflict is: people believe in science but their religions don’t. But New Atheism is spiritually barren (in the secular sense of the word). People who care about meaning won’t convert.
I think rationalists should take the supremacy of science (empiricism) and provide a better model for religion. Whether God exists or not isn’t the right question; it’s not an empirical fact about the universe. The question is: whether you believe in God or not, how do you tack towards the truth about anything? The truth isn’t in the literal translation of the Bible, not because God doesn’t exist, but because trusting authority is not good epistemology. Obama said it well in a speech I can’t find at the moment: it’s not that we need to reject the subjective religious experience of a theist, but they need to understand that we have only empirical evidence to go by when evaluating their beliefs, and that’s all they have to evaluate each other’s. In other words, you can’t argue that X must be done because God wants it so. You must find empirical evidence that X is better. This is completely rational in a pluralistic society. It will raise the sanity waterline, and religious beliefs will depolarize.
I’ve said before: I think it is our duty to give people a better model for religion, not take away the meaning religion is giving. We can have meaning and the truth together.
I really don’t get how identifying God with the regularities of the universe rescues “meaning”. If the universe existed without humans or comparable beings, would it have meaning? (I say no.) Conversely, if we refuse to identify God with the regularities of the universe, does that imply that the universe is without meaning? (Again, I say no.)
Meaning is something humans create.
Aaah! Why? You’re right, I totally do not understand what you are saying. You keep saying “God” but I have zero idea what you mean by “God”. If the universe is God, then when we say “God” we are just saying “the universe”, which means “God” is a meaningless word. It’s like you’re saying you’ve invented a new kind of fruit called a plibb, and then handing me an apple.
Einstein also said that God doesn’t play dice with the universe. He was wrong about the dice part, why couldn’t he be wrong about the God part? It sounds an awful lot like you’re appealing to authority here.
There is no other kind of plan*. It’s not quibbling, you’re using an anthropic word in a non-anthropic context. The word “plan” requires design and carrying out a plan requires intention.
*Save for exceptions found elsewhere in the animal kingdom, e.g. wolf packs and chimpanzees.
I feel like I’ve made a lot of progress today, because I’ve started to get the counter-arguments and questions I was expecting. What I need to do next is buttress my argument that my belief in universal physical laws is bona fide theistic belief.
Here’s a sketch of the argument:
Major theistic religions assert the existence of a supreme, all-powerful entity. (They also assert this entity is good, but let’s leave that aside for now.) Universal physical laws would qualify as all-powerful because they are universal physical laws if and only if they cannot be violated. Universal physical laws also explain everything, so we recover that God is omnipresent (“omniscient” when applied to a mind), explaining and accounting for everything that exists.
Now for the problem of goodness. Is goodness required for the existence of God, or is it just an asserted property? (Eliezer pointed out that religions assert overly positive statements; can we dispose of that without disposing of God?) So maybe theists were wrong about this property, maybe we need to look more closely at their theology to see what they mean by “good” (Keith Ward argues that the Catholic notion of goodness is actually quite limited and qualified), or maybe we have to admit this comes down to interpretation. If none of these things are true, and we agree that goodness is a necessary property for a cogent definition of God and that it isn’t met, then I would concede that God doesn’t exist. But I think there’s plenty of room for debate here. (My personal stance is that the universe is neutral and goodness is not a necessary property.)
I presume that’s what Einstein thought, as he was opposed to the notion of a personal God (even yielding Nobel prize acceptance time to the topic). (The appeal to authority is appropriate here because I need to maintain that there are other theists with my point of view, and citing Einstein is most verifiable.)
They’re prohibited from doing a whole lot of things.
Einstein has confused so many people by his various statements about religion that we’d better leave him out. In fact everybody, no matter where they fall on the atheist-religious spectrum, goes around saying Einstein would’ve agreed with them. As if that mattered.
Opinions differ. But I’d guess the overwhelming majority of believers (yep even Deists) consider God to be mind-like, not equation-like, so argument from common use doesn’t favor you.
Under your view it would seem god lacks a mind, will, intentionality, etc., no? It’s going to be hard to convince me that those are optional characteristics of god as conceived by major theistic religions.
ETA: I’ve voted your comment up because I don’t think it deserves to be at a −5… I’d be happy to see you come up with a line of reasoning that supports your conclusions, but I don’t think this is it.
In response to your points above and here and similar ones throughout this thread, I concede I need to narrow my understanding of theism to mean belief in a personal, anthropomorphic God. I’ve asked several theists (results described here) and this appears to be the common view.
As far as I can tell, you are arguing that to you, inviolable physical laws governing the universe are equivalent to what you call “God”. If such laws don’t exist, neither does God; if they do, they are God. Is this a fair characterization?
If so, here’s my question. I also accept that there may be universal physical laws (in fact, I strongly suspect there are). To me, however, they are not God. To me, they would disallow God, by every definition of God I can think of, personal or impersonal. But seeing as we both share a belief in the existence of universal physical laws, why do you see God-nature where I just see nature?
Einstein’s belief in God was a belief in something “subtle, intangible and inexplicable” that was a “force beyond anything that we can comprehend”*. If we do some day comprehend that force, surely it would no longer possess any inexplicable “God-nature”… but simply be better information about the universe we happen to inhabit?
*The Diary of a Cosmopolitan, HG Kessler
There’s not much point in defending my beliefs, since my purpose has been to defend those of theists and the link between my beliefs and those of “theists” is not as strong as I thought. Nevertheless, to respond to your question:
An anthropomorphic God is seen as externally manipulating the universe. I think it is natural to ask, if such a God existed and he is omnipotent, why didn’t he make the universe the way he wanted it to be in the first place? It seems to me that not building self-correction into the system would be evidence of an imperfect, if not flawed, design.
I would expect that a universe created by an omnipotent God would be created so perfectly that it would just run itself. When I carry the argument even further, throwing in a bias for mathematics and logic, I would also expect that the universe would contain the rules for its own creation (self-creation), and even its own justification (self-justification). That would be most perfect. It wouldn’t make God obsolete; it would be God. (God as creator.) This is just speculation: what I would expect of a perfect, omnipotent Creator.
Yet, finally, that’s exactly what we have with empiricism: there’s nothing externally manipulating the universe, so the universe is completely self-determined. The answers to the big questions (the why and how of creation and existence) must exist here in the universe, not somewhere else. We can understand the universe by observing what it does.
I guess I never thought the right question was whether God exists or not, but where he exists and how he exists. I think it’s actually meaningless to say he doesn’t exist, and the dominant view here is that it’s meaningless to say that he does exist if he’s not the anthropomorphic, personal God that most people think of. So I’ve made my argument several different ways, not always as clearly and directly as I should have, but I did my best and now I’ll leave the debate to the next theist (or devil’s advocate) that comes along.
It seems to me that you take the existence of universal physical laws for granted to some extent.
I see God in a deterministic and ordered universe, rather than a random and disordered one. If something seems haphazard, I intuitively feel that it’s meaningless. However, if it is highly constrained, I feel that it is meaningful because it is exactly the way it had to be. (My final perspective is a little more sophisticated than this, though, because order can emerge from random behavior, and I’m interested in rules that describe this.)
In contrast, a God with volition doesn’t make sense to me. Why should things be meaningful just because they were arbitrarily dictated by a God manipulating the universe? I think it is much more logical that God’s desires would not be arbitrary: thus they would be described by rules, and thus God would not have volition. Likewise, if God is manipulating the universe on a day-to-day basis, he must not have written complete instructions in the source code, which seems kind of sloppy and imperfect to me. (The anthropomorphism here is a communication device only.)
I suspect this is widely true :-)
It’s Yvain’s post, not Eby’s.
The null universe is self-consistent, but it isn’t unique in that aspect. There is still the question of why priors are what they are.
Why do you think God fills the hole?
I suggest you take out everything that isn’t necessary for it to fill the hole (consciousness, omniscience, etc.). This would better satisfy Occam’s razor while still filling the hole. What’s left?
I’m not sure I understand.
There very well may be things independent of us. It just looks the same as if there isn’t.
The reason for causality is that the universe seems to work by boundary conditions and a wave-function. Entropy tends to increase as you go away from the boundary condition. This is true of many sufficiently-complex systems.
What is the null universe? The universe without God?
The materialist, Godless universe seems self-consistent, but, as you seem to agree, it doesn’t explain its own existence.
Mostly, I think that something different from the usual materialistic explanations must fill this hole. It’s conceivable that science could one day get around to explaining why there is “existence”, and why this existence has the particular rules it does, but I believe these answers are outside this universe, which I think of as a simulation being run in some larger context, and thus outside science (and supernatural by definition?).
I would love for someone wiser than me to tell me if anything at all can be deduced about whatever is left. I have no idea.
I don’t remember what I meant while writing that. It doesn’t sound close to anything I would say now, so maybe my ideas on that topic have changed.
A universe that contains nothing.
What do you mean by “outside this universe”? Isn’t everything inside this universe by definition? If you mean outside the part of the universe that we can interact with, how would it be any different?
I think a lot of this problem is people’s inherent tendency to look at plausibility rather than probability. That is, people will accept something if and only if it has a cause. If you watch a movie and a really unlikely event happens, you’ll accept it as long as you accepted everything leading up to it.
A particularly interesting example is the idea of an ontological paradox. People normally might think it’s weird, but it works, since it caused itself. My reaction is wondering how to calculate how probable it is.
The universe doesn’t have a cause, but it doesn’t need one. The universe doesn’t work that way. What it needs is non-zero prior probability.
By the way, is there a way to see replies to your posts? I thought there wasn’t, but it seems unlikely that you’ve been checking this post every day for the past two years.
Replies to your comments show up in your inbox. The little envelope under your name in the upper right of the screen turns red when there’s something new in your inbox.
IIRC, you don’t get notified of replies to your posts.
There’s also a chance of noticing replies to an old post or comment if you follow Recent Comments.
Suppose we run a cellular automaton simulation that starts with some initial conditions and subsequently updates with a set of simple rules (like Newtonian mechanics or quantum mechanics or whatever) that sufficiently intelligent beings within the simulation could deduce and call “science”. Perhaps they could figure out they were in a simulation; in that case, everything in the simulation is their universe (there needn’t be any limits to the size of this universe) and everything outside it (of which they may know nothing) is outside it.
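For concreteness, here is a minimal Python sketch of the kind of simulation described. The choice of a one-dimensional automaton, Wolfram’s Rule 110 as the “physics”, and the toy grid size are my own illustrative assumptions, not anything specified above:

```python
# A toy version of the simulated universe described above: a 1D
# cellular automaton whose update rule plays the role of "physics".
# Rule 110 and the grid size are illustrative assumptions only.

RULE = 110  # the "laws of physics"; beings inside could infer these

def step(cells, rule=RULE):
    """Advance one tick; each cell depends only on its local neighborhood."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        new.append((rule >> pattern) & 1)
    return new

# Initial (boundary) condition: a single live cell.
universe = [0] * 31
universe[15] = 1
for t in range(10):
    print("".join(".#"[c] for c in universe))
    universe = step(universe)
```

Beings inside the grid could, in principle, infer the rule from observed histories; nothing inside the grid points to the process running it.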
There are a few ways entities within the simulation could deduce a reality outside their own. First, any acausal inconsistencies in their physical laws would suggest someone tampering with the simulation. Similarly, any truly random elements would mean outside influence—though it would be a challenging project to be certain that any elements were truly random (and acausal). Another way they could deduce they were in a simulation is the lack of an explanation for the beginning of the simulation—though again, it would be challenging to be certain that there is NO explanation rather than just an unknown one.
If the beings want to call our universe part of their universe on account of these interactions (having initiated the simulation, seeded its random numbers, and occasionally interfered), that is fine, but then they need to differentiate between their simulation, in which their science applies, and the encapsulating universe, for which they don’t know what sort of rules apply.
Funny! When you get mail, such as this one, the envelope under your user name turns red until you check it. Yours is probably currently red.
I don’t mean how would it be different from not having an outside universe. I mean: how would our universe containing the reason that there is a universe be different than only their universe existing and containing the reason that there is a universe?
In other words, if you live in universe A, and either your universe exists for some reason you don’t understand, or it exists within universe B, which exists for some reason you don’t understand, why would the latter hypothesis be less confusing?
The latter hypothesis should in fact be more confusing; it’s isomorphic to the Creator’s creator problem.
I think this is a good question, and I wanted to think a while before replying. (My train of thought motivated some other comments in reply to this post.)
Our universe does look different than a universe containing an explanation for existence. The universe we imagined several centuries ago, with spontaneous generation occurring everywhere and metaphysical intervention at many different levels, had more room for such an explanation.
For now (at least until you dig down into quantum mechanics, which I know nothing about), the universe appears to be a mechanical clock, with every event causally connected to a preceding event. Nothing, nothing is expected to happen without cause—this appears to be a very fundamental rule of our current paradigm of reality.
Simultaneously, I observe that I cannot even imagine how it could be possible for something to exist without cause. On the one hand, this might just reflect a limit in my intuition, and existence without cause might be possible. On the other hand, I will present an argument that an inability to imagine something, and indeed finding it illogical, is evidence that it is not possible. (Well, it’s a necessary but not sufficient condition.)
My argument is that any actual limits in this universe will be inherited by simulations within this universe, including the mental ones we use to draw intuition and logic. Like a shape in flatland finding it impossible to imagine escaping from a ring, we cannot imagine spontaneous creation if it is not possible. (This is the argument that an impossible thing cannot be simulated or imagined. Whether our inability to imagine something implies it is impossible depends upon how flexible our minds are; I think our minds are very flexible but QM may be the first piece of evidence that we can’t grasp some things that are possible.)
But if we lived in the universe imagined centuries ago, where entirely natural things like flies and light spontaneously appeared from their sources, then we would have a chance to study spontaneity and see how it works. If spontaneity was possible, we could imagine it and simulate it and learn about it. But if spontaneity cannot occur here, we can’t collect any information about it and it stands to reason it would be mysterious. This is exactly what our universe looks like.
Imagine we had a universe where something could come from nothing. Imagine we worked out how to find what happens at t+1, given t. This still wouldn’t be enough to know everything. We’d have to know what’s going on at some t less than ours (or greater, if we can just figure out t given t+1).
In other words, even a universe with spontaneity still has to have boundary conditions. “Nothing exists at t=0” is the most obvious boundary condition, and it’s probably the most likely one, but it’s not the only possible one. There’s no reason it has to be that one.
Incidentally, there’s no reason for the universe to begin at the boundary condition. The laws of how systems evolve give how past and future relate (or, more accurately, how the current system and the rate at which the current system changes relate). If you’re given what happens at t=0, you can calculate t=-1 just as easily as you can t=1. Intuitively, you’d say that t=0 caused t=1, and not the other way around. To the extent that this is correct, the laws of system evolution do not preclude spontaneity. They only preclude future and past events not matching.
I don’t yet follow.
Could you paraphrase your main thesis statement?
(I think I am having trouble considering the counterfactual, ‘imagine we had a universe where something could come from nothing’. Where should I start? Do somethings comes from nothing at any time t? Are there rules prescribing how things come from nothing?)
A simple example would be a pseudorandom number generator. For example, f(t) = f(t-1)^2 + 1. Thus, if f(0) = 0 (nothing at t=0), then f(1) = 1.
The only way to get out of boundary conditions is to define the whole universe in one step. For example, f(t) = t^3 + 3*t^2 + 1, in which case you wouldn’t have causality at all.
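To make the contrast concrete, here is a minimal Python sketch of the two toy universes above; the function names and the printed range are my own illustrative choices:

```python
# The two toy universes above, side by side. The causal one needs a
# boundary condition (f(0) = 0, "nothing at t = 0"); the closed-form
# one defines the whole history in a single step, so the state at t
# never depends on the state at t - 1.

def causal_universe(t, boundary=0):
    """f(t) = f(t-1)^2 + 1, anchored at f(0) = boundary."""
    f = boundary
    for _ in range(t):
        f = f * f + 1
    return f

def acausal_universe(t):
    """f(t) = t^3 + 3*t^2 + 1: defined directly, with no causality."""
    return t ** 3 + 3 * t ** 2 + 1

print([causal_universe(t) for t in range(5)])   # [0, 1, 2, 5, 26]
print([acausal_universe(t) for t in range(5)])  # [1, 5, 21, 55, 113]
```

In the first universe the whole future hangs off the boundary condition; in the second, no time step depends on any other.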
I can’t say I’ve ever met a theist who would recognize what you’ve outlined here and in your subsequent comments as a form of theism.
It seems to me that you’ve taken the language of theism and tweaked the definitions of all the words to be more reasonable. That’s all well and good, but just because you use the same vocabulary as a theist doesn’t mean you believe the same thing. It sounds to me like you’re practically an atheist but you use religious words to describe your beliefs because they are comfortable.
This reminds me of the last time someone tried to convince me of complementarianism (http://en.wikipedia.org/wiki/Complementarianism). They were a reasonable person: so reasonable, in fact, that by the time they finished explaining complementarianism, it was nothing like complementarianism any more… In that case, my impression was that this person would have been placed in a very awkward position if they espoused a non-fundamentalist view. So they kept the fundamentalist terms and made them reasonable.
I’m probably in the same boat, or a similar one, FWIW.
Maybe you could interpret it as heroic efforts to find middle ground, in order to bring theists on board to rationality.
(I now feel pressure to be the diplomat making promises, “If you just accept this version of theism, the theists will make concessions...”)
In my opinion, too many comments lately have incidentally discussed their authors’ votes; I think it distracts from the actual topic, and metadiscussion ought to go in separate comments.
Counterpoint: knowing why other people vote or don’t vote is helpful. Voting alone provides little information about what people did or did not like about a comment.
However, it does distract from the primary topic and some sort of side channel for such discussions would be nice.
What are some suggestions for approaching life rationally when you know that most of your behavior will be counter to your goals, that you’ll know this behavior is counter to your goals, and you DON’T know whether ending this division between what you want and what you do (i.e., forgetting about your goals and why what you’re doing is irrational, and just doing it) has a net harmful or helpful effect?
I’m referring to my anxiety disorder. My therapist recently told me something along the lines of, “But you have a very mild form of conversion disorder. Even though your whole body gets paralyzed, whereas you could function with just a hand paralyzed, most people with the disorder aren’t aware that it has a psychological cause, and they worry about it all the time, going to doctor after doctor to try to get a physical cure.” It doesn’t FEEL mild when I’ve been barely able to move for eight hours and finally get going enough to log onto the computer and waste time browsing online. Insight can be painful when you have so long to dwell on it.
My current thinking is that the best way to get what I want out of life is to get treatment, which I am doing, and to keep an optimistic view of my ability to be non-disabled. It’s gotten a lot better, but I still spend a considerable amount of time making very bad decisions, or having the anxiety make them for me.
So, I’m looking for some advice.
I seem to have finally reached that stage in my life where I find myself in need of an income. I’m not interested in a particularly large income; at the moment, I only want just enough to feed a Magic: the Gathering and video game habit, and maybe pay for medical insurance. Something like $8,000 a year, after taxes, would be more than enough, as long as I can continue to live in my parents’ house rent-free.
The usual method of getting an income is to get a full-time job. However, I don’t find that appealing, not one bit. I want to have lots of free time in which to use the things I buy with the money I would earn. I’d much rather just continue to spend down my savings than work more than two days a week at a normal job.
This suggests that instead, I should try to get a part-time job. Chances are, that would mean working in a local restaurant or store of some kind. Unfortunately, I tried one of these once before, and it didn’t work out very well. I was hired to be a cashier at a local supermarket. To my great surprise, I didn’t particularly mind the work, but on my third day after being hired, I was fired for insubordination. (I had a paperback novel with me, and I wouldn’t stop reading it during periods when there were no customers.) I’ve also tried working for a temp agency. That didn’t work out too well either. After completing my first assignment, I was told that the company I was contracted out to complained about my behavior (it’s a long story), and so I would not be considered for any other assignments. In effect, I was fired from there, too.
As far as I’m concerned, the ideal source of income would be something with no set hours, that I could leave and come back to as I please. In other words, if I decide that I’d rather play video games for a month instead of earning money, it won’t prevent me from earning money the month after that. Unfortunately, the only things I know of offhand that work like that are writing (which is extremely hard to make a living at, and requires a lot of time and effort anyway) and online poker (which I suck at). I’m lazy and undisciplined, and I’m not particularly interested in changing that, so I’m hoping to find a way to make money that works even if I don’t try very hard at it.
In terms of skills and education, I have a B.S. from Rutgers University in computer engineering. I can program, but when I’ve tried programming as a job (as a summer intern), it turned into a Dilbert cartoon very, very quickly. Basically, I was given vague instructions, left on my own to do whatever, and instead of working, I mostly sat and surfed the Web while feeling guilty about not working. I don’t think I want to do programming professionally. If I ever have to sit in another cubicle again, there’s a good chance I’m quitting on the spot.
So, um… I need some suggestions on what to do. Bring on the other-optimizing?
Serious question: why? If there was a pill you could take that would magically make you disciplined and hard working, would you turn it down? The pill wouldn’t make you unable to play computer games, or surf the web; it would just mean that if you said to yourself “for the next two hours I’m going to do X, without getting distracted by computer games or surfing the web” you would carry that intention out.
I tend to be lazy and undisciplined, but I also tend to find that even if your job doesn’t really do anyone any good in the large, working at work is more fun than slacking off. I’m increasingly coming to think that the rewards I get when I’m lazy and undisciplined aren’t up to much. What are the upsides, for you, of being lazy and undisciplined?
I’ve always thought of “discipline” as a bit of a rip-off. To me, “discipline” suggests “the willingness to do something unpleasant now, in exchange for a later reward.” The problem with this is that, even though you do get the reward, you’ve spent all that time doing something unpleasant, when you could have been doing something pleasant—such as playing video games—instead. It doesn’t seem like a good way to maximize “moments of pleasure” over the near future. Being lazy and undisciplined means I don’t go off chasing future rewards that turn out not to be worth the trouble.
My mom says that, as a young child, I had a “low frustration tolerance,” which might explain a lot. I suspect that “doing something I don’t feel like” feels worse to me than it does to most people, although I can’t prove this. In college, I once started to feel physically ill whenever I looked at my “Engineering Mechanics—Statics” textbook. There was something deep inside me, screaming, “This is awful! Avoid this!” whenever I was confronted with my homework. I only ever got work done when I became more afraid of not doing it than I was of doing it, if that makes any sense.
Not to play psychiatrist, but this sounds like a more likely explanation for your predicament than the hypothesis of contentment. If you could take a pill that would remove your anxiety when you faced the prospect of doing something that appears difficult or that you might be judged on, would you take that pill?
ETA: This is starting to remind me of Robin Hanson’s recent post.
You know, I just might. The “don’t get frustrated” pill seems more in line with my preferences than a “be willing to play hurt” pill. The last time I tried—well, “was pushed into” is more accurate than “tried”—filling out a job application, I got frustrated halfway through and stopped.
Incidentally, I’m a lot better at getting things done when I have someone to do those things with, but there is one big exception. I have a great deal of trouble at working alongside one of my parents. Nothing kills my intrinsic motivation to do something as effectively as one of my parents telling me I need to do it.
Another note: I’ve generally found that, when I “work hard” at something, I’m usually reasonably successful at it. By simply applying enough effort for a long enough period of time, I can brute force my way through many tasks that are really, really difficult, such as learning to play an extremely difficult song on the piano, beating the notoriously difficult Battletoads on the NES, or even just cramming for an exam by doing several months’ worth of suggested problems in the space of a week or two. The difference between what I think of myself capable of doing with enough effort and what I actually achieve contributes to thinking of myself as “lazy.” I have a strong preference for avoiding anything that feels like it takes some kind of an effort to do; in other words, something that feels frustrating. (Interestingly, difficult video games often don’t trigger this reaction. I like games that show me no mercy, that let me push myself to my limits and make even the little successes feel like an accomplishment.)
The only emotion that I’ve found that really motivates me to do things I don’t normally do is, oddly enough, anger. If I get sufficiently annoyed with a problem, I’ll go to absurd, ridiculous lengths to solve or fix the problem. A trivial example of this is the time I got annoyed at the dirt on the floor in my room sticking to my feet, so I went and got the broom to sweep it. A less trivial example concerns one of my courses at college. In that course, I had to “design” digital circuits using Verilog and an automatic hardware generator. I hated doing the work, would only get started reluctantly, and could never focus on it. This one time, however, the Verilog code worked just fine, but the hardware generator gave me a design that kept giving me errors. Instead of getting frustrated, I got angry. How dare this program not work! I ended up spending several hours in the computer lab making a furious, focused effort to understand what was going on and fix it. Which I did.
In the book “A Theory of Fun for Game Design” by Raph Koster (of possible special interest to a game nerd), he basically defines “fun” as “learning without pressure”. Learning, in this context, means improving skills and responding to a challenge where there is no extrinsic consequence for failure.
Your desire for a job you can “take or leave” on a day-to-day basis, and your anxiety about homework, fit well with (but are more extreme than, I think) my own experience. If I were to diagnose myself with something (which I am loath to do), it would be some type of anxiety disorder. (I have a friend with similar issues who was so diagnosed, medicated, and actually seems to be doing better, although it’s difficult to separate cause from effect here.)
See if you relate to the following anecdote: in grade 9 I entered a special school program which was kind of like correspondence (work through assignments at your own pace) except that it was held at a regular high school so that students could socialize, have progress monitored by and access to teachers, and take supervised written tests whenever we were ready. Sounds pretty great compared to normal classes? It was. But, my first year (grade 9) I got rather behind in my work, in more than one subject, and started getting concerned reports home. Even though the work I had to do was obviously within my capabilities, I found it very difficult to face. Eventually I had to bite the bullet and finish everything in one big cram at the end of the year, and I pulled OK grades, but I stressed out endlessly over what was really a trivial amount of work (which I recognized even at the time).
The following year (grade 10) I hit the ground running in September. By mid-October I had finished Math 10. I got similarly ahead in other subjects, and the further ahead I got, the easier it was for me to work more and more. (Only to a point, though: I also had a defiant self-image of rational laziness, so I didn’t want to do more than the minimum amount of work, even if I could do it faster/better.) So I never skipped a grade; I would just get ahead by a few weeks/months and then… yup, play Magic (the original Beta/Unlimited!) and basically fuck around with my friends, computer, porn, etc.
More recently, as a PhD student, I still encounter the same thing. When I’ve fallen behind on a project, often due to unrelated and mild doubts/laziness/underestimation, I become more and more unwilling to face work the farther behind I get. OTOH if a colleague comes to me with a problem which I am not “supposed to be” working on, I become immediately energized. Of course, I allow myself to work on side projects less and less the farther “behind” I am on the projects I am assigned to.
I have finally seen the pattern, maybe too late not to suffer serious damage in my “career”. It is largely this: I hate exposing myself to the possibility of public failure. For me, the “consequence” which makes learning/trying/failing/mastering “not fun” is simply having to admit that a) I want to get/achieve/do/win at X and b) I failed (in this instance) to get/achieve/do/win at X. When I am doing something optional, and where I am not expected to succeed (e.g. because it’s someone else’s problem and any contribution I make will be accepted with grateful surprise), I can be extremely goal-directed and work with intense focus. In the very short term, fear of missing a hard deadline (mainly in undergrad) can also make me work til the break of dawn with amazing concentration, much as you described anger doing for you.
I’m not suggesting that you have exactly the same anxieties that I do. But recognizing what it is that separates the activities you can focus and work on from those you can’t may lead to surprising revelations about yourself, and may even suggest ways to find a job that’s a good fit for your temperament.
Sorry if this was a bit rambling and self-indulgent.
This, too, makes a lot of sense.
That’s really interesting… I think I understand you better now. I think that, because of this recurring anxiety and frustration, you’ve felt for a long time that your options were:
achieve in the way others want you to, but hate every minute of it, or
restrict yourself to playing games and doing things that don’t cause anxiety or frustration for you.
As per the second pill example, I think this is a false dichotomy, but a universal one; people take their emotional reactions for granted, and don’t often imagine that it could be possible to feel differently about something that persistently troubles them. (Of course, it doesn’t seem possible to just feel differently by a direct act of will, which is all that most people ever think of to try.)
Given that you’d take the second pill, though, you can now imagine a third alternative:
become able to do some difficult and long-term (but rewarding) activities without automatically feeling this anxiety and frustration, thus giving you many more interesting options for how to spend your time.
If that sounds appealing to you (and of course it doesn’t mean you’ll have to end up doing what others want you to do; it just means you’ll be able to genuinely explore some new options), then it might be time to start carefully analyzing why you get these feelings, and whether there’s something you can do to change that...
Thank you for your help. I’ll have to let this stew in my subconscious for a while, then get back to you.
One thing I think I should look into in more detail is tutoring; I did a lot of that informally in high school and I was a teacher’s assistant of sorts for a math class during college. Does anyone here know anything about how to make money as a tutor? (I live within easy commuting distance to Rutgers University, so that might help.)
For CronoDAS or anyone else thinking of using poker as a handy side-income (or small main income), I have a few words of warning. First, the games have gotten tougher and will continue to do so.
Second, poker is a really demanding game psychologically. This is because the expected value of a competent player is so small compared to the inherent variance of the game. The expected value over, say, one hundred hands might be 1-2 units, whereas one standard deviation over a similar sample might be 20 units (or even significantly more). This in turn means that a good player might have periods of tens of thousands of hands where he loses money. Dealing with this sounds easy in theory, but is really hard in practice.
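To put rough numbers on those swings, here is a back-of-the-envelope sketch in Python using a normal approximation; the win rate (1.5 units per 100 hands, the midpoint of the 1-2 range above) and the 20-unit standard deviation come from the comment’s own figures, which are themselves only rough estimates:

```python
# Back-of-the-envelope check on the variance claim above, under a
# normal approximation. Assumed figures (from the comment): a win
# rate of 1.5 units and a standard deviation of 20 units per 100
# hands. These are rough estimates, not measured data.
from math import erf, sqrt

def p_still_losing(hands, ev_per_100=1.5, sd_per_100=20.0):
    """Probability that a winning player is down after `hands` hands."""
    blocks = hands / 100
    mean = ev_per_100 * blocks
    sd = sd_per_100 * sqrt(blocks)  # independent blocks: SD grows as sqrt(n)
    z = -mean / sd
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

for hands in (1000, 10000, 100000):
    print(f"{hands:>6} hands: {p_still_losing(hands):.1%} chance of being down")
```

On these assumptions, a genuinely winning player still has roughly a 23% chance of being behind after 10,000 hands, which is why the downswings feel so brutal.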
That said, it shouldn’t take more than a couple of months to become a winning player with today’s resources (books and video coaching sites) and to make a modest income with an effort of only about 20 hours a week.
By the way, another downside of the occupation is that it’s completely useless for humanity in general, and your income will come from people who shouldn’t be playing the game in the first place.
When I played poker with my brother and his friends, I didn’t think it was all that fun, and I didn’t win very much either. I don’t plan on going into online poker for real money any time soon.
Magic is my game. ;)
Could you play Magic professionally? What’s in the way? Just a matter of startup money?
Well, there are a few things. I’m good at Magic, but I don’t think I’m good enough to play professionally. I’ve never qualified for the Pro Tour. There seem to be lots of players that are better than I am, and you usually have to be world-class in order to make more than pocket change by playing in Magic tournaments. (In order to get better at Magic, the obvious next step for me to take is to try to seek out players in my area that already are world-class and learn from them.) Additionally, competitive Magic requires a continual investment in new cards; $1000 or more a year is quite possible, and travel costs and entry fees also eat up a large chunk of change.
The closest thing to online poker for Magic is, well, “Magic Online.” At one point, I was playing it and turning a profit, at least in terms of the MTGO event tickets. However, turning MTGO event tickets into cash is difficult, as eBay and PayPal fees eat up a distressingly large percentage of what you can make by selling them, and if someone tries to cheat you, there’s little recourse.
If you channel the income in the right direction, it won’t be useless.
I read jajvirta as saying that the occupation itself doesn’t produce positive externalities for mankind, unlike productive work in physics research or something.
It’s not only a lack of positive externalities, but the presence of negative externalities. Your gains are someone else’s losses.
You provide entertainment to people. Both players chose to play, so even if one player has a negative expectation in $, he might still enjoy playing the game.
Productive work in physics could produce negative externalities if humanity cannot be trusted with new physics results. Hell, even math education could produce negative externalities!
In Hunting Fish: A Cross-Country Search for America’s Worst Poker Players, Jay Greenspan conceives of the poker world as a giant inverted pyramid, with the fishiest (i.e., least skilled) players at the top pouring money down the pyramid toward the most skilled players at the bottom, such as Doyle Brunson and Phil Ivey.
Another thing: Can you go over this one more time:
Something like $8,000 a year, after taxes, would be more than enough, as long as I can continue to live in my parents’ house rent-free.
What made you decide you’re okay with living with your parents for the rest of your life? Did you really give up hope or something?
Well, for one, I like the house I live in, and, for the most part, my parents let me do what I want. I just don’t feel any particular need or desire to move out and, financially at least, I’m getting a great deal. Moving out would drive up my expenses enormously, because I’d no longer be able to use my parents’ stuff, including their HDTV, their internet connection, and all those other things. (Incidentally, I have a first cousin once removed who never moved out of his parents’ house. Unlike me, though, he does have a job.)
As for giving up hope, well, yeah, I basically gave up hope way back in 1997. I have a lot of trouble trying to imagine the kind of activity that I would find fulfilling and could realistically expect to get paid for. For the most part, I just try to get through life one day at a time, doing my best to anesthetize myself and not think about the future.
Crono, that’s a horrible, horrible state to be in, and in asking for advice, you’re asking completely the wrong question. For your own sanity, you need to find something you enjoy doing, not just something that can soften the pain for one more day.
I’ve been in your position before. In some respects, I still am. I thought I couldn’t get a job and any job I’d get I’d be unable to handle. I had no connections, but finally was able to find one in my field.
Maybe a standard day job isn’t right for you, but you need to look for something more ambitious than living with your parents, even if you enjoy the amenities. There are many things you can try. Just keep churning through them, or resign yourself to worsening sadness.
If you think you can do well at Intrade, I’ll loan you the money if you can put up your karma as collateral.
Well… I think I like playing Magic, or, at least, I like winning at Magic. (When I lose a lot, I have a tendency to take it pretty hard.) For some reason, video games start to become a lot less appealing when I don’t have some homework to put off. But, yeah, to paraphrase something I once heard about drug addiction, I don’t play video games to feel good, I play them in order to feel normal.
Let me put it this way:
If I won a huge lottery jackpot tomorrow and could easily afford to maintain my current lifestyle with no effort, independent of my parents’ financial support, I still probably wouldn’t move out, because I like living with my parents. What bothers me is that I’m dependent on them for financial support, so whenever they ask me to do something, there’s always an undercurrent of “if you make us angry enough, you’ll be out on the streets.” (It still beats working, though.)
There’s only one thing that I want that I can’t get by living at home, and that’s a cat. It might be a bit silly, but I feel as though if I had a cat, I wouldn’t have to be lonely or sad any more.
That sounds flagrantly inappropriate. If you are confident that CronoDAS trying his hand at Intrade would be a good risk, why don’t you just loan him the money and ask for interest or some percentage of what he makes? If you aren’t confident that he’d do well enough to pay you back, isn’t this just outright karma purchase?
Just a hedge against any akrasia that might pop up.
Replace “do well enough” with “make any effort at all”, then.
“make any effort at all” =/= “no akrasia”
If you expect that he’d make some effort, and be defeated by akrasia, then clearly, you are not confident that he would do well.
My point isn’t that, however. My point is that karma is inappropriate collateral, even if there were some easy way to move it from one person to another.
“Even if”? Are you serious?
The only thing inhibiting such a transfer is the very fact that those who consider it inappropriate would prevent it politically. Even then, if someone wants to make the transfer and is uninterested in said social judgements beyond their political implications, it would not exactly be hard to do subtly.
I wouldn’t think that I know more than anybody else about most of the topics on Intrade, although betting against cold fusion seems like a good idea.
Find odd programming jobs to do at home, like making websites for people or whatever. Get them at RentACoder or from people you know.
Well, you really, really need to change your entire outlook and work on the laziness.
But if you’re not going to do that: Have you tried betting in prediction markets like Intrade? If you’re good at noticing things that are “obviously” going to happen but aren’t correctly priced, or have enough money to afford to be right on average, that could work. It does require an initial investment though.
I’ve been on it since August and have played conservatively so I’ve only made about a 5% return. (Made small amounts on the Chrysler and GM bankruptcies.)
Intrade is an interesting suggestion, but I don’t think he could make enough on it. He wants 8000 USD a year, and even if we assume he can get 10%, he’ll still need 80k invested.
I don’t think he has 80k to spare, and I have to wonder—is 10% feasible in the long run? I could see getting it in an election year easily, because the markets are so volatile and heavily traded, but what about off-years?
Agreed. We should always be skeptical of an individual’s ability to beat the market.
Well, I should clarify that I think a smart bias-educated person can beat the prediction markets fairly easily—I doubled my (small) investment in the IEM just by exploiting some obvious biases in the last presidential election, and I know I’m not the smartest bear around. My doubt is whether he can beat the market enough: any sum of money CronoDAS has is likely small enough he would need really absurd returns.
Are there differences between prediction markets and regular markets that make it easier for a “smart bias-educated person” to win fairly easily?
If you think it’s fairly easy, then I’d be curious to know whether you’re putting your money where your mouth is… how much have you invested?
Besides what Nick said, people seem to treat prediction markets more as entertainment than as serious investments. For example, Ron Paul or Al Gore should never have broken 1%, and Hillary shares stayed high long after it became obvious she wasn’t going to make the nomination. These were all pretty clear to anyone suspicious of fanciful wouldn’t-it-be-fun? scenarios and of bias towards what one would like to happen.
I started in the IEM with ~$20, and even after taking some heavy losses in 2004 and whatever fee the IEM charged ($5?), I still cashed out $38 in 2008. If you’re interested in more details, see my http://www.gwern.net/Prediction%20markets
I appreciate your careful documentation. And I thought these words of yours were wise: “I often use them [prediction markets] to sanity-check myself by asking ‘If I disagree, what special knowledge do I have?’ Often I have none.”
Words are vague; let’s use numbers. Say you were forced to invest $1000 in the prediction markets over the next year. What probability would you assign to the various outcomes, e.g. [-100%,-50%], [-50%,-25%], [-25%,-10%], [-10%,0%], [0%,10%], [10%,25%], [25%,50%], [50%,100%], [100%,200%], and [200%,1000000%]?
One must be wary of faux precision. But I think I would put the odds of >100% or <-40% at under 30%; I’d assign another 10 or 20% to a gain between 30% and 100%, and leave the rest to the range of small losses/gains.
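One way to make that kind of answer concrete is to write the buckets down and check that the forecast is coherent. A minimal sketch in Python; the probabilities below are purely illustrative stand-ins, not the parent commenter’s actual estimates:

    # Hypothetical bucketed forecast of one-year returns on a $1000 stake.
    # Each entry is (low return, high return, probability); numbers are made up.
    buckets = [
        (-1.00, -0.40, 0.15),  # heavy loss
        (-0.40,  0.00, 0.25),  # small loss
        ( 0.00,  0.30, 0.25),  # small gain
        ( 0.30,  1.00, 0.20),  # solid gain
        ( 1.00,  2.00, 0.15),  # doubling or better
    ]

    total = sum(p for _, _, p in buckets)
    assert abs(total - 1.0) < 1e-9, "probabilities must sum to 1"

    # Crude expected return, treating each bucket as its midpoint.
    expected = sum(p * (lo + hi) / 2 for lo, hi, p in buckets)
    print(f"expected one-year return: {expected:+.1%} on $1000")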
The ten categories I suggested may be a bit excessive, but it would be much easier to judge if you were a little more precise. You acknowledge a non-trivial chance of losing a non-trivial amount of money. The confusion is that I thought your previous statement that a “smart bias-educated person can beat the prediction markets fairly easily” would preclude this.
There are arbitrage opportunities, but they’re not what I’m thinking of.
An analogy: knowing about biases and how to play optimally is important to play poker at any high level; but that still doesn’t mean you’re going to win every hand. I might correctly call an election for Obama, but that’s not going to help me as a trader if he abruptly dies of a heart-attack or Sarah Palin stages a coup with a crack unit of Alaskan hunters—I’ll still lose my money. I don’t see any contradiction here.
Yes. Prediction markets are far smaller, and have far less intelligence devoted to exploiting away their irrationalities.
Efficiency.
My question was about how much more efficient the stock market is, and why.
My answer to whether there are differences between prediction markets and ordinary markets was no, except insofar as ordinary markets that are currently active are far larger (noise cancellation), more heavily traded (more information from more experts is already represented), and have had longer for biases to be exploited and so corrected for.
Try babysitting.
Well, if we really wanted to other-optimize we’d try to change your outlook on life, but I’m sure you get a lot of such advice already.
One thing you could try is making websites to sell advertising and maybe Amazon clickthroughs. You would have to learn some new skills and have a little bit of discipline (and have some ideas about what might be popular). You could always start with the games you are interested in.
There’s plenty of information out there about doing this. It will take a while to build up the income, and you may not be motivated enough to learn what you need to do to succeed.
Do you program for fun?
No.
Unless that changes, then, I wouldn’t particularly recommend programming as a job. I quite like my programming job, but that’s because I like programming and I don’t work in a Dilbert cartoon.
You might want to look into setting up a business in Second Life. If you learn the programming language it uses, you can find work fairly easily writing custom code for people, and/or make various things to sell, and it’s all on your terms.
If you’re interested, and want help getting started, my screen name there is Adelene Dawner.
An eBay business? When you feel like making money, you can post some things for sale, and as soon as the bidding is over and you ship the item, your obligation is done. It would require some research to find out what makes enough money and what you wouldn’t mind making or finding. I know you can make money selling all kinds of personalized niche items. The materials needed to get started and try it out could also be quite modest. (To make $8K/year, you’d have to sell ~30 items a week, assuming a profit of $5 per item.)
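Checking that last figure (a two-line sketch; the $5 profit per item is the assumption above):

    target_per_year = 8000        # dollars
    profit_per_item = 5           # dollars, assumed
    items_per_week = target_per_year / (profit_per_item * 52)
    print(round(items_per_week))  # 31, i.e. roughly 30 items a week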
If you have any crafting skills, or if you can make food of some kind that’s fairly portable and doesn’t need refrigeration, and you have access to someplace where you can park with your wares and bother passersby, that might work. I once made about $30 sitting in a hallway at school for an afternoon selling muffins for a buck fifty each (I was on a muffin-baking spree, and my freezer was getting full), and my town has a fair number of street vendors in nice weather (I have bought things from them before). If the only problem with cashiering was that you weren’t supposed to read, this doesn’t seem like it would present any problems for you; who’s going to stop you?
Some places might require you to have a permit; I’m pretty sure the street vendors have to get one every morning from town hall. Nobody bothered me when I sold muffins and I didn’t have any kind of permission, though.
Wow! $30? For only an afternoon plus baking time?
quits day job
ETA: Okay, that was too snarky, even for me. Crono only wants to make $8000/year, and that’s good enough for that goal. So, good suggestion.
Well, an afternoon, plus baking time for six basic muffins and variations, plus cooking time for the applesauce that went into one batch of muffins, plus the cost of all the ingredients, plus the time it took to write up little flavor labels for each muffin and individually wrap them in saran wrap… And transit time by bus to and from school… I baked the muffins for fun, though, and only decided to sell them when I did not have room to store them and wasn’t eating them fast enough.
I mean, I’m not knocking it as a way to spend time, or I wouldn’t have suggested it, but I’m not still doing it. I got thirty bucks, spent it on a used camera and a necklace, and called it good. And I had my laptop open the entire time and did exciting things like read Less Wrong, which is more or less what I would have been doing if I’d stayed home to goof off instead of selling muffins.
In the same vein, Etsy is a place to do that online (not so much with the food, though).
What are your thoughts on the recent “Etsy considered harmful” article?
It doesn’t seem like she has a good grasp on what people are doing with Etsy and what it’s about. If you want to make a ‘profitable’ business, you’re already looking in the wrong place on Etsy. But if your time isn’t worth much and you want to sell some crafts, it seems to work fine.
No detailed suggestions, but one thing that comes out very strongly from what you wrote is that you don’t want a job and a job doesn’t want you.
This is not necessarily a bad thing.
Steve Pavlina wrote about why getting a job is a really bad idea; as for what to do instead, to make your way in the world, some of his other stuff may be of interest. (His other stuff also includes some things I think are woo, so don’t take this as a pointer to a pure fount of wisdom.)
I second the suggestions made by others to look for freelance computing work. It sounds ideal for your situation, if you can learn to take orders from yourself, which it sounds like you won’t from other people.
I’m only passingly familiar with Pavlina. Would you say the same thing about the advice of Tim Ferriss?
Ok, someone tell me what the fuck this woo shit is!
Edit: Ok, pardon my language. That rules out my two first hypotheses. Anyone?
woo
Nice. Now I have a swear word that means something actually bad as opposed to taboo for doing in public.
I always thought “taboo for doing in public” was what the swear word meant.
A retail job other than the supermarket might be interesting. Alternately, take a notepad instead of a novel and doodle/write instead of read when there are no customers.
I don’t know if your BS in computer engineering includes aspects of computer work other than programming, and I don’t know if people hire for configuration management/process control or reliability testing right away. If the answer is yes to both, then those jobs are much more structured than “make the computer do this, ’kay, bye.” I’ve never had a programming job where I didn’t have to report to CM/process often enough that I felt I could get away with slacking. Lots of itty bitty crunch times.
Freelance programming possibly?
Also, if you attend a lot of big Magic tournaments, it is pretty easy to make some money with smart trading and selling on eBay. Just pay attention to eBay values for cards. Also keep track of differing values of cards in different geographic areas.
You can easily do that with a business, if you set it up correctly, and you are willing to spend money to make money. More to the point, though, you’d need to actually want to have a business, a bit more badly than you appear to want a job. ;-)
I’ve always heard that having a successful business is usually an awful lot of work, even more than being an employee. At least, that’s what my father says, and he’s almost always right.
Setting one up is. Having one isn’t necessarily.
So why aren’t you asking his advice? ;-)
I said almost, didn’t I?
It’s a bit of a cliche for children of a certain age to say that their parents don’t understand them when, in fact, they understand them perfectly well, but my father has admitted to me that he doesn’t understand my feelings and behavior, so I’m not going to him for advice on how to live my life.
And you expect complete strangers to do better? I’m not sure that’s rational.
Conversely, if you’ve adequately constrained the problem for us, surely you can adequately constrain it for him?
That’s… a pretty good point, actually.
At least there’s more of you, though; you might suggest something I haven’t thought about before.
Perhaps it is possible that your parents “don’t understand you” but still internally expect to, and so do worse than someone who doesn’t know you at all, or who knows you only from recent experience.
Is there a way to undelete posts?
That might seem a weird question—just submit it again—but it turns out that “deleting” a post doesn’t actually delete it. The post just moves to a netherworld where people can view it, link to it, discuss it in the comments etc. but: a) it doesn’t show in the sidebar, b) it doesn’t show in the user’s submitted page, c) it says “deleted” where the poster’s username should be. Editing and saving doesn’t help.
This calamity has just befallen a post of mine that I submitted by mistake, then killed, but people (presumably) saw it in their feeds and started commenting away. Vladimir_Nesov suggests that it’s an okay post and should be resurrected, but I lack the power.
If I’m causing trouble, then sorry for causing trouble.
Suppose you found yourself suddenly diagnosed with a progressive, fatal neurological disease. You have only a few years to live, possibly only a few months of good health. Do the insights discussed here offer any unique perspectives on what actions would be reasonable and appropriate?
...sign up for cryonics?
Except you presumably won’t be able to get life insurance.
Okay, sign up now.
If the sudden addition of an apparent deadline to your life changes the game completely, isn’t it likely you’ve been playing the game wrong?
You always knew about death.
Your probability estimates about how many years of health you’ll have have changed considerably, so you wouldn’t expect to continue with the exact same behavior.
For instance, if you’ve been working on something that would take you several more years of good health to accomplish, you might want to spend a month finding someone to carry it on for you who’s similarly motivated and making it easier for them to carry it on.
Or you might decide that you don’t care about that long-term goal enough to justify the time and effort it would take away from other things that are more important for you to do in your life: things you would have spread out over a longer timespan, alongside a number of less important goals or ones only achievable with more time to work on them, if you were going to live longer.
You might also realize that the things you want are considerably different from the ones you thought you wanted. Maybe that was previously “playing the game wrong”, but I can’t see how a human could rule out the possibility of a change in their outlook, values, and expectations after getting such news. That change may affect basic motivations, and it may shift attention away from old lines of thinking, which they may have tried to make very rational, toward ones they have been neglecting (and I seriously doubt anyone lacks these): shifts in where they reason and rationalize.
/shrugs
One question that arises is a fundamental issue of motivation. Is it rational, for example, to have a list of “things to do before I die”? Especially if you believe that it is likely that you will not remember whether you did them or not, after you die? If you find out you’re going to die in a couple of years, does it make sense to try to cram as many items from your list as possible in that limited time? What would be the point? Indeed, what is the point of any action?
Ultimately, what is the source of our motivation, if we know that after we die we won’t remember what happened? It’s one thing when death is off in a nebulous future, but when it is relatively soon and immediate, there is going to be little or no time to enjoy an accomplishment.
It seems reminiscent of the difference between the iterated and one-shot prisoner’s dilemma. A long and somewhat indefinite life span is like the iterated PD, in that we expect to experience a wide range of effects and impacts from our actions. A short and more definite life span is like the one-shot PD, with only limited and short-term effects. Perhaps another way to think of it is that our normal actions affect our future selves, while with terminal illness, there are no future selves to worry about.
It is rational to have a list of things to do before you die if you have preferences over configurations of external reality outside the small part of external reality that causes your internal experiences.
Right, that makes sense, but most things I’ve seen on such lists are more focused on personal experiences that would be enjoyable and/or challenging. The first Google hit I got was http://brass612.tripod.com/cgi-bin/things.html and it has the typical things: skydive, travel, eat rare foods, have adventures. Some of them are focused on other people or leaving the world a better (or at least different) place but most of them seem to be for the purpose of giving yourself happy memories.
Is doing this irrational? Or at least, would it be irrational to pursue such activities if you knew that you weren’t going to live long afterward?
Turning it around, suppose there were an adventure which would be unique and exciting, but also fatal? Consider skydiving without a parachute, perhaps into a scenic wilderness. Clearly you won’t remember the experience afterwards, you’ll have only those few minutes. Should the discovery of a shortened lifespan make this kind of adventure more attractive?
Haha, if you knew you were going to die without recovering enough health to do anything else of value, and would only drain your family’s bank accounts and emotions, along with hospital resources, while hooked up to machines, then that kind of adventure SHOULD be more attractive.
I think you’re underestimating the value of an experience as you live it. I would think that the value of a happy memory is only a small fraction of the value of a good experience, and that a lot of the value of the memory is in directing you to seek out further good experiences and to believe in your own ability to engage in activities with good outcomes. But those benefits only pay off because, while you keep the happy memories in mind, you go on to have further positive experiences.
Just because you don’t remember something doesn’t mean it disappears. It’s still there—just at a certain position in time. You seem to be thinking, “Well, I can’t remember this now, I can’t remember the happiness, therefore the happiness I experienced doesn’t exist.” But remember there won’t be any you to forget how good skydiving to your death felt in retrospect, and there WILL be a you at the time of diving to feel gloriously good—as opposed to the you who could feel miserably bad over a protracted deathbed.
But I would think the most important things to do would involve loved ones—either providing for them after you’re gone or bonding as much as you can with them while you’re still here. Which may make things more painful, but at least you’ll know you had an impact on the world and could convey your ideas and values—which most of us consider an essential part of ourselves. Other priorities for extending your influence might include writing memoirs or giving and recording a talk. You might also have something you need to do—like go see something for yourself—so you can HAVE an idea or position to record and influence others with after your life.
And certainly things like saving for your retirement would become unimportant, so your overall priorities would shift.
[Edited the sentence that starts “But remember there won’t be any you...”]
Anders Sandberg—Swine Flu, Black Swans, and Geneva-eating Dragons (video/youtube)
Anders Sandberg on what statistics tells us we should (not) be worried about. Catastrophic risks, etc.
Sandberg’s post on his blog about the talk.
An interesting book is out: Information, Physics and Computation by Andrea Montanari and Marc Mézard. See this blog post for more detail.
I like this book and have already got myself lost in it. The title is confusing; they should’ve called it “Phasetransitionology” like the awesome Generatingfunctionology.
Could someone answer my question in the Where Physics Meets Experience thread, please?
Thanks.
Sorry, I sort of asked this question in a thread here, but I’m interested enough in answers that I’m going to ask it again.
Does it seem like a good idea for the long-term future of humanity for me to become a math teacher or producer of educational math software? Will having a generation of better math and science people be good or bad for humanity on net?
If I included a bit about existential risks in my lecturing/math software would that cause people to take them more seriously or less seriously?
In the unlikely event that you end up significantly improving the amount of mathematical expertise in humanity, you should be very pleased with yourself.
It’s definitely not a bad cause. You should do it if it’s something that would engage and satisfy you. If you turn out not to be suited for it, no harm done; find something else you’re good at.
So you’re not much afraid that people will develop artificial general intelligence before figuring out how to make it friendly?
It’s fine to include some low-probability catastrophe risk management in your overall planning. But are you considering all the possible catastrophes, or just one particular route to unfriendly AI (one unlocked by your marginal recruitment of mathematically capable tinkerers)?
Wouldn’t furthering our mathematical and technological prowess as soon as possible mitigate many catastrophes? See the movie Armageddon, for instance :)
Maybe general AI is inevitable even at current computing power, so long as a small, persistent cult keeps at it for a few hundred years. If so, I think having more mathematical facility gives a better chance of managing the result.
Real general AI (the all-of-a-sudden kind, self-optimizing with increasing speed, with limits way above human) is 99.999% certain not to be implemented in the next 10 years, at least. The only reason I consider it so likely (and don’t feel comfortable predicting, say, 50 years forward) is the possibility of apparent limits in computing hardware being demolished by some unforeseen breakthrough.
When I read this paper, the risks seem to be on balance increased rather than decreased by greater human intelligence.
The median LWer’s guess for when the singularity will occur is 2067.
Improving math education is a problem I’d really like to work on but it seems likely to be harmful unless I can include an effective anti-existential-risk disclaimer. Even if I’m guaranteed to be relatively unsuccessful, I don’t want a big part of my life’s work to be devoted to marginally increasing the probability that something really bad will happen.
I skimmed the paper. It’s interesting. Thanks.
I still don’t think you should curtail your math instruction, even if you do have a large impact on the course of humanity, in that millions of people end up more capable in math. I think you’d increase our resiliency against existential hazards, if anything.
But you’re welcome to evangelize awareness of X on the side. I would have liked to hear my math teachers raise the topic—it’s gripping stuff.
What’s a good procedure for determining whether or not to vote up a comment?
In general, I try to upvote if I think the author made a good new point in the discussion (or made an old point in a better way). I also vote up humorous comments if I find them funny and if they don’t detract from the surrounding conversation.
I try to reserve downvotes for occasions where the author is not just espousing a conclusion that I think wrong, but when they are making a rationalist mistake in the particular comment:
when I think they’re ignoring or misunderstanding a valid objection, or completely missing a particularly obvious objection
when I think their bad writing style obfuscates rather than clarifies their content
when I think they’re behaving badly towards others or established LW norms of conduct.
On the subject, newcomers should be aware that there’s some karma-based limit on how many downvotes you can make (to prevent trolls from mass-downvoting everyone they disagree with, etc), but I think it’s rare to hit that limit.
EDIT: By the way, welcome to Less Wrong! Check out the welcome thread if you haven’t already. (One point it doesn’t make: unlike most blogs, you can comment on older posts and still get a conversation, because many of us regularly follow the comments feed.)
If you think it’s more worth reading than the average in that thread, vote it up. If you think it’s less worth reading than the average in the thread, vote it down. If you want to spare people’s feelings, vote down less often than these instructions suggest.
There are many. For a collection of data points on how people tend to do it, look at this post.
A terribly trivial first post, but as an anchor it’ll do: is there a way to change the timezone in which timestamps are displayed? I’d also prefer the YYYY-MM-DD HH:MM:SS 24-hour format over the current one, but it doesn’t really matter all that much. (If the timezone turns out to match up with BST here, then forget that, I guess.)
Edit: UTC, it seems. I can live with that.
A long chain of reasoning leads me to conclude that the UFAI problem would be completely averted if this question were answered—to use the vernacular, I feel like that’s the case.
But seriously. Whenever we think the thought “I want to think about apples”, we then go on to think about apples. How the heck does that work? What is the proximate cause of our control over our thoughts?
I’d really like to see your long chain of reasoning.
Wanting to think of apples already constitutes thinking about apples.
Yes, but not wanting to think about apples also constitutes thinking about apples. However, if I do want to think about apples, I’m going to think about apples more than if I didn’t want to think about apples. Perhaps a better example: if I think to myself, “I want to calculate 33 + 28”, I will. Something’s going on here; do you not agree?
That’s not entirely true: look up Wegner, “Paradoxical Effects of Thought Suppression”, 1987.
If my own experience is typical, people don’t usually think “I want to think about apples” unless it’s part of a thought experiment or something. A behaviorist model might work here: you get a stimulus, something that activates your brain’s concept of apples. It may be a sense impression, like seeing an apple, or it may be a thought; for example, a long train of thought about gravity and Isaac Newton eventually gets, by spreading activation, to “apples”. This stimulus gets processed by various cognitive layers in various ways that are interpreted by your conscious mind as “thinking about apples.”
If you want to not think about apples for some odd reason, the natural tendency is for this to activate your apple concept and cause you to think about apples. If you’re smart, though, you’ll try to distract yourself by thinking about oranges or something, and since your conscious brain can only think about one thing at a time, this will probably work.
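For what it’s worth, the spreading-activation part of that story is easy to caricature in code. A toy sketch in Python, with a hand-made concept graph and made-up weights (nothing like a real cognitive model):

    # Edge weights say how strongly one concept activates its neighbors.
    graph = {
        "gravity": {"Isaac Newton": 0.8},
        "Isaac Newton": {"apples": 0.6, "calculus": 0.5},
        "apples": {"oranges": 0.3},
    }

    def spread(source, steps=3, decay=0.5):
        activation = {source: 1.0}
        for _ in range(steps):
            updated = dict(activation)
            for node, level in activation.items():
                for neighbor, weight in graph.get(node, {}).items():
                    updated[neighbor] = max(updated.get(neighbor, 0.0),
                                            level * weight * decay)
            activation = updated
        return activation

    # A train of thought starting at "gravity" ends up weakly activating
    # "apples" (and, from there, "oranges").
    print(spread("gravity"))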
The breakthrough for me was realizing that “I think about apples” is more a peculiarity of the English language than a good reflection of what is happening—about as useful as “I choose to produce adrenaline in response to stress”. It suggests that there’s someone named me with a flashlight illuminating certain thoughts at certain times because I feel like it. I find it less wrong (though still a little wrong) to imagine the thoughts percolating up of their own accord, and me as a spectator. This might make more sense if you meditate.
What do you guys think of the Omega Point? Perhaps more importantly, what do you think of Tipler’s claim that we’ve known the correct quantum gravity theory since 1962?
We don’t.
By that, do you mean “it’s not worth a second look”, “that’s not relevant to Less Wrong”, “I haven’t heard of it”, or something else I haven’t thought of?
It’s not worth a second look.
My previous attempt at asking this question failed in a manner that confuses me greatly, so I’m going to attempt to repair the question.
Suppose I’m taking a math test. I see that one of the questions is “Find the derivative of 1/cos(x^2).” I conclude that I should find the derivative of 1/cos(x^2). I then go on to actually do so. What is it that causes me (specifically, the proximate cause, not the ultimate) to go from concluding that I should do something to attempting to do it?
I think you are asking the question that is a major theme of Hofstadter’s book Gödel, Escher, Bach. To be more specific, he raises the question humorously on page 461, in the Birthday Cantatatata…, to motivate Chapter XV: Jumping out of the System.
He returns to the question in Chapter XX, where page 685 offers a quotable answer.
Another way to look at the problem is to ask what kind of life experiences would give you the anchors in reality to dissolve the question. What works for me is understanding how computers work, from the gate level up to interpreters for high-level languages. How does (eval '(eval '(+ 2 2))) go from concluding it should evaluate (+ 2 2) to attempting to do it?
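Python’s eval makes a handy miniature of the same chain, for anyone who wants to poke at it (same idea as the Lisp expression above, different language):

    # The outer eval never "concludes" anything over and above acting:
    # evaluating its argument just is running the inner eval, which in
    # turn just is computing 2 + 2. Each level bottoms out in the
    # interpreter's machinery doing the work.
    print(eval("eval('2 + 2')"))  # 4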
In what sense can you be said to conclude this? When I took tests, my mind went straight from reading questions to trying to answer them without stopping to consciously conclude anything. At no point was my attention fixed on what I should do; it was fixed on doing.
What kind of answer do you expect? For example, the obvious answer is “the algorithm implemented in your mind causes that to happen”.
It is an interesting mental exercise, when you are about to do something but have not yet begun it, to try to introspectively perceive the moment of decision. I find it’s like trying to see the back of my own head.
That’s a good question, judging by the number and variety of replies.
I’d suggest that, in a way, things go the other way around. Rather than your concluding you should do something causing you to do it, I think you are (already) aiming to do something, and that drives you to figure out what you should do. The urge to do causes figuring out what to do, rather than the figuring causing the doing.
But that’s a little over-simplified, as discovered by people trying to program robots that interact with the world. Deciding what to do at any given moment is distinctly non-trivial.
At the risk of providing a non-answer I’ll say: Operant conditioning.
The test problem, the solving of it, and getting an answer correspond to a light coming on, pressing a lever, and getting food.
We’ve long since been trained that solving problems in that context build up token points that will pay out later in praise and promises of money.
Presumably this training translates fairly well to real world problems.
Indeed, that’s the conclusion I came to. What I wonder now is how we operant-condition ourselves without just reinforcing reinforcement itself. Which, I suppose, is more or less precisely what the Friendly AI problem is.
The Perceptual Control Theory crowd here (pjeby, RichardKennaway, Kaj) will probably respond with some kind of black-box control-systems model.
I don’t have a complete answer, but I can tell you what form it takes.
The quantum states in your body become entangled with a new Everett branch, branches being weighted by the Born probabilities. This is what your choice to find the derivative (or not) feels like. These new, random values get filtered through the rest of your architecture into coherent action, as opposed to the seizure you would have if this randomness were not somehow filtered.
I know, not much at the nuts-and-bolts level, but I hope that provides a good sketch.
In a deterministic classical universe, all can be the same for minds and beliefs and decisions as it is in our world. Any good argument should generalize there.
“Entanglement” is the black box there, and PCT, as set out in the materials I’ve linked to in past posts, is the general form the real answer will take.
The more general answer, but too general to be of practical use, is the one that several people have given already. At some point the hardware bottoms out in doing the task instead of thinking about it.