Although I appreciate the parallel, and am skeptical of both, the mental paths that lead to those somewhat related ideas are seriously dissimilar.
Question about metaethics
I have a question, but I try to be careful about the virtue of silence. So I’ll try to ask my question as a link:
http://www.theverge.com/2016/6/2/11837874/elon-musk-says-odds-living-in-simulation
Also, these ideas are still weird enough to win against his level of status, as I think the comments here show:
Could you expand on this?
...there are reasons why a capitalist economy works and a command economy doesn’t. These reasons are relevant to evaluating whether a basic income is a good idea.
Sorry, “fine” was way stronger than what I actually think. It just makes it better than the (possibly straw) alternative I mentioned.
No. Thanks for making me notice how relevant that could be.
I see that I haven’t even thought through the basics of the problem. “Power over” is felt whenever scarcity leads the wealthier to take precedence. Okay, so to try to generalise a little: I’ve never really been hit by the scarcity that exists, because my desires are (for one reason or another) adjusted to my means.
I could be a lot wealthier yet have cravings I can’t afford, or be poorer and still content. But if what I wanted kept hitting a wealth ceiling (a specific type, one due to scarcity, such that increasing my wealth and everyone else’s in proportion wouldn’t help), I’d start caring about relative wealth really fast.
I see it as a question of preference, so I know my own answer from never having felt envy (or anything like it) toward someone richer than me just for being richer. I only feel interested in my wealth relative to what I need or want to purchase.
As noted in the comment thread I linked, I could start caring if someone’s relative wealth gave them power over me, but I haven’t been in that situation so far (something like boarding priority for first-class tickets is a minor example I have experienced, but it has never bothered me).
Responding to a point about the rise of absolute wealth since 1916, this article makes (not very well) a point about the importance of relative wealth.
Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.
I’ve had a short discussion about this earlier, and find it very interesting.
In particular, I sincerely do not care about my relative wealth. I used to think that was universal, then found out I was wrong. But is it typical? To me it has profound implications about what kind of economic world we should strive for—if most folks are like me, the current system is fine. If they are like some people I have met, a flatter real wealth distribution, even at the price of a much, much lower mean, could be preferable.
I’m interested in any thoughts you all might have on the topic :)
...people have already set up their fallback arguments once the soldier of ‘...’ has been knocked down.
Is this really good phrasing, or did you manage to naturally think that way? If you do it automatically, I would like to do it too.
It often takes me a long time to recognize an argument war. Until that moment, I’m confused as to how anyone could be unfazed by new information X w.r.t. some topic. How do you detect you’re not having a discussion but are walking on a battlefield?
I think practitioners of ML should be more wary of their tools. I’m not saying ML is a fast track to strong AI, just that we don’t know if it is. Several ML people voiced reassurances recently, but I would have expected them to do that even if it was possible to detect danger at this point. So I think someone should find a way to make the field more careful.
I don’t think that someone should be MIRI though; status differences are too high, they are not insiders, etc. My best bet would be a prominent ML researcher starting to speak up and giving detailed, plausible hypotheticals in public (I mean near-future hypotheticals where some error creates a lot of trouble for everyone).
I meant it in the sense you understood first. I don’t know what to make of the other interpretation. If a concept is well-defined, the question “Does X match the concept?” is clear. Of course it may be hard to answer.
But suppose you only have a vague understanding of ancestry. Actually, you’ve only recently coined the word “ancestor” to point at some blob of thought in your head. You think there’s a useful idea there, but the best you can do for now is: “someone who relates to me in a way similar to how my dad and my grandmother relate to me”. You go around telling people about this, and someone responds “yes, this is the brute fact from which the conundrum of ancestry starts”. Another tells you that you ought to stop using that word if you don’t know what the referent is. Then they go on to say your definition is fine: it doesn’t matter if you don’t know how someone comes to be an ancestor, you can still talk about an ancestor and make sense. You have not gone through all the tribe’s initiation rituals yet, so you don’t know how you relate to grey wolves. Maybe they’re your ancestors, maybe not. But the other says: “At least you know what you mean when you claim they are or are not your ancestors.”
Then your little sister drops by and says: “Is this rock one of your ancestors?” No, certainly not. “OK, didn’t think so. Am I one of your ancestors?” You think it over for a minute and say no. “Why? We’re really close family. It’s very similar to how dad or grandma relate to you.” Well, you didn’t include it in your original definition, but someone younger than you can definitely not be your ancestor. It’s not that kind of “similar”. A bit of time and a good number of family members later, you have a better definition. Your first definition was just two examples, something about “relating”, and the word “similar” thrown in to mean “and everyone else who is also an ancestor.” But similar in what way?
Now the word means “the smallest set such that your parents are in it, and any parent of an ancestor is an ancestor”...”union the elders of the tribe, dead or alive, and a couple of noble animal species.” Maybe a few generations later you’ll drop the second term of the definition and start talking about genes, whatever.
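Just to make the shape of that final definition concrete, here is a tiny sketch of my own (the parents mapping is made up for illustration, and I’m ignoring the tribe-elders clause):

```python
def ancestors(person, parents):
    """Smallest set containing person's parents and closed under 'parent of an ancestor'."""
    result = set()
    frontier = list(parents.get(person, ()))
    while frontier:
        p = frontier.pop()
        if p not in result:
            result.add(p)
            # Any parent of an ancestor is also an ancestor.
            frontier.extend(parents.get(p, ()))
    return result
```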
My “fuzziest starting point” was really fuzzy, and not a good definition. It was one example, something about being able to “experience” stuff, and the word “similar” thrown in to mean “and everyone else who is conscious.” I may (kind of) know what I mean when I say a rock is not conscious, since it doesn’t experience anything, but what do I mean exactly when I say that a dog isn’t conscious?
I don’t think I know what I mean when I say that, but I think it can help to keep using the word.
P.S. The final answer could be as in the ancestor story, a definition which closely matches the initial intuition. It could also be something really weird where you realize you were just confused and stop using the word. I mean, the life force of vitalism was probably a brute fact for a long time.
As an instance of the limits of replacing words with their definitions to clarify debates, this looks like an important conversation.
The fuzziest starting point for “consciousness” is “something similar to what I experience when I consider my own mind”. But this doesn’t help much. Someone can still claim “So rocks probably have consciousness!”, and another can respond “Certainly not, but brains grown in labs likely do!”. Arguing from physical similarity, etc. just relies on the other person sharing your intuitions.
For some concepts, we disagree on definitions because we don’t actually know what those concepts refer to (this doesn’t include concepts like “art”, etc.). I’m not sure of the best way to talk about whether an entity falls under such a concept. Are there existing articles/discussions about that?
Straussian thinking seems like a deep well full of status moves!
Level 0 - Laugh at the conspiracy-like idea. Shows you are in the pack.
Level 1 - As Strauss does, explain it / present instances of it. Shows you are the guru.
Level 2 - Like Thiel, hint at it while playing the Straussian game. Shows you are an initiate.
Level 3 - Criticize it for failing too often (bad thinking attractor, ideas that are hard to check and deploy usual rationality tools on). Shows you see through the phyg’s distortion field.
You probably already agreed with “Ghosts in the Machine” before reading it, since obviously a program executes exactly its code, even in the context of AI. Also obviously, the program can still appear to not do what it’s supposed to, if “supposed” is taken to mean the programmer’s intent.
These statements don’t ignore machine learning; they imply that we should not try to build an FAI using current machine learning techniques. You’re right, we understand (program + parameters learned from dataset) even less than (program). So while the outside view might say: “current machine learning techniques are very powerful, so they are likely to be used for FAI,” that piece of inside view says: “actually, they aren’t. Or at least they shouldn’t.” (“learn” has a precise operational meaning here, so this is unrelated to whether an FAI should “learn” in some other sense of the word).
Again, the fact that a development has been successful or promising in some field doesn’t mean it will be as successful for FAI, so imitation of the human brain isn’t necessarily good here. Reasoning by analogy and thinking about evolution is also unlikely to help; nature may have given us “goals”, but they are not goals in the same sense as: “The goal of this function is to add 2 to its input,” or “The goal of this program is to play chess well,” or “The goal of this FAI is to maximize human utility.”
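To make the code-versus-intent point concrete, here is a toy sketch of my own (not from the article); the function name and the “add 2” goal are just illustrations:

```python
def add_two(x):
    """Intended goal: return x + 2."""
    return x + 3  # the bug: what the code actually does is add 3

# add_two(5) returns 8, not 7. The program executes exactly its code;
# it only "fails" relative to an intent that exists nowhere in the program.
```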
Congratulations!
Thank you!
I have met people who explicitly say they prefer a lower gap between themselves and the better-off over a higher absolute level for themselves. IIRC they were more concerned about ‘fairness’ than about what the powerful might do to them. They also believed that most people would agree with them (I believe the opposite).
Gentzen’s Cut Elimination Theorem for Non-Logicians
Knowledge and Value, Tulane Studies in Philosophy, Volume 21, 1972, pp. 115–126
Being in a situation somewhat similar to yours, I’ve been worrying that my lowered expectations about others’ level of agency (together with elevated expectations as to what constitutes a “good” level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me; for instance, I’d generally be more prone to take initiative if I saw trust in my peers’ eyes.
An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees.
For a simpler case, 2-3 trees (i.e. B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values in their sorted position, expand 2-nodes into 3-nodes if necessary, and bubble extra values up when a 3-node would need to be expanded. This keeps the tree balanced.
A 2-3-4 tree just generalises the above.
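For concreteness, here is the node shape I have in mind, as a minimal Python sketch (the class and field names are made up for illustration):

```python
class Node234:
    """A 2-, 3-, or 4-node: k values in sorted order and, if internal, k + 1 subtrees."""
    def __init__(self, values, children=None):
        assert 1 <= len(values) <= 3                  # 2-node, 3-node, or 4-node
        self.values = sorted(values)
        self.children = children or []                # empty list for a leaf
        assert not self.children or len(self.children) == len(self.values) + 1
```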
Now the intuition is that red means “I am part of a bigger node.” That is, red nodes represent the values contained in some higher black node. If the black node represents a 2-node, it has no red children. If it represents a 3-node, it has one red child, and if it represents a 4-node, it has 2 red children.
In this context, the “rules” of red-black trees make complete sense. For instance, we only count black nodes when comparing branch heights because those are what represent the actual 2-3-4 nodes. I’m sure that with a bit of work, it’s possible to make complete sense of the insertion/deletion rules through the B-tree lens, but I haven’t done it.
edit: I went through the insertion rules and they do make complete sense if you think about a B-tree while you read them.
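As a rough sketch of the decoding (my own illustration; key, color, left and right are assumed field names of a hypothetical red-black node, not any particular library’s API):

```python
def is_red(node):
    return node is not None and node.color == "red"

def as_234_node(black_root):
    """Collapse a black node and its direct red children into (values, subtrees)."""
    values, subtrees = [], []

    def walk(node, is_member):
        if not is_member:
            # A nil or black child is the root of the next 2-3-4 node down.
            subtrees.append(node)
            return
        # Only direct red children of the black root belong to this 2-3-4 node.
        walk(node.left, node is black_root and is_red(node.left))
        values.append(node.key)
        walk(node.right, node is black_root and is_red(node.right))

    walk(black_root, True)
    return values, subtrees  # 1-3 values and len(values) + 1 subtrees, in key order
```

Read this way, “count only black nodes on every path” is just “all leaves of the B-tree sit at the same depth”, which is the usual B-tree balance invariant.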