The fact that you so naturally used the word “version” here (it was essentially invisible, it didn’t feel like a terminology choice at all) suggests that “version” would be a good term to use instead of “lens”. Downside being that it’s a sufficiently common word that it doesn’t sound like a Term of Art.
justinpombrio
Let’s apply some data to this!
I’ve been in two high-stakes bad-vibe situations. (In one of them, someone else initially got the bad vibes, but I know enough details to comment on it.) In both cases, asking around would have revealed the issue. However, in both cases the people who knew the problematic person well had either a good impression of them or a very bad impression of them. Because there’s a pattern where someone who’s problematic in some way is also charismatic, or good at making up for it in other ways, etc. So my very rough model of these situations is that there were a bunch of people you could have asked about them and gotten “looks fine” with 60% probability or “stay the fuck away” with 40% probability. If you only have a few data points of this variety, you’d want to trust your vibes because false negatives can be very costly.
stick to public spaces for a first date, do a web search for the person’s name, establish boundaries and stick to them, be prepared with concrete plans to react to signs of danger, etc.
These mitigations would do nothing against a lot of real relationship failures. Imagine that everything goes swimmingly for the first year. Then you start to realize that even though everything your partner has been doing makes sense on the surface, if you step back and look at the big picture their actions tend to have the effect of separating you from your friends and blaming yourself for a lot of things, and it just doesn’t seem healthy. When you finally decide to break up, it’s an extremely painful process because: (i) your partner is better at weaving stories than you, and from their perspective you’re the problematic person, (ii) your friends all know your partner, and they’ve made a good impression, (iii) you will continue to see them at social events, and (iv) even after all of this, you don’t think they ever purposefully acted maliciously toward you.
Or in the words of Sean Carroll’s Poetic Naturalism:
There are many ways of talking about the world.
All good ways of talking must be consistent with one another and with the world.
Our purposes in the moment determine the best way of talking.
A “way of talking” is a map, and “the world” is the territory.
The orthogonality thesis doesn’t say anything about intelligences that have no goals. It says that an intelligence can have any specific goal. So I’m not sure you’ve actually argued against the orthogonality thesis.
And English has it backwards. You can see the past, but not the future. The thing which just happened is most clear. The future comes at us from behind.
Here’s the reasoning I intuitively want to apply:
where X = “you roll two 6s in a row by roll N”, Y = “you roll at least two 6s by roll N”, and Z = “the first N rolls are all even”.
This is valid, right? And not particularly relevant to the stated problem, due to the “by roll N” qualifiers mucking up the statements in complicated ways?
Where’s the pain?
Sure. For simplicity, say you play two rounds of Russian Roulette, each with a 60% chance of death, and you stop playing if you die. What’s the expected value of YouAreDead at the end?
With probability 0.6, you die on the first round
With probability 0.4*0.6 = 0.24, you die on the second round
With probability 0.4*0.4=0.16, you live through both rounds
So the expected value of the boolean YouAreDead random variable is 0.84.
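The arithmetic above can be double-checked with a few lines of Python (the variable names are my own, purely illustrative):

```python
# Two rounds of Russian Roulette, each with a 60% chance of death;
# you stop playing once you're dead.
p_die = 0.6
p_live = 1 - p_die

p_die_round_1 = p_die           # 0.6
p_die_round_2 = p_live * p_die  # 0.4 * 0.6 = 0.24
p_live_both = p_live * p_live   # 0.4 * 0.4 = 0.16

# YouAreDead is a boolean random variable: 1 if dead, 0 if alive.
expected_dead = 1 * (p_die_round_1 + p_die_round_2) + 0 * p_live_both
print(round(expected_dead, 2))  # 0.84
```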
Now say you’re monogamous and go on two dates, each with a 60% chance to go well, and if they both go well then you pick one person and say “sorry” to the other. Then:
With probability 0.4*0.4=0.16, both dates go badly and you have no partner.
With probability 2*0.4*0.6 = 0.48, one date goes well and you have one partner.
With probability 0.6*0.6=0.36, both dates go well and you select one partner.
So the expected value of the HowManyPartnersDoYouHave random variable is 0.84, and the expected value of the HowManyDatesWentWell random variable is 0.48+2*0.36 = 1.2.
Now say you’re polyamorous and go on two dates with the same chance of success. Then:
With probability 0.4*0.4=0.16, both dates go badly and you have no partners.
With probability 2*0.4*0.6 = 0.48, one date goes well and you have one partner.
With probability 0.6*0.6=0.36, both dates go well and you have two partners.
So the expected value of both the HowManyPartnersDoYouHave random variable and the HowManyDatesWentWell random variable is 1.2.
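For concreteness, here is the same two-date calculation in Python, covering both the monogamous and polyamorous cases (names are illustrative):

```python
# Two dates, each going well independently with probability 0.6.
p = 0.6
q = 1 - p

p_zero = q * q     # 0.16: both go badly
p_one = 2 * q * p  # 0.48: exactly one goes well
p_two = p * p      # 0.36: both go well

# Monogamous: two dates going well still leaves you with one partner.
e_partners_mono = 1 * p_one + 1 * p_two    # 0.84
# Polyamorous: partners = number of dates that went well.
e_partners_poly = 1 * p_one + 2 * p_two    # 1.2
# Dates that went well -- the same in both cases:
e_dates_went_well = 1 * p_one + 2 * p_two  # 1.2

print(round(e_partners_mono, 2), round(e_partners_poly, 2))  # 0.84 1.2
```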
Note that I’ve only ever made statements about expected value, never about utility.
Probability of at least two successes: ~26%
My point is that in some situations, “two successes” doesn’t make sense. I picked the dating example because it’s cute, but for something more clear cut imagine you’re playing Russian Roulette with 10 rounds each with a 10% chance of death. There’s no such thing as “two successes”; you stop playing once you’re dead. The “are you dead yet” random variable is a boolean, not an integer.
If you’re monogamous and go to multiple speed dating events and find two potential partners, you end up with one partner. If you’re polyamorous and do the same, you end up with two partners.
One way to think of it is whether you will stop trying after the first success. Though that isn’t always the distinguishing feature. For example, you might start 10 job interviews at the same time, even though you’ll take at most one job.
However it is true that doing something with a 10% success rate 10 times will net you an average of 1 success.
For the easier to work out case of doing something with a 50% success rate 2 times:
25% chance of 0 successes
50% chance of 1 success
25% chance of 2 successes
Gives an average of 1 success.
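That distribution can be enumerated directly; a small Python sketch (illustrative only):

```python
# Two independent attempts, each with a 50% success rate.
p = 0.5
dist = {0: 0.0, 1: 0.0, 2: 0.0}
for first in (0, 1):
    for second in (0, 1):
        prob = (p if first else 1 - p) * (p if second else 1 - p)
        dist[first + second] += prob

average = sum(k * pr for k, pr in dist.items())
print(dist)     # {0: 0.25, 1: 0.5, 2: 0.25}
print(average)  # 1.0
```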
Of course this only matters for the sort of thing where 2 successes is better than 1 success:
10% chance of finding a monogamous partner 10 times yields 1 − 0.9^10 ≈ 0.65 monogamous partners in expectation.
10% chance of finding a polyamorous partner 10 times yields 1.00 polyamorous partners in expectation.
EDIT: To clarify, a 10% chance of finding a monogamous partner 10 times yields 1.00 successful dates and ~0.65 monogamous partners that you end up with, in expectation.
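These expectations are easy to verify (the monogamous case works out to 1 − 0.9^10 ≈ 0.65; names are illustrative):

```python
p, n = 0.1, 10

# Expected number of dates that go well (either style): n * p.
e_good_dates = n * p                # 1.0

# Monogamous: you keep at most one partner, so expected partners
# equals the probability that at least one date succeeds.
e_partners_mono = 1 - (1 - p) ** n  # 1 - 0.9**10 ~= 0.65

# Polyamorous: you keep every partner you find.
e_partners_poly = n * p             # 1.0

print(round(e_partners_mono, 2))  # 0.65
```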
IQ over median does not correlate with creativity over median
That’s not what that paper says. It says that IQ over 110 or so (quite above median) correlates less strongly (but still positively) with creativity. In Chinese children, age 11-13.
And for a visceral description of a kind of bullying that’s plainly bad, read the beginning of Worm: https://parahumans.wordpress.com/2011/06/11/1-1/
I double-downvoted this post (my first ever double-downvote) because it crosses a red line by advocating for verbal and physical abuse of a specific group of people.
Alexej: this post gives me the impression that you started with a lot of hate and went looking for justifications for it. But if you have some real desire for truth seeking, here are some counterarguments:
Yeah, I think “computational irreducibility” is an intuitive term pointing to something which is true, important, and not-obvious-to-the-general-public. I would consider using that term even if it had been invented by Hitler and then plagiarized by Stalin :-P
Agreed!
OK, I no longer claim that. I still think it might be true.
No, Rice’s theorem is really not applicable. I have a PhD in programming languages, and feel confident saying so.
Let’s be specific. Say there’s a mouse named Crumbs (this is a real mouse), and we want to predict whether Crumbs will walk into the humane mouse trap (they did). What does Rice’s theorem say about this?
There are a couple ways we could try to apply it:
- We could instantiate the semantic property P with “the program will output the string ‘walks into trap’”. Then Rice’s theorem says that we can’t write a program Q that takes as input a program R and says whether R outputs ‘walks into trap’. For any Q we write, there will exist a program R that defeats it. However, this does not say anything about what the program R looks like! If R is simply `print('walks into trap')`, then it’s pretty easy to tell! And if R is the Crumbs algorithm running in Crumb’s brain, Rice’s theorem likewise does not claim that we’re unable to tell if it outputs ‘walks into trap’. All the theorem says is that there exists a program R that Q fails on. The proof of the theorem is constructive, and does give a specific program as a counter-example, but this program is unlikely to look anything like Crumb’s algorithm. The counter-example program R runs Q on P and then does the opposite of it, while Crumbs does not know what we’ve written for Q and is probably not very good at emulating Python.
- We could try to instantiate the counter-example program R with Crumb’s algorithm. But that’s illegal! It’s under an existential, not a forall. We don’t get to pick R; the theorem does.
Actually, even this kind of misses the point. When we’re talking about Crumb’s behavior, we aren’t asking what Crumbs would do in a hypothetical universe in which they lived forever, which is the world that Rice’s theorem is talking about. We mean to ask what Crumbs (and other creatures) will do today (or perhaps this year). And that’s decidable! You can easily write a program Q that takes a program R and checks if R outputs ‘walks into trap’ within the first N steps! Rice’s theorem doesn’t stand in your way even a little bit, if all you care about is behavior after a fixed finite amount of time!
Here’s what Rice’s theorem does say. It says that if you want to know whether an arbitrary critter will walk into a trap after an arbitrarily long time, including long after the heat death of the universe, and you think you have a program that can check that for any creature in finite time, then you’re wrong. But creatures aren’t arbitrary (they don’t look like the very specific, very scattered counterexample programs that are constructed in the proof of Rice’s theorem), and the duration of time we care about is finite.
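To make the finite-horizon point concrete, here’s a minimal sketch (my own illustration, not from the discussion above): model a “creature” as a generator of actions, and decide whether it outputs ‘walks into trap’ within its first N steps simply by running it for at most N steps.

```python
def outputs_within(program, target, max_steps):
    # Run `program` (an iterator of outputs) for at most `max_steps`
    # steps and report whether it ever yields `target`. This always
    # terminates, so the bounded-time question is decidable; Rice's
    # theorem only rules out deciding behavior over unbounded time.
    steps = 0
    for output in program:
        if output == target:
            return True
        steps += 1
        if steps >= max_steps:
            return False
    return False  # the program halted without producing `target`

def trivial_mouse():
    # Analogue of print('walks into trap'): trivially easy to analyze.
    yield 'walks into trap'

def cautious_mouse():
    # Never produces the target output, no matter how long it runs.
    while True:
        yield 'sniffs the air'

print(outputs_within(trivial_mouse(), 'walks into trap', 1000))   # True
print(outputs_within(cautious_mouse(), 'walks into trap', 1000))  # False
```

The checker never loops forever because the step budget is fixed, which is exactly why the bounded question escapes Rice’s theorem.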
If you care to have a theorem, you should try looking at Algorithmic Information Theory. It’s able to make statements about “most programs” (or at least “most bitstrings”), in a way that Rice’s theorem cannot. Though I don’t think it’s important you have a theorem for this, and I’m not even sure that there is one.
Rice’s theorem (a.k.a. computational irreducibility) says that for most algorithms, the only way to figure out what they’ll do with certainty is to run them step-by-step and see.
Rice’s theorem says nothing of the sort. Rice’s theorem says:
For every semantic property P,
for every program Q that purports to check whether an arbitrary program has property P,
there exists a program R such that Q(R) is incorrect: either P holds of R but Q(R) returns false, or P does not hold of R but Q(R) returns true.
Notice that the tricky program R that’s causing your property-checker Q to fail is under an existential. This isn’t saying anything about most programs, and it isn’t even saying that there’s a subset of programs that are tricky to analyze. It’s saying that after you fix a property P and a property checker Q, there exists a program R that’s tricky for Q. There might be a more relevant theorem from algorithmic information theory, I’m not sure.
Going back to the statement:
for most algorithms, the only way to figure out what they’ll do with certainty is to run them step-by-step and see
This is only sort of true? Optimizing compilers rewrite programs into equivalent programs before they’re run, and can be extremely clever about the sorts of rewrites that they do, including reducing away parts of the program without needing to run them first. We tend to think of the compiled output of a program as “the same” program, but that’s only because compilers are reliable at producing equivalent code, not because the equivalence is straightforward.
a.k.a. computational irreducibility
Rice’s theorem is not “also known as” computational irreducibility.
By the way, be wary of claims from Wolfram. He was a serious physicist, but is a bit of an egomaniac these days. He frequently takes credit for others’ ideas (I’ve seen multiple clear examples) and exaggerates the importance of the things he’s done (he’s written more than one obituary for someone famous, where he talks more about his own accomplishments than the deceased’s). I have a copy of A New Kind of Science, and I’m not sure there’s much of value in it. I don’t think this is a hot take.
for most algorithms, the only way to figure out what they’ll do with certainty is to run them step-by-step and see
I think the thing you mean to say is that for most of the sorts of complex algorithms you see in the wild, such as the algorithms run by brains, there’s no magic shortcut to determine the algorithm’s output that avoids having to run any of the algorithm’s steps. I agree!
I think we’re in agreement on everything.
Excellent. Sorry for thinking you were saying something you weren’t!
still not have an answer to whether it’s spinning clockwise or counterclockwise
More simply (and quite possibly true), Nobuyuki Kayahara rendered it spinning either clockwise or counterclockwise, lost the source, and has since forgotten which way it was going.
I like “veridical” mildly better for a few reasons, more about pedagogy than anything else.
That’s a fine set of reasons! I’ll continue to use “accurate” in my head, as I already fully feel that the accuracy of a map depends on which territory you’re choosing for it to represent. (And a map can accurately represent multiple territories, as happens a lot with mathematical maps.)
Another reason is I’m trying hard to push for a two-argument usage
Do you see the Spinning Dancer going clockwise? Sorry, that’s not a veridical model of the real-world thing you’re looking at.
My point is that:
The 3D spinning dancer in your intuitive model is a veridical map of something 3D. I’m confident that the 3D thing is a 3D graphical model which was silhouetted after the fact (see below), but even if it was drawn by hand, the 3D thing was a stunningly accurate 3D model of a dancer in the artist’s mind.
That 3D thing is the obvious territory for the map to represent.
It feels disingenuous to say “sorry, that’s not a veridical map of [something other than the territory map obviously represents]”.
So I guess it’s mostly the word “sorry” that I disagree with!
By “the real-world thing you’re looking at”, you mean the image on your monitor, right? There are some other ways one’s intuitive model doesn’t veridically represent that, such as the fact that, unlike other objects in the room, it’s flashing off and on 60 times per second, has a weirdly spiky color spectrum, and (assuming an LCD screen) consists entirely of circularly polarized light.
It was made by a graphic artist. I’m not sure their exact technique, but it seems at least plausible to me that they never actually created a 3D model.
This is a side track, but I’m very confident a 3D model was involved. Plenty of people can draw a photorealistic silhouette. The thing I think is difficult is drawing 100+ silhouettes that match each other perfectly and have consistent rotation. (The GIF only has 34 frames, but the original video is much smoother.) Even if technically possible, it would be much easier to make one 3D model and have the computer rotate it. Annnd, if you look at Nobuyuki Kayahara’s website, his talent seems more on the side of mathematics and visualization than photo-realistic drawing, so my guess is that he used an existing 3D model for the dancer (possibly hand-posed).
This is fantastic! I’ve tried reasoning along these directions, but never made any progress.
A couple comments/questions:
Why “veridical” instead of simply “accurate”? To me, the accuracy of a map is how well it corresponds to the territory it’s trying to map. I’ve been replacing “veridical” with “accurate” while reading, and it’s seemed appropriate everywhere.
Do you see the Spinning Dancer going clockwise? Sorry, that’s not a veridical model of the real-world thing you’re looking at. [...] after all, nothing in the real world of atoms is rotating in 3D.
I think you’re being unfair to our intuitive models here.
The GIF isn’t rotating, but the 3D model that produced the GIF was rotating, and that’s the thing our intuitive models are modeling. So exactly one of [spinning clockwise] and [spinning counterclockwise] is veridical, depending on whether the graphic artist had the dancer rotating clockwise or counterclockwise before turning her into a silhouette. (Though whether it happens to be veridical is entirely coincidental, as the silhouette is identical to the one that would have been produced had the dancer been spinning in the opposite direction.)
If you look at the photograph of Abe Lincoln from Feb 27, 1860, you see a 3D scene with a person in it. This is veridical! There was an actual room with an actual person in it, who dressed that way and touched that book. The map’s territory is 164 years older than the map, but so what.
(My favorite example of an intuitive model being wildly incorrect is Feynman’s story of learning to identify kinds of galaxies from images on slides. He asks his mentor “what kind of galaxy is this one, I can’t identify it”, and his mentor says it’s a smudge on the slide.)
The market named “This Market Will Resolve No At The End Of 2025” will resolve to No at the end of 2025. Like it says in its title. What’s unclear about this?