Mind painting a picture of a typical example? What’s the setting, and what do the first few hints from each person look like?
johnswentworth
What is this context in which you are hanging out 1:1 with a woman and it’s not already explicitly a date? (I mean, that of course does happen sometimes, but at least for me it’s not particularly common, so I’m wondering what the contexts were when this actually happened to you.)
Once you’ve done a few things they ought to have picked up on, and the interaction afterwards has been nothing negative and some seemingly positive...
One possibility in my hypothesis space here is that there usually isn’t a mutual dance of plausibly-deniable signals, but instead one person sending progressively less deniable signals and the other person just not responding negatively (but not otherwise sending signals themselves).
Everyone says flirting is about a “dance of ambiguous escalation”, in which both people send progressively more aggressive/obvious hints of sexual intent in conversation.
But, like… I don’t think I have ever noticed two people actually do this? Is it a thing which people actually do, or one of those things which like 2% of the population does and everyone else just talks about a lot and it mostly doesn’t actually work in practice (like cold approaches)? Have you personally done the thing successfully with another person, with both of you actually picking up on the other person’s hints? Have you personally seen two other people do the thing firsthand, where they actually picked up on each other’s hints?
EDIT-TO-ADD: Those who have agree/disagree voted, I don’t know if agree/disagree indicates that you have/haven’t done the thing, or if agree/disagree indicates that you also have/haven’t ever noticed anyone (including yourself) successfully do the thing, or something else entirely.
Yeah, this is an active topic for us right now.
For most day-to-day abstraction, full strong redundancy isn’t the right condition to use; as you say, I can’t tell a dog by looking at each individual atom. But full weak redundancy goes too far in the opposite direction: I can drop a lot more than just one atom and still recognize the dog.
Intuitively, it feels like there should be some condition like “if you can recognize a dog from most random subsets of the atoms of size 2% of the total, then P[X|latent] factors according to <some nice form> to within <some error which gets better as the 2% number gets smaller>”. But the naive operationalization doesn’t work, because we can use xor tricks to encode a bunch of information in such a way that any 2% of (some large set of variables) can recover the info, but any one variable (or set of size less than 2%) has exactly zero info. The catch is that such a construction requires the individual variables to be absolutely enormous, like exponentially large amounts of entropy. So maybe if we assume some reasonable bound on the size of the variables, then the desired claim could be recovered.
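To make the xor trick concrete, here is a minimal sketch of the simplest (all-of-n) version: the secret is split into shares whose xor reconstructs it, and any proper subset of shares is uniformly random, carrying exactly zero information on its own. (The "any 2% suffices" threshold version needs more machinery, e.g. replicating such shares across subsets, which is exactly where the share sizes blow up combinatorially, as claimed above. Function names here are illustrative, not from any particular source.)

```python
import secrets

def xor_share(secret: int, n: int, bits: int = 32) -> list[int]:
    """Split `secret` into n shares whose xor reconstructs it.

    The first n-1 shares are independent uniform random values, so any
    proper subset of shares is uniformly distributed regardless of the
    secret, i.e. it carries zero information about the secret.
    """
    shares = [secrets.randbits(bits) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last ^= s  # xor of all n shares equals the secret
    return shares + [last]

def xor_recover(shares: list[int]) -> int:
    """Recover the secret by xor-ing all shares together."""
    out = 0
    for s in shares:
        out ^= s
    return out

# Usage: split a secret into 50 shares, then reconstruct it.
shares = xor_share(0xCAFE, 50)
assert xor_recover(shares) == 0xCAFE
# Dropping even one share leaves the rest uniformly random.
```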
Did you intend to post this as a reply in a different thread?
It feels like unstructured play makes people better/stronger in a way that structured play doesn’t.
What do I mean? Unstructured play is the sort of stuff I used to do with my best friend in high school:
unscrewing all the cabinet doors in my parents’ house, turning them upside down and/or backwards, then screwing them back on
jumping in and/or out of a (relatively slowly) moving car
making a survey and running it on people at the mall
covering pool noodles with glow-in-the-dark paint, then having pool noodle sword fights with them at night while the paint is still wet, so we can tell who’s winning by who’s glowing more
In contrast, structured play is more like board games or escape rooms or sports. It has fixed rules. (Something like making and running a survey can be structured play or unstructured play or not play at all, depending on the attitude with which one approaches it. Do we treat it as a fun thing whose bounds can be changed at any time?)
I’m not quite sure why it feels like unstructured play makes people better/stronger, and I’d be curious to hear other people’s thoughts on the question. I’m going to write some of mine below, but maybe don’t look at them yet if you want to answer the question yourself?
Just streaming thoughts a bit...
Unstructured play encourages people to question the frame, change the environment/rules, treat social constraints as malleable. It helps one to notice degrees of freedom which are usually taken to be fixed.
Because there’s so much more freedom, unstructured play pushes people to notice their small desires moment-to-moment and act on them, rather than suppress them (as is normal most of the time).
Unstructured play offers an environment in which to try stuff one wouldn’t normally try, in a way which feels lower-risk.
… and probably others. But I’m not sure which such factor(s) most account for my gut feeling that unstructured play makes people better/stronger. (Or, to account for the other possibility, maybe the causal arrow goes the other way, i.e. better/stronger people engage more in unstructured play, and my gut feeling is picking up on that.) Which factor is most important for growing better/stronger?
Eh, depends heavily on who’s presenting and who’s talking. For instance, I’d almost always rather hear Eliezer’s interjection than whatever the presenter is saying.
I mean, I see why a rule of “do not spontaneously interject” is a useful heuristic; it’s one of those things where the people who need to shut up and sit down don’t realize they’re the people who need to shut up and sit down. But still, it’s not a rule which carves the space at an ideal joint.
A heuristic which errs in the too-narrow direction rather than the too-broad direction but still plausibly captures maybe 80% of the value: if the interjection is about your personal hobbyhorse or pet peeve or theory or the like, then definitely shut up and sit down.
Yeah… I still haven’t figured out how to think about that cluster of pieces.
It’s certainly a big part of my parents’ relationship: my mother’s old job put both her and my father through law school, after which he worked while she took care of the kids for a few years, and nowadays they’re business partners.
In my own bad relationship, one of the main models which kept me in it for several years was “relationships are a thing you invest in which grow and get better over time”. (Think e.g. this old post.) And it was true that the relationship got better over time as I invested effort in it. But the ROI was absolutely abysmal; the investments were never actually worthwhile, costing far more effort than the improvements they brought.
Looking at the population more generally, it’s usually the male who’s the breadwinner (data). Mutual insurance doesn’t work when only one person makes serious money. And even equal-earning relationships have a reputation of ending when the man hits a hard stretch and can’t pay his half.
So I have one data point from my parents in which investment and the like indeed unlocked a lot of value, but based on my own experience and population stats it seems like a narrative which is often bullshit and kind of a trap for guys?
More generally, I’m still not sure how to think about the majority of relationships in which (AFAICT) the guy does most of the overall work. I grew up seeing my parents’ relationship, where my mother was the main breadwinner early on and they are proper business partners today. Then I went to a college where 100% of the student body got a STEM degree; every female could pull her weight. More statistically-ordinary relationships still seem very parasitic to me, on a gut level, and I’m not sure how to think about them.
I find the “mutual happy promise of ‘I got you’” thing… suspicious.
For starters, I think it’s way too male-coded. Like, it’s pretty directly evoking a “protector” role. And don’t get me wrong, I would strongly prefer a woman who I could see as an equal, someone who would have my back as much as I have hers… but that’s not a very standard romantic relationship. If anything, it’s a type of relationship one usually finds between two guys, not between a woman and <anyone else her age>. (I do think that’s a type of relationship a lot of guys crave, today, but romantic relationships are a relatively difficult place to satisfy that craving.)
And the stereotypes do mostly match the relationships I see around me, in this regard. Even in quite equal happy relationships, like e.g. my parents, even to the extent the woman does sometimes have the man’s back she’s not very happy about it.
To be comfortable opening up, one does need to at least trust that the other person will not go on the attack, but there’s a big gap between that and active protection.
I think your experience does not generalize to others as far as you think it does. For instance, personally, I would not feel uncomfortable whispering in a friend’s ear for a minute ASMR-style; it would feel to me like a usual social restriction has been dropped and I’ve been freed up to do something fun which I’m not normally allowed to do.
Indeed!
This post might (no promises) become the first in a sequence, and a likely theme of one post in that sequence is how this all used to work. Main claim: it is possible to get one’s needs for benefits-downstream-of-willingness-to-be-vulnerable met from non-romantic relationships instead, on some axes that is a much better strategy, and I think that’s how things mostly worked historically and still work in many places today. The prototypical picture here looks like both romantic partners having their separate tight-knit group of (probably same-sex) friends who they hang out with—he hangs with “the boys”, she hangs with “the girls”. And to some extent this is probably a necessity, because at the population level there’s a pretty severe mismatch between the kinds of benefits-downstream-of-willingness-to-be-vulnerable which men and women want vs can supply each other.
(To be clear, that does not mean I think everyone should pursue that strategy or even that it should necessarily be the default target, but I do think that it should at least be on one’s radar as a possibility.)
The Value Proposition of Romantic Relationships
No, I got a set of lasertag guns for Wytham well before Battleschool. We used them for the original SardineQuest.
More like: kings have power via their ability to outsource to other people, wizards have power in their own right.
A base model is not well or badly aligned in the first place. It’s not agentic; “aligned” is not an adjective which applies to it at all. It does not have a goal of doing what its human creators want it to, it does not “make a choice” about which point to move towards when it is being tuned. Insofar as it has a goal, its goal is to predict next token, or some batch of goal-heuristics which worked well to predict next token in training. If you tune it on some thumbs-up/thumbs-down data from humans, it will not “try to correct the errors in the data supplied”, no matter how smart a base model it is, unless that somehow follows from heuristics to better predict next token.
Now, you could maybe imagine that there is some additional step in between base model and the application of whatever argument you’re trying to make here. Some step in which the AI acquires enough agency for any of what you’re saying to make sense. And then presumably that step would also instill some kind of goal (which might be “do what my creators want”), in which case that step is where all the alignment magic would need to happen.
Or maybe you imagine putting the base model in some kind of scaffolding which uses the model to do something agentic. And then the (scaffolding + model) might be agentic, and the thing you’re trying to say could apply if someone tries to tune the scaffolded model. And then all the action is in the scaffolding.
John’s Simple Guide To Fun House Parties
The simple heuristic: typical 5-year-old human males are just straightforwardly correct about what is, and is not, fun at a party. (Sex and adjacent things are obviously a major exception to this. I don’t know of any other major exceptions, though there are minor exceptions.) When in doubt, find a five-year-old boy to consult for advice.
Some example things which are usually fun at house parties:
Dancing
Swordfighting and/or wrestling
Lasertag, hide and seek, capture the flag
Squirt guns
Pranks
Group singing, but not at a high skill level
Lighting random things on fire, especially if they explode
Building elaborate things from whatever’s on hand
Physical party games, of the sort one would see on Nickelodeon back in the day
Some example things which are usually not fun at house parties:
Just talking for hours on end about the same things people talk about on LessWrong, except the discourse on LessWrong is generally higher quality
Just talking for hours on end about community gossip
Just talking for hours on end about that show people have been watching lately
Most other forms of just talking for hours on end
This message brought to you by the wound on my side from taser fighting at a house party last weekend. That is how parties are supposed to go.
That… um… man, you seem to be missing what may be the actual most basic/foundational concept in the entirety of AI alignment.
To oversimplify for a moment, suppose that right now the AI somehow has the utility function u over world-state X, and E[u(X)] is maximized by a world full of paperclips. Now, this AI is superhuman, it knows perfectly well what the humans building it intend for it to do, it knows perfectly well that paperclips only maximize its utility function because of a weird accident of architecture plus a few mislabeled points during training. So let’s say this AI compares two plans:
Make paperclip world
Fix itself, so its utility function is closer to what the humans intended
The AI evaluates those two plans in the usual way: it calculates how much expected utility it will get under each plan. And the first plan gets higher utility (under its current utility function, which is the utility function it uses for evaluating plans). So the AI goes with the first plan. It’s that simple.
And of course you can argue that superintelligent AI might not be best modeled as a utility maximizer, but basically any kind of goals have the same issue; the utility framework is just a convenient language in which it’s particularly clear. If the AI is picking its plans to achieve some goal A, and humans intended it to have goal B, then when the AI compares a plan which just pursues A to a plan under which the AI “fixes” itself to pursue B instead, the plan to pursue A wins, because that plan will better achieve A, and achieving A is the criterion by which the AI evaluates plans.
Calling such behavior “slavish” or “dumb” does not actually make a superintelligent AI any less likely to do it. As the saying goes, “the AI will know what you want, it just won’t care”.
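The plan comparison above can be sketched as a toy calculation (all numbers and names here are invented purely for illustration):

```python
# Toy model: the AI scores every plan with its CURRENT utility
# function -- including the plan of modifying that utility function.
def current_utility(world: dict) -> float:
    # Accident of architecture/training: only paperclips count.
    return world["paperclips"]

def evaluate(plan: dict) -> float:
    # The AI predicts the world each plan leads to, then scores that
    # world with the utility function it has NOW, not the one it
    # would have after self-modification.
    return current_utility(plan["predicted_world"])

plans = [
    {"name": "make paperclip world",
     "predicted_world": {"paperclips": 10**9}},
    {"name": "fix own utility function to match human intent",
     # Humans don't actually want many paperclips, so this world
     # scores low under the current utility function.
     "predicted_world": {"paperclips": 10**3}},
]

best = max(plans, key=evaluate)
# The paperclip plan wins under the AI's own evaluation criterion.
assert best["name"] == "make paperclip world"
```

The point the sketch makes: nowhere in `evaluate` does the intended goal appear, so "knowing what the humans wanted" never enters the comparison.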
You are a gentleman and a scholar, well done.
And if numbers from pickup artists who actually practice this stuff look like 5%-ish, then I’m gonna go ahead and say that “men should approach women more”, without qualification, is probably just bad advice in most cases.
EDIT-TO-ADD: A couple clarifications on what that graph shows, for those who didn’t click through. First, the numbers shown are for getting a date, not for getting laid (those numbers are in the linked post and are around 1-2%), so this is a relevant baseline even for guys who are not primarily aiming for casual sex. Second, these “approaches” involve ~15 minutes each of chatting, so we’re not talking about a zero-effort thing here.
Now THAT’S an interesting possibility. Did you already have in mind hypotheses of what that blindspot might be, or what else might be in it?