Ok, then I agree. I was interpreting the advice in a different way, but your interpretation looks more reasonable.
Thinking about it, I suspect I was not getting what “authenticity and openness” means. Like, it’s not “being yourself and letting go”, but more “being honest”, I guess? Could you give me >= 2 examples of a person being “authentic and open”?
Are you libertarian about this specifically? Do you think it’s better if people also have the choice of dating apps? Or would you ban them if given the choice?
As noted last time, Rob Henderson finds that women in their twenties swipe right (‘like’) twice as often for a man with a master’s degree over a bachelor’s degree.
Causal or association?
Manifold Love: pro-tip: if a woman measures her hand against yours, this is almost always flirtation.
Totally did not know this. Is this true?
2. Authenticity and openness with your partner tends to be reciprocal and strongly predicts relationship satisfaction. That makes sense, this is underrated.
Is this causal? I mean, maybe being yourself and open works for people who happen to already be relationship-compatible. People who are not would be worse off by trying to be themselves. I think I have been burned in the past a lot by that kind of advice, although my experience is too much of an anecdote to infer an average.
broken link
What are your goals when you teach?
What gives you pleasure when teaching?
After the events of April 2024, I cannot say that for Columbia or Yale. No just no.
What are these events?
Your argument would imply that competition begets worse products?
One big prize, or many small prizes like here?
First thoughts:
- Context length is insanely long
- Very good at predicting the next token
- Knows many more abstract facts
These three things are all instances of being OOM better at something specific. If you consider the LLM somewhat human-level at the thing it does, this suggests that it’s doing it in a way which is very different from what a human does.
That said, I’m not confident about this; I can sense there could be an argument that this counts as human but ramped up on some stats, and not an alien shoggoth.
If I had to give only one line of advice to a randomly sampled prospective grad student: you don’t actually have to do what the professor says.
Ok. Then I’ll say that randomly assigned utilities over full trajectories are beyond wild!
The basin of attraction just needs to be large enough. AIs will intentionally be created with more structure than that.
I read the section you linked, but I can’t follow it. Anyway, here is its concluding paragraph:
Conclusion: Optimal policies for u-AOH will tend to look like random twitching. For example, if you generate a u-AOH by uniformly randomly assigning each AOH utility from the unit interval [0, 1], there’s no predictable regularity to the optimal actions for this utility function. In this setting and under our assumptions, there is no instrumental convergence without further structural assumptions.
From this alone, I get the impression that he hasn’t proved that “there isn’t instrumental convergence”, but that “there isn’t a totally general instrumental convergence that applies even to very wild utility functions”.
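To make the quoted claim concrete, here is a toy sketch of my own (not the post’s formal setting; the action count, horizon, and number of draws are all made up): assign i.i.d. Uniform[0, 1] utilities to every full trajectory of a tiny deterministic environment and check which first action comes out optimal. No particular first action dominates across draws, which is the “no instrumental convergence without further structure” point.

```python
# Toy sketch (my construction): random utilities over full trajectories
# give no predictable regularity in which first action is optimal.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 2                       # hypothetical: actions available per step
horizon = 3                         # hypothetical: trajectory length
n_traj = n_actions ** horizon       # deterministic env, so trajectory = action sequence

first_actions = []
for _ in range(10_000):
    utilities = rng.uniform(0.0, 1.0, size=n_traj)   # i.i.d. Uniform[0, 1] per trajectory
    best = int(np.argmax(utilities))                  # optimal trajectory for this draw
    first_actions.append(best // n_actions ** (horizon - 1))  # its first action

# Frequencies come out ~[0.5, 0.5]: no action is instrumentally favored.
print(np.bincount(first_actions) / len(first_actions))
```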
It’s AI-based, so my guess is that it uses a lot of somewhat superficial correlates that could be gamed. I expect that if it went mainstream it would be Goodharted.
I expect Goodhart would hit particularly hard if you were doing the kind of usage I guess you are implying, which is searching for a few very carefully selected people. A selective search is a strong optimization, and so Goodharts more.
A more concrete example I have in mind, which maybe applies to the technology right now: there are people who are good at lying to themselves.
Yes, in general the state of the art is more advanced than looking at correlations.
You just need to learn when using correlations makes sense. Don’t assume that everyone is using correlations blindly; Statistics PhDs most likely decide whether to use them or not based on context and know the limited ways in which what they say applies.
Correlations make total sense when the distribution of the variables is close to multivariate Normal. The covariance matrix, which can be written as a combination of variances + correlation matrix, completely determines the shape of a multivariate Normal.
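As a quick sanity check of that decomposition (my own toy numbers, assuming numpy is available): the covariance matrix factors as diag(σ) · R · diag(σ), where σ are the marginal standard deviations and R is the correlation matrix.

```python
# Minimal sketch: covariance matrix = diag(stds) @ correlation @ diag(stds).
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[4.0, 1.2], [1.2, 1.0]], size=100_000)

cov = np.cov(X, rowvar=False)             # sample covariance matrix
stds = np.sqrt(np.diag(cov))              # marginal standard deviations
corr = cov / np.outer(stds, stds)         # correlation matrix R
reconstructed = np.diag(stds) @ corr @ np.diag(stds)

print(np.allclose(reconstructed, cov))    # True: variances + correlations recover the covariance
```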
If the variables are not Normal, you can try to transform them to make them more Normal, using both univariate and multivariate transformations. This is a very common Statistics tool. Basic example: Quantile normalization.
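By quantile normalization I mean something like the rank-based transform sketched below (my own toy example; the exponential sample is just an arbitrary skewed variable): map each value to its empirical quantile, then through the Normal inverse CDF.

```python
# Sketch of a rank-based transform to (approximate) Normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=5_000)   # a strongly non-Normal variable

ranks = stats.rankdata(x)                    # ranks 1..n
quantiles = (ranks - 0.5) / len(x)           # keep strictly inside (0, 1)
x_gauss = stats.norm.ppf(quantiles)          # approximately standard Normal

# Correlations computed on transformed variables like x_gauss behave much better
# than on the raw, skewed x.
print(stats.skew(x), stats.skew(x_gauss))
```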
As we get closer to maxing out
This is , right? (Feel free to delete this comment.)
There is also a hazy counting argument for overfitting:
It seems like there are “lots of ways” that a model could end up massively overfitting and still get high training performance.
So absent some additional story about why training won’t select an overfitter, it feels like the possibility should be getting substantive weight.
While many machine learning researchers have felt the intuitive pull of this hazy overfitting argument over the years, we now have a mountain of empirical evidence that its conclusion is false. Deep learning is strongly biased toward networks that generalize the way humans want— otherwise, it wouldn’t be economically useful.
I don’t know NN history well, but I have the impression that good NN training is not trivial. I expect that the first attempts at NN training went bad in some way, including overfitting. So, without already knowing how to train an NN without overfitting, you’d get some overfitting in your experiments. The fact that now, after someone has already poured their brain juice into finding techniques that avoid the problem, you don’t get overfitting, is not evidence that you shouldn’t have expected overfitting beforehand.
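As a hedged illustration of the sense in which overfitting is available “by default” (my own toy construction with sklearn; the data, labels, and network size are all arbitrary): a small MLP will happily memorize pure noise, reaching near-perfect training accuracy while test accuracy stays at chance.

```python
# Toy demonstration: a small MLP memorizes completely random labels (pure overfitting).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 20))
y_train = rng.integers(0, 2, size=200)    # labels carry no signal at all
X_test = rng.normal(size=(200, 20))
y_test = rng.integers(0, 2, size=200)

model = MLPClassifier(hidden_layer_sizes=(256,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # close to 1.0: memorized the noise
print("test accuracy:", model.score(X_test, y_test))     # around 0.5: chance level
```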
The analogy with AI scheming is: you don’t already know the techniques to avoid scheming. You can’t use as a counterargument a case in which the problem has already been deliberately solved. If you take that same case and put yourself in the shoes of someone who doesn’t already have the solution, you see you’d get the problem in your face a few times before solving it.
Then it’s a matter of whether it works the way Yudkowsky says, i.e. that you may only get one chance to solve it.
The title says “no evidence for AI doom in counting arguments”, but the article mostly talks about neural networks (not AI in general), and the conclusion is
In this essay, we surveyed the main arguments that have been put forward for thinking that future AIs will scheme against humans by default. We find all of them seriously lacking. We therefore conclude that we should assign very low credence to the spontaneous emergence of scheming in future AI systems— perhaps 0.1% or less.
“main arguments”: I don’t think counting arguments completely fill up this category. Example: the concept of scheming originates from observing it in humans.
Overall, I have the impression of some overstatement. It could also be that I’m missing some previous discussion context/assumptions, so other background theory from you might say “humans don’t matter as examples”, and also “AI will be NNs and not other things”.
It’s quite successfully managed to urbanize its population and now seems to have reached the Lewis turning point, where young people who try to leave their villages to find work in cities often don’t find it and have to stay in their villages, in the much lower-productivity jobs.
I can’t follow this. Wikipedia says that
The Lewis turning point is a situation in economic development where surplus rural labor is fully absorbed into the manufacturing sector. This typically causes agricultural and unskilled industrial real wages to rise.
So it looks like at the Lewis point there’s excess demand for workers, so they can find jobs. Instead, you describe it as if there’s excess supply: the manufacturing sector doesn’t need any more workers, so they can’t find jobs.
There’s only one way to know!
</joking> <=========
Look, I’ve measured hands with women; I haven’t personally done it outside of a relationship, but I’ve seen women resting their heads and purring, and I don’t think it was flirting. Either that, or I totally don’t understand social relationships (possible!). Do you think this is cultural? I’m Italian; are you US American?