Reasonable, I don’t know much about the situation
romeostevensit
Reading between the lines on the responses, it sounds like op doesn’t have the ability to evaluate grants effectively, has attribute-substituted its way into doing things that superficially look like evaluation, and is selecting internally for people who are unable to distinguish appearance from actuality. This sounds like a founder effect, downstream of Dustin and Cari being unable to evaluate. It rhymes with a similar dynamic in the VC world: people on the outside assume VC is about funding cutting-edge, highly uncertain projects, but after lots of wasted effort, those interested in such high-variance projects eventually conclude that VC mostly selects for low variance with a bias toward insiders.
That is to say: investors recognize that they don’t have expertise in selecting unusual projects, so they hire people to ostensibly specialize in evaluating unusual projects, but their own taste in selecting the evaluators means that the evaluators eventually select/are selected for pleasing the investors.
To be specific: some combination of op/gv acts like its opportunity cost for capital is quite high, and it’s unclear why. One hypothesis is ‘since we’re unable to evaluate grants, if we’re profligate with money we will be resource-pumped even more than we already are.’
My impression for several years has been that the effort that people trying to do interesting work put into engaging with ea was wasted, and led to big emotional letdowns that impacted their productivity.
There continue to be almost no weirdness dollars available. Temporary availability of weirdness dollars seems to get eaten by those who are conventionally attractive but put on quirky glasses and muss up their hair to appear weird, like geek protagonists in movies. There’s no escaping the taste of the founder in the long run.
Feels complicated to atomize, for some of the same reasons it’s a candidate. I think the most successful modern example was PayPal, where they had the feedback loop of millions a day being lost to fraud at one point early on.
Oops meant aistudio.google.com
I think there’s a possibility for UI people to make progress on the reputation-tracking problem, by virtue of tight feedback loops relative to people thinking more abstractly about it. The most rapid period of learning in this regard that I know of was the early days at PayPal/eBay, where they were burning millions a day in fraud at certain points.
Secondly: the chat interface for LLMs is just bad for power users. AI Labs is slightly better but still bad.
Edit: meant aistudio
Definitely for preference cascades. For common knowledge, I’d say it’s about the undermining of common knowledge formation (e.g. the meme not to share salaries, strong pressure not to name that the emperor is naked, etc.)
“You can not stop me, I spend thirty thousand men a month.” -Napoleon
Good timing.
Jesus: “I just got done trying to fix this!”
Less jokingly: scapegoating, accountability sinks, liability laundering, declining trust, and kakonomics form an interesting constellation that I feel is underexplored for understanding human behavior when part of large systems.
Anglo armies have been extremely unusual historically speaking for their low rates of atrocity.
(I don’t think this is super relevant for AI, but I think this is where intuitions about the superiority of the West bottom out.)
Training wheels have been replaced with balance bikes for this reason.
I think the major impacts that matter are on war, pandemic risk, and x-risk. I rarely see anyone try to figure those out, perhaps the sign is too uncertain due to complexity.
Type errors:
Map-territory confusion (label ↔ fact)
Is-ought confusion (fact ↔ value)
Means-ends confusion (value ↔ strategy)
Implementation-classification confusion (strategy ↔ label), e.g. “if you classify this as an emergency, that must mean you support taking immediate action”
Semantic-normative confusion (label ↔ value), e.g. “if you classify this as art, you must think it is valuable”
Empirical-procedural confusion (fact ↔ strategy), e.g. “recidivism rates are highest among those without stable employment, therefore job training programs are the most important intervention”
It’s about training the same muscle groups with lower risk of joint injury. E.g. people do deadlifts with 2x+ bodyweight, but RDLs are effective at bodyweight even for strong people.
Lately I’ve been doing one-legged leg press for similar reasons, though it’s less time-effective.
Prior: physical health and social success
Dating studies causing updates away from that prior: none found
It used to be weird to me how much ink was spilled on twisting the prior into knots, but I eventually realized it was people who don’t like it for the obvious reason.
What is a useful prediction that eliminativism makes?
The school I found that seemed most serious (and whose stuff also worked for me) held the position that these things basically don’t work for some people unless or until they have certain spontaneous experiences. No one knows what causes them. Some people report that they had the experiences on psychedelics, but no one knows if that’s really causal or their propensity to take psychedelics was also caused by this upstream thing. I don’t think there’s much point in trying to force it, I don’t think it works.
Found this interesting and useful. Big update for me is that ‘I cut you choose’ is basically the property that most (all?) good self therapy modalities use afaict. In that the part or part-coalition running the therapy procedure can offer but not force things, since its frames are subtly biasing the process.
Thanks for the link. I mean that predictions are outputs of a process that includes a representation, so part of what’s getting passed back and forth in the diagram is better- and worse-fit representations. The degrees-of-freedom point is that we choose very flexible representations, whittle them down with the actual data available, then are surprised when that representation yields other good predictions. But we should expect this if Nature shares any modular structure with our perception at all, which it would if there were both structural reasons (literally the same substrate) and evolutionary pressure for representations with good computational properties, i.e. simple isomorphisms and compressions.
The two concepts that I thought were missing from Eliezer’s technical explanation of technical explanation, and that would have simplified some of the explanation, were compression and degrees of freedom. Degrees of freedom seems very relevant here in terms of how we map between different representations. Why are representations so important for humans? Because they have different computational properties/traversal costs, while humans are very computationally limited.
Are there any papers on current efforts to tokenize video, or estimates of the size of the data available for that?