Very interesting work. One question I’ve had about this is whether humans can do such planning ‘natively’, i.e. in our heads, or if we’re using tools in ways that are essentially the same as doing “model-based planning inefficiently, with… bottleneck being a potential need to encode intermediate states.”
Yeah, I’m only unsurprised because I’ve been tracking other visual reasoning tasks and already updated towards verbal intelligence of LLMs being pretty much disconnected from spatial and similar reasoning. (But these classes of visual task seem not obviously harder, and visual data generation is very feasible at scale, so I do expect reasonably rapid future progress now that it is a focus, conditional on sufficient attention from developers.)
I understood, very much secondhand, that current LLMs are still using a separately trained part of the model’s input space for images. I’m very unsure how the model weights are integrating the different types of thinking, but am by default skeptical that it integrates cleanly into other parts of reasoning.
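For concreteness, here is a toy sketch (my own code and toy dimensions, not any particular lab's architecture) of the "adapter" pattern I understand to be common in, e.g., LLaVA-style models: a separately trained vision encoder whose outputs are projected into the language model's token-embedding space.

```python
import torch
import torch.nn as nn

class ToyVisionLanguageModel(nn.Module):
    """Toy sketch: image features are projected into the LLM's token-embedding
    space and simply concatenated with the text embeddings."""

    def __init__(self, vision_dim=256, llm_dim=512, vocab_size=32_000):
        super().__init__()
        # Stand-ins for a pretrained vision encoder and a pretrained LLM backbone;
        # in the real pattern these come from separate training runs.
        self.vision_encoder = nn.Linear(vision_dim, vision_dim)
        self.projector = nn.Linear(vision_dim, llm_dim)  # the separately trained bridge
        self.token_embed = nn.Embedding(vocab_size, llm_dim)
        self.llm_backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, image_patches, text_token_ids):
        # image_patches: (batch, n_patches, vision_dim); text_token_ids: (batch, n_tokens)
        image_feats = self.projector(self.vision_encoder(image_patches))
        text_embeds = self.token_embed(text_token_ids)
        # Image "tokens" occupy their own slice of the input sequence, next to
        # the text tokens, rather than being interwoven with the verbal reasoning.
        return self.llm_backbone(torch.cat([image_feats, text_embeds], dim=1))
```

In this pattern only the small projector ties the modalities together, which is the kind of shallow coupling I have in mind when I say I'm skeptical it integrates cleanly.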
That said, I’m also skeptical that this is fundamentally a hard part of the problem, as simulation and generated data seem like a very tractable route to improving this, if/once model developers see it as a critical bottleneck for tens of billions of dollars in revenue.
Grounded Ghosts in the Machine—Friston Blankets, Mirror Neurons, and the Quest for Cooperative AI
That seems correct, but I don’t think any of those aren’t useful to investigate with AI, despite the relatively higher bar.
...Thus, to explain the Fermi Paradox, we should posit increased odds that the Great Filter is in front of us. (However, my prior for the Great Filter being ahead of humanity is pretty low, we’re too close to AI and the stars—keep in mind that even a paperclipper has not been Filtered, a Great Filter prevents any intelligence from escaping Earth.)
Or that the filter is far behind us—specifically, eukaryotes only evolved once. And in the chain model by Sandberg et al., pre-intelligence filters are the vast majority of the probability mass, so it seems to me that eliminating intelligence as a filter shifts the remaining probability mass for a filter backwards in time in expectation.
That being said, this strategy relies on the approaches that are fruitful for us and the approaches that are fruitful for AI-assisted, AI-accelerated, or AI-performed research being the same. (Again, reasonable, but not certain.)
What is being excluded by this qualification?
I strongly agree, and as I’ve argued before, long timelines to ASI are possible even if we have proto-AGI soon, and aligning AGI doesn’t necessarily help solve ASI risks. It seems like people are being myopic, assuming their modal outcome is effectively certain, and/or not clearly holding multiple hypotheses about trajectories in their minds, so they are undervaluing conditionally high value research directions.
Maybe we could look at 4-star generals, of which there are under 40 total in the US? Not quite as selective, but a more similar process. (Or perhaps around as selective, given the number of US Catholics vs. US citizens.)
You could compare to other strongly meritocratic organizations (US Senate? Fortune 500 C-level employees?) to see whether the church is very different.
The boring sense is enough to say that it increases in intelligence, which was the entire point.
“infer a virtue-ethical utility function from a virtue-ethical policy”
The assumption of virtue ethics isn’t that virtue is unknown and must be discovered—it’s that it’s known and must be pursued. If the virtuous action, as you posit, is to consume ice cream, intelligence would allow an agent to acquire more ice cream, eat more over time by not making themselves sick, etc.
But any such decision algorithm, for a virtue ethicist, is routing through continued re-evaluation of whether the acts are virtuous in the current context, not embracing some farcical LDT version of needing to pursue ice cream at all costs. There is an implicit utility function which values intelligence, but it’s not then inferring back what virtue is, as you seem to claim. Your assumption, which is evidently that the entire thing turns into a compressed and decontextualized utility function (“algorithm”), ignores the entire hypothetical.
OK, so your argument against my claim is that a stupid and biased decision procedure wouldn’t know that intelligence would make it more effective at being virtuous. And sure, that seems true, and I was wrong to assert unconditionally that “for virtue ethics, the derivative of that utility with respect to intelligence is positive.”
I should have instead clarified that I meant that any non-idiotic virtue-ethics decision procedure would have a positive first derivative in intelligence—because, as your claim seems to admit, a less stupid decision procedure would not make that mistake, and would then value intelligence as it bootstrapped its way to greater intelligence.
I understand what an argument is, but I don’t understand why you think that converting policies to utility functions needs to assume no systematic errors, or why, if true, that would make it incompatible with varying intelligence.
I don’t understand your argument here.
Yes, virtue ethics implies a utility function, because anything that outputs decisions implies a utility function. In this case, I’m noting that for virtue ethics, the derivative of that utility with respect to intelligence is positive.
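To make this concrete, here is a minimal formalization of both halves of the claim; the symbols (U, V, I) are my own notation, not anything from the thread.

```latex
% (1) The "boring" sense in which any policy \pi : S \to A implies a utility
%     function: define
\[
  U_\pi(s, a) \;=\; \mathbf{1}[\, a = \pi(s) \,],
  \qquad\text{so that}\qquad
  \pi(s) \in \arg\max_a U_\pi(s, a) \ \text{for every state } s .
\]
% (2) The substantive claim about virtue ethics: writing V(I) for the value the
%     agent attains (by its own lights) when acting with intelligence level I,
\[
  \frac{dV}{dI} \;>\; 0 .
\]
```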
My response was about your original PS, which was about this, not taboos.
I think the arguments you made there, and here, are confused, mixing up unrelated claims. The idea that some tasks will necessarily remain harder for AI than humans in the future is simply hopium.
Saying AI won’t be more efficient is obviously falsified for narrow tasks like adding numbers. And for general tasks like writing short stories, which LLMs already do: the brain draws about 20 W, i.e. roughly 20 Wh per hour, and that much energy corresponds to about 30k tokens from GPT-4o, so the task is done far more energy-efficiently than by a human.
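Back-of-envelope version of that comparison (the brain figure is the standard ~20 W estimate; the per-token energy is just the value implied by the ~30k-tokens figure above, not an independently sourced serving cost for GPT-4o):

```python
# Back-of-envelope check of the energy comparison above.
BRAIN_POWER_WATTS = 20
TOKENS_PER_BRAIN_HOUR = 30_000  # GPT-4o output claimed for the same energy budget

brain_joules_per_hour = BRAIN_POWER_WATTS * 3600            # 20 Wh = 72,000 J
implied_joules_per_token = brain_joules_per_hour / TOKENS_PER_BRAIN_HOUR

print(f"Brain energy per hour: {brain_joules_per_hour / 1000:.0f} kJ (~20 Wh)")
print(f"Implied GPT-4o energy per token: {implied_joules_per_token:.1f} J")
# A human writing prose produces well under 30k tokens (~22k words) per hour,
# so on these numbers the model is the more energy-efficient writer.
```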
And more generally, the argument that AI can’t be more efficient than the brain seems to follow exactly the same structure as the claim that AI can’t be smarter than humans, or the impossibility result here.
You should read the comments to that post.
I think this is confused about how virtue ethics works. Virtue ethics is centered on the virtues of the moral agent, but it certainly does not say not to predict consequences of actions. In fact, one aspect of virtue, in the Aristotelian system, is “practical wisdom,” i.e. intelligence which is critical for navigating choices—because practical wisdom includes an understanding of what consequences will follow actions.
It’s more accurate to say that intelligence is channeled differently — not toward optimizing outcomes, but toward choosing in a way consistent with one’s virtues. And even if virtues are thought of as policies, as in the “loyal friend” example, the policies for being a good friend require interpretation and context-sensitive application. Intelligence is crucial for that.
There are a number of ways that the US seems to have better values than the CCP, by my lights, but it seems incredibly strange to claim that the US values egalitarianism, social equality, or harmony more.
Rule of law, fostering diversity, encouraging human excellence? Sure, there you would have an argument. But egalitarian?