Milan Weibel https://weibac.github.io/
Milan W
The incentives facing early-career researchers are to blame for this mindset imo. Having legible output is a very good signal of competence for employers/grantors. I think it probably makes sense for a researcher's first project or two to be more of a cool demo than clear steps towards a solution.
Unfortunately, some mid-career and sometimes even senior researchers keep this habit of forward-chaining from what looks cool instead of backward-chaining from good futures. Ok, the previous sentence was a bit too strong: no reasoning is pure backward-chaining or pure forward-chaining. But I think that a common failure mode is not thinking enough about theories of change.
Wouldn’t surprise me if this got more common over time, such that what is now a deluge will become the new baseline.
OpenPhil said in April 2024 that he left them for the Carnegie Endowment for International Peace. The Carnegie Endowment says he is no longer with them. His LinkedIn profile (which I presume to be authentic, because it was created in 2008) says he has been at Anthropic since January 2025.
EDIT: Additional source: Harvard’s Berkman Klein Center.
I think it’s not a contradiction, because you can be doing only a small number of things at any given instant in time, but have an impressive throughput overall.
I think that being good at avoiding wasted motion while doing things is pretty fundamental to resolving the contradiction.
Mind if I try my hand at this? More concretely: if I do write it, will you review my draft?
Holden was previously Open Philanthropy’s CEO and is now settling into his new role at Anthropic.
I am not privy to any non-public information, but please do not be disappointed if he does not have time to respond to you via LinkedIn to book the interview you are requesting.
I see. Some pretty underhanded trawlers then, to put it mildly.
Some of the stories assume a lot of AIs. Wouldn’t a lot of human-level AIs be very good at creating a better AI?
That is a pretty reasonable assumption. AFAIK that is what the labs plan to do.
Maybe the trawler problem would be mitigated if LessWrong offered a daily XML or plaintext or whatever dump on a different URL and announced it in robots.txt? Something like the sketch below.
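A minimal sketch of what I mean, assuming a hypothetical dump URL (robots.txt has no standard directive for advertising a bulk dump, so the pointer would have to live in a comment or in site documentation, with Crawl-delay discouraging scrapers from hammering individual pages):

```
# Hypothetical sketch; the dump URL below is made up.
# Full-text dump, regenerated daily:
#   https://www.lesswrong.com/export/daily-dump.xml.gz
# Please fetch the dump instead of crawling individual post pages.
User-agent: *
Crawl-delay: 30
```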
Epistemic status: Late night hot take, noting it down so I don’t forget it. Not endorsed. Asked in the spirit of a question post. I am aware that people may respond both “ehm, we are already that” and “no! we don’t give in to threats!”. I don’t know.
Why would we get a consequentialist AI?
Excellent question. Current AIs are not very strong-consequentialist[1], and I expect/hope that we probably won’t get AIs like that either this year (2025) or next year (2026). However, people here are interested in how an extremely competent AI would behave. Most people here model such AIs as instrumentally-rational agents that are usefully described as having a closed-form utility function. Here is a seminal formalization of this model by Legg and Hutter: link.
Are these models of future super-competent AIs wrong? Somewhat. All models are wrong. I personally trust them less than the average person who has spent a lot of time here does. I still find them a useful tool for thinking about limits and worst-case scenarios: the sort of AI system actually capable of single-handedly taking over the world, for instance. However, I think it is also very useful to think about how AIs (and the people making them) are likely to act before these ultra-competent AIs show up, or in case they never do.
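To make “instrumentally-rational agent with a closed-form utility function” a bit more concrete, here is a deliberately stripped-down one-step version of that model (my own simplification for illustration, not Legg and Hutter’s actual formulation, which takes expected cumulative reward over environments weighted by a universal prior):

$$\pi^*(h) \;=\; \operatorname*{arg\,max}_{a \in \mathcal{A}} \; \mathbb{E}_{o \sim P(\cdot \mid h, a)}\big[\,U(h, a, o)\,\big]$$

where $h$ is the interaction history so far, $\mathcal{A}$ the set of available actions, $P$ the agent’s predictive model of what happens next, and $U$ its utility function. The strong-consequentialist worry (see footnote) is roughly about systems that approximate this maximization very well over very long horizons.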
- ^
A term I just made up and choose to define like this: an AI that reasons like a naive utilitarian, independently of its goals.
- ^
all at once in 2029.
I am almost totally positive that the plan is not that.
If planning for 2029 is cheap, then it probably makes sense under a very broad class of timeline expectations.
If it is expensive, then the following applies to the hypothetical presented by the tweet: the timeline evoked in the tweet seems extremely fast and multipolar. I’d expect planning for 2029 compute scaling to make sense only if the current paradigm gets stuck at roughly AGI capability level (i.e. a very good scaffold around a model similar to, but a bit smarter than, o3). This is because if it scales further than that, it will do so fast (requiring little compute, as the tweet suggests). If capabilities arbitrarily better than o4-with-good-scaffolding are compute-cheap to develop, then things almost certainly get very unpredictable before 2029.
I think this process (or rather, its results) is the main reason why (some) open-source desktop software provides a better user experience despite being developed with fewer resources.
I’d love to hear more about the implications of the CURRENT level of observation.
I have a feeling that the current bottleneck is data integration rather than data collection.
That is good news. Thanks.
I really hope Sama wises up once he has a kid.
Context: He is married to a cis man. Not sure if he has spoken about considering adoption or surrogacy.
Agreed. However, in the fast world the game is extremely likely to end before you get to use 2029 compute.
EDIT: I’d be very interested to hear an argument against this proposition, though.
Context: @DrJimFan works at Nvidia.
IF we got, and will keep on getting, strong scaling-law improvements, then:
- OpenAI’s plan to continue to acquire way more training compute even into 2029 is either lies or a mistake
- we’ll get very interesting times quite soon
- offense-defense balances and multi-agent-system dynamics seem like good research directions, if you can research fast and have reason to believe your research will be implemented in a useful way
EDIT: I no longer fully endorse the crossed-out bullet point. Details in replies to this comment.
I think this post is very clearly written. Thanks.
It may just use l33tc0d3 or palabres nonexistentes interpolatas d’idioms prossimos (“nonexistent words interpolated from neighboring languages”).