Gonna be in Berkeley on the 14th and Princeton on the 16th :’)
Discussions about possible economic futures should account for the (imo high) possibility that everyone will have inexpensive access to sufficient intelligence to accomplish basically any task they would need intelligence for. There are some exceptions like quant trading, where you have a use case for arbitrarily high intelligence, but for most businesses the marginal gains from SOTA intelligence won't be so high. I'd imagine that raw human intelligence just becomes less valuable (as it has been for most of human history; I guess this is worse because many businesses would also not need employees for physical tasks, but the point is that many such non-tech businesses might be fine).

Separately: Is AI safety at all feasible to tackle in the likely scenario that many people will be able to build extremely powerful but non-SOTA AI without safety mechanisms in place? Will the hope be that a strong enough gap exists between aligned AI and everyone else's non-aligned AI?
I would be very surprised if this FVU_B is actually another definition and not a bug. It's not a fraction of the variance, and those denominators can easily be zero or very near zero.
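For concreteness, here's my guess at the two definitions (a hypothetical reconstruction on my part, not the original code; the function names and the per-sample axis are my assumptions). The second is a mean of per-sample ratios rather than a ratio of sums, which is where the near-zero denominators come from:

```python
import numpy as np

def fvu_a(x, x_hat):
    # Fraction of variance unexplained: total squared error over total variance.
    return np.sum((x - x_hat) ** 2) / np.sum((x - x.mean(axis=0)) ** 2)

def fvu_b(x, x_hat):
    # Mean of per-sample ratios: each denominator is a single sample's squared
    # distance from the mean, which can be zero or arbitrarily close to it.
    num = np.sum((x - x_hat) ** 2, axis=1)
    den = np.sum((x - x.mean(axis=0)) ** 2, axis=1)
    return np.mean(num / den)
```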
Not worth worrying about given the context of imminent ASI.
This is something that confuses me as well: why do a lot of people in these circles seem to care about the fertility crisis while also believing that ASI is coming very soon?
In both optimistic and pessimistic scenarios about what a post-ASI world looks like, I’m struggling to see a future where the fact that people in the 2020s had relatively few babies matters.
If this actually hasn't been explored, this is a really cool idea! So you want to learn a function (Player 1, Player 2, position) → (probability Player 1 wins, probability of a draw)? Sounds like there are a lot of naive architectures to try, and you have a ton of data since professional chess players play a lot of games.
Some random ideas:
Before doing any sort of positional analysis: what does the (ELO_1, ELO_2, engine eval) → probability of win/draw function look like? What happens when choosing an engine near those ELO ratings vs. the strongest engines?
Observing how rapidly the eval changes when a position is given to a weak engine might give a somewhat automatable metric for the "sharpness" of a chess position (so you don't have to label everything yourself); a rough sketch follows below.
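Here's one way to operationalize that second idea, using python-chess with a shallow search depth standing in for a "weak engine" (the engine path, the depth, and the spread-across-replies metric are all my assumptions, not anything from the original idea):

```python
import statistics
import chess
import chess.engine

# Probe "sharpness" as the spread of shallow evals across all legal moves:
# in a sharp position most replies lose badly, so a weak (shallow) engine
# sees wildly different scores depending on the move tried.
def sharpness(fen, engine_path="stockfish", shallow_depth=4):
    board = chess.Board(fen)
    mover = board.turn
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        scores = []
        for move in board.legal_moves:
            board.push(move)
            info = engine.analyse(board, chess.engine.Limit(depth=shallow_depth))
            scores.append(info["score"].pov(mover).score(mate_score=10000))
            board.pop()
    # Large spread = many ways to go badly wrong = "sharp".
    return statistics.pstdev(scores)
```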
This whole thing about “I would give my life for two brothers or eight cousins” is just nonsense formed by taking a single concept way too far. Blood relation matters but it isn’t everything. People care about their adopted children and close unrelated friends.
The user could always write a comment (or a separate post) asking why they got a bunch of downvotes, and someone would probably respond. I’ve seen this done before.
Otherwise I’d have to assume that the user is open-minded enough to actually want feedback and not be hostile. They might not even value feedback from this community; there are certainly many communities where I would think very little about negative feedback.
Update: R1 found bullet point 3 after prompting it to try 16x16. It's 2 minus the adjacency matrix of the tesseract graph.
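For anyone who wants to check this, here's a quick way to construct that matrix (just the standard Q4 construction; nothing here beyond what the comment states):

```python
import numpy as np

# Vertices of the tesseract graph Q4 are the 16 four-bit strings 0..15,
# adjacent iff they differ in exactly one bit (XOR has popcount 1).
A = np.array([[1 if bin(i ^ j).count("1") == 1 else 0
               for j in range(16)]
              for i in range(16)])
M = 2 - A  # "2 minus the adjacency matrix"
print(M)
```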
I would bet on this sort of strategy working; I hard agree that the ends don't justify the means, and I see that kind of justification for misinformation/propaganda a lot amongst highly political people. (But the above examples are pretty tame.)
I volunteer myself as a test subject; dm if interested
So I’m new here and this website is great because it doesn’t have bite-sized oversimplifying propaganda. But isn’t that common everywhere else? Those posts seem very typical for reddit and at least they’re not outright misinformation.
Also I… don’t hate these memes. They strike me as decent quality. Memes aren’t supposed to make you think deeply about things.
Edit: searched Kat Woods here and now feel worse about those posts
There have been a lot of tricks I’ve used over the years, some of which I’m still using now, but many of which require some level of discipline. One requires basically none, has a huge upside (to me), and has been trivial for me to maintain for years: a “newsfeed eradicator” extension. I’ve never had the temptation to turn it off unless it really messes with the functionality of a website.
It basically turns off the “front page” of whatever website you apply it to (e.g. reddit/twitter/youtube/facebook) so that you don’t see anything when you enter the site and have to actually search for whatever you’re interested in. And for youtube, you never see suggestions to the right of or at the end of a video.
I think even the scaling thing doesn’t apply here because they’re not insuring bigger trips: they’re insuring more trips (which makes things strictly better). I’m having some trouble understanding Dennis’ point.
“I don’t know, I recall something called the Kelly criterion which says you shouldn’t scale your willingness to make risky bets proportionally with available capital—that is, you shouldn’t be just as eager to bet your capital away when you have a lot as when you have very little, or you’ll go into the red much faster.
I think I’m misunderstanding something here. Let’s say you have dollars and are looking for the optimum number of dollars to bet on something that causes you to gain dollars with probability and lose dollars with probability . The optimum number of dollars you should bet via the Kelly criterion seems to be
(assuming positive expectation; i.e. the numerator is positive), which does scale linearly with . And this seems fundamental to this post.
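A quick numeric sanity check of the linear scaling (my own script; the values of $p$ and $b$ and the log-wealth objective are chosen for illustration):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Expected log wealth for betting x out of N dollars on a bet that pays
# b-to-1 with win probability p (negated for the minimizer).
def neg_log_wealth(x, N, p, b):
    return -(p * np.log(N + b * x) + (1 - p) * np.log(N - x))

def kelly_bet(N, p, b):
    res = minimize_scalar(neg_log_wealth, bounds=(0.0, 0.999 * N),
                          args=(N, p, b), method="bounded")
    return res.x

p, b = 0.6, 1.0  # arbitrary example with positive expectation
for N in [100, 1_000, 10_000]:
    x = kelly_bet(N, p, b)
    print(f"N={N}: bet {x:.2f} ({x / N:.3f} of capital)")
# The fraction x/N stays at (p*b - (1 - p)) / b = 0.2 for every N.
```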
(Epistemic status: low and interested in disagreements)
My economic expectations for the next ten years are something like:
- Examples of powerful AI misanswering basic questions continue for a while. For this and other reasons, trust in humans over AI persists in many domains for a long time after ASI is achieved.
- Jobs become scarcer gradually. Humans remain at the helm for a while, but the willingness to replace one's workers with AI slowly creeps its way up the chain. There is a general belief that Human + AI > AI + extra compute in many roles, and it is difficult to falsify this. Regulations take a long time to cut, causing some jobs to persist long past their usefulness. Humans continue to get very offended if they find out they are talking to an AI in business matters.
- Money remains a thing for the next decade, and enough people have jobs to avoid a completely alien economy. There is time to slowly transition to UBI and distribution of prosperity, but there is no guarantee this occurs.
Ah, darn. Are there any other events/meetups you know of at Lighthaven during those weeks?
Is this going to continue in 2025? I’ll be visiting Berkeley from Jan 5th to Jan 17th and would like to come visit.
Here’s a little quick take of mine that provides a setting where centaur > AI (maybe). It’s theory of computation which is close to complexity theory
That’s incredible.
But how do they profit? They say they don't profit on Middle Eastern war markets, so they must be profiting elsewhere somehow.
Focusmate has been an absolute game-changer for effectively using my time after work over the last two weeks. Thank you for posting this.