I’m interested in the economics of computing and big-picture trends in machine learning. https://www.tamaybesiroglu.com/
Tamay
Are you thinking of requiring each party to accept bets on either side?
Being forced to bet both sides could ensure honesty, assuming they haven’t found other bets on the same or highly correlated outcomes they can use for arbitrage.
Yes. Good point.
And including from other parties, or only with each other?
I was thinking that betting would be restricted to the initial two parties (i.e. A and B), but I can imagine an alternative in which it’s unrestricted.
You could imagine one party betting at odds they consider very favourable, and the other at odds they consider only slightly favourable, given their respective beliefs. Then, even if neither changes their credence, one party has more room to move their reported odds towards their true credence, dragging the average towards it and capturing the intermediate payments.
Sorry, I’m confused. Isn’t the ‘problem’ that the bettor who takes relatively more favourable odds has higher expected returns a problem with betting in general?
We also propose betting using a mechanism that mitigates some of these issues:
Since we recognize that betting incentives can be weak over long time-horizons, we are also offering the option of employing Tamay’s recently described betting procedure in which we would enter a series of repeated 2-year contracts until the resolution date.
A concrete bet offer to those with short AGI timelines
Here’s a rough description of an idea for a betting procedure that enables people who disagree about long-term questions to make bets, despite not wanting to commit to waiting until the long-term questions are resolved.
Suppose person A and person B disagree about whether P, but can’t find any clear concrete disagreements related to this question that can be decided soon. Since they want to bet on things that pay out soon (for concreteness say they only want to bet on things that can pay out within 5 years), they don’t end up betting on anything.
What they can do is agree to bet on P, and enter into a contract (or a good-faith agreement) that requires them, after a period of 5 years, to report their true odds about P. The contract would then enable either bettor to back out of the bet, at which point the payouts would be distributed according to the difference between the odds they originally agreed and the average of the odds they currently report. In other words, the bettor who was closer to the consensus after 5 years is paid out in proportion to how much closer they were.
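As an illustration, the interim settlement could be computed as follows. This is a minimal sketch under assumptions the proposal leaves open: A backs P, B backs not-P, and the transfer is linear in the gap between the agreed odds and the average reported odds.

```python
def settle(stake, agreed_p, report_a, report_b):
    """Interim settlement when the bet is unwound before P resolves.

    `agreed_p` is the probability at which A and B originally bet;
    `report_a` and `report_b` are the credences each reports after
    5 years. The average report serves as the interim consensus, and
    the transfer is proportional to how far the consensus has moved
    from the agreed odds (positive = B pays A, who backed P).
    """
    consensus = (report_a + report_b) / 2
    return stake * (consensus - agreed_p)
```

For example, with a stake of 1000 agreed at even odds (0.5), reports of 0.8 and 0.6 give a consensus of 0.7, so B pays A 200: the bettor whose original position sits closer to the consensus gains in proportion to how much closer it is.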
To ensure that bettors report approximately truthful odds about P after the 5-year horizon, the contract requires A and B to report their odds to a trusted intermediary (who announces both reports simultaneously), and requires each party to accept any follow-up bets at (some function of) their reported credences.
Bettors might agree ahead of time on the range of acceptable follow-up bet sizes; importantly, though, follow-up bets must be expected to be relatively large (say, a non-trivial fraction of the existing bet) to ensure that bettors have an incentive to report something close to their true beliefs.
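The truth-telling incentive here can be made concrete: if a bettor must accept a follow-up bet on either side at their reported credence, a counterparty will pick whichever side is unfavourable to them, so any misreport costs them in expectation. A sketch under the (assumed) convention of unit-payoff contracts priced at the reported probability:

```python
def forced_bet_ev(true_p, reported_p, size):
    """Expected value, by the reporter's own belief `true_p`, of being
    forced to take a follow-up bet at the reported credence, with the
    counterparty free to choose the side.

    Buying `size` contracts that pay 1 if P, at price `reported_p`, is
    worth size * (true_p - reported_p) in expectation; selling is the
    negation. An adversarial counterparty leaves the reporter with the
    worse of the two, i.e. -size * |true_p - reported_p|, which is
    maximised (at zero) only by truthful reporting.
    """
    ev_buy = size * (true_p - reported_p)
    ev_sell = size * (reported_p - true_p)
    return min(ev_buy, ev_sell)
```

Reporting truthfully yields an expected value of zero, while any exaggeration in either direction yields a strictly negative expectation, growing with the size of the forced bet.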
Follow-up bets could be revisited in the same way after another 5 years, and this would continue until P resolves, or until the bettors settle. However, because bettors are required to take follow-up bets, they also have an incentive to develop accurate beliefs about P, so we might expect disagreements to usually be resolved before P itself resolves. They furthermore have an incentive to arrive at a consensus if they want to avoid making follow-up bets.
Under this mechanism, bettors know that they can expect to fairly resolve their bets on a short horizon, since each has an incentive to end the bet according to their consensus view of who was closer to the truth. Hence, bettors would be keen to bet with each other about P if they think they’re directionally right, even when they don’t want to wait until P is completely decided.
Tamay’s Shortform
Thanks!
Could you make another graph like Fig 4 but showing projected cost, using Moore’s law to estimate cost? The cost is going to be a lot, right?
Good idea. I might do this when I get the time—will let you know!
Projecting compute trends in Machine Learning
Four months later, the US is seeing a steady 7-day average of 50k to 60k new cases per day. This is a factor of 4 or 5 less than the number of daily new cases observed over the December–January third-wave period. It therefore seems that one (the?) core prediction of this post, namely that we’d see a fourth wave sometime between March and May that would be as bad as or worse than the third wave, turned out to be badly wrong.
Zvi’s post is long, so let me quote the sections where he makes this prediction:
Instead of that being the final peak and things only improving after that, we now face a potential fourth wave, likely cresting between March and May, that could be sufficiently powerful to substantially overshoot herd immunity.
and,
If the 65% number is accurate, however, we are talking about the strain doubling each week. A dramatic fourth wave is on its way. Right now it is the final week of December. We have to assume the strain is already here. Each infection now is about a million by mid-May, six million by end of May, full herd immunity overshoot and game over by mid-July, minus whatever progress we make in reducing spread between now and then, including through acquired immunity.
It seems troubling that one of the most upvoted COVID-19 posts on LessWrong is one that argued for a prediction that I think we should score really poorly. This might be an important counterpoint to the narrative that rationalists “basically got everything about COVID-19 right”*.
*from: https://putanumonit.com/2020/10/08/path-to-reason/
I think GPT-3 is the trigger for 100x larger projects at Google, Facebook and the like, with timelines measured in months.
My impression is that this prediction has turned out to be mistaken (though it’s somewhat hard to judge, because “measured in months” is pretty ambiguous). There have been models with many times the number of parameters (notably one by Google*), but it’s clear that, 9 months after this post, there haven’t been publicised efforts that use anywhere close to 100x the amount of compute of GPT-3. I’m curious whether and how the author (or others who agreed with the post) have changed their minds about the overhang and related hypotheses, in light of some of this evidence failing to pan out the way the author predicted.
Launching the Forecasting AI Progress Tournament
Great work! It seems like this could enable lots of useful applications. One thing I’m particularly excited about is how this could make forecasting more decision-relevant. For example, one application that comes to mind is a conditional prediction market where conditions are continuous rather than discrete (e.g. “what is GDP next year if the interest rate is set to r?”, “what is Sierra Leone’s GDP in ten years if bednet spending is x?”).
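One way such a continuous-condition market could be approximated, purely as a sketch and not something the linked work specifies, is to elicit forecasts at a handful of discrete condition values and interpolate between them:

```python
import bisect

def conditional_forecast(samples, condition):
    """Approximate a forecast at an arbitrary condition value by linear
    interpolation between forecasts elicited at discrete condition
    values. `samples` is a list of (condition, forecast) pairs sorted
    by condition, e.g. (interest rate, expected GDP growth)."""
    xs = [c for c, _ in samples]
    ys = [f for _, f in samples]
    # Clamp outside the elicited range rather than extrapolate.
    if condition <= xs[0]:
        return ys[0]
    if condition >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_left(xs, condition)
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (condition - x0) / (x1 - x0)
```

For instance, with forecasts elicited at interest rates of 0%, 5%, and 10%, the market’s implied forecast at 2.5% is read off the line between the first two points.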
If research into general-purpose systems stops producing impressive progress, and the application of ML in specialised domains becomes more profitable, we’d soon see much more investment in AI labs that are explicitly application-focused rather than basic-research focused.
It is, unless it’s clear that one side made a mistake in entering a lopsided bet. I guess the rule of thumb is to follow big bets (which tend to be less clearly lopsided) or bets made by two people whose judgment you trust.