In the startup world, conventional wisdom is that, if your company is default-dead (i.e. on the current growth trajectory, you will run out of money before you break even), you should pursue high-variance strategies. In one extreme example, “in the early days of FedEx, [founder of FedEx] Smith had to go to great lengths to keep the company afloat. In one instance, after a crucial business loan was denied, he took the company’s last $5,000 to Las Vegas and won $27,000 gambling on blackjack to cover the company’s $24,000 fuel bill. It kept FedEx alive for one more week.”
By contrast, if your company is default-alive (profitable or on-track to become profitable long before you run out of money in the bank), you should avoid making high-variance bets for a substantial fraction of the value of the company, even if those high-variance bets are +EV.
Obvious follow-up question: in the absence of transformative AI, is humanity default-alive or default-dead?
I suspect humanity is default-alive, but individual humans (the ones who actually make decisions) are default-dead[1].

[1] Or, depending on your views on cryonics, they en masse mistakenly believe they are default-dead.

Yes. And that means most people will support taking large risks on achieving aligned AGI and immortality, since most people aren’t utilitarian or longtermist.

Almost certainly alive for several more decades, if we are talking literal extinction rather than civilization-wrecking catastrophe. Therefore it makes sense to work towards global coordination to pause AI for at least this long.
if your company is default-dead, you should pursue high-variance strategies
There are rumors that OpenAI (which has no moat) is spending much more than it’s making this year despite good revenue; another datapoint suggesting there are $1 billion training runs currently in progress.
I’m curious what sort of policies you’re thinking of which would allow for a pause which plausibly buys us decades, rather than high-months-to-low-years. My imagination is filling in “totalitarian surveillance state which is effective at banning general-purpose computing worldwide, and which prioritizes the maintenance of its own control over all other concerns”. But I’m guessing that’s not what you have in mind.
No more totalitarian than control over the manufacturing of nuclear weapons. The issue is that there is currently no buy-in on a similar level, and any effective policy is too costly to accept for people who don’t expect existential risk. This might change once there are AIs capable of long-horizon tasks that can do many jobs, if they are reined in before there is runaway AGI that can do research on its own. And establishing control over compute is more feasible if it turns out that even a tiny further step in the direction of AGI takes on the order of 1e27 FLOPs.
Generally available computing hardware doesn’t need to keep getting better over time; for many years now, PCs have been beyond what is sufficient for most mundane purposes. What remains is keeping an eye on GPUs for the remaining highly restricted AI research and for specialized applications like medical research. To prevent hidden stockpiling, all GPUs could be required to receive regular unlocking OTPs, issued with asymmetric cryptography using multiple secret keys kept separately, so that all of the keys would need to be stolen simultaneously to keep the GPUs working (if the GPUs go missing, or a country hosting a datacenter goes rogue, official unlocking OTPs would simply stop being issued). Hidden manufacturing of GPUs seems much less feasible than hidden or systematically subverted datacenters.
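A minimal sketch of how such an unlocking scheme could work, assuming the OTPs are signed tokens checked by GPU firmware (a hypothetical design, not an existing system; the Ed25519 primitives come from Python’s `cryptography` library, and all names and the token format are illustrative):

```python
# Hypothetical sketch: a GPU stays unlocked only while it receives fresh
# tokens signed by ALL of several independently held private keys. The GPU
# itself stores only the public halves, so seizing hardware gains nothing.
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

NUM_KEYHOLDERS = 3  # e.g. keys held by separate institutions/jurisdictions

# Each keyholder generates its own signing key; only the public keys are
# burned into the GPU at manufacture time.
private_keys = [ed25519.Ed25519PrivateKey.generate() for _ in range(NUM_KEYHOLDERS)]
gpu_trusted_pubkeys = [k.public_key() for k in private_keys]

def issue_unlock_token(gpu_serial: str, epoch: str) -> list[bytes]:
    """Every keyholder independently signs (serial, epoch); all must cooperate."""
    message = f"{gpu_serial}|{epoch}".encode()
    return [k.sign(message) for k in private_keys]

def gpu_accepts(gpu_serial: str, epoch: str, signatures: list[bytes]) -> bool:
    """Firmware-side check: a valid signature from every trusted key, or no unlock."""
    message = f"{gpu_serial}|{epoch}".encode()
    for pubkey, sig in zip(gpu_trusted_pubkeys, signatures, strict=True):
        try:
            pubkey.verify(sig, message)
        except InvalidSignature:
            return False  # any missing or forged signature keeps the GPU locked
    return True

# A daily unlock window: if tokens stop arriving, the GPU stops working.
epoch = datetime.now(timezone.utc).strftime("%Y-%m-%d")
assert gpu_accepts("GPU-0001", epoch, issue_unlock_token("GPU-0001", epoch))
```

The point of the multi-key arrangement is that all private keys would have to be stolen simultaneously to keep stolen GPUs running, while any single keyholder declining to sign the next epoch is enough to disable hardware that has gone missing.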
a totalitarian surveillance state which is effective at banning general-purpose computing worldwide, and which prioritizes the maintenance of its own control over all other concerns
I much prefer that to everyone’s being killed by AI. Don’t you?
Great example. One factor that’s relevant to AI strategy is that you need good coordination to increase variance. If multiple people at the company make independent gambles without accounting for everyone else’s gambles, the results average out and the overall variance is reduced.
E.g. if coordination between labs is terrible, they might each separately try superhuman-AI boxing plus some alignment hacks, with techniques varying between groups.
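A toy simulation of the averaging point above, with purely illustrative numbers (the payoff and the number of bettors are assumptions, not from the discussion):

```python
# Toy model: the same total stake placed either on one coordinated all-in
# gamble or split across many independent gambles with identical odds.
# Expected value is identical; variance shrinks by roughly the number of bets.
import random
import statistics

random.seed(0)

def gamble() -> float:
    """A +EV coin flip: stake 1.0 returns 2.2 on heads, 0.0 on tails (mean 1.1)."""
    return 2.2 if random.random() < 0.5 else 0.0

def coordinated(trials: int) -> list[float]:
    # the whole stake rides on a single shared flip
    return [gamble() for _ in range(trials)]

def uncoordinated(trials: int, n_bettors: int = 100) -> list[float]:
    # n bettors each flip independently with 1/n of the stake
    return [sum(gamble() for _ in range(n_bettors)) / n_bettors
            for _ in range(trials)]

trials = 10_000
print(statistics.variance(coordinated(trials)))    # ~1.21
print(statistics.variance(uncoordinated(trials)))  # ~0.0121, about 100x smaller
```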
It seems like lack of coordination for AGI strategy increases the variance? That is, without coordination somebody will quickly launch an attempt at value-aligned AGI; if they get it, we win. If they don’t, we probably lose. With coordination, we might all be able to go slower to lower the risk, and therefore the variance, of the outcome.
I guess it depends on some details, but I don’t understand your last sentence. I’m talking about coordinating on one gamble.
Analogously to the OP, I’m thinking of AI companies making a bad bet (say, a 90% chance of loss of control and a 10% chance of gaining the tools to do a pivotal act in the next year). Losing the bet ends the betting, and winning allows everyone to keep playing. Then if many of them make similar independent gambles simultaneously, it becomes almost certain that one of them loses control.
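Spelling out the arithmetic under those assumed odds (90% chance of loss of control per attempt, independent across labs):

```python
# Each lab's gamble independently carries a 90% chance of loss of control.
# With n labs gambling at once, the chance that at least one of them loses
# control approaches certainty very quickly.
p_lose = 0.9  # per-attempt odds from the comment above

for n_labs in (1, 2, 3, 5, 10):
    p_any_loss = 1 - (1 - p_lose) ** n_labs  # 1 - P(every lab wins its bet)
    print(f"{n_labs:2d} labs -> P(at least one loss of control) = {p_any_loss:.10f}")
# 1 lab: 0.9; 3 labs: 0.999; 5 labs: 0.99999; 10 labs: 1 - 1e-10
```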
In the absence of transformative AI, humanity survives many millennia with p = .9 IMO, and if humanity does not survive that long, the primary cause is unlikely to be climate change or nuclear war although either might turn out to be a contributor.
(I’m a little leery of your “default-alive” choice of words.)