A contract that relies on a probability to calculate payments is also a serious theoretical headache. If you are a Bayesian, there’s no objective probability to use since probabilities are subjective things that only exist relative to a state of partial ignorance about the world. If you are a frequentist there’s no dataset to use.
There’s another issue.
As the threat of extinction gets higher and also closer in time, it can easily be the case that there’s no possible payment that people ought to rationally accept.
Finally, different people have different risk tolerances, such that some people will gladly take a large risk of death for an upfront payment, while others wouldn't take it even for infinite money.
E.g. right now I would take a 16% chance of death for a $1M payment, but if I had a $50M net worth I wouldn't take a 16% risk of death even if infinite money were on offer.
Since these x-risk companies must compensate everyone at once, even a single rich person in the world could make them uninsurable.
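To make the risk-tolerance point concrete, here is a minimal sketch using an assumed bounded utility function and made-up parameters (none of which come from the comment above): once baseline wealth is high enough, no finite payment satisfies the expected-utility condition for accepting a 16% death risk.

```python
# Illustrative sketch (assumed utility function and parameters, not from the
# original comment): with a bounded utility of wealth, a sufficiently rich
# agent rejects a 16% death risk no matter how large the payment is.
import math

def utility(wealth, scale=1e7):
    """Bounded utility of wealth: approaches 1.0 as wealth grows."""
    return 1.0 - math.exp(-wealth / scale)

def accepts(wealth, payment, p_death=0.16, u_death=0.0):
    """Expected-utility test: accept iff
    (1 - p) * U(wealth + payment) + p * U(death) > U(wealth)."""
    return (1 - p_death) * utility(wealth + payment) + p_death * u_death > utility(wealth)

huge = 1e18  # stand-in for "an arbitrarily large payment"
for wealth in (100e3, 1e6, 50e6):
    print(f"net worth ${wealth:,.0f}: accepts $1M -> {accepts(wealth, 1e6)}, "
          f"accepts ${huge:.0e} -> {accepts(wealth, huge)}")
```

With these assumed numbers the $100k and $1M agents accept, while the $50M agent rejects even the absurdly large payment, mirroring the comment's point.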
Even in a traditional accounting sense, I'm not aware of any term that could capture the probable existential effects of a line of research, but I understand what @So8res is trying to pursue in this post, and I agree with it. I just think apocalypse insurance is not the proper term here.
I think IAS 19 (actuarial gains or losses) and IAS 26 (retirement benefits) are closer to the idea, though these accounting treatments apply to a company's employees. But they could be adapted into another form of accounting treatment (another kind of expense or asset) that captures how much cost is owed on account of possible catastrophic causes. External auditors could then review this periodically. (The proceeds should be pooled for averting AGI existential risk scenarios; the hard part may be who manages the collected funds.)
Come to think of it, AI companies are misrepresenting their financials by not properly addressing a component in their reporting that reflects the “responsibility they have for the future of humanity”, and this post did shed some light on that for me: yes, this value should somehow be captured in their financial statements.
Based on what I know, these AI companies have very peculiar company setups, yet the problem is that the world's population comprises the majority of the stakeholders (in a traditional accounting sense). So I think there is a case that AI companies should be obliged to present how they account for the possibility of losses from catastrophic events, and to have this audited by external auditors, so that the public is somehow aware. For example, a publicly available FS would show these expenses and would have been audited by a Big 4 firm, and the average citizen could then say: “Okay, this is how they are trying to manage the risks of AI research, and it was audited by a Big 4 firm. I expect this estimated liability will be paid to the organisation built for redistributing such funds.”[1]
(AI companies can avoid declaring such a future catastrophic expense if they can guarantee that the AGI they are building won't destroy the world, which I am pretty sure no AI company can claim at the moment.)
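As a rough, purely hypothetical sketch of what estimating such a line item could involve (the scenarios, probabilities, losses, and discount rate below are invented inputs, and this is not an actual IFRS/IAS measurement rule), the expense could be modelled as a probability-weighted, discounted expected loss:

```python
# Hypothetical sketch: a catastrophic-risk liability estimated as a
# probability-weighted, discounted expected loss. All inputs are made-up
# illustrative assumptions, not an actual IFRS/IAS measurement method.

scenarios = [
    # (annual probability, estimated loss in $, years until assumed occurrence)
    (0.001, 5e12, 10),    # hypothetical large-scale catastrophe
    (0.0001, 50e12, 20),  # hypothetical existential-scale loss (hard to bound)
]
discount_rate = 0.05  # assumed discount rate

def expected_provision(scenarios, discount_rate):
    """Sum of probability-weighted losses, discounted to present value."""
    return sum(prob * loss / (1 + discount_rate) ** years
               for prob, loss, years in scenarios)

print(f"Estimated provision: ${expected_provision(scenarios, discount_rate):,.0f}")
```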
I was a certified public accountant before moving into safety research.
Not sure who would manage the collected funds, though; I haven't gone that far in my thinking. Still, it is safe to say that talking to the IFRS board (the IASB) or the GAAP board (the FASB) about this matter is an option, and I expect they would listen to the most respected members of this community regarding the peculiar financial reporting aspects of AI research.
Oops, my bad: there is a pre-existing reporting standard that covers research and development, though not existential risks: IAS 38 Intangible Assets.
An intangible asset is an identifiable non-monetary asset without physical substance. Such an asset is identifiable when it is separable, or when it arises from contractual or other legal rights. Separable assets can be sold, transferred, licensed, etc. Examples of intangible assets include computer software, licences, trademarks, patents, films, copyrights and import quotas.
An update to this standard would be necessary to cover the nature of AI research.
Google DeepMind is applying IAS 38, per page 16 of the 2021 FS I found, so they are already following this standard, and I expect that if the standard is updated with a proper accounting treatment for the estimated liability of an AI company doing AGI research, that liability will be governed by the same standard. Reframing this post to target IAS 38 is, in my opinion, recommended.
“responsibility they have for the future of humanity”
As I read it, the proposal only wants to capture the possibility of killing currently living individuals. If companies also had to account for ‘killing’ potential future lives, it could make an already unworkable proposal even more unworkable.
Yes, the utility of money is currently fairly well bounded. Liability insurance is a proxy for imposing risks on people, and like most proxies it comes apart in extreme cases.
However: would you accept a 16% risk of death within 10 years in exchange for an increased chance of living 1000+ years? Assume that your quality of life for those 1000+ years would be in the upper few percentiles of current healthy life. How much increased chance of achieving that would you need to accept that risk?
That seems closer to a direct trade of the risks and possible rewards involved, though it still misses something. One problem is that it still treats the cost of risk to humanity as being simply the linear sum of the risks acceptable to each individual currently in it, and I don’t think that’s quite right.
If you pick 5000 years as your future lifespan if you win, 60 years if you lose, and you discount each following year by 5 percent, then you should take the bet until your odds are worse than 48.6 percent doom.
Having children younger than you doesn't change the numbers much, unless you are OK with your children also being corpses and you care about hypothetical people not yet alive. (You can argue that this is a choice you cannot morally make for other people, but the mathematically optimal choice depends only on their discount rates.)
Discounting also reduces your valuation of descendants, because a certain percentage of everything you are (genetics and culture) is lost with each year. This is, I believe, the “value drift” argument: over an infinite timespan, mortal human generations will also lose all of the value in the universe that humans living today care about. A thousand years from now it makes no difference whether the future culture, 20 generations later, is human or AI. The AI descendants may even have drifted less, since AI models are inherently immortal from the start.
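For concreteness, here is a minimal sketch of one way the discounted life-years bet above could be formalized. The model is my own assumption rather than anything spelled out in the comment, and under it the break-even doom probability comes out to roughly (1 - r)^60 for discount rate r, so it is very sensitive to r: about 4.6% at a 5% annual discount and about 48.5% at roughly a 1.2% annual discount.

```python
# Minimal sketch of a discounted life-years model for the bet above.
# Assumptions (mine, not necessarily the parent comment's): doom means death
# now, winning means 5000 further years, the no-gamble baseline is 60 years,
# and each later year is worth (1 - r) times the previous one.

def discounted_years(years, r):
    """Present value of `years` of life with annual discount rate r."""
    d = 1 - r
    return (1 - d ** years) / r

def break_even_doom(win_years=5000, baseline_years=60, r=0.05):
    """Largest doom probability p at which the gamble still beats the baseline:
    (1 - p) * V(win_years) >= V(baseline_years)."""
    return 1 - discounted_years(baseline_years, r) / discounted_years(win_years, r)

# The threshold works out to roughly (1 - r)**60, so it is very sensitive to r.
for r in (0.05, 0.012):
    print(f"discount rate {r:.1%}: break-even doom probability ~ {break_even_doom(r=r):.1%}")
```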
If it pays out in advance it isn’t insurance.