Agreed that the proposal is underspecified; my point here is not “look at this great proposal” but rather “from a theoretical angle, risking others’ stuff without the ability to pay to cover those risks is an indirect form of probabilistic theft (which market-supporting coordination mechanisms must address)”, plus “in cases where the people all die when the risk is realized, the ‘premiums’ need to be paid out to individuals in advance (rather than paid out to actuaries who pay out a large sum in the event of risk realization)”. Together these yield the downstream inference that society is doing something very wrong if it just lets AI rip at current levels of knowledge, even from a very laissez-faire perspective.
(The “caveats” section was attempting—and apparently failing—to make it clear that I wasn’t putting forward any particular policy proposal I thought was good, above and beyond making the above points.)