the potentially enormous speed difference (https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps) will almost certainly be an effective communications barrier between humans and AI. there’s a wonderful scene of AI-vs-human negotiation in william hertling’s “A.I. apocalypse” that highlights this.
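(a toy back-of-envelope sketch of the intuition, in python; the 10^6x serial speedup is an assumed illustrative number, not a figure from the linked post:)

```python
# toy illustration of the human/AI speed gap; the speedup factor is assumed, not sourced.
HUMAN_SECONDS_PER_DAY = 60 * 60 * 24

def subjective_ai_seconds(human_seconds: float, speedup: float) -> float:
    """Subjective time (in seconds) the AI experiences while `human_seconds` pass for the human."""
    return human_seconds * speedup

# with a hypothetical 1e6x speedup, a 10-second human reply lasts ~116 subjective AI days:
print(subjective_ai_seconds(10, 1e6) / HUMAN_SECONDS_PER_DAY)  # ≈ 115.7
```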
jaan
i agree that there’s a 3rd alternative future that the post does not consider (unless i missed it!):
3. markets remain in an inadequate equilibrium until the end of times, because those participants (like myself!) who consider short timelines plausible remain in too small a minority to “call the bluff”.
see “the big short” for a dramatic depiction of such a situation.
great post otherwise. upvoted.
yeah, this seems to be the crux: what CEV will prescribe spending the altruistic (reciprocal cooperation) budget on. my intuition continues to insist that purchasing the original star systems from UFAIs is pretty high on the shopping list, but i can see arguments (including a few you gave above) against that.
oh, btw, one sad failure mode would be getting clipped by a proto-UFAI that’s too stupid to realise it’s in a multi-agent environment or something.
ETA: and, tbc, just as interstice points out below, my “us/me” label casts a wider net than “us in this particular everett branch where things look particularly bleak”.
roger. i think (and my model of you agrees) that this discussion bottoms out in speculating what CEV (or equivalent) would prescribe.
my own intuition (as somewhat supported by the moral progress/moral circle expansion in our culture) is that it will have a nonzero component of “try to help out fellow humans/biologicals/evolved minds/conscious minds/agents with diminishing utility functions if it’s not too expensive, and especially if they would do the same in your position”.
yeah, as far as i can currently tell (and influence), we’re totally going to use a sizeable fraction of FAI-worlds to help out the less fortunate ones. or perhaps implement a more general strategy, like a mutual insurance pact of evolved minds (MIPEM).
this, indeed, assumes that human CEV has diminishing returns to resources, but (unlike nate in the sibling comment!) i’d be shocked if that wasn’t true.
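(a minimal numeric sketch of why diminishing returns make such trades cheap; logarithmic utility and the star-system count are assumptions for illustration only, not claims about what CEV actually looks like:)

```python
import math

TOTAL_SYSTEMS = 1e21  # assumed number of reachable star systems (illustrative only)

def log_utility(fraction_kept: float) -> float:
    """Toy utility with sharply diminishing returns: log of the systems kept."""
    return math.log(fraction_kept * TOTAL_SYSTEMS)

full_u = log_utility(1.0)
half_u = log_utility(0.5)   # trade away half the systems to the less fortunate branches
print(1 - half_u / full_u)  # relative utility loss ≈ 0.014, i.e. ~1.4% under log returns
```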
sure, this is always a consideration. i’d even claim that the “wait… what about the negative side effects?” question is a potential expected-value spoiler for pretty much all longtermist interventions (because they often aim for effects that are multiple causal steps down the road), and as such is not really specific to software.
great idea! since my metamed days i’ve been wishing there were a prediction market for personal medical outcomes. it feels like the manifold mechanism might be a good fit for this (eg, at the extreme end, consider the “will this be my last market if i undertake the surgery X at Y?” question). should you decide to develop this aspect at some point, i’d be very interested in supporting/subsidising it.
actually, the premise of david brin’s “existence” is a close match to moravec’s paragraph (not a coincidence, i bet, given that david hung around similar circles).
confirmed. as far as i can tell (i’ve talked to him for about 2h in total), yi really seems to care, and i’m really impressed by his ability to influence such official documents.
indeed, i even gave a talk almost a decade ago about the evolution:humans :: humans:AGI symmetry (see below)!
what confuses me though is that the “is a general reasoner” and “can support cultural evolution” properties seemed to emerge pretty much simultaneously in humans, a coincidence that requires its own explanation (or dissolution). furthermore, eliezer seems to think that the former property is much more important / discontinuity-causing than the latter. and, indeed, the outsized progress made by individual human reasoners (scientists/inventors/etc.) seems to support such a view.
amazing post! scaling up the community of independent alignment researchers sounds like one of the most robust ways to convert money into relevant insights.
indeed they are now. retrocausality in action? :)
well, i’ve always considered human life extension less important than “civilisation’s life extension” (ie, xrisk reduction). still, they’re both very important causes, and i’m happy to support both, especially given that they don’t compete much for talent. as for the LRI specifically, i believe they simply haven’t applied to more recent SFF grant rounds.
looks great, thanks for doing this!
one question i get every once in a while and wish i had a canonical answer to (it can probably be worded more pithily) is:
“humans have always thought their minds are equivalent to whatever their latest technological achievement happens to be (eg, steam engines). computers are just the latest fad that we currently compare our minds to, so it’s silly to think they somehow pose a threat. move on, nothing to see here.”
note that the canonical answer has to work for people whose ontology does not include the concepts of “computation” or “simulation”. they have seen increasingly universal smartphones and increasingly realistic computer games (things i’ve been gesturing at in my poor attempts to answer) but have no idea how they work.