>But again… does this really translate to a proportional probability of doom?
If you buy a lottery ticket and get all n numbers right, you get a glorious transhumanist utopia (some people will still be very upset). If you get even a single number wrong, you get a weirdtopia, possibly a dystopia. There is an unknown quantity of numbers to guess, and a single ticket costs a billion now (and here enters the discrepancy). Where do I get so many losing tickets? From Mind Design Space. There is also an alternative view that suggests the space of possibilities is much smaller.
It is not enough to get some alignment; it seems we need to get clear on the difference between utility maximizers (ASI and AGI) and behavior executors (humans, dogs, and monkeys). That is what the "AGI is proactive (and synonyms)" part is based on.
So the probability of doom is proportional to the probability of buying a losing ticket (one that does not get all the numbers right).
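To make the lottery arithmetic explicit (a minimal sketch; n and p are my own illustrative symbols, assuming the "numbers" are independent alignment properties, each guessed right with probability p):

```latex
P(\text{utopia}) = p^{n}, \qquad P(\text{doom}) \approx 1 - p^{n}
```

For any fixed p < 1 this tends to 1 as n grows, which is why the size of Mind Design Space (a huge n) pushes the doom estimate up, and why the alternative view of a much smaller space of possibilities (a small or correlated n) pushes it down.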