Without consulting my old prediction here, I answered someone asking me:
What is your probability mass for the date with >50% chance of AGI?
with:
I used to use the AGI definition "better and cheaper than humans at all economic tasks", but now I think that even if we humans are dumber, we might still be better at some economic tasks simply because we know human values better. Maybe the definition should be "better and cheaper at any well-defined task". In that case, I'd say maybe 2080, taking into account some probability of economic stagnation and some probability that sub-AGI AIs cause an existential catastrophe (so we never develop AGI).