I like the idea and the rationale. I’ll admit I rarely put much stock in quantifications of the future (let alone quantifications crowdsourced by an audience whose qualifications cannot be ascertained). But I think it would be fascinating to look back at this clock come 2028 and, if AGI has not been agreeably achieved by then, ask ourselves “Well, how far have we come, and how much longer do we suppose it will take?”
What I don’t understand is why you’re convinced that the introduction of AGI will result in you personally becoming transhuman.
“Beginning of the end” is doing some (probably too much) heavy lifting there. I think it’s the inflection point in a process that will end with transhumanism. And my personal projected timeline is quite short after that initial shift.
I’m planning to add an option for the strong AI question; I should probably just make it possible to point to arbitrary Metaculus predictions. That way someone could use a more direct transhumanism one, if they want.
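For what it’s worth, here’s a rough sketch of what “point to an arbitrary Metaculus prediction” might look like. It assumes the public API serves question JSON at /api2/questions/<id>/; the field names and the example question id below are my guesses at the response shape, not a confirmed schema, so they’d need checking against the live API.

    # Sketch: fetch an arbitrary Metaculus question so the clock can display it.
    # Assumed endpoint: https://www.metaculus.com/api2/questions/<id>/
    # The "title" and "community_prediction" keys are assumptions about the JSON.
    import requests

    def fetch_question(question_id: int) -> dict:
        url = f"https://www.metaculus.com/api2/questions/{question_id}/"
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        q = fetch_question(3479)  # illustrative id only; swap in whatever question you want
        print(q.get("title"))
        print(q.get("community_prediction"))

The idea being that the clock just stores a question id, so anyone could swap in a more direct transhumanism question without me hard-coding each one.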