I liked the comments on this post more than I liked the post itself. As Paul commented, there’s as much criticism of short AGI timelines as there is of long AGI timelines; and as Scott pointed out, this was an uncharitable take on AI proponents’ motives.
Without the context of those comments, I don’t recommend this post for inclusion.
My guess is we agree that talk of being able to build AGI soon has led to substantially increased funding in the AGI space (e.g. it was involved in the acquisition of DeepMind and the $1 billion from Microsoft to OpenAI)? Naturally it’s not the sole reason for funding, but I imagine it was a key part of the value proposition, given that both of them describe themselves as ‘building AGI’.
Given that, I’m curious to what extent you think that such talk, if it was indeed responsible for the funding, has been open for scrutiny, or whether it’s been systematically defended from skeptical analysis?
I agree about the effects of deep learning hype on deep learning funding, though I think very little of it has been AGI hype; people at the top level had been heavily conditioned to believe we were, and still are, in the AI winter of specialized ML algorithms that solve individual tasks. (The MIRI-sphere had to work very hard, before OpenAI and DeepMind started doing externally impressive things, to get serious discussion of within-lifetime timelines from anyone besides the Kurzweil camp.)
Maybe Demis was strategically overselling DeepMind, but I expect most people were genuinely over-optimistic (and funding-seeking) in the way everyone in ML always is.