I think this is his latest comment, though it is about FOOM. Hanson’s opinion is that, on the margin, the current number of people working on AI safety seems adequate. Why? Because there’s not much useful work you can do without access to advanced AI, and he thinks the latter is a long time in coming. Again, why? Hanson thinks that FOOM is the main reason to worry about AI risk. He prefers an outside view for predicting technologies on which we have little empirical information, and so believes FOOM is unlikely, because historically progress hasn’t come in huge chunks but gradually. You might question the speed of progress, if not its lumpiness, as deep learning seems to pump out advance after advance. Hanson argues that people are estimating progress poorly and that talk of deep learning is overblown.
What would it take to get Hanson to sit up and pay more attention to AI? An AI whose self-monologue is used to guide and improve its ability to perform useful tasks.
One thing I didn’t manage to fit in here is that I feel like another crux for Hanson would be how the brain works. If the brain tackles most useful tasks using a simple learning algorithm, as Steve Byrnes argues, rather than a grab bag of specialized modules with a distinct algorithm for each, then I think that would be a big update. But that is mostly my impression, and I can’t find the sources I used to generate it.
I’ve argued at length (1, 2, 3, 4, 5, 6, 7) against the plausibility of this scenario. It’s not that it’s impossible, or that no one should work on it, but that far too many take it as a default future scenario.
Yeah, I think he assigns ~5% chance to FOOM, if I had to make a tentative guess. 10% seems too high to me. In general, my first impression of Hanson’s credences on a topic won’t be accurate unless I really scrutinize his claims. So it’s not weird to me that someone might wind up thinking Hanson believes there’s a <1% chance of AI x-risk.
Do you mean hard takeoff, or Yudkowsky’s worry that FOOM causes rapid value drift and destroys all value? I think Hanson puts maybe 5% on the latter and a much larger number on hard takeoff, 10 or 20%.
Really? My impression was the opposite. He’s said things to the effect of “there’s nothing you can do to prevent value drift,” and seems to think that whether we create EMs or not, our successors will hold values quite different from our own. See all his writing about the current era being a dreamtime, the values of grabby aliens, etc.
That sounds like a lot more than 1% chance.