Hanson’s position was essentially destroyed by Hanson himself, via supporting arguments that made no sense...
As far as I can tell, Hanson does not disagree with Yudkowsky except about the probability of risks from AI. Yudkowsky says that the existential risk from AI is not under 5%. Has Yudkowsky been able to support this assertion sufficiently? Hanson only needs to show that it is unreasonable to assume the probability is larger than 5%, and my personal perception is that he was able to do so.
Note that my comment (quoted) referred to the 2008 debate, which was not on that subject.
If you think I am a troll or an idiot, let me know; I want to know.
You do not appear to be trolling in this particular instance.