Sorry about that. Our first diavlog was better, IMHO, and included some material about whether rationality benefits a rationalist—but that diavlog was lost due to audio problems. Maybe we should do another for topics that would interest our respective readers. What would you want me to talk about with Scott?
I’d like you to talk about subjects that you firmly disagree on but think the other party has the best chance of persuading you of. To my mind, debates are more useful (and interesting) when arguments are conceded than when the debaters agree to disagree. Plus, I think that when smart, rational people are disadvantaged in a discussion, they are more likely to come up with fresh and compelling arguments. Find out where your weaknesses and Scott’s strengths coincide (and vice versa), and you’ll both come out of the debate stronger for it. I wouldn’t suggest this to just anyone, but I know that (unlike most debaters, unlike most people) you’re both eager to admit when you’re wrong.
(I dearly love to argue, and I’m probably too good at it for my own good, but oh how difficult it can be to admit defeat at the end of an argument, even when I started silently agreeing with my opponent halfway through! I grew up in an argumentative household where winning the debate was everything, and it was a big step for me when I started admitting I was wrong, and an even bigger one when I started doing it the moment I knew it, not a half hour and two thousand words of bullshit later. I was having an argument with my father about astrophysics a couple of months ago, and it had gotten quite heated even though I suspected he was right. I hadn’t followed up, but the next time I saw him he showed me a couple of diagrams he’d worked out. It took me thirty seconds to say, “Wow, I really was totally wrong about that. Well done.” He looked at me like a boxer who enters the ring ready for ten rounds and then flattens his opponent while the bell’s still ringing. No particular reason for this anecdote, just felt like sharing.)
I would like to see more discussion of the timing of artificial superintelligence (or human-level intelligence). I really want to understand the mechanics of your disagreement.
OK, that’s a weird side effect of watching the diavlog: now when I read your comments I can hear your voice in my mind.
It’s okay.
What do you disagree with Scott about? I don’t regularly read Shtetl-Optimized, and the only thing I associate with him is a deep belief that P != NP.
I don’t really know much about his FAI/AGI leanings. I guess I’ll go read his blog a bit.