Upvoted, but it wasn’t nearly as fascinating as I’d hoped, because it was all on our home turf. Eliezer reiterated familiar OB/LW arguments; Aaronson fought a rearguard action without saying anything game-changing. Supporting link for the first (and most interesting to me) disagreement: Aaronson’s “The Singularity Is Far”.
I have a significant disagreement with this from that link:
I see a few fragile and improbable victories against a backdrop of malice, stupidity, and greed—the tiny amount of good humans have accomplished in constant danger of drowning in a sea of blood and tears
Since destroying things is MUCH easier than building them, if humans weren’t substantially inclined toward helpful and constructive values, civilization would never have arisen in the first place, nor could it continue to exist.
Maybe I’m the only one, but I’d like to see a video of Eliezer alone. Just him talking about whatever he finds interesting these days.
I’m suggesting this because so far all the two-way dialogs I’ve seen end up with Eliezer talking only about a quarter of the time, and most of what he says is correcting what the other person has said. So we end up with not much original Eliezer, which is what I’d really be interested in hearing.
I agree. I stopped watching about five minutes in, when it became clear that EY and Scott were just going to spend a lot of time going back and forth.
Nothing game-changing indeed. Debate someone who substantially disagrees with you, EY.
Sorry about that. Our first diavlog was better, IMHO, and included some material about whether rationality benefits a rationalist—but that diavlog was lost due to audio problems. Maybe we should do another on topics that would interest our respective readers. What would you want me to talk about with Scott?
I’d like you to talk about subjects that you firmly disagree on but think the other party has the best chance of persuading you of. To my mind, debates are more useful (and interesting) when arguments are conceded than when the debaters agree to disagree. Plus, I think that when smart, rational people are disadvantaged in a discussion, they are more likely to come up with fresh and compelling arguments. Find out where your weaknesses and Scott’s strengths coincide (and vice versa) and you’ll both come out of the debate stronger for it. I wouldn’t suggest this to just anyone but I know that (unlike most debaters, unlike most people) you’re both eager to admit when you’re wrong.
(I dearly love to argue, and I’m probably too good at it for my own good, but oh how difficult it can be to admit defeat at the end of an argument even when I started silently agreeing with my opponent halfway through! I grew up in an argumentative household where winning the debate was everything, and it was a big step for me when I started admitting I was wrong, and an even bigger one when I started doing it the moment I knew it, not a half hour and two thousand words of bullshit later. I was having an argument with my father about astrophysics a couple months ago, and it had gotten quite heated even though I suspected he was right. I hadn’t followed up, but the next time I saw him he showed me a couple diagrams he’d worked out. It took me thirty seconds to say, “Wow, I really was totally wrong about that. Well done.” He looked at me like a boxer who enters the ring ready for ten rounds and then flattens his opponent while the bell’s still ringing. No particular reason for this anecdote, just felt like sharing.)
I would like to see more discussion of the timing of artificial superintelligence (or human-level intelligence). I really want to understand the mechanics of your disagreement.
Ok, that’s a weird side effect of watching the diavlog: now when I read your comments I can hear your voice in my mind.
It’s okay.
What do you disagree with Scott over? I don’t regularly read Shtetl-Optimized, and the only thing I associate with him is a deep belief that P != NP.
I don’t really know much about his FAI/AGI leanings. I guess I’ll go read his blog a bit.