Quick thoughts, feel free to ignore:
You should be sure to point out that many of the readings are dumb and wrong (i.e., the readings that I disagree with). :-P
I was going to suggest Carl Shulman + Dwarkesh podcast as another week 1 option but I forgot that it’s 6 hours!
I hope week 2 doesn’t make the common mistake of supposing that the scaling hypothesis is the only possible reason someone might think timelines are short-ish (see here, here).
Week 3 title should maybe say “How could we safely train AIs…”? I think there are other training options if you don’t care about safety.
Good luck!
One thing you might be missing is that the discussion groups are now in the past.
The hope is that the scholars notice this on their own.
Lol nice catch.
Can you expand on which readings you think are dumb and wrong?
I was just being silly … they’re trying to present arguments on both sides of various contentious issues, so of course any given reader is going to think that ≈50% of those arguments are wrong.