My mind skipped over this the first time, but hey, look! He’s using Eliezer’s term. Interesting. Kinda sad, given that the term describes something you should never do. Not that you shouldn’t work on AI; rather, you should work on AI because it is very likely to be a big deal, and good researchers have a large impact on how a field and its engineering efforts play out. (I agree this domain is quite hard, but it’s not as impossibly hard as brute-forcing a random password a hundred ASCII characters long.)
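For a sense of scale, here’s a quick back-of-the-envelope sketch (assuming the password is drawn uniformly from the 95 printable ASCII characters; the exact alphabet doesn’t change the conclusion much):

```python
import math

# Back-of-the-envelope: size of the search space for a random
# 100-character password over the 95 printable ASCII characters.
alphabet_size = 95
password_length = 100

combinations = alphabet_size ** password_length
print(f"search space ~ 10^{math.log10(combinations):.0f}")  # ~ 10^198
```

A search space of roughly 10^198 guesses is the kind of odds the comparison is pointing at; AI alignment is hard, but not *that* kind of hard.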
I’d imagine he was reaching for a term for a “generalised Pascal-like situation”. Calling it a Pascal’s wager wouldn’t work, because Pascal’s wager proper wasn’t a valid argument.
Hm, I guess it is a bit sad that there isn’t a term for this.
Thanks. And very cool. Someone should send him the AI Alignment Forum sequences, in case he wants some interesting subproblems to think about.