Jan Leike of OpenAI:
https://aligned.substack.com/p/alignment-optimism?publication_id=328633
I’m not sure Jan would endorse “accelerating capabilities isn’t bad.” Also I doubt Jan is confident AI won’t kill everyone. I can’t speak for him of course, maybe he’ll show up & clarify.
Hmm! Yeah, I guess this doesn’t match the letter of the specification. I’m going to pay out anyway, though, because it matches the “high-status monkey” and “well-reasoned” criteria so well and it at least has the right vibes, which are, regrettably, kind of what I’m after.
Ah, my bad then.
Nice. I haven’t read all of this yet, but I’ll pay out based on the first 1.5 sections alone.