Okay, so this post is great, but I just want to note my confusion: why is it currently the 10th-highest-karma post of all time?? (And that’s inflation-adjusted!)
I’m also confused why Eliezer seems to be impressed by this. I admit it is an interesting phenomenon, but it is apparently just some oddity of the tokenization process.
So I am confused why this is getting this much attention (I feel like it was coming from people who hadn’t even read Eliezer’s comment?). But I thought what Eliezer meant was less “this is particularly impressive” and more “this just seems like the sort of thing we should be doing a ton of, as a general civilizational habit.”
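For the curious, here’s a toy illustration of the tokenization oddity in question: a BPE-style vocabulary can end up containing an entire unusual string as a single token. The vocabulary, the token IDs, and the greedy longest-match rule below are all made up for illustration; real tokenizers learn their merges from corpus statistics.

```python
# Toy greedy longest-match tokenizer. Vocabulary and IDs are fabricated;
# this loosely illustrates how one rare string can map to a single token.
TOY_VOCAB = {
    " Solid": 1,
    "Gold": 2,
    " SolidGoldMagikarp": 3,  # hypothetical: the whole rare string is one token
    "Magikarp": 4,
    " hello": 5,
}

def tokenize(text: str) -> list[int]:
    """Greedily match the longest vocabulary entry at each position."""
    ids = []
    i = 0
    while i < len(text):
        match = None
        # Try the longest possible substring starting at i first.
        for j in range(len(text), i, -1):
            if text[i:j] in TOY_VOCAB:
                match = text[i:j]
                break
        if match is None:
            raise ValueError(f"no token covers position {i} in {text!r}")
        ids.append(TOY_VOCAB[match])
        i += len(match)
    return ids

print(tokenize(" SolidGoldMagikarp"))  # → [3]: the entire string is one token
print(tokenize(" hello"))             # → [5]
```

The point is just that such a token can exist in the vocabulary while being nearly absent from the training data, so the model never learns anything sensible about it.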
It shows that an AI-ish thing is kind of exploitable, if not exploitable in a scary way.
It also shows that an impressive AI-ish thing is kind of kludgey behind the scenes, if that’s a surprise to you.
Some hypotheses:
It’s just incredibly trippy in a visceral sense. As someone else said, this reads like an SCP.
Further, it got popular outside LessWrong, and has brought in new users and activated old ones.
Even further, it’s actual progress on understanding an AI.
Further still, it’s extremely easy to replicate key parts with a free AI anyone can try.
I assumed it was primarily because Eliezer “strongly approved” of it, after being overwhelmingly pessimistic about pretty much everything for so long.
I didn’t realize it got popular elsewhere; that makes sense, though, and could help explain the crazy number of upvotes. It would make me feel better about the community’s epistemic health if the explanation isn’t that we’re just overweighting one person’s views.
One of the other super-upvoted posts is What DALL-E 2 can and cannot do, which I think was upvoted mostly for “coolness” reasons.
(Note that What DALL-E 2 can and cannot do is not in the top 100 when inflation-adjusted.)
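For what it’s worth, the idea behind “inflation-adjusted” karma is roughly to normalize a post’s raw karma against how much voting was happening in its era, so periods of heavier voting don’t dominate all-time rankings. Here’s a minimal sketch of that kind of normalization; the data, the per-year averaging, and the formula are all made up for illustration and are not the site’s actual method.

```python
# Sketch of karma "inflation adjustment": divide a post's raw karma by the
# average karma of posts from the same year. All numbers are fabricated.
from collections import defaultdict

posts = [  # (title, year, raw_karma) -- example data, not real posts
    ("Post A", 2015, 120),
    ("Post B", 2015, 60),
    ("Post C", 2023, 600),
    ("Post D", 2023, 300),
]

# Average raw karma per year.
by_year = defaultdict(list)
for _, year, karma in posts:
    by_year[year].append(karma)
year_avg = {year: sum(ks) / len(ks) for year, ks in by_year.items()}

# Adjusted score = raw karma relative to that year's average.
adjusted = {title: karma / year_avg[year] for title, year, karma in posts}
print(adjusted)
```

Under this toy scheme, Post A (120 karma in a low-voting year) and Post C (600 karma in a high-voting year) come out with the same adjusted score, which is the effect such an adjustment is after.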