It sometimes felt to me like Arbital didn’t care about attracting users at all. While Eliezer wrote posts explaining various issues about AI alignment on Arbital, nobody linked to those explanations on LessWrong.
If I google:
site:lesswrong.com link:arbital.com -site:wiki.lesswrong.com
I don’t see any links from LessWrong to Arbital (there are some on the LessWrong Wiki, but that doesn’t get much readership). The same goes for LW 2.0.

There wasn’t a single post on LessWrong asking: “Eliezer wrote those great explanations on Arbital, why is nobody linking to them?”
In case anybody reading this is curious about those AI alignment posts:
https://arbital.com/explore/ai_alignment/
(note: loads slowly)
Just to say, these are amazing. I would rate them above Superintelligence, or indeed almost any other resource, for increasing someone’s concrete understanding of AI safety.
At least this post of mine had a link to Eliezer’s Arbital writeup on “rescuing the utility function”. I’d be sad to see it go; there’s some damn good writing in there.
You’re right that they didn’t do enough to attract users, though. I only found it by accident after someone on IAFF mentioned that Eliezer was writing stuff on Arbital.
I want to point out how trivial it would be for Eliezer to get way more users/readers on Arbital than he has gotten, just by sharing the best Arbital essays on Facebook once a month. He would’ve gotten one or two orders of magnitude more readers. I’m confused that you think Eliezer was missing this by accident, rather than being fully aware of his options and deciding to hold off on bringing users to his as-yet-unfinished product.
At the time I thought it was an under-construction CFAI kind of thing and he was avoiding too much attention from randoms, but your explanation makes sense too.