I am experimenting with holding office hours where people can talk to me about anything they want. First iteration, next week Wednesday at noon (PT) at this link:
http://garden.lesswrong.com?code=bSu7&event=habryka-s-office-hours
Come by if you want to talk to me! I am a man of many hats, so many topics are up for discussion. Some topics that seem natural:
+ LessWrong
+ AI Alignment Forum
+ LTFF grantmaking
+ Survival and Flourishing Fund grantmaking
Some non-institutional topics that seem interesting to discuss:
+ Is any of the stuff around Moral Uncertainty real? I think it’s probably all fake, but if you disagree, let’s debate!
+ Should people move somewhere other than the Bay? I think they probably shouldn’t, but I am pretty open to changing my mind, and good arguments are appreciated.
+ Is there any way we can make it more likely that we get access to the vaccine soon and can get back to normal life? If you have any plans, let me know.
+ What digital infrastructure other than the EA-Forum, LessWrong, and the AI Alignment Forum do you want the LessWrong team to build? Should we revive reciprocity.io?
+ Do you think that we aren’t at the hinge of history because of Will’s arguments? Debate me! I disagree, and I would really like someone to defend this position, because I feel confused about what’s happening discourse-wise.
Almost anything else is also fair game. Feel free to come by and tell me about any books you recently read, or respond to any of the many things I’ve written in comments and posts over the years.
Can you say more about this? I only found this comment after a quick search.
Don’t really feel like writing this up in a random comment thread. That’s why I proposed it as a topic for a casual chat at my office hours!
Ok. Since visiting your office hours is somewhat costly for me, I was trying to gather more information (about e.g. what kind of moral uncertainty or prior discussion you had in mind, why you decided to capitalize the term, whether this is something I might disagree with you on and might want to discuss further) to make the decision.
More generally, I’ve attended two LW Zoom events so far, both times because I felt excited about the topics discussed, and both times felt like I didn’t learn anything/would have preferred the info to just be a text dump so I could skim and move on. So now I feel like I need to be more confident that I will find an event useful before attending.
Yeah, that’s pretty reasonable. We’ll see whether I get around to typing up my thoughts on this, but I’m not sure I ever will.
To me that sounds like you want to divert resources away from doubling down on scaling up the existing infrastructure.
Huh, that’s a weird way of phrasing it. Why would it be “divert away”? We’ve always worked on a bunch of different things, and while LessWrong is obviously our main project, we just work on whatever stuff seems most likely to have the best effect on the world and fits well with our other projects.
I think it’s not very obvious how many other projects we work on.
I… don’t think I understand what this has to do with my comment? I agree that it’s not overwhelmingly obvious, but what does that have to do with my comment (or Christian’s, for that matter)?
I guess maybe this whole thread just feels kinda confused, since I don’t understand what the goal of Christian’s comment is.
My read was:
Christian responds to your comment with “you want to divert resources away from the thing you usually work on” (with possibly an implication that Christian cares a lot about the thing he thinks you usually work on, and doesn’t want fewer resources allocated to it.)
You respond “huh that’s a weird way of phrasing it. why would it be ‘divert away?’ we’ve always worked on other projects”
That seemed, to me, to be a weird way of replying, because, like, theory of mind says that ChristianKl doesn’t know about all those other projects. If you assume the LessWrong team mostly builds LessWrong, it’s quite reasonable to respond to a query about “what stuff should we build other than LW?” with “that sounds like you’re diverting resources away from LessWrong”. And a more sensible response would have been “ah, yeah I see why you’d think that if you think we only build LessWrong, but actually we do other projects.”
(moreover, I think it actually is plausibly bad that we spread our focus as thinly as we do. I don’t think it’s an obvious call, because the other projects we work on are also important, and it’s a reasonable high-level call for us to be “the rationality infrastructure team” rather than “the LessWrong team”. But a priori I do feel a lot more doomy about small teams that spread themselves thin.)
I feel like Raemon got where I was coming from.
I think it makes sense to have multiple installations of the same software, the way the EA-Forum and AI Alignment Forum reuse the code base. Code often provides exponential returns, so it makes sense to double down on good projects instead of spreading efforts.
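To make the “multiple installations” point concrete, here is a minimal sketch of how a single forum code base can power several sites through per-instance configuration. All names here are hypothetical illustrations, not the actual LessWrong code base:

```typescript
// Hypothetical per-site configuration for a shared forum code base:
// one deployment artifact, several installations that differ only in settings.
interface ForumConfig {
  siteName: string;         // display name of the installation
  domain: string;           // where this instance is served
  primaryColor: string;     // per-site theming
  inviteOnlyPosting: boolean; // feature flag for instance-specific behavior
}

const instances: ForumConfig[] = [
  { siteName: "LessWrong", domain: "lesswrong.com", primaryColor: "#5f9b65", inviteOnlyPosting: false },
  { siteName: "EA-Forum", domain: "forum.effectivealtruism.org", primaryColor: "#0c869b", inviteOnlyPosting: false },
  { siteName: "AI Alignment Forum", domain: "alignmentforum.org", primaryColor: "#3f51b5", inviteOnlyPosting: true },
];

// All the shared logic (rendering, voting, comments) lives in one place;
// each installation only supplies its configuration.
function bootForum(config: ForumConfig): void {
  console.log(`Serving ${config.siteName} on ${config.domain}`);
  // ...shared application setup would run here, parameterized by config
}

instances.forEach(bootForum);
```

The design point is just that a feature shipped once in the shared code lands on every installation at the same time, which is the “exponential returns” intuition above.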