flood lights seem best?
However, Annie has not yet provided what I would consider direct / indisputable proof that her claims are true. Thus, rationally, I must consider Sam Altman innocent.
This is an interesting view on rationality that I hadn’t considered
Omen decouples the market from resolution, but it has prohibitive gas problems and sees no usage as a result.
Augur was a total failboat. Almost all of these projects couple the market protocol to the resolution protocol, which is stupid, especially if you are Augur and your ideas about making resolution protocols are really dumb.
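For concreteness, here is a minimal, purely illustrative sketch of what decoupling the market protocol from the resolution protocol means. Everything here is hypothetical (the names and interfaces are made up, not Augur's or Omen's actual contracts or APIs): the market only handles trading and settlement, and truth-finding is delegated to whatever oracle it happens to be configured with.

```python
# Illustrative sketch only: hypothetical names, not any real project's API.
from abc import ABC, abstractmethod
from typing import Optional


class ResolutionOracle(ABC):
    """Any mechanism that eventually reports an outcome: a token-holder
    vote, a designated reporter, an optimistic oracle, etc."""

    @abstractmethod
    def outcome(self, question_id: str) -> Optional[str]:
        """Return the settled outcome, or None if not yet resolved."""


class PredictionMarket:
    """Trading and settlement logic only. The market never decides truth
    itself; it asks whichever oracle it was configured with. Swapping the
    resolution mechanism does not touch the market code -- that separation
    is the 'decoupling' being argued for."""

    def __init__(self, question_id: str, oracle: ResolutionOracle):
        self.question_id = question_id
        self.oracle = oracle
        # trader -> outcome -> stake
        self.positions: dict[str, dict[str, float]] = {}

    def buy(self, trader: str, outcome: str, stake: float) -> None:
        self.positions.setdefault(trader, {}).setdefault(outcome, 0.0)
        self.positions[trader][outcome] += stake

    def settle(self, trader: str) -> float:
        result = self.oracle.outcome(self.question_id)
        if result is None:
            raise RuntimeError("market not yet resolved")
        return self.positions.get(trader, {}).get(result, 0.0)
```

Under this kind of design, a bad resolution protocol can be replaced without rewriting or abandoning the market protocol, which is the point of keeping the two separate.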
Your understanding is correct. I built one which is currently offline, I’ll be in touch soon.
I found the stuff about relationship success in Luke's first post here to be useful! Thanks.
Ok, this kind of tag is exactly what I was asking about. I'll have a look at these posts.
[Question] Can LessWrong provide me with something I find obviously highly useful to my own practical life?
Thanks for giving an example of a narrow project; I think it helps a lot. I have been around EA for several years, and at this point grandiose projects and narratives alienate me, while hearing about projects like yours makes my ears perk up and makes me feel like maybe I should devote more time and attention to the space.
I guess it’s good to know it’s possible to be both a LW-style rationalist and quite mentally ill.
Not commenting on distributions here, but it sure as fuck is possible.
I liked the analogy and I also like weird bugs
While normal from a normal perspective, this post is strange from a rationalist perspective, since the lesson you describe is that X is bad, but the evidence given is that you had a good experience with X aside from mundane interpersonal drama that everyone experiences and that doesn't sound particularly exacerbated by X. Aside from that, you say it contributed to psychosis years down the line, but it's not very clear to me that there is a strong causal relationship, or any causal relationship at all.
(of course, your friend’s bad experience with cults is a good reason to update against cults being safe to participate in)
I am not really a cult advocate. But it is okay (and certainly Bayesian) to just have a good personal experience with something and conclude that it can be safer or nicer than people generally think. Just because you're crazy doesn't mean everything you did was bad.
Edit: This is still on my mind so I will write some more. I feel like the attitude in your post, especially your addendum, is that it's fundamentally, obviously wrong to feel like your experience was okay or an okay thing to do. And that the fact that you feel/felt okay about it is strong evidence that you need to master rationality more, in order to be actually okay. And that once you do master rationality, you will no longer feel it was okay.
But “some bad things happened and also some good things, I guess it was sort of okay” is in fact a reasonable way to feel. It does sound like some bad things happened, some good things happened, and that it was just sort of okay (if not better). There is outside-view evidence that cults are bad. Far be it from me to say that you should not avoid cults; we should certainly incorporate the outside view into our choices. But successfully squashing your inside view because it contradicts the outside view is not really an exercise in rationality, and is often the direct opposite. Also, it makes me sad.
how are you personally preparing for this?
Recently I learned that Pixel phones actually contain TPUs. This is a good indicator of how much deep learning is being used (in particular, I think it is used by the camera).
Re: taboos in EA, I think it would be good if somebody who downvoted this comment said why.
Open tolerance of the people involved with the status quo, and fear of alienating or making enemies of powerful groups, are core parts of current EA culture! Steve's top comment on this post is an example of enforcing/reiterating this norm.
It's an unwritten rule that seems very strongly enforced yet never really explicitly acknowledged, much less discussed. People were shadow-blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech about how governments have handled Covid. That fits what I'd consider a taboo: something any socially savvy person would pick up on and internalize if they were around it.
Maybe this norm of open tolerance is downstream of the implications of truly considering some people to be your adversaries (which you might do if you thought delaying AI development by even an hour was a considerable moral victory, as the OP seems to). Doing so does expose you to danger. I would point out, though, that lc's post analogizes their relationship with AI researchers to Israel's relationship with Iran, and when I think of Israel's resistance to Iran, nonviolence is not the first thing that comes to mind.
So the first step to good outreach is not treating AI capabilities researchers as the enemy. We need to view them as our future allies, and gently win them over to our side by the force of good arguments that meet them where they're at, in a spirit of pedagogy and truth-seeking.
To this effect I have advocated that we should call it "Different Altruism" instead of "Effective Altruism", because by leading with the idea that a movement involves doing altruism better than the status quo, we are going to trigger and alienate people who are part of the status quo whom we could have instead won over by being friendly and gentle.
I often imagine a world where we had ended up with a less aggressive and impolite name attached to our arguments. I mean, think about how virality works: making every single AI researcher even slightly more resistant to engaging with your movement (by priming them to be defensive) is going to have a massive impact on the probability of ever reaching critical mass.
Thanks a lot for doing this and posting about your experience. I definitely think that nonviolent resistance is a weirdly neglected approach; "mainstream" EA certainly seems against it. I am glad you are getting results, and I'm not even that surprised.
You may be interested in the discussion here; I made a similar post after meeting yet another AI capabilities researcher at FTX's EA Fellowship (she was a guest, not a fellow): https://forum.effectivealtruism.org/posts/qjsWZJWcvj3ug5Xja/agrippa-s-shortform?commentId=SP7AQahEpy2PBr4XS
I'm interested in working on dying with dignity.
Not sure what you're on, but "You might listen to an idiot doctor who puts you on spiro" is definitely a real transition downside.