Re safety, I don’t know about Oakland but some parts of SF are genuinely the most dangerous feeling places I’ve ever been to after dark (because normally I wouldn’t go somewhere, but SF feels very fine until it isn’t). If I am travelling to places in SF after dark I’ll check how dodgy the street entrances are.
Nathan Young
What are the LessWrong norms on promotion? Writing a post about my company seems off (but I think it could be useful to users). Should I write a quick take?
Given my understanding of epistemic and necessary truths it seems plausible that I can construct epistemic truths using only necessary ones, which feels contradictory.
Eg 1 + 1 = 2 is a necessary truth
But 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 10 is epistemic. It could very easily be wrong if I have miscounted the number of 1s.
This seems to suggest that necessary truths are just “simple to check” and that sufficiently complex necessary truths become epistemic because of a failure to check an operation.
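A toy illustration of the point above (my own sketch, not from the original): the identity with ten 1s is necessarily true, but our confidence in it rests on the fallible, empirical act of counting the 1s. Delegating the count to a script just moves the epistemic step, it doesn’t remove it.

```python
# The identity 1+1+...+1 (ten ones) = 10 is necessarily true, but checking it
# means counting symbols -- an empirical, fallible act.
expression = "1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1"

# Count the ones mechanically rather than by eye.
num_ones = expression.count("1")
print(num_ones)          # 10

# Evaluate the sum itself; it matches only if the expression really has ten 1s.
print(eval(expression))  # 10
```

Of course, trusting `count` and `eval` is itself an epistemic matter, which is rather the point.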
Similarly, “the angles of a triangle sum to 180 degrees” is only necessarily true in spaces such as R². It might look necessarily true everywhere, but it isn’t on the sphere. So what looks like a necessary truth is actually an epistemic one.
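A quick check of the sphere case (my own example): the “octant” triangle on a unit sphere, with one vertex at the north pole and two on the equator 90° apart, has three right angles, so its angle sum is 270°, not 180°.

```python
import math

# The "octant" triangle on a unit sphere has three right angles.
angles = [math.pi / 2] * 3

angle_sum_degrees = math.degrees(sum(angles))
print(angle_sum_degrees)  # ~270 degrees, not 180

# Girard's theorem: on a unit sphere, angle sum = pi + area.
# The octant triangle covers one eighth of the sphere's 4*pi area, i.e. pi/2.
area = sum(angles) - math.pi
print(math.isclose(area, math.pi / 2))  # True
```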
What am I getting wrong?
A problem with overly kind PR is that many people know that you don’t deserve the reputation. So if you start to fall, you can fall hard and fast.
Likewise, it incentivises investigation into a reputation you can’t back up.
If everyone thinks I am lovely, but I am two-faced, I create a juicy story any time I am cruel. Not so if I am known to be grumpy.
E.g. my sense is that EA did this a bit with the press tour around What We Owe the Future. It built up a sense of wisdom that wasn’t necessarily deserved, so with FTX it all came crashing down.
Personally I don’t want you to think I am kind and wonderful. I am often thoughtless and grumpy. I think you should expect a mediocre to good experience. But I’m not Santa Claus.
I am never sure whether rats are very wise or very naïve to push for reputation over PR, but I think it’s much more sustainable.
@ESYudkowsky can’t really take a fall for being goofy. He’s always been goofy—it was priced in.
Many organisations think they are above maintaining the virtues they profess to possess, and instead manage their image with media relations.
In doing this they often fall harder eventually. Worse, they lose out on the feedback that comes from their peers accurately seeing their current state.
Journalists often frustrate me as a group, but they aren’t dumb. Whatever they actually publish, they probably have a deeper sense of what is going on.
Personally I’d prefer to get that in small sips, such that I can grow, than to have to drain my cup to the bottom.
I’ve made a big set of expert opinions on AI and my inferred percentages from them. I guess that some people will disagree with them.
I’d appreciate hearing your criticisms so I can improve them or fill in entries I’m missing.
Though sometimes the obligation to answer is right, right? I guess the obligation works well at some scale, but becomes bad at some larger scale. In a conversation it’s fine; in a public debate, it sometimes seems to me that it doesn’t work.
I think the motivating instances are largely:
Online debates are bad
Freedom of Information requests suck
I think I probably backfilled from there.
I do sometimes get persistent questions on Twitter, but I don’t think there is a single strong example.
Sadly you are the second person to correct me on this; @Paul Crowley was first. Oops.
The solution is not to prevent the questions, but to remove the obligation to generate an expensive answer.
Good suggestion.
Thank you, this is the kind of thing I was hoping to find.
What changes do you think the polyamory community has made?
I find this a very suspect detail, though the base rate of conspiracies is very low.
“He wasn’t concerned about safety because I asked him,” Jennifer said. “I said, ‘Aren’t you scared?’ And he said, ‘No, I ain’t scared, but if anything happens to me, it’s not suicide.’”
To be more explicit about my model, I see communities as a bit like people. And sometimes people do the hard work of changing (especially as they have incentives to) but sometimes they ignore it or blame someone else.
Similarly, communities often scapegoat something or someone, or give vague general advice.
Sure sounds good. Can you crosspost to the EA forum? Also I think Nicky’s pronouns are they/them.
It seems underrated for LessWrong to have cached high quality answers to questions like this. Also stuff on exercise, nutrition, parenting and schooling. That we don’t really have a clear set seems to point towards this being difficult or us being less competent than we’d like.
Nevertheless lots of people were hassled. That has real costs, both to them and to you.
If that were true, then there are many ways you could partially do it: e.g. give people a set of tokens representing their mana at the time of the devaluation, and if at a future point you raise, give them 10x those tokens back.
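The token idea can be sketched in a few lines. This is a hypothetical mechanism with my own illustrative numbers (the 10x multiplier is from the suggestion above; the balances and function names are made up), not anything Manifold has committed to.

```python
# Hypothetical sketch: snapshot each user's mana at devaluation time as tokens,
# then redeem each token at a multiplier if the company later raises.
REDEMPTION_MULTIPLIER = 10  # the "10x those tokens back" figure suggested above

def tokens_at_devaluation(mana_balance):
    # 1 token per mana held at the snapshot.
    return mana_balance

def redeemed_mana(tokens):
    # Paid out only if/when a future raise makes it affordable.
    return tokens * REDEMPTION_MULTIPLIER

tokens = tokens_at_devaluation(1000)
print(redeemed_mana(tokens))  # 10000
```

Under a 10x devaluation this would exactly make snapshot-time holders whole, while deferring the cost to a future raise.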
I’m discussing this with Carson. I might change my mind, but I don’t know that I’ll argue with both of you at once.
Austin said they have $1.5 million in the bank vs $1.2 million of mana issued. The only outflows right now are to the charity programme, which even with a lot of outflows is only at $200k. They also recently raised at a $40 million valuation. I am confused by the claim that they are running out of money. They have a large user base that wants to bet and will do so at larger amounts if given the opportunity. I’m not so convinced that the timeline is as tight as suggested.
But if there is, then say so: “We know that we often talked about mana eventually being worth 100 mana per dollar, but we printed too much and we’re sorry. Here are some reasons we won’t devalue in the future.”
FLI feels like a massively underrated org. Because of the whole Vitalik donation thing they have something like $300mn.