AIS student, self-proclaimed aspiring rationalist, very fond of game theory.
“The only good description is a self-referential description, just like this one.”
momom2
It’s thought-provoking.
Many people here identify as Bayesians, but are as confused as Saundra by the troll’s questions, which indicates that they’re missing something important.
It wasn’t mine. I did grow up in a religious family, but becoming a rationalist came gradually, without a sharp break from my social network. I always figured the people around me were making all sorts of logical mistakes, though, and I noticed deep flaws in what I was taught very early on.
It’s not. The paper is hype; the authors don’t actually show that this could replace MLPs.
This is very interesting!
I did not expect that Chinese respondents would be more optimistic about benefits than worried about risks, or that they would rank it so low as an existential risk.
This is in contrast with posts I see on social media and with articles showcasing safety institutes and discussing doomer opinions, which gave me the impression that Chinese academia was generally more concerned about AI risk, and especially existential risk, than its US counterpart.
I’m not sure how to reconcile this survey’s results with my previous model. Was I just wrong and updating too much on anecdotal evidence?
How representative of policymakers and of influential scientists do you think these results are?
About the Christians around me: it is not explicitly considered rude, but it signals that you want to challenge their worldview, and if you can be expected to ask that kind of question often, you won’t be welcome in open discussions.
(You could do it once or twice for anecdotal evidence, but if you actually want to know whether many Christians believe in a literal snake, you’ll have to do a survey.)
I disagree – I think that no such perturbations exist in general, rather than that we have simply not had any luck finding them.
I have seen one such perturbation. It was two images of two people, one of which was clearly male and the other clearly female, though in 15 seconds of looking I wasn’t able to tell any significant difference between the two images except for a slight difference in hue.
Unfortunately, I can’t find this example again after a 10-minute search. It was shared on Discord; the people in the image were white and freckled. I’ll save it if I find it again.
The pyramids in Mexico and the pyramids in Egypt are related via architectural constraints and human psychology.
In practice, when people say “one in a million” in that kind of context, the real probability is much higher than that. I haven’t watched Dumb and Dumber, but I’d be surprised if Lloyd did not, actually, have a decent chance of ending up with Mary.
On the one hand, we claim [dumb stuff using made-up impossible numbers](https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument), and on the other hand, we dismiss those numbers and fall back on there’s-a-chancism.
These two phenomena don’t always perfectly compensate for one another (as examples in both posts show), but common sense is more reliable than it may seem at first. (That’s not to say it’s the correct approach, though.)
Epistemic status: amateur, personal intuitions.
If this were the case, it would make sense to hold dogs (rather than their owners, or their breeding) responsible for aggressive or violent behaviour.
I’d consider whether punishing the dog would make the world better, or whether changing the system that led to its breeding, providing incentives to the owner, or some combination of other actions would be more effective.
Consequentialism is about considering the consequences of actions to judge them, but various people might wield this in various ways.
Implicitly, with this concept of responsibility, you’re taking a deontological approach to bad behavior: punish the guilty (perhaps using consequentialism to determine who’s guilty, though that’s unclear from your argumentation afaict).
In an idealized case, I care about whether the environment I operate in (including other people’s actions and those of their dogs) is performing well only insofar as I can change it; put differently, I care only about how I can perform better.
(Then, because the world is messy, I also need to account for coordination with other people whose intuitions might not match mine, for society’s recommendations, for my own human impulses, etc. My moral system is only an intuition pump, for lack of a satisfactory metaethics.)
I can imagine plausible mechanisms for how the first four backlash examples were a consequence of perceived power-seeking from AI safetyists, but I don’t see one for e/acc. Does someone have one?
Alternatively, what reason do I have to expect that there is a causal relationship between safetyist power-seeking and e/acc even if I can’t see one?
That’s not interesting to read unless you say what your reasons are and how they differ from other critics’. You don’t have to spell it all out in a comment, but at least link to a post.
Interestingly, I think that one of the examples of proving too much on Wikipedia can itself be demolished by a proving too much argument, but I’m not going to say which one it is because I want to see if other people independently come to the same conclusion.
For those interested in the puzzle, here is the page Scott was linking to at the time: https://en.wikipedia.org/w/index.php?title=Proving_too_much&oldid=542064614
The article was edited a few hours later, and the subsequent conversation showed that Wikipedia editors came to the conclusion Scott hinted at, though the suspicious timing suggests they did so after reading Scott’s article rather than independently.
Another way to avoid the mistake is to notice that the implication is false, regardless of the premises.
In practice, people’s beliefs are not deductively closed, and (in the context of a natural-language argument) we treat propositional formulas as tools for computing truth values rather than as timeless statements.
it can double as a method for creating jelly donuts on demand
For those reading this years later, here’s the comic that shows how to make ontologically necessary donuts.
I’d appreciate examples of the sticker shortcut fallacy with in-depth analysis of why they’re wrong and how the information should have been communicated instead.
“Anyone thinks they’re a reckless idiot” is far too easy a bar to reach for any public figure.
I do not know of major anti-Altman currents in my country, but considering that surveys consistently show a majority of people worried about AI risk, a normal distribution of how extreme opinions on the subject get ensures there will be many who do consider Sam Altman a reckless idiot (for good or bad reasons; I expect most of them to ascribe to Sam Altman any negative trait that comes to their attention, because for a large portion of the population it is just that easy to hold a narrow, hateful opinion on a subject).
I have cancelled my subscription as well. I don’t have much to add to the discussion, but I think signalling participation in the boycott will help conditional on the boycott having positive value.
Thanks for the information.
Consider, though, that for many people the price of the subscription is justified by the convenience of access and use.
It took me a second to see how your comment was related to the post, so here it is for others:
Given this information, using the API preserves most of the benefits of access to SOTA AI (assuming away the convenience value) while destroying most of the value for OpenAI, which makes this a very effective intervention compared to cancelling the subscription entirely.
When I vote, I basically know the full effect this has on what is shown to other users or to myself.
Mindblowing moment: It has been a private pet peeve of mine that it was very unclear what policy I should follow for voting.
In practice, I vote mostly on vibes (and expect most people to), but given my own practices for browsing LW, I also considered alternative approaches.
- Voting in order to assign a specific score (weighted for inflation by time and author) to the post. Related uses: comparing karma of articles, finding desirable articles on a given topic.
- Voting in order to match an equivalent-value article. Related uses: same; perhaps effective as a community norm but more effortful.
- Voting up if the article is good, down if it’s bad (after memetic/community/bias considerations) (regardless of current karma). Related uses: karma as indicator of community opinion.
In the end, keeping my votes consistent under any of these policies turned out to require too much calculation, which is why I came back to vibes, amended by implicit considerations of how to vote consistently.
I was trying to figure out ways to vote which would put me in a class of voters that marginally improved my personal browsing experience.
It never occurred to me to model the impact it would have on others and to optimize for their experience.
This sounds like an obviously better way to vote.
So, for anyone who was in the same situation as me: please optimize for others’ browsing experience (or your own) directly rather than overcalculating decision-theoretic whatevers.
Seems like you need to go beyond arguments from authority and stating your conclusions, and instead go down to the object-level disagreements. You could say instead “Your argument for ~X is invalid because blah blah”, and if Jacob says “Your argument for the invalidity of my argument for ~X is invalid because blah blah”, then it’s better than before, because it’s easier to evaluate argument validity than ground truth.
(And if that process continues ad infinitum, consider that someone who cannot evaluate the validity of the simplest arguments is not worth arguing with.)