Is this selection bias? I have had people who are overconfident and get nowhere.
I don’t think it’s independent from smartness, a smart+conscientious person is likely to do better.
https://www.lesswrong.com/tag/r-a-z-glossary
I found this by accident, and luckily I remembered glancing over your question.
It would make an interesting meta post if someone did an analysis of each of those traction peaks and the news stories or other articles that caused them.
Accessibility error: half the images on this page appear not to load.
Have you tried https://alternativeto.net? It may not be AI-specific, but it was pretty useful for me in finding lesser-known AI tools with a particular set of features.
Error: the mainstream status link at the bottom of the post links back to the post itself instead of to the comments.
I prefer "System 1: fast thinking or quick judgement"
vs.
"System 2: slow thinking."
I guess it depends on where you live, who you interact with, and what background they have, because "fast vs. slow" covers the inferential distance fastest for me: it avoids the spirituality/intuition woo-woo landmine, avoids the part where you have to point to a seemingly trivial word in their vocabulary called "reason", etc.
William James (see below) noted, for example, that while science declares allegiance to dispassionate evaluation of facts, the history of science shows that it has often been the passionate pursuit of hopes that has propelled it forward: scientists who believed in a hypothesis before there was sufficient evidence for it, and whose hopes that such evidence could be found motivated their researches.
"Einstein's Arrogance" seems like a better explanation of the phenomenon to me.
I remember a point Yampolskiy made on a podcast, arguing for the impossibility of AGI alignment: that, as a young field, AI safety had underwhelming low-hanging fruit. I wonder if all of the major low-hanging fruit has already been plucked.
I thought it was kind of known that a few of the billionaires were rationalist-adjacent in a lot of ways, given that effective altruism caught on with billionaire donors. Also, in the emails released by OpenAI (https://openai.com/index/openai-elon-musk/) there is a link to Slate Star Codex forwarded to Elon Musk in 2016, and Elon attended Eliezer's conference IIRC. There are quite a few places in adjacent circles where you could already find hints of this possibility, like basedbeffjezos's followers being billionaires, etc. I was kind of predicting that some of them would read popular things on here as well, since they probably have overlapping peer groups.
A few feature suggestions (I am not sure if these are feasible):
1) Folders OR sort by tag for bookmarks.
2) When I close the hamburger menu on the frontpage, I don't see a need for the post list to stay off-centre. It's unusual; it might make more sense if there were a way to stack posts side by side like Mastodon.
3) An RSS feed for subscribed feeds? I don't like using email, because too many subscriptions cause spam.
(Unrelated: can I get de-rate-limited, lol, or will I have to write quality posts for that to happen?)
I usually think of this in terms of Dennett's concept of the intentional stance, according to which there is no fact of the matter as to whether something is an agent or not, but there is a fact of the matter as to whether we can usefully predict its behavior by modeling it as if it were an agent with some set of beliefs and goals.
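A minimal sketch of that predictive framing, in Python (the thermostat example and all names here are my own illustration, not Dennett's formalism): we never ask whether the system "really" has beliefs, only whether attributing a belief and a goal to it predicts its behavior well.

```python
# Hypothetical illustration of the intentional stance as a predictive model.
# The stance is justified by predictive usefulness, not by inner ontology.

from dataclasses import dataclass

@dataclass
class IntentionalModel:
    belief: float  # what the system "thinks" the temperature is
    goal: float    # the temperature it "wants"

    def predict_action(self) -> str:
        # Predict behavior as if the system acts on its belief to reach its goal.
        if self.belief < self.goal:
            return "heat on"
        if self.belief > self.goal:
            return "heat off"
        return "idle"

# A thermostat modeled as an agent: the prediction matches observation,
# so the agent-model is useful, regardless of whether it is "really" an agent.
thermostat = IntentionalModel(belief=18.0, goal=21.0)
print(thermostat.predict_action())  # -> "heat on"
```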
That sounds an awful lot like asserting agency to be a mind-projection fallacy.
Sorry for the late reply; I was looking through my past notifs. I would recommend you taboo the words and replace the symbols with the substance. I would also recommend treating language as instrumental, since words don't have inherent meaning; that's just how an algorithm feels from inside.
Is this a copy of the video that has been listed as removed? @Raemon
It is surely the case for me. I was raised a Hindu nationalist, and I ended up trusting various sides of the political spectrum, from far right to far left, along with a porn addiction; later I fell into trusting science and technology without thinking for myself. Then I fell into epistemic helplessness and worked 16-hour days as a denial of the situation, which led to me getting sleep paralysis. Later my father died due to his faulty beliefs in naturopathy and alternative medicine; honestly, due to his contrarian bias he didn't go to a modern-medicine doctor. I was 16 back then (last year). That eventually led me here, initially very skeptical of anything but my default common-sense intuition, until I realised the cognitive biases I had fallen for, and so on.
Most useful post. I was intuitively aware of these states; thanks for providing the physiological underpinnings. I am aware enough to actually feel a sense of tension in my head in SNS-dominated states, and I've noticed that I was biased during those states. My predictions seem to align well with the literature.
Why does lesswrong.com have a bookmark feature without a way to sort the bookmarks, as in using tags or maybe even subfolders? Unless I am missing something, I think it might be better to just resort to the browser's bookmark feature.
I think what they mean is the intuitive notion of typicality rather than the statistical concept of average.
98 seems approximately 100, but 100 doesn't seem approximately 98, due to how this heuristic works. That is, typicality is a System 1 heuristic over a similarity cluster, and it's asymmetric.
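A minimal sketch of that asymmetry (my own illustration; the prototype set and the salience weighting are assumptions, not from the sequence): typicality gets judged against a salient prototype, so swapping the item and the anchor changes the answer.

```python
# Hypothetical model: typicality is similarity to a salient prototype,
# not a symmetric distance between two items.

PROTOTYPES = {100, 1000}  # round numbers acting as category anchors

def typicality(item: int, anchor: int) -> float:
    """How 'approximately anchor' does item feel? Higher when the anchor
    is a salient prototype we already organize our reasoning around."""
    distance = abs(item - anchor)
    salience = 1.0 if anchor in PROTOTYPES else 0.5  # prototypes pull harder
    return salience / (1 + distance)

print(typicality(98, 100))  # 98 judged against the prototype 100 -> ~0.33
print(typicality(100, 98))  # 100 judged against non-prototype 98 -> ~0.17
```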
Here is the post on typicality from the A Human's Guide to Words sequence.
To interpret what you meant when you said "my hair has grown above average": you have an extension which you refer to with the words "average hair", and you find yourself on the outer edge of this extensional cluster in hair-space. Ideally you would craft an intension for this extension, moving from "average as in the mathematical concept, sum of terms / number of terms" to something like "the amount of hair growth I tend to experience usually". That statement may or may not be accurate, depending on how much data you have fed your inner sim. Or, if by "average hair" you mean "the societal stereotype of average hair growth", then that would be subject to cultural factors like what shows you watch, etc.
(Also, if you reply back I won't be able to respond: I have been rate-limited to one post per 2 days for a year on LessWrong.)
The student employing version one of the learning strategy will gain proficiency at watching information appear on a board, copying that information into a notebook, and coming up with post-hoc confirmations or justifications for particular problem-solving strategies that have already provided an answer.
Ouch, I wasn't prepared for direct attacks, but thank you very much for explaining this :). Now I know why some of my experienced self's later strategies, like "if I were at this step, how would I figure this out from scratch?" and "what will the teacher teach today, based on previous knowledge?", worked better, or felt more engaging from my POV (I love maths, and it was normal for me to try to find ways to engage more).
But this tells me I should apply the Rationality: A-Z techniques to learning more often, given how this is just anticipation control, fake causality, replacing the symbol with the referent, and positive bias.
I recommend having this question in the next LessWrong survey, along the lines of: "How often do you use LLMs, and what is your use case?"