Agent foundations, AI macrostrategy, human enhancement.
I endorse and operate by Crocker’s rules.
I have not signed any agreements whose existence I cannot mention.
My model is that:
1. some of it is politically/ideologically/self-interest-motivated,
2. some of it is just people glancing at a thing, forming an impression, and not caring to investigate further,
3. some of it is people interacting with the thing indirectly via people from the first two categories; some subset of them then take a glance at the PauseAI website or whatever out of curiosity, form an impression (e.g., whether it matches what they’ve heard from other people), and don’t care to investigate further.
Making slogans more ~precise might help with (2) and (3).
Some people misinterpret/mispaint them(/us?) as “Luddites” or “decels” or “anti-AI-in-general” or “anti-progress”.
Is it their(/our?) biggest problem, one of their(/our?) bottlenecks? Most likely no.
It might still make sense to make marginal changes that make it marginally harder to do that kind of mispainting / reduce misinterpretative degrees of freedom.
You can still include it in your protest banner portfolio to decrease the fraction of people whose first impression is “these people are against AI in general” etc.
This closely parallels the situation with the immune system.
One might think “I want a strong immune system. I want to be able to fight every dangerous pathogen I might encounter.”
You go to your local friendly genie and ask for a strong immune system.
The genie fulfills your wish. No more seasonal flu. You don’t need to bother with vaccines. You even considered no longer washing your hands, but then you realized that other people are still not immune to whatever bugs there might be on your skin.
Then, a few weeks in, you go into anaphylactic shock while eating your favorite peanut butter sandwich. An ambulance takes you to the hospital, where they also tell you that you have Hashimoto’s.
You go to your genie to ask “WTF?” and the genie replies, “You asked for a strong immune system, not a smart one. It was not my task to ensure that it knows that peanut protein is not the protein of some obscure worm, even though they might look alike, or that the thyroid is a part of your own body.”
I have experimented some with meditation, specifically with the goal of embracing the DMN (with few definite results).
I’d be curious to hear more details on what you’ve tried.
Relevant previous discussion: https://www.lesswrong.com/posts/XYYyzgyuRH5rFN64K/what-makes-people-intellectually-active
Then the effect would be restricted to people who are trying to control their eating, which we would probably have heard about by now.
What is some moderately strong evidence that China (by which I mean Chinese AI labs and/or the CCP) is trying to build AGI, rather than “just” building AI that is useful for whatever they want their AIs to do and not falling behind the West, while also not taking Western claims about AGI/ASI/singularity at face value?
From my perspective, DeepSeek should incentivize slowing down development (if you agree with the fast-follower dynamic; also by reducing profit margins generally), and I believe it has.
Any evidence of DeepSeek marginally slowing down AI development?
There’s a psychotherapy school called “metacognitive therapy”, and some people swear by it, claiming it is simple and solves >50% of psychological problems because it targets their root causes. (I’m saying this from memory of a podcast I listened to in the summer of 2023 and failed to research further, so my description might be off, but maybe somebody will find some value in it.)
In the case of engineering humans for increased IQ, Indians show broad support for such technology in surveys (even in the form of rather extreme intelligence enhancement), so one might focus on doing research there and/or lobbying the Indian public and government to fund such research. High-impact Indian citizens interested in this topic seem like very good candidates for funding, especially those with the potential of snowballing internal funding sources that will be insulated from Western media bullying.
I’ve also heard that AI X-risk is much more viral in India than EA in general (in comparative terms, relative to the West).
And in terms of “Anything right-leaning”, a parallel EA culture, preferably with a different name, able to cultivate right-wing funding sources, might be effective.
Progress studies? Not that they are necessarily right-leaning themselves, but if you integrate support for [progress-in-general and doing a science of it] over the intervals of the political spectrum, you might find that center-right-and-righter supports it more than center-left-and-lefter (though low confidence, and it might flip if you ignore the degrowth crowd).
With the exception of avoiding rationalists (and can we really blame Moskovitz for that?)
Care to elaborate?
Some amphetamines kinda solve akrasia-in-general to some extent (much more so than caffeine), at least for some people.
I’m not claiming that they’re worth it.
I imagine “throw away your phone” will get me 90% of the way there.
I strongly recommend https://www.minimalistphone.com/
It didn’t get me 90% of the way there (“there” being “completely eliminating/solving akrasia”) but it probably did reduce [spending time on my phone in ways I don’t endorse] by at least one order of magnitude.
Active inference is an extension of predictive coding in which some beliefs are so rigid that, when they conflict with observations, it’s easier to act to change future observations than it is to update those beliefs. We can call these hard-to-change beliefs “goals”, thereby unifying beliefs and goals in a way that EUM doesn’t.
You’re probably aware of it, but it makes sense to make explicit that this move also puts many biases, addictions, and maladaptive/disendorsed behaviors in the goal category.
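To make the “rigid beliefs as goals” move concrete, here is a minimal toy sketch (my own illustration, not from the post; the Gaussian update and the action model of nudging the observation toward the prediction are simplifying assumptions):

```python
def minimize_prediction_error(mu, prior_precision, obs, obs_precision,
                              action_gain=0.5):
    """One step of reducing the prediction error (mu - obs) in two ways:
    perception (precision-weighted Bayesian update of the belief mu) and
    action (changing the world so the next observation moves toward mu).
    When prior_precision >> obs_precision, the belief barely moves, so
    action dominates: the rigid belief functions as a goal/setpoint."""
    # Perception: standard Gaussian posterior mean.
    new_mu = (prior_precision * mu + obs_precision * obs) / (
        prior_precision + obs_precision)
    # Action: nudge the observed state toward the prediction.
    new_obs = obs + action_gain * (mu - obs)
    return new_mu, new_obs

# Rigid belief ("my core temperature is 37 C"): behaves like a goal.
mu, obs = 37.0, 35.0
for _ in range(5):
    mu, obs = minimize_prediction_error(mu, prior_precision=1e6,
                                        obs=obs, obs_precision=1.0)
print(round(mu, 3), round(obs, 3))  # belief stays ~37.0; world driven to ~36.9

# Flexible belief ("it's 20 C outside"): ordinary belief updating.
mu, _ = minimize_prediction_error(20.0, prior_precision=1.0,
                                  obs=5.0, obs_precision=1.0,
                                  action_gain=0.0)
print(mu)  # 12.5 -- the belief moves toward the data instead
```

On this picture, the point above drops out immediately: any sufficiently rigid prior gets treated as a goal by the same machinery, whether or not the agent endorses it.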
EUM treats goals and beliefs as totally separate. But in practice, agents represent both of these in terms of the same underlying concepts. When those concepts change, both beliefs and goals change.
Active inference is one framework that attempts to address this. Jeffrey–Bolker is another, though I haven’t dipped my toes into it deeply enough to have an informed opinion on whether it’s more promising than active inference for the thing you want to do.
Based on similar reasoning, Scott Garrabrant rejects the independence axiom. He argues that the axiom is unjustified because rational agents should be able to lock in values like fairness based on prior agreements (or even hypothetical agreements).
I first thought that this introduces epistemic instability, because vNM EU theory rests on the independence axiom (so it looked like, in order to unify EU theory with active inference, you wanted to reject one of the things defining EU theory qua EU theory). But then I realized that you hadn’t assumed vNM as a foundation for EU theory, so maybe it’s irrelevant. Still, as far as I remember, different foundations of EU theory give you slightly different implications (and many of them have some equivalent of the independence axiom; at least Savage does), so it might be good to think explicitly about what kind of EU foundation you’re assuming. But it also might be irrelevant. I don’t know. I’m leaving this thought-train-dump in case it might be useful.
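For concreteness, the axiom under discussion, in its standard vNM form (Savage’s analogue is the sure-thing principle):

$$A \succeq B \iff p\,A + (1-p)\,C \;\succeq\; p\,B + (1-p)\,C \quad \text{for all lotteries } C \text{ and all } p \in (0,1].$$

As I understand the fairness example: an agent that strictly prefers a 50/50 lottery between “Alice gets the prize” and “Bob gets the prize” over either certainty violates this axiom, since independence rules out strictly preferring a mixture to both of its components.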
I don’t think anything I said implied interest in your thesis.
I was mostly explaining why pasting your thesis in Spanish to LW was a breach of some implicit norms that I thought were so obvious they didn’t even need to be stated. I was also trying to understand why you did it. (You linked some previous question post, but I couldn’t find the answer to my question with a quick ctrl+f+[keyword], which I think is a reasonable amount of effort whenever someone answers a specific, simple question with a link that is not a straightforward answer to that question.)
LW is an English-speaking site. I’ve never seen a non-English post (or even comment?), and while I don’t know if it’s explicitly written anywhere, I feel like a long post in Spanish is not aligned with this site’s spirit/intention.
If I wanted to share my work that is not in English, I would make a linkpost with a translated abstract and maybe an introduction and link to the pdf in some online repository.
Why are you posting a post in Spanish on LessWrong?
Thesis presented in partial fulfillment of the requirements for the title of Specialist in Clinical Neuroscience at Facultad AVM
Did you just copy-paste the PDF of some guy’s thesis?
https://www.lesswrong.com/posts/TYgztDNXhobbqMpXh/goodhart-typology-via-structure-function-and-randomness