Case study: A simple algorithm for fixing motivation
So here I was, trying to read through an online course to learn about cloud computing, but I wasn’t really absorbing any of it. No motivation.
Motives are a chain, ending in a terminal goal. Lack of motivation meant that my System 1 did not believe what I was doing would lead to achieving any terminal goal. The chain was broken.
So I traversed the chain to see which link was broken.
Why was I doing the online course? Because I want to become better at my job.
Do I still think doing the online course will make me better at my job? Yes I do.
Do I want to get better at my job? Nah, doesn’t spark joy.
Why do I want to get better at my job? Because I want to get promoted.
Do I still think doing better will make me get promoted? Yes I do.
Do I want to get promoted? Nah, doesn’t spark joy.
Why do I want to get promoted? Because (among other things) I want more influence on my environment, for example by having more money.
Do I still think promotion will give me more influence? Yes I do.
Do I want more influence? Nah.
Why do I want more influence (via money)? Because (among other things) I want to buy a house and do meetups, and live with close friends at the center of a vibrant community that helps people.
Do I think more money will get me this house? Yes I do.
Do I want to live with close friends at the center of a vibrant community that helps people? Well, usually yes, but today I kind of just want to go to the beach with my gf, and decompress.
Well okay, but most days you do want this thing.
Shit, you’re right, I do want to do this online course.
And motivation was restored. Suddenly I felt invigorated: to do the course, and to write this post.
Another thing you can do when you get to that top level is ask: “Is this (living with close friends at the center of a vibrant community) the best way to get what I want? If not, what is?”
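Here is a minimal sketch of the traversal described above, in Python. The data structure and names are illustrative assumptions, not from the post; it just makes the two checks per link explicit: the means-end belief (“does X still lead to Y?”) and the felt desire (“does Y spark joy?”).

```python
# A minimal sketch of the chain-traversal above (hypothetical names, not from the post).
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Link:
    action: str                        # e.g. "do the online course"
    goal: str                          # e.g. "get better at my job"
    still_believe: Callable[[], bool]  # does the action still seem to lead to the goal?
    still_want: Callable[[], bool]     # does the goal spark joy right now?

def diagnose(chain: List[Link]) -> Tuple[str, Optional[Link]]:
    """Walk from the immediate task toward the terminal goal.
    Returns ("broken belief", link) if some means-end belief no longer holds,
    ("restored", link) once a goal on the chain still sparks joy,
    or ("broken chain", None) if even the terminal goal isn't wanted today."""
    for link in chain:
        if not link.still_believe():
            return ("broken belief", link)   # the plan itself needs rethinking
        if link.still_want():
            return ("restored", link)        # motivation propagates back down the chain
        # belief intact but no felt desire: ask "why do I want this?" and move up
    return ("broken chain", None)

# Example mirroring the post: beliefs all hold, desire only shows up at the top.
chain = [
    Link("do the online course", "get better at my job", lambda: True, lambda: False),
    Link("get better at my job", "get promoted", lambda: True, lambda: False),
    Link("get promoted", "more influence (money)", lambda: True, lambda: False),
    Link("more influence (money)", "house + vibrant community", lambda: True, lambda: True),
]
print(diagnose(chain))  # ("restored", <the terminal link>)
```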
Ibogaine seems to reset opiate withdrawal. There are many stories of people with 20-year-old heroin addictions being cured within a single session.
If this is true, and there are no drawbacks, then we basically have access to wireheading. A happiness silver bullet. It would be the hack of the century. Distributing ibogaine + opiates would be the best known mental health intervention by orders of magnitude.
Of course, that’s only if there are no unforeseen caveats. Still, why isn’t everybody talking about this?
Anti-tolerance drugs seem neglected, tractable, and scalable. We’ve done some shallow investigation at QRI and think it is pretty promising. We’ve been keeping it as a bullet point as we ask around in funding and academic circles. It’s an area that could use a dedicated effort, for sure.
Based on a quick glance at the Wikipedia page, it looks like ibogaine may have a significant risk of toxicity (and the experience of being on it does not necessarily sound fun, either? I would not choose to take it): https://en.wikipedia.org/wiki/Ibogaine
Also, I think this would rely on opiates being a pleasure-causing experience for everyone or almost everyone, which doesn’t seem obviously true to me. (Source: I recently had major surgery and had experience of various opiates, including some given by IV; I kind of hate all of them except for the part where they result in less physical pain.)
We can fund the search for analogs of ibogaine without its side effects. We can figure out whether microdosing ibogaine works (there is some promising but weak evidence that it does).
Typical mind fallacy: people with good lives drastically underestimate how important low-dose opiates can be for helping people with unlivable chronic pain.
Oh, yeah, for the subpopulation where opiates are therapeutic this seems really valuable. (Which, who knows, I could end up being in, if I’m unlucky and get chronic nerve pain from my amputation.) But IMO that’s a pretty different thing from “wireheading” or a “happiness silver bullet”, and it really confuses the issue to call it that.

Have you tried opiates? You don’t need to be in pain for them to make you feel great.
I had an above knee amputation due to cancer in March and have been on opiates – several different kinds, less over time, sometimes when I was in a lot of pain and sometimes more prophylactically when I’m not in pain but am preparing for something I expect to be painful. I mostly hate the experience of being on them, especially the “high” if I take it before I’m actually in pain from physiotherapy or whatever. (I do appreciate being in less pain. Pain is bad.)
I...guess it’s interesting and I could see a different person liking the experience? I get a lot of dissociative effects, especially with the IV opiates they gave me in hospital. (Feeling like I’m floating above my body, feeling like I don’t have free will and am just watching my actions happen from a distance.) I don’t particularly enjoy this. They also make me feel tired and out of it / cognitively impaired, and I am really, really averse to that. I ended up drinking so much coffee in the hospital trying to fight this off.
My guess is that brains vary and some people would experience this as “feeling great”. (I’ve noticed this with other things like stimulants; I really like how coffee makes me feel, for example, but I know a lot of people who experience it as anxiety/unpleasant jitteriness.)

As for being on ibogaine, a high dose isn’t fun for sure, but microdoses are close to neutral, and their therapeutic value makes them net positive.
Today I had some insight into what social justice really seems to be trying to do. I’ll use neurodiversity as an example because it’s less likely to lead to bad-faith arguments.
Let’s say you’re in the (archetypical) position of a king. You’re programming the rules that a group of people will live by, optimizing for the well-being of the group itself.
You’re going to shape environments for people. For example you might be running a supermarket and deciding what music it’s going to play. Let’s imagine that you’re trying to create the optimal environment for people.
The problem is that, since more than one person is affected by your decision, and these people are not exactly the same, you will not be able to make the decision that is optimal for each one of them. If even two of your customers have different favourite songs, you cannot play both. In some sense, making a decision over multiple people is inherently “aggressive”.
But what you can do is reduce the amount of damage. My understanding is that this is usually done by splitting up the people as finely as possible. You might split your audience into stereotypes for “men”, “women”, “youngsters”, “elders”, “autistic people”, “neurotypicals”, etc. You can then make a decision that would be okay for each of these stereotypes, giving your model a lower error rate.
The problem with this is that stereotypes are leaky generalizations. Some people might not conform to them. Your stereotypes might be mistaken. Alternatively, there might be some stereotypes that you’re not aware of.
Take these two models. Model A knows that some people are highly sensitive to sound; Model B is not aware of it. If your model of people is B, you will play much louder music in the supermarket. As a result, people who are highly sensitive to sound will be unable to shop there. This is what social justice means by “oppression”. You’re not actively pushing anyone down, but you are doing so passively, because you haven’t resolved your “ignorance”.
So the social justice project, as I understand it, is to enrich our models of humans to make sure that as many of them as possible are taken into consideration. It is a project of group epistemics, above all.
That means that good social justice requires good epistemics. How do you collaboratively figure out the truth? The same laws apply as to any truthseeking: act in good faith, assign some probability to being wrong, seek to understand the other person’s model first, don’t discard your own doubts, and be proud and grateful when you change your mind.
The problem is that reducing the amount of damage is not the same thing as maximizing value. It’s not utilitarian.
If you are faced with making a decision that gives 99% of people +1 utility but 1% of people −10 utility, an approach that targets damage reduction means that you choose to leave 0.89 utility per person, on average, on the table.
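To make the arithmetic behind the 0.89 figure explicit (a worked version of the numbers above, averaging over everyone affected):

\[
\mathbb{E}[\Delta u] = 0.99 \cdot (+1) + 0.01 \cdot (-10) = 0.99 - 0.10 = 0.89,
\]

so vetoing the change to protect the 1% forgoes 0.89 utility per person in expectation.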
So the social justice project, as I understand it, is to enrich our models of humans to make sure that as many of them as possible are taken into consideration.
Giving someone consideration is not enough from a critical theory perspective. Consideration is not equity. Equity is about also giving them power to take part in the decision making.
This would be nice. But in practice I don’t see splitting the audience along many dimensions; rather, the differences are shoehorned into sex/gender, sexual orientation, and race (e.g. insisting that “Muslim” is a race). In a social justice debate, someone with Asperger’s is more likely to be called an asshole than accepted as a disadvantaged minority. Also, the dimension of wealth vs. poverty is often suspiciously missing.
If you are a benevolent dictator, it would be better to simply have two supermarkets—one with music and one without—and let everyone choose individually where they prefer to shop, instead of dividing people into categories, assigning the categories to shops, then further splitting the categories into subcategories, etc. But this means treating people as individuals, not as categories. Specifically, trying to help people by helping categories is an XY problem (you end up taking resources from people at the bottom of the “advantaged” categories and giving them to people at the top of the “disadvantaged” categories; for example, Obama’s daughters would probably qualify for a lot of support originally meant for poor people).
Epistemically, social justice is a mixed bag, in my opinion. Some good insights, some oversimplifications. Paying attention to things one might regularly miss, but also evolving its own set of stereotypes and dogmas. It is useful as yet another map in your toolbox, and harmful when it’s the only map you are allowed to use.
I find this interesting because it gives one of the better arguments I can recall that there is something positive at the heart of social justice, such that it isn’t just one side trying to grab power from another to push a different set of norms. That’s often what its dynamics look like to me in practice, whatever the intent of social justice advocates, and I don’t find such battles compelling (why grant one group power rather than another, all else equal, if they will push for what they want to the exclusion of those out of power, just the same as those in power now do to those seeking to gain it?).
You may have heard of the poverty trap, where you have so little money that you’re not able to spend any money on the things you need to make more. Being broke is an attractor state.
You may have heard of the loneliness trap. You haven’t had much social interaction lately, which makes you feel bad and anxious. This anxiety makes it harder to engage in social interaction. Being lonely is an attractor state.
I think the latter is a close cousin of something that I’d like to call the irrelevance trap:
Lemma 1: having responsibilities is psychologically empowering. When others depend on your decisions, it is so much easier to make the right decision.
Lemma 2: being psychologically empowered makes it more likely for you to take on responsibility, and for others to give you responsibility, because you’re more able to handle it.
I speculate that some forms of depression (the dopaminergic type) are best understood as irrelevance traps. I’m pretty sure that that was the case for me.
How do you escape such a trap? Well you escape a loneliness trap by going against your intuition and showing up at a party. You escape an irrelevance trap by going against your intuition and taking on more responsibility than you feel you can handle.
I like this direction of thought. Note that for all of these traps, success is more often a matter of gradual improvement than of a binary “escape from the trap”. And persistence plays a large role—very few improvements come from a single attempt.

Sounds very close to what Peterson says.

He does influence my thinking.
Been thinking all day about why LW does not spark joy anymore for me. I left about 2 years ago to seek a meaningful social life elsewhere, but honestly what I found was… mostly a barren wasteland. Now, with enough slack to come back to LW with a lot of energy, I simply find that my heart is not in it. Why?
One problem is, well, I’ve essentially settled on the Netherlands as my base, but no matter how much I tried with the meetups, I never quite managed to breathe life into the local LW community. I did get a regular attendance of 15 people after 2 years of consistent meetups, but when I came back after a 6-month hiatus in Blackpool it was all gone, and I had to start over. I haven’t been motivated to organise meetups since.
Just interact online? Not meaningful. I’ve invested way too many skill/meaning points into charisma (body language being a big part of that) to throw all that away. Not even covid was reason enough for me to stop seeing people physically even temporarily.
I have some other gripes with LW, and the core cause of most of them is that rationalists tend to be more neurotic than average, while I’m less neurotic. But c’mon, I’m not going to find a community that matches my personality even better than LW, and if I do, it’s probably even more sparsely represented in Amsterdam.

I’m basically out of ideas. Help is welcome.
I did all the epistemic virtue. I rid myself of my ingroup bias. I ventured out on my own. I generated independent answers to everything. I went and understood the outgroup. I immersed myself in lots of cultures that win at something, and I’ve found useful extracts everywhere.
And now I’m alone. I don’t fully relate to anyone in how I see the world, and it feels like the inferential distance between me and everyone else is ever increasing. I’ve lost motivation for deep friendships; it just doesn’t seem compatible with learning new things about the world. That sense of belonging I got from LessWrong is gone too. There are a few things that LW/EA just doesn’t understand well enough, and I haven’t been able to get them across.
I don’t think I can bridge this gap. Even if I can put things to words, they’re too provisional and complicated to be worth delving into. Most of it isn’t directly actionable. I can’t really prove things yet.
I’ve considered going back. Is lonely dissent worth it? Is there an end to this tunnel?
Can’t speak for anyone else, and I don’t know what my counterfactual selves feel like (see https://www.lesswrong.com/posts/6NvbSwuSAooQxxf7f/beware-of-other-optimizing). I don’t know if you and I are similar in ways that matter on this topic. In fact, I don’t know what mental features are important for how to optimize on this topic. Anyway, this is not advice, simply a framing that works for me.
For me, I believe it’s worth it. The tunnel widens a lot, and has LOTS of interesting features in it, but it does not end—there is a fairly fundamental truth underlying that loneliness, and I don’t know of any acceptable ways for me to deny or forget that truth (to myself).
I’ve become hyper-aware of the complexity and variance in humanity, and in myself moment-to-moment and year to year. This makes me quite able to have deep connections with many people, EVEN WHILE understanding that they model the universe differently than I on many dimensions. We can’t have and don’t need agreement on everything, or even on ontologically fundamental topics. We can agree that sitting around a campfire talking about our human experiences is desirable, and that’s enough. With other groups, I can explore moral philosophy without a realism assumption, even if I don’t particularly want to hang out with them on less intellectual topics.
The sense of ungroundedness sine waves over time afaict. The old strategies for connection had untenable foundations (e.g. tacit shared metaphysical assumptions), so you’ll need to learn new ones. The Charisma Myth and NVC are good for bootstrapping some of the skills. Motivation in the new regime can’t arise because you don’t have even the proto-skills necessary to get a rewarding feedback loop going yet.
Here’s a faulty psychological pattern that I recently resolved for myself. It’s a big one.
I want to grow, so I seek out novelty. Try new things. For example, I might buy high-lumen light bulbs to boost my mood. So I buy them, feel somewhat better, celebrate the win, and move on.
Problem is, I’ve already bought high-lumen bulbs three times in my life, yet I sit here without any. So this pattern might happen all over again: I feel like upgrading my life, get this nice idea of buying light bulbs, buy them, celebrate my win, and move on.
So here are four life-upgrades, but did I grow four times? Obviously I only grew once: from not having high-lumen light bulbs to having them.
My instinct towards growth seems to think this:
growth = novelty
But in reality, it seems to be more like this:
growth = novelty − decay
which I define as equal to
growth = novelty + preservation
The TAP (trigger-action plan) I installed to put this preservation mindset into practice seems to be very helpful. It’s as follows: if I wonder what to do, instead of starting over (“what seems like the best upgrade to add to my life?”), I first check whether I’m on track with the implementation of past good ideas (“what did my past self intend to do with this moment again?”).
Funnily enough, so far the feeling I get from this mindset seems pretty similar to the feeling I get from meditation. And meditation can be seen as training yourself to put your attention on your past intentions too.
I think this one goes a lot deeper than what I’ve written here. I’ll be revisiting this idea.
I have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.
It seems quite obvious to me that these people equate their ability to empathize with an animal with the animal’s capacity to be conscious, and it seems quite obvious to me that this is a case of the mind projection fallacy. Empathy is just a simulation. You can’t actually see another mind.
If you’re going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don’t look at animal behavior at all. We have absolutely no reason to believe that behavior correlates with consciousness. That’s just your empathy getting in the way. The same empathy that attributes feelings to stuffed animals.
We have absolutely no reason to believe that behavior correlates with consciousness.
Not to be pedantic, but what else could consciousness possibly be, except for a way of describing the behavior of some object at a high level of abstraction?
If consciousness was not a behavior, but instead was some intrinsic property of a system, then you run into the exact same argument that David Chalmers uses to argue that philosophical zombies are conceivable. This argument was forcefully rebutted in the sequences.
ETA: When I say behavior, I mean it in the physical sense. A human who is paralyzed but nonetheless conscious would not be behaviorally identical to a dead human. Superficially yes, but behavior means more than seeing what goes on outside. While you might say that I’m simply using a different definition of behavior than you were, I think it’s still relevant, because any evolutionary reason for consciousness must necessarily show up in observational behavior, or else there is no benefit and we have a mystery.
Not to be pedantic, but what else could consciousness possibly be, except for a way of describing the behavior of some object at a high level of abstraction?
It could be something that is primarily apparent to the person that has it.
If consciousness was not a behavior, but instead was some intrinsic property of a system, then you run into the exact same argument that David Chalmers uses to argue that philosophical zombies are conceivable.
That runs together two claims: that consciousness is not behaviour, and that it is independent of physics. You don’t have to accept the second claim in order to accept the first.
And it remains the case that Chalmers doesn’t think zombies are really possible.
I think it’s still relevant, because any evolutionary reason for consciousness must necessarily show up in observational behavior, or else there is no benefit and we have a mystery.
“Primarily accessible to the person that has it” does not mean “no behavioural consequences”.
It could be something that is primarily apparent to the person that has it.
I’m not convinced that this definition is sufficiently clear, or that consciousness should be defined this way. Rather, it’s a property of consciousness that people claim that it’s “readily apparent”, but I am not convinced that it has this readily apparent quality to it.
In general, rather than taking the properties of consciousness at face value, I take Dennett’s approach of evaluating people’s claims about consciousness from a behavioral science perspective. From my perspective, once you’ve explained the structure, dynamics, and the behavior, you’ve explained everything.
And it remains the case that Chalmers doesn’t think zombies are really possible.
Are you sure? Chalmers argues in Chapter 3 of The Conscious Mind that zombies are logically possible. I am not really even sure what force the Zombie argument could hold if he thought it was not logically possible.
From my point of view, that’s missing the central point quite badly.
Could you let me know why? What about consciousness is missed by a purely behavioral description of an object (keeping in mind that what I mean by behavior is very broad, and includes things like the behavior of electrical signals)?
I have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.
I think people who refer to animal behavior in making statements about consciousness are making a claim more along the lines of “given that a being has a brain with superficial similarities to ours and was evolved via a process similar to our own evolution, we can take its behavior as a higher-level indicator of what its brain is doing and infer things about consciousness.” Otherwise, these people would also grant consciousness to all sorts of things we make with superficially human behavior but obviously different mechanisms (e.g., non-player characters in MMOs, chatbots).
If you’re going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don’t look at animal behavior at all.
I read a lot more about consciousness back in the day and I’m not convinced that neural correlates are any better evidence for consciousness than behavior, given that the beings we’re considering already have brains. I’m no expert, but per Wikipedia on neural correlates of consciousness, we don’t have much in terms of neural correlates:
Given the absence of any accepted criterion of the minimal neuronal correlates necessary for consciousness, the distinction between a persistently vegetative patient who shows regular sleep-wake transitions and may be able to move or smile, and a minimally conscious patient who can communicate (on occasion) in a meaningful manner (for instance, by differential eye movements) and who shows some signs of consciousness, is often difficult.
Per Open Philanthropy’s 2017 report on consciousness, regarding cortex-requiring views (CRVs), we’re not really sure how important having a cortex is for consciousness:
Several authors have summarized additional arguments against CRVs, but I don’t find any of them to be even moderately conclusive. I do, however, think all this is sufficient to conclude that the case for CRVs is unconvincing. Hence, I don’t think there is even a “moderately strong” case for the cortex as a necessary condition for phenomenal consciousness (in humans and animals). But, I could imagine the case becoming stronger (or weaker) with further research.
And from the same report, there aren’t really any clear biological factors that can be used to draw lines about consciousness:
How did my mind change during this investigation? First, during the first few months of this investigation, I raised my probability that a very wide range of animals might be conscious. However, this had more to do with a “negative” discovery than a “positive” one, in the following sense: Before I began this investigation, I hadn’t studied consciousness much, and I held out some hope that there would turn out to be compelling reasons to “draw lines” at certain points in phylogeny, for example between animals which do and don’t have a cortex, and that I could justify a relatively sharp drop in probability of consciousness for species falling “below” those lines. But, as mentioned above, I eventually lost hope that there would (at this time) be compelling arguments for drawing any such lines in phylogeny (short of having a nervous system at all).
Moreover, people who have done way more thorough research into correlates of consciousness than me use both (e.g., anatomical features as an example of neural correlates, and motivational trade-offs as an example of behavior). Given that animals already have a bunch of similarities to humans, it strikes me as a mistake not to consider behavior at all.
A reductio ad absurdum for this is the strong skeptical position: I have no particular reason to believe that anything is conscious. All configurations of quantum space are equally valuable, and any division into “entities” with different amounts of moral weight is ridiculous.
We have absolutely no reason to believe that behavior correlates with consciousness.
The strong version of this can’t be true. You claiming that you’re conscious is part of your behaviour. Hopefully, it’s approximately true that you would claim that you’re conscious iff you believe that you’re conscious. If behaviour doesn’t at all correlate with consciousness, it follows that your belief in consciousness doesn’t at all correlate with you being conscious. Which is a reductio, because the whole point with having beliefs is to correlate them with the truth.
Still have various options for what to do next, but most likely I will spend at least a year trying to build a large rationality community in Amsterdam. I’m talking 5+ events a week, a dedicated space, membership program, website, retreats, etc.
The emphasis will be on developing applied rationality. My approach will be to cover many different paradigms of self-improvement. My hunch is that one will start noticing patterns that these paradigms have in common.
I’m thinking authentic relating, radical honesty, CFAR-style applied rationality, shadow work, yoga/meditation, psychedelic therapy, street epistemology, tantra, body work, nonviolent communication, etc. If you know anything that would fit in this list, please comment!
This would be one pillar of the organisation, and the other one would be explicitly teaching an Effective Altruist ethos to justify working on rationality in the first place.
If this goes really well, I’m hoping this will develop into something like “the CFAR of Europe” at some point.
Much of the list is mutually exclusive/contradictory.
Or rather, there is good stuff that can be mined from all of the above, and incorporated into a general practice of rationality, but there’s a big difference between saying:
“rationality can be extracted from X, Y, and Z”
“X, Y, and Z are good places to look for rationality”
“X, Y, and Z are examples of rationality.”
The first is almost trivially true of all things, so it also seems straightforwardly true of the list “authentic relating, shadow work, yoga/meditation, psychedelic therapy, tantra, body work, etc.”
The second, applied to your list, is a gray area where the word “good” would be doing a lot of work. Searching for rationality in [authentic relating, shadow work, yoga/meditation, psychedelic therapy, tantra, body work, etc.] is not a straightforward process of separating the gold from the dross; it’s a process of painstakingly distinguishing the gold from the fool’s gold while breathing in atmosphere specifically evolved to confuse and poison you. It’s not clear that these domains in particular should be promoted to the attention of someone seeking to develop rationality tech, over other plausible avenues of investigation.
The third, applied to your list, is substantially more false than true (though it still contains some truth).
I think this is important to note less because it matters to me what you’re doing off in Europe with a project you feel personally inspired by, and more because it matters to me how we allow the word “rationality” to be defined and used here on LessWrong.
As stated, you’ve said some stuff that’s compatible with how I think we ought to define and use the word “rationality,” but you’ve definitely not ruled out everything else.
I agree with you that we need to separate the good stuff from the bad stuff and that there is a risk here that I end up diluting the brand of rationality by not doing this well enough.
My intuition is that I’m perfectly capable of doing this, but perhaps I’m not the best person to make that call, and I’m reminded that you’ve personally called me out in the past on being too lax in my thinking.
I feel like you have recently written a lot about the particular way in which you think people in LW might be going off the rails, so I could spend some time reading your stuff and trying to pass your ITT.
It doesn’t, but I tend to go with the assumption that if one person voices an objection, there are 100 more with the same objection who don’t voice it. I’ve put this on my to-do list; it might take a few weeks to come back to it, but I will come back to it.
Yep, I was going to write something similar to what Duncan did. Some topics seem so strongly connected with irrationality that if you mention them, you will almost inevitably attract people already interested in that topic, including the irrational parts, and those people will be coordinated about the irrational parts. While you will be pushing in the direction of rationality, they will be actively pushing in different directions; how sure are you that you’ll win all of these battles? I assume that creating a community that is 50% rational and 50% mystical would not make you happy. Maybe not even 80% rational and 20% mystical.
One of my pet peeves about the current rationalist community is what seems to me to be quite uncritical approval of Buddhism. Especially when contrasted to our utter dismissal of Christianity, which only makes a contrarian exception for Chesterton. (I imagine that Chesterton himself might ask us whether we are literally atheists, or merely anti-Christians.) I am not saying that people here are buying Buddhism hook, line, and sinker; but they are strongly privileging a lot of stuff that comes from it. Like, you can introduce a concept from Buddhism, and people will write articles about how that actually matches our latest cognitive science or some kind of therapy. Do the same with a concept from Christianity, and you will get some strongly worded reprimand about how dangerous it is to directly import concepts from a poisonous memeplex, unless you carefully rederive it from first principles, in which case it is unlikely to be 100% the same, and it is better to use a different word to express it.
I will end my rant here, just saying that your plan sounds to me like the same thing might happen with Buddhism and Hinduism and New Age and whatever, all at the same time, so I would predict that you will lose some of the battles. And ironically, you may lose some potential rationalists, as they will observe you losing these battles (or at least not winning them conclusively) and decide that they would prefer some place with stronger norms against mysticism.
(Then again, things are never easy, there is also the opposite error of Hollywood rationality, etc.)
I think it is worth someone pursuing this project, but maybe it’d make more sense to pursue it under the post-rationality brand instead? Then again, this might reduce the amount of criticism from rationality folk in exchange for increasing Viliam’s worries.
(The main issue I have with many post-rationalists is that they pay too little heed to Ken Wilber’s pre/trans fallacy.)
Buddhists claim that they can put brains in a global maximum of happiness, called enlightenment. Assuming that EA aims to maximize happiness plain and simple, this claim should be taken seriously. It currently takes decades for most people to reach an enlightened state. If some sort of medical intervention can reduce this to mere months, this might drive mass adoption and create a huge amount of utility.
Buddhists claim that they can put brains in a global maximum of happiness, called enlightenment. Assuming that EA aims to maximize happiness plain and simple, this claim should be taken seriously.
This sounds like a Pascal’s Wager argument. Christians, after all, claim they can put you in a global maximum of happiness, called heaven. What percentage of our time is appropriate to spend considering this?
I’m not saying meditation is bunk. I’m saying there has to be some other reason why we take claims about it seriously, and the official religious dogma of Buddhists is not particularly trustworthy. We should base interest in meditation on our model of users who make self-reports, at the very least—and these other reasons for interest in meditation do not support characterizations like “global maximum of happiness.”
I don’t think “if you do this you’ll be super happy (while still alive)” is comparable to “if you do this you’ll be super happy (after you die)”. The former is testable, and I have close friends who have already fully verified it for themselves. I’ve also noticed in myself a superlinear relation between meditation time and likelihood to be in a state of bliss, and I have no reason to think this relation won’t hold when I meditate even more.
The Buddha also urged people to go and verify his claims themselves. It seems that the mystic (good) part of Buddhism is much more prominent than the organised religion (bad) part, compared to Christianity.
It seems contradictory to preach enlightenment since you could be working towards enlightenment instead of preaching. Evangelical Christians don’t have that issue.
Pascal was right, Christianity is the rational choice, even compared to all other religions.
But could you explain why you think wireheading is bad? Besides, I don’t think the comparison is completely justified. Enlightenment isn’t just claimed to increase happiness, but also intelligence and morality.
It’s not currency per se, but the essential use case of crypto seems to be to automate the third party.
This “third party” can be many things. It can be a securities dealer or broker. It can be a notary. It can be a judge that is practicing contract law.
Whenever there is a third party that somehow allows coordination to take place, and the particular case doesn’t require anything but mechanical work, then crypto can do it better.
A securities dealer or broker doesn’t beat a protocol that matches buyers and sellers automatically. A notary doesn’t beat a public ledger. A judge in contract law doesn’t beat an automatically executed verdict, previously agreed upon in code.
(like damn, imagine contracts that provably have only one interpretation. Ain’t that gonna put lawyers out of business)
And maybe a bank doesn’t beat peer to peer transactions, with the caveat that central banks are pretty competent institutions, and if anyone will win that race it is them. While I’m optimistic about cryptocurrency, I’m still skeptical about private currency.
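As a toy illustration of the “notary doesn’t beat a public ledger” point above: a minimal sketch (the class and method names are made up for this sketch; it is not a real blockchain or any specific protocol) of an append-only, hash-chained log that serves the notary’s core function of proving documents existed in a given order and haven’t been altered since.

```python
# Toy append-only, hash-chained log: each entry commits to a document's hash and
# to the previous entry, so tampering anywhere breaks every later hash.
import hashlib, json, time

class ToyLedger:
    def __init__(self):
        self.entries = []

    def notarize(self, document: str) -> str:
        """Append a record committing to the document's hash and the previous entry."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {
            "doc": hashlib.sha256(document.encode()).hexdigest(),
            "prev": prev_hash,
            "time": time.time(),
        }
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute the chain; tampering with any earlier entry breaks every later hash."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("doc", "prev", "time")}
            if e["prev"] != prev:
                return False
            if e["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

ledger = ToyLedger()
ledger.notarize("sale contract between A and B")
ledger.notarize("amendment to the contract")
print(ledger.verify())  # True; altering any earlier entry makes this False
```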
Why aren’t presidential races already essentially ITT Tournaments? It would seem like that skill would make you really good at drawing support from lots of different demographics.
Organizing tournaments requires few resources. Appointing presidents, on the other hand, requires resources that are fully out of reach.
When talking about colonizing Mars, speaking about using warp drives to get there doesn’t read like “the idea is highly unpolished”.
As someone who never came across religion before adulthood, I’ve been trying to figure it out. Some of its claims seem pretty damn nonsensical, and yet some of its adherents seem pretty damn well-adjusted and happy. The latter means there’s gotta be some value in there.
The most important takeaway so far is that religious claims make much more sense if you interpret them as phenomenological claims: claims about the mind. When Buddhists talk about the six worlds, they’re talking about six states of mood. When Christians talk about a covenant with God, they’re talking about sticking to some kind of mindset no matter what.
Back when this stuff was written, people didn’t seem to distinguish between objective reality and subjective experience. The former is a modern invention. Either that, or this nuance has been lost in translation over the centuries.
So here are two extremes. One is that human beings are a complete lookup table. The other is that human beings are perfect agents with just one goal. Most likely both are somewhat true: we have subagents that are more like the latter, and subsystems more like the former.
But the emphasis on “we’re just a bunch of hardcoded heuristics” is making us stop looking for agency where there is in fact agency. Take for example romantic feelings. People tend to regard them as completely unpredictable, but it is actually possible to predict to some extent whether you’ll fall in and out of love with someone based on some criteria, like whether they’re compatible with your self-narrative and whether their opinions and interests align with yours, etc. The same is true for many intuitions that we often tend to dismiss as just “my brain” or “neurotransmitter xyz” or “some knee-jerk reaction”.
There tends to be a layer of agency in these things. A set of conditions that makes these things fire off, or not fire off. If we want to influence them, we should be looking for the levers, instead of just accepting these things as a given.
So sure, we’re godshatter, but the shards are larger than we give them credit for.
Neither of those assumptions is true. There’s a lot we don’t know about neuroscience, but we do know that we’re not giant lookup tables, and we don’t have singular goals.
For a citation, go open any neuroscience textbook. We are made of associative memories, but we are not one giant associative look-up table. These are vastly different architectures, and the proof is simply that neurons are locally acting. For humans to be a giant lookup table would require neural networking that we objectively do not have.
For the claim about being singular goal systems, I could point to many things, but maybe Maslow’s hierarchy of needs (a hierarchy of separate, factorable goals!) is sufficient.
It’s interesting because Maslow’s Hierarchy actually seems to point to the exact opposite idea to me. It seems to point to the idea that everything we do, even eating food, is in service of eventual self-actualization.
This is of course ignoring the fact that Maslow seems to basically be false experimentally.
Prioritization of goals is not the same as goal unification.
It can be if the basic structure is “I need to get my basic needs taken care of so that I can work on my ultimate goal”.
I think Kaj has a good link on experimental proof for Maslow’s Hierarchy.
I also think that it wouldn’t be a stretch to call Self-determination theory a “single goal” framework, that goal being “self-determination”, which is a single goal made up of three separate subgoals which, crucially, must be attained together to create meaning (if they could be attained separately to create meaning, and people were OK with that, then I don’t think it would be fair to categorize it as a single-goal theory).
It can be if the basic structure is “I need to get my basic needs taken care of so that I can work on my ultimate goal”.
That’s a fully generic response though. Any combination of goals/drives could have a (possibly non-linear) mapping which turns them into a single unified goal in that sense, or vice versa.
Let me put it more simply: can achieving “self-determination” alleviate your need to eat, sleep, and relieve yourself? If not, then there are some basic biological needs (maintenance of which is a goal) that have to be met separately from any “ultimate” goal of self-determination. That’s the sense in which I considered it obvious we don’t have singular goal systems.
Any combination of goals/drives could have a (possibly non-linear) mapping which turns them into a single unified goal in that sense, or vice versa.
Yeah, I think that if the brain in fact is mapped that way it would be meaningful to say you have a single goal.
Let me put it more simply: can achieving “self-determination” alleviate your need to eat, sleep, and relieve yourself? If not, then there are some basic biological needs (maintenance of which is a goal) that have to be met separately
Maybe, it depends on how the brain is mapped. I know of at least a few psychology theories which would say things like avoiding pain and getting food are in the service of higher psychological needs. If you came to believe for instance that eating wouldn’t actually lead to those higher goals, you would stop.
I think this is pretty unlikely. But again, I’m not sure.
There is a further problem with Maslow’s work. Margie Lachman, a psychologist who works in the same office as Maslow at his old university, Brandeis in Massachusetts, admits that her predecessor offered no empirical evidence for his theory. “He wanted to have the grand theory, the grand ideas—and he wanted someone else to put it to the hardcore scientific test,” she says. “It never quite materialised.”
However, after Maslow’s death in 1970, researchers did undertake a more detailed investigation, with attitude-based surveys and field studies testing out the Hierarchy of Needs.
”When you analyse them, the five needs just don’t drop out,” says Hodgkinson. “The actual structure of motivation doesn’t fit the theory. And that led to a lot of discussion and debate, and new theories evolved as a consequence.”
In 1972, Clayton Alderfer whittled Maslow’s five groups of needs down to three, labelled Existence, Relatedness and Growth. Although elements of a hierarchy remain, “ERG theory” held that human beings need to be satisfied in all three areas—if that’s not possible then their energies are redoubled in a lower category. So for example, if it is impossible to get a promotion, an employee might talk more to colleagues and get more out of the social side of work.
More sophisticated theories followed. Maslow’s triangle was chopped up, flipped on its head and pulled apart into flow diagrams.
Of course, this doesn’t really contradict your point of there being separable, factorable goals. AFAIK, the current mainstream model of human motivation and basic needs is self-determination theory, which explicitly holds that there exist three separate basic needs:
Autonomy: people have a need to feel that they are the masters of their own destiny and that they have at least some control over their lives; most importantly, people have a need to feel that they are in control of their own behavior.
Competence: another need concerns our achievements, knowledge, and skills; people have a need to build their competence and develop mastery over tasks that are important to them.
Relatedness (also called Connection): people need to have a sense of belonging and connectedness with others; each of us needs other people to some degree
For the purposes of this discussion, it seems that Maslow’s specific factorization of goals is questionable, but not the general idea of a hierarchy of needs. Does that sound reasonable?
These aren’t contradictory or extremes of a continuum, they’re different levels of description of agency. A complete enough lookup table is indistinguishable from having goals. A deep enough neural net (say, a brain) is a pretty complete lookup table.
The “one goal” idea is a slightly confused modeling level—“goal” isn’t really unitary or defined well enough to say whether a set of desires is one conjunctive goal or many coordinated and weighted goals.
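A minimal sketch of the “complete enough lookup table” point above, with states, actions, and utilities made up for illustration: a policy computed from a goal (argmax over a utility function) can be tabulated, and the resulting table then reproduces the goal-directed agent’s behaviour exactly.

```python
# Toy demonstration: a goal-directed policy and its tabulated lookup-table version
# pick the same action on every covered state.
STATES = ["hungry", "tired", "bored"]
ACTIONS = ["eat", "sleep", "read"]

def utility(state: str, action: str) -> float:
    # toy "goal": prefer the action that addresses the current state
    return {("hungry", "eat"): 1.0, ("tired", "sleep"): 1.0, ("bored", "read"): 1.0}.get((state, action), 0.0)

def goal_directed_agent(state: str) -> str:
    return max(ACTIONS, key=lambda a: utility(state, a))

# Tabulate the goal-directed policy into a pure lookup table.
lookup_table = {s: goal_directed_agent(s) for s in STATES}

# On every covered state, the two descriptions are behaviourally identical.
assert all(lookup_table[s] == goal_directed_agent(s) for s in STATES)
print(lookup_table)  # {'hungry': 'eat', 'tired': 'sleep', 'bored': 'read'}
```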
If you’re struggling with confidence in the LW crowd, I recommend aiming for doing one thing well, instead of trying too hard to prevent criticism. You will inevitably get criticism and it’s better to embrace that.
Optimal finance means optimal allocation of money across your life, regardless of when you earn it.
That’s part of it, but there’s also coordination between people (e.g., investors coming together to finance a capital-intensive business that no single person can afford to fund) and managing risk and incentives (e.g., a sole proprietorship has better incentives but worse risk characteristics compared to a company with many shareholders, a company issuing both stocks and bonds so investors can make their own risk/reward tradeoff, etc.).
I think maybe something like “finance is about how a group of people can cooperate to pursue EU maximization over time, given some initial endowment of assets and liabilities” would capture most of it?
Well, yes and no. The point is that these other people are merely means; optimally distributing your assets over time is a means that screens off the other people, in a sense. Assuming people are really just optimizing for their own values, they might trade with others, but in the end their goal is their own allocation.
When exploring things like this, taboo the phrase “really is”. No abstraction really is anything. You might ask “how can I use this model”, or “what predictions can I make based on this” or “what was that person trying to convey when they said that”.
Less philosophically, you’re exploring an uncommon use of the word “finance”. Much more commonly, it’s about choice of mechanism for the transfer, not about the transfer itself. What you describe is usually called “saving” as opposed to “consumption”. It’s not finance until you’re talking about the specific characteristics of the medium of transfer. Optimal savings means good choices of when to consume, regardless of when you earn. Optimal finance is good placement of that savings (or borrowing, in the case of reverse-transfer) in order to maximize the future consumption.
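As a toy version of “optimal allocation of money across your life, regardless of when you earn it”: a minimal sketch assuming a two-period model with log utility and a saving/borrowing rate r (the model, function name, and numbers are illustrative assumptions, not anything from the thread).

```python
# Two-period consumption smoothing: income arrives unevenly, but consumption is
# smoothed by saving or borrowing at rate r.
def smooth_consumption(y1: float, y2: float, r: float = 0.03, beta: float = 0.97):
    """Closed-form plan maximizing ln(c1) + beta*ln(c2)
    subject to the lifetime budget c1 + c2/(1+r) = y1 + y2/(1+r)."""
    wealth = y1 + y2 / (1 + r)       # lifetime resources, valued in period-1 money
    c1 = wealth / (1 + beta)         # from the first-order condition c2 = beta*(1+r)*c1
    c2 = beta * (1 + r) * c1
    return c1, c2

# Earn everything early vs. everything late: consumption barely moves.
print(smooth_consumption(100, 0))   # ≈ (50.8, 50.7)
print(smooth_consumption(0, 100))   # ≈ (49.3, 49.2)
```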
Question for the Kegan levels folks: I’ve noticed that I tend to regress to level 3 if I enter new environments that I don’t fully understand yet, and that this tends to cause mental issues because I don’t always have the affirmative social environment that level 3 needs. Do you relate? How do you deal with this?
Case study: A simple algorithm for fixing motivation
So here I was, trying to read through an online course to learn about cloud computing, but I wasn’t really absorbing any of it. No motivation.
Motives are a chain, ending in a terminal goal. Lack of motivation meant that my System 1 did not believe what I was doing would lead to achieving any terminal goal. The chain was broken.
So I traversed the chain to see which link was broken.
Why was I doing the online course? Because I want to become better at my job.
Do I still think doing the online course will make me better at my job? Yes I do.
Do I want to get better at my job? Nah, doesn’t spark joy.
Why do I want to get better at my job? Because I want to get promoted.
Do I still think doing better will make me get promoted? Yes I do.
Do I want to get promoted? Nah, doesn’t spark joy.
Why do I want to get promoted? Because (among other things) I want more influence on my environment, for example by having more money.
Do I still think promotion will give me more influence? Yes I do
Do I want more influence? Nah
Why do I want more influence (via money)? Because (among other things) I want to buy a house and do meetups, and live with close friends at the center of a vibrant community that helps people
Do I think more money will get me this house? Yes I do
Do I want to live with close friends at the center of a vibrant community that helps people? Well, usually yes, but today I kind of just want to go to the beach with my gf, and decompress.
Well okay, but most days you do want this thing.
Shit you’re right, I do want to do this online course
And motivation was restored. Suddenly, I feel invigorated. To do the course, and to write this post.
Another thing you can do when you get to that top level: “Is this the best way to get that? (living with close friends at the center of a vibrant community), if not, what is?”
Ibogaine seems to reset opiate withdrawal. There are many stories of people with 20 year old heroin addictions that are cured within one session.
If this is true, and there are no drawbacks, then we basically have access to wireheading. A happiness silver bullet. It would be the hack of the century. Distributing ibogaine + opiates would be the best known mental health intervention by orders of magnitude.
Of course, that’s only if there are no unforeseen caveats. Still, why isn’t everybody talking about this?
Anti-tolerance drugs seem neglected, tractable, and scalable. We’ve done some shallow investigation at QRI and think it is pretty promising. Have been keeping it as a bullet point as we ask around in funding and academic circles. It’s an area that could use a dedicated effort for sure.
Based on a quick glance at the Wikipedia page, it looks like ibogaine may have a significant risk of toxicity (and also the experience of being on it does not sound necessarily fun? I would not choose to take it): https://en.wikipedia.org/wiki/Ibogaine
Also, I think this would rely on opiates being a pleasure-causing experience for everyone or almost everyone, which doesn’t seem obviously true to me. (Source: recently had major surgery, had experience of various opiates including given by IV, kind of hate all of them except for the part where they result in less physical pain.)
We can fund the search for analogs to ibogaine with no side effects. We can figure out if microdosing ibogaine works (some promising but weak evidence it does).
Typical mind fallacy: people with good lives underestimate drastically how important low dose opiates can be for helping people with unlivable chronic pain.
Oh, yeah, for the subpopulation where opiates are therapeutic this seems really valuable. (Which, who knows, I could end up being in, if I’m unlucky and get chronic nerve pain from my amputation). But IMO that’s a pretty different thing from “wireheading” or a “happiness silver bullet” and it really confuses the issue to call it that.
Have you tried opiates? You don’t need to be in pain for it to make you feel great
I had an above knee amputation due to cancer in March and have been on opiates – several different kinds, less over time, sometimes when I was in a lot of pain and sometimes more prophylactically when I’m not in pain but am preparing for something I expect to be painful. I mostly hate the experience of being on them, especially the “high” if I take it before I’m actually in pain from physiotherapy or whatever. (I do appreciate being in less pain. Pain is bad.)
I...guess it’s interesting and I could see a different person liking the experience? I get a lot of dissociative effects, especially with the IV opiates they gave me in hospital. (Feeling like I’m floating above my body, feeling like I don’t have free will and am just watching my actions happen from a distance.) I don’t particularly enjoy this. They also make me feel tired and out of it / cognitively impaired, and I am really, really averse to that. I ended up drinking so much coffee in the hospital trying to fight this off.
My guess is that brains vary and some people would experience this as “feeling great”. (I’ve noticed this with other things like stimulants; I really like how coffee makes me feel, for example, but I know a lot of people who experience it as anxiety/unpleasant jitteriness.)
As for being on ibogaine, a high dose isn’t fun for sure, but microdoses are close to neutral and their therapeutic value makes them net positive
Today I had some insight in what social justice really seems to be trying to do. I’ll use neurodiversity as an example because it’s less likely to lead to bad-faith arguments.
Let’s say you’re in the (archetypical) position of a king. You’re programming the rules that a group of people will live by, optimizing for the well-being of the group itself.
You’re going to shape environments for people. For example you might be running a supermarket and deciding what music it’s going to play. Let’s imagine that you’re trying to create the optimal environment for people.
The problem is, since there is more than one person that is affected by your decision, and these people are not exactly the same, you will not be able to make the decision that is optimal for each one of them. If only two of your customers have different favourite songs, you will not be able to play both of them. In some sense, making a decision over multiple people is inherently “aggressive”.
But what you can do, is reduce the amount of damage. My understanding is that this is usually done by splitting up the people as finely as possible. You might split up your audience into stereotypes for “men”, “women”, “youngsters”, “elders”, “autistic people”, “neurotypicals”, etc. In this case, you can make a decision that would be okay for each of these stereotypes, giving your model a lower error rate.
The problem with this is that stereotypes are leaky generalizations. Some people might not conform to it. Your stereotypes might be mistaken. Alternatively, there might be some stereotypes that you’re not aware of.
Take these 2 models. Model A knows that some people are highly sensitive to sound. Model B is not aware of it. If your model of people is A, you will play much louder music in the supermarket. As a result, people that are highly sensitive to sound will be unable to shop there. This is what social justice means with “oppression”. You’re not actively pushing anyone down, but you are doing so passively, because you haven’t resolved your “ignorance”.
So the social justice project, as I understand it, is to enrich our models of humans to make sure that as many of them as possible are taken into consideration. It is a project of group epistemics, above all.
That means that good social justice means good epistemics. How do you collaboratively figure out the truth? The same laws apply as they would to any truthseeking. Have good faith, give it some probability that you’re wrong, seek out to understand their model first, don’t discard your own doubts, and be proud and grateful when you change your mind.
The problem is that reducing the amount of damage is not the same thing as maximizing value. It’s not utilitarian.
If you are faced with making a decision that gives 99% of people +1 utility but 1% −10 utility, an approached that targets damage reduction means that you make a choice to leave 0.89 utility for the average person on the table.
Giving someone consideration is not enough from a critical theory perspective. Consideration is not equity. Equity is about also giving them power to take part in the decision making.
This would be nice. But in practice I don’t see splitting the audience along many dimensions; rather the differences are shoehorned into sex/gender, sexual orientation, and race (e.g. insisting that “Muslim” is a race). In a social justice debate, an asperger is more likely to be called an asshole than accepted as a disadvantaged minority. Also, the dimension of wealth vs poverty is often suspiciously missing.
If you are a benevolent dictator, it would better to simply have two supermarkets—one with music and one without—and let everyone choose individually where they prefer to shop. Instead of dividing them into categories, assigning the categories to shops, then further splitting the categories into subcategories, etc. But this means treating people as individuals, not as categories. Specifically, trying to help people by helping categories is an XY problem (you end up taking resources from people at the bottom of the “advantaged” categories, and giving them to people at the top of the “disadvantaged” categories; for example Obama’s daughters would probably qualify for a lot of support originally meant for poor people).
Epistemically, social justice is a mixed bag, in my opinion. Some good insights, some oversimplifications. Paying attention to things one might regularly miss, but also evolving its own set of stereotypes and dogmas. It is useful as yet another map in your toolbox, and harmful when it’s the only map you are allowed to use.
I find this interesting, because it gives one of the better arguments I can recall that there is something positive at the heart of social justice, such that it isn’t just one side trying to grab power from another to push a different set of norms. That is often what its dynamics look like to me in practice, whatever the intent of social justice advocates, and I don’t find such battles compelling: why grant one group power rather than another, all else equal, if they will push for the things they want to the exclusion of those then out of power, just as those currently in power do to those seeking to gain it?
You may have heard of the poverty trap, where you have so little money that you’re not able to spend any money on the things you need to make more. Being broke is an attractor state.
You may have heard of the loneliness trap. You haven’t had much social interaction lately, which makes you feel bad and anxious. This anxiety makes it harder to engage in social interaction. Being lonely is an attractor state.
I think the latter is a close cousin of something that I’d like to call the irrelevance trap:
Lemma 1: having responsibilities is psychologically empowering. When others depend on your decisions, it is so much easier to make the right decision.
Lemma 2: being psychologically empowered makes it more likely for you to take on responsibility, and for others to give you responsibility, because you’re more able to handle it.
I speculate that some forms of depression (the dopaminergic type) are best understood as irrelevance traps. I’m pretty sure that that was the case for me.
How do you escape such a trap? Well you escape a loneliness trap by going against your intuition and showing up at a party. You escape an irrelevance trap by going against your intuition and taking on more responsibility than you feel you can handle.
I like this direction of thought. Note that for all of these traps, success is more often a matter of improvement rather than binary change or “escape from trap”. And persistence plays a large role—very few improvements come from a single attempt.
Sounds very close to what Peterson says
He does influence my thinking
Been thinking all day about why LW does not spark joy anymore for me. I left about 2 years ago to seek a meaningful social life elsewhere, but honestly what I found was… mostly a barren wasteland. Now, with enough slack to come back to LW with a lot of energy, I simply find that my heart is not in it. Why?
One problem is, well I’ve essentially settled for the Netherlands as my base, but no matter how much I tried with the meetups, I never quite managed to breathe life into the local LW community. I mean I did get a regular attendance of 15 people after 2 years of consistent meetups, but when I came back after a 6 month hiatus in Blackpool it was all gone, and I had to start over. I haven’t been motivated to organise meetups ever since.
Just interact online? Not meaningful. I’ve invested way too many skill/meaning points into charisma (body language being a big part of that) to throw all that away. Not even covid was reason enough for me to stop seeing people physically even temporarily.
I have some other gripes with LW, and the core cause of most of those is that rationalists tend to be more neurotic than average, while I’m less neurotic, but c’mon, I’m not going to find a community that matches my personality even better than LW, and if I do it’s probably even more sparsely represented in Amsterdam
I’m basically out of ideas. Help is welcome
I did all the epistemic virtue. I rid myself of my ingroup bias. I ventured out on my own. I generated independent answers to everything. I went and understood the outgroup. I immersed myself in lots of cultures that win at something, and I’ve found useful extracts everywhere.
And now I’m alone. I don’t fully relate to anyone in how I see the world, and it feels like the inferential distance between me and everyone else is ever increasing. I’ve lost motivation for deep friendships, it just doesn’t seem compatible with learning new things about the world. That sense of belonging I got from LessWrong is gone too. There are a few things that LW/EA just doesn’t understand well enough, and I haven’t been able to get it across.
I don’t think I can bridge this gap. Even if I can put things to words, they’re too provisional and complicated to be worth delving into. Most of it isn’t directly actionable. I can’t really prove things yet.
I’ve considered going back. Is lonely dissent worth it? Is there an end to this tunnel?
Can’t speak for anyone else, and I don’t know what my counterfactual selves feel like. https://www.lesswrong.com/posts/6NvbSwuSAooQxxf7f/beware-of-other-optimizing—I don’t know if you and I are similar in ways that matter on this topic. In fact, I don’t know what mental features are important for how to optimize on this topic. Anyway, this is not advice, simply a framing that works for me.
For me, I believe it’s worth it. The tunnel widens a lot, and has LOTS of interesting features in it, but it does not end—there is a fairly fundamental truth underlying that loneliness, and I don’t know of any acceptable ways for me to deny or forget that truth (to myself).
I’ve become hyper-aware of the complexity and variance in humanity, and in myself moment-to-moment and year to year. This makes me quite able to have deep connections with many people, EVEN WHILE understanding that they model the universe differently than I on many dimensions. We can’t have and don’t need agreement on everything, or even on ontologically fundamental topics. We can agree that sitting around a campfire talking about our human experiences is desirable, and that’s enough. With other groups, I can explore moral philosophy without a realism assumption, even if I don’t particularly want to hang out with them on less intellectual topics.
I have often felt similarly.
The sense of ungroundedness sine waves over time afaict. The old strategies for connection had untenable foundations (e.g. tacit shared metaphysical assumptions), so you’ll need to learn new ones. The Charisma Myth and NVC are good for bootstrapping some of the skills. Motivation in the new regime can’t arise because you don’t have even the proto-skills necessary to get a rewarding feedback loop going yet.
Here’s a faulty psychological pattern that I recently resolved for myself. It’s a big one.
I want to grow. So I seek out novelty. Try new things. For example I might buy high-lumen light bulbs to increase my mood. So I buy them, feel somewhat better, celebrate the win and move on.
Problem is, I’ve bought high-lumen bulbs three times in my life now already, yet I sit here without any. So this pattern might happen all over again: I feel like upgrading my life, get this nice idea of buying light bulbs, buy them, celebrate my win and move on.
So that’s four life-upgrades, but did I grow four times? Obviously I only grew once: from not having high-lumen light bulbs to having them.
My instinct towards growth seems to think this:
But in reality, it seems to be more like this:
which I define as equal to
The TAP I installed to put this preservation mindset into practice seems to be very helpful. It’s as follows: if I wonder what to do, instead of starting over (“what seems like the best upgrade to add to my life?”), I first check whether I’m on track with the implementation of past good ideas (“what did my past self intend to do with this moment again?”).
Funnily enough, so far the feeling I get from this mindset seems pretty similar to the feeling I get from meditation. And meditation can be seen as training yourself to put your attention on your past intentions too.
I think this one goes a lot deeper than what I’ve written here. I’ll be revisiting this idea.
Presumably not the main point, but what ends up happening to your luminators?
Moved to a new country twice, they broke once.
But the real cause is that I didn’t regard these items as my standard inventory, which I would have done if I had more of a preservation mindset.
I have gripes with EA’s that try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.
It seems quite obvious to me that these people equate their ability to empathize with an animal with the ability for the animal to be conscious, and it seems quite obvious to me that this is a case of mind projection fallacy. Empathy is just a simulation. You can’t actually see another mind.
If you’re going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don’t look at animal behavior at all. We have absolutely no reason to believe that behavior correlates with consciousness. That’s just your empathy getting in the way. The same empathy that attributes feelings to stuffed animals.
Not to be pedantic, but what else could consciousness possibly be, except for a way of describing the behavior of some object at a high level of abstraction?
If consciousness was not a behavior, but instead was some intrinsic property of a system, then you run into the exact same argument that David Chalmers uses to argue that philosophical zombies are conceivable. This argument was forcefully rebutted in the sequences.
ETA: When I say behavior, I mean it in the physical sense. A human who is paralyzed but nonetheless conscious would not be behaviorally identical to a dead human. Superficially yes, but behavior means more than seeing what goes on outside. While you might say that I’m simply using a different definition of behavior than you were, I think it’s still relevant, because any evolutionary reason for consciousness must necessarily show up in observational behavior, or else there is no benefit and we have a mystery.
It could be something that is primarily apparent to the person that has it.
That runs together two claims: that consciousness is not behaviour, and that it is independent of physics. You don’t have to accept the second claim in order to accept the first.
And it remains the case that Chalmers doesn’t think zombies are really possible.
“Primarily accessible to the person that has it” does not mean “no behavioural consequences”.
I’m not convinced that this definition is sufficiently clear, or that consciousness should be defined this way. Rather, it’s a property of consciousness that people claim that it’s “readily apparent”, but I am not convinced that it has this readily apparent quality to it.
In general, rather than taking the properties of consciousness at face value, I take Dennett’s approach of evaluating people’s claims about consciousness from a behavioral science perspective. From my perspective, once you’ve explained the structure, dynamics, and the behavior, you’ve explained everything.
Are you sure? Chalmers argues in Chapter 3 of The Conscious Mind that zombies are logically possible. I am not really even sure what force the Zombie argument could hold if he thought it was not logically possible.
It’s not intended to be a complete definition of consciousness, just a nudge away from behaviourism.
From my point of view, that’s missing the central point quite badly.
And elsewhere that they are metaphysically impossible.
Could you let me know why? What about consciousness is missed by a purely behavioral description of an object (keeping in mind that what I mean by behavior is very broad, and includes things like the behavior of electrical signals)?
What is missed is the way it seems from the inside, as I pointed out originally. I don’t have to put my head into an fMRI to know that I am conscious.
I think people who refer to animal behavior in making statements about consciousness are making a claim more along the lines of “given that a being has a brain with superficial similarities to ours and was evolved via a process similar to our own evolution, we can take its behavior as higher-level indicators of what its brain is doing and infer things about consciousness.” Otherwise, these people would also grant consciousness to all sorts of things we make with superficially human behavior but obviously different mechanisms (i.e. non-playable characters in MMOs, chatbots).
I read a lot more about consciousness back in the day and I’m not convinced that neural correlates are any better evidence for consciousness than behavior, given that the beings we’re considering already have brains. I’m no expert, but per the Wikipedia page on neural correlates of consciousness, we don’t have much in terms of neural correlates:
Per Open Philanthropy’s 2017 report on consciousness, regarding cortex-requiring views (CRVs), we’re not really sure how important having a cortex is for consciousness:
And from the same report, there aren’t really any clear biological factors that can be used to draw lines about consciousness:
Moreover, people who have done way more thorough research into correlates of consciousness than me use both (e.g. anatomical features as an example of neural correlates, and motivational trade-offs as an example of behavior). Given that animals already have a bunch of similarities to humans, it strikes me as a mistake not to consider behavior at all.
A reductio ad absurdum for this is the strong skeptical position: I have no particular reason to believe that anything is conscious. All configurations of quantum space are equally valuable, and any division into “entities” with different amounts of moral weight is ridiculous.
The strong version of this can’t be true. You claiming that you’re conscious is part of your behaviour. Hopefully, it’s approximately true that you would claim that you’re conscious iff you believe that you’re conscious. If behaviour doesn’t at all correlate with consciousness, it follows that your belief in consciousness doesn’t at all correlate with you being conscious. Which is a reductio, because the whole point with having beliefs is to correlate them with the truth.
Right, right. So there is a correlation.
I’ll just say that there is no reason to believe that this correlation is very strong.
I once won a Mario Kart tournament without feeling my hands.
I decided to quit my job.
Still have various options for what to do next, but most likely I will spend at least a year trying to build a large rationality community in Amsterdam. I’m talking 5+ events a week, a dedicated space, membership program, website, retreats, etc.
The emphasis will be on developing applied rationality. My approach will be to cover many different paradigms of self-improvement. My hunch is that one will start noticing patterns that these paradigms have in common.
I’m thinking authentic relating, radical honesty, CFAR-style applied rationality, shadow work, yoga/meditation, psychedelic therapy, street epistemology, tantra, body work, nonviolent communication, etc. If you know anything that would fit in this list, please comment!
This would be one pillar of the organisation, and the other one would be explicitly teaching an Effective Altruist ethos to justify working on rationality in the first place.
If this goes really well, I’m hoping this will develop into something like “the CFAR of Europe” at some point.
Much of the list is mutually exclusive/contradictory.
Or rather, there is good stuff that can be mined from all of the above, and incorporated into a general practice of rationality, but there’s a big difference between saying:
“rationality can be extracted from X, Y, and Z”
“X, Y, and Z are good places to look for rationality”
“X, Y, and Z are examples of rationality.”
The first is almost trivially true of all things, so it also seems straightforwardly true of the list “authentic relating, shadow work, yoga/meditation, psychedelic therapy, tantra, body work, etc.”
The second, applied to your list, is a gray area where the word “good” would be doing a lot of work. Searching for rationality in [authentic relating, shadow work, yoga/meditation, psychedelic therapy, tantra, body work, etc.] is not a straightforward process of separating the gold from the dross; it’s a process of painstakingly distinguishing the gold from the fool’s gold while breathing in an atmosphere specifically evolved to confuse and poison you. It’s not clear that these domains in particular should be promoted to the attention of someone seeking to develop rationality tech, over other plausible avenues of investigation.
The third, applied to your list, is substantially more false than true (though it still contains some truth).
I think this is important to note less because it matters to me what you’re doing off in Europe with a project you feel personally inspired by, and more because it matters to me how we allow the word “rationality” to be defined and used here on LessWrong.
As stated, you’ve said some stuff that’s compatible with how I think we ought to define and use the word “rationality,” but you’ve definitely not ruled out everything else.
Appreciate the criticism.
I agree with you that we need to separate the good stuff from the bad stuff and that there is a risk here that I end up diluting the brand of rationality by not doing this well enough.
My intuition is that I’m perfectly capable of doing this, but perhaps I’m not the best person to make that call, and I’m reminded that you’ve personally called me out in the past on being too lax in my thinking.
I feel like you have recently written a lot about the particular way in which you think people in LW might be going off the rails, so I could spend some time reading your stuff and trying to pass your ITT.
Does that sound like a good plan to you?
I mean, that’s unusually generous of you. I don’t think [my objection] obligates you to do [that much work]. But if you’re down, I’m down.
It doesn’t, but I tend to go with the assumption that if one person voices an objection, there are 100 more with the same objection that don’t voice it
I put this on my to do list, might take a few weeks to come back to it, but I will come back to it
Yep, I was going to write something similar to what Duncan did. Some topics seem so strongly connected with irrationality that if you mention them, you will almost inevitably attract people already interested in that topic, including the irrational parts, and those people will be coordinated about the irrational parts. While you will be pushing in the direction of rationality, they will be actively pushing in different directions; how sure are you that you will win all these battles? I assume that creating a community that is 50% rational and 50% mystical would not make you happy. Maybe not even 80% rational and 20% mystical.
One of my pet peeves about the current rationalist community is what seems to me like quite uncritical approval of Buddhism. Especially when contrasted with our utter dismissal of Christianity, which only makes a contrarian exception for Chesterton. (I imagine that Chesterton himself might ask us whether we are literally atheists, or merely anti-Christians.) I am not saying that people here are buying Buddhism hook, line, and sinker; but they are strongly privileging a lot of stuff that comes from it. Like, you can introduce a concept from Buddhism, and people will write articles about how it actually matches our latest cognitive science or some kind of therapy. Do the same with a concept from Christianity, and you will get some strongly worded reprimand about how dangerous it is to directly import concepts from a poisonous memeplex, unless you carefully rederive it from first principles, in which case it is unlikely to be 100% the same, and it is better to use a different word to express it.
I will end my rant here, just saying that your plan sounds to me like the same thing might happen with Buddhism and Hinduism and New Age and whatever at the same time, so I would predict that you will lose some of the battles. And ironically, you may lose some potential rationalists, as they will observe you losing these battles (or at least not winning them conclusively) and decide that they would prefer some place with stronger norms against mysticism.
(Then again, things are never easy, there is also the opposite error of Hollywood rationality, etc.)
I think it is worth someone pursuing this project, but maybe it’d make more sense to pursue it under the post-rationality brand instead? Then again, this might reduce the amount of criticism from rationality folk in exchange for increasing Viliam’s worries.
(The main issue I have with many post-rationalists is that they pay too little heed to Ken Wilber’s pre/trans fallacy).
Cause X candidate:
Buddhists claim that they can put brains in a global maximum of happiness, called enlightenment. Assuming that EA aims to maximize happiness plain and simple, this claim should be taken seriously. It currently takes decades for most people to reach an enlightened state. If some sort of medical intervention can reduce this to mere months, this might drive mass adoption and create a huge amount of utility.
This sounds like a Pascal’s Wager argument. Christians, after all, claim they can put you in a global maximum of happiness, called heaven. What percentage of our time is appropriate to spend considering this?
I’m not saying meditation is bunk. I’m saying there has to be some other reason why we take claims about it seriously, and the official religious dogma of Buddhists is not particularly trustworthy. We should base interest in meditation on our model of users who make self-reports, at the very least—and these other reasons for interest in meditation do not support characterizations like “global maximum of happiness.”
I don’t think “if you do this you’ll be super happy (while still alive)” is comparable to “if you do this you’ll be super happy (after you die)”. The former is testable, and I have close friends who have already fully verified it for themselves. I’ve also noticed in myself a superlinear relation between meditation time and likelihood to be in a state of bliss, and I have no reason to think this relation won’t hold when I meditate even more.
The Buddha also urged people to go and verify his claims themselves. It seems that the mystic (good) part of Buddhism is much more prominent than the organised religion (bad) part, compared to Christianity.
It seems contradictory to preach enlightenment since you could be working towards enlightenment instead of preaching. Evangelical Christians don’t have that issue.
Pascal was right, Christianity is the rational choice, even compared to all other religions.
It sounds like your argument would also favor wireheading, which I think the community mostly rejects.
But could you explain why you think wireheading is bad?
Besides, I don’t think the comparison is completely justified. Enlightenment isn’t just claimed to increase happiness, but also intelligence and morality.
I may have finally figured out the use of crypto.
It’s not currency per se, but the essential use case of crypto seems to be to automate the third party.
This “third party” can be many things. It can be a securities dealer or broker. It can be a notary. It can be a judge that is practicing contract law.
Whenever there is a third party that somehow allows coordination to take place, and the particular case doesn’t require anything but mechanical work, then crypto can do it better.
A securities dealer or broker doesn’t beat a protocol that matches buyers and sellers automatically. A notary doesn’t beat a public ledger. A judge in contract law doesn’t beat an automatically executed verdict, previously agreed upon in code.
(like damn, imagine contracts that provably have only one interpretation. Ain’t that gonna put lawyers out of business)
And maybe a bank doesn’t beat peer to peer transactions, with the caveat that central banks are pretty competent institutions, and if anyone will win that race it is them. While I’m optimistic about cryptocurrency, I’m still skeptical about private currency.
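To make that “automate the third party” idea concrete, here is a minimal escrow sketch (Python purely for illustration; a real version would be an on-chain smart contract, and every name here is made up):

```python
class EscrowContract:
    """Toy stand-in for an on-chain escrow contract: funds are released
    mechanically, by code both parties agreed to in advance, with no broker,
    notary, or judge in the loop."""

    def __init__(self, buyer, seller, amount, condition):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.condition = condition  # predicate agreed upon up front
        self.settled = False

    def settle(self, evidence):
        """Execute the pre-agreed verdict: pay the seller if the condition
        holds, refund the buyer otherwise."""
        if self.settled:
            raise RuntimeError("contract already settled")
        self.settled = True
        recipient = self.seller if self.condition(evidence) else self.buyer
        return recipient, self.amount


# Example: release payment only if delivery is confirmed.
contract = EscrowContract("alice", "bob", 100, lambda e: e.get("delivered", False))
print(contract.settle({"delivered": True}))  # ('bob', 100)
```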
Here’s an idea: we hold the Ideological Turing Test (ITT) world championship. Candidates compete to pass an as broad range of views as possible.
Points awarded for passing a test are commensurate with the number of people that subscribe to the view. You can subscribe to a bunch of them at once.
The awarding of failures and passes is done anonymously. Points can be awarded partially, according to what % of judges give a pass.
The winner is made president (or something)
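A minimal sketch of the scoring rule described above, with made-up numbers (the weighting by number of subscribers and the partial credit by judge pass-rate come from the description; everything else is illustrative):

```python
def itt_score(results):
    """Score one candidate in the hypothetical ITT tournament.

    `results` maps each view to (subscribers, verdicts), where `verdicts`
    is the anonymous list of judge pass/fail calls. Each view is worth
    points commensurate with how many people hold it, awarded partially
    according to the fraction of judges who gave a pass.
    """
    total = 0.0
    for subscribers, verdicts in results.values():
        pass_rate = sum(verdicts) / len(verdicts) if verdicts else 0.0
        total += subscribers * pass_rate
    return total


# Toy example with made-up numbers.
print(itt_score({
    "view_a": (1_000_000, [True, True, False]),    # 2/3 of judges passed it
    "view_b": (250_000, [True, True, True, True]),  # unanimous pass
}))  # roughly 916,667
```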
Why aren’t presidential races already essentially ITT Tournaments? It would seem like that skill would make you really good at drawing support from lots of different demographics.
That basically reads like “just so you know, I’m kidding”.
It’s supposed to read like “this idea is highly unpolished”
Organizing tournaments is something that requires little resources. Appointing presidents on the other hand requires resources that are fully out of reach.
When talking about colonizing Mars, speaking about using warp drives to get there doesn’t read like “the idea is highly unpolished”.
Look, if you can’t appreciate the idea because you don’t like its delivery, you’re throwing away a lot of information.
Giving specific criticism of proposals that people make isn’t throwing away information.
It seemed like Toon was trying to say “I have a cool idea for a tournament but haven’t thought about a good prize”
Prizes are not constraints for running tournaments and treating them like they are makes it harder to bring a tournament to reality.
I guess it depends on why you want to run the tournament.
As someone who never came across religion before adulthood, I’ve been trying to figure it out. Some of its claims seem pretty damn nonsensical, and yet some of its adherents seem pretty damn well-adjusted and happy. The latter means there’s gotta be some value in there.
The most important takeaway so far is that religious claims make much more sense if you interpret them as phenomenological claims. Claims about the mind. When Buddhists talk about the 6 worlds, they talk about 6 states of mood. When Christians talk about a covenant with God, they talk about sticking to some kind of mindset no matter what.
Back when this stuff was written, people didn’t seem to distinguish between objective reality and subjective experience. The former is a modern invention. Either that, or this nuance has been lost in translation over the centuries.
My shortform on religion being about belief taxes may interest you
So here are two extremes. One is that human beings are a complete lookup table. The other is that human beings are perfect agents with just one goal. Most likely both are somewhat true. We have subagents that are more like the latter, and subsystems more like the former.
But the emphasis on “we’re just a bunch of hardcoded heuristics” is making us stop looking for agency where there is in fact agency. Take for example romantic feelings. People tend to regard them as completely unpredictable, but it is actually possible to predict to some extent whether you’ll fall in and out of love with someone based on some criteria, like whether they’re compatible with your self-narrative and whether their opinions and interests align with yours, etc. The same is true for many intuitions that we often tend to dismiss as just “my brain” or “neurotransmitter xyz” or “some knee-jerk reaction”.
There tends to be a layer of agency in these things. A set of conditions that makes these things fire off, or not fire off. If we want to influence them, we should be looking for the levers, instead of just accepting these things as a given.
So sure, we’re godshatter, but the shards are larger than we give them credit for.
Neither one of those assumptions is true. There’s a lot we don’t know about neuroscience, but we do know that we’re not giant lookup tables, and we don’t have singular goals.
(Citation needed)
I think both of those assumptions are unlikely, but am skeptical of your certainty.
For a citation, go open any neuroscience textbook. We are made of associative memories, but we are not one giant associative look-up table. These are vastly different architectures, and the proof is simply that neurons are locally acting. For humans to be a giant lookup table would require neural networking that we objectively do not have.
As for the claim about singular goal systems, I could point to many things, but maybe Maslow’s hierarchy of needs (a hierarchy of separate, factorable goals!) is sufficient:
https://en.wikipedia.org/wiki/Maslow’s_hierarchy_of_needs
It’s interesting because Maslow’s Hierarchy actually seems to point to the exact opposite idea to me. It seems to point to the idea that everything we do, even eating food, is in service of eventual self-actualization.
This is of course ignoring the fact that Maslow seems to basically be false experimentally.
Prioritization of goals is not the same as goal unification.
Citation?
It can be if the basic structure is “I need to get my basic needs taken care of so that I can work on my ultimate goal”.
I think Kaj has a good link on experimental proof for Maslow’s Hierarchy.
I also think that it wouldn’t be a stretch to call self-determination theory a “single goal” framework, that goal being “self-determination”, which is a single goal made up of 3 separate subgoals which, crucially, must be obtained together to create meaning. (If they could be obtained separately to create meaning, and people were OK with that, then I don’t think it would be fair to categorize it as a single-goal theory.)
That’s a fully generic response though. Any combination of goals/drives could have a (possibly non-linear) mapping which turns them into a single unified goal in that sense, or vice versa.
Let me put it more simply: can achieving “self-determination” alleviate your need to eat, sleep, and relieve yourself? If not, then there are some basic biological needs (maintenance of which is a goal) that have to be met separately from any “ultimate” goal of self-determination. That’s the sense in which I considered it obvious we don’t have singular goal systems.
Yeah, I think that if the brain in fact is mapped that way it would be meaningful to say you have a single goal.
Maybe, it depends on how the brain is mapped. I know of at least a few psychology theories which would say things like avoiding pain and getting food are in the service of higher psychological needs. If you came to believe for instance that eating wouldn’t actually lead to those higher goals, you would stop.
I think this is pretty unlikely. But again, I’m not sure.
This BBC article discusses it a bit:
Of course, this doesn’t really contradict your point of there being separable, factorable goals. AFAIK, the current mainstream model of human motivation and basic needs is self-determination theory, which explicitly holds that there exist three separate basic needs:
Thanks, I learned something.
Although, for the purposes of this discussion, it seems that Maslow’s specific factorization of goals is questionable, not the general idea of a hierarchy of needs. Does that sound reasonable?
Well, it sounds to me like it’s more of a heterarchy than a hierarchy, but yeah.
These aren’t contradictory or extremes of a continuum, they’re different levels of description of agency. A complete enough lookup table is indistinguishable from having goals. A deep enough neural net (say, a brain) is a pretty complete lookup table.
The “one goal” idea is a slightly confused modeling level—“goal” isn’t really unitary or defined well enough to say whether a set of desires is one conjunctive goal or many coordinated and weighted goals.
If you’re struggling with confidence in the LW crowd, I recommend aiming for doing one thing well, instead of trying too hard to prevent criticism. You will inevitably get criticism and it’s better to embrace that.
A thought experiment: would you A) murder 100 babies or B) murder 100 babies? You have to choose!
I’ve been trying to figure out what finance really is.
It’s not resource allocation between different people, because the intention is that these resources are paid back at some point.
It’s rather resource re-allocation between different moments in one person’s life.
Finance takes money from a time-slice of you that has it, and gives it to a time-slice of you that can best spend it.
Optimal finance means optimal allocation of money across your life, regardless of when you earn it.
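A toy illustration of that allocation-across-time idea, under assumptions that are entirely illustrative (two periods, log utility, no discounting, a 10% borrowing rate): someone who earns nothing now and 110 later can borrow against future income to smooth consumption.

```python
def smooth_two_periods(income, r):
    """Split lifetime wealth between two periods under log utility and no
    discounting, where saving/borrowing happens at interest rate r.
    The optimum satisfies c2 = (1 + r) * c1, i.e. each period gets an equal
    share of lifetime wealth in present-value terms."""
    y1, y2 = income
    wealth = y1 + y2 / (1 + r)   # present value of lifetime income
    c1 = wealth / 2
    c2 = (1 + r) * c1
    return c1, c2


# Earn nothing now and 110 next period; borrow at 10% to consume now.
print(smooth_two_periods((0, 110), r=0.10))  # roughly (50.0, 55.0)
```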
That’s part of it, but there’s also coordination between people (e.g., investors coming together to finance a capital-intensive business that no single person can afford to fund) and managing risk and incentives (e.g., a sole proprietorship has better incentives but worse risk characteristics compared to a company with many shareholders, a company issuing both stocks and bonds so investors can make their own risk/reward tradeoff, etc.).
I think maybe something like “finance is about how a group of people can cooperate to pursue EU maximization over time, given some initial endowment of assets and liabilities” would capture most of it?
Well, yes but no. The point is that these other people are merely means; optimally distributing your assets over time is a means that screens off the other people, in a sense. Assuming that people are really just optimizing for their own value, they might trade for that, but in the end their goal is their own allocation.
Ok, I think that makes sense.
When exploring things like this, taboo the phrase “really is”. No abstraction really is anything. You might ask “how can I use this model”, or “what predictions can I make based on this” or “what was that person trying to convey when they said that”.
Less philosophically, you’re exploring an uncommon use of the word “finance”. Much more commonly, it’s about choice of mechanism for the transfer, not about the transfer itself. What you describe is usually called “saving” as opposed to “consumption”. It’s not finance until you’re talking about the specific characteristics of the medium of transfer. Optimal savings means good choices of when to consume, regardless of when you earn. Optimal finance is good placement of that savings (or borrowing, in the case of reverse-transfer) in order to maximize the future consumption.
Sure. In this case “what it really is” means “what does it optimize for, why did people invent it”
I think putanumonit wrote a post that made a similar point. Might be useful in this investigation.
I’m thinking of postmodernism and modernism not as being incompatible but as being the two components of a babble and prune system on a societal level
Personalized mythic-mode rendition of Goodhart’s law:
“Everyone wants to be a powerful uncompromising force for good, but spill a little good and you become a powerful uncompromising force for evil”
Question for the Kegan levels folks: I’ve noticed that I tend to regress to level 3 if I enter new environments that I don’t fully understand yet, and that this tends to cause mental issues because I don’t always have the affirmative social environment that level 3 needs. Do you relate? How do you deal with this?
By dealing with trauma and taking shame and guilt as object.
By incorporating the need for belonging into my systemic understanding.
I had this for a long time but it’s pretty rare now.