The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism.
Why I Grew Skeptical of Transhumanism
Why I Grew Skeptical of Immortalism
Why I Grew Skeptical of Effective Altruism
Only Game in Town
Wonderland’s rabbit said it best: “The hurrier I go, the behinder I get.”
We approach 2016, and the more I see light, the more I see brilliance popping up everywhere, the Effective Altruism movement growing, TEDs and Elons spreading the word, the more we switch our heroes in the right direction, the behinder I get. But why? you say.
Clarity, precision, I am tempted to reply. I have left the intellectual suburbs of Brazil and moved straight into the strongest hub of production of things that matter, the Bay Area, via Oxford’s FHI office; I now split my time between UC Berkeley and the CFAR/MIRI office. In the process, I have navigated an ocean of information: read hundreds of books and papers, watched thousands of lectures, and become proficient in a handful of languages and a handful of intellectual disciplines. I have visited Olympus and met our living demigods in person as well.
Against the overwhelming forces of an extremely upbeat personality surfing a hyper base-level happiness, these three forces (approaching the center, learning voraciously, and meeting the so-called heroes) have brought me to my current state of pessimism.
I was a transhumanist, an immortalist, and an effective altruist.
Why I Grew Skeptical of Transhumanism
The transhumanist in me is skeptical that technological development will be fast enough for improving the human condition to be worth pursuing now; he sees most technologies as fancy toys that don’t get us there. Our technologies can’t, and won’t for a while, lead our minds to peaks anywhere near the peaks we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to change from an engineering perspective. We can induce a rainbow, but we don’t even have the concept of force yet. Our knowledge about the brain, relative to our goals for the brain, is at the level of physics knowledge of someone who has found out that spraying water on a sunny day produces a rainbow. It’s not even physics yet.
Believe me, I have read thousands of pages of papers on the most advanced topics in cognitive neuroscience; my advisor spent his entire career, from Harvard to tenure, doing neuroscience, and was the first person to implant non-human neurons that actually healed a brain to the point of recovering functionality. As Marvin Minsky, who invented the multi-agent computational theory of mind, told me: “I don’t recommend entering a field where every four years all knowledge is obsolete; they just don’t know it yet.”
Why I Grew Skeptical of Immortalism
The immortalist in me is skeptical because he understands the complexity of biology, from conversations with the centimillionaires and the chief scientists of anti-ageing research facilities worldwide. He has met the bio-startup founders and gets that the structure of incentives does not look good for bio-startups anyway. So although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder than the man-hours left to be invested in it can surmount, at least during my lifetime, or before the Intelligence Explosion.
Believe me, I was the first cryonicist among the 200 million people striding my country, won a prize for anti-ageing research at the bright young age of 17, and hang out on a regular basis with all the people in this world who want to beat death and still share in our privilege of living, just in case some new insight comes along that changes the tides. But none has come in the last ten years, as our friend Aubrey will be keen to tell you in detail.
Why I Grew Skeptical of Effective Altruism
The Effective Altruist in me is skeptical too, although less so; I’m still founding an EA research institute, keeping a loving eye on the one I left behind, living with EAs, working at EA offices, and mostly broadcasting ideas and researching with EAs. Here are some problems with EA which make me skeptical after being shaken around by the three forces:
1. The Status Games: Signalling, countersignalling, going one more meta-level up, outsmarting your opponent, seeing others as opponents, my cause is the only true cause, zero-sum mating scarcity, pretending that poly eliminates mating scarcity, founders versus joiners, researchers versus executives, us institutions versus them institutions, cheap individuals versus expensive institutional salaries; it’s gore all the way up and down.
2. Reasoning by Analogy: Few EAs are able to do, and are actually doing, their due intellectual diligence. I don’t blame them: the space of Crucial Considerations is not only very large but extremely uncomfortable to look at. Who wants to know that our species has not even found the stepping stones to make sure that what matters is preserved and guaranteed at the end of the day? It is a hefty ordeal. Nevertheless, it is problematic that fewer than 20 EAs (one in 300?) are actually reasoning from first principles, thinking all things through from the very beginning. Most of us are looking away from at least some philosophical assumption or technological prediction. Most of us are cooks and not yet chefs. Some of us have not even woken up yet.
3. Babies with a Detonator: Most EAs still carry their transitional objects around, clinging desperately to an idea or a person they take to be guaranteed true, be it hardcore patternism about philosophy of mind, global aggregative utilitarianism, veganism, or the expectation of immortality.
4. The Size of the Problem: No matter if you are fighting suffering, Nature, Chronos (death), Azathoth (evolutionary forces) or Moloch (deranged emergent structures of incentives), the size of the problem is just tremendous. One completely ordinary reason not to want to face the problem, or to be in denial, is the problem’s enormity.
5. The Complexity of the Solution: Let me spell this out: the nature of the solution is not simple in the least. It’s possible that we luck out and it turns out that the Orthogonality Thesis, the Doomsday Argument and Mind Crime are just philosophical curiosities with no practical bearing on our earthly engineering efforts, that the AGI or Emulation will by default fall into an attractor basin which implements some form of MaxiPok with details that it only grasps after CEV or the Crypto, and we will be OK. That is possible, and it is more likely than our efforts ending up being the decisive factor. We need to focus our actions on the branches where they matter, though.
6. The Nature of the Solution: So let’s sit down side by side and stare at the void together for a bit. The nature of the solution is getting a group of apes who just invented the internet, from everywhere around the world, to coordinate an effort that fills in the entire box of Crucial Considerations yet unknown (this is the goal of Convergence Analysis, by the way), finding every single last one of them to the point where the box is filled. Then, once we have all the Crucial Considerations available, we must develop, faster than anyone else trying, a translation scheme that translates our values to a machine or emulation, in a physically sound and technically robust way (that’s if we don’t find a Crucial Consideration that, say, steers our course towards Mars instead). Then we need to develop the engineering prerequisites to implement a thinking being smarter than all our scientists together, one that can reflect philosophically better than the last two thousand years of effort while becoming the most powerful entity in the universe’s history, and that will fall into the right attractor basin within mindspace. That’s if Superintelligences are even possible technically. Add to that that we, or it, have to guess correctly all the philosophical problems that are A) relevant and B) unsolvable within physics (if any) or by computers. And all of this has to happen while the most powerful corporations, states, armies and individuals attempt to seize control of the smart systems themselves, without being curtailed by the counter-incentive of not destroying the world, either because they don’t realize it, or because the first-mover advantage seems worth the risk, or because they are about to die anyway so there’s not much to lose.
7. How Large an Uncertainty: Our uncertainties loom large. We have some technical but not much philosophical understanding of suffering, and our technical understanding is insufficient to confidently assign moral status to other entities, especially if they diverge in more dimensions than brain size and architecture. We’ve barely scratched the surface of a technical understanding of how to increase happiness, and the philosophical understanding is also in its first steps.
8. Macrostrategy is Hard: A chess grandmaster usually takes many years to acquire sufficient strategic skill to command the title. It takes a deep and profound understanding of unfolding structures to grasp how to beam a message or a change into the future. We are attempting to beam a complete value lock-in into the right basin, which is proportionally harder.
9. Probabilistic Reasoning = Reasoning by Analogy: We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, both because the other people were cooks, not chefs, and because sometimes you actually need to try a one in ten thousand chance (see the toy expected-value sketch after this list). But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.
10. Excessive Trust in Institutions: Very often people go through a simplifying set of assumptions that collapses a brilliant idea into an awful donation, when they reason:
- I have concluded that cause X is the most relevant.
- Institution A is an EA organization fighting for cause X.
- Therefore I donate to institution A to fight for cause X.
To begin with, this is very expensive compared to donating to any of the three P’s: projects, people or prizes. Furthermore, the crucial moments to fund an institution are when it is about to die, when it is just starting, or when it is building a kind of momentum with a narrow window of opportunity, where the derivative gains are particularly large or you have private information about its current value. Agreeing with an institution about a cause being important is far from sufficient to assess the expected value of your donation.
11. Delusional Optimism: Everyone who, like past-me, moves in with delusional optimism will always have a blind spot in the feature of reality about which they are in denial. It is not a problem to have some individuals with a blind spot, as long as the rate doesn’t surpass some group sanity threshold. Yet, on an individual level, it is often the case that those who can gaze into the void a little longer than the rest end up being the ones who accomplish things. Staring into the void makes people show up.
12. Convergence of opinions may strengthen separation within EA: Thus far, the longer someone has been an EA, the more likely they are to transition to an opinion in the subsequent boxes of this flowchart, from whichever box they are in at the time. There are still people in all the opinion boxes, but the trend has been to move along that flow. Institutions, however, have a harder time escaping being locked into a specific opinion. As FHI moves deeper into AI, GWWC into poverty, 80k into career selection, etc., they become more congealed. People’s opinions are still changing, and some of the money follows, but institutions are crystallizing around particular opinions, and in the future they might prevent transition between opinion clusters and the free mobility of individuals, as national frontiers already do. Once institutions, which in theory are commanded by people who agree with institutional values, notice that their rate of loss towards the EA movement is higher than their rate of gain, they will have incentives to prevent the flow of talent, ideas and resources that has so far been a hallmark of Effective Altruism and the reason many of us find it impressive: its being an intensional movement. Any part that congeals or becomes extensional will drift off behind, and this may create insurmountable separation between groups that want to claim ‘EA’ for themselves.
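To make the arithmetic behind problem 9 concrete, here is a minimal sketch in Python with made-up numbers (none of these figures appear in the post); it only illustrates why a low base rate, on its own, is not a reason to abstain:

```python
# Toy expected-value comparison (all numbers are hypothetical).
p_success = 1e-4           # the "one in ten thousand" long shot
value_if_success = 10**7   # value of success, in arbitrary units of good done
safe_value = 500           # value of the safe, reference-class-approved option

ev_long_shot = p_success * value_if_success  # 1000 expected units
print(ev_long_shot > safe_value)             # True: the long shot wins in expectation
```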
Only Game in Town
The reasons above have transformed a pathological optimist into a wary skeptic about our future and about the value of our plans to get there. And yet, I don’t see any option other than continuing the battle. I wake up in the morning and consider my alternatives. Hedonism: well, that is fun for a while, and I could try a quantitative approach to guarantee maximal happiness over the course of the 300,000 hours I have left. But all things considered, anyone reading this is already too close to the epicenter of something that can become extremely important and change the world to have the affordance to wander off indefinitely. I look at my high base-happiness and don’t feel justified in maximizing it up to the point of no marginal return; there clearly is value elsewhere than here (points inwards), and the self of which I am made clearly has strong altruistic urges anyway, so at least above a threshold of happiness it has reason to purchase the extremely good deals in expected happiness of others that seem to be on the market. Other alternatives? Existentialism? Well, yes, we always have a fundamental choice and I feel the thrownness into this world as much as any Kierkegaard does. Power? When we read Nietzsche it gives the fantasy impression that power is really interesting and worth fighting for, but at the end of the day we still live in a universe where the wealthy are often reduced to spending their power in pathetic signalling games and zero-sum disputes, or coercing minds to act against their will. Nihilism and Moral Fictionalism, like Existentialism, all collapse into having a choice, and if I have a choice my choice is always going to be the choice to, most of the time, care, try and do.
Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals, and pragmatically only continue to be an EA.
It is the only game in town.
You sound unhappy. Do you still hold these conclusions when you are very happy?
You have correctly identified that I wrote this post while very unhappy. The comments, as you can see from their lighthearted tone, I wrote while pretty happy.
Yes, I stand by those words even now (that I am happy).
Are you saying don’t think probabilistically here? I’d love a specific post on just your thoughts on this.
Yes I am.
Step 1: Learn Bayes
Step 2: Learn reference class
Step 3: Read 0 to 1
Step 4: Read The Cook and the Chef
Step 5: Reason about why the billionaires say that the people who do it wrong are basically reasoning probabilistically
Step 6: Find the connection between that and reasoning from first principles, or the gears hypothesis, or whichever other term you have for when you use the inside view and actually think technically about a problem, from scratch, without looking at how anyone else did it.
Step 7: Talk to Michael Valentine about it, who has been reasoning about this recently and how to impart it at CFAR workshops.
Step 8: Find someone who can give you a recording of Geoff Anders’ presentation at EAGlobal.
Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world. Good luck!
Note that the billionaires disagree on this. Thiel says that people should think more like calculus and less like probability, while Musk (the inspiration for The Cook and the Chef) says that people think in certainties when they should think in probabilities.
Not my reading. My reading is that Musk thinks people should not consider the probability of succeeding as a spacecraft startup (0% historically) but instead should reason from first principles, such as thinking about what materials a rocket is made from and then building up the costs from the ground up.
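As an illustration of that contrast, here is a minimal sketch with invented figures (these are not Musk’s actual numbers, just placeholders for the shape of the reasoning):

```python
# Outside view: estimate your chances from the reference class of past attempts.
historical_successes = 0
historical_attempts = 30
outside_view_estimate = historical_successes / historical_attempts  # 0.0 -> "don't bother"

# First principles: price the rocket from its components instead of its reference class.
# All figures below are hypothetical, in millions of dollars.
component_costs = {"aluminium": 1.2, "titanium": 0.8, "carbon_fiber": 0.5,
                   "copper": 0.3, "fuel": 0.4}
first_principles_cost = sum(component_costs.values())    # ~3.2
market_price_of_rocket = 60.0                             # hypothetical going rate
print(first_principles_cost / market_price_of_rocket)     # raw materials are a small fraction
```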
First, I think we should separate two ideas.
Creating a reference class.
Thinking in probabilities.
“Thinking in probabilities” is a consistent talking point for Musk—every interview where he’s asked how he’s able to do what he does, he mentions this.
Here’s an example I found with a quick Google search:
So that covers probability.
In terms of reference class, I think what Thiel and Musk are both saying is that previous startups are really bad to use as a reference class for new startups. I don’t know if that means they generally reject the idea of reference classes, but it does give me pause in using them to figure out the chances of my company succeeding based on other similar companies.
I model probabilistic thinking as something you build on top of all this. First you learn to model the world at all (your steps 3-8), then you learn the mathematical description of part of what your brain is doing when it does all this. There are many aspects of normative cognition that Bayes doesn’t have anything to say about, but there are also places where you come to understand what your thinking is aiming at. It’s a gears model of cognition rather than the object-level phenomenon.
If you don’t have gears models at all, then yes, it’s just another way to spout nonsense. This isn’t because it’s useless, it’s because people cargo-cult it. Why do people cargo-cult Bayesianism so much? It’s not the only thing in the sequences. The first post, The Simple Truth, big parts of Mysterious Answers to Mysterious Questions, and basically all of Reductionism are about the gears-model skill. Even the name rationalism evokes Descartes and Leibniz, who were all about this skill. My own guess is that Eliezer argued more forcefully for Bayesianism than for gears models in the sequences because, of the two, it is the skill that came less naturally to him, and that stuck.
What would cargo-cult gears models look like? Presumably, scientism, physics envy, building big complicated models with no grounding in reality. This too is a failure mode visible in our community.
So for us to understand what you’re even trying to say, you want us to read a bunch of articles, talk to one of your friends, listen to a speech, and only then will we become EAs good enough for you? No thanks.
Diego points to a variety of resources that all make approximately the same point, which I’ll attempt to summarize: If you apply probabilistic “outside view” reasoning to your projects and your career, in practice this means copying approaches that have worked well for other people. But if it’s clear that an approach is working well, then others will be copying it too, and you won’t outperform. So your only realistic shot at outperforming is to find a useful and underused “inside view” way of looking at things.
(FYI, I’ve found that keeping a notebook has been very useful for generating & recording interesting new ideas. If you do it for long enough you can start to develop your own ontology for understanding areas you’re interested in. Don’t worry too much about your notebook’s structure & organization: embrace that it will grow organically & unpredictably.)
This is wrong. Human beings are not a pool of identical rational agents competing in the same game from the same starting point aiming for the same endpoint.
- People make mistakes, systematically.
- Most people start with less IQ than you, dear reader. You have an unfair advantage, so go exploit it using perfectly standard methods like getting a technology job.
- If you have particular tastes, ambitions or goals (you might not even know about them; some self-exploration is required) then you may be aiming for a prize that few other people are trying to claim.
If someone took the time to analyze lots of historically important inventors, entrepreneurs, and thinkers, I doubt the important common factor would be that they made fewer mistakes than others.
Yes, you can “outperform” without much difficulty if you consider getting a nice job to be “outperforming” or you change the goalposts so you’re no longer trying to do something hard.
I think this depends on reference class and what one means by ‘mistakes’. The richest financier is someone whose strategy is explicitly ‘don’t make mistakes.’ (Really, it’s “never lose money” plus the emotional willingness to do the right thing, even if it’s boring instead of clever.)
I think the heart of the disagreement here is the separation between things that are ‘known to someone’ and ‘known to no one’—the strategies one needs to discover what other people have already found are often different from the strategies one needs to discover what no one knows yet, and both of them are paths to success of varying usefulness for various tasks.
Depends on the investment class. Even Charlie Munger (Warren Buffett’s partner) says, “If you took our top fifteen decisions out, we’d have a pretty average record.”
Yes, even if success in the domain is basically about avoiding mistakes, I imagine that if there are huge winners in the domain they got there by finding some new innovative way to get their rate of mistakes down.
Nope, finance doesn’t work like that. The richest financier is one who (1) has excellent risk management; and (2) got lucky.
Notably, risk management is not about avoiding risks (and so, possible mistakes). It’s about managing risk—acknowledging that mistakes will be made and making sure they won’t kill you.
So, obviously ‘never’ is hyperbole on Buffett’s part.
I’ll buy that value investing stopped working as well because of increased investor sophistication and a general increase in asset prices. As a somewhat related example, daily S&P 500 momentum investing worked up until 1980, and now you need to track more sophisticated momentum measurements. But to quote Cliff Asness (talking about momentum investing, not value investing):
Getting a nice job, having a stable relationship, raising children well, having a good circle of friends that you like, and indulging your particular tastes is outperforming the average person.
Perhaps what you’re talking about is radical outperformance—“being Steve Jobs”, changing the world etc.
In my opinion seriously aiming for that kind of life is a massive mistake—there is no recipe for it, those who achieve it do so through extraordinary luck + skill + genetic advantages which cannot be reliably replicated by any method whatsoever.
There are lots of bits and pieces—e.g. the notes outlined above that two billionaires have signed on to.
Since when is a high probability of failure by itself a good reason not to do anything? If you’re a rational expected utility maximizer you do things according to their expected value, which means in some cases it makes sense to do things that initially seem impossible.
If you want to wuss out on life and take the path of least resistance, avoid all the biggest and most interesting bosses in the game, and live a life that has little greater challenge or purpose—fine by me. But frankly, if that’s the case I’ll have to tap out of this conversation, since it’s a bad use of my time and I don’t really want to absorb the attitudes of people like you, who explicitly state that they’re totally uninterested in accomplishing anything meaningful.
You can’t reload.
Thanks. I will give some of those articles a look when I have the chance. However, it isn’t true that every activity is competitive in nature. Many projects are cooperative, in which case it’s not necessarily a problem if you and other people are taking similar approaches and doing them well. We also shouldn’t overestimate the competition and assume that they are going to be applying probabilistic reasoning, when in reality we can still outperform by applying basic rules of rationality.
No, that’s if you want to understand why a specific LessWrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don’t care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts, and I provided them.
But the reference class of Diego’s thoughts contains more thoughts that are wrong than that are true. So on priors, you might want to ignore them :p
So, after all this learning about all the niggling details that keep frustrating all these grand designs, you still think an intelligence explosion is something that matters/is likely? Why? Isn’t it just as deus-ex-machina as the rest of this that you have fallen out from after learning more about it?
Not really. My understanding of AI is far from grandiose; I know less about it than about my own fields (Philo, BioAnthro). I’ve merely read all of FHI, most of MIRI, half of AIMA, Paul’s blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se; I don’t code, and I have only a coarse-grained understanding of it. But in the little research and time I had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system’s cognitive abilities can achieve. I have also not seen very robust evidence that would countenance the hypothesis of a fast takeoff.
The fact that we have not fully conceptually disentangled the dimensions of which intelligence is composed is mildly embarrassing, though, and it may be that AGI is a deus ex machina because actually, more as Minsky or Goertzel would have it and less as MIRI or LessWrong would, General Intelligence will turn out to be a plethora of abilities that have no single common denominator, often superimposed in a robust way.
But for now, nobody who is publishing seems to know for sure.
Beware the Dunning–Kruger effect.
Looking at the big picture, you could also say that there is no convincing evidence for a cap on the lifespan of a biological organism. Heck, some trees have been alive for over 10,000 years! Yet, once you look at the nitty-gritty details of biomedical research, it becomes clear that even adding just a few decades to the human lifespan is a very hard problem and researchers still largely don’t know how to solve it.
It’s the same for AGI. Maybe truly super-human AGI is physically impossible due to complexity reasons, but even if it is possible, developing it is a very hard problem and researchers still largely don’t know how to solve it.
I think you mistook my claim for sarcasm. I actually think I don’t know much about AI (not nearly enough to make a robust assessment).
I’d tend to disagree with this; we have a pretty good idea of how some areas of the brain work (V1 cortex), we are making good progress in understanding how other parts work (cortical microcircuits, etc.) and we haven’t seen anything to indicate that other areas of the brain work using extremely far-fetched and alien principles to what we already know.
But I always considered ‘understanding the brain’ to be a bit overrated, as the brain is an evolutionary hodge-podge, a big snowball of accumulated junk that’s been rolling down the slope for 500 million years. In the future we’re eventually going to understand the brain for sentimental reasons, but I’d give only 1% probability that understanding it is necessary for the intelligence explosion to occur. Already we have machines that are capable of doing tasks corresponding to areas of the brain that we have no idea of how they work. In fact we aren’t even sure how our machines work either! We just know they do. We’re far more likely to stumble upon AI than to create it through a forced effort of brain emulation.
We have unconfirmed, simplified hypotheses with nice drawings for how microcircuits in the brain work. They ignore more than a million things (literally: they have to ignore specific synapses, the multiplicity of synaptic connections, etc.; if you sum those things up and look at the model, I would say it ignores about that many things). I’m fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.
The only reason we understand V1 is that it is a retinotopic inverted map that has been through very few non-linear transformations (same for the tonotopic auditory areas); by V4, we are already completely lost. (For those who don’t know, the brain has between 100 and 500 areas depending on how you count, and we have a medium-confidence guess at a simplified model that applies well to two of them, and moderately well to some 10-25.) And even if you could say which functions V4 participates in most, this would not tell you how it does it.
All true points, but consider your V4 example. We have software that is gradually approaching mammalian-level ability for visual information processing (not human-level just yet, but our visual cortex is larger than most animals’ entire cortices, so that’s not surprising). So, as far as building AI is concerned, so what if we don’t understand V4 yet, if we can produce software that is that good at image processing?
I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate the multimodal aspects of these modules into a coherent being that thinks it has a self, goals and an identity, and that can reason about morality. That’s what I tried to address in my FLI grant proposal, which was rejected (by the way, correctly so: it needed the latest improvements, and clearly, if they actually needed it, AI money should reach Nick, Paul and Stuart before our team). We’ll be presenting it in Oxford, tomorrow?? Shhh, don’t tell anyone; here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit
I think one possible key to regaining your motivation would be to apply the counter-objections of point 9 to the overall objections of the entire post.
The technology does exist. In hypnosis, we do party tricks including the effects of the weirdly shaped molecules. Think about this redirect. We do lucid dreaming. We do all the cool stuff from eastern meditations and some that probably haven’t been done before (“ultra-height”). We can do everything the human mind is capable of experiencing. We produce usable social/dating/relationship advice, sports training, sales training, therapy, anything that involves using your brain better. We can redesign our own personality.
It sounds like magic, but it’s just sufficiently advanced. When installing a Death Star power core in the root chakra does exactly what I expect, the only observation of objective reality is that the brain figured out what I’m trying to do and made it happen somehow. It could be a fun research topic to find out the neurology of different techniques, but the current dominant scientific theory says hypnosis is a form of placebo.
That sounds like a wildly overreaching claim. We can do that now / in the near future? I don’t think so.
/blinks. What do you expect installing a Death Star power core in the root chakra to do?
(will it let you shoot death rays out of your ass?)
At a local NLP seminar everything was fun while they re-induced drug experiences. They went on building up and increasing the intensity. After a while the intensity reached the point where a person dropped unconscious and stayed that way for 10 minutes, and the fun was over.
To me it seems possible to raise the intensity of experiences via hypnosis very far.
Getting people drunk/high is one of the classics of stage hypnosis. What steps have you taken to observe reality before reaching that conclusion?
Establish and maintain a higher baseline of subjective well-being. People already have concepts like “chi” or “mental energy”; a generator produces more energy; and the “root chakra” is “where energy enters the body”. I know that last one because I decided it sounds good.
These concepts are “real” in the same sense as a programming language. There is no inheritance in the transistors, but you can pretend as long as the compiler does the right thing with your code. Apparently the human brain is intelligent enough that we can simply make shit up.
Ah, well, good luck with that.
Let me get this straight. When faced with a theory saying the human brain is intelligent, you have trouble considering it possible. You don’t expect a theory that explains something you can’t explain, to say things that sound ridiculous to you, in a universe that runs on quantum mechanics. Your response to a theory that explains something you can’t explain, is to ignore the evidence and sneer at it. You are upvoted.
Could you please point me at some learning material so I can fit in better around here?
(Note to future readers: as of the time of writing this post, Lumifer’s post had on net balance 1 single upvote, and it came from me.)
@Jurily: nowhere did your post say or even imply that “the human brain is intelligent,” and this post doesn’t help either. What you described was a very ambitious project of rewriting the brain at will with hypnosis, which under the current understanding of psychology is a very extraordinary claim, especially considering the mystical-sounding jargon you threw in. So skepticism is more than justified.
When physicists have two experiments proving two mutually exclusive theories, they come up with a theory that explains both, no matter how ridiculous it sounds, and then redesign their methodology to test the new predictions. Newtonian physics is still accurate enough to explain a soccer game, reality hasn’t changed when GR explained the quirks.
Under the current “understanding” of psychology, people want to fuck their parents at age 3 and depression is an “illness” even though 150 years of research hasn’t demonstrated the cause or a cure. Their “treatments” look to me like trying to close a Chrome tab by irradiating the box, while I can produce permanent results just by telling people to basically calm down and stop being stupid, after they have already given up on “evidence-based therapy”. What does it mean when “PTSD” is frequently “misdiagnosed” as “ADHD” and neither has a cure? What does it mean when we literally have self-driving cars before a professor of psychology comes up with a way to scare people into not texting on the road to save their lives?
Until psychology adopts a research methodology strong enough to conclude that the Oedipus complex has always been bullshit, or at least develops the idea that their theories are supposed to explain placebo as part of observed reality, it is not a field of science. They’re just priests of the Church of the Published Article.
Be a skeptic, just don’t think that means you’re supposed to stick your fingers in your ears and chant “pseudoscience” until men stop going into labour on stage just because someone told them to. If nothing else, at least give me the courtesy of assuming for five seconds that I might have some sane reason to come to a den of rationalists and profess my crackpot beliefs.
Your impression of what psychologists believe is outdated. Today’s psychologists already know that Freudian psychoanalysis doesn’t work. It’s been years since it was part of the standard understanding of psychology. And the placebo effect is already accounted for in every serious randomized trial.
Your implication that depression is not a real thing needs to be explained in more detail, especially with such a kilometer-tall red flag as your use of scare quotes for evidence-based medicine.
So, what’s your evidence that stage hypnosis is a viable therapy?
That’s nice, but what about the axiom of medicine, when was that examined? How did they prove the idea that statistics is an effective research method for neural networks of 10^14 synapses trained on unique input exhibiting mostly unique symptoms?
Yes, I applaud their very effective ways to completely ignore it. Where’s the research on producing better and permanent placebo? Where are the results? Don’t you think that’s in scope for a field called psychology? If not, who should be researching it? In what way is “placebo” not a thought-stopper for psychology?
Depression is a real thing, it’s just not a hardware problem. They should be doing tech support, not medicine. Half of NLP is basically trying to find out what they see on the screen, and they still get better results. Psychology needs to qualify their methods as “evidence-based” to distinguish it from “result-based”.
If you think medicine is a better fit for the human brain than a computing metaphor, feel free to demonstrate the existence of a mental immune system.
I mention stage hypnotists a lot because they need to make it blatantly obvious that something is happening. They optimize for entertainment, not therapy. You can observe their results on Youtube.
For therapy, my evidence is Mark Cunningham’s work. When he does an erotic hypnosis demo on a subject with anorgasmia, you can tell she was telling the truth because the session lasts about 20 minutes longer than usual. The results are also blatantly obvious. Look for Adina in his Renegade Hypnotist Project. It’s up on TPB, along with a bunch of his other stuff. Some of his other demos are also up on Youtube.
Here is Richard Bandler dealing with a schizophrenic. He’s also using hypnosis everywhere he goes, also up on TPB.
You will not find one person who has done erotic hypnosis on either side of the chair who believes it possible to hang on to anything diagnosed as depression after ten orgasms in half an hour. One.
The intimidating complexity of the brain doesn’t turn it into a strange, otherworldly realm where the same boring laws of physics somehow cease to apply.
Your idea of what a placebo does is very confused. A placebo is not a backdoor fix to reboot the brain with a secret magic word. A placebo is anything that is physically ineffectual but resembles actual therapy, and the only reason why it’s still a necessary evil in research is because it gives a standard of comparison to ascertain how much of the effect of an actual treatment was due to mere suggestion. It is (outside of rare scenarios where a doctor is in an extremely precarious situation with no viable therapy at hand) the epitome of unethical to prescribe a placebo.
Unless you’re a dualist, every mental disorder is a hardware problem. There’s simply no other place where things can happen.
Don’t put words in my mouth. I’ve never spoken against the computing metaphor.
I can also observe faith healing and exorcisms on YouTube. Show me large-scale, randomized, controlled trials published in peer-reviewed journals.
Now you’re making a testable claim. You’re saying lots of orgasms cure depression. What’s the scientific evidence?
I gave directions to Hogwarts. I gave the simplest, easiest and most fun testable claim I could think of. It is part of the claim that the process of testing it is guaranteed to improve your life. No study will change any of that. Go observe reality.
You haven’t provided any theory. You made incoherent noises about chi, chakras, stuffing a Death Star power core up someone’s ass, and making shit up.
I’ve linked a stage hypnotist training class and made testable predictions you find obviously false. It’s meaningless to discuss smartphone design until you’ve shown the willingness to press the power button and see what happens.
This is entirely peripheral to any point you’re actually making, but: In what possible sense is it true that Marvin Minsky “invented the computer”?
Very sorry about that, I thought he held the patent for some aspect of computers that had become widespread, in the same way Wozniak holds the patent for personal computers. This was incorrect. I’ll fix it.
What patent for personal computers does Wozniak hold?
Possibly answering my own question, I see that Woz is sole inventor on a patent with the impressively broad-sounding title “Microcomputer for use with video display”, US4136359, and that this fact is remarked on in various places on the web. But if you look at the patent’s actual claims, it’s not so general after all—they’re all about details of controlling the timing of the video signals.
[EDITED to fix an autocorrect typo.]
US Patent No. 4,136,359: “Microcomputer for use with video display”, for which he was inducted into the National Inventors Hall of Fame.
US Patent No. 4,210,959: “Controller for magnetic disc, recorder, or the like”.
US Patent No. 4,217,604: “Apparatus for digitally controlling PAL color display”.
US Patent No. 4,278,972: “Digitally-controlled color signal generation means for use with display”.
Yeah, as I said above US4136359 is doubtless an excellent patent but it really isn’t in any useful sense a patent on the personal computer. It’s a patent on a way of getting better raster graphics out of a microcomputer connected to a television.
Y Combinator now considers it a great time to invest in biotech startups. Sam Altman says that the industry has changed in a way that makes biotech startups possible.
The situation surely isn’t perfect, but it’s better than it used to be.
In the section on EA, you include discussion of AGI, existential risk, and the existential risk of an AGI, which seem to me different subjects. Can you clarify what you see as the relation between these things and EA?
My picture of EA is distributing anti-malarial bed nets, or trying to improve clean water supplies. While some in the EA movement may judge existential risk or AGI to be the area they should direct their vocation towards (whether because of their rating of the risk itself or their own comparative advantage), they are not listed among, for example, Givewell’s recommended charities.
EA is an intensional movement.
http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/
I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago to explain why a philosopher-anthropologist was auditing his course:
That’s how I see it anyway. Most of the arguments for it are in “Superintelligence”; if you disagree with that, then you probably do disagree with me.
Not particularly disagreeing, I just found it odd in comparison to other EA writings. Thanks for the clarification.
It’s actually fairly common in EA circles by now to acknowledge AI as an issue. The disagreements tend to be more about whether there are useful things to be done about it, or whether there are specific nonprofits worth supporting. (Givewell has a blogpost in that direction)
Ok, so some of the things that you value are hard to work towards, but as you say, working towards those things is still worth your while. When I’ve been in similar situations, pretending to be a new homunculus has helped, and I’m sure that you’ve figured out other brilliant coping strategies on your own.
I see that you’ve become less interested in transhumanism, though, and your post doesn’t give me a solid feel for why this is, so I’m somewhat curious. Did you shift your focus towards EA and away from transhumanism for utilitarian/cost-benefit reasons? Did you just look back one day and realize that your values had changed? Something else? I’m curious about this partly because there’s a part of me that doesn’t want my current values to change, and partly because I’m sad that transhumanism no longer interests you as it did. Thanks!
He says at the end he’s still a transhumanist. I think the point was that, in practice, it seemed difficult to work directly towards transhumanism/immortalism (and perhaps less likely that such a thing will be achieved in our lifetimes, although I’m less sure about that)
(Diego, curious if my model of you is accurate here)
I am particularly skeptical of transhumanism when it is described as changing the human condition, and the human condition is considered to be the mental condition of humans as seen from the human’s point of view.
We can make the rainbow, but we can’t do physics yet. We can glimpse at where minds can go, but we have no idea how to precisely engineer them to get there.
We also know that happiness seems tightly connected to an area of the brain called the NAcc (nucleus accumbens), but evolution doesn’t want you to hack happiness, so it put the damn NAcc right in the medial, slightly frontal part of the brain, deep inside, where fMRI is really bad and where you can’t insert electrodes correctly. Also, evolution made sure that each person’s NAcc develops epigenetically into different target areas, making it very, very hard to tamper with it to make you smile. And boy, do I want to make you smile.
You didn’t explain anything about the evolution of your thoughts related to cryonics/brain preservation in particular. Why is that?
Basically because I never cared much for cryonics, even with the movie being made about me because of it. Trailer:
https://www.youtube.com/watch?v=w-7KAOOvhAk
For me cryonics is like soap bubbles and contact improv. I like it, but you don’t need to waste your time knowing about it.
But since you asked: I’ve tried to get rich people in contact with Robert McIntyre, because he is doing a great job and someone should throw money at him.
And me, for that matter. All my donors stopped earning to give, so I have no donor cashflow now and I might have to “retire” soon: the Brazilian economy collapsed and they may cut my below-living-cost scholarship. EDIT: Yes, my scholarship was just suspended :( So I won’t just be losing money, I’ll be basically out of it, unfortunately. I also remind people that donating to individuals is way cheaper than donating to institutions—yes, I think so even now that I’m launching another institution. The truth doesn’t change, even if it becomes disadvantageous to me.
Wow, that’s so cool! My message was censored and altered.
LessWrong is growing an intelligentsia of its own.
(To be fair to the censoring part, the message contained a link directly to my Patreon, which could count as advertising? Anyway, the alteration was interesting, it just made it more formal. Maybe I should write books here, and they’ll sound as formal as the ones I read!)
Also fascinating that it was near instantaneous.
What happened? That sounds very weird.
Oh, so boring... It was actually me myself screwing up a link, I think :(
Skill: being censored by people who hate censorship. Status: not yet accomplished.
I’m not sure whether “money directly invested in anti-aging” is a good way to think about ageing. To make progress on the problem we need to advance biological research itself. We made a lot of progress by building cheap DNA sequencing. In addition to human DNA, Theranos sells medical tests at half the price of Medicare reimbursement as their starting price. As time goes on I expect blood tests to get a lot cheaper than they are now.
When it comes to labwork Emerald Cloud Lab and companies with a similar model can make it cheaper and make science more effective in the process.
In the QS (quantified self) space, a lot of companies are working on creating better measurement devices. Apple wants to do more in health.
They’ve had a very bad six weeks.
There’s a lot of money at stake in fighting them. She lobbies for Medicare to pay companies less money for tests. There are companies losing billions of dollars, so they hire PR to fight Theranos.
That’s not an unexpected development. You don’t change the status quo without making enemies.
They’re honestly quite possibly just bullshitting about their grand plans. Theranos, that is. I wouldn’t be surprised if they had some interesting ideas that they are utterly unprepared to follow through on and that they are massively overselling the importance and quality of.
Why would a company that doesn’t trust its tech to work explicitly lobby the FDA to test its products, in order to make sure the marketplace trusts them?
I don’t think there’s a good reason that blood testing roughly didn’t change in price in the last 10 years while DNA sequencing got 10,000 times as cheap. We got cheaper DNA sequencing because multiple companies focused on radically improving sequencing technology.
Having a direct to consumer marketplace where people know the price of testing before they buy it is likely very useful for the field to produce price competition that leads to the development of cheaper testing.
We don’t need the price improvement of sequencing for blood tests, but having Moore’s law for them, or even half of Moore’s law, would be a game changer. Do you think there are basic reasons why blood testing shouldn’t be able to radically improve in price over time?
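For a sense of scale, here is a rough sketch of that arithmetic (the halving periods and starting price are assumptions for illustration, not data from the thread):

```python
# Hypothetical price of a blood test after `years`, if it halves every `halving_period` years.
def price_after(years, halving_period, start_price=100.0):
    return start_price * 0.5 ** (years / halving_period)

print(price_after(10, 2))   # ~3.1  -> about 32x cheaper (a Moore's-law-like pace)
print(price_after(10, 4))   # ~17.7 -> about 5.7x cheaper ("half of Moore's law")
```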
Is it reasonable to say that what really matters is whether there’s a fast or slow takeoff? A slow takeoff or no takeoff may limit us to EA for the indefinite future, and fast takeoff means transhumanism and immortality are probably conditional on and subsequent to threading the narrow eye of the FAI needle.
See the link with the flowchart under problem 12.
I looked at the flowchart and saw the divergence between the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs focusing heavily on how to build FAI on the fast-takeoff path, but then I saw your name in the fast-takeoff bucket for conveying concepts to AI and was confused that your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow than 2015!diego?
Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on sooner paths (fast takeoff soon) and not on the other paths, because more resources will be allocated to FAI in the future, even if a fast takeoff soon is a low-probability scenario.
I personally work on inserting concepts and moral concepts into AGI because for almost anything else I could do there are people who would do it better already, and this is an area that overlaps with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.
I’d recheck your links to the EA forum; this one was a LW link, for example.
The text is also posted at the EA Forum here; there, all the links work.