Eric Weinstein argues strongly against assuming 20th-century-style returns, and says returns are now vector fields, not scalars. I concur (not that I matter)
The Girardian conclusion and general approach of this text make sense.
But it's worth emphasizing that the best-performing strategy is a forgiving variant, something like tit for two tats (sketched below).
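(For readers who don't know the strategy, here is a minimal sketch of "tit for two tats" in an iterated prisoner's dilemma. The function name and move encoding are my own illustrative choices, not anything from the original post.)

```python
# Illustrative sketch of "tit for two tats": cooperate unless the
# opponent has defected on the last two rounds in a row.

def tit_for_two_tats(opponent_history):
    """Return 'C' (cooperate) or 'D' (defect) given the opponent's past moves."""
    if opponent_history[-2:] == ['D', 'D']:
        return 'D'
    return 'C'

# Forgives a single defection, retaliates only after two consecutive ones.
print(tit_for_two_tats(['C', 'D']))       # -> 'C'
print(tit_for_two_tats(['C', 'D', 'D']))  # -> 'D'
```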
Also, it seems you are putting some moral value on long-term mating that doesn't necessarily reflect our emotional systems or our evolutionary drives. Short-term mating is very common and seen in most societies where there are enough resources to go around and enough intersexual geographical proximity. Recently, more and stronger arguments have been emerging against female short-term strategies. But it would be a far cry to claim that we already know decisively that the expected value of short-terming for a female is necessarily negative. It may depend on fetal androgens, and it may be that the measurements made so far used biased samples to calculate the cost of female promiscuity. In the case of males, as far as I know, there is literally no data associating short-terming with long-term QALY loss, none. But I'd be happy to be corrected.
Notice also that the moral question is always about the sex you are not. If you are female, and the data says it doesn't affect males, then you are free to do whatever. If you are male, and the data says short-terming females become unhappy in the long term, then the moral responsibility for that falls on you, especially if there's information asymmetry.
This sounds cool. Somehow it reminded me of an old, old essay by Russell on architecture.
It's not that relevant, so this is just for people who are curious.
I am now a person who moved during adulthood, and I can report past me was right except he did not account for rent.
It seems to me the far self is more orthogonal to your happiness. You can try to optimize for maximal long term happiness.
Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on sooner paths (fast takeoff soon) rather than on the other paths, because more resources will be allocated to FAI in the future, even if fast takeoff soon is a low-probability scenario.
I personally work on inserting concepts, and moral concepts in particular, into AGI because for almost anything else I could do there are already people who will do it better, and this is an area that intersects with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.
Not my reading. My reading is that Musk thinks people should not consider the historical probability of succeeding as a spacecraft startup (0%) but should instead reason from first principles, such as asking what materials a rocket is made from and then building the costs up from the ground.
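(As a toy illustration of that first-principles move, with every number below invented for the example, since the comment gives none:)

```python
# Toy first-principles estimate: price a rocket from its raw materials
# rather than from the historical market price of launches.
# All figures are hypothetical.

material_cost_per_kg = {"aluminium": 2.0, "titanium": 10.0,
                        "carbon_fiber": 25.0, "fuel": 0.5}   # USD/kg, invented
mass_kg = {"aluminium": 20_000, "titanium": 2_000,
           "carbon_fiber": 1_000, "fuel": 400_000}           # invented bill of materials

raw_cost = sum(material_cost_per_kg[m] * mass_kg[m] for m in mass_kg)
market_price = 60_000_000  # hypothetical conventional launch price, USD

print(f"raw materials: ${raw_cost:,.0f}")                          # $285,000
print(f"fraction of market price: {raw_cost / market_price:.1%}")  # 0.5%
```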
You have correctly identified that I wrote this post while very unhappy. The comments, as you can see by their lighthearted tone, I wrote pretty happy.
Yes, I stand by those words even now (that I am happy).
I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate multimodal aspects of these modules into a coherent being that thinks it has a self, goals and identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (by the way, correctly so: it needed the latest improvements, and clearly, if they actually needed it, AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Shhh, don't tell anyone; here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit
We have unconfirmed, simplified hypotheses with nice drawings for how microcircuits in the brain work. They ignore more than a million things (literally: just count the specific synapses, the multiplicity of synaptic connections, etc.; if you sum those up and look at the model, I would say it ignores about that many things). I'm fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.
The only reason we understand V1 is that it is a retinotopic inverted map that has been through very few non-linear transformations (same for the tonotopic auditory areas); by V4, we are already completely lost. (For those who don't know, the brain has between 100 and 500 areas depending on how you count, and we have a middling guess at a simplified model that applies well to two of them, and moderately well to some 10-25.) And even if you could say which functions V4 participates in most, that would not tell you how it does them.
Oh, so boring… It was actually me screwing up a link, I think :(
Skill: being censored by people who hate censorship. Status: not yet accomplished.
Wow, that’s so cool! My message was censored and altered.
Lesswrong is growing an intelligentsia of its own.
(To be fair to the censoring part, the message contained a link directly to my Patreon, which could count as advertising? Anyway, the alteration was interesting, it just made it more formal. Maybe I should write books here, and they’ll sound as formal as the ones I read!)
Also fascinating that it was near instantaneous.
No, that's if you want to understand why a specific Lesswrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don't care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts; I provided them.
But the reference class of Diego's thoughts contains more thoughts that are wrong than thoughts that are true. So, on priors, you might want to ignore them :p
US Patent No. 4,136,359: "Microcomputer for use with video display"[38], for which he was inducted into the National Inventors Hall of Fame.
US Patent No. 4,210,959: "Controller for magnetic disc, recorder, or the like"[39]
US Patent No. 4,217,604: "Apparatus for digitally controlling PAL color display"[40]
US Patent No. 4,278,972: "Digitally-controlled color signal generation means for use with display"[41]
Basically because I never cared much for cryonics, even with the movie about me being made about it. Trailer:
https://www.youtube.com/watch?v=w-7KAOOvhAk
For me cryonics is like soap bubbles and contact improv. I like it, but you don’t need to waste your time knowing about it.
But since you asked: I’ve tried to get rich people in contact with Robert McIntyre, because he is doing a great job and someone should throw money at him.
And me, for that matter. All my donors stopped earning to give, so I have no donor cashflow now; I might have to "retire" soon. The Brazilian economy collapsed and they may cut my below-living-cost scholarship. EDIT: Yes, my scholarship was just suspended :( So I won't be just losing money, I'll be basically out of it, unfortunately. I also remind people that donating to individuals is way cheaper than donating to institutions; yes, I think so even now that I'm launching another institution. The truth doesn't change, even if it becomes disadvantageous to me.
See the link with a flowchart on 12.
I think you mistook my claim for sarcasm. I actually think I don't know much about AI (not nearly enough to make a robust assessment).
Yes I am.
Step 1: Learn Bayes
Step 2: Learn reference class forecasting (a minimal sketch of Steps 1-2 follows this list)
Step 3: Read Zero to One
Step 4: Read The Cook and the Chef
Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically reasoning probabilistically
Step 6: Find the connection between that and reasoning from first principles, or the gear hypothesis, or whichever other term you have for when you use the inside view, and actually think technically about a problem, from scratch, without looking at how anyone else did it.
Step 7: Talk to Michael Valentine, who has recently been reasoning about this and about how to impart it at CFAR workshops.
Step 8: Find someone who can give you a recording of Geoff Anders’ presentation at EAGlobal.
Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world. Good luck!
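(Here is the minimal sketch promised in Step 2: a Bayes update that starts from an outside-view, reference-class prior and folds in inside-view evidence. Every probability below is made up for illustration.)

```python
# Minimal sketch of Steps 1-2: start from an outside-view prior
# (a reference-class base rate) and update on inside-view evidence
# with Bayes' rule.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) from the prior P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

prior = 0.01  # hypothetical base rate: fraction of such startups that succeed

# Inside-view evidence, e.g. a first-principles cost analysis coming out
# strongly favorable. Hypothetical likelihoods.
posterior = bayes_update(prior, p_evidence_given_h=0.9, p_evidence_given_not_h=0.2)
print(f"P(success | evidence) = {posterior:.3f}")  # ~0.043
```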
I am particularly skeptical of transhumanism when it is described as changing the human condition, and the human condition is considered to be the mental condition of humans as seen from the human’s point of view.
We can make the rainbow, but we can't do physics yet. We can glimpse where minds can go, but we have no idea how to precisely engineer them to get there.
We also know that happiness seems tightly connected to an area of the brain called the NAcc, but evolution doesn't want you to hack happiness, so it put the damn NAcc right in the medial, slightly frontal area of the brain, deep inside, where fMRI is really bad and where you can't insert electrodes correctly. Also, evolution made sure that each person's NAcc develops epigenetically into different target areas, making it very, very hard to tamper with it to make you smile. And boy, do I want to make you smile.
Copied from the Heterodox Effective Altruism facebook group (https://www.facebook.com/groups/1449282541750667/):
Diego Caleiro: I've read the comments and now speak as me, not as Admin:
It seems to me that the Zurich people were right to exclude Roland from their events. Let me lay out my reasons, based on extremely partial information:
1) If Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden both to be clear and to justify why those topics deserve to be there.
2) The politeness of EAs is in great part the reason some SJWs managed to infiltrate it. Having regulations and rules that determine who can be kicked out is bad, because it is a weapon that SJWs have been known to wield with great care and precision. That is, I much prefer a group where people are kicked out without justification to one in which a reason is given (I say this as someone who was kicked out of at least 2 physical spaces related to EA, so it does not come lightly). Competition drives out SJWs, so I would recommend that Roland create a new meeting that is more valuable than its predecessor, and attract people to it. (This community was created by me, with me as an admin, precisely for those reasons. I believed that I could legitimately help generate more valuable debate than previous EA groups, including the one that I myself created but feared would be taken over by more SJWish types. This one is protected.)
3) Another reason to be pro-kicking-out: Tartre and I run a facebook chat group where I make a point of never explaining kicking anyone out. As far as I can tell, it has the best density of interesting topics of any facebook chat related to rationalists and EAs. It is necessary to be selective.
4) That said: being excluded from social groups is horrible; it feels like dying to a lot of people, and it makes others fear it happening to them like the plague. So it allows for the kind of pernicious coordination in (DeScioli 2013) and full-blown Girardian scapegoating. There's a balance that needs to be struck to keep SJWs from taking over little bureaucracies, then mobbing people out, thus tyrannizing others into acquiescence to whatever is their current-day flavour of acceptable speech.
5) Because being excluded from social groups is horrible, HEAs need to create a welcoming network of warmth and kindness towards those who are excluded or accused. We don't want people to feel like they are dying; we don't want their hippocampi compromised and their serotonin levels lowered. Why? Because this happens to a LOT of people when they transition from being politically left-leaning to being politically right-leaning (or when they take the sexual-strategy Red Pill). If we, HEAs, side with the accusers, the scapegoaters, the mob, we will be one more member of the ochlocracy. This is both anti-utilitarian, as the harm to the excluded party is nearly unbearable, and anti-heterodox, as in all likelihood the person was excluded at least in part for not sharing a belief or behavioral pattern with those doing the excluding. So I highly recommend that, on priors, HEAs come forth in favor of the person.
During my own little scapegoating event, Divia Caroline Eden was nice enough to give me a call and inquire about my psychological health, make sure I wasn't going to kill myself, that sort of thing (people literally do that; if you have never been scapegoated, you cannot fathom what it is like, it cannot be expressed in words), and maybe 4 other people messaged me online showing equal niceness and appreciation.
Show that kindness to Roland now, and maybe he'll return the favor when and if it happens to you. As an HEA, you are already in the at-risk group.