Hey people! Sorry, due to Uber-related issues I'm going to be a few minutes late. Shouldn't be more than 10, though.
25Hour
MSP ACX Hangout: Davanni’s Pizza
Minneapolis-St Paul ACX Article Club: Problem Actors and Groups!
St. Paul – ACX Meetups Everywhere Spring 2024
Minneapolis-St Paul ACX Article Club: Prosaic Life Skills and Agency
Minneapolis-St Paul ACX Article Club: Super-Illegal Crypto Shenanigans
Minneapolis-St Paul ACX Article Club: Meditation and LSD
MSP Article Discussion Meetup: The EMH, Long-Term Investing, and Leveraged ETFs
Article Discussion And Free Pizza—St Paul
So this all makes sense and I appreciate you all writing it! Just a couple notes:
(1) I think it makes sense to put a sum of money into hedging against disaster, e.g. with short-term Treasuries, commodities, or gold. Futures in which AGI is delayed by a big war or similar disaster are futures where your tech investments will perform poorly (and depending on your p(doom) and your views on anthropics, they are disproportionately the futures you can expect to experience as a living human).
(2) I would caution against either shorting or investing in cryptocurrency as a long-term AI play. As patio11 has discussed in his Bits About Money newsletter (most recently in "A review of Number Go Up, on crypto shenanigans" at bitsaboutmoney.com), cryptocurrency is absolutely rife with market manipulation and other skullduggery; shorting it can therefore easily result in losing your shirt even in a situation where cryptocurrencies otherwise ought to be cratering.
Worth considering that humans are basically just fleshy robots, and we do our own basic maintenance and reproduction tasks just fine. If you had a sufficiently intelligent AI, it would be able to:
(1) persuade humans to build it a general robot chassis capable of complex manipulation tasks, along the lines of Google's SayCan experiments
(2) use instances of itself that control that chassis to perform its own maintenance and power generation functions
(2.1) use instances of itself to build a factory, also controlled by itself, to build further instances of the robot as necessary.
(3) kill all humans once it can do without them.
I will also point out that humans’ dependence on plants and animals has resulted in the vast majority of animals on earth being livestock, which isn’t exactly “good end”.
This seems doubtful to me; if Yann truly believed that AI was an imminent extinction risk, or even thought that risk was credible, what would he hope to accomplish or gain by ridiculing people who are similarly worried?
Hey, I really appreciated this series, particularly in that it introduced me to the fact that leveraged ETFs (1) exist and (2) can function well as a fixed proportion of overall holdings over long periods.
Is the lesswrong investing seminar still around/open to new participants, by any chance? I’ve been doing lots of research on this topic (though more for long-term than short-term strategies) and am curious about how deep the unconventional investing rabbit hole goes.
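The "fixed proportion of overall holdings" idea above can be made concrete with a toy Python sketch: hold a constant fraction of the portfolio in a leveraged ETF and rebalance back to that fraction every month. All numbers here (drift, volatility, leverage, fraction) are invented for illustration, and the model ignores the daily-reset volatility drag that real leveraged ETFs exhibit.

```python
import random

def simulate_fixed_proportion(months=120, lev_fraction=0.3, leverage=3,
                              mu=0.007, sigma=0.045, seed=0):
    """Toy model: keep `lev_fraction` of wealth in a leveraged ETF
    (modeled as leverage * underlying monthly return) and rebalance
    back to the target fraction every month. Illustrative only."""
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(months):
        r = rng.gauss(mu, sigma)       # hypothetical underlying monthly return
        lev_r = leverage * r           # daily-reset drag deliberately ignored
        # Monthly rebalancing: every month starts at the target split,
        # so the whole portfolio compounds at the blended return.
        wealth *= lev_fraction * (1 + lev_r) + (1 - lev_fraction)
        if wealth <= 0:                # a leveraged sleeve can wipe out
            return 0.0
    return wealth

print(f"final wealth multiple: {simulate_fixed_proportion():.2f}")
```

The point of the rebalancing step is that the leveraged sleeve never grows to dominate the portfolio after a good run, which is what makes a fixed proportion survivable over long horizons in a way that a buy-and-hold leveraged position often is not.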
It’s a beautiful dream, but I dunno, man. Have you ever seen Timnit engage charitably and in good faith with anyone she’s ever publicly disagreed with?
And absent such charity and good faith, what good could come of any interaction whatsoever?
This is a tiny corner of the internet (Timnit Gebru and friends) and probably not worth engaging with, since they consider themselves diametrically opposed to techies/rationalists/etc and will not engage with them in good faith. They are also probably a single-digit number of people, albeit a group really good at getting under techies’ skin.
Re: blameless postmortems, I think the primary reason for blamelessness is that blameful postmortems rapidly transform (at least in perception) into punishments, and consequently stop happening except when management is really cheesed off at someone. This was how the postmortem system ended up at Amazon while I was there.
Blameful postmortems also result in workers who are very motivated to hide issues they have caused, which is obviously unproductive.
Reasonable points, all! I agree that the conflation of legality and morality has warped the discourse around this; in particular the idea of Stable Diffusion and such regurgitating copyrighted imagery strikes me as a red herring, since the ability to do this is as old as the photocopier and legally quite well-understood.
It actually does seem to me, then, that style copying is a bigger problem than straightforward regurgitation, since new images in a given style are the thing you would ordinarily need to go to an artist for. But the biggest problem of all is that, fundamentally, all art styles are imperfect but pretty good substitutes in the market for all other art styles.
(Most popular of all the art styles—to judge by a sampling of images online—is hyperrealism, which is obviously a style that nobody can lay either legal OR moral claim to.)
So I think that if Stability came out tomorrow with a totally unimpeachable version of SD trained on no copyrighted data of any kind (but with similarly high-quality output), we would have essentially the same set of problems for artists.
Interestingly, I believe one of the newest (as yet unreleased) diffusion models, DeepFloyd, has overcome this limitation; a number of examples have been teased already, such as the following corgi sitting in a sushi doghouse:
https://twitter.com/EMostaque/status/1615884867304054785?t=jmvO8rvQOD1YJ56JxiWQKQ&s=19
As such the quoted paragraphs surprised me as an instance of a straightforwardly falsifiable claim in the documents.
I think that your son is incorrectly analogizing heroin and other opiate cravings to something like “desire for sugar” or “desire to use X social media app.” These are not comparable. People do not get checked into sugar rehab clinics (which they subsequently break out of); they do not burn down every one of their social connections to get an hour of TikTok; they do not break their own arms in order to get to the ER, which then pumps them full of Twitter likes. They do routinely do these things, and worse, to delay opiate withdrawal symptoms.
(For reference, my wife is a paramedic and she has seen this last one firsthand. Tell me: have you ever, in your life, had something you wanted so much that you would break one of your own limbs to get it?)
Another way of putting this is that opiate use frequently gives you a new utility function where the overwhelmingly dominant term is “getting to consume opiates.”
For reference, I’m not automatically suspicious of drugs—I wrote https://www.lesswrong.com/posts/NDmbnaniJ2xJnBASx/perhaps-vastly-more-people-should-be-on-fda-approved-weight .
believes he has enough self control to not get addicted
So first, as the poster above points out, there is no good way to establish this. Your certainty on this topic is well above what the evidence merits.
But leaving that aside: much of the core issue here is that the risk/reward profile of recreational opiates absolutely sucks under almost any reasonable set of initial assumptions.
Like, suppose you’re right and you don’t get addicted. Then you have… discovered a new hobby, I guess? Whereas if you’re wrong, your life is pretty much destroyed, as is the life of everyone who loves you most.
EDIT: Another pretty routine circumstance my wife runs into at work: Narcan injections are used to revive somebody who has stopped breathing due to an opiate overdose. Patients need to be restrained beforehand, since they will frequently attack providers out of anger at having their high ruined, even after it is pointed out to them that they weren’t breathing and were approximately one minute from death.
Most people come to this via the Discord, so I’m posting it on LW mainly for visibility.